Advent of Code 2025 in TypeScript

After two years of Advent of Code in Rust, I decided to try TypeScript this year. I’ve always wanted to improve repo-review’s webapp, and that requires knowledge of the JavaScript packaging ecosystem. I also used this as an opportunity to learn more AI tooling, mostly CoPilot in VSCode & ChatGPT. I’d like to share my experience and thoughts! My code is at aoc2025 (and aoc2024, aoc2023).

Background

Since this is my experience with TypeScript, I should start with my background. I’m very familiar with Python, C++, Rust, and Ruby, and have some experience with a few other languages, including JavaScript. I largely interact with JavaScript because I provide WebAssembly code in the browser; that’s how you set up a webapp running Python or whatever. That’s also why I’m interested in how to do it properly; repo-review’s webapp runs live JSX, and is not properly bundled. Of course, it uses Python, which is several MB, so bundling isn’t that important there, but I’d like to do better eventually.

I’m pretty heavily involved in Python packaging, having written the Scientific Python Library Development Guide and parts of packaging.python.org, and maintain a variety of foundational tools, including packaging, build, scikit-build-core, pybind11, and nox, among others.

As part of the Princeton RSE program, I’ve seen a lot of my colleagues starting to use and talk about AI (and also develop it), and I’ve been wanting a chance to do more with it. Several of us also do the Advent of Code each year as a language-learning exercise.

Getting started: packaging

In my opinion, “packaging” is the most important software skill. By packaging, I don’t just mean “how to ship code”; I mean how to develop code - the infrastructure you use to run tests, formatters, and linters, manage dependencies, etc. I started by asking ChatGPT what was commonly done, did some searching as well, and settled on pnpm, a fast, modern alternative to npm (and yarn, etc.). AI tends to be really bad at this, by the way; it doesn’t handle changes to the way things are done all that well. Once I had picked a tool, it produced somewhat useful suggestions on how to set it up, and I also had to consult the documentation a little (but not much). These are approximately the commands I ended up with at first:

brew install pnpm node
pnpm init
pnpm install --save-dev typescript tsx @types/node

I also had to change VSCode’s package manager from auto to pnpm, otherwise it kept trying to use npm. I went with vitest after a little chat about testing alternatives. I set up aliases for the commands I wanted to run, like pnpm test run to run the tests. (run is required if you want it to act like pytest; otherwise, it starts a live reload server).

I set up a test runner, aliased to pnpm day. I followed the suggestion to run everything with tsx, which means later I’ll need to figure out how to build it into a JavaScript package. Later I added a newday script, too. I also set up pnpm format and pnpm lint commands. Figuring out the modern config setup for the linter was a bit painful, as it recently changed to a fully JavaScript-based config, so AI didn’t want to write the new one, but the docs worked.

I also eventually added notes on how to start a REPL and import stuff:

$ pnpm tsx
> const { Grid } = await import("./src/grid.ts");

Getting started: bun

After I finished, a colleague pointed out bun, which replaces node.js, pnpm, and vitest with an all in one solution that is supposed to be faster (it also replaces packaging stuff I haven’t tried yet). I tried it out and really liked it; the current codebase uses it instead. The differences are:

  • It’s not on brew; you have to use a tap (official, from the bun team).
  • It has a built-in test runner, bun test, which needs no setup and doesn’t need run.
  • The docs are great.

Dependencies

One thing I noticed was the lack of great TypeScript dependencies for the things I wanted to do. I wanted a Grid class, and AI found a blog post from last year and a package that was 12 hours old (created for exactly the same Advent of Code problem!), but there didn’t seem to be anything like NumPy. Other times, I’d find dependencies that hadn’t been touched in years, and often dependencies didn’t have types. The only dependency I ended up using (other than development dependencies) was a linear solver that I had to generate type stubs for (but AI was happy to do that). I didn’t find a graph library I liked; they all seemed to be tied to graphics or abandoned.
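For what it’s worth, generating a stub is just writing a .d.ts file by hand (or letting AI do it). Here’s a minimal hypothetical sketch - the module name and API are made up for illustration, not the actual solver I used:

```typescript
// types/solverlib.d.ts - a hypothetical stub for an untyped package.
// "solverlib" and these signatures are invented for illustration.
declare module "solverlib" {
  export interface Solution {
    feasible: boolean;
    values: number[];
  }
  export function solve(
    coefficients: number[][],
    bounds: number[],
  ): Promise<Solution>;
}
```

With that in place, the compiler type-checks your calls into the package like any other import.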

I feel Python and Rust both had better dependency stories for this sort of general, slightly mathematical computing. Even Ruby does. Still miles above C++, though.

Types

TypeScript itself feels almost exactly like Python static typing. It even uses similar syntax to Python (and Rust) for adding types. It works differently under the hood; you have to process TypeScript to strip the types out, while Python simply ignores them at runtime. But most JavaScript development involves a processing step anyway, for polyfills and such, so overall it’s pretty seamless.

Here’s an example:

export class Grid<T> {
  private data: T[][];
  // ...
}

const grid = new Grid<number>();

export function f(x: Grid<boolean>): number {
  // ...
}

It does use <> for generics, like Rust/C++. You have to add export to use something outside the file (like Rust’s pub). Return types can also use :, since unlike Python that character isn’t already taken (Python has to use -> instead).

The core language

The biggest problem is that TypeScript really doesn’t do much to fix JavaScript; it just helps you avoid some mistakes. A lot of the quirks still shine through. I noticed:

  • It was still really easy to get silent errors. Due to the web focus, JavaScript tries never to crash, so you can get undefined’s or -1’s flowing through your code really easily.
  • Weird stuff in JavaScript is still weird. You have to convert an array to a string in order to use it in a Set, since membership is otherwise identity-based (there’s no tuple type).
  • Lots of stuff you’d normally find is missing, like zip, Set operations, etc.
  • Missing or extra args is not an error with functions and callbacks. This can swallow mistakes.
  • No operator overloading. myObject[a] += 1 is not pretty to write if you can’t overload +=, for example!
  • camelCase naming is weird. I probably slipped up and used the wrong convention in a few places.
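The Set quirk is easy to demonstrate; here’s a toy example of my own (not from my solutions):

```typescript
// Sets (and Map keys) compare objects by identity, not value, so two
// arrays with the same contents never match each other:
const byIdentity = new Set<number[]>();
byIdentity.add([1, 2]);
console.log(byIdentity.has([1, 2])); // false - a different array object

// The workaround: serialize to a string key first.
const byValue = new Set<string>();
byValue.add([1, 2].toString());
console.log(byValue.has([1, 2].toString())); // true - "1,2" === "1,2"
```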

There are still good parts of JavaScript / TypeScript though:

  • Missing or extra args is not an error with functions and callbacks. When you are used to it, that can simplify things, at the cost of swallowing mistakes. And it’s great for callbacks, you can just define what you need.
  • You can have mutable function defaults.
  • Works well with functional code; AI can rewrite imperative code into functional form for you. The => lambda, with single- and multiple-line forms, is powerful.
  • AI really is good at JavaScript/TypeScript.
  • Solutions were pretty concise – but Rust was too, due to the great libraries I could use to help.
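A couple of those points in one toy example (my own illustration, not code from my solutions):

```typescript
// Default parameters are re-evaluated on every call, so a [] default is
// safe - unlike Python, where one mutable default is shared across calls:
function collect(n: number, acc: number[] = []): number[] {
  acc.push(n * n);
  return acc;
}
console.log(collect(3)); // [9]
console.log(collect(4)); // [16] - a fresh array, not [9, 16]

// And an imperative loop rewritten with => lambdas:
const sumOfEvens = [1, 2, 3, 4, 5, 6]
  .filter((n) => n % 2 === 0)
  .reduce((acc, n) => acc + n, 0);
console.log(sumOfEvens); // 12
```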

And TypeScript at least did help somewhat - if I were writing very much, I’d definitely go with TypeScript over plain JavaScript, much like typed Python. Do I like it better than C++? No. I don’t even like it better than Ruby, which doesn’t have types. It sits just above JavaScript for me.

Using AI

Now the other part was the use of AI, especially later on. I tried using it in quite a few ways, so let’s cover those. I’ve long held that most AI detractors imagine asking AI for algorithms and having it spit out memorized copyrighted code, doing all the fun parts. But realistically, what AI is good for is helping you with the boring parts of coding, like reviewing code, refactoring, and writing tests. Having it come up with algorithms is one of the least useful things to have it try, in my experience! But due to this not being a “real” project, I did get a chance to try it as a “solve it” tool on a couple of days, as you’ll see below.

AI: Asking ChatGPT

Early on, as I was learning, I often asked ChatGPT questions about TypeScript, often telling it what I wanted to do in Python. That worked really well, with thoughtful responses about ways to do it and why. Here is when I was asking about 2D arrays, including the less-than-one-day-old repo it found. It’s obviously doing searching on my behalf for questions like these.

I also used Microsoft 365 CoPilot, since I have that through Princeton, so I don’t get timed out if I ask too much. Microsoft has too many products named CoPilot, IMO! This is not the same as CoPilot for GitHub or even copilot.microsoft.com.

This was mostly limited to things that were easy to describe, where I wanted an answer that wasn’t primarily code. I did try the “solve this problem” approach, but without the ability to test the code it was writing and iterate, it wasn’t useful. Not that I’d do any better on an untested first pass, to be fair.

AI in VSCode: autocomplete

The autocomplete feature was useful for refactoring. Whenever you rename a variable, or switch the order of a parameter, it starts suggesting all the fixups that are required. Or if you copy-and-paste a previous day, it starts suggesting the day numbers be updated. If you write a function, it then suggests using it. That sort of thing. Longer suggestions (like the body of a function based on the name) were mostly not that helpful, but thankfully are less common these days.

AI in VSCode: Command-I

In VSCode, you can ask inline questions by pressing Command-I; this works off your cursor position or selection. It was fantastic for simplifying code; since I’m not really familiar with TypeScript and its best practices, I don’t write ideal code, and selecting a limited chunk and asking “simplify this” would, 80% of the time, produce a shorter solution with higher-level constructs. About 10% of the time it wouldn’t really be better, and another 10% it would make a mistake in simplifying and the code wouldn’t return the same thing, so tests were really important. It also occasionally just messed up and added a stray bracket or something; thankfully it asks you to “accept” the change, so you can run the tests before you do. Using git is also helpful.

I could also ask it to write a couple of lines based on a description, but it wasn’t as good at that, sometimes not following my prompt or messing up (this is where most of those bracket issues hit, actually). Something like “compute the union of this set” is what it was best at. Still weird that you don’t have a method to do that in JavaScript…
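A union helper is only a few lines, to be fair (and as I understand it, newer runtimes are starting to ship a built-in Set.prototype.union, but I couldn’t count on it):

```typescript
// The kind of helper JavaScript long made you write by hand:
function union<T>(a: Set<T>, b: Set<T>): Set<T> {
  const result = new Set(a);
  for (const item of b) result.add(item);
  return result;
}

const merged = union(new Set([1, 2]), new Set([2, 3]));
console.log([...merged]); // [1, 2, 3]
```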

AI in VSCode: Command-Shift-I

The agentic coding panel in VSCode was probably the most powerful tool. At first, I also had to tell it how to run the tests; I eventually added a .github/copilot-instructions.md, which fixed that, though it didn’t make quite as large a difference as I thought it might: sometimes I still had to ask it to run the tests, but at least I didn’t have to tell it how. I also enabled auto-run for a select set of explicit commands (mostly pnpm test run) so I didn’t have to keep telling it that it was okay to run them.

I tried a variety of things. First, it was great at debugging. With a simple prompt, it would run the tests, inspect the code, fix it if possible, add debugging statements if not, and then rerun until either it or I could see the problem. Debugging is simply not fun, and watching an AI do it for you was satisfying. It also was good at cleaning up. And you can jump in at any time to guide it, or even just have it add and remove the debugging statements. I guess that means we are stuck with print statement debugging instead of a real debugger now? I never even looked up how to run a proper debugger!

It was also good at following directions, like when I asked it to move functions out to avoid scope capture. (It wrote them that way in the first place, to be clear.) It was also fun to describe a specific algorithm, step by step, that I had in mind, then watch it write it and try it, with very little investment from me (other than thinking up the algorithm, which is the fun part). I tried some interesting things (that didn’t work) before giving up and installing a linear optimizer. It also set up the linear optimizer code, and then we debugged it together: I guessed it was adding undefined instead of 0 in some arrays, and it fixed that for me. The solver also required async, and CoPilot went in and adjusted my runner and my test suite to handle optionally-async functions.
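That undefined-instead-of-0 class of bug is easy to reproduce; a minimal sketch of my own (not the actual solver code):

```typescript
// Filling an array by index leaves holes, and holes read as undefined,
// which the type checker does not flag by default:
const coeffs: number[] = [];
coeffs[2] = 5; // indices 0 and 1 are now holes

const bad = coeffs[0] + 1; // NaN at runtime: undefined + 1
const good = (coeffs[0] ?? 0) + 1; // 1 - fall back to 0 explicitly

console.log(bad, good); // NaN 1
```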

Besides the linear solver on day 10b, I also used it heavily on day 8a. I worked out a solution there with just occasional help, but it wasn’t working. I then tried more and more CoPilot, including removing my solution and asking it fresh, and it made the same mistake I was making. I checked the Reddit thread of solutions and found everyone saying the problem description was confusing: when it said “do nothing” when a connection was not needed, you were still supposed to count that connection as being made. Since I’d already moved on from my solution, I decided to try feeding the entire problem text into CoPilot, with one added note about this confusing part. It came up with a great solution that I liked; with some cleanup, that’s what you see in my day 8.

AI in GitHub Agents

I also used GitHub Agents once, to generate the copilot-instructions.md. I just followed the directions, which have you paste a long prompt into a GitHub agent that produces a PR with the file for you, based on your repository structure. I think it’s most useful for assigning @CoPilot to issues and custom prompts, but it does seem to help the agentic panel in VSCode too.

I was rather expecting this to make the VSCode stuff run the tests automatically, but I still had to prompt it most of the time; though I no longer had to tell it how to run them each time, which was nice.

Other stuff

There are other useful AI tools that I didn’t use for this project, so I just wanted to point out places where I have used them elsewhere:

  • Assigning CoPilot to issues: I don’t spend much time on Plumbum anymore, so I’ve tried assigning issues there, and to my surprise it solved an annoying bug that I would not have spent the time to solve. I checked the added test against removing either part of the two-part solution, and the bug shows up in the test with either part removed! To be clear, this is a best case, and the issue was excellently written, which is likely a requirement for good results.
  • CoPilot reviewing PRs is fantastic - while a lot of suggestions are useless or bad (it really doesn’t understand @overload in Python!), it also frequently finds real problems: non-trivial mismatches between docs and code, mistakes in code, typos, things like that. I’d always prefer it on a PR that doesn’t have a human reviewer, and even in addition to a human reviewer. I don’t use it everywhere just because some people are sensitive about having it comment on their PRs, but I wouldn’t mind it on almost any PR. I wish I could run it just for my view only. :)
  • I’ve also used mini-SWE-agent locally with an Anthropic model before, and it was pretty good, similar to the CoPilot Agentic mode. A little more open about what it’s doing.

The problems themselves

The problems were fun, and I liked the shorter set. They seemed a little easier than the last couple of years; I think that’s partially because they are no longer competitive, so the effort to ensure AI can’t quickly answer them is gone (and AI has gotten to the point where it probably was leading answers, or soon would be). I think a lot of it is also the shorter format; it just doesn’t have time to ramp up quite as much.

  1. Day one was simple, but still gave me a chance to play with (Rust) enum-like structures, which ended up being basically a struct in TypeScript.
  2. Day two used generators, the one thing I miss in Rust. They work just like Python’s, save for being declared with function* (nice!).
  3. Really simple, but I wanted slices without copying (subarrays), so I used Uint8Array.
  4. Wrote my own Grid helper, based on a blog post ChatGPT found for me and my knowledge of grid2d in Rust, which I contribute to.
  5. This was a range-based question, and TypeScript doesn’t have a range type, so I had to write one.
  6. Another grid question; I added .reduce for this one.
  7. More grids; this one also had paths.
  8. Described above; I had AI help a lot on the final solution.
  9. I had to work on the algorithm quite a bit here; AI didn’t know how to do it until I came up with a specific idea for the algorithm to use.
  10. This one needed a solver; brute force was too slow.
  11. I was able to do this without a graph library; the trick is caching, which I’ve seen before. No @functools.cache here (even Rust had an equivalent).
  12. The problem looks terrible in general, but you can solve the given dataset really easily (common on final-day problems).
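Since TypeScript has no @functools.cache, the day 11 caching trick comes down to a hand-rolled memoizer. A generic sketch of the idea, with fib standing in for the actual recursion (this is not my day 11 code):

```typescript
// A Map-based stand-in for Python's @functools.cache:
function memoize<A, R>(fn: (arg: A) => R): (arg: A) => R {
  const cache = new Map<A, R>();
  return (arg: A): R => {
    if (!cache.has(arg)) cache.set(arg, fn(arg));
    return cache.get(arg)!;
  };
}

// Recursive calls go through the memoized wrapper, so each value is
// computed only once:
const fib: (n: number) => number = memoize((n) =>
  n < 2 ? n : fib(n - 1) + fib(n - 2),
);
console.log(fib(40)); // 102334155, instantly
```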

Final verdict

My final verdict on TypeScript: fine. Not great, just fine; a clear improvement over JavaScript, but that’s all. I’d much rather do any heavy lifting in WebAssembly (via Rust or C++), then stick with this for the webpage stuff. Tooling was really nice, similar to Rust, Ruby, or modern Python tooling. I still want to work on making a little webapp, using JSX (or TSX, the typed version) and bundling it up. Bun is really nice.

For AI, it’s come a long way. It’s really able to do quite a bit (and it’s crazy that the same tool can also do completely unrelated stuff; things like ChatGPT aren’t really programming tools!), and all the infrastructure being built around these models is working well. It’s a really powerful tool that I think you simply need to know if you have a career in programming. At the same time, don’t use it if it takes the fun out of what you do, especially if you program for fun. Also, a lot of the success I had was due to knowing exactly what I wanted to do. It can be great for learning, but you have to use it as such, running tests to make sure it didn’t just hallucinate.

Is all that enough for me to launch VSCode more often instead of VI? No.

We’ll see if I finally do Advent of Code in Julia next year.

AI acknowledgement

I wrote this post myself in vim without AI, but I had ChatGPT review it for mistakes and help me with adding links.