One-Shot and Retreat

30 March 2026

Years ago, I played a game of Timeline with some coworkers, and I immediately wanted a version of the game that sequenced events from more specific (or even personal) domains.

Years passed with only a TODO note, so this year I figured it might be a good candidate for seeing what Gemini CLI could do mostly on its own, in "one shot".

I wrote an initial spec and told the agent to read it and implement the whole thing. Of course, it ran off in the wrong direction on the first pass, so I deleted the resulting code, expanded the spec with more detail, and kicked it off again. I did this about 7 more times, and the results kind of worked, but they randomly included and ignored some of my directions. As usual, the guesses it made were mostly welcome, since I just needed to see something but didn’t know what yet.

Generating the whole thing only to throw it away burned a lot of tokens and time. Is it really "one shot" if you need to do it multiple times?

A handful of those candidate applications would have been a fine starting point, so I kept one and went into iterative mode. From that point, I asked the agent for small, focused changes, like I had for other apps. It’s really good at gathering context from the existing project and implementing those fixes and enhancements. I (we?) worked much faster and more consistently in small bites.

The agent was able to locate and build some datasets for me, but I also scraped and transformed a dataset from the Computer History Museum. That was one of my main motivations to get this project going.

My implementation of Timeline is a fun way to explore very specific domains and to learn. It’s like flashcards.
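The heart of the game is a single check: does placing the new event card at the chosen spot keep the timeline in chronological order? A minimal Clojure sketch of that rule (the function name and data shape are my own illustration, not the app's actual code):

```clojure
;; A placement is correct when inserting the new event's year at the
;; chosen index keeps the timeline sorted.
(defn correct-placement? [timeline-years idx new-year]
  (let [[before after] (split-at idx timeline-years)]
    (and (every? #(<= % new-year) before)
         (every? #(<= new-year %) after))))

(correct-placement? [1936 1971 1984] 2 1977) ;; => true
(correct-placement? [1936 1971 1984] 0 1977) ;; => false
```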


Teach the AI to Unit Test

19 February 2026

The Gemini AI will make some pretty good guesses about how a third-party API may work. It is good at searching the internet, but when an API has changed across versions, the mix of old and new docs and examples it finds can confuse it. In a dynamic language and environment, you won’t spot these errors until runtime.

To combat the ambiguity and to give the AI agent more power to solve its own problems, ask it to add some tests around the code that uses the API. (In my case, the API is the XTDB client API.) Once it has a way to execute the code through tests, it’ll quickly start figuring out where it’s made mistakes, running its own experiments to observe errors, searching for fixes, and applying those fixes around the codebase. I follow the same pattern when I’m doing it by hand.
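The kind of test I ask for is a small round-trip through a thin wrapper over the client API. A sketch with clojure.test (the app.db namespace and its helpers are hypothetical stand-ins, not XTDB's actual API):

```clojure
(ns app.db-test
  (:require [clojure.test :refer [deftest is testing]]
            [app.db :as db])) ;; hypothetical wrapper over the client API

(deftest round-trip-test
  (testing "an entity we store can be read back by id"
    (let [node (db/start-in-memory-node)] ;; hypothetical helper
      (db/put! node {:id :user-1 :name "Ada"})
      (is (= "Ada" (:name (db/fetch node :user-1)))))))
```

Once a test like this exists, the agent can run it, see the real error messages from the real library version, and stop guessing from stale docs.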

The tests also give you, the human, an easier entry point to evaluate the code the AI generated. If the tests look gnarly, you know to suggest refactorings to improve the architecture and make it easier to test. When the AI has the tests passing, and the test code is easy enough to read, then you can have a closer look at the application code to refine and keep that maintainable too.


Iterative Development with Gemini CLI

31 December 2025

Models and Expectations

I’ve had Gemini CLI installed on my workstation since August 2025.

Originally, it defaulted to the gemini-2.5-pro model; your "access" to that would run out for the day, and it would switch to gemini-2.5-flash. I found the flash model adequate for the way I use it for Clojure and ClojureScript, so most of the time I’d override it to use flash from the beginning, figuring I could kick over to pro if I found a problem that needed more power.

Eventually, Gemini CLI started switching back and forth between models more intelligently, so it didn’t burn through your limited access to pro, and with the 3.0 models I no longer override it.

Pairing with a Junior Developer

On its own, the AI agent has read lots of documentation, and it’s pretty good at Googling the answers to questions and picking something to try. (I often get a bit of analysis paralysis when trying to choose a library.) It can be surprisingly good at translating sample usage of some JavaScript library it finds into a simple bit of ClojureScript.

In my experience, it’s sometimes bad at matching parentheses, so I just fix them myself. It may be getting better recently, and some Clojure MCP projects can clean up parentheses automatically.

I only ask it to do small tasks, and I closely review and test the code it generates. When it looks good, I commit and push the code, but I know I can always easily go back to a previous working version when the AI goes off the rails. I don’t have to worry too much about it getting too confused or destroying something. I tell it to forget what we were doing, /clear the context, or just restart the agent completely, and recover my known good state from git. (Update 2026-02-17: /rewind may be better these days for clearing some context.)
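The checkpoint-and-recover loop above is plain git. A sketch in a throwaway repository (the file names are made up):

```shell
# Demonstrate the checkpoint-and-recover loop in a throwaway repo.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email test@example.com
git config user.name test

# Checkpoint: commit the reviewed, working code.
echo "good" > app.cljs
git add app.cljs
git commit -q -m "Working: known good"

# Simulate the agent going off the rails:
echo "broken" > app.cljs
echo "junk" > stray.txt

# Recover the known good state:
git checkout -- app.cljs   # discard edits to tracked files
git clean -qfd             # delete untracked files the agent created
```

With a fresh commit after every reviewed change, `git checkout` and `git clean` put the working tree back exactly where it was, no matter how confused the agent got.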

I find that even if it fails to complete a task, I at least learn a little from what it did, and often have an initial direction or two to explore.

It’s pretty good at keeping my momentum up and keeping me from spinning my wheels, much like pairing with another programmer.


Google Can't Reach SmartThings

28 January 2022

My Google Assistant on my phone has been refusing to turn on and off the 2 devices I have on smart plugs: "Can’t reach SmartThings."

I found an article about the Google Home doing the same thing. Fortunately, the advice there worked: go into Assistant’s settings → Devices → Add Devices. Upon clicking on the SmartThings entry that was already there, it gave me the option to re-link. Once I authorized access, I could again ask Google to control those devices.

