Two years ago, I used my then-employer’s nascent LLM-powered chatbot to help me with a personal coding project. It failed, but in a very interesting way. This suggested the potential of generative AI not as a replacement for human work, but as an intriguingly novel creative tool, one that can use unpredictability and even confabulation to help you see your own work from new angles, sparking new insights.
One year ago, I attended a show that shocked me out of a professional crisis, one that very much involved my uncertainty about the explosive growth of LLM-based technology. It spurred me to walk away from a lucrative job with a global tech corporation that had taken a hard turn into a company-wide generative AI embrace. This remains one of the most difficult and meaningful things I’ve ever done, and I have no regrets.
This summer finds me returned to my comfortably pre-FAANG role as a freelancer. And: I have come to use LLM-based technologies nearly every day. I’ve become a paying customer of Anthropic’s Claude. My main client, CodeRabbit, sells an AI code-review service.
To be sure, last year’s exit had far more to do with recognizing how my priorities had diverged from those of my employer than with protesting generative AI specifically. And yet, my personal adoption of the technology feels rather more accelerated than we might have expected at the time, doesn’t it? A fresh examination seems due.
I borrow the title of this post from Chats with the Void, a webcomic by the pseudonymous Skullbird about finding personal growth through uncanny encounters with the reflective surface of non-existence. The comic is a long-time favorite of mine and I think of it often, including when I think of what it feels like when I’m having an eerily productive session with an AI chatbot, one that feels connective even though I’m well aware there’s no other mind at all there. Just my own, talking to itself, through this mannequin facilitator.
And that’s my thesis, really: the current state of LLM chatbots, when used with mindful care, helps me refine my work by forcing me to organize my thoughts conversationally, and then chat about them with a pseudoexistent entity. For the most part, it reflects my thoughts back at me—but coated with a squirming fuzz of connection and suggestion, picked up after rolling my writing around an unimaginably vast map of stored semantic vectors, trained on sketchily sourced seas of recorded human thought. Reading the chatbot’s replies forces me to regard my own thoughts from new angles, and challenges me to defend my creative choices, or to see areas where I could improve the clarity or completeness of my writing.
When I work with Claude, I bury the poor bot in conversational context. Not only do I feed it all my work, but I’ll pile in any immediately relevant supporting documents, and then wrap the whole thing in a prompt that is often a multi-paragraph screen-filler, laying out the relationship between all this material, my motivations for doing all this work, and the precise sorts of feedback that I hope to get out of the AI. Here are a handful of examples—liberally paraphrased for length—of prompts that I’ve used to launch long conversations with Claude in the last few months:
Here is a list of assertions about a well-documented technology that I need to write an article about. Are all of these statements technically accurate? Does the whole collection represent the outline of a complete user-level understanding of the technology, or are there some key points missing?
Here is a pitch letter I just wrote to a magazine seeking new submissions, but which I haven’t sent yet. Here is a copy of a page from the magazine’s website about their pitch process, and here’s some more information about this particular call for pitches. Does my letter fulfill everything that the magazine’s process asks for? How does the pitch itself read in terms of clarity, flow, and length?
Here is a very long, boilerplate-filled contract that a new client just sent me. Here is a list of the working conditions that my client and I have already informally agreed on. Does this contract support these conditions? If so, show me how. If not, explain why. Beyond that, is there anything in this contract that I might come to regret, if I sign it? You already know what I do for a living and how I prefer working.
Here is the GitHub repository for a static-site generator of my own design. Despite my not having worked on it in many years, I did recently start using it to publish a podcast, and it works remarkably well. But it has a significant shortcoming when used for this purpose, and here’s a detailed description of that. What’s a modest modification that I could make to improve this?
Here is a blog post that I wrote, as much for myself as for a public audience, examining my current relationship with rapidly developing LLM-based technology, and how I use AI chatbots like Claude in my day-to-day work. What do you understand the post’s thesis statement to be? Are there any modifications you would make to this draft in order to strengthen this thesis?
All of these have the general form of “Here’s a whole bunch of work I’ve already done. Observe as I salt it with meta-text about my motivations. Let’s crank this through your weights, paired with a question, and see what comes out.” And in every case, I found the output valuable, leading to a back-and-forth of challenges and counter-questions with an infinitely patient semi-entity that gives me sometimes-surprising insights into and perspectives on my own work, and often points at ways that I could refine it.
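For the curious, here is roughly what that pattern looks like when scripted rather than typed into a chat window. This is only a minimal sketch using Anthropic’s Python SDK; the file names, the framing text, and the model ID are hypothetical stand-ins, not my actual documents or prompts.

```python
# A minimal sketch of the "pile in context, then ask" pattern, using
# Anthropic's Python SDK. Everything named here is a hypothetical
# stand-in for whatever material you're actually working with.
import pathlib

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# The work itself, plus any immediately relevant supporting documents.
draft = pathlib.Path("draft-post.md").read_text()
notes = pathlib.Path("supporting-notes.md").read_text()

# The screen-filling wrapper: the material, how its parts relate,
# and the precise feedback I'm after.
prompt = f"""Here is a draft blog post of mine, followed by my working notes.

<draft>
{draft}
</draft>

<notes>
{notes}
</notes>

I wrote this as much for myself as for a public audience. What do you
understand the post's thesis to be? Are there modifications you would
make to this draft in order to strengthen that thesis?"""

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # whichever current model you prefer
    max_tokens=2048,
    messages=[{"role": "user", "content": prompt}],
)
print(response.content[0].text)
```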
This isn’t my only use-case for generative AI, but I think it’s my strongest one. Not a miraculous servant, but a darkly magic mirror.
In no case do I ask the chatbot to generate “slop”; there is no step, in my AI-enabled workflows, where I just let the machine operate unbounded, or where I intend its output to be seen by anyone other than myself. In every prompt, I set boundaries, observe the results, and think about what, if anything, they might prompt me to do in response. It is part of a closed system, with me on either end of its function. It is, to use a phrase I bandy about sometimes, cognitive middleware.
Part of why I enjoy the work at CodeRabbit is the product’s compatibility with my AI philosophy. CodeRabbit isn’t meant to supplant human code reviewers; it’s something more like a self-powered toolbox that scouts ahead of the human reviewers of any “pull request”—a formal request-for-comment regarding a code modification—leaving a summary and map of its findings, perhaps with some suggestions attached. These are presented tidily and atomically in a single comment, which human developers can heed or ignore, to any degree, at their leisure. I legitimately enjoy seeing the bot’s responses to my own documentation pull requests, and more than once it has made apt suggestions for improvements, which I interpret and then manually apply. Middleware.
I have experimented with using chatbots much more extensively, giving their leash more slack, and the results haven’t felt as good. In the two years since my first “rubber-ducking” adventures, AI chatbots have become far more apt at prompt-driven software creation. I know this because I do have running, on a private server, a web application that I directed Claude to stack together for me. The result, after a few hours of work, is a complete Python-based application that does what I asked for, to the letter. But it falls short of my own sense of taste in a hundred ways—and I don’t actually know how it works. I don’t like it.
In a recent episode of The Talk Show, Craig Mod—who seems to use generative AI under the same constraints that I do—described how he used Claude to build a web application to fulfill a very specific desire that he had, and was very satisfied with the results. In his case, he spent a couple of weeks on it, working gradually, and staying cognizant of how all of its parts worked, and why. Claude’s job was to help him build and iterate very quickly, far more quickly than he could have done by himself, but never straying from a place where Mod knew how the app worked, and where he should focus his next ideas for improvements.
So why did Mod use a generative AI tool at all, then? Because he knew he’d never have bothered otherwise! It was a fun project, scratching a personal itch, and there are enough high-priority demands on his time that he knew he’d never seriously commit to a solo coding project of great complexity but mild importance and a tiny audience. This resonated with me, and matched my own motivations for that bleak experiment on my server. I want to try it again, with a more expansive and patient attitude.
And so that’s where the summer of 2025 finds me with AI. I have always described my stance towards modern generative AI as “cautious curiosity”, and that hasn’t changed. My characterization from 2023 of the technology as a rubber duckie which can talk back to you remains in place, as well. What has changed over the last year or two is my willingness to explore—mindfully, and with constraints—how this technology can work for me, and help me with the things that I want to do, and make. And there’s something there, even if there’s nothing at all there, and I intend to continue digging. Or, anyway, gazing.