As of today, I’ve been writing this blog for twelve years. Much has changed in that time. Like everyone in tech and tech-adjacent fields, much of the work I do on a daily basis has been transformed by recent advances in AI; first gradually, then suddenly.
I’ve been a pretty enthusiastic user of generative AI since late 2022, though I had a fair amount of initial skepticism about chatbots.
I’ve personally been much more interested in using Stable Diffusion than I ever was in text generation systems like GPT-3 and AI Dungeon. I’m a far better writer than artist, so the idea of generating text from a prompt is an amusing diversion at best. If I want some text, I’ll just write it myself. Although some artists are already integrating AI into their workflows, others will probably have the same meh reaction to its applications in their field.
— Me in August 2022, on Stable Diffusion
The earliest versions of ChatGPT had no function-calling/tool-use capability, so any content you got from them was pulled from somewhere in their latent space. Answers were often correct, but had to be carefully verified, as they were grounded in nothing but the model’s internal mathematics. These days, all major model providers have integrated tool use, allowing the model to search the web in real time and provide citations for its claims, so the verification process has been reduced from “get an AI answer and then do all the work to get the same answer without AI” to “check whether the AI’s sources are legit and actually support its claims”.1
As I’ve previously argued at length, the grounding provided by tool use is what makes these models useful, whether that’s research grounding through search results or grounding to the digital world through the output of terminal commands. This simple innovation is what makes an LLM-based system more than just a stochastic parrot (so anyone who still insists on citing that paper is a few years behind the curve).
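The loop behind this grounding is simple enough to sketch. Here is a minimal, self-contained illustration in Python: the “model” and the “search tool” are both stubs I’ve invented for demonstration (real provider APIs differ in their details), but the shape of the loop is the same: the model requests a tool call, the harness executes it, and the result is fed back so the final answer can cite something outside the model’s weights.

```python
# Minimal sketch of an LLM tool-use loop.
# stub_model and stub_web_search are stand-ins, NOT real provider APIs.

def stub_model(messages):
    """Pretend model: requests one search, then answers from the result."""
    if not any(m["role"] == "tool" for m in messages):
        return {"type": "tool_call", "tool": "web_search",
                "args": {"query": messages[-1]["content"]}}
    sources = [m["content"] for m in messages if m["role"] == "tool"]
    return {"type": "answer",
            "content": f"Grounded answer, citing: {sources[0]}"}

def stub_web_search(query):
    """Pretend search tool returning a citable source."""
    return f"[example.com result for '{query}']"

def run_agent(question):
    messages = [{"role": "user", "content": question}]
    while True:
        reply = stub_model(messages)
        if reply["type"] == "tool_call":
            # Ground the model: execute the tool, feed the result back in.
            result = stub_web_search(**reply["args"])
            messages.append({"role": "tool", "content": result})
        else:
            return reply["content"]

print(run_agent("When did ChatGPT launch?"))
```

The key design point is that the answer is constructed *after* the tool result enters the conversation, which is what lets you verify the citation rather than the model’s memory.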
Back when LLM chatbots got started, writing was the only thing they could do, so there was a lot of breathless talk about how they would replace writers. I wrote in 2023 that I did not see that happening, outside of the most boilerplate copy. Three years and many model upgrades later, I still don’t like AI writing – while some of the most annoying tells have been eliminated, I have yet to find a reliable way to generate writing that does not have the AI smell. In most cases, I would advise against publishing any AI writing that hasn’t been edited beyond the point of recognition.
Nevertheless, I find AI writing increasingly useful. The distinction is context. In the context of a chat with a chatbot, the writing is a living thing, highly personalised to my prompt. I can regenerate unsatisfactory responses, ask follow-up questions, or challenge conclusions. And at no point am I ever in any doubt that I’m speaking to an LLM. Contrast that experience with reading an article written by an LLM. You get all of the annoying writing tics without any of the interactivity. This is why I’m skeptical of the value of projects like Grokipedia: if we’re going to have an LLM-powered encyclopedia, it should be closer to The Young Lady’s Illustrated Primer from The Diamond Age than a bot-written Wikipedia. LLM writing is worth infinitely more alive than dead.
Considered from this angle, LLMs have replaced a lot of human writing that was previously common online. I can’t remember the last time I looked at StackOverflow. And it’s been a long time since I’ve felt compelled to write a tutorial for this blog, even though those have been some of my best performing posts. Whenever I encounter a technical issue these days, I get Claude Code or Google’s AI search to generate a custom tutorial for me; whenever I want to implement some technical thing, I brainstorm a bit with Claude and then get it to write the code. So I think LLMs have largely replaced the technical tutorial. That’s one kind of post I probably won’t write again.2
Sometimes I change the variable names in the code I copy off StackOverflow, just to leave my personal stamp.
— David Yates (@davidyat_es) November 22, 2017
I do not mourn the loss, because I’m having too much fun programming with AI. The orange website has latterly been chock-full of dirges lamenting the death of the old order, of the thing that programming was before 24 November 2025. I’ve come to appreciate that I did not value the act of writing code by hand – certainly not to the extent that many others did. As generative AI improves, it shows us what we really care about – at least one of the blog posts I read lamenting the retreat of hand-authored code was very clearly AI-written, with an obviously AI-generated header image. And there are probably visual artists out there somewhere using LLMs to generate writing and code while hand-making images.
I get it, and I’m glad that this tech allows us to focus on what we care about and are good at. But AI should help us produce better output. The thesis of my universal translator post was that we should view these things as tools that allow us to control and shape outputs, rather than as entities to delegate everything to – and that you should take responsibility for what you use AI to produce. I firmly believe that agentic engineering and synthography are emerging fields with their own set of skills to learn, and I plan to write a lot more about both in the coming year(s).
So here’s to the next twelve years of handwritten thoughts about technological change.
David Yates.