I’ve wanted for several months to write a follow-up to my April 2023 post, How I’m using ChatGPT as an accelerator. I want to share how I’m using AI today, what I think about it, and what I think we’ll see in the future. But I will admit I hesitated, because people I respect and admire on the web have expressed that my use of, or writing about, AI would make me a terrible person.
I understand why some are choosing to boycott the use of AI due to how the information used to train the models was gathered … which is that much of it was most definitely stolen. There are models trained only on openly licensed and public data, such as those built on the Common Pile dataset … but even then, what is public data?
I also think a lot about the profound amount of energy and other resources needed to build and run the datacenters where all of these models are trained.
So it leaves me in a spot that feels unique in my career. I can’t remember a time that aligns with this one. The licensing, piracy, DRM, and crypto periods that the web has seen all had their challenges, but this feels different because, so far at least, it feels like LLMs and the skills to use them deftly might become vital for what I do.
I don’t think I could put it better than Frank Chimero did in a recent talk he gave in Brooklyn.
The believers demand devotion, the critics demand abstinence, and to see AI as just another technology is to be a heretic twice over.
So, with hesitation, doubt, confusion, and also intrigue and excitement, here are some random thoughts about AI as I see it in the fall of 2025.
I’m going to focus mostly on AI use as it relates to programming. But I will say this: I believe this technology is going to be applied to every single thing we can imagine, from our refrigerators, water pumps, cars, lights, healthcare, and yard tools to every piece of software we use. Whether we like that or not. (I’m personally not a fan of the “add AI to everything” moment we’re having.)
In early 2023 I was using ChatGPT to make me quicker at doing things I was already doing. It was a shortcut. It saved me from typing something mundane that I already knew I was going to type, because ChatGPT could “type” it faster. It wasn’t long after that I began to run models locally rather than relying on cloud services like ChatGPT, both to keep my data more private and to be less reliant on a paid service. It also allowed me to test many different models created in a variety of ways and with different goals.
It is my hope that there will be a model, or set of models, that everyone agrees was ethically sourced, trained, and distributed. It is arguable that some already exist. Even if it isn’t as good at some things as the leading models, I would use it in a heartbeat.
The pace of updates to LLMs between my post in 2023 and somewhat recently was breakneck: the models, the tooling, the features. Agents, MCP, CLIs, APIs, all improved rapidly. And I tried just about everything.
In my experience, LLMs are very good at helping me with my job, but they aren’t very good (yet) at doing my job. Most code written by agents (meaning, LLM tools with a bit more autonomy to do more than just suggest code updates) takes nearly as much work to fix as it would have taken to write yourself. It also has the added drawback that the programmer never becomes intimately familiar with the codebase, which, in the long term, could be a real issue. But perhaps this will improve and go away and we’ll never need to see code again? I’m not sure.
Let’s say you were going to build a weather app against an API of weather data. You ask an agent to show various bits of data in an app view: temperature, humidity, wind speed, etc. But if the API you’re using doesn’t support humidity levels, the agent may just plop in what looks to be correct code to fetch them anyway, even though you knew it wasn’t supported. I see this all the time.
For APIs I know well, it isn’t too big of a deal. But for those I don’t, it is a huge time suck. Again, something that will no doubt improve until it no longer happens… but it is still very much happening.
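To make that failure mode concrete, here’s a minimal sketch. The API shape, field names, and function are all made up for illustration; the point is that an agent will happily reach for a humidity field the (hypothetical) API never returns, and a defensive lookup at least surfaces the gap instead of crashing.

```python
def render_weather(payload: dict) -> str:
    """Format a (hypothetical) weather API response for display."""
    temp = payload.get("temperature_c", "n/a")
    wind = payload.get("wind_kph", "n/a")
    # An agent-written version would confidently do:
    #     humidity = payload["humidity"]
    # which raises KeyError, because this API has no such field.
    humidity = payload.get("humidity", "unsupported by this API")
    return f"Temp: {temp}°C, Wind: {wind} kph, Humidity: {humidity}"

# A response shaped like our imaginary API's output: no humidity key at all.
sample = {"temperature_c": 21, "wind_kph": 12}
print(render_weather(sample))
```

The code looks plausible either way, which is exactly the trap: unless you already know the API doesn’t return humidity, nothing about the generated code tips you off.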
Summarization, translation, transcription… these seem like solved problems at this point? They are incredibly useful.
My muscle memory has changed a bit (something I mentioned in that original post). I now use an LLM very early on in the process, whereas before I used it only after I got stuck on something. However, because of the setbacks and inconsistencies, I’m still reading the docs, doing the research, etc. before I jump into an LLM.
Recently we’re seeing AI browsers, like Dia and Atlas and others, pop up. I have no intention of ever installing these. In fact, I think the web browser landscape is dire. Mozilla seems lost, Safari is great (I use it daily) but locked to Apple platforms, and everything else is a version of Chromium, which is backed by an ad-based business. Bleh! I think this is going to be a real problem.
One last thing: I think the hype and speed of improvement is about to ebb for the text-based solutions and flow toward the video-based ones for a while. The tooling and communication across all of the platforms will likely improve greatly, agents likely will too, and other things… but I think the text-based models and chats are going to stagnate for a little while. The focus is going to shift to image and video generation.