AI (that is, the latest iteration of AI in the form of large language models) has been called so many things, from “spicy autocomplete” to “the next industrial revolution”. At some point, I felt it’d be interesting to catalog an (inexhaustive) list of the comparisons I’ve heard so far, as a way to explore how to think about and understand the technology.
Spicy Autocomplete
The thinking goes that these AI tools are essentially just sophisticated autocomplete engines, similar to how you can start to type “I hope this email” in an email app and have it pop up “finds you well” as a suggested completion. What this comparison nails is that this is actually exactly how LLMs work. You feed one a context of words (or tokens, rather), and it will spit out the next word (token) that is statistically most likely to come next. Feed everything back through the LLM, and you get the word after that. Repeat until satisfied.
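In code, the loop looks something like this. This is just a minimal sketch using the Hugging Face transformers library; “gpt2” is a small stand-in model, and always taking the single most likely token (greedy decoding) is the simplest possible strategy, not what production chatbots do.

```python
# The autocomplete loop, minimally: predict a token, append it, repeat.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")   # stand-in model
model = AutoModelForCausalLM.from_pretrained("gpt2")

tokens = tokenizer("I hope this email", return_tensors="pt").input_ids
for _ in range(10):                          # "repeat until satisfied"
    logits = model(tokens).logits            # a score for every possible next token
    next_token = logits[0, -1].argmax()      # greedy: take the single most likely one
    tokens = torch.cat([tokens, next_token.reshape(1, 1)], dim=1)  # feed it back in

print(tokenizer.decode(tokens[0]))
```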
Where this parallel falls a bit short is that the LLM, while a core part of the product, is only part of the complete picture. Run an LLM in a harness that is designed for some type of task (e.g. coding) and can take output from the LLM and convert it into file edits or interactions with tools, and you get a more accurate picture; at that point, it’s a bit more than just an autocomplete tool. It’s also pretty great at information retrieval (just make sure you verify what it’s telling you, like the AI products always tell you to do, but you know you don’t. Who really cares about accurately representing the truth anyway?).
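Sketched out, a harness is roughly the loop below. Everything here is a hypothetical stand-in, not any real tool’s API; the point is the shape of the thing: the LLM only ever produces text, and the harness is what turns that text into actions.

```python
# A sketch of an agent harness. call_llm and apply_edit are hypothetical
# stand-ins for a model-provider round trip and a file-editing tool.

def call_llm(history: list[dict]) -> dict:
    """Stand-in: returns e.g. {"text": "...", "edit": {...} or None}."""
    raise NotImplementedError

def apply_edit(edit: dict) -> str:
    """Stand-in for the harness acting on the model's output."""
    return f"applied patch to {edit['path']}"

def coding_agent(task: str) -> str:
    history = [{"role": "user", "content": task}]
    while True:
        reply = call_llm(history)             # the LLM just predicts more tokens
        history.append({"role": "assistant", "content": reply["text"]})
        if reply["edit"] is None:             # no action requested: we're done
            return reply["text"]
        result = apply_edit(reply["edit"])    # the harness does the actual work
        history.append({"role": "tool", "content": result})
```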
Industrial Revolution
Moving on to parallels that describe not so much how the technology works, but what its impact will be on society: the Industrial Revolution. Just like the Industrial Revolution, AI is going to change how work is done and what we produce, but instead of upending artisan work like making textiles and shoes, it’s going to hit knowledge workers.
One way the “Industrial Revolution” parallel feels apt to me is that they both unlocked mass production at higher speed, but usually at lower quality. In every application of AI that I can see, from software to writing, this very much seems to be the case. Articles written by AI feel dry. Software written by AI is buggy and less maintainable. For a lot of applications, that actually doesn’t really matter, and in those scenarios, I think AI will thrive. Just like we ended up with “fast fashion”, we’re going to find ourselves in an era of “fast software”: where tech companies are able to rapidly churn out software products to capitalize on trends, but where the software is essentially disposable.
The Industrial Revolution also turned craftspeople into machine tenders, and that “tending the machine” parallel feels apt to me as a software engineer. I’ve been using AI for coding at work, and I can attest that there’s something less enjoyable about punching a prompt into an AI terminal and having it churn out a bunch of code for you. It’s as if your job has been reduced to turning the crank on a machine that spits out a shoddier version of what you could do (albeit at a much higher velocity) and inspecting the output to fix the mistakes it makes. This is not the only way to use AI for software development, but there’s a lot of pressure to use it this way, simply because of the velocity at which it produces code.
I think one hole in the Industrial Revolution parallel, though, is that AI tools are built on top of a technology with fundamental limitations that go largely unacknowledged in the current peak-AI-hype environment. Hallucinations are intrinsic to the technology and will never go away. In the land rush where everyone is seeing what they can get out of AI, we’re seeing legal letters citing fake cases, articles including fake quotes, and chatbots giving blatantly wrong information. It’s the permanence of the “this tool can make mistakes, verify all output” disclaimer that makes this feel like a less revolutionary technology than steam power, the semiconductor, or the internet for that matter.
Self-driving Cars
One technology that seems very similar, and has been around for many years, is the self-driving car. The similarity I see is in how we interact with the technology and how it changes our approach to a particular activity. It also offers a lesson in how to get an accurate read on a vastly-overhyped technology. (We’re almost at a decade of “full self-driving will come next year”!) One thing that makes self-driving cars a good parallel to the current AI wave is that both are built on the same type of technology, one with an inherent level of error. Any system with a 95% accuracy rate will be reliable enough for you to give some really impressive demos, but the technology still can’t be considered functional if it may occasionally steer you off the freeway into a concrete barrier. (It’s telling that when I took to a search engine to find the incident where a Tesla veered into a concrete barrier, the response was essentially, “which one?”)
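That 95% figure (purely illustrative, not a measurement of any real system) is worth making concrete, because per-step reliability compounds over a long chain of decisions:

```python
# If each individual decision is right 95% of the time, how often does a
# long chain of decisions complete without a single mistake?
# (0.95 is an illustrative number, not a measurement of any real system.)
per_step_accuracy = 0.95
for steps in (1, 10, 100):
    flawless = per_step_accuracy ** steps
    print(f"{steps:>3} steps: {flawless:6.1%} chance of zero errors")

# Prints approximately:
#   1 steps:  95.0% chance of zero errors
#  10 steps:  59.9% chance of zero errors
# 100 steps:   0.6% chance of zero errors
```

Impressive in a five-minute demo; much less so over a full commute.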
The other similarity with self-driving cars is how they change our relationship to a task. Many levels of driving automation exist, and the more automated the task is, the more likely the driver is to disengage, stop paying attention, and leave the task fully up to the automated system. With adaptive cruise control, the car manages the acceleration and braking, and the driver stays engaged while steering. Automate steering as well, however, and it’s a struggle for the driver to stay engaged as a mere observer. I’ve found the same when using AI to write code: if you write the code and AI writes the tests, or if you write the tests and ask the AI to write the code, you stay engaged in the process. If you’re simply writing prompts to the AI for everything (implementation plans, specs, tests, and the code itself), you eventually assume the role of a disengaged, passive observer and stop paying attention to the code that the AI tool is writing. So much for “AI can make mistakes, verify its output”. Of course it’ll work great most of the time, but only until the AI tool inevitably makes a mistake, one that won’t get caught if the driver isn’t paying attention. I think the best outcomes will come from AI interaction models where the user leverages AI but avoids outsourcing the entire task, making it easier to stay engaged.
Asbestos
Cory Doctorow called AI the “asbestos in the walls of our technological society,” and yes, remarkably, there are some real parallels to draw here. Asbestos is actually incredibly useful! It’s fire- and heat-resistant. It can be woven, packed, and shaped into many form factors. We were putting it in everything, until we fully realized that it’s made of microscopic fibers that shred your lungs and give you cancer if you breathe them in. (Asbestos cigarette filters may qualify as one of the absolute worst inventions of all time. Sucking cigarette smoke through a wad of asbestos would, in today’s language, be termed cancermaxxing.) Asbestos is a cautionary tale illustrating that the negative side-effects of something must be weighed along with its usefulness.

So, what are the negative side-effects of AI? Well, there’s the difference between a web search in 2018 and one today, where the majority of top-ranked sites are AI-generated garbage created to collect ad dollars with minimal human effort while providing no real value. There’s the harm of following an AI’s hallucinated advice, and the erosion of a person’s critical thinking skills as they give in to the temptation to outsource more and more of their thinking to an AI chatbot. And there are all of the negative externalities, like the enormous amount of water and energy consumed by the data centers, with much of that energy produced by burning fossil fuels.
Of course, it’s unlikely that we will ever consider the cost/benefit ratio of AI to be high enough to ban it outright like we did asbestos, but we will be paying the cost of cleaning up and removing AI slop from our digital spaces for some time.
A Sophisticated Guess-and-check Machine
Lastly, a way of thinking about AI tools that I think provides a roadmap for how they can be used effectively: AI tools are simply “guess-and-check” machines. Think back to your school days, when you could sometimes figure out the answer to a math problem not by working through the steps you were supposed to, but by simply taking a guess at the answer and plugging it into the original problem to see if you were right. That’s essentially what an LLM does: it makes an educated guess at what it thinks should go in whatever space you give it to fill in. That guess is informed almost entirely by hoovering up content from across the internet and measuring the statistical relationships between words and phrases. It doesn’t “know” the answer to anything. That’s why the “verify all output” disclaimer exists.
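The school-days version, in code form (the problem here is made up purely for illustration):

```python
# Guess-and-check: instead of solving algebraically, plug candidate answers
# back into the original problem until one checks out.
# Made-up problem: find x such that x**2 + x == 56.
def check(x: int) -> bool:
    return x**2 + x == 56

for guess in range(1, 20):        # "educated" guessing: try small integers
    if check(guess):
        print(f"x = {guess}")     # prints: x = 7
        break
```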
But if you’ve got a way to verify the output, that gives you a pretty easy shortcut to a working result. That’s why AI coding works at all. All the tools we’ve built over the ages to validate the correctness of software programs (compilers, type systems, test suites, validation pipelines) work great for creating a cycle of code generation and validation.
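That cycle, sketched out below. Here generate_code and write_to_project are hypothetical stand-ins for an AI coding tool’s internals; the “check” step is real, though: it’s just your test suite.

```python
# The guess-and-check loop behind AI coding tools, sketched out.
import subprocess

def generate_code(prompt: str, feedback: str) -> str:
    """Stand-in for asking an LLM to guess at an implementation."""
    raise NotImplementedError

def write_to_project(code: str) -> None:
    """Stand-in for applying the guess to the working tree."""
    raise NotImplementedError

def run_tests() -> tuple[bool, str]:
    """The check: run the project's test suite and capture its output."""
    result = subprocess.run(["pytest"], capture_output=True, text=True)
    return result.returncode == 0, result.stdout

def guess_and_check(prompt: str, max_attempts: int = 5) -> bool:
    feedback = ""
    for _ in range(max_attempts):
        code = generate_code(prompt, feedback)  # guess
        write_to_project(code)
        ok, feedback = run_tests()              # check; failures feed the next guess
        if ok:
            return True                         # a guess that checks out
    return False
```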
But ultimately, in most cases the validation will still need to come from you. Even with AI coding, the automated validation tools we have available only get you so far. They may be sufficient for a throw-away app, but for any sustained software effort, legal letter, article, or information lookup, AI will only be as helpful as your ability to review the output and recognize whether it’s correct.