I get asked fairly often whether I hate AI. The very simple answer is “No”. The more complicated answer is “No, but…” and there’s a whole slew of reasons why reality is complicated.
AI has utility. It has the ability to produce output that, while not authentically human, is human enough.
Note here, I’m specifically talking about code, science and so on. AI Art/Music/Books are and always will be an abomination.
What’s the Difference?
I’m so glad you asked. The difference is that code is, largely speaking, deterministic. An array is an array. It might have a different structure depending on the language, but the basic function is the same. You also may be able to write a more efficient array handler than someone else, but you’re still solving the same fundamental problem with the same tool. This makes it ideal for AI to work with since AI works best when it’s able to successfully predict* the next most viable token.
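To make that concrete with a trivial, hypothetical sketch: two deliberately different implementations of the same array operation land on the same answer. The names here are illustrative, not from any real codebase; the point is only that however you phrase the code, the underlying problem and result don't change, which is exactly the regularity next-token prediction can exploit.

```python
# Two different ways to sum an array. Same problem, same tool,
# same answer -- the deterministic property discussed above.

def sum_loop(values):
    """Naive accumulator loop."""
    total = 0
    for v in values:
        total += v
    return total

def sum_recursive(values):
    """Divide-and-conquer version: split the list, sum each half."""
    if not values:
        return 0
    if len(values) == 1:
        return values[0]
    mid = len(values) // 2
    return sum_recursive(values[:mid]) + sum_recursive(values[mid:])

data = [3, 1, 4, 1, 5, 9]
assert sum_loop(data) == sum_recursive(data) == sum(data)  # all 23
```

One version may be faster or prettier than the other, but a reviewer (human or machine) can verify both against the same fixed answer. No such fixed answer exists for a melody.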
Human Art/Music/Books are, by their nature, not deterministic. They're not probabilistic either. Listen to Debussy or Sabrina Carpenter and try to predict the next note. If you told Stan Getz his riffs were easy to predict, he'd probably punch you** while cranking out a unique set of notes that would melt your head. This is why AI art/music/books are an abomination: they apply probabilistic methods to a creative output, and the resulting product is bland, soulless, and missing the entire point of creating art.
When Is AI Useful?
AI is useful (or has utility, if you prefer) in fields where predictability is not strictly necessary OR there's a sufficiently large corpus of training information that next-token prediction is good enough to produce something that works. In other words, it's perfectly useful for code and other spheres that specifically rely on "good enough". While we all want to think that AI is only useful outside of our own jealously guarded spheres, and that we write beautiful, performant code, reality is often different.
Nobody*** notices if your word processor takes 0.2s or 0.3s to finish a background task, but people are exceedingly good at figuring out when your LinkedIn post has been written by AI. They’re also really good at noticing when your Health AI has diagnosed a patient with a disease that they don’t have. Usually.
Why Not Use It Everywhere?
The word “usually” in the previous paragraph is doing a lot of heavy lifting. As we massively scale AI output, we’re not massively scaling human reviewing at the same pace. In fact, we’re reducing human review and becoming increasingly reliant on AI review as we grow. This is not sustainable long-term. Initially, humans simply work harder and longer in order to keep up. We’ve hit an inflection point where human attention is not enough, even in the most well-meaning organisations.
In many of the environments where AI is being rolled out (health, defence, education, welfare, government, etc.), we cannot afford ambiguity. Hallucinations here make real differences to real lives and have incredibly far-reaching complications. And that's before we get to the security implications of systems running essentially unchecked in these environments. I don't want ChatGPT targeting "almost" the right person when it's introduced to a military system. I don't want my GP "almost" prescribing me the right medication because that's what the visit notes say.
Yes, this is even a problem in coding. Open Source projects and maintainers are shutting down because they simply cannot keep up with the number of machine-generated PRs. Even with the best intentions, if you don’t understand the codebase you’re submitting a PR for, you shouldn’t allow Codex, Claude, or any other AI to submit on your behalf.
Ok, What’s the Solution?
I don’t know what the solution is; I hope to explore that over time. I do, however, know what the solution is not:
- Writing “MAKE SURE THIS IS SECURE” in your Claude.md or agents.md file
- Trying to persuade the agent not to do a particular thing: “Don’t hallucinate” or “STOP here and wait for input”
What the solution looks like is, hopefully, co-parenting of AI between the increasingly large companies that provide it and the rapidly growing group of people who use it. If this is going to be successful, we need to work together, and I want a say in what my future looks like. That’s the only way AI is going to be useful, not just used.
* Yes, it’s more complicated than simple prediction, don’t @ me. Prediction is still the most accurate way to describe this, even with all the complex toolchains involved.
** He led a colourful life. Lovely Jazz though.
*** Except QA. Buy your QA people a drink and say thank you. Often.