The Relationship Between AI and Human Expertise

Wednesday, May 24th, 2023

I’ve been using AI to create art, articles, music, and code. Here are my notes and impressions so far. Let’s see how well this ages.

Large language and generative models are powerful performance enhancers that bridge the gap between natural and formal language, but they do not diminish the significance of human expertise in tackling novel challenges. The human brain is a pre-trained multimodal neural network that provides the crucial context and motivation for problem-solving, since humans are the ultimate consumers of the output.

In addition to supplying motivation, human expertise reduces AI model complexity. It’s expensive and time-consuming to train large models from scratch, and models are generally constrained by the size and quality of their datasets. There is a finite rate at which new training data can be generated and sanitized. If you ask ChatGPT (May 2023) for help with the Mojo programming language, it will warn you that no such language existed as of September 2021, its most recent training snapshot. We can refer ChatGPT to the web and use fine-tuning to incrementally improve GPT’s knowledge, but these approaches have their own trade-offs in terms of cost and effectiveness, and ultimately still rely on the quality of the dataset.
Both humans and AI improve their proficiency through observation, iteration, and refinement over time. They reduce the incremental complexity of learning new tasks by relying on prior experience to distill complex data into simpler, more digestible patterns. The more we leverage human experience, the simpler the AI’s task becomes. For example, we train physicians with a fraction of the data that a generic AI would require, thanks to our existing shared understanding of language, anatomy, empathy, and much more. The costs of training and fine-tuning a specialized artificial network will continue to decrease, but having a human in the loop tremendously simplifies the overall process by focusing the AI on the learning needed to assist humans, rather than fully replace them.

I’ve used several code, art, and audio generation tools in my projects. Developing with an AI copilot feels collaborative. When I’m stuck on a problem, I can consult with AI to get ideas, if not working code. I spend much less time on generic tasks like writing boilerplate code and tests, looking up APIs, and straightforward implementations that are easy enough to describe. When I encounter a crash, compiler error, or unexpected behavior, I can ask the AI for tailored remedies and discuss iterations until the issue is resolved. Of course, this collaboration existed long before generative AI, relying upon human teammates, search engines like Google, and forums like Stack Overflow. With AI, I still rely heavily on teammates, but have drastically cut my reliance on traditional search and answer forums.

I move much faster with AI, but this collaboration actually places a greater reliance on my understanding of software development. AI may generate valid, succinct, and performant code that satisfies requirements on the first attempt. But I’d also estimate that 30% or more of its generated code is obviously wrong, and another 30% appears reasonable at first glance but actually contains serious flaws (e.g. unhandled corner cases, missed requirements, poor performance). This roughly lines up with OpenAI’s own estimates for accuracy, and improving this is a real challenge. Compounding the issue is the fact that humans may become over-reliant on AI over time, or hesitate to pursue new and better solutions, languages, or tools that are not sufficiently covered by the AI’s training data. The key to minimizing these risks is to recognize, value, and continue to cultivate human expertise as a necessary ingredient of responsible innovation.
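To make the “reasonable at first glance” failure mode concrete, here’s a hypothetical example of my own (not actual AI output): a tiny function that reads fine in review but hides an unhandled corner case, alongside a version a careful human would insist on.

```python
def average(values):
    # Looks plausible in a quick review, but raises
    # ZeroDivisionError when passed an empty list.
    return sum(values) / len(values)


def safe_average(values):
    # The corner case is handled explicitly; the choice of
    # fallback (0.0 here) is itself a judgment call that
    # requires human context about the caller's needs.
    if not values:
        return 0.0
    return sum(values) / len(values)
```

Spotting and resolving this kind of flaw is exactly where human expertise stays in the loop: the fix is trivial, but deciding what the right behavior *should* be is not something the generated code can answer for you.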

Ultimately, AI helps me write less code, which is great because I didn’t get into software for the love of typing. This in turn allows me to spend a much greater percentage of my time contemplating new projects, exploring new domains, and solving larger problems. I believe the foreseeable future of engineering is a collaboration between humans and AI, with a growing demand for human expertise. Of course, we can imagine robots that independently conceptualize, build, and deploy products exclusively for other robots, but I don’t see this happening anytime soon – and by then, we’d probably have other things to worry about besides job security.

It’s a great time to be – or become – an engineer!