Disenchanting AI

Jonathan Monson
4 min read · Sep 5, 2024
I can’t decide if I hope the AI bubble bursts or not.

The technology is interesting and there is a fair bit of drama, but we're also hearing more frequent public statements of disappointment: investors let down by performance, companies displaying insensitivity (think the Apple iPad commercial crushing non-tech art supplies), and other companies positioning themselves as non-AI organizations for people committed to the human effort behind their output.

Popping the bubble

I see how collective impatience can be a benefit. There’s a natural breaking point — for hype and waiting for eventual promises, for anxiety and tolerating potential risks, and for alienation and losing what makes us human. These organic time pressures naturally reduce our risk as we reject investment in things for which we lose patience.

I also think there’s this point at which we prove something out far enough we pass the threshold of interest. We lose energy because we’ve done it or have at least seen the end or even just proven we can do it. Perhaps it’s like when you know you’re going to win (or lose) a game — it doesn’t feel necessary to keep playing. Or when you don’t like how a book or movie resolved its conflict. Or when you’ve figured out a job and it’s no longer interesting.

Whether impatience or satisfaction with a job well [if not completely] done, something is lost that no longer justifies the time. I suspect underlying this is losing touch with a shared vision, or sense of agency, or goals with which you no longer relate. If AI is taking a direction we (society, investors, tech companies) no longer agree with, we don’t share that vision. If AI development seems to be disconnected from the outcomes most humans actually want, we may not have a sense of agency to change that. And if business leaders are working toward automation and efficiency from which only they will benefit, we don’t relate to the goals and seemingly stand to lose out.

Whether the investment bubble bursts or not, the novelty is wearing off, the appetite is dwindling, and patience for a promised future yet to be seen is wearing thin.

The rainbow in the bubble

Bubbles are useful metaphors largely because of their popping, but I think the glimmer of a glossy rainbow may also be useful, and perhaps the residue they leave behind after they pop. What are the positive, hopeful things in AI development that do glimmer and what are the things that will stay?

We see a tactical example in the boring work that AI can make more or less obsolete. I hope that if investment does decline, we keep those gains and take the opportunity to retrain and redeploy the people who lost their jobs to AI.

Less tactically, though, what matters isn't the elimination of boring jobs per se but the outcomes: prioritizing human dignity and creating space for human flourishing (if the transition is done well).

And really, that's what we should be focusing on. That's where the hype and interest and investment should continue: optimizing for humanity. We should be discussing human flourishing, not investor returns, production efficiencies, and cost cutting. We should focus on how to help create a life worth living.

We saw some version of this in ESG (environmental, social, and governance) investing. People started optimizing for values, not returns. They are still investing and making traditional financial decisions, but with an overlay of self-reflection and intentionality. While guardrails are needed (ESG quickly loses appeal when we realize it’s all greenwashed), it’s still a model we can consider. If anything, it shows how much more important guardrails become — if companies can’t greenwash, investors don’t have to worry about it. If AI developers have clearer guardrails, society can focus on applying AI in a way that works for humans.

So, what is the real change we have already seen? What is the soapy, glimmering residue that even if some part of the bubble is gone is still there? People like Bill Gates and Sam Altman point to the AI that is already embedded in things like Microsoft products. And there are real benefits and industry-altering impacts from some of the efficiencies. Even potentially culture-shaping changes from translation services with natural language processing that may further change how (if?) people learn languages, and what it means for a language to “die.”

Amidst all of that, what is perhaps most striking to me is the reality that we have created sophisticated solutions to profound challenges and have already managed to take those technological achievements for granted. We’ve disenchanted AI. Human adaptability has popped another bubble.

Originally published at http://jsmonson.wordpress.com on September 5, 2024.


Jonathan Monson

With a propensity toward Hume’s “reflections of common life,” I write (because I like to) on whatever suits my fancy at the nexus of Philosophy and Culture.