In Defense of Luddites
How do we decide what AI should do regardless of what AI can do?
It’s easy to become overwhelmed with possibility. The news. The hype. The fear. There are many people thinking about AI’s application, about how to monetize it, about the technical possibilities. There are also many people thinking about ethics, though mostly about existential risk: how do we know AI won’t destroy humanity to optimize paperclip production? How do we regulate the development of autonomous surveillance and weapons systems (so, also existential risk, but with good old-fashioned human power at its root)?
Setting aside these pessimistic potentialities, I think the darkest scenario is seeing AI work exactly as the evangelists say it can: AI will be able to automate any task, so we will automate away the need for human labor, critical thinking, or the experience of any challenge at all.
We will automate away the possibility of creating meaning in our lives.
A Fate Worse than Death
The worst outcome for an individual in any situation is never death.
If you were unaware: when you die, you’re dead. And surprise number two: you are definitely going to die someday. Many religious traditions posit some sort of afterlife with various criteria for getting in (which may be some people’s response to my “…then you’re dead” claim). But in most cases, unless you know of some arcane accounting method I don’t, you won’t know moments before your death whether you’re on your way to heaven or needed five more good deeds to escape hell. Such accounting could theoretically make the moment you die matter. Otherwise, it will only matter to the people you leave behind.
Surviving will always pose the greatest potential for suffering.
This may be why Camus suggested that the fundamental question of philosophy is suicide, but my line of thinking is more optimistic. In the context of AI, I am not suggesting we must decide whether to permanently escape the threat (and everything else); rather, the threat of AI is not merely existential in terms of survival but existential in terms of anguish.
Making Meaning
There are at least three roots of meaning in life — two active and one passive. The passive root is context (which in itself has many active offshoots that catalyze the passive meaning). The active roots are based on creation and overcoming.
We are productive, creative, thinking creatures. We continually create ourselves in the effort of expression, whether artistic or practical. And it is in the moment of creation that we reify the self in an act of ontological manifestation. This is the root of meaning in creation.
In our capacity to use tools, in our creativity, in our thinking, we overcome challenges in our own lives and the lives of our community. It is in overcoming these challenges that we generate the narrative arcs of our lives. The obstacles are the plot. Our stories are crafted from difficulty, not comfort. This is the root of meaning in overcoming.
At the same time, AI already promises to take over both: creation and overcoming. AI is great at rapid iteration on writing and image generation, quintessential examples of human self-expression. AI is also great at mitigating challenges through automation, externalized processing, or simplified solutions, the very challenges we might otherwise have celebrated overcoming.
“Sometimes not getting what you want is a wonderful stroke of luck!” (Dalai Lama)
While AI may increase productivity, and it may be important to apply AI to dangerous, dirty, or demeaning jobs to free humans from poor working conditions, it is essential not to conflate what AI can do with what AI should do.
We need value-based goals guiding AI outcomes, not just economic ones (where it may be cheaper to automate creative tasks than pay a creative human to do it).
We need value-based goals guiding human efforts, not just economic ones (where many people are not in a position to choose their work and do not have the flexibility to leave unfair working conditions).
We need alignment across humans and AI, not just the “gain-of-function” equivalent in AI development (where the technical challenge of achieving AGI is treated as more important than its consequences).
Perhaps AI could theoretically do anything. Perhaps effective instruction hierarchies will be coded into AI to prevent bad actors from misleading it. And perhaps our alignment teams will indeed ensure that human and AI goals resonate. But I’m less confident about the human side: have we thought deeply about our own goals in light of AI? How many of us would give up automation in order to still experience interesting challenges? Progress is not reduced challenge, and comfort is not meaning.
AI may be able to solve our problems, but surviving without meaning is a fate worse than death.
Originally published at http://jsmonson.wordpress.com on August 15, 2024.