AI: a disincentive to progress?
Does AI create an incentive for progress to stall? And if so, how?
The release of ChatGPT at the end of November 2022 caused a lot of commotion. Students no longer write essays on their own; and programming, once ubiquitous, might soon be forgotten like the compact disc.1
If middle-skill jobs aren't already under threat, this new 'industrial revolution' will soon put them there. But how did we get here?
ChatGPT is the latest step in a relatively recent progression in AI. It all started in June 2020 with GPT-3. Then came DALL-E, which ignited endless debates over AI-generated imagery; and next GitHub Copilot, which had a strong impact on the programming world.2
Given the breadth of impact and amount of progress in such a short time, it's natural to ask ourselves where humanity will be one, ten or a hundred years from now.
But before digging any deeper into the question, my answer is going to be predicated on a fundamental assumption: that AI is unable to innovate, or create anything 'new' from scratch. By that I mean that, however wildly creative an AI-generated answer or image appears to be, it's just a reconstruction moulded from the vast amount of human knowledge that already exists.3 A bit like a Lego set: the AI can take its pieces and assemble them in countless ways, creating combinations never seen before. But it cannot create completely new blocks. Only humans can. And so it is confined by the limits of human knowledge and our own innovation.
More innovation from us means more combinations of knowledge for the AI to unlock. But innovation takes expertise, acquired over time, and with considerable effort. In other words, it's a costly affair.
Think about how it will feel to come up against a bot that can combine existing information in unexpected ways, in an instant, constantly absorbing and using every new fragment that's out there. What's the point of innovating any more when you're an amateur chess player forever coming up against a Grandmaster? Will this be a disincentive for human-led innovation?
Then factor in that everyone has access to those Grandmaster capabilities for free. The average Joe will become indistinguishable from the expert. In other words, looking at it through a game-theory lens: when imitating through AI costs nothing, free-riding becomes the optimal strategy.
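The free-riding claim can be made concrete with a toy payoff model. All the numbers below are hypothetical illustrations, not figures from any study: innovating costs real effort, imitating through AI costs nothing, and because any new piece of knowledge is instantly shared, both strategies enjoy the same benefit.

```python
# Toy payoff model for the free-riding argument (all numbers hypothetical).
# Each player either innovates (pays a cost to create a new 'Lego piece')
# or imitates (uses AI to recombine existing pieces at zero cost).
# Any new piece is instantly available to everyone, so both strategies
# draw the same benefit from the shared knowledge pool.

INNOVATION_COST = 10   # effort and expertise needed to create something new
SHARED_BENEFIT = 15    # value of the knowledge pool, enjoyed by all players

def payoff(strategy: str) -> int:
    """Return a player's payoff; imitators free-ride on the shared pool."""
    cost = INNOVATION_COST if strategy == "innovate" else 0
    return SHARED_BENEFIT - cost

print(payoff("innovate"))  # 5
print(payoff("imitate"))   # 15 -- imitation strictly dominates
```

Whatever the exact values, as long as innovation carries any cost and its fruits are shared instantly, imitation pays strictly more, which is the textbook condition for free-riding.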
If that happens, more and more people won't bother creating new Lego pieces; they'll just use AI-assisted tools to find the best expression of what's already there. The 'geeks' who might once have done so will find the cost ever more daunting and frustrating, especially since, if they do invent something new, everyone else will have instant access to the enhanced Lego models that one extra piece makes possible.
And so, if everyone imitates, then who innovates? Only the very few, if any. Overall human knowledge will plateau.
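The plateau can be sketched as a tiny simulation, again with purely hypothetical numbers: knowledge only grows while someone still innovates, and each period some innovators defect to imitation because free-riding pays better.

```python
# Toy dynamic (hypothetical numbers): knowledge grows in proportion to the
# fraction of people still innovating, and that fraction halves each period
# as innovators defect to AI-assisted imitation.

knowledge = 100.0           # size of the shared knowledge pool
innovators = 0.30           # fraction of people still innovating
DEFECTION_RATE = 0.5        # share of innovators who switch each period
GROWTH_PER_INNOVATOR = 0.1  # relative growth contributed per unit of innovator share

for period in range(20):
    knowledge += innovators * GROWTH_PER_INNOVATOR * knowledge
    innovators *= 1 - DEFECTION_RATE

# Innovators dwindle towards zero, so knowledge stops growing: a plateau.
print(f"innovators: {innovators:.6f}, knowledge: {knowledge:.2f}")
```

The specific constants don't matter; any dynamic where the innovating fraction shrinks geometrically makes total growth converge to a finite ceiling.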
And in a dystopian future, no one will know how to improve, let alone fix, this AI (that skill will be long forgotten). We'll live comfortably in our ignorance, satisfied by the minimum expression of our potential, just capable enough to operate our AI avatars. Until one day the system fails and we're forced to crawl out into the world in search of food.
And that will be our new beginning. The END.
According to the RIAA, sales of audio compact discs peaked in 1997 at 943 million units and were down to 47 million in 2021.↩
The GPT-3 model was released in June 2020. DALL-E was released in January 2021, GitHub Copilot in June 2021, and AlphaFold in July 2021.↩
The implications of AGI are a much broader topic for another day.↩