The prospect of existential risk from sentient computers seems more real as AI systems grow more capable. The recent open letter calling for a six-month pause on the development of systems more powerful than GPT-4 echoes this sentiment strongly. If researchers are alarmed by the threat posed by rogue AI and AGI, the general public must be terrified, right?
It turns out that releasing AI systems carries far less risk than is generally believed. Even though well-known research organisations claim that AI will replace over 300 million jobs, the reality is, as always, much more complicated.
But the most important lesson is obvious: AI won’t rebel.
It is merely software.
An article recently floated the idea that the video game “Elden Ring” would soon develop sentience and pose an existential threat to humanity. The idea may appear absurd at first, but it is really just a metaphor for another, equally improbable scenario: current AI playing the part of Skynet.
The term “artificial intelligence” is used in the field to describe what is currently being built, even though it refers to just another piece of software. Modern artificial intelligence is at best a step towards truly intelligent software that could one day endanger humans.
Consider the television programme Westworld, which imagines a near-future theme park run by human-like robots. When these robots become sentient, the park’s manager poses an intriguing question: what is consciousness? Speaking to a machine that has come to understand its true nature, a programmer says there is no threshold that makes us greater than the sum of our parts, no inflection point at which we become fully alive. Because consciousness doesn’t exist, we are unable to define it.
AI researchers today must confront a basic question: how can we fear a generally intelligent program, commonly known as AGI, when we haven’t even established what consciousness is?
According to this paper on the nature of consciousness in AGI, there are still four definitional categories of consciousness to resolve before we reach AGI. In this regard, our current algorithms are straightforward programmes that execute instructions, whereas AGI would be a fundamentally new technology that goes beyond algorithms.
Despite being light years ahead of ELIZA or GANs, GPT-4 and Midjourney are still not intelligent programmes. The issue is the branding of the field of artificial intelligence, not the technology being studied.
AI faces a naming issue.
The fundamental issue with AI is the fear surrounding it, not any harm posed by AGI. For many years, popular culture has used images of thinking machines to frighten audiences.
However, getting from the current state of AI technology to such powerful machines would require a paradigm shift in computing. Because of their deterministic, binary foundations, modern computers cannot even generate truly random numbers.
“On a completely deterministic machine you can’t generate anything you could really call a random sequence of numbers, because the machine is using the same algorithm to generate them,” said Steve Ward, professor of computer science and engineering at MIT.
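Ward’s point can be demonstrated in a few lines of Python. This is a minimal sketch, not anything from the article itself: the function name and parameters below are illustrative, but the behaviour it shows, that a deterministic machine seeded with the same starting state always produces the same “random” sequence, is exactly what the quote describes.

```python
import random

def pseudo_random_sequence(seed, count=5):
    """Return `count` pseudorandom integers from a generator seeded with `seed`."""
    rng = random.Random(seed)  # a deterministic algorithm with a fixed starting state
    return [rng.randint(0, 99) for _ in range(count)]

# Two runs with the same seed yield identical sequences: the machine is
# simply re-executing the same algorithm from the same state, so there is
# nothing genuinely random about the output.
first = pseudo_random_sequence(42)
second = pseudo_random_sequence(42)
print(first == second)  # True
```

This is why such generators are called *pseudo*random: unpredictability comes only from hiding or varying the seed, not from the machine itself.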