Rethinking AI: Why our understanding of machine intelligence must change as artificial intelligence develops
At the mention of tests of machine intelligence, it's natural to think of the Turing Test. What comes up less often is the fact that the Turing Test, proposed in 1950 at the dawn of the computing age, has already been passed by modern AI systems. Research from Nelson Phillips (Distinguished Professor of Technology Management at UCSB) and collaborator Mark Thomas Kennedy (Imperial College London) argues that the Turing Test is out of date, and that it is time to find new ways to evaluate the intelligence of AI that account for its current knowledge and abilities.
If this seems premature, it is worth taking a moment to step back and see how much progress has been made since the Turing Test was devised. Designed by the renowned computer scientist and mathematician Alan Turing, the original test challenged a machine to mimic human responses under specific conditions. In each run, a human and a computer were asked a series of questions, after which the questioner had to decide which answers came from the computer and which came from the human. If the questioner was correct in less than half of the test runs, the computer was considered to have demonstrated artificial intelligence.
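To make that pass criterion concrete, here is a minimal Python sketch. The judge function and transcripts are invented placeholders, not Turing's original setup or the researchers' code; the point is only that the machine "passes" when the judge does no better than chance.

```python
import random

def run_imitation_game(judge, transcripts, trials=1000):
    """Return the fraction of trials in which the judge correctly
    identifies the author ("human" or "machine") of a transcript."""
    correct = 0
    for _ in range(trials):
        author = random.choice(["human", "machine"])  # hidden ground truth
        guess = judge(transcripts[author])            # judge sees only the text
        if guess == author:
            correct += 1
    return correct / trials

# A deliberately naive judge that guesses at random, for illustration only.
def naive_judge(transcript):
    return random.choice(["human", "machine"])

# Toy transcripts; identical on purpose, so no judge could tell them apart.
transcripts = {
    "human": "I had toast for breakfast and overslept.",
    "machine": "I had toast for breakfast and overslept.",
}

accuracy = run_imitation_game(naive_judge, transcripts)
verdict = "passes" if accuracy <= 0.5 else "fails"
print(f"Judge accuracy: {accuracy:.0%} -> machine {verdict}")
```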
The problem with the Turing Test is that modern-day machines are now passing it. While modern AI isn't quite the robotic life pictured in science fiction films like The Matrix, Her, and Ex Machina, it has still made astonishing leaps in just the past decade. AI systems like ChatGPT are no longer limited to filling in missing information or satisfying our everyday curiosities; they can answer questions, write papers, create songs, compile recipes, and even hold interactive conversations. The line between human and machine has blurred more than ever before.
In light of these changes, Phillips and Kennedy propose a new way to look at machine intelligence through a game they've dubbed the "participation game." Inspired by the Turing Test but differing in a few significant ways, the participation game isn't about simply passing as a human, but about being able to participate in complex human interactions. The game builds on a parlor game called Categories, in which participants compete against the clock and each other to generate lists of words starting with certain letters and falling into a particular category (for example, food or shoes). Each player's list of words is then debated among the participants to decide the game's winner. The goal isn't to prove that machines can pass as humans so much as to see whether they can match the influence, power, and credibility of their human counterparts. The game's function is to study artificial intelligence's effect on the social processes it participates in, and to use those findings to evaluate an AI system's level of intelligence.
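As a rough illustration, here is a hypothetical Python sketch of one round, assuming the Categories structure described above and reducing the debate over each word list to a simple majority vote. The player names, word lists, and voting rule are all invented for this sketch; none of it comes from Phillips and Kennedy's paper.

```python
def play_round(players, category, letter):
    """players: {name: submit_fn}; each submit_fn returns a list of words
    that the player claims start with `letter` and fit `category`."""
    submissions = {name: submit(category, letter)
                   for name, submit in players.items()}

    def vote(word):
        # Placeholder judgment standing in for a real debate: accept any
        # word that actually starts with the required letter.
        return word.startswith(letter)

    def debated_score(words):
        # A word scores only if a majority of players would accept it.
        return sum(1 for w in words
                   if sum(vote(w) for _ in players) > len(players) / 2)

    return {name: debated_score(words)
            for name, words in submissions.items()}

# Toy players: a "human" and an "AI", both hard-coded for the sketch.
players = {
    "human": lambda cat, ltr: ["soup", "salad"],
    "ai":    lambda cat, ltr: ["sushi", "scone", "quiche"],  # "quiche" fails debate
}

print(play_round(players, category="food", letter="s"))
# e.g. {'human': 2, 'ai': 2} -- the scores measure how well each player
# participates in the shared activity, not whether the AI passes as human.
```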
So how can we think about AI differently?
If AI's effects on human interaction are shown to be comparable to those of humans, we can expect to face new challenges. Phillips and Kennedy raise three topics for which they anticipate questions: influence, legitimacy, and agency. If machines have these traits, they should also be subject to the same scrutiny that has previously been reserved only for humans.
Here are some questions to think about as we enter a new age of AI that exhibits more human-like intelligence:
- How do we redefine what intelligence means so that it isn’t exclusive to human intelligence?
- How do we regulate AI influencers' and actors' involvement in media as they gain more traction?
- What moral and ethical standards should we, as humans, hold AI to, and how are they the same as or different from the moral and ethical standards we apply to ourselves?
All of this goes to show that as technology improves, the tests we use to evaluate its capabilities must improve as well. If we continue to use outdated tests like the Turing Test to evaluate modern AI systems, we may miss important information and fail to see problems, like how we think about and regulate the degree of AI agency, that are increasingly important to deal with.