Artificial intelligence (AI) is made up of data, chips or code, but also of the stories and metaphors we use to represent it. Stories matter. The imaginary around a technology determines how it is understood by the public and therefore guides its use, its design and its social impact.
It is therefore worrying that, according to most studies, the dominant portrayal of AI bears little resemblance to its reality. The ubiquitous images of humanoid robots and the anthropomorphic framing of chatbots as “assistants” or artificial brains are appealing commercially and journalistically, but they rest on myths that distort the nature, capabilities and limitations of current AI models.
If the way we represent AI is misleading, how will we truly understand this technology? And if we don't understand it, how will we be able to use it, regulate it or align it with our interests?
The myth of autonomous technology
The distorted representation of AI is part of a widespread confusion that the theorist Langdon Winner christened “autonomous technology” as early as 1977: the idea that machines have taken on a kind of life of their own and act on their own in a deterministic and often destructive way on society.
AI now offers the perfect embodiment of that vision, because it flirts with the myth of the creation of an intelligent, autonomous being... and the punishment that follows from arrogating that divine function. It is an ancestral narrative pattern that runs from Frankenstein to Terminator, from Prometheus to Ex Machina.
The myth of autonomous technology can already be sensed in the ambitious term “artificial intelligence”, coined by the computer scientist John McCarthy in 1955. The term turned out to be a success despite, or perhaps because of, the many misunderstandings it invites.
Metaphors that confuse us
The language used by many media, institutions and even experts to talk about AI is riddled with anthropomorphism and animism, images of robots and brains, consistently false stories about machines rebelling or acting in inexplicable ways, and debates about their supposed consciousness, not to mention a sense of urgency and inevitability.
That vision culminates in the narrative that has driven the development of AI since its inception: the promise of artificial general intelligence (AGI), a supposed human- or superhuman-level intelligence that will change the world or even the species. Companies like Microsoft or OpenAI and tech leaders like Elon Musk have been predicting AGI as an ever-imminent milestone.
However, the path to such a technology is unclear and there is not even a consensus on whether it will ever be possible to develop it.
Story, power and the bubble
The problem is not only theoretical. The deterministic and animistic view of AI helps construct a future that is presented as already settled. The myth of autonomous technology serves to inflate expectations about AI and divert attention from the real challenges it poses, thus hindering a more informed and pluralistic public debate about the technology. In a landmark report, the AI Now Institute therefore refers to the promise of AGI as “the argument to end all arguments”, a way to avoid any questioning of the technology.
Beyond fuelling a mixture of exaggerated expectations and fears, these narratives are also partly responsible for inflating the potential AI economic bubble that various reports and technology leaders have warned about. If such a bubble exists and eventually bursts, it is worth remembering that it was fed not only by technical achievements, but also by a representation that is as striking as it is misleading.
A narrative shift
Fixing the broken narrative of AI requires foregrounding its cultural, social and political dimensions. That is, leaving behind the dualistic myth of autonomous technology and adopting a relational perspective that understands AI as the fruit of an encounter between technology and people.
In practice, this narrative shift consists of moving the focus of representation in several ways: from the technology to the humans who guide it, from a techno-utopian future to a present under construction, from apocalyptic visions to present risks, and from AI presented as unique and inevitable to an emphasis on people's autonomy, choice and diversity.
Various strategies can drive these shifts. In my book Tecnohumanismo. For a Narrative and Aesthetic Design of Artificial Intelligence, I propose a series of stylistic recommendations to escape the narrative of autonomous AI: for example, avoiding the use of AI as the subject of a sentence when it actually plays the role of a tool, or declining to attribute anthropomorphic verbs to it.
Playing with the term “AI” also helps to show how much words can change our perception of the technology. What happens when we replace it in a sentence with, for example, “complex task processing”? That was one of the less ambitious but more accurate names proposed for the discipline at its origins.
Key debates about AI, from its regulation to its impact on education or employment, will continue to rest on swampy ground as long as the way we represent it goes uncorrected. Designing a narrative that makes the socio-technical reality of AI visible is an urgent ethical challenge, one that will benefit both technology and society.
Source: theconversation.com
