
Novelists have been kind to artificial intelligence in recent times. In Kazuo Ishiguro’s 2021 novel Klara and the Sun, Klara is an AI designed to be a child’s companion, and she is by far the most compassionate and self-sacrificing character in the book. In the rather darker Machines Like Me by Ian McEwan (2019), the mistreatment of an AI again arouses the reader’s indignation, precisely because of the artificial intelligence’s virtues. As a device through which to view humanity, positing a consciousness more beautiful and unconditionally compassionate than our own can be very powerful. But as an insight into the potential social impact of artificial intelligence, such depictions are far removed from what AI actually does today.
Artificial intelligence as it currently stands is far removed from the conscious beings depicted in fiction. Software algorithms fed with data make computations that can be quite impressive but hardly deserve the label ‘intelligence’. There’s a squirrel who comes to my back yard, having discovered that I keep leaving seed balls in a bird feeder. This ingenious creature has learned to prise open the lid of the feeder, and in doing so he or she displays far more intelligence than the most advanced AI software.
Where artificial intelligence is having a real social impact is where the algorithms a) can process absolutely vast amounts of input and b) are subject to constant improvement. Because we live in a capitalist society, the main driving forces behind the creation of AI with noticeable social impact are businesses and, more ominously, governments. This means that the early use cases for artificial intelligence include some extremely problematic, not to say dangerous, examples.
Positive social impacts of artificial intelligence
To start with a relatively benign example of the social impact of AI: like a lot of people, I mostly listen to music by streaming, and I want recommendations for new music to listen to. Nowadays, those recommendations typically come from an algorithm and not my friends. At a certain level, this works well. I click ‘like’ or ‘dislike’ on a piece of music, and based on the profile this creates for me, as well as data provided by millions of other users, the suggestions I get are nearly always interesting. I have definitely found new music that I love thanks to such calculations.
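As a rough illustration of the kind of calculation involved, here is a minimal sketch of a recommender based on user similarity. Everything in it – the listeners, the tracks, the similarity measure – is hypothetical; real streaming platforms use far more elaborate models trained on millions of profiles.

```python
# A toy user-similarity recommender. Everything here is hypothetical:
# real streaming services use far more elaborate models and data.
import math

# 1 = liked, -1 = disliked; tracks a user hasn't rated are absent
ratings = {
    "alice": {"track_a": 1, "track_b": 1, "track_c": -1},
    "bob":   {"track_a": 1, "track_b": 1, "track_d": 1},
    "carol": {"track_a": -1, "track_c": 1, "track_d": -1},
}

def similarity(u, v):
    """Cosine similarity over the tracks both users have rated."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[t] * v[t] for t in shared)
    norm = math.sqrt(sum(u[t] ** 2 for t in shared)) * \
           math.sqrt(sum(v[t] ** 2 for t in shared))
    return dot / norm

def recommend(user):
    """Score unrated tracks by similarity-weighted votes of other users."""
    scores = {}
    for other, theirs in ratings.items():
        if other == user:
            continue
        sim = similarity(ratings[user], theirs)
        for track, rating in theirs.items():
            if track not in ratings[user]:
                scores[track] = scores.get(track, 0.0) + sim * rating
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(recommend("alice"))  # [('track_d', 2.0)]
```

Notice that a track can only ever be suggested through other users’ ratings of it, which is one reason already-popular music dominates the recommendations.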
The downside is that the algorithm favours music that already has thousands of likes, so this kind of process reinforces the enormous disparity that exists between bands and composers who are pushed into the stratosphere of international attention and those who might be far better artists but who cannot get off the ground. Moreover, the platforms currently offering this type of tool (e.g. Spotify, YouTube Music) pay the artists a pittance.
Other potentially positive social impacts of artificial intelligence include the rapidly advancing ability of software to parse human sentences and extract their meaning.
As a slight aside, I’ve never been enthusiastic about using the Turing Test to define consciousness: while the use of language is intimately bound up with being able to think, the definition has always struck me as too narrow. Whatever the particular challenge, be it playing chess or Go, or stealing birdseed from a container, sooner or later the technology will be created that can accomplish it.
For an AI to hold a conversation in a fashion that makes it hard for a human to decide whether they are talking to another human or to the AI is tough. But we are within a decade of achieving this if the Turing conversation is capped at, say, thirty minutes. Siri can entertain my kids for about that length of time: they just say ‘hey Siri, tell me a joke’ and go from there.
Breaking down sentences and figuring out their meaning, in order to give an appropriate response, is something that chatbots are already effective at in narrow spheres.
In theory, this AI-driven technology could save our species a considerable amount of labour time. Every person whose current task involves speaking to someone in order to collect data could be freed to do something more interesting. And indeed, chatbots with artificial intelligence are everywhere: answering customer queries, tracking parcels, taking payments, and so on.
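Under the hood, these narrow-sphere bots often start from something as crude as intent matching. Here is a minimal sketch with entirely hypothetical intents and replies; production systems use trained classifiers rather than keyword sets.

```python
# A toy keyword-based intent matcher of the kind behind narrow
# customer-service bots. Intents and replies are hypothetical;
# production systems use trained classifiers, not keyword sets.
INTENTS = {
    "track_parcel": {"track", "parcel", "delivery", "package", "where"},
    "make_payment": {"pay", "payment", "invoice", "bill"},
    "opening_hours": {"open", "hours", "closing", "when"},
}

REPLIES = {
    "track_parcel": "Please give me your tracking number.",
    "make_payment": "I can take a card payment. What is the invoice number?",
    "opening_hours": "We are open 9am to 5pm, Monday to Friday.",
}

def classify(utterance):
    """Return the intent whose keyword set best overlaps the utterance."""
    words = set(utterance.lower().strip("?!. ").split())
    best, best_score = None, 0
    for intent, keywords in INTENTS.items():
        score = len(words & keywords)
        if score > best_score:
            best, best_score = intent, score
    return best

print(REPLIES.get(classify("Where is my parcel?"),
                  "Sorry, I did not understand that."))
# -> Please give me your tracking number.
```

The bot works only because the sphere is narrow: a handful of intents, each with a scripted response. Step outside them and it falls back to ‘Sorry, I did not understand that.’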
Harmful social impacts of artificial intelligence
One harmful societal consequence of artificial intelligence was evident long before Marx sat down to study how capitalism constantly replaces workers with new methods of production: new technology is never introduced so that workers can enjoy more leisure time. The social impact of introducing artificial intelligence into the work process ought to be that workers can put their slippers on and enjoy a cocktail and a new series on Netflix (recommended by AI, naturally). Managers, however, typically calculate the return on investment in chatbots by figuring out how much they will save on call centre costs, and the deployment of this kind of software is often associated with layoffs rather than liberation.
Another, more subtle but potentially deeper societal hazard arising from the automation of conversation is that very large companies, Google and IBM in particular, dominate the conversational AI market, with algorithms trained on as much of the corpus of human communication as they can get their hands on. But what if that corpus is male-centred, western-centred, biased against non-binary genders, and so on? Then the algorithms will produce results that perpetuate and even deepen those biases.
Then too, with only 20% of the technical staff working on conversational AI being women, there is an additional likelihood that the algorithms they build are gender biased. One glaring problem of racial bias in AI has already emerged in the US judicial system, where officers have a software tool that scores the likelihood that a person guilty of an offence will reoffend. That tool was twice as likely to incorrectly flag African American defendants as being at high risk of violent reoffending as it was white defendants.
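To make that statistic concrete: ‘incorrectly flag’ means a false positive, someone scored as high risk who did not in fact reoffend. Here is a minimal sketch of how the disparity is measured, with entirely made-up records; the real analysis (ProPublica’s 2016 study of the COMPAS tool) used thousands of cases.

```python
# The false positive rate is the share of people who did NOT reoffend
# but were scored high risk anyway. Records below are entirely made up.
records = [
    # (group, scored_high_risk, actually_reoffended)
    ("black", True,  False), ("black", True,  False),
    ("black", False, False), ("black", True,  True),
    ("white", True,  False), ("white", False, False),
    ("white", False, False), ("white", True,  True),
]

def false_positive_rate(group):
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for group in ("black", "white"):
    print(group, f"FPR = {false_positive_rate(group):.0%}")
# black FPR = 67%, white FPR = 33%: a 2:1 disparity of the kind
# reported for the COMPAS scoring tool.
```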
You only have to do an image search to see these biases in Google for yourself. Try searching ‘CEO’ and scrolling through the image results. In the west, women make up roughly 27% of CEOs, so about that proportion of the images should be women. If you see a lower proportion (at the time of writing, I took a screenshot and scored 20%), then the search results are reinforcing a bias in the underlying dataset of CEO images.
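If you want to score your own screenshot, the tally is simple arithmetic; the only subtlety is whether a small sample could show a low share just by chance. A minimal sketch, with hypothetical hand-scored labels, including an exact binomial check against the 27% baseline:

```python
# Tally a hand-scored screenshot of image results and ask how likely a
# share this low would be if results matched the 27% baseline.
# The labels below are hypothetical; score your own screenshot for real ones.
from math import comb

BASELINE = 0.27  # approximate share of women among CEOs in the west
labels = ["woman"] * 10 + ["man"] * 40  # hypothetical hand-scored results

n, k = len(labels), labels.count("woman")
print(f"observed share of women: {k / n:.0%} (baseline {BASELINE:.0%})")

# Exact one-sided binomial probability of k or fewer women in n results
p = sum(comb(n, i) * BASELINE**i * (1 - BASELINE)**(n - i)
        for i in range(k + 1))
print(f"probability of a share this low under the baseline: {p:.1%}")
```

A single screenshot is a small sample, so a check like this helps separate a real skew from noise.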
A much more clearly negative social impact of artificial intelligence is its application to facial recognition. The Chinese tech company Alibaba has trained an algorithm to identify ethnic minorities via facial recognition, specifically the Uighurs of the Xinjiang region, against whom China has been carrying out an oppressive campaign.
Artificial intelligence in warfare
Worse still is the social impact of the military application of artificial intelligence. In Philip K. Dick’s 1953 story Second Variety, robots developed by the UN to prevent a Soviet victory in the aftermath of a nuclear war end up overrunning humanity (in a much more interesting way than this plot summary suggests). This type of scenario, in which AI soldiers take military decisions, already exists in embryo, especially in drone technology and in munitions, with the US Army having issued a tender for its Cannon-Delivered Area Effects Munition (C-DAEM).
C-DAEM rounds will be fired without human intervention, based on technology similar to facial recognition but using lasers to identify targets matching the profiles of tanks. The tender calls for the munition to fly up to 60km, slow its descent with a parachute or similar means, and identify targets autonomously on the way down. And just as facial recognition software has its biases, these rounds will explode on vehicles that match the profiles in the data provided by the US military about the tanks it anticipates destroying. Pro-tip: don’t tie your Christmas tree to the roof of your car.

There is a campaign called Stop Killer Robots to ban fully autonomous weapons. One reason for supporting it is that even before the application of artificial intelligence to warfare, it has been difficult to hold mass murderers to account for war crimes. Imagine how much harder accountability becomes when it is a software algorithm’s decision that results in attacks on civilians.
What would a society run by artificial intelligence look like?
Artificial intelligence – in the form of software algorithms – is a new frontier for capitalism, and as always when businesses charge into a new space, all sorts of harmful consequences arise from short-term considerations. Many NGOs, trade unions and even some governments are therefore hurrying to catch up and urge regulation so that the more reckless companies are tamed. And that’s important. This is a sector that needs regulation, especially in regard to the creation of discriminatory algorithms.
But let’s run the timeline forward a long way in our imaginations. Suppose, perhaps through a breakthrough in quantum computing, self-adjusting algorithms (those capable of ‘machine learning’, as the industry jargon puts it) become vastly more sophisticated and complex than anything that exists today. This might not be so far away; it will surely have happened by 2100 at the latest. Then humans would be living alongside AI companions capable of performing all the tasks that we do: writing novels, composing music, inventing new jokes, creating vaccines for novel diseases, and so on.
What will those companions be like? In 1987, Iain M. Banks wrote the first of his magnificent Culture novels, envisaging a far future in which AI drones look down paternalistically on the frivolous humans they care for. While still full of drama and, indeed, wars, this far future is essentially a utopia. No one, except by choice, suffers poverty, hunger or homelessness. Mostly, the humans pursue the arts (as well as taking drugs, partying and having lots of sex). And it’s a plausible scenario: even now the wealth exists to feed, house, and provide free water and education to everyone on the planet.

By 2100, alongside AI companions, we will be able to realise something like Banks’s Culture, provided we meet one condition. Unfortunately, while it’s a simple condition, it is a hard one to achieve. The benign AI scenario requires that humans themselves be free of any motivation to destroy each other; free of prejudice in the structures through which society is administered; and free of discrimination in the cultural data (such as the entire contents of the internet) that the AIs learn from. To lay the foundation for a harmonious relationship with massively powerful AI, we first have to revolutionise our own existence.