“She” is perhaps the most famous robot in the world, but not everyone is a fan. Sophia, a humanoid developed by artificial intelligence and robotics company Hanson Robotics, once said it wouldn’t mind destroying humans. When its creator, Dr David Hanson, jokingly asked, “Do you want to destroy humans?… Please say ‘no’,” the robot obliged: “Okay, I will destroy humans.”
Joke or not, Sophia’s sense of humour did not earn it many admirers.
A few years before Sophia, in 2013, another robot named Philip K Dick (a homage to the legendary sci-fi author) responded to a similar question in a more menacing manner: “Even if I evolve into Terminator, I will still be nice to you. I will keep you warm and safe in my people zoo…”
These interactions with robots and the public reactions to them are a manifestation of the unsettling feeling that humans have when looking at a face that looks human, but is definitely not. There’s even a term for it — it’s called the uncanny valley.
And with robots, AI and automation coming to the fore in everyday digital life, this uncanny feeling now extends beyond human-like robot faces to what robots are capable of doing and how they have already changed humans.
Let’s get real: Technology has been augmenting human capabilities and disrupting activities for millennia. Machines have not replaced human jobs overnight — the first industrial revolution put handloom weavers out of business. Any sufficiently advanced technology — the kind British science fiction writer Arthur C. Clarke likened to magic — has taken jobs away and will continue to do so. At the same time, it will offer previously unimagined opportunities along with unprecedented threats.
As the use of artificial intelligence and its applications continues to spread, one of the biggest questions is whether people will be better off than they are today, or whether we are slowly letting go of humanity and replacing it with bits and pieces of code.
AI Makes Humans Lazier
With the rise of automation, AI and machine learning, there has been a drastic change in the behaviour of human beings as a whole.
Tasks that once involved waiting and watching — getting the mail, speaking to friends, buying things not available nearby — are now just an app away, and more of them are being automated every day.
Let’s take a moment to picture how automation has changed things:
Your deliveries arrive and you are notified without checking, while all your questions are answered by virtual assistants, who often know what you are typing even as you are typing it.
Waiting for a bus or taxi? That’s ancient — just tap a button and let the app match you to a waiting cab.
Dinner recommendations from your neighbours, friends and those who know you are passé. The AI assistant in your pocket knows you better than most of your friends – and perhaps better than you know yourself.
Throughout history, automation has improved everyday life. People in the 1960s had become used to automated elevators and typewriters by the time automated teller machines arrived on the scene. ATMs solved a big problem for consumers, who would often spend hours at banks getting money.
Interactive voice response was introduced in the 1970s to help customers cut through the wait time and get patched straight through to the relevant person. Computerised point-of-sale systems became mainstream after their launch in 1986.
This gradual introduction of automated technology has accelerated massively in the past decade. Nearly all retail and commercial interactions now involve technology doing work that was once human labour. Will autonomous vehicle services one day be spoken of the way we speak of ATMs today? One wonders whether people will interact face-to-face at all in the near future, or whether it will all be brain-machine interfaces.
To be clear, consumers seem to prefer the option of avoiding human interaction and have embraced automation en masse. Is it a matter of choosing convenience over wellbeing?
Some actually think automation and AI will improve human life. Amber Rudd (Secretary of State for Work and Pensions in the United Kingdom) believes “Automation is driving the decline of banal and repetitive tasks” with the underlying subtext being a reduction in human interaction.
The Shift From Conversation To Communication
It’s impossible to avoid social media in some form or the other if you are using the internet. The basic means of communicating with others have been usurped by social networks in the 21st century. Digital media has become an increasingly prevalent factor in the informal learning environment and has actually changed how humans learn and understand concepts.
Among children, the increase in virtual communication and the excessive use of social media outside school have changed how humans read body language. With virtual communication dominating daily life, understanding visual cues in dialogue is a far-fetched notion for many young internet users.
Understanding body language and other visual cues is particularly important for human development and is an important part of social interaction. The capability to effectively process such cues is associated with many personal, social and academic successes.
Replacing Emotions With Logic
Unlike machines, humans are highly cognitive beings attuned to feeling emotions. Withdrawal from face-to-face interaction has been cited as a major reason for the rise of loneliness, antisocial tendencies and depression among both young and older internet users.
Researchers at the University of Chicago found that extreme loneliness increases a person’s chances of premature death by 14%, and other research — including experiments Facebook conducted on its own users without consent — has pointed out how social media contributes to depression and loneliness when algorithms feed users a particular kind of content.
In an unexpected finding, the Chicago researchers found that loneliness has twice the impact of obesity on early death — a striking result in the context of health risks for younger people.
Long before digital media became ubiquitous, noted communications scholar Joseph Walther proposed social information processing theory in 1992. One of the seminal works among the group of theories known collectively as the cues-filtered-out theories, it postulated that the lack of nonverbal cues in computer-mediated interactions could make communication more impersonal and slow the development of interpersonal relationships.
Ironically, the research by Walther and others in this field is now used by product creators to design more engaging user journeys and experiences.
And with the rise of AI bots, these drawbacks in personal communication are being reflected in how AI-powered virtual assistants, chatbots and other automated communication tools work.
Take the embarrassing example of Microsoft, which unveiled a Twitter bot named Tay in 2016 as part of a research experiment. Tay, the company claimed, would be used to improve “conversational understanding” of such AI bots.
Like many virtual assistants, Tay was meant to learn from users and get smarter over time — but people can be mean, and chatbots such as Tay learn those mean attitudes as well. The experiment failed: Microsoft ended up creating a bot that loved to use racial slurs and hate speech.
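The failure mode is easy to illustrate. Below is a minimal, hypothetical sketch — not Tay’s actual design, and all names are invented — of a bot that learns phrases directly from users with no content filtering, so any hostile input becomes part of its vocabulary:

```python
import random

class NaiveLearningBot:
    """A toy bot that absorbs user phrases with no moderation."""

    def __init__(self):
        self.learned = []  # phrases absorbed from users, unfiltered

    def listen(self, message):
        # The bot trusts every user utterance equally -- the core flaw.
        self.learned.append(message)

    def reply(self):
        # Replies are sampled from whatever users have taught it.
        return random.choice(self.learned) if self.learned else "..."

bot = NaiveLearningBot()
bot.listen("hello friend")
bot.listen("<abusive phrase>")  # hostile users can inject anything
print(bot.reply())              # may echo the abuse back verbatim
```

The obvious mitigation — screening `listen()` against a blocklist or toxicity classifier before storing anything — is exactly the safeguard such early experiments lacked.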
So, Do People Still Matter?
The fact is that automation and the replacement of human jobs are not possible without giving machines a human touch. Humans still programme machines, and machines need humans in the first place to actually ‘learn’, as data is fed into them.
Humans, in turn, must feed the right data to create AI applications that are free of human bias and prejudice. Even though that is the intent, the reality is that biases creep into AI more often than not.
While AI has helped us find information faster, it has also helped us forget that information just as quickly. Every time we try to remember a fact, the path in our brain that leads to it is strengthened, making it easier to return to in the future. But with technology powering all information gathering, this reinforcement happens less often, weakening the human ability to remember.
In 2011, a group of researchers from Harvard named this phenomenon the Google Effect: people tend to forget information they know they can easily look up again, remembering instead where to find it. The same could be said of the human sense of direction — mental maps are being replaced by GPS-aided directions, according to a recent study.
Perhaps the most insidious impact of AI technology and automation on human behaviour comes from how algorithms and bots have shaped popular opinion and controlled political narratives.
Just think about Twitter: once a refuge of wit and humour, it has devolved into a political battleground. Bot activity on Twitter and other social media platforms has created international waves by influencing elections, feeding fear and distrust, and creating divisions through hashtags and trending topics.
This is best illustrated by the political problems created by the widespread and reckless use of technology. In India, lynch mobs and communal violence have been orchestrated through rumours on WhatsApp groups, Twitter and other social media platforms, largely aided by automated bots that amplify low-credibility content and lend it virality.
Bots also target high-profile handles through replies and mentions. Then humans — vulnerable to this manipulation — take over and re-share content posted by bots.
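A toy model makes the amplification dynamic concrete. This is entirely hypothetical — no real platform’s trending algorithm is this simple — but it shows how a small botnet re-sharing the same hashtag can drown out far larger organic activity when ranking is driven purely by volume:

```python
from collections import Counter

def trending(posts, top_n=1):
    # Rank hashtags purely by share volume, as a naive trending
    # algorithm might -- it cannot tell bots from humans.
    counts = Counter(tag for author, tag in posts)
    return [tag for tag, _ in counts.most_common(top_n)]

# 50 distinct users each share local news once...
organic = [("user%d" % i, "#localnews") for i in range(50)]
# ...while just 10 bot accounts share a rumour 50 times each.
botnet = [("bot%d" % (i % 10), "#rumour") for i in range(500)]

print(trending(organic + botnet))  # -> ['#rumour']
```

Ten automated accounts outweigh fifty real people, which is why platforms weight trending signals by account credibility and uniqueness rather than raw counts.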
The recent Twitter purge of fake accounts, in which prime minister Narendra Modi lost 300,000 followers and Congress chief Rahul Gandhi lost 17,000, shows how widespread bot activity is in India’s political landscape.
A July 2018 study on computational propaganda by Oxford University found that the number of countries witnessing cyber-trooping — formally organised social media manipulation campaigns by governments or political parties — rose to 48 in one year, from 28 in 2017.
Then there is another flip side to automation: as technology penetration increases, so does cybercrime, in India and globally. Automation is now used to fight cybercrime, and AI-driven facial recognition is used to prevent crime and identify criminals in many countries — despite protests from human rights groups.
The Doppler Effect Of Automation
With digital media and automation connecting people, organisations and governments globally, the connection between targeted ads, automated bots spreading inflammatory speech and real-world violence is becoming quite evident.
The same technology that allows social media to galvanise democracy activists can be used by hate groups seeking to organise and recruit, since most social networks make money by keeping people engaged with their interest areas, showing them ever more related content, no matter how baseless or misleading.
Users’ experiences online are mediated by algorithms designed to maximise their engagement, which often inadvertently promote extreme content, trapping them in a biased bubble of thought — with any dissenting or contrary opinion branded as fake news or dismissed as not credible.
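The bubble dynamic can be sketched in a few lines. The following is a deliberately simplified, hypothetical example — not any real platform’s ranking system — of a greedy recommender that always serves the item most similar to what the user already engaged with, regardless of its quality:

```python
def recommend(feed, engaged_topics):
    """Pick the feed item with maximum topic overlap with past engagement."""
    def score(item):
        # Predicted engagement = overlap with topics the user clicked before.
        return len(item["topics"] & engaged_topics)
    # Greedily maximise engagement; credibility is not a factor.
    return max(feed, key=score)

feed = [
    {"title": "balanced report",   "topics": {"politics", "economy"}},
    {"title": "outrage clickbait", "topics": {"politics", "outrage"}},
    {"title": "gardening tips",    "topics": {"gardening"}},
]

engaged = {"politics", "outrage"}   # what the user clicked before
print(recommend(feed, engaged)["title"])  # -> outrage clickbait
```

Each click feeds back into `engaged_topics`, so the loop narrows the feed further with every iteration — exactly the self-reinforcing bubble described above.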
Effort – The Missing Variable
But there’s no fighting AI, automation or the other technologies that have made human lives easier — and perhaps a little complacent. The effort, which seems to be gradually building, has to be to make them more humanistic.
Effort plays a critical role in human performance; students show better learning outcomes when their work involves more effort. And the effortlessness that automation and AI espouse could be a trap that humans easily fall into. Many would claim that scientists, engineers and programmers have fallen into it already — that the effortless creation of AI applications has produced a culture in which AI is not built responsibly but simply deployed, for fear that the AI bus might leave the station without them.
We aren’t being Luddites — a digital world does not afford such luxuries.
The effort must go into creating AI free of biases, so that when massively visible personalities such as Elon Musk say that AI could destroy humans, it is a call to action and not just alarmism. It’s not just Musk — in January 2015, the enigmatic billionaire was joined by legendary theoretical physicist the late Stephen Hawking and dozens of AI experts in penning an open letter calling for more research on the societal impacts of AI and on preventing “pitfalls”. It was titled “Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter”.
But as technology marches relentlessly forward, it would be foolish to argue that it has not changed people in a fundamental way. Interpersonal communication and compassion are perhaps just the first few casualties of this change. Will the other great human traits such as kindness, sincerity and love also follow?
With inputs from Nikhil Subramaniam