The premise is a simple one – a robot gains sentience, deems its human masters inferior, and sounds a binary war cry, overthrowing mankind’s yoke. It is a story that has been told umpteen times in all manner of sci-fi, born, perhaps, from the kernel that was Frankenstein or even Lucifer – the creation challenging its creator. But whatever issues we, as a species, may be dealing with by telling these stories, the advent of a robot insurrection still largely seems like a whole lot of science fiction. Or is it?
With computers the size of our palms now our constant companions, technology on the whole only stands to get smarter and our lives, well, our lives are going to get a lot more complicated. This may seem counterintuitive at first but bear with me.
The rise of the machines, if you will, is two-pronged: on the one hand you have the technological singularity of an Artificial Intelligence (AI) being ‘born’ and on the other, you have less fanciful, more tangible applications of smarter machines.
The latter is something that has been in our midst since the industrial revolution and so warrants our attention first. Ever since an object other than our own two hands was used to perform a task, we have been forging a double-edged sword – something that would aid us but also have the potential to harm. Increasing mechanization and automation have built the modern world but as byproducts of all these mod cons, we have compromised the natural world, leading to a future where the climate may strike before the robots get a chance. But forecasts of doom aside, a ‘smart’ world is all but assured, with glimpses of it now heralding the Fourth Industrial Revolution.
In the next 20 years, technological feats that may appear mere fancy will be with us – in our homes, at work, maybe even in our bodies. Drones, for example, are already (painfully) among us and this technology will only ‘improve’, with greater autonomy and miniaturization leading to such applications as drone swarms and smaller, more personalized drones capable of more than just buzzing around. Amazon has been testing drone delivery for a while now, while passenger-carrying drones have been tested in the skies above Dubai.
The drone’s cousin, the autonomous car, is already in its beta phase and it too will see rapid advancement, with Tesla and Waymo clocking millions of driverless miles, slowly getting us to the point where your next Uber will be driverless.
Your home, of course, is the most obvious place where robots and AI are going to make their presence felt. In concert with the Internet of Things, homes of the future may be autonomously run, with domestic units taking care of the daily chores while a basic AI manages these as well as the house’s utilities.
Of course, the robots of the next two decades aren’t just going to be basic geometric shapes, puttering about your living room, cleaning up after you. No, they are going to be the workhorses of the future, robotic assistants that will be both brain and brawn. Every couple of months or so, Boston Dynamics, the famed American robotics company, manages to scare humanity into an existential crisis with videos of their creations running ‘amok’. But their robots, perhaps the most advanced out there (or at least that DARPA is willing to share), are exactly the kind we’ll be seeing more of – replacing animals or even humans in many roles. Their US military experiment, BigDog, was the equivalent of a robotic mule – an all-terrain vehicle with legs. Of course, being ‘too loud for the battlefield’ meant that BigDog went the way of the Dodo, but its successor, the eerily lifelike Spot, has all the makings of a personal assistant, more Jeeves than pet.
Of course, these are but a taste of things to come with countless other areas such as health, biotech, and communications all benefitting from automation and the greater use of robots. But is there a downside to all of this or even a dark side?
A question preys on the minds of industry owners, workers, and analysts alike: does more technology mean fewer jobs?
Technologists and futurists are of the thinking that the very nature of industry will be dominated by robots, with Kevin Kelly, founder of Wired, saying, “Most of the things that are going to be produced are going to be made by robots and automation.” According to projections by the McKinsey Global Institute (MGI), even if automation is conservatively adopted, 15% of the global workforce, or around 400 million workers, will be out of jobs by 2030. But in the same breath, the projections also state that said automation will create entirely new jobs, with around 8–9% of the workforce engaged in these new fields. So again, a double-edged sword, but one that will not see mass unemployment as once feared but mass displacement. Your average blue-collar worker may no longer be needed on the assembly line but they will be needed to manage the assembly line.
What is becoming clearer now is that while automation will continuously increase, the robots, no matter how sophisticated, will nonetheless be smarter hammers and tongs; they are far from possessing the creativity or the judgment to, say, design haute couture, make a gourmet meal, or argue the law. These are skills that require training, talent, ambition, dedication – all qualities that are perhaps singular to human beings. But this begs the question: for how long?
Thanks to the likes of Elon Musk, Stephen Hawking, and a host of experts from the field, the story now goes that the single, gravest threat that we as a species will face is AI. Musk has gone on record numerous times to sound the alarm, to bring attention to humanity’s “biggest existential threat”. But is this a case of needless alarmism or, as some put it, ‘technological pessimism’?
The futurist Ray Kurzweil has predicted that at some point within the next two to three decades a computer will pass the Turing Test, meaning that it will exhibit intelligent behaviour equivalent to, or indistinguishable from, a human’s.
Even though AI is one of the hottest properties out there, with everyone trying to cash in on the field, it is still largely shrouded in maybes. Google, one of the major players in the field, has developed some human-confounding AIs, AlphaGo for example, but its victory over Go master Ke Jie taught us a nuance about the possible nature of an AI’s ‘thought process’. What we might traditionally call a computer ‘thinking’ was, in the case of AlphaGo, a combination of machine learning and neural networks – in lay terms, learning from experience. This meant that the AI got better over time by learning from each game it played, as opposed to the instinctual or intuitive flair for the game that players or masters have. And Go is a game, governed by seemingly straightforward principles – what about something less cerebral? Something that is influenced by numerous factors and requires a combination of skills? How will AIs fare then? Researchers in the autonomous automobile industry, the very same that has logged millions of driverless miles, are still scratching their heads, for while driving may seem simple enough, AIs would flat out fail a driving test. They may have gotten the hang of the mechanics of it but we all know that’s only half the battle. Autonomous cars have a problem interacting with the human element on the road, which is precisely what they will encounter. In 2018, an autonomous car hit and killed a pedestrian. While this does not mean that machines may not be capable of higher brain functions one day – separating right from wrong, managing the daily commute – it certainly does point to considerable puzzlement as to the state of AI and its possible future. But that does not mean that the concern that Musk et al express isn’t well-founded, returning us to the argument of a double-edged sword.
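For readers curious what ‘learning from experience’ actually looks like in code, here is a minimal, hedged sketch. It is a toy two-armed bandit, not AlphaGo’s architecture (which combined deep neural networks with tree search); the action names, win rates, and parameters are invented for illustration. The point it shares with AlphaGo is the principle: the agent starts with no flair for the game, tries actions, observes outcomes, and nudges its value estimates toward what experience has shown.

```python
import random

random.seed(0)

# Hidden from the agent: how often each of two actions actually pays off.
TRUE_WIN_RATES = [0.3, 0.7]

estimates = [0.0, 0.0]  # the agent's learned value of each action
counts = [0, 0]         # how many times each action has been tried

def choose_action(epsilon=0.1):
    # Mostly exploit the best-known action; occasionally explore.
    if random.random() < epsilon:
        return random.randrange(2)
    return max(range(2), key=lambda a: estimates[a])

for _ in range(5000):
    action = choose_action()
    reward = 1 if random.random() < TRUE_WIN_RATES[action] else 0
    counts[action] += 1
    # Incremental average: each 'game' nudges the estimate toward experience.
    estimates[action] += (reward - estimates[action]) / counts[action]

best = max(range(2), key=lambda a: estimates[a])
print(best, [round(e, 2) for e in estimates])
```

After thousands of trials the agent’s estimates approach the hidden win rates and it reliably favours the better action – improvement through play, with no intuition involved.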
This concern is not a typical sci-fi one – the AI deeming us inferior, marking us for extinction. It is much simpler than that. Max Tegmark, a physics professor at MIT, said in an interview, “When we got fire and messed up with it, we invented the fire extinguisher. When we got cars and messed up, we invented the seat belt, airbag, and traffic light. But with nuclear weapons and A.I., we don’t want to learn from our mistakes. We want to plan.”
In 1983, malfunctioning Soviet early-warning systems twice reported that the US had launched a preemptive missile strike. If not for the intervention of Stanislav Petrov, a Soviet air defence officer who insisted that the computers were in error, we would be living in a very different world right now. The computers, in this case, were quite simple compared to the home PCs of today, but imagine if they had had AIs on board, similarly malfunctioning. It could be assumed that even if wrong, the AI would not recognize it as such or would be resistant to human intervention or entreaties – what then?
It is, as they say, better to err on the side of caution, and if industry leaders are saying that it is paramount to approach AI with as much caution as possible – to study, regulate, and even democratize it – then it’s a fair bet to listen, loud and clear.
A. Rahim Khan is an Islamabad-based journalist who has previously worked for Al Jazeera, Dawn, Express Tribune, Hello!, Pique and Border Movement, largely covering science, culture and the arts.