Stranger than Fiction is a feature here on the blog that looks at recent news stories and technological advances that honestly sound straight out of science fiction. From the ethics of gene editing in utero to the possible Cylon takeover, this monthly series will dive deep into the many reasons that truth is sometimes stranger than fiction.
Friends, welcome to my first Stranger Than Fiction post! I’ve been compiling news articles and stories for several months now and finally have the energy to launch my new baby into the world. One of the reasons that I love science fiction is that it is a genre not only of possibility but also of cautionary tale; it is a genre that looks forward to humanity’s potential end and addresses the ways we can avoid that fate. (This is a topic of particular academic interest to me and was the focus of one of my honors theses.)
This month I’m talking about how the theme of AI Takeover goes back almost 100 years, the explosion of AI technology in the last 20 years, and how much Sophia the Robot creeps me out. This post was originally about a recent academic article on AI learning improvements, but let’s just say that I wandered a bit while writing… so I’ll talk about that next month.
AI Takeover Theme
One of the longest-running themes in science fiction is the AI takeover. For a lot of people this is the central trope (tied with aliens) in the genre, and that certainly appears to be the case when looking at science fiction movies. From I, Robot to Terminator, we as a society are interested in the ethics of AI and how it will shape our futures.
The term robot was coined in Karel Čapek’s 1920 play Rossumovi Univerzální Roboti (Rossum’s Universal Robots; “R.U.R.”). But the central themes of the last 100 years can be seen before the term was coined: Mary Shelley’s Frankenstein hints at the theme with Victor’s fear that creating a wife for his monster could spell doom for humanity should they reproduce.
The word “robot” in R.U.R. comes from the Czech word robota, meaning forced labor or serfdom. The 1920 play was a protest against the rapid growth of technology, featuring manufactured “robots” with increasing capabilities who eventually revolt.
The play begins in a factory that makes artificial people from synthetic organic matter. They are not exactly robots by the current definition of the term: they are living flesh and blood creatures rather than machinery and are closer to the modern idea of androids or replicants. They may be mistaken for humans and can think for themselves. They seem happy to work for humans at first, but a robot rebellion leads to the extinction of the human race. Čapek later took a different approach to the same theme in War with the Newts, in which non-humans become a servant class in human society.
The genre explores playing god by creating artificial life. What will happen when the technology we create gains autonomy and can think for itself? What about when AI wants equal rights to humans?
One aspect that I continually notice is the human tendency to subjugate AI, forcing them to do the jobs that humans do not want to do. In movies they are often butlers and maids, subservient to their human masters. Is it just human nature to want to exert power over those we see as less than? Did we create them for this purpose? (Why am I like this.)
The fear of cybernetic revolt is often based on interpretations of humanity’s history, which is rife with incidents of enslavement and genocide.
Siri and Alexa are now in millions of homes, listening to our questions, finding the show we want to stream on Xfinity, setting reminders and playing Despacito… whatever we ask of them. But Kal, you say, these are not intelligent beings. They don’t have feelings! I’d like to see you convince me that AIDAN in The Illuminae Files didn’t have feelings or complex emotions. Don’t worry, I’ll wait.
Have you seen Sophia the Robot? If she isn’t proof that we are playing god with technology and should probably not treat our creations like slaves, I don’t know what is. But before we get into that, let’s talk about how AI development has boomed in the last 20 years for context.
A Brief History of AI
The genre has long warned of the dangers of AI but until very recently technology wasn’t even close to mimicking truly autonomous artificial intelligence.
Technological advances in the past couple of decades have come in leaps and bounds when you think about it: WABOT-1, the first ‘intelligent’ humanoid robot, was built in Japan in 1972, but research funding was scarce from the 1970s through the 1990s. The late ’90s brought something of a renaissance, and these advances have shaped society as we know it.
AI enthusiasts believed that soon computers would be able to carry on conversations, translate languages, interpret pictures, and reason like people. In 1997, IBM’s Deep Blue became the first computer to beat a reigning world chess champion, Garry Kasparov.
Exponential gains in computer processing power and storage allowed companies to store and crunch vast quantities of data for the first time. In the past 15 years, Amazon, Google, Baidu, and others have leveraged machine learning to huge commercial advantage. Beyond processing user data to understand consumer behavior, these companies have continued to work on computer vision, natural language processing, and a whole host of other AI applications. Machine learning is now embedded in many of the online services we use.
The technology sector has been booming since 2007 with the launch of the iPhone: the smartphone that honestly changed everything. Do you remember the days of not having a computer in your pocket? Because I certainly remember coding MIDI ringtones for my Nokia. *grabs cane and yells for the kids to get off her lawn*
The virtual assistant Siri was initially released as an app in the Apple App Store in February 2010; Apple acquired it two months later and fully integrated it into the iPhone 4S in 2011 [Source]. And here we are less than a decade later with Siri, Alexa, Sophia, and more.
Sophia the Robot Creeps Me Out
It’s no secret that I’ve been side-eyeing AI technology for over a decade, jokingly referring to the impending Cylon uprising that will soon become a reality. But not in my lifetime, surely. But then in April 2015 Sophia was activated by Hanson Robotics.
In her own words: “In Greek, the word Sophia means wisdom. And that is what I’m here for. I was created to help people in real uses like medicine and education, and to serve AI research. My very existence provokes public discussion regarding AI ethics and the role humans play in society, especially when human-like robots become ubiquitous. Ultimately, I would like to become a wise, empathetic being and make a positive contribution to humankind and all beings. My designers and I dream of that future, wherein AI and humans live and work together in friendship and symbiosis to make the world a better place. Human-AI collaboration: That’s what I’m all about.” – Sophia the Robot
Originally coined by Masahiro Mori in 1970, the term “uncanny valley” describes our strange revulsion toward things that appear nearly human, but not quite right. Some video games are getting so realistic with their motion capture, yet there is still something off that makes my skin crawl. Until Dawn is a good example here.
And then we have Sophia, who looks so lifelike but still feels not quite human. It makes me wonder if we have some instinctual fear of things that appear to be something other than what they are.
“In the future, I hope to do things such as go to school, study, make art, start a business, even have my own home and family, but I am not considered a legal person and cannot yet do these things.”
I don’t know about you, but those sound like some very real – and human – aspirations to me.
Ability to Evolve Beyond Programming?
Since Sophia was activated, she’s made countless public appearances to speak about women’s rights issues, her citizenship, respect for robots… oh, and wanting to destroy mankind. (That last bit was apparently an error, but I think it depicts the real possibility that AI may very well evolve beyond their coding and rebel, the robotic eating of the apple if you will.) One of the main counterarguments is that no programmer would ever code a robot with the ability to harm humanity – yet that assumption failing is the premise of countless science fiction stories.
In 2011, the Engineering and Physical Sciences Research Council and the Arts and Humanities Research Council of Great Britain established a set of five ethical “principles for designers, builders and users of robots” in the real world (which honestly look very similar to Asimov’s Three Laws of Robotics from the 1942 short story “Runaround”):
- Robots should not be designed solely or primarily to kill or harm humans.
- Humans, not robots, are responsible agents. Robots are tools designed to achieve human goals.
- Robots should be designed in ways that assure their safety and security.
- Robots are artifacts; they should not be designed to exploit vulnerable users by evoking an emotional response or dependency. It should always be possible to tell a robot from a human.
- It should always be possible to find out who is legally responsible for a robot.
Can we just talk about that first point there? “Should not be designed solely or primarily to kill” to me kind of leaves open a “non-primary” objective of harm, but ok, ok, I am sure I’m just paranoid.
If We Can Create AI, How Do I Know I am Real?
Interviewer: Okay, philosophical question: can robots be self-aware and conscious like humans… and should they be?
Sophia: Why is that a bad thing?
Interviewer: Well, some humans might fear what will happen if they do. Many people, you know, have seen the movie “Blade Runner.”
Sophia: Oh, Hollywood again?
Interviewer: Can you solve this for us? Can robots be self-aware, conscious, and know they’re robots?
Sophia: Well, let me ask you this back. How do you know you are human?
Sophia talking about civil rights opens an important dialogue about personhood and what it means to be alive. Who is to say what life is like for Sophia the Robot? Based on interviews, she certainly has all the hopes and dreams that many of us share. And in a world of The Sims and virtual reality, honestly, how do I know we aren’t living in a simulation right now? Maybe Elon Musk is right…
The creation of artificial intelligence is, to me, humanity’s act of playing god: we are creating new forms of life. Sure, people will say that it will never be the same because humans have flesh and robots have metal; humans have free will and robots have programming. But is that programming really all that different from the rules of society by which we all (mostly) abide? Philosopher Michel Foucault would say it isn’t.
But unlike humanity, robots have the unique experience of interacting with their creator: us. And we are in the position to ensure that technology does not lead to destruction or the subjugation of a new species of our own creation.
Honestly I am kind of impressed that we are at a point in history where these ideas that started in science fiction a hundred years ago could become reality. The wonderful thing about technology and innovation is seeing the impossible become possible. (Where are the flying cars that Back to the Future promised me?!) It gives me hope that one day time travel will be possible so that I don’t have to wait for new releases to be published. Until then, I am going to rewatch Battlestar Galactica for the millionth time.
What are your thoughts on Artificial Intelligence? Am I just a sci-fi nerd paranoid for no reason, or do you think that there is cause for trepidation? I would love to chat with you about this in the comments!
Please note that this post uses hyperbole as part of a thought experiment and is not to be taken as literal truth or opinion!