How AI compares to its science-fiction portrayals

By Abby Shifley

Spring | 2019

What is the first thing that comes to mind when you hear the term “artificial intelligence”? Is it the apocalypse, or perhaps a jarring episode of Black Mirror? Or maybe self-driving cars, which are becoming more viable every day.

Since the late 1800s, the era of early science fiction that steampunk now looks back on, AI has been a firmly rooted theme in popular culture, typically taking the form of terrifying, dystopian worlds. There have been many warnings of the dangers associated with creating sentient machines, not only in popular culture, but also from experts in the field of AI. But how real is the danger, and how much of it is just fantasy?

“In the field of Computer Science, the term Artificial Intelligence (AI) can be defined as a much broader term than what many people are thinking these days,” Computer Science Professor John Kwan “Jake” Lee explained.

Putting aside the pop culture references, Lee claims many people’s understanding of AI covers only a subdomain of the field called machine learning. This is the technology that drives a car autonomously or plays a computer game, but AI has a more expansive definition.

Lee’s definition of AI is “a field that considers (computer) machines as a tool that perceives its environment and takes actions to maximize its chance of achieving the pre-defined goals.”

In many instances of popular culture, AI is more grimly defined as a “nightmare.”

“A lot of these pop culture instances are that, kind of, ‘robots as nightmare,’ like robots are going to take things away from us. Like the Terminator, or replicants in the Blade Runner and things like that that are trying to trick us or kill us,” Manuscripts and Outreach Archivist for the Browne Popular Culture Library Steve Ammidown said.

“Whereas in sort of real life, artificial intelligence is a little more benign,” he said with a chuckle.

That is how Lee would describe AI: as a tool with the human-like cognitive ability to solve problems. This mimicry of the human brain is extremely hard to achieve; however, machines are becoming more advanced every day, and in some instances, AI has been besting humans for a while.

Lee told the story of AlphaGo and AlphaGo Zero — two machines playing the game of Go but learning it in very different ways.

Go is an ancient Chinese board game vastly more intricate than chess. AlphaGo learned how to play by analyzing the moves of human players, and eventually went on to beat the world’s top Go champions.

AlphaGo Zero has a similar story, but it needed no help from human players at all. It was given only the rules of the game, and in significantly less time than AlphaGo, it surpassed every human champion and even defeated the original AlphaGo.
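In broad strokes, that self-play process can be pictured with a toy sketch. The Python below is purely illustrative and not how AlphaGo Zero actually works: it learns a much simpler game (Nim, where players alternately take one to three stones and taking the last stone wins) from nothing but the rules and its own games, using a lookup table where the real system used deep neural networks and tree search.

    import random
    from collections import defaultdict

    values = defaultdict(float)   # (stones_left, stones_taken) -> learned value
    counts = defaultdict(int)

    def choose(stones, explore=0.1):
        moves = [t for t in (1, 2, 3) if t <= stones]
        if random.random() < explore:
            return random.choice(moves)                        # occasionally try something new
        return max(moves, key=lambda t: values[(stones, t)])   # otherwise play the best known move

    def self_play():
        stones, history, player = 15, [], 0
        while stones > 0:
            take = choose(stones)
            history.append((player, stones, take))
            stones -= take
            player = 1 - player
        winner = 1 - player  # whoever took the last stone wins
        for who, s, t in history:
            reward = 1.0 if who == winner else -1.0
            counts[(s, t)] += 1
            values[(s, t)] += (reward - values[(s, t)]) / counts[(s, t)]  # running average of outcomes

    for _ in range(50_000):
        self_play()

    # After enough self-play, the learned policy tends to rediscover the
    # known winning strategy: leave your opponent a multiple of 4 stones.
    print(max((1, 2, 3), key=lambda t: values[(15, t)]))  # usually prints 3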

“If we define ‘being smart’ as being able to memorize a lot of information and calculate something fast, then the AI is definitely much smarter than a human,” Lee speculated. “If we define being smart as being intelligent and having the ability to apply the intelligence to complicated situations, then I personally think human [sic] is smarter.”

Understanding natural language is a very complicated task. Not only does the machine have to process the standard meaning of each word, but it also has to grasp what the tone of each phrase conveys.

Before Watson, the IBM supercomputer, was developed, Lee said many AI experts believed this level of comprehension in a machine was impossible. In a special exhibition match on Jeopardy!, grand champions Ken Jennings and Brad Rutter faced off against Watson. The AI parsed the natural language of each clue and answered enough questions correctly to win the $1 million grand prize.

Just a side note: Trebek, who was recently diagnosed with pancreatic cancer, received a get-well-soon card from the supercomputer.

Although the technology faces a few obstacles — such as developing the ability to process complex scenarios — it is moving fast. Perhaps too fast.

Lee explained that the social implications of AI are difficult to anticipate precisely because the technology is moving so quickly.

“For example, AI is leading automation which is already changing the nature of employment and working condition. Are we ready to be adapted to this? I personally do not think so,” Lee expounded. “What if a decision of an AI was biased, incorrect, or unfair in our society because there was some uncertainty in the data that was given to AI? Who will be responsible for the decision?”

Human rights are also a part of society threatened by AI. Lee continued, “If an AI is monitoring what we do at all the times, what about our human rights?”

Additionally, the technology will always be susceptible to errors, and Lee warned that introducing AI into core infrastructure such as hospitals or power grids is risky.

Another issue in the field of AI is how to make ethically complex decision-making programmable. These ethical questions have long been raised by science fiction writers like Isaac Asimov, author of I, Robot, who introduced the Three Laws of Robotics in his works:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

“I think Asimov was the one who, sort of, set these guidelines that I think a lot of every-day scientists have tried to stick to — the idea that robots should not do harm,” Ammidown said. “And you look at self-driving cars and one of the biggest things with those is always the ethical aspect.”

Ammidown brought up a classic ethical dilemma called the trolley problem, in which a person operates the lever that switches the tracks of a runaway trolley. The operator must decide whether to direct the trolley toward one person tied to the tracks or toward five.

Asimov created an ethical framework that corresponded with the work of real scientists, Ammidown said. Lee was slightly more skeptical but agreed that ethics have to be built into AI.

“This is from fiction,” Lee said dismissively regarding Asimov’s three laws. “But in terms of technical implementation, it is possible to put constraint, but the constraint — defining the constraint will be very difficult because there are so many different cases and scenarios.”

Lee continued, “A form of the three laws will have to be used in the future. There are many different cases that can come up. It is hard to define what’s good and what’s bad for some of the scenarios.”

After a pause Lee continued, “It will have to be really applied when you’re using all these different AI machines all over the place.”
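In software terms, a crude version of such a constraint might look like the hypothetical Python sketch below, in which every action an AI proposes is vetted against hard-coded rules before it is allowed to run. The predicates are illustrative stand-ins, not real detection logic; as Lee says, defining them for every scenario is exactly the hard part.

    def violates_first_law(action):
        # Hypothetical predicate: would this action injure a human being?
        return action.get("harms_human", False)

    def violates_second_law(action, ordered_by_human):
        # Hypothetical predicate: does this action disobey a human order?
        return ordered_by_human and not action.get("obeys_order", True)

    def permitted(action, ordered_by_human=False):
        # Check the constraints in priority order before allowing the action.
        if violates_first_law(action):
            return False
        if violates_second_law(action, ordered_by_human):
            return False
        return True

    # A proposed action that risks harming a human is rejected outright.
    print(permitted({"harms_human": True}))        # False
    print(permitted({"obeys_order": True}, True))  # True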

Regardless of its implications, AI is all around us already. Social media platforms like Twitter, Facebook and Instagram all use some variation of AI. Amazon tracks what a customer purchases and then shows him or her other products he or she might like.

This constant connection to technology echoes a common theme in science fiction: robots and humans falling in love.

“That’s a pretty classic robot story, of love between humans and robots. … Because that’s kind of the next step,” Ammidown said quizzically. “We interact with these machines all the time, you fall in love with things you interact with all the time, it just happens — dogs, cats, other people.”

However, Lee argued that the path AI is headed toward should be less emotional and less dramatic.

“We should probably go in the direction where AI is used as our tool, like cell phone, rather than something that would kill us all,” Lee said. “But it will be a very useful tool, that we cannot even imagine at this point.”

Lee commented again on how quickly AI is developing, “Whatever I said today could change tomorrow too, because I could see something happening a different way. … But I’m on the positive side that AI will play a good role in society.”

Popular culture has fueled people’s fascination with AI, and science and science fiction are a two-way street, Ammidown said.

“Science fiction writers are using science to inform their work, and they’re also pushing the boundaries of imagination forward to inspire actual scientists,” he said.

Science fiction can also act as a warning, according to Ammidown.

“If science has taught us nothing else it’s that ethics needs to be part of robotics and artificial intelligence. And I hope they’ve learned that lesson, because there’s a lot of warnings here,” Ammidown said while gesturing to the table full of novels, movies and comics about artificial intelligence.

Lee confirmed that there are many experts working on the ethical issues of robotics, including the AI Ethics Lab and the Ethics and Governance of Artificial Intelligence Initiative. Many professors at BGSU are doing their own research as well, although not specifically regarding ethics.

So, will AI ever outgrow humanity’s grasp? Maybe.

“That statement may be true, partly,” Lee said. “Because of advances in technology, you can’t really keep up with what they are doing indefinitely. So, if we provide some information, put in the constraints, before we put the constraint they [AI] do something or they find a way to block the constraint.”

Lee paused to think.

“It may happen. … But from what we are doing in computer science, doing research, teaching courses and then, I mean, when we say ‘doing research’ — I’m not [an expert in AI ethics] but I know what other people are doing — from that perspective we still have control on putting constraint. I mean, people know about all these issues. People leading the world in this technology knows about these issues and they are looking at it. At the same time, all this bias problem or error problem or the impact on society — they’re not just looking at ‘Oh, we’re just going to make it better and faster,’ they’re looking at all the other issues too.”