Hollywood has a love affair with artificial intelligence movies. No surprise, since human beings have been obsessed with robots and AI for decades — an obsession which typically takes some form of the “Frankenstein complex” and has super-intelligent robots rising up against their flesh-and-blood masters to achieve world domination or manipulating humans into assisting machine aims. With machine learning and self-determination now hot topics across multiple industries, it’s worth taking a look at the Hollywood history of robot behavior and how it stacks up against emerging technologies.
Skynet. Judgment Day. The Resistance. These words conjure up terrifying apocalyptic images for most sci-fi fans. “The Terminator” franchise, one of Hollywood’s most popular, serves up everything humans are afraid of: a massive, interconnected network of computers which achieves sentience and then attempts to exterminate the human race. This is machine learning gone awry — Skynet is convinced humans will attempt to destroy it and so sets out to destroy humankind to safeguard its own existence. But in fact, companies like Northrop Grumman have advanced the concept of autonomy to help preserve freedom and national security. The U.S. Navy’s X-47B UCAS-D demonstrated that sophisticated autonomous systems are already capable of landing on and launching from an aircraft carrier — one of a naval aviator’s most difficult tasks.
A newer Hollywood take on AI, “Ex Machina,” follows would-be savior Caleb as he attempts to free humanoid robot Ava from the clutches of her creator as an emotional bond develops between machine and man. The plan succeeds — except for the part where Caleb is trapped by Ava as she flees her prison, her “affections” for him revealed as a clever ruse. The last scene of the movie has her slipping into synthetic skin and passing among humans unnoticed. This is the other side of Hollywood’s robot reality: AI intelligent enough to mimic emotional responses and infiltrate society at large to further manipulate human affairs. It’s a subtler version of the Frankenstein complex but no less terrifying: bad behavior run amok, hidden behind a human face.
Next on the list of artificial intelligence movies is the 2004 Will Smith vehicle “I, Robot.” Based loosely on the work of groundbreaking sci-fi writer Isaac Asimov, the movie centers on Asimov’s oft-cited Three Laws of Robotics:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
And while the summer blockbuster did well enough in theaters, the original fiction that informed the script is of more interest, specifically the short story “Runaround.” In it, engineers Powell and Donovan travel to Mercury with the robot SPD-13 (“Speedy”), whose Third Law has been strengthened because he was expensive to produce. When ordered to collect selenium from a nearby pool, Speedy dutifully heads off but does not return, prompting Powell and Donovan to search for him. They discover Speedy circling the pool, which holds chemicals dangerous to him: the casually given order (Second Law) and his more potent self-preservation programming (Third Law) have reached an equilibrium, leaving him unable either to complete the task or to abandon it. Powell realizes that placing himself in danger within Speedy’s sight will invoke the First Law, which supersedes the conflicting instructions.
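The hierarchy driving Speedy’s dilemma can be sketched as a toy priority model. This is purely illustrative — the action names, flags, and the strict lexicographic ordering below are inventions for the sake of the example, not anything from Asimov’s story or a real robotics system (in “Runaround,” the tuned law strengths produce a deadlock that a strict ordering like this one would not):

```python
# Toy model of Asimov's Three Laws as a strict priority ordering.
# All names, flags, and scenarios are illustrative inventions.

def choose_action(actions):
    """Pick the action with the least severe law-violation profile.

    Each action is a dict of booleans for which laws it would violate.
    Tuples compare element by element and False sorts before True, so
    avoiding First Law violations always outranks the Second Law,
    which in turn outranks the Third.
    """
    def violation_key(action):
        return (
            action["harms_human"],     # First Law: never allow human harm
            action["disobeys_order"],  # Second Law: obey human orders
            action["endangers_self"],  # Third Law: self-preservation
        )
    return min(actions, key=violation_key)

# Speedy's situation near the selenium pool, once Powell puts himself
# at risk: approaching the pool endangers the robot (Third Law),
# retreating disobeys the order (Second Law), and ignoring the
# endangered human violates the First Law. Under a strict ordering,
# the First Law dominates and the rescue-compatible action wins.
actions = [
    {"name": "approach pool", "harms_human": False,
     "disobeys_order": False, "endangers_self": True},
    {"name": "retreat to base", "harms_human": False,
     "disobeys_order": True, "endangers_self": False},
    {"name": "ignore endangered human", "harms_human": True,
     "disobeys_order": False, "endangers_self": False},
]
print(choose_action(actions)["name"])  # → approach pool
```

The design choice worth noting is that a lexicographic ordering resolves every conflict decisively; Asimov’s drama comes precisely from weighting the laws as competing “potentials” instead, which is what allows Speedy’s equilibrium to exist at all.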
While less “kill all humans” or emotionally subversive, this version of robotic intelligence is the closest match to current models: machines built to accomplish a specific purpose — and which are extremely intelligent in areas directly related to that purpose — but still require human intervention and oversight.
Trusted Cognitive Evolution
Artificial intelligence movies have perpetuated a low-level fear of AI advancement but, in this case, fact and fiction don’t match. “Innovators are developing software that allows autonomous systems to decide for themselves how best to complete a specific task,” says Katherine Lemos of Northrop Grumman. “Ensuring that these autonomous systems never go rogue, and always reflect the values of their developers, however, absolutely remains a top priority for engineers, scientists and ethicists.”
Bottom line? Hollywood loves to show robots behaving badly, but the future of self-determining devices isn’t nearly as dystopian as what’s depicted on screen.