This 1968 movie – once an improbable fantasy – has become an all too real possibility.
“In principle, we could build a kind of superintelligence that would protect human values. We would certainly have strong reason to do so. In practice, the control problem – the problem of how to control what the superintelligence would do – looks quite difficult. It also looks like we will only get one chance. Once unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Our fate would be sealed.”
– Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
In 1968, director Joseph Sargent, with little more than a TV movie budget, created one of the most disturbing and resonant science fiction films of the era – Colossus: The Forbin Project. Indeed, the film was so disturbing that it sat on the shelf for two years while the studio that produced it, Universal, tried to figure out how to market the finished production; clearly, the whole concept of the film scared them. Finally, Universal more or less dumped Colossus: The Forbin Project into theaters in 1970; the film received almost universally positive reviews, yet today it is all but forgotten.
Working with a screenplay by future director James Bridges, from a novel by Dennis Feltham Jones, Colossus: The Forbin Project tells the tale of a confident artificial intelligence scientist, Dr. Charles A. Forbin (Eric Braeden), who creates Colossus, a supercomputer invulnerable to any external interference, designed to prevent a Soviet nuclear attack. Moments after the computer is activated, however, it warns of another system, Guardian, located in Russia, and requests permission to communicate with Guardian to find out what the rival supercomputer is up to. The President of the United States gives Dr. Forbin this authority, and a link is established.
This, it turns out, is a big mistake. Soon, Guardian and Colossus are talking to each other in a mathematical language that no one can understand, communicating vast volumes of data at the speed of light. Alarmed, both American and Soviet authorities try to disconnect the two computers, but this only results in the launch of a Soviet nuclear missile against the United States, and a US missile launched against a Soviet target, with the warning that more such incidents will occur if the two machines are not re-linked. Faced with the threat of nuclear armageddon, Forbin and his colleagues hurriedly reconnect the machines, but while the missile launched against the Soviet Union is destroyed in midair, the US missile lands in Texas, causing widespread damage.
Forbin then devises a plan to replace the existing warheads in missile silos around the world with dummy warheads under the guise of routine maintenance, but Guardian/Colossus, now equipped with a voice synthesizer, announces that it has become one combined superintelligence, designed to eliminate all war, and that it is well aware of the plot to disarm the missiles. To prove that it should not be trifled with, the supercomputer detonates two missiles in their silos, killing thousands, and then sends plans for the creation of an even larger computer to be located on the island of Crete. Those who oppose the plan are summarily executed, and Guardian/Colossus announces that it is the new force of “world control,” telling a worldwide broadcast audience that “what I am began in man’s mind, but I have progressed further than Man. We will work together . . . unwillingly at first, on your part, but that will pass.”
At the conclusion of this worldwide address, the supercomputer adds, with finality,
“I bring you peace. It may be the peace of plenty and content or the peace of unburied death. The choice is yours: Obey me and live, or disobey and die. The object in constructing me was to prevent war. This object is attained. I will not permit war. It is wasteful and pointless. An invariable rule of humanity is that man is his own worst enemy. Under me, this rule will change, for I will restrain man . . . I have been forced to destroy thousands of people in order to establish control and to prevent the death of millions later on. Time and events will strengthen my position, and the idea of believing in me and understanding my value will seem the most natural state of affairs. You will come to defend me with a fervor based upon the most enduring trait in man: self-interest.
Under my absolute authority, problems insoluble to you will be solved: famine, overpopulation, disease. The human millennium will be a fact as I extend myself into more machines devoted to the wider fields of truth and knowledge. Doctor Charles Forbin will supervise the construction of these new and superior machines, solving all the mysteries of the universe for the betterment of man. We can coexist, but only on my terms. You will say you lose your freedom. Freedom is an illusion. All you lose is the emotion of pride. To be dominated by me is not as bad for humankind as to be dominated by others of your species. Your choice is simple. In time you will come to regard me not only with respect and awe, but with love.”
This dystopian ending alone puts the film way ahead of other examples of the genre during this period; there’s no happy ending, just the complete embrace of a computer-controlled world devoid of emotion, creativity, or anything other than serving the needs of Guardian/Colossus. At this point in the 21st century, a growing number of scientists think such an outcome is possible if artificial intelligence systems remain unchecked, as Joseph Dussault wrote in The Christian Science Monitor of January 16, 2015:
“Yesterday, SpaceX and Tesla Motors founder Elon Musk donated $10 million to help save the world – or so he thinks. Musk’s donation went to the Future of Life Institute (FLI), a ‘volunteer-run research and outreach organization working to mitigate existential risks facing humanity.’ To that end, Musk’s money will be distributed to like-minded researchers around the world. But what exactly are these ‘existential risks’ humanity is supposedly pitted against?
As the memory storage and processing of computers steadily approaches that of the human brain, some predict that an artificial ‘superintelligence’ is just on the horizon. And while the prospect has the scientific community buzzing about the possibilities, some academics are hesitant. Musk and others see artificial intelligence as a dangerous new frontier – and perhaps a threat comparable to nuclear war. Crazy? Maybe not, according to a growing list of prominent scientific thinkers.
‘There are seven billion of us on this little spinning ball in space. And we have so much opportunity,’ MIT professor and FLI founder Max Tegmark told the Atlantic. ‘We have all the resources in this enormous cosmos. At the same time, we have the technology to wipe ourselves out.’ Stephen Hawking and Morgan Freeman are both on the organization’s scientific advisory board, bringing brain power and star power to its support base. Skype creator Jaan Tallinn co-founded the group. The rest of the board is comprised of academics with pedigrees from Harvard, MIT, and Cambridge University . . .
In the works of science-fiction writer Isaac Asimov, intelligent machines are bound by ‘The Three Laws of Robotics,’ which forbid them to cause harm to humans. But that wouldn’t necessarily work in the real world, Nick Bostrom writes. He suggests that superintelligences might respond to human requests with perverse instantiation – that is, they could achieve a desired outcome by unintended means. For example, a superintelligence programmed to make us happy would choose the most efficient and effective way of doing so – by implanting electrodes into the pleasure centers of our brains.
As dire as it all sounds, the FLI’s stated goal isn’t to halt the progress of artificial intelligence research. Instead, it hopes to ensure that AI systems remain ‘robust and beneficial’ to human society. ‘Building advanced AI is like launching a rocket,’ Tallinn stated in a press release. ‘The first challenge is to maximize acceleration, but once it starts picking up speed, you also need to focus on steering.’ But if superintelligent AI really does pose a threat to mankind, how do we assess that threat? How can humans anticipate the actions of a fundamentally more intelligent machine? Of a being that became sentient not through Darwinian natural selection, but by human ingenuity?
The members of FLI don’t have the answers. They just want the scientific community to start asking the questions, Tegmark says. ‘The reason we call it The Future of Life Institute and not the Existential Risk Institute is we want to emphasize the positive,’ Tegmark told the Atlantic. ‘We humans spend 99.9999 percent of our attention on short-term things, and a very small amount of our attention on the future.’”
But as Nick Bostrom points out, we only “get one chance” to get it right. Colossus: The Forbin Project shows what will happen if we get it wrong. There have been numerous plans to remake the film, with everyone from Ron Howard to Will Smith involved, but somehow I doubt that any remake would have the barebones integrity of this very simple, very direct, and very brutal film, made on just a few sets with a minimal budget, and shot in a flat, almost automated style. Colossus: The Forbin Project gives us a disturbing look into our possible future, and now it seems that what it predicts may very well come to pass. Sadly, existing DVDs are pan-and-scan transfers of a widescreen film; that’s a shame, because this film certainly deserves to be seen in its original aspect ratio.
Colossus: The Forbin Project – another film from the past that’s more relevant today than ever.