When Stanley Kubrick’s film masterwork, “2001: A Space Odyssey,” premiered 55 years ago, audiences were introduced to a number of technological advances that America was just starting to grapple with.
The dramatic force that propelled the Jewish American filmmaker’s motion picture forward was provided not by a character of flesh and blood, but by HAL, a large-scale electronic computer. It was a machine with a human voice and seemingly a human consciousness that Kubrick and his literary collaborator, Arthur C. Clarke, endowed with an almost human ability to act.
So, America and the world were first exposed to what, these many decades later, we have come to call artificial intelligence. In the film, the computer seemingly rebels and challenges the crew of a spaceship bound for the planet Jupiter. It actually causes the death of one crew member before being deactivated, or killed off, itself.
Today, HAL has been reborn, as it were, as a full-blown branch of human inquiry that is challenging all of us, much like the crew of Kubrick and Clarke’s space vehicle. Artificial intelligence, or AI as it’s called, for better or worse, is here to stay.
Because of its potential to attach itself to everything we do, the new technology has set off alarm bells in think tanks, corporate offices, and governments around the world. Just this week, the United States, Britain, Israel, and 15 other nations signed the first international agreement to guide the responsible development of artificial intelligence. The non-binding agreement is the first step in what is likely to be a broader consideration of the ethical uses of the new technology.
It is yet another sign that the new science could lead us into a revolution as profound as, or more so than, the Industrial Revolution of the past couple of centuries. For Paul Wolpe, the Emory University ethicist, the new era comes with a new name.
“It’s not an industrial revolution. It’s a new techno-intellectual revolution,” Wolpe says. “I mean, computers changed everything, but they were still a tool. Artificial intelligence is more than a tool. It’s a partner. It makes decisions with you. So that changes the nature of the way that we interact with technology.”
As someone who has spent most of his life exploring the ethical implications of human conduct, Wolpe now sees a new role for himself. He is looking to explore not only human conduct in this new age but also how machines conduct themselves in our society.
“Artificial intelligence, or AI, as it’s called, is decision-making technology, and because of that, everything it does has ethical implications. You can’t design an automated car and tell it what to do without confronting the possibility that it can hit the pedestrian or smash into the wall and maybe kill people. Ethics are intrinsic to it, part of its development rather than outside of it, or extrinsic, part of its use, like most other technologies.”
Concern over the future of how we interact with our increasingly smart machines has even touched Jewish religious life in Atlanta. Earlier this year, Congregation Shearith Israel, the Conservative synagogue in the Virginia Highland neighborhood, was awarded a $5,000 grant funded, in part, by Sinai and Synapses, a New York-based nonprofit that hopes to encourage a broader consideration of technology in local Jewish communities.
The grant, which the AJT reported on in June, is one of 15 awarded to synagogues around the country to help them bring together scientists and religious leaders in thoughtful dialogue.
The founder of the Sinai and Synapses project, Rabbi Geoffrey Mitelman, believes that what makes AI so challenging for us as Jews is much the same as what makes it so challenging for scientists and ethicists.
“Artificial intelligence asks the fundamental question, what does it mean to be human? And that’s a question that really gets to the core of who we are. For so much of human history, we’ve said what makes humans unique is intelligence. The influential Jewish German philosopher Hermann Cohen, over a century ago, said that what it means to be created in the image of G-d is to have intelligence. And so, if there is an artificial intelligence, that raises this question: are we now godlike? And if so, what is our responsibility?”
Much of the concern, as Rabbi Mitelman sees it, is that the development and acceptance of AI is moving so fast. The fear is that as the technological development of AI accelerates, it will outstrip our ability to control it. So, like the crew facing HAL in “2001,” we may find ourselves and our future in a battle with the machines themselves.