If you follow the popular tech press, it’s likely you’ve seen a number of stories about artificial intelligence. The term “artificial intelligence” is somewhat loosely defined. It fits under the category of “we’ll know it when we see it.”
The mathematician Alan Turing proposed a test for machine intelligence he called the “imitation game.” If a human couldn’t tell whether the responses coming from the other side of a closed door originated with a person or a machine, the machine would be deemed “intelligent.”
Take ELIZA, for example. ELIZA is a computer program written at MIT by Joseph Weizenbaum in the 1960s. Musical theater folks will have a leg up on the significance of ELIZA’s name. The program could simulate – sort of – a human-like response by performing pattern matching on the statements you typed and spitting them back in a slightly altered form. Here’s an actual interaction with ELIZA:
* Hello, I am Eliza.
> Hi Eliza, I'm Kevin.
* Did you come to me because you are Kevin?
> I came to you because you are Eliza.
* What makes you think I am Eliza?
> You told me your name was Eliza.
* We were discussing you, not me.
> I'm Kevin.
* Do you enjoy being Kevin?
And so forth. In the 1960s, Weizenbaum described interacting with ELIZA as similar to talking with a psychiatrist. I’d describe it more like cocktail party talk in the Hollywood Hills. Rather than Turing’s imitation game, it’s an irritation game. The responses sound a bit existential because of the peculiar syntax of taking the input words and scrambling them.
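For the curious, the trick behind that exchange can be sketched in a few lines. What follows is a minimal, hypothetical Python illustration of ELIZA-style pattern matching – not Weizenbaum’s actual code, and the rules and names are mine. Each rule pairs a pattern with a response template; whatever the pattern captures gets echoed back with first- and second-person pronouns swapped, which is where that existential flavor comes from.

```python
import re

# Swap first- and second-person words so the echo reads
# from ELIZA's point of view ("I'm Kevin" -> "Kevin").
PRONOUN_SWAP = {"i": "you", "me": "you", "my": "your",
                "am": "are", "you": "I", "your": "my"}

# Each rule: a pattern to find in the input, and a template
# that echoes the captured text back as a question.
RULES = [
    (re.compile(r"i am (.*)", re.I), "Did you come to me because you are {0}?"),
    (re.compile(r"i'm (.*)", re.I), "Do you enjoy being {0}?"),
    (re.compile(r"you are (.*)", re.I), "What makes you think I am {0}?"),
    (re.compile(r"(.*)", re.I), "We were discussing you, not me."),
]

def reflect(fragment):
    """Swap pronouns word by word; leave everything else alone."""
    return " ".join(PRONOUN_SWAP.get(w.lower(), w) for w in fragment.split())

def respond(statement):
    """Return the response from the first rule whose pattern matches."""
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I'm Kevin"))      # Do you enjoy being Kevin?
print(respond("You are Eliza"))  # What makes you think I am Eliza?
```

The catch-all rule at the bottom is the non-answer answer: when nothing matches, deflect. No understanding anywhere, just string surgery.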
I suppose evasive, non-answer answers can sound like someone with intelligence. Or political ambitions.
The techniques behind ELIZA were so thoroughly assimilated into computer science that, when I was at MIT decades later, we had to write a simplified form of ELIZA as one of our homework assignments. We wrote it in a computer language called Lisp that was based on the principles of lambda calculus and we manipulated the input statements with operators such as car and cdr. (I include this information to demonstrate you never can tell what you’ll learn by reading a theater website.)
By this time, Weizenbaum was concerned that people – mostly non-technical people – believed ELIZA demonstrated an actual thinking process. It didn’t. After all, if first-term freshmen at MIT are assigned to develop something similar to ELIZA as part of a single homework assignment, you know it can only be so sophisticated.
Fast-forward half a century to Siri. As most people who have used it know, Siri’s interpretation of human sentences is sort of amazing and sort of awful all at the same time. If you stick to rigidly formed questions of a rather narrow syntax, you’ll be able to communicate. Otherwise, you’ll hopelessly confuse the software.
Come to think of it: Siri’s behavior is close enough to Apple Customer Service that it might be able to pass the Turing test.
At its core, Siri is just a program that knows how to do an Internet search. The program mimics a very rudimentary form of human behavior: pretending to know something merely because of access to Google. Place your smartphone into airplane mode and see just how well Siri does.
But as with her older sister, ELIZA, non-technical people want to believe Siri is “intelligent.”
Which brings us to the story of a neural network that was recently programmed to write a screenplay. This software is named Benjamin. It learned screenwriting structure and dialogue by examining a large number of science fiction screenplays. To me this is Turing test cheating: Benjamin follows some structure based on a rigid synthesis of other screenplays. Which is exactly how many screenwriting teachers instruct their students to write. (Better get that catalyst moment onto page 12 if you want the studio executive to see it.)
Maybe we should require that, for Benjamin to pass the Turing test, it produce an output that’s better than the cookie-cutter stuff from humans?
The script Benjamin “created,” Sunspring, was shot with quality actors and production values. You can readily view it online. It’s pretty much what you’d expect. After all, there’s only one Blade Runner script versus seven Star Wars scripts in its learning base. The dialogue Benjamin produced reflects this bias. But who really cares about dialogue and plot in a sci-fi film? Movies are done with CGI in post anyway.
Sure, the press stories about Sunspring highlight it as “humorous” and “intense.” This is our usual trend: the non-technical people believe there actually is intelligence here. But if you watch the film you’ll find out where the real creative intelligence lies: the actors and the viewer. Actors know that inflected meaning is a powerful form of communication. All they have to do is emotionally commit to the words. And, sensing this commitment, the viewer supplies the rest. Humans, after all, have a tremendous desire to see patterns, even in things that may be random. Zen-like meaning can be found anywhere.
If one pushes hard enough.
It is the humans who are creating any of the art that might accidentally happen in Benjamin’s film. Artificial intelligence may be an apt name. Artificial intelligence is to thinking what artificial flavoring is to taste. A mimicry of a very specific form but without nature’s organic authenticity.
For creation is neither mere synthesis nor mere extrapolation. It is the spark of something out of dark nothingness. It can only arise where there is feeling. Forget the imitation game. That’s reductive.
Intelligence is awareness. It is the ability to feel the colors change against the rocky walls of the Grand Canyon as the sun moves through the course of a day. It is the desire to reach for one of the endless points of light in the band of the Milky Way that ribbons the nighttime sky. It is the joy to be part of living existence and the terror in the knowledge that the experience is finite.
It is the empathetic connection with both one’s own species and the complete biosphere that spawned it.
Let me know when we’re able to program that.
Originally published July 27, 2016 in Footlights.