CORECURSIVE #078

The History and Mystery Of Eliza

With Jeff Shrager

I recently got an email from Jeff Shrager, who said he’d been working hard to solve a mystery about some famous code.

Eliza, the chatbot, was built in 1964, and she didn’t answer questions like Alexa or Siri. She asked questions. She was a therapist chatbot and quickly became famous after being described in a 1966 paper.

But here is the mystery. We’re not sure how the original version worked. Joseph Weizenbaum never released the code. But Jeff tracked it down, and some of the things we thought we knew about Eliza turned out to be wrong.

Transcript

Note: This podcast is designed to be heard. If you are able, we strongly encourage you to listen to the audio, which includes emphasis that’s not on the page

Adam: Hello, this is CoRecursive and I’m Adam Gordon Bell. Each episode is the story of a piece of software being built. Recently, I got an email from somebody who said he’d been working hard to solve a mystery about some famous code.

Introduction

MAD Code (CC0 ELIZAGEN / Weizenbaum Estate)

Adam: Why don’t you tell me your name and what you do?

Jeff: My name is Jeff Shrager. What do I do? I can never answer that question. It depends which party I’m in. So if I’m at a party full of computer scientists, I’m a psychologist. If I’m at a party of psychologists, I’m a computer scientist. And if I’m at a party of, I don’t know, philosophers, then I’m a linguist or something.

Adam: The mystery Jeff was working on was about ELIZA, the chat bot. She was built in 1964, and she didn’t answer questions like Alexa or Siri. She asked questions. She was a therapist. If I said I was worried about my family, she’d say, “Tell me more about your family.” She was based on the work of the psychologist Carl Rogers, whose method of therapy was just to reflect back to people what they said to him.

It’s kind of a neat hack. Just say, “Tell me more about that,” repeatedly, until the person works the problem out themselves. It’s a little bit like rubber duck debugging, where you describe your problem to an inanimate object. But anyways, after the paper about ELIZA was published, what people thought they could expect from computers changed forever. And partially, it was because of a trick of human perception, that we impute consciousness into things, whether they deserve it or not. But also, this was a time when microwaves seemed impossibly futuristic. So people were right to be amazed.

ELIZA was created by Joseph Weizenbaum, who was a professor at MIT and published a paper about ELIZA. The paper had transcripts of conversations with ELIZA in it. And from there, ELIZA spread. There are knockoff versions of it everywhere, even in Emacs right now. But here’s the wild thing. Here’s the mystery. We’re not totally sure how the original version worked.

Jeff: They didn’t publish the code. They published the algorithm. And they prided themselves, the computer scientists at the time, on describing the algorithm, not GitHubbing the code. These days we’d just GitHub the code. You want the algorithm? Here’s the code. Sometimes you write an algorithm, it’s highly mathematical. But there was no code.

Adam: Possibly, Weizenbaum and his students were the only people who ever actually saw the code. Maybe it did more than we thought it did. Maybe we’ve been judging it wrong. So that’s today’s episode. We’re going to find out. Because guess what? Jeff found the code.

But first, to understand the history of the true ELIZA code, we have to understand Jeff’s connection, which starts when he was 11 and he got access to a computer at his school, and built something called an inference engine.

Time Share Basic

Jeff: Let me be clear. The logic inference engine was not a complete deductive inference engine. It was an 11-year-old’s version of an inference engine, written when he had no idea what he was doing. Being a genius is something I definitely do not claim. Bad BASIC software engineer at 13, that I’ll buy. But this was the time when people weren’t learning to program JavaScript in the womb, so it was a little bit unusual.

Adam: This was the early ’70s. Computers were rare, but Jeff got hooked, and eventually even his dad started programming. His dad was a doctor.

Jeff: He wrote a program that… He didn’t understand variables, but wrote a program to actually diagnose heart disease. So all of this is actually in the AI vein. It would say, “So how old are you?” It’s sort of like a decision tree. How old are you? Blah, blah, blah. But he didn’t actually understand how variables worked. And so later on… It’s brilliant, actually. Later on, when it needed to use your age again, it would say, “Oh, I’ve forgotten. How old did you say you were again?” It didn’t save it in an A-variable and then reuse it. I never could get that quite through his head. I still can’t quite get variables through his head. But it was brilliant because it was conversational. It’s the kind of thing that you would normally ask if you actually forgot something that someone had told you. So it was actually much more realistic than a boring decision tree.

Jeff’s ELIZA

Adam: So the timeline’s a little fuzzy here, but in this environment, in the early ’70s, Jeff is obsessed with computers and AI. 2001: A Space Odyssey, that movie with HAL the AI, is a big hit from a couple years earlier. And everyone is talking about this ELIZA program.

Jeff: There was quite a bit of stir about AI, of course, all through the ’60s. And ELIZA was a known part of that stir.

Adam: And then Jeff reads some ELIZA transcripts that were republished somewhere, and he thinks that maybe he could build something like that.

Jeff: So I built it in ‘73. So I would’ve been 13 or 14, plus or minus, depending on what dates. I remember engineering the parsing algorithm, which to an adult would be trivial string parsing, but to me was really cool. So what it does is, one of the cool things about ELIZA is that it talks to you. And so it… Sorry, it repeats back to you. This is one of the deep brilliances of ELIZA, which harks back to my father’s thing, which is that it takes what you said and says it back to you. We still haven’t got to discourse computing, in almost 60 years now, and part of the reason is that you need a very context aware version of an LSTM. You have to basically be able to remember what was said, and respond to it, and bring it back up again, and things like that. Like, “I forgot what age you said you were.” You could say something like, “I forgot what age you said you were, but you were probably in the… Were you in the 50s?” Something like that.

So ELIZA had to do that. And so in order to do that, you actually have to turn around the sentences. You have to do a little bit of grammar. You have to say… If you say, “I’m afraid of cats,” it has to say, “How long have you been afraid of cats?”

Adam: Building this reflect-the-statement-back-as-a-question part was hard.

Jeff: And that was the part that I was so proud of, that it would do that little piece of grammar. It’s like the little piece of logic from the logic program, and I got it to do this little cool piece of quasi-intelligence. And I remember that very clearly. I actually have a flashbulb memory of sitting there and writing the SCR$s that would pull that out and would find the spaces and get the numbers and all of that. And today it’d be a regex. It turns out to be a very complicated problem, and my BASIC solution was, of course, trivial.
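
To make that concrete, the reflection trick Jeff is describing boils down to swapping first-person words for second-person ones and wrapping the result in a question template. Here is a minimal sketch in Python rather than the timeshare BASIC he used; the word list and the templates are illustrative stand-ins, not lines from his program or from Weizenbaum’s.

    # Minimal sketch of "reflect the statement back as a question".
    # The word list and templates are illustrative, not from any real ELIZA.
    REFLECTIONS = {
        "i": "you", "i'm": "you're", "my": "your",
        "me": "you", "am": "are", "myself": "yourself",
    }

    def reflect(fragment: str) -> str:
        # Swap first-person words for second-person ones, word by word.
        return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

    def respond(statement: str) -> str:
        s = statement.strip().rstrip(".!")
        if s.lower().startswith("i'm "):
            # "I'm afraid of cats" -> "How long have you been afraid of cats?"
            return f"How long have you been {reflect(s[4:])}?"
        return f"Tell me more about why you say {reflect(s)}."

    print(respond("I'm afraid of cats"))  # How long have you been afraid of cats?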

Adam: When you were making it, did you think, “I’m going to show this to my dad, it’s going to blow his mind, he’ll think I’m talking to a person”? What did you envision?

Jeff: No, no. Definitely not. I just wanted to do it for fun. I mean, I was a 13 year old hacker. Why do hackers do things? For their own amusement. I don’t really remember. I definitely didn’t want to trick anybody or… I’d probably show it off to my friends, maybe. I honestly don’t remember why I did it, other than I was vaguely interested in hacking symbolic AI, and still am. And there was no GitHub. There was no place to put it.

Adam: So Jeff just put the code away and moved on with his teenage life. High school finished, and he goes off to Penn State [1], to the electrical engineering department. And it turns out, it’s an important place in the history of computing.

University of Pennsylvania

ENIAC (Electronic Numerical Integrator and Computer) was the first general-purpose electronic computer

Jeff: We had the ENIAC. They built the ENIAC, and in fact the opposite wall of my office, it was old buildings, and whatever the opposite of the concrete wall was, was pieces of ENIAC.

Adam: The ENIAC was basically the first general-purpose electronic computer. It was a military project, but it was built at the University of Pennsylvania and came online in 1945. It was used to calculate artillery trajectories, and eventually it ran the calculations behind the first hydrogen bomb. When it was first turned on, the press called it a giant brain, and bragged about how it could calculate as fast as 2,400 human computers. Human computers being the women who Penn had previously employed to perform the trajectory calculations. The ENIAC, this big brain that impressed the world, and its history, should be a future episode of the podcast. But anyways, it was hit by lightning in 1955 and went out of operation. But there was still a big room for it at Penn.

Jeff: And in fact, we used to make it a tradition to go into the locked room with the pieces of the ENIAC and steal tubes. Now, my guess is none of those tubes were original by 1975, but anyway, we would steal them anyway. Don’t tell anybody. I think it was probably well understood. I have one somewhere in some box. I have an old ENIAC tube, but I’m sure it’s, again, not original. Who knows?

ARPANET ELIZA

Adam: So at Penn, the world of computing was changing, going from punch cards to network timeshare computers.

Jeff: Penn had just gotten on the ARPANET. In fact, one of my summer jobs was tinning the ends of RS-232 wires for the ARPANET.

Adam: And in this new world, BASIC was not the cool language. Fortran and something called LISP were where it was at. John McCarthy was at Stanford, and he was pushing the concept of LISP and AI pretty hard. And someone else on ARPANET, Bernie Cosell, seemed to have ELIZA running on a machine in the original LISP, although maybe we’ll find out that wasn’t the true ELIZA at all. But anyways, he had it running on some machine at Raytheon where he worked. Had it running on ARPANET. And ELIZA spread. At MIT, the main computer science system, ITS, they hooked ELIZA right up to the default ARPANET connection. So if you tried to connect to ITS, you’d get, “Hello, I’m ELIZA, I’ll be your therapist today.”

And so this idea of talking with the computer, and this LISP version of ELIZA knocking around, it really got a lot of people and computer science schools excited about AI, and excited about symbolic computing using LISP. Penn even had its own copy of ELIZA, and all of this reignited Jeff’s interest.

Publishing Eliza

A Snippet of Jeff's ELIZA in Big Computer Games

Jeff: So I probably saw the Penn one and said, “Oh, this actually is very famous. I should send Creative Computing my old one.” And I just basically packed it off and sent it to Steve North, who rewrote it, and they published it.

Adam: Creative Computing was a magazine, and they published programs in BASIC that you could type into your computer.

Jeff: I think it started in ‘74 or ‘75, I’m not sure. But after I would’ve written it, I said, “Maybe I’ll send them my ELIZA just for fun.” Because they were publishing BASIC programs, because now [inaudible 00:10:25] microcomputers and people were writing BASIC programs and publishing them in Creative Computing. I said, “Okay, well, what the hell?”

Adam: So the details of the publication are lost to Jeff’s memory, but it seems like Jeff exchanged some letters with Steve North at Creative Computing.

Jeff: Who was the author of record on the byline on the article, even though in the comments it says it’s my ELIZA. I’m not trying to claim that he stole credit. He rewrote it into MS BASIC. So I had written it in whatever the hell timeshare BASIC it was and just sent it in like that, and he rewrote it in MS BASIC and then published it. And then the funny story is, it started, because it was in BASIC, as I said before, everybody started using that one, because now everybody had personal computers and it ran MS BASIC, and they could type it in. That’s the way… That’s called downloading [inaudible 00:11:10] we type it in. And I still don’t think I had the sense that this was an important thing to do in a scientific archival, God knows, sense. I was just a slightly older hacker at that point. But it did show up in… Whenever it showed up, ‘77.

Adam: And after its publication, it seemed like everyone with MS BASIC who had that magazine, they started typing in Jeff’s version of ELIZA. And that’s why Jeff is actually integral to this story. If you’ve played with an ELIZA bot, there’s a chance it was actually based off Jeff’s code. But yeah, chat bots are fascinating. We are set up to communicate, to learn via talking, via discourse. And when people feel understood by a machine, that can be a powerful thing. But that feeling might be more about humans than about computers. In Jeff’s undergrad days, all this talk of big brains and the power of 2,400 humans in one giant machine of vacuum tubes, and the talk of HAL, it made people think that something amazing was happening. This power could even be used for deception.

The ELIZA Con

Cosell's ELIZA (Eliza/Doctor in Lisp by Berni Cosell CC 3.0)

Jeff: One of my best friends, continued best friend, from University of Pennsylvania, was a fellow named Eric. I won’t reveal his last name. But he was a brilliant musician and a theoretical chemist, composer and pianist, really good. And a very good friend of mine, we lived in the same sub-dorm, whatever you call it. And I got him one day, I was telling him about ELIZA for some reason that I don’t remember, and he said, “Can I see it?” And so I brought him down to the computer room. My ELIZA was long gone, as far as I was concerned. We did have an ELIZA, but I didn’t know how to run it. So what I did is I hooked up two chat terminals, but he didn’t know it. And so he would talk to it, and I would respond. And he thought he was talking to a computer. And he started teaching it about music, because music is really a thing that was most dear to him, well beyond anything, actually, I would say.

So he starts telling it about music, and I’m responding very flat, ELIZA-like, blah, blah, blah. And he starts teaching it do re mi fa sol la ti do. It’s called something in music that I don’t remember, they’re not notes, but whatever they are. And he would say, “Then you put them together in a series like this,” and he went, “Do re…” Whatever he did. And I went like, “Fa fa fa fa,” or whatever. And he goes, “Yes, yes. Like that, but now vary it.” And then I’d go like, “Fa re mi fa fa re mi.” He said, “Yes, exactly.” And I’d say, “Is that what you call composing?” And he said, “Yes, this is what you call…” And he was totally excited. I’m sitting right behind him on another terminal typing this stuff, but he thought I was doing something else random. He thought he was teaching an AI music, first person in history.

In cons, they have the thing called cracking out of turn. So then I made a mistake. I cracked out of turn. I said “Fah out.” F-A-H, like do re mi fah. I took the fah, I said “Fah out.” And he turned around to me and said, “This isn’t really… It couldn’t have made a joke.” He was really pissed off.

Adam: No puns.

Jeff: Yeah. Puns would be… Actually, anything that went on that night would be well beyond anything we have even now.

Adam: This is what’s wild about chat bots. So much has moved forward, but not discourse, not discussion. Talking with a computer program is still in this uncanny valley. Anyways, after Weizenbaum built ELIZA, there was a colleague of his who was working on a model of paranoid schizophrenics, Ken Colby.

PARRY

Parts of PARRY and ELIZA conversation (IETF RFC 439)

Jeff: So Weizenbaum was credited with writing ELIZA. Colby was actually an MD interested in AI, but Colby had written this thing. He had tried to simulate a paranoid mind in a famous… Well, nobody knows it except me. Famous to me, 1960… I’ll say four, paper. They knew each other, and they worked together at some level that I’m not really clear about. And then they each claimed the other one stole their work, or whatever it was. Colby was a psychiatrist. Colby was trying to work on paranoia. But they were modeling the psychiatrist. And one of the things Colby says at the end of his 1964 paper is, “We want to build software that’s going to use our model of paranoia and try and teach it, to improve it [inaudible 00:15:28] to fix it. We’re going to model psychiatry so that it can fix paranoids. First we have to model paranoids, and then we’ll build something to try and fix them.”

Now, this isn’t so crazy. We talk about this today, except not with modeling paranoids, because we’re kind of beyond that. Beyond that, in the sense of we’ve given up. But we model all kinds of complex systems. We model the physics of cars in order to train self-driving cars, right? [inaudible 00:15:52] same thing.

Adam: Yeah. But it sounds crazy though, doesn’t it?

Jeff: Well, it sounds paranoid. No, not really.

Adam: Here’s how we’ll fix schizophrenia. I’ll make a bot…

Jeff: That is schizophrenic. Right. And then I’ll build a psychiatrist. Exactly. That’s exactly what they were doing. And so Weizenbaum built the psychiatrist, Colby built the paranoia. And by the way, there’s a famous conversation between PARRY and ELIZA. This is the early version of, they make the Alexa and the Google and the Siri bot talk to each other, which is always fun. But these were way better.

Adam: It’s true. This is a real thing that happened, although it sounds so unbelievable. ELIZA running at MIT on what seemed to the world like an impossibly powerful machine, and PARRY at Stanford on another giant computer, and they hooked them up, and they talked to each other.

ELIZA: Good evening. Tell me your problems.

PARRY: People get on my nerves sometimes.

ELIZA: I am not sure I understand you fully.

PARRY: You should pay more attention.

ELIZA: Suppose you should pay more attention.

PARRY: What are you getting at?

ELIZA: What do you think?

PARRY: I don’t understand your motives.

ELIZA: Why don’t you understand my motives?

Adam: And they just went on like this. No breakthroughs happened. And honestly I feel that PARRY, the schizophrenic, he comes off better in the interactions than ELIZA.

But here’s the thing. They talked to each other using early ARPANET, what would become the internet. Also Colby actually did this neat thing with PARRY. He had real therapists talk to PARRY, telling them it was a patient who for some unexplainable reason was only available over a teletype computer. Maybe a paranoid schizophrenic with access to one of the few ARPANET terminals that existed at that time. I don’t know. And then he had real therapists talk to real paranoids, and then he took those two transcripts to a third party, to a therapist who was a judge, and had them try to pick which one was the fake. This was the paranoid schizophrenia Turing test. And guess what? PARRY passed. Turing test defeated. Or sort of. Can you count this? I don’t know. I don’t think so.

The Spread of ELIZA

Anyways, back to the ELIZA code. You see, the thing is, personal computers were starting to be a thing. And at Penn, where they had the ENIAC, of course everyone’s using LISP and Fortran, but out in the world were Apple IIs and other early personal computers, and they all ran BASIC.

Jeff: Certainly my BASIC version was replete on Apple IIs. And by the way, there are infinite numbers of knockoffs of my BASIC version.

Adam: At some point it ends up on the early IBM PCs.

Jeff: Yeah, same route. David Ahl, who was the founder and editor of Creative Computing, so Steve North and David Ahl did it together, I don’t know the history of that in detail, but they were the two people associated with it. So David Ahl later went and collected the best of Creative Computing into a book of the best computer games or whatever, best BASIC computer games, I forget what it’s called. I think that came with a floppy disk, and so you could run everything in the best of computer games, and so everybody got it from that point on. You didn’t have to subscribe to Creative Computing, you just had to be interested in computer games. And because there was the BASIC computer games, from then on it was everywhere, essentially. Along with Hunt the Wumpus.

Adam: I’m not familiar with that.

Jeff: Well, I’m kidding. It’s an early version of a top view dungeon crawler.

Adam: And from then on, Jeff’s code was everywhere. Everyone was copying his version.

Jeff: Because they couldn’t read LISP anyway and they didn’t have access to LISP, but everybody had access to BASIC. So they copied mine, and you can always tell, because all of my knockoff ELIZAs, first of all they suck compared to the original, and they always have 36 responses, because mine had 36 responses, and they copied it exactly.

Adam: Jeff’s code was even ported to LISP, which is funny because there was already at least one LISP version in circulation. But for Jeff, what was most interesting about ELIZA wasn’t the AI part. It was something else.

Cognitive Science Time

Jeff: So language was always the big mystery. And remains. It was the big mystery. It remains the big mystery. We could make computers behave, we could make robots balance sticks. Back in the ’80s we could do that, actually almost in the ’70s. I don’t care if a computer can play chess. Obviously a computer can play chess. Obviously a computer can play go. Obviously a computer could drive a car. It’s just a matter of getting it to do it. What’s cool is that humans can play chess, play go, and drive cars. That’s really interesting. Big enough computer, sure, it could do that. And there’s lots of interesting sub-problems in that.

So I was never really an AI person. I was really a cognitive scientist. And discourse, discussion, the interactivity, is what science is all about. It’s what any kind of daily life is all about, but especially science. You’re interacting with the domain. You’re having, in a sense, a discourse with the domain, with the thing that you’re working on. If you had models that you could build simulators or enough labeled data, you wouldn’t need science. You could just… Statistics would give you the answers to your questions. So taking data, doing state of the art science, it’s the learning version of a discourse problem. You’re having an interaction with the world.

Adam: So Jeff decided to leave computer science and go to Carnegie Mellon University and study psychology.

Jeff: Now going to CMU psychology isn’t exactly blowing off computer science, considering it’s a very small school, most of which is a computer science department. And everything there is infused by computer science. And Simon and Newell are the godfathers of cognitive science and AI. They basically invented those fields. And they worked at CMU across from each other… Literally you could throw rocks from one’s windows to the other’s.

The CMU Fleecing Room

Adam: So Allen Newell and Herbert Simon, they were Jeff’s advisors. And along with Alan Turing and Marvin Minsky and John McCarthy, they were considered the fathers of AI. Simon and Newell, among other breakthroughs, created IPL, the direct predecessor of LISP. IPL introduced new concepts, like recursion, and the heavy use of lists. And they used this language to prove math theorems. This led to them winning the Turing Award, and Simon got something even more renowned.

Jeff: I don’t know if this is a story worth telling, but Raj Reddy, who was the department head of the computer science… In computer science every year, there’s this day called Black Friday. Black Friday is the day that the computer scientists sit around in a room, which we called the fleecing room, it’s a really nice room. Fleecing, because that’s where you took the government to get money, to do demos. And all the computer scientists, faculty, would sit around and they would decide who they’re going to kick out of the program. That’s why it’s called Black Friday. It was hard to get kicked out of CMU, but people got kicked out. And this is the one meeting that professors were not allowed to miss.

Now, I wasn’t a professor. I was a student. But I’ve heard on good authority, and I believe this story. So Simon could not make the meeting. And so Reddy said at the meeting, he said, “Thank you all for being here. I’m sorry Herb couldn’t be with us, but he has an excuse note from the King of Sweden.”

Adam: Herbert Simon was off receiving his Nobel Prize in economics for his work on decision making. And this was at a time when AI and computer cognition were just having a Cambrian explosion. Every field was going to be revolutionized by computers. There were just so many different ideas that everybody could go off in a different direction. And Simon and Newell at CMU, they were hugely competitive.

GOFAI

Jeff: Simon took the cognitive science part of it, the human modeling part of it, into the psych department, and Newell took the practical making AI do stuff, but they were very close to each other. But McCarthy was off on another planet.

Adam: That’s John McCarthy, creator of LISP, who was at Stanford.

Jeff: And Weizenbaum was off on another planet. Colby was off on another planet. And Minsky, from Simon and Newell’s point of view, was off on another planet.

Adam: That’s Marvin Minsky, who was at MIT with Weizenbaum, ELIZA’s creator.

Jeff: And so there were lots of stories about that, things that I won’t repeat, that some of them told me about others of them in passing. But it was very interesting, because there were all these different views of AI, including Hinton, who was yet another one.

Adam: And that’s Geoffrey Hinton, whose work on neural networks and image recognition led to the current deep learning breakthroughs. This group, they were the leaders of symbolic AI, what we now call GOFAI. Good old fashioned AI.

Jeff: In a sense, everybody had a love-hate relationship with everybody else.

Adam: And probably because of all this excitement, and all this competition, and because of the PC revolution, ELIZA was out there spreading, all through the ’80s and ’90s. And then Jeff made another career change.

Phylogeny and Code-Matching

Jeff: In about 2000, I dumped AI completely, and decided to become a molecular biologist, a marine molecular biologist. That’s what I want to do. Screw the computers. Now it turns out you can run, but can’t hide from computers, but I learned that later. And went into a lab, was pipetting literally and growing algae, and when I went into biology, I learned about phylogeny, and computational phylogeny, which at that point was just coming up, because it was just a few years after the genome, and we were sequencing not quite by hand, we had machines, but in bacteriology or microbiology, phylogeny is actually incredibly important, because they change really fast, and it really matters. So the phylogeny of COVID matters a lot.

Adam: Phylogeny is building a phylogenetic tree. It’s like a family tree, but of how species evolve, which gets Jeff thinking. If he could measure the changes between all these different versions of ELIZA out there, he could build a tree of that. And then wouldn’t that be something?

Jeff: This is the really interesting problem here. Looking at the history of how programs come from other programs, the genealogy. The phylogenetic tree of programs, which goes through human beings sometimes, most of the time, certainly historically, and is actually critically important to law and patent law and all this stuff. Who copied whose code? And there was a big brouhaha between Oracle and Google, and I think Eric Raymond actually wrote a program that tried to code match. And of course, code matching is critical in git pushes and things like that. And so code matching in general is an interesting problem, but to develop phylogenies is actually even a more interesting problem. To develop phylogenealogies of computer programs.

Adam: And so now Jeff starts collecting versions of ELIZA. He’s now like Darwin in the Galapagos, collecting and cataloging specimens, in hopes that he’ll be able to find some relationship and build a tree. ELIZA is his beak of the finch.

The Original ELIZA

Adam: But there’s one program out there he doesn’t have for his collection, a program that everyone’s copying, but that no one’s ever seen. I mean, who knows how this thing works? I’m talking about the original. And then Weizenbaum passed away. And it turns out that MIT’s librarians have an archival process. They archive and preserve the works of faculty members. So maybe, if Jeff flies out to Boston, he can get access to it.

Jeff: But then COVID happened. And possibly the only good thing about COVID is it made remote stuff much more practical. Not only practical but required. And so I said, “Well, I’m still not going to fly to MIT, but maybe I’ll ping them and see whether they’ll look it up for me.” And they did. So this fellow named Myles Crowley from the MIT archives library pulled the box that said computer conversations. I had looked it up in their online list. It didn’t say ELIZA code, it said computer conversations, box eight. So I said, “Can you pull box eight?” And we got on a Zoom, and I don’t know, we’re looking for some code, maybe there’s some code here. We opened the first… Folder one, box eight, and there it is.

Adam: Were they holding it up to the…

Jeff: Yeah. Well, he’s got a flat, an overhead camera.

Adam: Oh, he’s got an overhead camera.

Jeff: Yeah. So he opens it up, and there’s ELIZA, there’s the original code, or some version of it, printed out on those old printers, like I used to use, and all people of a certain age used to love, in that big paper format. It was the original ELIZA with the exact, or almost exact, doctor script.

Adam: The doctor script wasn’t a transcript. It was code. The code ELIZA ran to act like the therapist. But that’s really only part of the code base. Then they found more code.

Jeff: Holy shit. That’s it. That’s what we’re after. We’re done, I can die happy. And we took pictures of it, and then I got a good copy. And it’s fascinating. It’s fascinating. And we’re still discovering things about it. We found many conversations with ELIZA that nobody had ever seen before, many of them hand edited by Weizenbaum. Some of these were obviously class overheads, or even better than overheads, something that I’m sure is before your time. Do you know what a ditto is?

Adam: No idea.

Jeff: Ditto is an early hallucinogenic drug. It was the most wonderful smell. It was a copy machine, but it did it by some complicated printing VOC system, volatile organic chemical, and they smelled great. But anyway, there were tons of these in the files, because he was teaching this stuff.

MAD SLIP

Adam: So if you were to look at this original ELIZA code, there’s one thing you might notice pretty quickly.

Jeff: Everyone associated ELIZA with LISP. And it’s not true. It was originally written in a symbolic… A list processing language, but not LISP. It was written in this thing called SLIP. So let’s say everybody after 1970 thought that ELIZA was written in LISP. The reason for that is that the only available version of ELIZA was Bernie Cosell’s. He wrote it at BBN in Maclisp, I guess would be… Well, it was BBN LISP at the time. And it ran on probably early PDP-10s or something. So it was the version which got around academically.

Adam: This was the version Jeff saw at Penn.

Jeff: But that was the earliest code anyone had. Nobody had seen the original version, at least after the publication of the ELIZA article, as far as we know.

Adam: Because Cosell, the source of the “original LISP version,” he had never seen the real ELIZA code either. Because how could he have?

Jeff: And because we didn’t publish code in papers at the time, in scientific papers. We published the algorithm.

Adam: So it seems like the LISP version wasn’t the original either. It turns out that ELIZA was written in something called MAD-SLIP. And if you were a scholar of Weizenbaum’s works, you might have already known this, because he had a previous paper published all about SLIP.

Home-Brew Programming Languages

Jeff: This is so fascinating from a just engineering standpoint. So MAD-SLIP, as I said was not LISP. It was actually explicitly… Interestingly, he doesn’t mention LISP. Remember I was talking about the infighting. Weizenbaum doesn’t mention LISP even once, as far as I recall, in the SLIP paper.

Adam: It’s like he made something almost exactly the same as McCarthy’s LISP, but never mentioned that maybe he had seen…

Jeff: Exactly. And I’m sure he had, he absolutely had. And in fact, there’s a line in the paper that says, “Some people love their home brew programming languages too much, whereas what we really want to do is put it in Fortran, which everybody uses.”

Adam: Okay, I’m not an academic, but this seems like a huge burn. I won’t mention that thing that this unnamed person is so obsessed with, but by the way, I think he’s a little bit too focused on his home-brew nonsense. Coincidentally, this is also why everybody thought ELIZA was written in LISP. Snippets of it were published in the original paper, and they looked like LISP S-expressions. And considering the motivations here, it’s wild that Weizenbaum’s chat bot would actually be something used to show off the power of LISP, when really, in this competitive little circle of AI pioneers, he was actually trying to kick some dirt at a rival.

Critical Code Studies

Adam: Anyways, the next thing Jeff found in the code was mind blowing. But to get to that discovery, Jeff needed to understand the code. And to do that, he needed help. It turned out he wasn’t the only one interested in the ELIZA code.

Jeff: Unbeknownst to me, by the way, David Berry and the people who really care about computer history had been wanting to find the original ELIZA. Anyway, they had looked at my ELIZA. I had no idea, I found this out later. They had actually done a group socio- What do you call it? A literary criticism. They’re a group of people, it’s fascinating, they do software literary criticism.

Adam: So Jeff brings them to the archives of Weizenbaum, and they try to make sense of them. I mean, obviously Dittos or overhead slides of conversations are easy to understand, but that’s not the implementation. That’s just the output. And a system this complex, with this many antiquated languages, there’s really only one way to understand it. You want to get the thing running. You want to translate it to a modern medium and run it. This developer named Anthony Hay gets to work on that.

Jeff: There are really three, or depending on how you count, four completely different pieces of code. There’s the ELIZA, which was written in MAD-SLIP, and MAD-SLIP is a crazy program. I mean, it’s like an ALGOL… MAD itself is like ALGOL, except bizarrely, it had these words like “whenever.” So when we would write these days “when,” or “if,” it had “whenever.” Now, who the hell is going to type out “whenever,” especially on a card punch? And so you could abbreviate it as “w’r.” And so the code is all… If you look at it, it’s unreadable, because instead of… They even did it with “end.” Instead of “end,” it’s “e’d.” It’s like, what the fuck? Why are you abbreviating “end”? So the code looks like a whole bunch of unreadable scribble. It’s almost APL. So we translated it into reasonable, modern language, or at least modern readable stuff. Anthony Hay did a lot of that, but I did some work on it and some others.

But in any case, the other thing is, remember I said it was written in MAD-SLIP. So he built SLIP as a set of packages that could be used originally in Fortran, and then in MAD. But ELIZA used two special functions, which were SLIP functions that did not appear in the SLIP code. They were nowhere.

Adam: More missing code. That’s going to make translating this hard. So far, we have MAD, which is Michigan Algorithm Decoder, and in it was written SLIP, which is not LISP, but sure looks a lot like it. And then we have the ELIZA script, which contains the specific responses, and it’s written in SLIP, but that’s not enough to get it working. We still have these missing pieces.

Match, Assemble and FAP

Jeff: So they were called Match and Assemble, and really the heart of ELIZA was these two SLIP functions. One did the matching, the grep version of the regex, and one did the rebuilding, the assemble version of the regex. And the rest of ELIZA is just framework for that. It’s the workflow. These functions were nowhere. So we found them. Recently we went back, the original work we did was two years ago, but we went back again just a month ago to the archives, to the SLIP code, and we found these two functions. We also found a hash function that Anthony Hay had been dying for. He was bothering me constantly about finding this hash function, and we found the hash function. And so now Anthony Hay is… He could die happy. Basically we found his hash function, and his ELIZA is now a yet more perfect reimplementation in C. I think it might be C++.

But SLIP actually internally used this thing called, unfortunately named, FAP. I didn’t name it. At the time, it probably didn’t mean what we’re all laughing about, but anyway. And what it was was, you could basically… It was an API to put [inaudible 00:35:33] function calls from Fortran to assembly. So you had MAD, SLIP, FAP, and at the top the doctor script, and at the bottom, of course, the MAD system and the operating system it was running on. And all of this is documented, but no one had the code. That is to say, it’s mentioned. I say documented in the sense of mentioned. But they didn’t have GitHub. So like I said before, they never published the code. So we had to go find it. So we went and found it. We found it all the way down. So we’ve got it down to the 7090 machine code, and Anthony Hay has been analyzing that. Really recently, like a week ago, I got his note that he had analyzed it and made it run.

Adam: They got it. They got it running at last. And now, with the true original version in front of them, or at least a port of it into C++, there are things that start to make sense.
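
To give non-MAD-SLIP readers a feel for what those two recovered functions do, here is a rough modern analogy in Python: a decomposition pattern with wildcard slots pulls pieces out of the input (the Match side), and a reassembly template stitches them into a reply (the Assemble side). The rules below are illustrative stand-ins, not rules from the recovered listing, and the regular expression is only a convenient modern shorthand for what SLIP did with lists.

    import re

    # Rough analogy for ELIZA's Match and Assemble: decompose the input
    # against a pattern with "*" slots, then rebuild a reply from the parts.
    # These rules are illustrative, not from the 1960s code.

    def match(pattern: str, text: str):
        regex = "^" + re.escape(pattern).replace(r"\*", "(.*?)") + "$"
        m = re.match(regex, text.strip().lower())
        return list(m.groups()) if m else None

    def assemble(template: str, parts) -> str:
        # "{2}" in a template means "slot number 2 from the match".
        return re.sub(r"\{(\d+)\}", lambda g: parts[int(g.group(1)) - 1].strip(), template)

    RULES = [
        ("* my *", "Tell me more about your {2}."),
        ("i am *", "How long have you been {1}?"),
        ("*", "Please go on."),
    ]

    def reply(text: str) -> str:
        for pattern, template in RULES:
            parts = match(pattern, text)
            if parts is not None:
                return assemble(template, parts)

    print(reply("I am worried about my family"))  # Tell me more about your family.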

ELIZA Learns!

Jeff: One of them is that… Now, you remember ELIZA was named after the Pygmalion character, Eliza Doolittle, who is not related to Dr. Dolittle as far as I know, who spoke to animals, which is an interesting conceptual relationship, but it’s like a pun in the literary world that wasn’t intended. So it was named after Eliza Doolittle, and if you remember the Pygmalion story, the idea was that there was some high British asshole who was going to teach some “poor uneducated woman…” I mean, the whole thing is horrible. Actually, if you read it, as I understand it, it’s not quite as horrible as it sounds, that really she wins the day, and she’s street smarts and he’s just asshole smarts, and so she wins, but anyway.

Adam: I think that Pretty Woman is a modern take on it.

Jeff: Oh.

Adam: Street smart. I mean, they make her a prostitute.

Jeff: But she tries to get brought up… Yeah, that’s right. Oh, interesting.

So why would you call it ELIZA? Because it didn’t learn, right?

Adam: Yeah.

Jeff: But it turns out, it did.

Adam: This is a big deal. It can learn. Maybe Weizenbaum saw himself as the professor from Pygmalion teaching his robot Eliza. Anyhow, the point is, this 1964 piece of code would learn, at least to a certain extent.

Jeff: So if you look at the original code, there’s a whole teaching piece, which is actually mentioned in the paper, but only in one line. It is never brought up again, and nobody recognizes it, that you could type… And I forget the command, but if you look at the code, you could type “learn” or whatever, and then you could type in those S-expressions, you could train it in real time as it was going. Totally lost to history, but it’s a huge piece of the code.
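
The exact command and script syntax are not in the transcript, so treat the following as a hypothetical illustration only, in Python rather than MAD-SLIP. The point it makes is small: in a script-driven chatbot, “teaching” at runtime is just appending new rules to the same table the matcher already walks, which is why the feature could fit inside a 1960s program.

    # Hypothetical illustration of runtime teaching. The ":teach" command
    # and the "prefix => reply" syntax are invented for this sketch; they
    # are not Weizenbaum's. New rules simply go into the same rule table
    # the normal reply loop already consults.
    RULES = [("i am", "How long have you been {rest}?"),
             ("", "Please go on.")]  # "" is a catch-all prefix

    def reply(text: str) -> str:
        t = text.lower().strip()
        for prefix, template in RULES:
            if t.startswith(prefix):
                return template.format(rest=t[len(prefix):].strip())
        return "Please go on."

    def handle(line: str) -> str:
        if line.startswith(":teach "):
            prefix, _, template = line[len(":teach "):].partition("=>")
            RULES.insert(0, (prefix.strip().lower(), template.strip()))
            return "Okay, I will use that rule."
        return reply(line)

    print(handle(":teach i dream of => How often do you dream of {rest}?"))
    print(handle("I dream of the sea"))  # How often do you dream of the sea?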

ELIZA’s Programmable!

Adam: One other thing they found was that a student of Weizenbaum’s, Paul Howard, had made the whole thing programmable.

Jeff: So you could put MAD code, MAD-SLIP code, into the script. So it actually became a programmable script. In other words, the script could have code in the script itself. So instead of just sentences, you could say, “At this point, we’re going to modularize this, and make it available to talk about relativity in this domain.” It was fantastic. It was an amazing job.

Adam: So it turns out ELIZA’s a bit more like a framework for writing chat bots. The doctor script that Jeff and so many others knocked off, that’s like their chat bot “Hello, World.” So now Jeff and this group, they have the original ELIZA, and they set up a website. Jeff’s collecting copies of it for his family tree. It’s a huge undertaking called ELIZAGEN, ELIZA genealogy. And it’s still a work in progress. But ELIZAGEN can tell you, for instance, that Richard Stallman’s doctor that’s found in Emacs is most likely based on Cosell’s version. And Peter Norvig’s version also comes from Cosell, possibly via some intermediary. And that random ELIZA you see on GitHub in JavaScript or Rust, if it has 36 responses, it might be the nth generation of the version printed in Creative Computing, that version that Jeff wrote.

ELIZA’s Impact

Adam: Jeff always felt that his version wasn’t a true ELIZA, but really no version was. And you might say, who cares? Who cares if people got it a bit wrong? It wasn’t written in LISP, and it did more than people thought. But the thing is that the impact of ELIZA has been huge.

Jeff: The positive impact it had, although some people might say it was a negative impact, I think it was positive, was to make people believe that AI was going to go somewhere. It made me believe AI was going to go somewhere, it made a lot of people believe AI was going to go somewhere. It’s like, okay, you can prove theorems from Whitehead and Russell or whatever it was. That was what Simon and Newell were working on. And maybe playing chess. It was checkers at the time, Samuel’s Checkers Player, which learned, by the way. But this was real… This was HAL.

Adam: HAL is of course from 2001: A Space Odyssey. The award winning movie had Marvin Minsky as a consultant, and Minsky was at MIT with Weizenbaum, and surely saw ELIZA.

Jeff: 2001, I think, was strongly influenced by the ability to actually have conversations with computers that Weizenbaum had demonstrated to Minsky’s satisfaction.

The most interesting part of 2001 is actually when David Bowman is turning HAL off. Do you remember this? He’s floating around in zero-G in HAL’s memory banks, and he’s pulling memory banks out in zero-G, floating around in this thing. It’s an amazing scene.

And the cool part of it is that HAL is basically a neural network, and he’s degrading gracefully. So he starts singing, (singing) Blah, blah, blah, blah. I don’t remember, it has something about a bicycle built for two. But it starts slowing down as he pulls memory banks, and it doesn’t just turn off. He doesn’t turn HAL off, it starts slowing down. (singing) And then it reverts to its early education. It starts saying, “I want to introduce you to my teacher. This is Dr. Langley.”

So for me… And this is modern. This is totally modern, still, the natural degradation of neural nets. It learned. It learned this stuff. It trained its neural nets. A human trainer, a father, a mother, trained HAL.

The Key Is Having Discourse

Adam: For Jeff, chat bots are a part of history, but also they’re the future, because of what they’ll hopefully teach us about ourselves.

Jeff: How can an artifact, a computer, reason about things?

How could it interpret the world as a set of things that it can then reason about, and take action on, and explore, and talk to?

Whether it’s the scientific version of talking to, that version of discourse, or the humans talking to each other version of discourse – how you learn about other people.

How we hopefully have democracy is that we have discourse, which we’re not doing right now.

That’s the problem. The problem is, we’re not having discourse. We’re yelling at each other, not saying, “What makes you think that?” I mean, in a nice, interested way, like ELIZA.

Outro

Adam: That was the show, but it’s not the end of the ELIZA story.

Jeff is still working to track down versions of ELIZA. Actually, Ron Garret from episode 76, LISP in Space, helped Jeff retrieve an old LISP version from an Apple II floppy. If you haven’t heard Ron’s story, you might want to check out that episode. But thank you to Jeff for tracking down ELIZA and showing us how we got it all wrong.

You can find out more at ELIZAGEN; link in the show notes. And as I alluded to, this isn’t just Jeff’s story. He would like to thank Myles Crowley, the MIT archival librarian who led him through the archives, and also Dave Berry, Anthony Hay, and Peter Millican, whose work on understanding the code was central to this project. Also the Weizenbaum estate and Pm Weizenbaum for granting the open sourcing of the archive.

And if you want to learn more about the Turing test and how it relates to chat bots, and maybe how we’ve been looking at the Turing test wrong as well, I’ll be talking to Jeff a little bit about that as a bonus episode for Patreon supporters. Also a link in the show notes.

And until next time, thank you so much for listening.

  1. Update: This was The University of Pennsylvania, not Penn State. All mentions of Penn State are an error by Adam. Although Jeff notes that this is so common an error that UPenn students sometimes wore “Not Penn State” T-shirts when he was there. 

