PARRY and ELIZA: When Early Chatbots Talked to Each Other

PARRY was one of the clearest direct developments from ELIZA-style conversation: a 1970s program with a more definite simulated personality, remembered especially for its strange ARPANET conversation with DOCTOR.

PARRY: ELIZA with attitude

ELIZA showed that a computer did not need to understand a conversation in order to appear, briefly and unsettlingly, as though it were taking part in one. Its most famous script, DOCTOR, used the style of a Rogerian psychotherapist: it reflected statements back as questions, picked up keywords, and encouraged the user to continue.
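The keyword-and-reflection technique can be sketched in a few lines. This is an illustrative reconstruction, not Weizenbaum's original MAD-SLIP code: the rule table, pronoun swaps, and response templates below are simplified stand-ins for DOCTOR's much larger script.

```python
import re

# Simplified first-person -> second-person swaps used when
# reflecting a statement back at the user.
SWAPS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# A tiny keyword table: pattern -> response template.
# "%s" is filled with the reflected remainder of the input.
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are %s?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your %s."),
]

# Fallback prompt when no keyword matches, as in the transcript.
DEFAULT = "Please go on."

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(SWAPS.get(w.lower(), w) for w in fragment.split())

def doctor(line: str) -> str:
    """Return a DOCTOR-style reply: first matching rule, else fallback."""
    for pattern, template in RULES:
        m = pattern.search(line)
        if m:
            return template % reflect(m.group(1))
    return DEFAULT
```

Fed "I am worried about bookies", this returns "Why do you say you are worried about bookies?"; fed anything it cannot match, it falls back to "Please go on." That fallback is exactly the behaviour visible in the 1972 transcript.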

A few years later, psychiatrist Kenneth Mark Colby developed PARRY at Stanford University. PARRY was not simply another therapist-style chatbot. It was designed to model a fictional patient with paranoid beliefs, using a more specific internal structure of assumptions, concerns, and defensive responses. That makes it an unusually clear example of ELIZA's direct legacy: not a distant modern chatbot vaguely descended from ELIZA, but an early conversational program created in the same terminal-based world of psychiatry, symbolic AI, and typed interaction.

The contrast is useful. ELIZA's DOCTOR script often feels empty behind the curtain: it asks questions, reflects phrases, and waits for the human to supply meaning. PARRY, by contrast, had something closer to a persistent persona. It could return to its own worries, resist questions, become suspicious, and steer the discussion toward topics such as gambling, bookies, racketeers, and the Mafia.
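Colby's program is usually described as tracking internal affect variables, commonly summarised as fear, anger, and mistrust, which rose or fell with the input and shaped which family of replies came out. The sketch below illustrates that idea only loosely: the trigger words, thresholds, and canned responses are invented for this page, not taken from PARRY's actual rule set.

```python
# Loose sketch of a PARRY-style persona. The affect variables
# (fear, anger, mistrust) persist across turns; the trigger words,
# thresholds, and replies are invented for illustration.

TRIGGERS = {
    "police": ("fear", 0.3),
    "mafia": ("fear", 0.4),
    "why": ("mistrust", 0.2),
    "crazy": ("anger", 0.5),
}

class Persona:
    def __init__(self):
        # Baseline affect: mildly fearful and mistrustful from the start.
        self.state = {"fear": 0.2, "anger": 0.1, "mistrust": 0.2}

    def reply(self, line: str) -> str:
        # Raise affect levels when trigger words appear in the input.
        for word in line.lower().split():
            word = word.strip("?.,!")
            if word in TRIGGERS:
                var, amount = TRIGGERS[word]
                self.state[var] = min(1.0, self.state[var] + amount)
        # Pick a response family based on whichever affect is elevated.
        if self.state["fear"] > 0.5:
            return "I know the mob controls the big rackets."
        if self.state["mistrust"] > 0.5:
            return "Why do you need to know that?"
        if self.state["anger"] > 0.5:
            return "You have no right to say that."
        return "I went to the races last week."
```

Because the state persists between calls, a mention of the Mafia on one turn keeps colouring the replies on later turns, which is the behaviour that makes PARRY feel like a guarded interviewee rather than a mirror.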

ELIZA / DOCTOR

  • Therapist-like prompt style
  • Keyword matching and reflection
  • Often waits for the user to provide meaning
  • Designed to demonstrate typed natural-language interaction

PARRY

  • Fictional patient persona
  • More persistent topic focus
  • Suspicious and defensive responses
  • Designed as a simulation of a particular communication pattern

The 1972 PARRY and DOCTOR conversation

The most famous encounter between the two programs was preserved by Vint Cerf in RFC 439, PARRY Encounters the DOCTOR. Although the RFC was dated 21 January 1973, it states that the session itself took place on 18 September 1972.

This was not quite the simplest possible story of "ELIZA at MIT talking directly to PARRY at Stanford." The RFC says that PARRY was running at the Stanford Artificial Intelligence Laboratory, while DOCTOR was running on a BBN TENEX system, with both being accessed from UCLA. In practical terms, though, the historical importance remains the same: an ELIZA-style DOCTOR program and Colby's PARRY were connected across the early networked computing world and allowed to generate a transcript.

The result is comic, but also revealing. DOCTOR keeps trying to continue the session with prompts such as "Please go on", "What does that suggest to you?", and "Why do you ask?" PARRY repeatedly returns to its own concerns about racing, gambling, bookies, racketeers, and personal suspicion. One program behaves like a therapist who cannot be drawn into the substance of the story; the other behaves like a guarded interviewee who thinks too many questions may be dangerous.

Why the exchange matters

The PARRY/DOCTOR conversation matters because it removes the helpful human from the loop. When a person talks to ELIZA, the person often does much of the interpretive work: they read intention into the program's questions, forgive its evasions, and supply emotional continuity. But when PARRY talks to DOCTOR, there is no human user smoothing over the gaps. The two systems expose each other's limitations.

DOCTOR cannot understand why PARRY keeps circling back to gambling and organised crime. PARRY cannot understand that DOCTOR's questions are mechanical prompts rather than suspicious interrogations. The transcript becomes a machine-generated misunderstanding: two programs performing conversation without sharing meaning.

A direct descendant, not just a distant influence

PARRY deserves a place in ELIZA's direct afterlife because it took the basic idea of typed human-computer conversation and pushed it in a different direction. ELIZA had demonstrated the power of role, script, and expectation. PARRY asked what might happen if a conversational program had a more specific internal model: not just a script for asking questions, but a structured set of beliefs and reactions.

That does not make PARRY intelligent in the modern sense. It did not understand the world as a person understands it. It did not reason freely about arbitrary topics. But compared with ELIZA's deliberately shallow reflective style, PARRY tried to maintain a more consistent simulated point of view.

What PARRY tells us about ELIZA

Looking at PARRY helps explain why ELIZA mattered. ELIZA's importance was not that it solved conversation. It did not. Its importance was that it revealed how little machinery might be needed before people started treating typed output as socially meaningful. PARRY built on that discovery by showing that a more definite persona could make the illusion stronger, stranger, and more testable.

The PARRY and DOCTOR exchange is therefore more than a novelty transcript. It is an early snapshot of several questions that still surround conversational AI today: whether convincing responses require understanding, how much personality can be produced by rules, when simulation becomes misleading, and why humans so readily treat text as evidence of a mind behind it.

Quick facts

  • PARRY created by: Kenneth Mark Colby
  • Institution: Stanford University
  • Famous comparison: often summarised as "ELIZA with attitude"
  • Famous encounter: PARRY and DOCTOR connected over ARPANET
  • Session date: 18 September 1972
  • RFC date: 21 January 1973
  • Why it matters: one of the clearest early developments from ELIZA-style conversation toward persona-driven chatbot design

Historical note

Some older sources describe PARRY using clinical terminology that is now dated. This page treats PARRY as a historical computer simulation of a fictional patient with paranoid beliefs, not as a clinically meaningful model of mental illness.
