How ELIZA works

ELIZA is a rule-based chatbot. It does not understand meaning in the way modern users often imagine. Instead, it follows a small but clever pipeline: normalise the input, look for strong patterns, reflect parts of your wording back from a new perspective, then choose a fitting response template.

Key ideas: pattern matching, reflection rules, templates, fallbacks, topic recall.

The basic loop

  1. Normalise the input by trimming whitespace, lowercasing it, and simplifying punctuation.
  2. Check ordered rules for strong patterns such as feelings, family, beliefs, or reasons.
  3. Capture a phrase from the user’s sentence if the rule needs one.
  4. Apply reflections so “I”, “my”, or “you are” can be shifted into the bot’s perspective.
  5. Choose a response template and insert the reflected phrase where needed.
  6. Fallback gracefully with stock prompts if no strong rule matches.
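The loop above can be sketched in a few lines of JavaScript. All names here (`normalise`, `rules`, `fallbacks`) are illustrative, not the identifiers this site's script actually uses, and reflection (step 4) is left to the reflections section further down.

```javascript
// Step 1: normalise — trim, lowercase, strip trailing punctuation.
function normalise(text) {
  return text.trim().toLowerCase().replace(/[.,!?;]+$/, "");
}

// Steps 2-3: ordered rules; a capture group grabs the user's phrase.
// (Two toy rules only — the real rule set is larger.)
const rules = [
  { pattern: /^i feel (.+)$/, template: "Do you often feel (1)?" },
  { pattern: /\bbecause (.+)$/, template: "Is that the real reason?" }
];

// Step 6: stock prompts for when no strong rule matches.
const fallbacks = ["Please tell me more."];

// Step 5: pick a template and fill the (1) slot with the capture.
function respond(input) {
  const text = normalise(input);
  for (const rule of rules) {
    const match = text.match(rule.pattern);
    if (match) return rule.template.replace("(1)", match[1] ?? "");
  }
  return fallbacks[0];
}
```

With these toy rules, `respond("I feel anxious today.")` produces "Do you often feel anxious today?", while unmatched input falls through to the stock prompt.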

A worked example

Suppose the user types “I feel anxious today.” The script first normalises the text, then checks its ordered rules. The feelings rule matches because the sentence begins with “I feel”. The captured phrase is “anxious today”. That phrase is passed through the reflection step where appropriate, then plugged into a template such as:

Do you often feel (1)?

The result is a reply like “Do you often feel anxious today?” or another variation from the same rule set. The effect feels conversational even though the system has not analysed the emotion in any deep sense.
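The match-and-fill step from that example takes only a few lines. The rule shape below is a sketch, not the script's actual data structure.

```javascript
// The "feelings" rule from the worked example: one pattern plus
// response templates (names and templates here are illustrative).
const feelRule = {
  pattern: /^i feel (.+)$/,
  templates: ["Do you often feel (1)?", "How long have you felt (1)?"]
};

// Match the normalised input and fill the (1) slot with the capture.
function applyRule(rule, text) {
  const match = text.match(rule.pattern);
  if (!match) return null;
  const template = rule.templates[0]; // the real script may pick at random
  return template.replace("(1)", match[1]);
}
```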

Reflections (I ↔ you)

One of ELIZA’s most recognisable tricks is reflection: taking a piece of the user’s wording and flipping its perspective. That is why phrases such as “I am”, “my”, or “you are” can be turned into something that sounds like a therapist-style response.

// Example reflection mapping (simplified).
// Order matters when applying it: longer keys such as "i am" must be
// tried before shorter ones like "i", and each span of text should be
// replaced only once, or "i am" → "you are" gets flipped straight back.
const reflections = {
  "i am": "you are",
  "i": "you",
  "my": "your",
  "you are": "I am",
  "you": "I"
};
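One common way to avoid the double-flip problem is a single pass over tokens, so a word that has just been swapped in is never swapped back. The single-word map below is illustrative, not this site's exact code.

```javascript
// Single-pass, word-level reflection: each token is looked up once.
// (Multi-word keys like "i am" would need phrase matching first;
// this illustrative map sticks to single words.)
const wordReflections = {
  "i": "you", "me": "you", "my": "your", "am": "are",
  "you": "i", "your": "my", "are": "am"
};

function reflect(phrase) {
  return phrase
    .toLowerCase()
    .split(/\s+/)
    .map(word => wordReflections[word] ?? word)
    .join(" ");
}
```

For example, `reflect("i am sad about my job")` yields "you are sad about your job".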

Rules in this version

This browser version uses a compact rule set rather than a huge script. The current rules look for topics such as:

  • Feelings: “I feel…” / “I am feeling…”
  • Family: mother, father, parents, sister, brother, partner
  • Beliefs: “I think…” / “I believe…”
  • Reasons: “because…”
  • Difficulties: “I can’t…” / “I cannot…”
  • Wants and needs: “I want…” / “I need…”
  • Dreams: dream / dreamt / dreamed
  • Absolutes and tension: always / never
  • Deflection toward the bot: “you…”
  • Simple yes/no handling and generic fallback replies
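Such topics can be held in one ordered array, checked first-to-last so more specific patterns win. The regular expressions below are rough approximations of the topics listed above, not the site's exact patterns.

```javascript
// Ordered topic rules: checked first-to-last, first match wins.
// Patterns are illustrative approximations of the topics above.
const topicRules = [
  { name: "feelings",     pattern: /\bi (?:feel|am feeling)\b/ },
  { name: "family",       pattern: /\b(?:mother|father|parents|sister|brother|partner)\b/ },
  { name: "beliefs",      pattern: /\bi (?:think|believe)\b/ },
  { name: "reasons",      pattern: /\bbecause\b/ },
  { name: "difficulties", pattern: /\bi can(?:'t|not)\b/ },
  { name: "wants",        pattern: /\bi (?:want|need)\b/ },
  { name: "dreams",       pattern: /\bdream(?:t|ed|s)?\b/ },
  { name: "absolutes",    pattern: /\b(?:always|never)\b/ },
  { name: "deflection",   pattern: /\byou\b/ }
];

function topicFor(text) {
  const rule = topicRules.find(r => r.pattern.test(text));
  return rule ? rule.name : "fallback"; // generic replies when nothing matches
}
```

Ordering is the design choice that matters here: "I feel you never listen" should trigger the feelings rule, not the absolutes or deflection rules, so the stronger patterns sit earlier in the array.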

Why the illusion works

ELIZA feels more impressive than its machinery because its design choices are well judged. It asks open questions, avoids making too many factual claims, and keeps the conversation centred on the user. That combination means the human participant does a surprising amount of interpretive work.

Where the illusion breaks

The limits show up quickly. ELIZA can repeat itself, miss the real meaning of a sentence, ignore wider context, or give a reply that is grammatically neat but emotionally shallow. Once you notice the rule boundaries, the trick becomes easier to see. That is exactly what makes ELIZA historically useful: it turns conversational illusion into something you can inspect.

Faithful in spirit, modern in format

This site is not a byte-for-byte recreation of the original code or script. It is a lightweight JavaScript interpretation designed to be fast, readable, mobile-friendly, and easy to try in a browser. The goal is to preserve the key ideas: keyword detection, reflections, templates, and the experience of being nudged to keep talking.

Next: ELIZA’s influence on modern chatbots. Then: why the ELIZA effect still matters.