AI & Digitalization

AI Agent Roles in the Disney Model

The Disney Model of creativity is over 30 years old. Its three-way division into Dreamer, Realist and Critic provides a surprisingly precise architecture for AI agent workflows: specialized roles, strict phase separation, iterative quality assurance.

April 5, 2026 · Ralph Köbler · 10 min read
Walt Disney, 1938 Photo: Alan Fisher / New York World-Telegram & Sun, Library of Congress (Public Domain)
In brief: The "Disney Strategy" according to Robert B. Dilts separates creative processes into three roles: Dreamer (vision), Realist (execution) and Critic (review). This separation resolves a core problem of AI workflows: when a single agent simultaneously generates, plans and evaluates, quality drops — regardless of whether the task involves code, design, strategy or content. Specialized agent roles with clear handoffs and iterative cycles change this fundamentally.

The Disney Model: More Than a Creativity Technique

A Disney animator is said to have once observed that there were "three different Walts": one who dreamed wildly, one who made plans, and one you didn't want in the room when you were presenting a fresh idea. Robert B. Dilts formalized this observation in the early 1990s into a process model, which he first described systematically in Tools for Dreamers (1991).

The core idea is simple: creative performance does not emerge despite, but through the separation of three thinking functions. Generating ideas and evaluating them simultaneously blocks the process. Dilts framed the three phases as guiding questions:

Dreamer

“WANT TO”

What do we want? Why? What would be possible if there were no constraints? The Dreamer thinks big, uncensored, visionary. No feasibility check, no “yes, but.”

Realist

“HOW TO”

How do we make it happen? Who does what, when, with what? The Realist does not criticize — he makes things feasible. Steps, milestones, measurable criteria.

Critic

“CHANCE TO”

What could go wrong? Who is affected? What works well today and must not be broken? The Critic improves — he does not destroy.

The decisive detail lies in the name of the third phase: Dilts did not call it “Danger to” but “Chance to.” The Critic is not the enemy of the idea — he is its chance for improvement. Every criticism is translated into a constructive question: “How can we...?”

The most common mistake: Mixing the three phases. As soon as the Critic appears in the Dreamer phase, ideas die before they take shape. As soon as the Dreamer interferes in the Realist phase, the concrete plan reverts to a vague wish image. The strict separation is not a nice-to-have — it is the model.

Why the Disney Model Fits AI Agents

At first glance, applying a creativity technique from the 1990s to AI agents might look like a neat analogy. It is more than that — and for a structural reason.

The fundamental problem in complex AI workflows is well known: a single agent that simultaneously generates, plans and evaluates produces mediocrity. This applies equally to code, designs, strategy papers and campaign concepts. The LLM has no natural phase separation. It hallucinates and corrects in the same breath. It drafts brilliant concepts and retreats to safe platitudes in the very next token — because the next token is also the evaluation of the previous one.

This is exactly the problem Dilts described in human teams: mixing thinking functions blocks the process. The solution is the same in both cases — role separation through architecture, not through discipline:

  • In humans: Different rooms, different times, clear phase transitions.
  • In AI agents: Different system prompts, different contexts, clear handoff artifacts.

The three Dilts roles can be directly implemented as an agent architecture:

Dreamer
  • In a workshop: Inviting room environment, no criticism allowed
  • As an AI agent: System prompt: “Think radically, no constraints.” High temperature.
  • Output: 3–5 solution approaches, payoff statements, future vision

Realist
  • In a workshop: Practical workspace, flip charts, timelines
  • As an AI agent: System prompt: “Make it feasible. Steps, resources, criteria.”
  • Output: Step-by-step plan, milestones, measurable success criteria

Critic
  • In a workshop: Deliberately “uncomfortable” room, hard questions
  • As an AI agent: System prompt: “Evaluate against criteria X, Y, Z. Be strict.” Fresh context.
  • Output: Weakness list, ecology check, “How can we...?” questions
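The mapping above can be encoded as plain configuration data. A minimal Python sketch, assuming a generic chat-completion message format; the prompt wording, temperature values, and the `build_messages` helper are illustrative assumptions, not part of Dilts' model:

```python
# Sketch: the three Dilts roles as agent configurations.
# Prompt texts and temperature values are illustrative assumptions.

ROLES = {
    "dreamer": {
        "system_prompt": (
            "Think radically. No constraints, no feasibility checks. "
            "Produce 3-5 distinct solution approaches with payoff statements."
        ),
        "temperature": 1.0,        # high: encourage divergent output
        "carries_context": True,
    },
    "realist": {
        "system_prompt": (
            "Make it feasible. Turn the approaches into steps, milestones "
            "and measurable success criteria. Do not criticize."
        ),
        "temperature": 0.4,        # moderate: structured planning
        "carries_context": True,
    },
    "critic": {
        "system_prompt": (
            "Evaluate the artifact against the given criteria. Be strict. "
            "Translate every weakness into a 'How can we...?' question and "
            "ask at least one ecology question."
        ),
        "temperature": 0.1,        # low: consistent, comparable evaluation
        "carries_context": False,  # fresh context: artifact + rubric only
    },
}

def build_messages(role: str, artifact: str) -> list:
    """Assemble the message list for one role call."""
    cfg = ROLES[role]
    return [
        {"role": "system", "content": cfg["system_prompt"]},
        {"role": "user", "content": artifact},
    ]
```

The only structurally important line is `"carries_context": False` on the Critic: everything else is tuning, but that flag is the architecture.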

The Critic Agent: The Heart of Quality Assurance

Of the three roles, the Critic is the most interesting for AI workflows, because it targets the task LLMs handle worst on their own: honestly evaluating their own work.

An LLM judging its own output is like an author reviewing their own book. The information that led to its creation still sits in context — and colors the evaluation. The solution is architectural:

Design Principles for the Critic Agent

Fresh context — no memory of the generation process

The Critic agent starts without the context of the Dreamer and Realist phases. It only knows the standardized evaluation prompt and the finished artifact. No history, no intermediate versions, no justifications. This prevents the “sunk-cost bias” that arises when an agent recapitulates its own reasoning process.

Standardized evaluation prompt — consistent across sessions

The Critic always receives the same evaluation prompt, regardless of what the Dreamer or Realist experienced in that session. This makes evaluations comparable: what receives a 75% score in session 1 has the same standard as the result in session 30.

Concrete evaluation criteria — no vague quality judgments

Not “Is this good?” but: “Evaluate against these specific criteria. Score each component. Flag suspicious areas.” The more concrete the criteria, the more useful the result. Vague prompts produce vague criticism.

Criticism as a question — Dilts' “How can we...?”

Pure negative lists are also useless in AI workflows. The Critic agent should translate every weakness into a constructive question: not “The module is poorly structured,” but “How can the module separate responsibilities more clearly?” This gives the Dreamer a concrete starting point in the next iteration.

Ecology check — what must not be broken?

Dilts' most important and most frequently forgotten point: the Critic does not only check for risks, but also identifies which positive qualities of the status quo must be preserved. In AI workflows: which existing structure, tone or consistency with the overall project must not be lost during revision?

“The Critic must ask at least one ecology question. Pure negative lists are not permitted.”
— Derived from Appendix H, Tools for Dreamers (Dilts/Epstein, 1991)

Iteration Logic: At Least Three Cycles

A single pass (Dreamer → Realist → Critic) rarely yields a solid result. The model calls for at least three complete cycles — and this is where it becomes particularly valuable for AI workflows.

The Three Passes Differ Fundamentally

Pass 1: Exploration. The Dreamer generates broadly. The Realist sorts. The Critic identifies the most obvious weaknesses and formulates “How can we...?” questions. Result: a rough artifact plus a concrete list of improvement questions.

Pass 2: Evolution. The Dreamer now works informed. It knows the Critic's questions and integrates them — without being constrained by them. It combines the best elements from pass 1 and attempts at least one new variant. The Realist sharpens the plan. The Critic evaluates with the same standardized prompt — and compares with the result from pass 1.

Pass 3: Convergence. If approaches have remained stable across passes, the process converges. If not, it iterates further. The result is a final one-pager: vision, plan, risks, ecology check and a clear go/no-go recommendation.

The key to iteration: The Dreamer in round 2+ is an informed visionary — it knows the landscape but is not constrained by it. For AI agents, this means: the Dreamer agent receives the Critic's questions as input, but not the Critic's detailed scoring of individual components. It should think anew, not repair.
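That filtering step can be made explicit in code. A minimal sketch, assuming the critique arrives as a plain dictionary; the key names are illustrative:

```python
# Sketch: the handoff artifact an informed Dreamer receives in round 2+.
# Per the rule above, it carries the Critic's questions but not the
# component scores, so the Dreamer rethinks instead of repairing.

def dreamer_handoff(critique: dict) -> dict:
    """Forward only the constructive questions; drop detailed scoring."""
    return {
        "questions": critique["questions"],
        "ecology_questions": critique["ecology_questions"],
        # deliberately omitted: critique["scores"]
    }
```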

Practical Implementation: An Agent Tableau

The following architecture can be directly implemented as a multi-agent system — whether with LangChain, CrewAI, Anthropic's Tool Use, or a simple orchestrator script:

Dreamer Agent
  • Core responsibility: Open the possibility space, generate radical options
  • Input: Challenge statement, context, desired benefit
  • Output: 3–5 solution approaches + payoff statements + future vision

Realist Agent
  • Core responsibility: Create the implementation plan, clarify resources
  • Input: Dreamer output, constraints, available resources
  • Output: 1–2 prioritized concepts + step plan + success criteria

Critic Agent
  • Core responsibility: Ensure quality, identify weaknesses, run the ecology check
  • Input: Realist plan (fresh context, no Dreamer knowledge)
  • Output: Weaknesses + ecology check + “How can we...?” questions + go/iterate recommendation

Orchestrator
  • Core responsibility: Manage cycles, pass artifacts, check exit criteria
  • Input: Results from all three agents
  • Output: Final one-pager or trigger for the next pass
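The Orchestrator role reduces to a simple loop over three agent callables. A sketch under stated assumptions: the stub signatures, the pass limits, and the `recommendation` field the Critic returns are all illustrative, not a fixed interface of any of the named frameworks:

```python
# Sketch of the Orchestrator: run the cycle at least three times,
# stop once the Critic recommends "go". A MAX_PASSES cap prevents
# a stubborn Critic from looping forever.

MIN_PASSES = 3   # the model calls for at least three complete cycles
MAX_PASSES = 6   # safety cap (an assumption, not from the model)

def orchestrate(dreamer, realist, critic, challenge: str) -> dict:
    """Dreamer -> Realist -> Critic, iterated until convergence."""
    questions: list = []
    plan = None
    critique: dict = {}
    for n in range(1, MAX_PASSES + 1):
        vision = dreamer(challenge, questions)   # informed in round 2+
        plan = realist(vision)
        critique = critic(plan)                  # fresh context: plan only
        questions = critique["questions"]        # handoff for next Dreamer
        if n >= MIN_PASSES and critique["recommendation"] == "go":
            return {"passes": n, "plan": plan, "critique": critique}
    return {"passes": MAX_PASSES, "plan": plan, "critique": critique}
```

Note that the loop passes the Critic only the Realist's plan, never the conversation history: the phase separation lives in the call signatures, not in prompt instructions.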

Five Rules for Practice

  1. Strictly separate the phases. No “By the way, is this even feasible?” in the Dreamer prompt. No “And evaluate while you're at it” in the Realist prompt. Mixing is the most common mistake — in humans and AI alike.
  2. The Critic always starts fresh. No context from the generation phase. No “I know you worked particularly hard on section 3.” The Critic only knows the result and the evaluation criteria.
  3. Define concrete thresholds. Not “Is this good enough?” but measurable criteria: test coverage > 80%, no duplicates, all required components present, accessibility score met. If the threshold is not reached: back to the Dreamer — do not tinker with the existing result.
  4. When stuck: back to the Dreamer. If the Critic says “Iterate” three times in a row, the problem is not in the wording but in the fundamental approach. No amount of fine-tuning will help — the Dreamer needs a fresh perspective.
  5. Ecology questions are mandatory. The Critic must ask: what is working well right now? What will be lost through the change? In AI workflows: which consistency, style or structure must be preserved, even when content changes?
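Rule 3 translates directly into a small gate function. A sketch using the example thresholds from the text; the metric names and dictionary shape are illustrative assumptions:

```python
# Sketch of rule 3: measurable exit criteria instead of "is this good?".
# Thresholds taken from the examples in the text; metric names assumed.

THRESHOLDS = {
    "min_test_coverage": 0.80,   # coverage must exceed 80%
    "max_duplicates": 0,         # no duplicates allowed
}

def gate(metrics: dict) -> str:
    """Return 'go' only if every threshold is met; else back to the Dreamer."""
    if metrics["test_coverage"] <= THRESHOLDS["min_test_coverage"]:
        return "back_to_dreamer"
    if metrics["duplicate_count"] > THRESHOLDS["max_duplicates"]:
        return "back_to_dreamer"
    return "go"
```

The verdict is deliberately `"back_to_dreamer"` rather than `"revise"`: per rules 3 and 4, a failed threshold sends the work back to the start of the cycle, not into patch mode.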

Seven Application Areas

The Disney Model with AI agents works wherever creative work needs structure. Here are seven domains — each with a concrete example of how the three roles work together.

1. Software Development

Dreamer: Architecture drafts, feature ideas, API design variants. Realist: Write code, tests, clarify dependencies. Critic: Code review for bugs, security, performance, technical debt.

Practical example: A Dreamer agent generates three architecture variants for a new microservice module — one with event sourcing, one with classic REST, one as a hybrid. The Realist picks the most pragmatic, writes the boilerplate code and defines the API contracts. The Critic evaluates with a standardized security prompt: SQL injection vectors, unsecured endpoints, missing rate limits. Result after two iterations: an architecture that is neither overengineered nor insecure.

2. Product Design & UX

Dreamer: Wireframes, interaction concepts, radical UI ideas without constraints. Realist: Design-system-compliant implementation, responsive variants, component library. Critic: Accessibility check, usability heuristics, brand consistency.

Practical example: The Dreamer agent designs five radically different onboarding flows for an app — from gamified to minimalist. The Realist builds the most promising one as a Figma prototype within the existing design system. The Critic evaluates against WCAG 2.2 AA: contrast ratios, keyboard operability, screen reader compatibility. Round 2 integrates the accessibility findings without diluting the flow.

3. Strategy Development & Business Planning

Dreamer: Business models, market opportunities, blue-sky scenarios. Realist: Financial plan, go-to-market, resource planning, KPIs. Critic: Risk analysis, competitive counter-check, stakeholder impact.

Practical example: A consulting firm uses the Dreamer agent to play out four expansion scenarios into a new market. The Realist distills a 12-month plan with budget, hiring plan and milestones. The Critic raises the ecology question: “Which existing client relationships suffer if the core team is redistributed?” — exactly Dilts' focus on the positive by-products of the status quo.

4. Marketing Campaigns

Dreamer: Campaign ideas, target-group personas, channel mix, creative concepts. Realist: Budget, timeline, content calendar, KPIs. Critic: Brand compliance, legal review, tonal consistency across channels.

5. Content & Text Production

Dreamer: Topic ideas, perspectives, unusual angles and metaphors. Realist: Structured texts following style guide, SEO optimization, formatting. Critic: Plagiarism check, originality check, fact verification, tone.

Practical example: With large-scale content projects — such as a reference book with hundreds of entries — the value of iteration becomes especially clear. A standardized Critic prompt evaluates each entry against the same quality criteria: originality > 75%, no duplications between related entries, own voice rather than source paraphrase. Entries that fall below the threshold do not go for revision — they go back to the Dreamer for a fundamentally new approach. In practice, some entries require one pass, others four.
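The per-entry gate in this example can be sketched with a crude word-overlap measure standing in for a real similarity model; all function names and thresholds here are illustrative assumptions:

```python
# Sketch: standardized per-entry Critic gate for a large content project.
# Jaccard word overlap is a crude stand-in for a real similarity check.

def jaccard(a: str, b: str) -> float:
    """Word-level Jaccard similarity between two texts."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def entry_verdict(originality: float, entry: str, siblings: list,
                  min_originality: float = 0.75,
                  max_overlap: float = 0.5) -> str:
    """Below threshold: back to the Dreamer, not into revision."""
    if originality < min_originality:
        return "back_to_dreamer"
    if any(jaccard(entry, s) > max_overlap for s in siblings):
        return "back_to_dreamer"   # too close to a related entry
    return "go"
```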

6. Data Analysis & Research

Dreamer: Generate hypotheses, search for unexpected correlations, exploratory visualizations. Realist: Choose statistical methods, clean data, build reproducible pipelines. Critic: Methodological critique, bias check, reproducibility, significance testing.

Practical example: The Dreamer agent analyzes a customer dataset and generates twelve hypotheses about purchasing behavior — some obvious, some surprising. The Realist selects the five most testable, defines the statistical tests and cleans the data. The Critic checks: survivorship bias? Sample too small? Correlation sold as causation? Hypothesis 7 survives three passes and becomes the most surprising insight in the quarterly report.

7. Education & Curriculum Design

Dreamer: Learning objectives, innovative formats, gamification ideas, blended learning concepts. Realist: Structure curriculum, create materials, design assessments. Critic: Didactic review, learning objective achievement, accessibility, cognitive overload.

What All Application Areas Have in Common

  • The Critic always starts fresh — whether evaluating code, designs or strategies. No context from the generation phase.
  • A standardized Critic prompt ensures consistent evaluations across sessions. What receives a “Go” in session 1 has the same standard as session 30.
  • Some artifacts need one pass, others four. The iteration logic works — even at 4+ rounds. Rounds 3 and 4 are not touch-ups, but fundamentally new approaches.
  • The Dreamer gets more cautious with each iteration — if you're not careful. Solution: give the Dreamer the Critic's questions, but not the detailed scores.
  • Duplications between related artifacts are the most common blind spot. The Critic must explicitly check whether variant 2 differs sufficiently from variant 1.

The core rule from practice: If the Critic falls below the threshold, do not tinker with the existing result. Return to first principles and think fundamentally anew. That is the difference between “iteration” and “repair” — and it makes all the difference in quality.

Conclusion: Old Method, New Application

The Disney Model is not an AI framework. It is a facilitation heuristic from the NLP tradition, inspired by observations of an animation studio. But its core principle — deliberately separating, sequencing and iterating thinking functions — solves a problem that is just as real in AI workflows as in human teams.

The three roles give an agent architecture what it otherwise lacks: structure without rigidity. The Dreamer can be bold because the Realist will sort things out afterward. The Realist can be pragmatic because the Critic will review it. And the Critic can be strict because the Dreamer will bring a fresh perspective in the next pass.

The result is not a single, perfect shot but a process that converges through iteration. Just as Walt Disney, if the anecdote is to be believed, knew all along.


Sources & Further Reading
  • Dilts, R.B., Epstein, T. & Dilts, R.W. (1991): Tools for Dreamers, Meta Publications — in particular Appendix H: “Well-Formedness Conditions for Evaluating New Ideas”
  • Hochschule Luzern (n.d.): Disney Method — method card with zone setup and time grid
  • Windauer (n.d.): Walt Disney Creativity Strategy According to Robert B. Dilts — NLP-influenced variant with meta-position and state anchors
