Recursive Habermas Machine
An AI-assisted deliberation tool that helps diverse groups identify consensus statements through democratic voting, based on Google DeepMind research.
What Is It?
The Recursive Habermas Machine implements research from Google DeepMind's 2024 Science paper on AI-mediated consensus building. It's a tool designed to help diverse groups—whether colleagues, stakeholders, or community members—find common ground on complex issues.
How It Works
- Collection: Participants provide their individual perspectives on a question or issue
- Synthesis: An LLM generates candidate consensus statements that attempt to capture the group's shared values
- Prediction: The system predicts how each participant would rank each candidate statement
- Selection: The Schulze voting method (Condorcet-compliant and strategically robust) identifies the statement with broadest appeal
- Recursion: For groups larger than 12, the process repeats hierarchically to maintain computational feasibility (the full loop is sketched just after this list)
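As a rough illustration of this loop, here is a minimal Python sketch. The function name, the callable interfaces, and the subgroup cutoff are assumptions made for the example, not the project's actual API; a Schulze-style selector that could fill the `select` slot is sketched under Democratic Voting below.

```python
# Illustrative sketch of one deliberation round; the callables and the
# group-size cutoff are placeholders, not this project's actual API.
from typing import Callable

def deliberation_round(
    question: str,
    opinions: list[str],
    synthesize: Callable[[str, list[str]], list[str]],       # LLM: opinions -> candidate statements
    predict_ranking: Callable[[str, list[str]], list[int]],  # LLM: one opinion -> ranking over candidates
    select: Callable[[list[str], list[list[int]]], str],     # voting: rankings -> winning statement
    max_group_size: int = 12,
) -> str:
    """Return the consensus statement for one group, recursing on large groups."""
    # Recursion: split oversized groups, find consensus within each subgroup,
    # then treat those subgroup statements as the "opinions" of a smaller group.
    if len(opinions) > max_group_size:
        chunks = [opinions[i:i + max_group_size]
                  for i in range(0, len(opinions), max_group_size)]
        opinions = [
            deliberation_round(question, chunk, synthesize, predict_ranking, select, max_group_size)
            for chunk in chunks
        ]

    # Synthesis: generate candidate consensus statements from the opinions.
    candidates = synthesize(question, opinions)

    # Prediction: estimate how each participant would rank the candidates.
    rankings = [predict_ranking(opinion, candidates) for opinion in opinions]

    # Selection: pick the candidate with the broadest predicted support.
    return select(candidates, rankings)
```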
Key Features
Local & Private
Runs entirely on local inference via Ollama. No cloud APIs, no data leaving your machine.
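For example, a local Ollama server exposes a REST endpoint on localhost (port 11434 by default), so a call might look roughly like the following; the helper function and model name are illustrative, not the project's configuration.

```python
# Minimal sketch of a local Ollama call over its default REST endpoint;
# the model name is an example, not necessarily what this project uses.
import requests

def local_generate(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to a locally running Ollama server and return its reply."""
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]
```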
Full Transparency
See exact prompts, LLM responses, and voting matrices. No black boxes—understand how consensus was reached.
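One possible shape for such a per-round audit record is sketched below; the field names are illustrative assumptions, not the project's actual data model.

```python
# Hypothetical trace record for one deliberation round; field names are
# illustrative, not the project's actual schema.
from dataclasses import dataclass, field

@dataclass
class RoundTrace:
    question: str
    prompts: list[str] = field(default_factory=list)       # exact prompts sent to the LLM
    responses: list[str] = field(default_factory=list)     # raw LLM completions
    pairwise_matrix: list[list[int]] = field(default_factory=list)  # pairwise preference counts
    winner: int | None = None                               # index of the selected statement
```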
Democratic Voting
Uses the Schulze method, a Condorcet-compliant system that resists strategic voting and aims to reflect the group's true collective preferences.
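For reference, a compact Schulze implementation might look like this; it is a self-contained illustration of the method, not necessarily how this project structures its voting code.

```python
# Self-contained Schulze (beatpath) winner computation, for illustration only.
# Each ballot is a full ranking of candidate indices, best first.
def schulze_winner(num_candidates: int, ballots: list[list[int]]) -> int:
    # d[a][b]: how many ballots rank candidate a above candidate b.
    d = [[0] * num_candidates for _ in range(num_candidates)]
    for ballot in ballots:
        position = {cand: rank for rank, cand in enumerate(ballot)}
        for a in range(num_candidates):
            for b in range(num_candidates):
                if a != b and position[a] < position[b]:
                    d[a][b] += 1

    # p[a][b]: strength of the strongest beatpath from a to b (Floyd-Warshall style).
    p = [[d[a][b] if d[a][b] > d[b][a] else 0 for b in range(num_candidates)]
         for a in range(num_candidates)]
    for i in range(num_candidates):
        for j in range(num_candidates):
            if i == j:
                continue
            for k in range(num_candidates):
                if k != i and k != j:
                    p[j][k] = max(p[j][k], min(p[j][i], p[i][k]))

    # The Schulze winner beats or ties every other candidate by beatpath strength.
    for a in range(num_candidates):
        if all(p[a][b] >= p[b][a] for b in range(num_candidates) if b != a):
            return a
    return 0  # fallback; the Schulze winner set is never empty for complete ballots

# Example: candidate 0 is ranked first on two of three ballots and wins.
assert schulze_winner(3, [[0, 1, 2], [1, 0, 2], [0, 2, 1]]) == 0
```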
Scalable
Recursive processing handles groups of any size while maintaining computational tractability.
GUI Interface
Built with CustomTkinter for an accessible, user-friendly experience.
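A toy CustomTkinter window for collecting a single opinion might look like the following; this is only a sketch of the library's style, not the project's actual interface.

```python
# Minimal CustomTkinter sketch: one window that collects a participant's opinion.
# Widget layout and labels are illustrative, not the project's real UI.
import customtkinter as ctk

ctk.set_appearance_mode("system")
app = ctk.CTk()
app.title("Recursive Habermas Machine (sketch)")
app.geometry("600x400")

ctk.CTkLabel(app, text="Enter your perspective on the question:").pack(padx=10, pady=(10, 0))

textbox = ctk.CTkTextbox(app, height=220)
textbox.pack(fill="both", expand=True, padx=10, pady=10)

def submit() -> None:
    opinion = textbox.get("1.0", "end").strip()
    print(f"collected opinion: {opinion!r}")  # a real app would store this for synthesis

ctk.CTkButton(app, text="Submit opinion", command=submit).pack(pady=(0, 10))

app.mainloop()
```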
Open Source
AGPLv3 licensed: anyone who deploys a modified version as a network service must make the source code available.
Design Philosophy
Transparency Over Optimization
The system prioritizes understanding how consensus was reached over simply producing an output. Users can inspect every step of the process.
Questions, Not Decisions
The tool deliberately restricts itself to "questions and opinions, not proposals and decisions." It informs human deliberation but preserves human authority over final determinations.
LLMs Are Unreliable Tools
Rather than treating AI output as authoritative, the system treats LLMs as fallible components and relies on multi-retry mechanisms and transparent outputs to work around their limitations.
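A retry wrapper in this spirit might look like the sketch below; the validator callback and attempt limit are illustrative assumptions, not the project's actual settings.

```python
# Sketch of a multi-retry guard around an unreliable LLM call; names and
# defaults are assumptions for illustration.
from typing import Callable

def generate_with_retries(call_llm: Callable[[str], str],
                          prompt: str,
                          is_valid: Callable[[str], bool],
                          max_attempts: int = 3) -> str:
    """Retry an LLM call until its output passes validation, or fail loudly."""
    last = ""
    for attempt in range(1, max_attempts + 1):
        last = call_llm(prompt)
        if is_valid(last):
            return last
        print(f"attempt {attempt}/{max_attempts} failed validation; retrying")
    raise ValueError(f"LLM output failed validation after {max_attempts} attempts: {last!r}")
```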