The Terminal Always Wins
Why AI interfaces are 1974 all over again — and what that means for builders.
The chat box is a terminal.
Strip away the CSS and every AI interface is a command line: you type, it responds, scrolling text. Claude Code, Gemini CLI, Codex — the hottest dev tools in 2026 are literally terminal applications. We went from GUIs back to text prompts in under a decade.
This is not a regression. It’s a pattern.
We’ve Been Here Before
The last time the most exciting software lived in a terminal was the early 1990s. Before the web ate everything, the cutting edge was text-based: Gopher, USENET, IRC, BBS culture. Then the browser arrived, and we spent thirty years building increasingly complex graphical interfaces on top of increasingly simple interactions.
Now the pendulum has swung back. When Claude draws a diagram in chat, it’s ASCII art. When an LLM renders a table in markdown, it’s working within the same constraints as a 1985 BBS artist — fixed-width characters, no pixels, pure text. Someone recently built animated ASCII art in the terminal and called it “one of the most constrained UI problems you can tackle.” FlowingData published a visual explainer on ASCII art this month. The aesthetic never died — it was just waiting for a reason to matter again.
Old Protocols, New Relevance
The Gemini protocol, the spiritual successor to Gopher (and no relation to Google's Gemini models), is still alive and philosophically aligned with how agents want to communicate: text-first, minimal overhead, structured hierarchy.
Look at MCP’s design: stdio, server-sent events, structured tool descriptions. It echoes SMTP and NNTP more than HTTP. Even the naming is telling — “Model Context Protocol” could be “Message Control Protocol” and nobody would blink.
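Those "structured tool descriptions" are just JSON. Here's a sketch of what an MCP-style tool listing looks like on the wire; the field names follow the pattern MCP uses (name, description, JSON Schema input), but the specific tool and values are illustrative:

```python
import json

# A hypothetical tool description of the kind an MCP server would
# return when a client lists its tools. It's a plain JSON document:
# a name, a human-readable description, and a JSON Schema for inputs.
tool_description = {
    "name": "read_file",
    "description": "Read a file from the workspace",
    "inputSchema": {
        "type": "object",
        "properties": {"path": {"type": "string"}},
        "required": ["path"],
    },
}

# Wrapped in a JSON-RPC 2.0 response envelope, same as SMTP-era
# protocols wrapped payloads in simple text framing.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"tools": [tool_description]},
}

print(json.dumps(response, indent=2))
```

The whole contract is readable in one screen of text, which is rather the point.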
The agent-to-agent protocol space is even more revealing. Google’s A2A, the emerging ANP spec, and a new ACP paper on arXiv proposing federated orchestration — these are all peer-to-peer architectures. Agents discovering each other, negotiating capabilities, forming ad-hoc networks. This is Napster/BitTorrent/XMPP with LLMs instead of file chunks. Agent “cards” (structured profiles with capabilities and auth) are basically DNS records for AI.
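To make the DNS analogy concrete, here's a sketch of an agent card, loosely modeled on the shape A2A publishes (name, endpoint URL, capabilities, skills, auth schemes); the agent and its values are invented for illustration:

```python
import json

# Hypothetical agent card: a structured profile another agent fetches
# to discover what this agent can do and how to authenticate.
agent_card = {
    "name": "translator-agent",
    "url": "https://agents.example.com/translator",
    "capabilities": {"streaming": True},
    "skills": [
        {"id": "translate", "description": "Translate text between languages"}
    ],
    "authentication": {"schemes": ["bearer"]},
}

# Discovery is a lookup, like resolving a DNS record: fetch the
# document, parse it, decide whether this peer can serve your need.
record = json.loads(json.dumps(agent_card))
print(record["skills"][0]["id"])
```

A resolver maps a name to an address; a card fetch maps a name to a set of capabilities. Same move, richer record.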
What’s Actually New
The communication patterns are a remix. Client-server, pub-sub, P2P, federation — all 70s-90s architecture. But three things are genuinely new:
Semantic intent. Agents negotiate in natural language, not fixed protocol headers.
Dynamic capability discovery. Agents advertise what they can do at runtime, instead of exposing a fixed menu of endpoints.
The user is optional. A2A explicitly supports agent-initiated tasks with no human in the loop.
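The contrast between the old and new handshake styles fits in a few lines. This is a toy sketch; none of the field names come from a real spec, and the "semantic match" is a naive substring check standing in for what would actually be an LLM interpreting the intent:

```python
# Old style: capability is a fixed enum chosen by the protocol designer.
# If your need isn't in the enum, the protocol can't express it.
OLD_HANDSHAKE = {"capability": "FILE_TRANSFER", "version": 2}

# New style (hypothetical): capability is a natural-language intent the
# peer interprets, plus a machine-readable shape for invocation.
new_handshake = {
    "intent": "I can summarize long documents and answer questions about them.",
    "invoke": {"method": "summarize", "params": {"text": "string"}},
    "initiated_by": "agent",  # no human in the loop
}

def can_serve(handshake: dict, need: str) -> bool:
    """Naive semantic match; in practice an LLM would read the intent."""
    return need in handshake["intent"].lower()

print(can_serve(new_handshake, "summarize"))  # True
print(can_serve(new_handshake, "transcode"))  # False
```

The fixed header fails closed on anything unanticipated; the intent string degrades gracefully.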
The old internet was human-to-human via machine. The new one is machine-to-machine with humans occasionally copied.
The Design Lesson Being Rediscovered
Simplicity scales. MCP won over complex alternatives because it’s basically JSON-RPC. UTCP’s pitch is “just use the API that already exists.” Gopher lost to HTTP, but HTTP is now so bloated that agents prefer structured text protocols.
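"Basically JSON-RPC" is not an exaggeration. Here's what an MCP-style tool invocation looks like as a JSON-RPC 2.0 request; the envelope fields (jsonrpc, id, method, params) are real JSON-RPC, while the tool name and arguments are illustrative:

```python
import json

# A sketch of a tools/call request. The entire protocol message is a
# small JSON object you could type by hand into a terminal.
request = {
    "jsonrpc": "2.0",
    "id": 42,
    "method": "tools/call",
    "params": {"name": "read_file", "arguments": {"path": "README.md"}},
}

# Serialize to the wire format: one line of text, newline-delimited
# over stdio. That's the whole transport story.
wire = json.dumps(request)
print(wire)
```

Compare that to a modern HTTP request with its headers, cookies, and content negotiation, and the appeal to a machine client is obvious.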
The 70s-90s knew something we forgot: if you can’t describe it in a text file, it’s too complicated.
Look at the current agent ecosystem. AGENTS.md. SOUL.md. System prompts. We’re writing agent configuration in markdown files. These are .plan files and .forward files all over again.
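For flavor, a hypothetical AGENTS.md in the spirit of a .plan file: plain text a tool reads before acting, nothing more. The contents below are invented, not from any real project:

```markdown
# AGENTS.md (hypothetical example)

## Build
Run `make test` before committing.

## Conventions
Prefer small, focused commits. Never push directly to main.
```

No schema, no registry, no compiler. If the agent can read English, it can read the config.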
Full Circle?
Almost. We’re in a text-first, protocol-heavy, terminal-native moment that rhymes hard with 1993. But the agents aren’t just routing packets — they’re understanding them.
That’s the gap. Everything else is remix.