Intelligence amplification, not artificial intelligence.
Why we still prefer Engelbart's framing — humans augmented by tools — over the race to replace them. And what that means for what we build.
In 1962, Douglas Engelbart published a report called Augmenting Human Intellect: A Conceptual Framework. He imagined computers not as autonomous thinkers, but as prosthetics for human thinking: extensions that would let people solve problems too complex to hold in a single head. This approach came to be known as intelligence amplification, or IA.
A few years earlier, in 1955, John McCarthy and his colleagues had coined the term artificial intelligence. AI was the more glamorous framing. It won.
Sixty years later, the word "AI" is on every slide deck on earth, and the word "IA" has been quietly forgotten. We think that was a mistake — and we think reclaiming it matters for what gets built next.
The framing shapes the product
It's easy to treat AI vs. IA as semantics. It isn't. The framing you start with shapes every product decision that follows.
AI-first design asks: "How can the system do this without the human?" The human is a bottleneck to be removed. The ideal is full autonomy. The UI is minimal, because the user shouldn't have to touch anything.
IA-first design asks: "How can the system make the human 10× better at this?" The human is the protagonist. The ideal is amplification — a cellist with a better bow, a writer with a better notebook, a scientist with a better microscope. The UI is rich, because the user is doing the actual work.
These produce radically different products, even when the underlying model is the same.
Why the AI framing broke in a specific way
There's a practical failure mode in AI-first design that IA-first design doesn't have. When the system does all the work and the user only sees the output, the user can't tell when the system is wrong. The confidence is hidden. The reasoning is hidden. The user becomes a passive consumer of results they can't evaluate, and their ability to catch errors atrophies.
IA-first design keeps the user in the loop — not out of ideology, but because that's how trust, judgment, and error-correction actually work. You want the agent drafting the argument, but you want the human reading every sentence of it. You want the search returning candidates, but you want the human knowing why one was picked.
This isn't anti-agent. It's pro-collaboration. The most useful agents are the ones that make their human more capable of evaluating the work — not less.
What IA looks like in 2026
Today, every Ideaflow product is built from this starting frame:
- Notes that compound. Your notebook remembers better than you do, but you're still the author.
- Meetings that don't evaporate. The agent captures what was said. You decide what mattered.
- A team memory that compounds. Context carries forward. Decisions don't have to be re-made.
- Agents with a shared graph. Your agents get more capable as you use them — not because they replace you, but because they learn alongside you.
Every one of those products would look different if we were building toward autonomy as the goal. We're not. The goal is the amplified human — and the team, and the organization, and eventually the civilization — not the absent one.
We're the Intelligence Amplification Company. Engelbart had it right. We're just trying to finish what he started.