The lessons from General Magic

This is what I approached ChatGPT with:

As someone who in the mid-1990s walked away from a successful corporate career as a CIO to become an editor and publisher of technology magazines, I look at today’s emerging AI scene, and it brings back some iffy memories. In the mid-90s, right after the introduction of the first PDAs, a company called General Magic came out with Magic Cap, which ran on the Motorola Envoy, the Sony Magic Link and a couple of other devices. Its friendly interface that looked like an office was initially well received, as was its ability to send “postcards.”

But the company then began talking about “agents” and all the things they could do, and that was the beginning of the end for them. The tech wasn’t really there, and the public had zero clue what all this agent stuff was about, nor did they care.

And now, 30 years later, after the initial thrill over LLMs, insider talk is again dominated by “agents” and all the things agents can do. I see a distinct danger that the tech elite will lose the public’s fascination with AI by making it seem increasingly like tech talk from the high priests of AI. A bit like how the eminently competent Linux failed to take the desktop from the far more limited Windows, because the Linux gurus kept Linux ridiculously complicated and lost the war.

Below is a summary of ChatGPT’s response:

In the mid-1990s, at the dawn of the personal digital assistant era, there was a brief moment when it seemed the future had arrived early. Devices like the Motorola Envoy and Sony Magic Link, running General Magic’s Magic Cap operating system, introduced friendly interfaces and novel concepts such as sending digital “postcards.” The experience felt fresh and imaginative. But then the narrative shifted. Instead of focusing on what worked, the company began talking about “agents” — autonomous digital entities that would act on behalf of users. The vision was bold, but the technology was not ready, and the public neither understood nor particularly cared. The gap between promise and practical value widened, enthusiasm cooled, and General Magic faded.

Three decades later, the emerging AI landscape carries faint echoes of that moment. Large language models have produced genuine breakthroughs. They write, summarize, analyze, translate, and generate at a level that feels transformative. The initial public reaction was excitement and curiosity — a sense that something fundamentally new had entered daily life. Yet as the conversation increasingly turns toward “agents,” orchestration layers, autonomous workflows, and speculative futures, there is a subtle risk of repeating an old pattern.

The danger is not that the technology is empty. Unlike the mid-1990s, today’s infrastructure, computing power, and data ecosystems are robust. AI systems demonstrably work. The risk lies instead in narrative inflation. When discussion drifts from practical capabilities to abstract promises, from tangible tools to grand visions of autonomy, the audience can quietly disengage. Most people are interested in what improves their work or simplifies their lives today. They are far less interested in architectural diagrams of hypothetical agent hierarchies.

There is also a cultural dimension. Technology communities can sometimes slip into insular language — the vocabulary of specialists speaking primarily to one another. History offers another parallel in the trajectory of Linux. Technically sophisticated and eminently capable, Linux never displaced Windows on the desktop in part because its culture often signaled complexity and exclusivity rather than accessibility. Usability and narrative clarity matter as much as technical merit.

AI’s future success may depend less on how autonomous its agents become and more on how seamlessly its capabilities integrate into ordinary workflows. Quiet competence tends to endure longer than visionary rhetoric. If AI remains grounded in demonstrable usefulness — drafting documents, analyzing data, assisting with research, enhancing fieldwork — public engagement is likely to deepen. But if it becomes framed primarily as a grand, semi-mystical transformation understood only by experts, enthusiasm may plateau.

The lesson from General Magic is not that ambition is misguided. It is that timing, clarity, and tangible value determine whether ambition resonates. AI has already crossed the threshold of real-world utility. Preserving that connection to practical benefit may be the key to ensuring that today’s excitement evolves into sustained adoption rather than becoming another fascinating but fleeting chapter in technology history.

Category: Editor