One of the primary points emerging from the recent Newsweek AI Impact Summit was that AI adoption strategies should focus on the value that could be delivered rather than on the efficiency that could be realized through process or work automation. This is also a primary mantra of mine when advising companies on their AI strategies, for a couple of reasons:
- Efficiency is always hard to quantify, as the reference points are invariably difficult to determine and therefore the goal ends up being some form of "aspirational time savings," without a well-defined basis.
- Value creation is more motivating as it reinforces the value of human creativity, whereas efficiency tends to demotivate and dehumanize by characterizing work in simple machine terms.
 
These observations were put into a compelling conceptual framework by the excellent book Technopoly, which I have just finished reading and which was written more than 30 years ago by the late Neil Postman, the cultural critic and professor at New York University. The book outlines the devolution that Postman saw in human culture, driven by the manic pursuit of technologies that allow ever-increasing access to information at a rate that humans can no longer process. His disturbing but prescient conjecture is that our lives and culture will come to be defined by experts who decide what is measured and how, and by expert computing systems that execute these tasks. And along the way, the very essence of humanity is lost.
I think the relevance to the current AI zeitgeist is obvious, and Postman's analysis and observations about how to prepare for, and compensate for, this future are well worth revisiting.
Postman starts by recounting Plato's story of Socrates' conversation with Phaedrus, in which Socrates tells of the encounter between the Egyptian god Theuth and King Thamus. Theuth approaches Thamus, asking him to promulgate his many inventions to the people of Egypt. Thamus is appreciative of the novelty of many of the technological gifts, including numerical calculation, geometry and astronomy, but he is quite alarmed by the potential of one invention in particular, which Theuth claims will "improve the wisdom and memory of all Egyptians." Thamus cautions that the practitioners of this new technology "will receive a quantity of information without proper instruction, and in consequence be thought very knowledgeable when they are for the most part quite ignorant. And because they are filled with the conceit of wisdom instead of real wisdom, they will be a burden to society."

It is tempting to dismiss this parable when one learns that it concerns the technology of writing, which has manifestly revolutionized the communication and accumulation of information across time and distance and thereby transformed human existence and understanding. But, as Postman points out, that transformation, inevitable as it was, had profound societal consequences: the singular authorities that had guided and grounded the structure of everyday life were lost, replaced by myriad sources of external knowledge, many of which lack comparable wisdom. Moreover, what we thought about, and how we thought, were changed by this new medium of unconstrained expression, controlled by a new set of technology owners or monopolists, the equivalent of today's "tech-broligarchy."
Postman argues that, by analogy—or perhaps more correctly, by what you could call technological equivalence—the recent internet age and the rise of computer and AI-based control should be viewed with similar caution: that a transformation in human existence is undoubtedly occurring, but the price paid will be that facile access to summary information will masquerade as real knowledge and wisdom, to the detriment of society and humankind.
Postman grounds his evolutionary argument by observing that there have been three significant innovation ages of humanity:
- A tool-centric age, which existed until the mid-18th century (ending with the invention of the steam engine), in which tools were created to aid humans with physical tasks but were clearly in the service of humans, and human purpose was still considered to be divinely determined.
- A technology-centric age, which existed until the mid-20th century and was characterized by the idea that "knowledge was power" and that humanity is capable of progress through scientific endeavor, separate from any divine purpose. This was typified by Francis Bacon's observation that "improvement of man's mind and the improvement of his lot are one and the same thing."
- A technology-autocratic age, which he terms the "Technopoly," characterized by a culture in which information is endlessly generated by technology, so that humanity is forced to employ ever more technology to manage this information "glut." Moreover, he argues that, in a Technopoly, tech experts are assigned priest-like status, with the new source of authority being statistical objectivity.
 
In short, Postman argues that a Technopoly results in a culture that so consumes itself with information that computers are required to discern any course of action, and in which expressions such as "the computer shows" or "the computer has calculated" are Technopoly's equivalent of "it is God's will." This is eerily reminiscent of the Little Britain comedy sketch in which David Walliams' bureaucratic administrator utters the classic catchphrase "computer says no" in response to any inquiry for help or assistance: a clear case of art imitating and mocking current life.
Postman observes that this evolution can be viewed as a "metaphor gone mad," starting with the simple proposition that humans are in some respect like machines, evolving to "humans are little more than machines" and finally concluding that "human beings are essentially just machines" and that machines can therefore duplicate anything that a human can do.
This is strikingly similar to the arguments that the AI protagonists make: that current LLMs are well on the path to achieving Artificial General Intelligence (AGI) or Super-Intelligence. Postman's treatise was published in 1992, well before the current age of AI, but he already saw the potential of over-dependence on computing systems to produce a belief system in which statistical, data-centric "objectivity" is deemed infinitely superior to human experiential subjectivity, and in which the protagonists continuously delude the masses with ever-increasing claims of seemingly magical machine capabilities.
Indeed, Postman highlights that, throughout history, technology has always driven unexpected cultural shifts or "wars" that produce a new set of winners and losers, with knowledge monopolies created by the most important of these technologies. A defining symptom of this cultural war is that the protagonists preach the benefits to the target users while downplaying or underestimating the losses and potential harms. Conversely, the antagonists overemphasize the harms and minimize the benefits: again, uncannily reminiscent of the current state of affairs, defined by diametrically opposed protagonists and antagonists for and against the rise of AI.
Clearly, a balanced approach is warranted. And that is precisely what we are endeavoring to provide in the Newsweek AI Impact series of interviews and the complementary analyses.
One of the most interesting insights from Postman is what he calls the "reification of intelligence," meaning that something as complex and multi-dimensional as human intelligence is mistakenly reduced to a simple, more or less singular quantifiable thing. He argues that IQ tests do exactly this, as do SAT scores and any such standardized tests. And by doing so, it is then possible to argue that a computing system can emulate this ability, just as we are seeing asserted for current AI systems.
Another profound insight is that "it is meaning not utterance that makes the mind unique." In other words, the words we use and the structure of language are necessary but not sufficient to convey meaning, which necessarily includes emotions, experiences and sensations that symbols governed by syntactical rules cannot describe.
Taken as a whole, Postman effectively predicts and parameterizes our current Gen AI-obsessed culture, more than three decades before it became a reality.
So, what is the solution? As a cultural critic, Postman admits that there is a tendency to observe but not to provide solutions. He does, however, propose that a more holistic approach to education is required, one that teaches the history of the evolution of a discipline or subject, so that the consequential connections between past, present and possible futures can be appreciated. And it is hard to argue against a more informed and complete basis for understanding of all things by all people. But I think there is a more tractable answer to the problem than redefining the entire educational system, and it is, paradoxically, provided by the very same Gen AI systems.
Here is my (optimistic) conjecture: the unprecedented recent adoption of Gen AI technologies is the direct result of the massive information glut that humanity is collectively experiencing. It is, in effect, the human attempt to recover control and develop understanding from the information overload created by the rise of ubiquitous, interconnected computing systems. Prior search-based systems often compounded the problem of information overload by presenting too many options that required parsing by the user and too much promoted content that contaminated the results presented. But, with the rise of generative AI agents and answer engines in place of, or on top of, search engines, we are able to rebalance the equation in humanity's favor, and to become more informed, more creative, more expert and less overwhelmed. In short, Gen AI is not the beginning of the end for human intelligence, but rather the beginning of the revaluation of human intelligence in all its multi-dimensional, subjective, experiential richness.
Of course, we are not there yet. The current models are, as we have frequently highlighted in our interview series, incomplete, and possess inadequate representations of our experienced reality: they are mere emulators of that reality, rather than accurate simulators of it. But perhaps they are the starting point on the long journey to humanity's salvation from information overload and the new religion of objectivity and quantification of all things. And perhaps they can move us towards something more holistic, enabling transcendence from our current overwhelmed state and ascension to a collective state that is more meaningful, satisfying and equitable.
So, I would argue that instead of the computer saying "no" and exerting ever-increasing levels of control, the future must be about humans saying "I don't know," and using these new computing systems to assist us in knowing more, to augment our existence, both individually and collectively.
But to achieve this technological nirvana, we need to acknowledge and learn from the foreshadowings of Plato and Postman and prepare accordingly by valuing human creativity over computing efficiency.