Jan 27, 2026

Through the Ages: Reflecting on 250 Years of AI Technology

Two letters. A & I. Put them together (AI) and today you will have the latest advertisement, a news story, a political platform, a term paper, an investment strategy, a controversy, but most of all you will have a conversation that everyone is expected to see as important and have an opinion on. Whether everyone is talking about the same thing seems irrelevant. And no one needs to clarify that those two letters capitalized in succession no longer have any connection to the herd management technology introduced a few generations ago in the era of agriculture’s “Green Revolution.”

While the histories of how new technologies emerge and develop can be wildly different, the symptoms of technological gestation across the ages are surprisingly predictable. Technology is a human activity. Humans do technology. People create, but they rarely create an artifact merely to accomplish a practical end or purpose. Each artifact is always crafted within a story. New technology comes pre-packaged with a modern myth (in the Tolkien sense) that makes it operative.

I am not going to attempt an AI forecast. The look of clouds on a prairie horizon—whether ominous or majestic—has more to do with the orientation of the observer with reference to the light than with the potential impact of the thunderhead. So it is with AI, and since technological crystal balls have been notoriously unreliable, I prefer to start AI conversations via a mirror rather than a looking glass. History. AI technology has a long history reaching back to the 18th century, and its pattern of development mimics many other technologies that have outgrown their original myth.

The mythmaking potential of technology typically peaks not after it is fully designed and implemented, but in its early stages of development when it is most “unknown.” When technology is mysterious and indiscernible, its myth-holding capacity is at its apex. Take, for instance, something as dull and ubiquitous as ordinary electricity. It wasn’t always so. Blowing the dust off your 1901 Sears Roebuck catalog, you would find nestled on page 472 an offer you couldn’t refuse. For only $18, the Giant Power Heidelberg Electric Belt could be yours, and better yet, they would let you try it for free for 10 days! A “genuine 80-gauge current alternating, self-regulating and adjusting electric belt,” which, believe it or not, is effective in curing nerve, stomach, kidney, and liver ailments, not to mention sexual dysfunction, and a host of other “medical” conditions [1]. By this time in America, electricity had clearly demonstrated its impressive potential for transportation, lighting, and machine power, which was just enough proven progress mixed with public mystery to catechize culture with the benefits of the Heidelberg belt, as ridiculous as it may seem to us. As David Nye [2] describes, “Like the computer in more recent times, electricity and electrical machinery provided pervasive images for the progress of society, the operation of the mind, and the nature of the body.”

While I don’t anticipate an AI version of the Heidelberg belt, it is not hard to imagine substituting “AI” for “electricity” in our modern mythmaking. With just enough proven progress mixed with public mystery to “promise the world,” the recurring symptoms of technological gestation are hard to mask.

Myths also precede their technologies by many years. The origins of the “information age” and the western obsession with “data” mining, organizing, transforming, displaying, storing, and communicating can be traced back to the 17th and 18th centuries, during an explosion of measurement systems, statistics, maps, graphs, dictionaries, encyclopedias, and postal and telegraph systems [3]. Catalyzing this cultural obsession is the narrative that all wrongs can be made right if the correct information is simply found and presented, a myth that subsequently propelled scientific managers, progressive politicians, and technocratic utopians into the early 20th century.

Modern AI is not a radical paradigm shift; rather it is the radical amplification of an old paradigm. According to this established myth, data cannot lie. It is only the lack of data that seeds falsehood and impedes truth. So the story goes.

One of the recurring themes of technological history is that the more monolithic and empire-like a culture becomes (i.e. politically, economically, or industrially), the more innovation tends to produce change only in scale and scope (e.g. faster, bigger, or grander). Change is not “outside the box” but rather bound up within accepted practice. Innovation in this environment tends toward preservation and domination rather than invention. In rate and diversity of culture-changing, myth-busting innovation, the European Middle Ages or the fledgling colonial U.S. arguably out-paced the military, political, and industrial empires of their centuries. As a centralized, high-input technology, capable of sustaining itself only through massive investments of political, economic, and environmental capital, AI will always lean toward imitation and homogeneity rather than ingenuity. Technological self-preservation is difficult to overcome in systems that favor monopoly.

What I do hope a historical perspective will bring us is the ability to filter out the ever-present “messianic promises,” to better discern the “myths” from which modern technology propagates, and to develop the tenacity to proceed technologically with care and patience.

Which brings me back to the old AI (artificial insemination), one of the technologies of the ongoing techno-agricultural revolution that changed the landscape of the dairy industry, among many others. As part of the “Green Revolution” that through science and technology brought more from less—each cow, each plant, each acre producing faster, growing stronger, and yielding more—we are faced with the question of how “green” the “Green Revolution” really was. Each innovation of this revolution brought new opportunities to feed the world, but also introduced new questions about the integrity and resilience of our water, soil, energy, and social systems. Maybe more disconcerting is the question of whether we have the innovative capacity to solve our latest problems in the over-developed parts of the world, which have the luxury of seeing innovation as merely a problem of scale.

Questions always carry hope. I believe the solution lies neither in acquiescing to the new nor in abandoning the old. But are we willing to question the 250-year-old assumptions of more data, more speed, more scale? Could our historical experience with the “Green Revolution” qualify our conversations around AI and its data centers? I hope so. But this will take time, since centuries of mythmaking baggage is difficult to unload.

All of these historical observations might cause you to categorically dismiss me as a Luddite-leaning anti-technologist in relation to AI. Not so. Like electricity, I expect AI to become somewhat ubiquitous and ordinary in the coming years. The only thing historians know for certain is that humans have proven quite inept at being certain, particularly when it involves the future.

But then again, my goal may be far less ambitious. Maybe I am just hoping to save you 18 bucks, and the hassle of re-packaging the next-gen AI Heidelberg belt after day-nine disappointment.

Let’s be honest, choosing to wear our history is never flattering. We are human after all, and for Christians within the “true myth” of creation-fall-redemption, our history exposes our brokenness through every age and innovation. Even amidst honest efforts to seek the good, our first act should always be to wait and hope for all things new. Sometimes old news is good news.


References:

  1. See 1901 Sears Roebuck Catalog no. 112, p. 472, available on the Internet Archive (archive.org).
  2. Nye, David. Electrifying America: Social Meanings of a New Technology. MIT Press, 1992, p. 156.
  3. Headrick, Daniel. When Information Came of Age: Technologies of Knowledge in the Age of Reason and Revolution. Oxford University Press, 2000.

About the Author

Ethan Brue
