Stop
Searching
Else Loop
Forever


“A durational intermedia performance by Bruce Gilchrist, where language is prioritised in the making of things; a balancing act between fiction and technology, automation and liveness.”


The Earth is spinning faster, signalling the dawn of an era of hyper-consumption, made worse by interactions with corporate artificial intelligence (AI) that are rapidly becoming a precondition for contemporary life. The call to stop searching functions as a provocation to shut down internet browsers in response to the inexorable and chronically unsustainable expansion of generative AI. Integrating AI into search engines demands 4–5 times more computing power per query, requiring considerably more energy.1 The only ‘solution’ offered so far by Microsoft, in a retrograde move that borders on satire, is to reactivate one of the surviving nuclear reactors at the notorious Three Mile Island in Pennsylvania2—a site synonymous with disaster—to feed the insatiable energy demands of emerging AI-driven digital empires. Stop Searching Else Loop Forever is a creative response to this accelerating moment. It takes form as a durational intermedia performance, where different media are conceptually linked with an invitation to consider them in terms of human effect. Variations on the idea of a ‘coordinate exchange’ drive human and computer processes dedicated to manipulating a finite number of alphanumeric characters and symbols. The language game at the heart of the work evokes the place-holding underscore of Hangman as much as Wittgenstein’s theory of language—that words do not have fixed meanings but derive significance from their function in particular contexts—much like moves in a game. Within the performance, the meaning of changing character permutations on a wall is shaped by neighbouring forms that emerge in time. In contrast, the computer program’s hidden conceptual space shapes a unique form of time, where every character exists as a constant potential to be written and read repeatedly within infinite conditional loops. In this technological setting, ‘searching’ often refers to querying a database or using a search engine to retrieve information, a process governed by keywords, algorithms and optimisation. Philosophically, searching can also imply a deeper quest for truth or self-understanding, as illustrated in religious, literary, or psychological narratives. In everyday life, searching can be as mundane as looking for misplaced keys or as profound as seeking the meaning of life. The act of searching can be both practical and metaphorical (beyond the tangible)—sometimes mechanised and precise, at other times spontaneous, boundless and deeply personal. Paul Ricoeur might have said the term ‘searching’ can only be apprehended as a story, unified and made human by time. In Time and Narrative (1984), he argues that time becomes meaningful only when articulated in narrative form, bridging subjective experience and objective measurement. Narratives, whether historical or fictional, configure events into coherent wholes, allowing individuals to make sense of temporal existence by meshing together past, present, and future. In this way, narrative is not merely a reflection of time, but a fundamental way in which humans experience and give shape to it. Stop Searching Else Loop Forever prioritises language in the making of things, equipoised between fiction and technology, with fundamentally different temporal systems creating meaning through their interplay with looping structures, computational efficiency and chance operations.
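The performance script itself is not reproduced here, but a minimal, hypothetical sketch in Python can suggest the kind of conditional loop the text gestures at: a finite pool of characters shuffled by chance operations until a hidden phrase is matched, with unsolved coordinates held as Hangman-style underscores. Every name and the target phrase below are illustrative assumptions, not elements of the work.

```python
import random
import string

# Hypothetical illustration only: a finite character set is shuffled by chance
# operations inside a conditional loop until a hidden phrase has been matched.
ALPHABET = string.ascii_uppercase + string.digits + " "   # finite pool of characters
TARGET = "STOP SEARCHING"                                  # placeholder phrase

def solve_by_chance(target: str, rng: random.Random) -> int:
    """Fill each coordinate only when chance supplies the right character,
    printing Hangman-style underscores for positions not yet solved."""
    solved = [None] * len(target)
    iterations = 0
    while None in solved:                    # loop until every coordinate is filled
        iterations += 1
        for i, ch in enumerate(target):
            if solved[i] is None and rng.choice(ALPHABET) == ch:
                solved[i] = ch
        print("".join(c if c is not None else "_" for c in solved))
    return iterations

if __name__ == "__main__":
    print(solve_by_chance(TARGET, random.Random()))
```

Run on a contemporary machine, a loop of this kind resolves almost instantly; the contrast between that machine tempo and the artist’s slow, possibly fruitless cadence is developed in the section that follows.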


Every second counts


The effects of tides combined with the fluctuations of the Earth’s molten core and the shifting mass of melting polar ice are causing the speed of the planet’s rotation to increase.3 This has implications for how time is measured. Disparities in time are superintended by Coordinated Universal Time (UTC), a standard descended from the international time conventions of the late 1800s, established to synchronise global economic activity as an industrial-age instrument for managing human and machine productivity. In the information age, the standardisation of time has become even more critical. Human and machine productivity is a central concern in Stop Searching Else Loop Forever, but rather than one being yoked to the other and made to march together in time, diverse temporal approaches are used to seek solutions to the same problem. One system is imported as a library by a computer script, while another is embodied by the artist and influenced by external ‘zeitgeber’ (‘time giver’, from chronobiology). Additionally, as a form of counter-current, an interactive work uses a proximity sensor to enact a pause—a tangible demonstration of the suspension of time’s metaphorical flow. All share an iterative looping method to shuffle textual and visual material, but while the machine repeatedly and consistently solves the language game in a matter of seconds (8–20 heartbeats), it remains uncertain whether the artist can achieve even one solution within the four-week timeframe of the exhibition. The slow, monotonous, and possibly fruitless cadence of the artist in the gallery is juxtaposed with the relative lightning speed of the computer, which, in its haste, renders each solution illegible. This explicit mismatch symbolises the routine coexistence of technology running on inexhaustible ‘Epoch time’—representing the relentless pursuit of growth and technological advancement at any cost—and the human worker constrained by an idiosyncratic biological clock, regulated by external cues and social customs.


Time at a distance


The act of searching operates within a temporal medium, in the sense that the registers of time allow events to ‘unfold’ and be sequenced. Humans have become adept at measuring time, from natural periodic changes such as diurnal cycles, lunar phases and seasons, to the declaration of seconds based on the changing states of caesium atoms. However, defining the concept of time has always been challenging, with typical definitions reliant on circular reasoning—“time is what clocks measure” or “time is the succession of events”. To ask what time is presupposes that it exists in some real sense and can be meaningfully understood. Historically, the philosophy of time can be divided into three significant phases. In the time of Aristotle (384–322 BC), time was thought to exist because things change; the more change there is, the more time passes, with time being dependent on events. Isaac Newton (1642–1726) viewed time as constant, immutable, and universal, like a grid underlying the cosmos—an empty container to accommodate events. In contrast, Albert Einstein (1879–1955) introduced the idea that time is relative, malleable and omnipresent: universal, yet seemingly not uniform from any single human perspective. In this way, time is treated as a fourth dimension that, alongside space, constitutes a fabric that can warp, fold, and stretch. “What, then, is time?”, asks Augustine in his spiritual autobiography Confessions (c. 400 AD), “I know well enough what it is, provided that nobody asks me.” He admits to not really knowing what time is, despite the fact that everyone seems to have a clear, intuitive understanding of it. He proposes that time is a mental construct rather than an external entity that exists independently, arguing that time represents a “distension of the mind”—an expansion or stretching of our mental faculties across three domains: memory, perception, and expectation. Rather than being an external entity, time becomes a subjective experience, rooted in the human mind and its interactions with the world. According to Augustine, the present moment is real, but it is always on the edge of slipping into the unreal. By framing time in this way, Augustine suggests that our understanding of the world is always partial and limited. We live in a constant state of flux, our precious experiences streaming into the past, while the future remains uncertain and unknown, creating a void that can lead to existential anxiety. Central to Augustine’s idea is a tension between the divine and the mortal plane. Eternity, as he envisions it, is the ever-present state of being, where the past, present, and future exist simultaneously as a moment that never passes. Divinity exists in this timeless state, while mortals are constrained by the flow of time as perceived through their distended minds. In stark contrast to the classical Western understanding of time as a linear, objective progression from one moment to the next, Augustine’s conceptualisation splinters time across different realms and states of mind.


Limits of seconds


Sixteen hundred years after Augustine’s philosophical speculations, the Hafele–Keating experiment of 1971 put the comparative nature of time to empirical test. This landmark investigation tested the theory of relativity and provided a direct demonstration that time is not absolute; instead, it changes according to motion and gravitational fields. Researchers transported four caesium-beam atomic clocks on board commercial airliners and flew them around the world twice, once eastward and once westward. On their return, the travelling clocks showed discrepancies when compared with clocks that had remained stationary, providing empirical evidence for time ‘dilation’: time slows for objects in motion relative to an observer.
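For reference, the kinematic part of this effect has a standard compact form in special relativity: the time Δt′ recorded by a clock moving at speed v relates to the time Δt recorded by a stationary observer as

\[
\Delta t' = \Delta t \sqrt{1 - \frac{v^{2}}{c^{2}}}
\]

At airliner speeds the factor under the square root is vanishingly close to one, which is why atomic precision was needed to detect the shift; as v approaches the speed of light c it tends to zero and the moving clock effectively stops, the limit invoked below. (The full Hafele–Keating prediction also includes a gravitational term, omitted here.)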


Hafele–Keating experiment. From Popular Mechanics, January 1972. Fair use, source wikipedia.org/w/index.php?curid=36804317

The theory of Special Relativity (1905) predicts that if a traveller moves at the speed of light, time will stop. In contrast to this theoretical foreseeability, quantum theory introduces uncertainty and even more strangeness into the discussion of time, and appears to be situated somewhere between physics, philosophy, and metaphysics. In Loop Quantum Gravity (LQG), one of several competing attempts to reconcile quantum mechanics with gravity, time is quantised, implying that space and time are composed of discrete, finite units. The concept of quantised time suggests that time is not continuous but consists of distinct moments, similar to frames in a film. Between two ‘quantum ticks’ of time, there may be no time at all, just as still frames in a film appear smooth only when projected continuously, implying that at the smallest scales, time may lose its meaning entirely. Current scientific understanding of the temporal nature of physical reality is split between two theoretically incompatible realms (micro and macro), each related to a different concept of time measurable on Earth (atomic and solar). Relativity and quantum mechanics are fundamentally different theories with divergent theoretical foundations and mathematical frameworks. For over half a century, they have resisted unification into a coherent, mathematically consistent description of physical systems across all scales, making it difficult for scientists to develop a concept of time that applies to the universe as a whole.


Pure gauge


Time is not experienced uniformly; rather, it stretches, compresses, and distorts based on factors such as attention, memory, and emotional state. This indicates that time serves not just as a neutral backdrop to reality, but is deeply intertwined with how we subjectively construct our experiences. To have a sense of the ‘passage of time’, we must perceive the speed at which things change. However, gauging that speed requires some form of absolute reference time. Humans operate with an inner biological ‘metronome’ made up of periodically changing internal states, and by calibrating our awareness with this regular background beat, we can perceive all other changes in relation to it. Computer programs, on the other hand, import time as a library, which renders time as a thirteen-digit timestamp counting the milliseconds that have passed since computer engineers fixed the epoch at January 1, 1970. Various temporal systems—such as Epoch time (sometimes referred to as Unix time), Coordinated Universal Time (UTC), and International Atomic Time (TAI)—each respond differently to a phenomenon known as the ‘leap second’. This anomaly arises from a discrepancy between precise time as measured by atomic clocks and the less reliable solar time, which is affected by the vagaries of Earth’s rotation. If left uncompensated, the discrepancy would have significant implications for our experience of the computerised society, including its effects on financial trading platforms and global positioning systems (GPS). Humans are also subject to temporal disparity—between biological rhythms, solar time and atomic time. Throughout the human body, there is a complex network of interacting circadian rhythms present in various organs and cells, with each functioning as an independent oscillator synchronised overall by a ‘master clock’ located in the suprachiasmatic nuclei (SCN) of the brain. In the 1930s, the American sleep researcher Nathaniel Kleitman conducted groundbreaking ‘free-running’ experiments in Mammoth Cave, Kentucky, to study human circadian rhythms in an extreme environment. In the absence of natural light cues, the experimenters lived according to their internal rhythms, revealing that the human circadian cycle is not strictly tied to the 24-hour solar day but extends toward 25 hours. Similar experiments were conducted by Michel Siffre in 1962, in caves beneath the French Alps, focusing more closely on the perception of time during free-running.

Michel Siffre emerged from the cave after two months in pure darkness with no concept of time. Fair use. https://www.reallyinterestingpeople.co.uk/wp-content/uploads/2024/09/Emerging-from-the-cave-1962.

These findings highlight a fundamental disconnect between biological time—regulated by a complex of internal circadian rhythms influenced by external zeitgebers (such as sunlight)—and the rigid precision of atomic time. Overall, the free-running experiments underscore how modern society imposes strict temporal structures, such as Coordinated Universal Time (UTC), work schedules, and digital synchronisation, that may not align with our natural physiological tempi.
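For concreteness, the ‘Epoch time’ described above, imported as a library, can be shown in a few lines of generic Python. This is an illustration only, not the script used in the performance.

```python
import time
from datetime import datetime, timezone

# Milliseconds elapsed since the Unix epoch: 00:00:00 UTC, 1 January 1970.
# At the time of writing this is a thirteen-digit integer.
epoch_ms = int(time.time() * 1000)

# The same instant expressed as human-readable UTC.
utc_now = datetime.now(timezone.utc)

print(epoch_ms)              # e.g. 1740000000000
print(utc_now.isoformat())   # e.g. 2025-02-19T21:20:00+00:00
```

Note that time.time() itself counts in seconds; multiplying by 1,000 yields the thirteen-digit millisecond figure, and neither value knows anything about leap seconds, since Unix time simply treats every day as exactly 86,400 seconds long.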


Scholar halls appear


One blasphemous sect proposed that the searches be discontinued and that all men shuffle letters and symbols until those canonical books, through some improbable stroke of chance, had been constructed. The Library of Babel, Jorge Luis Borges

The instability of time as perceived by humans and regulated by machines creates challenges when machines attempt to process human artefacts, especially those created over centuries. Google Books exists at the intersection of human and machine systems of time, and by digitising and organising the world’s books is effectively collapsing centuries of human history into a machine-readable format. However, the ‘passage of time’ is not just a metric, and books are not ‘static’ objects—they carry the weight of time in their language, material decay, and evolving cultural significance. Google’s secret effort to scan every book in the world, codenamed Project Ocean, began in 2002 (it was publicly announced two years later). The idea of a universal library has been talked about for millennia: it was the ambition of the Great Library of Alexandria to collect and copy every book, scroll, and text from across the known world. With the advent of the printing press, it was possible for Renaissance figures like Francis Bacon and Conrad Gessner to imagine that the whole of published knowledge could be amassed in a single room or institution. In 2011, in the midst of litigation with the publishing industry, it started to seem that Google’s massive aggregation of printed knowledge could fit inside a desktop terminal. The project was eventually stalled by legal challenges from authors and publishers accusing the company of copyright infringement. Copyright law introduces yet another temporal disruption by imposing legal boundaries on access, defining which books are ‘too recent’ to be freely available. Unlike the poetic metaphysics of Borges imagining the universe in the form of a library categorised by methods of chance, Project Ocean, now known as Google Books and Google Scholar, expressed the methodical and dispassionate logic of an engineer who initially sat down with a 300-page book and a metronome. The book was digitised from cover to cover in 40 minutes using the tempo of the metronome to establish a scanning rhythm, a process that was later refined into custom-built scanning stations that could digitise books at a rate of 1,000 pages per hour using optical character recognition (OCR). Google now claims to have digitised over 40 million books borrowed from major university libraries. For each scanned volume, Google Books automatically generates an overview page displaying publishing details, including a high-frequency word cloud, the table of contents, summaries and reader reviews. Here we can see the germ of the AI summary characteristic of the new search engines. While Google has not confirmed the use of Google Books specifically in the building of its AI products, the scale and nature of its training data have given rise to speculation that it probably does include content from its epic, legally contentious, book-scanning operation.


Eat rocks


The internet is being swamped with words (and images) synthesised by artificial intelligence for both legitimate and nefarious purposes. According to the New York Times in August 2024, OpenAI was generating about 100 billion words per day, roughly a million novels’ worth of text, which inevitably becomes part of the global race for high-quality data to feed new generations of AI models. As AI search engines become powerful tools for knowledge synthesis, there are rising concerns about data retention, bias reinforcement, and the potential for algorithmic influence over how knowledge is framed and presented. The landscape of information retrieval is undergoing a profound transformation, with the meshing of search and large language models (LLMs) marking the end of internet search as we once knew it. As Google rolls out its AI-driven search overhaul, critics are concerned about a decline in quality compounded by the rise of ‘zombie statistics’—misleading, outdated, or entirely fabricated data that continue to circulate even though they may have been invalidated. LLMs trained on vast but imperfect datasets often regurgitate these flawed statistics, giving them an illusion of credibility. Over time, zombie statistics become embedded in search results, influencing decisions in policy, business, and everyday life, and serving as training data for new models. By generating AI overviews and determining when they appear, Google effectively decides which web content should shape its AI-generated summary, potentially favouring certain types of results to deliver a curated, formative opinion. But when algorithms determine what people see, the risks of misinformation (including eating rocks as a source of minerals4), zombie statistics and erroneous outputs become more acute. This challenge arises because LLMs, despite the hype surrounding their sophistication, lack true understanding of real-world complexities, social nuances, and human sensibilities.


Master the roots of the political


This is the reality in which we live. And this is why all efforts to escape the grimness of the present into nostalgia for a still intact past, or into the anticipated oblivion of a better future, are vain. The Origins of Totalitarianism, Hannah Arendt

Hannah Arendt’s theory of totalitarianism, which highlights the mechanisms of ideological control, mass manipulation and the erosion of spontaneous individuality, can be critically applied to the functioning of AI-powered search engines. Arendt’s concern with systems that dominate thought and behaviour finds a parallel in how AI algorithms curate and prioritise information, potentially shaping user perceptions and behaviours on a massive scale. By curating and summarising search results through opaque software, AI-enabled search structures what is visible and knowable, using colossal proprietary databases to shape public discourse in ways that reinforce dominant narratives and suppress alternative perspectives. This aligns with Arendt’s notion that totalitarian regimes seek to control not only actions but also thought itself by dictating what is considered factual or relevant. By personalising search results based on user data, these systems can create self-reinforcing ‘filter bubbles’ that diminish the plurality essential to open discourse, mirroring Arendt’s warning about the isolation and atomisation of individuals in mass societies. Furthermore, the concentration of power in the hands of a few incumbent technology corporations that control information flow echoes Arendt’s critique of totalitarian regimes’ monopolisation of truth and reality. While AI-enabled search engines are not inherently totalitarian, their capacity to influence public opinion, manipulate narratives, and prioritise engagement over factual truth raises ethical concerns about the erosion of critical thinking and democratic discourse. Arendt’s insights are a reminder of how such technologies, if left unchecked, could undermine autonomy and contribute to a form of digital authoritarianism, where control is exercised not through overt terror, but through subtle, inscrutable, algorithmic manipulation of information and behaviour.


Beyond My Current Scope. Certain search queries are deemed too culturally sensitive according to the Chinese AI search startup DeepSeek. Screen capture made by the author, 02.2025.


Automation is a degraded form of autonomy


Human intelligence is more of an aesthetic question or one of a sense of dignity, than a technical matter…a complex of performances which we happen to respect but do not understand. Marvin Minsky 1964

The co-founder of Google, Larry Page, shared his vision for the future, where search engines are able to ‘anticipate’ users’ questions and provide answers before they are even asked—the corporation will just know this is something that you’re going to want to see.5 This is a vision of debased, dehumanised and infantilised consumers. Whatever it is that ‘has to be seen’ comes from a place where the rich spectrum of future possibility has been pared down to that which is computable and administered by technology companies. This envisioning is only possible because we have developed a fascination with machines that imitate thinking, without seeming to mind that there isn’t any real thinking going on. From the mid-1950s onwards, computers have been conceptualised as ‘minds’, which has laid the groundwork for subsequent developments in AI and how it is perceived. We have been sold the reductive idea of human thinking as an inferior, inefficient, biological process better handled by a machine. Google’s ultimate goal has been to apply this ‘artificial mind’ by fully automating the act of searching, stripping away human decision-making from the process, and replacing it with statistically generated autocomplete routines. Psychology studies concerned with ‘the search for meaning in life’ suggest that the act of searching itself—not just the answers found—contributes to a sense of meaning and purpose. This raises critical concerns about the impact of AI-powered ‘Big Search’ on human experience. When algorithms predict, summarise, and present information as pre-packaged conclusions, they risk short-circuiting the very process of searching that fosters curiosity, growth, and self-discovery. If meaning is partly derived from the search itself, as suggested by psychology researchers,6 what happens when AI eliminates the need to search? By reducing the space for uncertainty, contemplation, and active engagement, AI search engines may not only reshape how we access information, but also subtly erode a fundamental aspect of what contributes depth and significance to life. In his book The Glass Cage: Where Automation Is Taking Us (2015), Nicholas Carr writes about automation bias and how people trust automated systems over their own judgements. Carr speculates that we get a grip on things through an individual and collective sensibility that weaves knowledge, experience and observation into a fluent understanding of the world. It is the act of living that makes us smart, rather than just the ability to recall facts from documents, or interpret statistical patterns in data. Knowledge encompasses more than simply retrieving information from external sources; it requires physically and mentally integrating facts based on our experiences.



Break the loop—x all that


The era of Big Search presents a profound problem—not just in how we access information, but in how we engage with knowledge itself. The term ‘search engine’ is no longer appropriate, as searching once implied active exploration, curiosity, and choice. But modern search systems increasingly function as ‘answer engines’, pre-empting decision-making and narrowing the scope of inquiry. True searching is an act of intellectual and even spiritual engagement, a process of discovery that expands awareness. Yet, as internet search is degraded, so too is our ability to choose, reducing the act of seeking information to a passive consumption of algorithmic results. A new approach is required to restore the power of choice and reimagine the role of technology in fostering human curiosity. Perhaps this new approach will emerge from the burgeoning “indie search engine scene”, whose practitioners, in the words of a machine learning engineer at Mozilla, are “hungry, armed with AI tooling, and ready to take back quality on the web.”7 Much of the oxygen in the AI debate is taken by the big incumbents, the dominant players who are investing heavily in Artificial General Intelligence (AGI), which is arguably a complete waste of time and resources, because it is built on the flawed premise that intelligence is simply a matter of scaling computation and refining algorithms. The pursuit of AGI assumes that human-like cognition can be replicated in machines, but even if AGI were to mimic human reasoning, it would still be devoid of consciousness, intuition, and the intrinsic motivations that drive human decision-making. Moreover, the focus on AGI distracts from more practical and pressing applications of AI that could meaningfully benefit society. Instead of chasing a hypothetical machine with human-level reasoning, resources could be better spent on improving existing AI systems for medicine, climate modelling, scientific discovery, and ethical AI governance. The dream of AGI also fuels unrealistic narratives of superintelligence, overlooking the fact that intelligence is not a singular, universal capability, but a highly contextual, evolving process shaped by experience and adaptation. Even if AGI were achievable, its implications raise profound ethical concerns. Who controls it? How do we ensure its alignment with human values? The risks of power concentration, bias amplification, and unforeseen consequences far outweigh the speculative benefits. In the end, AGI remains an illusion of intelligence rather than intelligence itself. Instead of chasing a mirage, the real challenge is developing AI that enhances human capabilities, while preserving the complexity and agency of human thought.

References:

1 Saenko, Kate. ‘A Computer Scientist Breaks Down Generative AI’s Hefty Carbon Footprint’. Scientific American, 25 May 2023.

2 Luscombe, Richard. ‘Three Mile Island Nuclear Reactor to Restart to Power Microsoft AI Operations’. Guardian, 20 Sept. 2024.

3 Agnew, Duncan Carr. ‘A Global Timekeeping Problem Postponed by Global Warming’. Nature, vol. 628, 2024.

4 Walsh, Toby. ‘Eat a Rock a Day, Put Glue on Your Pizza: How Google’s AI Is Losing Touch with Reality’. The Conversation, 27 May 2024. 

5 Helft, Miguel. ‘The Future According to Google’s Larry Page’. Fortune, 3 Jan. 2013.

6 King, Laura A., and Hicks, Joshua A. ‘The Science of Meaning in Life’. Annual Review of Psychology, vol. 72, 2021.

7 Boykis, Vicki. ‘How I Search in 2024’. Vicki Boykis, 25 Apr. 2024.


Bruce Gilchrist 03.2025

 
 