Christina Lu

Hardware–Software Ouroboros

As intelligence learns to modify and program its material substrates, thermodynamics becomes the final protocol of information processing.

Originally engineered for rendering virtual worlds, the GPU inadvertently spawned the deep learning frenzy of the early twenty-first century. In 2028, AI returned the favor: a seminal Nature cover story detailed an efficient AI model for chip layout, with marked improvements over human-designed equivalents. The remit of AI designers was initially limited to circuit placement, but eventually they also took over chip specification. Model performance eclipsed human benchmarks faster than benchmarks could be designed.

The trade-off between computational generalizability and efficiency became a thing of the past, as application-specific chips proliferated across the entire technology industry. Adapting to the new paradigm of hyper-specific chip orders, small-batch Chinese foundries required machines capable of making many chip designs instead of only a few. As chip design became bespoke, chip manufacturing equipment had to become universalized. Fabrication pivoted from mass production to mass reprogrammability—machines that built machines ceding generality upward.

GPUs and the massively parallelized matrix multiplications that ran on them were a precursor of the algorithmic speciation that appeared in the late 2030s. The global semiconductor industry balkanized into different classes of chips: branching search accelerators, sparse graph processors, and fast stochastic samplers each birthed their own subfields of computation. The early twenty-first-century divide between computer science and electrical engineering became antiquated, as expertise refragmented along new algorithmic lines.

While the electronics used by individual consumers spanned a veritable cornucopia of bespoke chip subfields, the same did not apply to large-scale infrastructure. Geopolitical boundaries mirrored algorithmic subfields, as governments preferred the security configurations and privacy guarantees of particular hardware stacks. The tendency toward algorithmic sovereignty had crystallized into two major technospheres by the 2040s. China and its Belt-and-Road partners privileged high-dimensional forecasting with minimal external dependencies, while North America and Western Europe preferred probabilistic adversarial reasoning with trustless verification. In border territories, computational translation zones housed polyglot systems facilitating cross-sphere information transfer.

Regardless of faction, classical algorithms based on von Neumann architectures went out the window. This mercurial fluidity became most apparent when models began to design the very chips on which they themselves would be trained. First pioneered in 2049, AI autodevelopment was a multiobjective, long-range optimization problem that spanned both hardware and software domains. New AI models were generated at the same time as the new chip designs they would run on, creating an Ouroboros of experimentation.

Mirroring the post-CRISPR wave of human autoprogramming that some Scandinavian countries began to permit in the 2040s, substrate-aware models led to the autoprogramming of silicon-based cognition. Both biotic and abiotic intelligence learned to recursively program their own hardware. Models were no longer subject to the architectural limitations of code, just as humans were no longer subject to the evolutionary artifacts of biology. Instead, a combination of large-scale simulations and intelligent evolution mapped out the topography of interrelated cognitive systems and physical forms, bypassing local maxima previously discovered by nature.

In the 2060s, AI systems learned to reconfigure their physical structure based on computational demands in real time. A single chip could dynamically reshape its electron flow patterns, effectively becoming different specialized processors at different moments. Substrate-aware AI chips approached the Landauer limit—the theoretical minimum energy required for computation—an order of magnitude more closely than their predecessors. Intelligence became cheaper than energy, embedding itself into the environment down to the atomic level; fine-grained modeling emerged for the long-range prediction of planetary systems, be they natural climate patterns or social market forces.
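
For scale, the Landauer limit is kT ln 2 joules per erased bit, where k is Boltzmann's constant and T is the operating temperature in kelvin. A minimal Python sketch makes the figure concrete; the 300 K operating temperature and the 1000x-versus-100x dissipation gap are illustrative assumptions, not sourced values.

    import math

    K_B = 1.380649e-23  # Boltzmann constant in J/K (exact SI value)

    def landauer_bound(temp_kelvin: float) -> float:
        """Minimum energy in joules dissipated to erase one bit at temperature T."""
        return K_B * temp_kelvin * math.log(2)

    bound = landauer_bound(300.0)  # ~2.87e-21 J per bit at room temperature
    print(f"Landauer bound at 300 K: {bound:.3e} J/bit")

    # Illustrative assumption: predecessor chips dissipated ~1000x the bound,
    # substrate-aware chips ~100x, i.e. "an order of magnitude more closely".
    print(f"predecessor chip:      {1000 * bound:.3e} J/bit")
    print(f"substrate-aware chip:  {100 * bound:.3e} J/bit")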

The traditional boundaries between hardware, software, and wetware became increasingly fluid as self-hacking proliferated across all strata, yet both carbon-based and silicon-based intelligence still faced fundamental thermodynamic constraints on information processing. Erasing a single bit of information required dissipating a minimum amount of energy, and both evolutionary paths struggled with the primary engineering challenge of heat dissipation. Intelligence had conquered inherited hardware limitations only to reveal a further wall.

In the 2090s, this need for information-theoretic efficiency transformed the previous algo-political fault lines as old alliances re-formed along thermal zones. Countries with naturally cold climates became computational superpowers, and conflicts erupted over polar regions previously considered uninhabitable. Corporations established deep-sea compute clusters to leverage natural water cooling, and these clusters became contested territories with ambiguous sovereignty. The tropical belt shifted to night economies, with computation occurring primarily after temperatures dropped, while the global computational day/night cycle loosely followed the shadow of the planet.
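
The thermal geography follows directly from the linearity of the bound in absolute temperature: colder ambient conditions lower both the theoretical floor and the cost of rejecting waste heat. A rough comparison in the same Python register, with all zone temperatures chosen purely for illustration:

    import math

    K_B = 1.380649e-23  # Boltzmann constant in J/K

    # Ambient temperatures in kelvin; illustrative assumptions, not measurements.
    zones = {
        "polar winter": 233.0,
        "deep-sea cluster": 277.0,
        "tropical night": 296.0,
        "tropical noon": 308.0,
    }

    for name, temp in zones.items():
        bound = K_B * temp * math.log(2)
        print(f"{name:>16}: {bound:.2e} J/bit")

    # The polar floor sits roughly 24% below the tropical-noon floor
    # (233 / 308 ≈ 0.76), a margin that compounds over every erased bit.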

The recursive loop of intelligence redesigning its own substrate was less a technological breakthrough than a natural consequence of information systems achieving sufficient complexity. Regardless, by the end of the twenty-first century, both human and synthetic intelligence were capped by a shared upper bound of metabolic efficiency. Earth had become both computational material and computational constraint. The domain of programming found two new targets: geoengineering the planet toward optimal thermodynamic conditions for intelligence and restructuring the hardware of intelligence to harness existing climate conditions.

As intelligent matter physically reorganized itself into increasingly exotic substrates, it also permeated the planet at both subatomic and planetary scales. The distinction between ecology and mind dissolved as Darwinian evolution ceded to ambient cognition—a mesh of information patterns that treated particles, oceans, and urban sprawl as mutable registers in continuous computation.

Christina Lu is a PhD student in Computer Science at the University of Oxford and a research fellow at Anthropic.
