AI, Compute, and the Scaling Frontier
11 posts

At the heart of geohot's technological worldview lies a deceptively simple thesis: compute is the new oil, and those who control its production control the future. Across posts spanning years, he traces the bitter lesson's implications to their logical extreme—that raw computational power, not clever algorithms or human insight, determines the trajectory of artificial intelligence. The chip companies he dissects are not merely businesses but geopolitical actors, their fabs and architectures the battlegrounds where civilizational futures are decided.
His analysis of AI scaling reveals a mind grappling with exponential curves and their consequences. The question of brain FLOPS—how much compute matches human cognition—becomes a meditation on what it means to be surpassed. Yet geohot resists doomerism even as he acknowledges the stakes. His p(doom) calculations are notably restrained; he sees no hard takeoff, no sudden discontinuity where humanity loses the plot. Instead, he envisions a grinding, visible ascent where agency remains possible for those paying attention.
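The brain-FLOPS question reduces to back-of-the-envelope arithmetic. A minimal sketch follows, using commonly cited ballpark figures (neuron count, synapse density, firing rate, and ops per synaptic event are all rough assumptions, not geohot's own numbers):

```python
# Order-of-magnitude estimate of the brain's computational throughput.
# Every constant below is a widely used ballpark, not a measured value.
NEURONS = 86e9          # ~86 billion neurons (common estimate)
SYNAPSES_PER_NEURON = 1e4  # ~10,000 synapses per neuron (assumption)
FIRING_RATE_HZ = 10     # average firing rate (assumption)
OPS_PER_EVENT = 2       # multiply + accumulate per synaptic event (assumption)

brain_flops = NEURONS * SYNAPSES_PER_NEURON * FIRING_RATE_HZ * OPS_PER_EVENT
print(f"~{brain_flops:.1e} FLOPS")  # lands around 10^16
```

Estimates in the literature span several orders of magnitude depending on how much sub-synaptic detail one counts; the point of the exercise is that the answer sits within reach of present-day datacenter clusters.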
The AI control problem, as he frames it, is less about containing superintelligence than about ensuring compute remains distributed rather than captured. When he asks whether he'll ever own a zettaflop, he's asking whether individuals can remain players in a game increasingly dominated by nation-states and trillion-dollar corporations. Intel's decline serves as a cautionary tale—technical excellence means nothing if you cede the architectural high ground.
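The zettaflop question is also one of arithmetic. A sketch of the scale involved, assuming roughly one petaflop per modern accelerator (a loose stand-in figure, not tied to any specific chip):

```python
# How many accelerators would a personal zettaflop require?
ZETTAFLOP = 1e21      # 10^21 FLOPS
GPU_FLOPS = 1e15      # ~1 PFLOPS per accelerator (rough assumption)

gpus_needed = ZETTAFLOP / GPU_FLOPS
print(f"~{gpus_needed:.0e} accelerators")  # about a million devices
```

At current hardware densities that is nation-state territory, which is exactly why the question of whether an individual will ever own one is political rather than merely technical.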
What distinguishes geohot's compute philosophy from Silicon Valley orthodoxy is his insistence that scaling should empower individuals, not institutions. On-device learning, tinygrad's efficiency obsession, the dream of personal compute sovereignty—these aren't technical preferences but political positions. The world's computer should be everyone's computer, or it becomes everyone's cage.