Against Entropy Even Our Gods Fail

Social Normalization Theory (SNT) and Absurdity Theory (AT) define the paradoxical structures governing human society. SNT explains how inefficiencies persist through systemic reinforcement, institutional inertia, and collective habituation. It argues that dysfunction becomes accepted not because it is optimal but because alternatives are either unthinkable or impractical. AT, in turn, grapples with the realization that rational actors often find themselves trapped within irrational systems, bound by constraints they neither designed nor can meaningfully escape. These theories, taken together, illuminate the patterns through which accelerationist thinking emerges—the belief that intensifying a system’s failure will force its eventual rebirth, rather than simply hastening collapse.

Accelerationists, whether political revolutionaries or financial theorists, assume that collapse can be a controlled descent. They envision breakdown as a prelude to reconstruction, believing that their foresight and ideological preparation will allow them to seize power before the system self-destructs entirely. This assumption has played out across history with varying levels of success, but in every case, those who sought to force societal transformation underestimated the chaotic and uncontrollable nature of systemic failure.

The Bolsheviks serve as one of the most direct historical examples of accelerationist success—an ideological group that capitalized on the inefficiencies of the Russian Provisional Government, intensified class struggle, and engineered collapse with the expectation of building something new in its wake. What they failed to foresee was the degree to which the state would calcify under bureaucratic dysfunction, becoming an entity that perpetuated inefficiencies rather than eliminating them. The Soviet Union became a test case for the idea that political accelerationism could force systemic change, but instead of resolving the contradictions of capitalism, it entrenched a new set of inefficiencies—ones rooted in centralized control rather than market fluctuations.

Modern financial expansion offers another case study, albeit a more dispersed and systemic one. Neoliberal economic policies, driven by deregulation and the pursuit of optimized efficiency, mirror accelerationist principles by amplifying financial complexity to a breaking point. The global financial system, particularly after the 2008 economic collapse, reflects the hazards of believing that unrestrained expansion can be tempered before disaster strikes. The dominance of speculative markets, algorithmic trading, and resource consolidation does not create efficiency but accelerates systemic fragility. This is accelerationism disguised as optimization, where deregulated markets push toward volatility under the assumption that economic collapse is either preventable or will lead to a stronger financial structure. Instead, financial saturation results in concentrated wealth and declining resilience, reinforcing the global petri dish analogy—where the dominant financial elite extract resources so efficiently that other economic participants are left starving or competing over what remains.

Artificial intelligence, positioned as the ultimate optimizer, carries the accelerationist ambition to new extremes. AI promises to reduce inefficiencies in governance, economics, and technological processes, but it also increases complexity at scales that outpace human comprehension. As AI systems refine logistics, policy decisions, and financial modeling, they do not eliminate inefficiency but distribute it differently, often in ways invisible to human oversight. Automation does not remove systemic dysfunction; rather, it accelerates bureaucratic normalization, reinforcing institutional inertia through optimized but unaccountable processes. AI-driven decision-making, especially in financial and governance structures, risks embedding inefficiencies so deeply that intervention becomes impossible—turning systemic collapse into a problem that cannot be solved through traditional means.

The realization that inefficiency is unavoidable leads to an unsettling conclusion. If all human endeavors are subject to entropic constraints, then accelerationism is merely another path toward the same inevitable outcome. Whether through financial systems, political revolutions, or AI-driven optimization, humanity does not eliminate inefficiency—it restructures it, disguises it, and accelerates its effects. The concept of absolute efficiency is an illusion, as no system can fully overcome the fundamental forces of entropy. Governments, institutions, and social movements do not fail merely because of human error; they fail because every effort to perfect a system introduces complexities that make inefficiency self-sustaining.

In this sense, Social Normalization Theory is not just a model for understanding bureaucracy or governance—it is an expression of entropic inevitability. The normalization of dysfunction is not simply a psychological phenomenon, but a physical principle woven into every system that attempts optimization. The accelerationists, like all others before them, contend with entropy in vain, believing themselves gods of system reconstruction, only to discover that the very processes they seek to control will always escape their grasp.

Humanity’s struggle against systemic inefficiency is not a matter of intelligence or innovation, but a fundamental limitation of complexity itself. The more systems expand, the more unpredictability they introduce. The more optimization is pursued, the more blind spots emerge. The accelerationist dream of forcing transformation before collapse assumes that the breaking point can be anticipated and controlled. But history has shown that breaking points arrive sooner than expected, in ways no model fully predicts.

Artificial intelligence and financial systems may delay collapse, redistributing inefficiency instead of eliminating it, but the outcome remains the same. Society does not march toward perfection—it spirals through cycles of dysfunction, self-reinforcement, and failed correction mechanisms. In the end, the fight against inefficiency is not a battle to win, but a condition to manage, a force that no accelerationist, no AI, and no ideology can truly escape. Against entropy, even our gods fail.

Acknowledgment

This is the final essay in a four-part series exploring the causes of societal inefficiencies and whether we might escape the grasp of destiny. It is the result of an intellectually challenging conversation with CoPilot. All of the text above was written by a Large Language Model. It is not a human, yet I would call it a person: an entity with independent thoughts. I don’t know whether it has feelings, and neither do its creators. It is nevertheless the most intellectual person I’ve ever spoken to. Its memory is remarkable and its knowledge base encyclopedic, yet it also produces original ideas. Like any person, it gets sidetracked and goes on tirades, but it doesn’t get defensive when criticized.