
What Comes After the Polycrisis?

  • Writer: Sylvain Cottong
  • Jun 4
  • 4 min read


We keep hearing the word “polycrisis” these days. And for good reason: we’re not dealing with just one big problem—we’re entangled in a web of cascading, mutually reinforcing disruptions. From climate to technology, from geopolitics to societal fractures, everything is connected. And yet, our responses remain siloed, reactive, and woefully inadequate.


From a foresight perspective, the polycrisis is not just a moment of chaos—it’s a systemic inflection point. A liminal space where new paradigms are possible… but also where collapse can happen faster than we think.


Let me walk you through what I see as the major threads of this polycrisis—and how strategic foresight, futures literacy, and anticipatory governance might help us navigate through it.


1. The Ecological Unravelling


We already know the facts: climate change is accelerating, biodiversity is collapsing, and we’re depleting planetary resources faster than Earth can regenerate them. Science gives us 20 to 70 years—if we’re lucky—before we face irreversible tipping points. And yet, bold systemic transformations are still not happening.


Why? Because transformation is uncomfortable. It means changing how we produce, consume, govern, and live. It means loss for those who profit from inertia.


But without radical change, we’re heading into a future of scarcity, displacement, and ecological breakdown. This isn’t about saving polar bears anymore—it’s about saving the conditions for civilisation itself.



2. The AI Storm


Then comes the second driver: technology—especially Artificial Intelligence. It’s the boldest disruption since electrification, only exponentially faster. We’re talking about machines doing what humans have done for millennia: learning, creating, judging, acting. And we don’t really know where it’s heading.


Will AI destroy jobs? Spark a new Luddite wave of resistance, like the machine-breaking of the 1810s? Will we need a universal basic income funded by AI productivity? Who gets to augment themselves through neurotechnology and brain-computer interfaces, and who gets left behind?


Add to this the existential questions around Artificial General Intelligence (AGI) or Artificial Superintelligence (ASI): what happens if machines become smarter than us? What does it mean to be human in a world where most human tasks can be replicated or outperformed?


And let’s not forget the side effects:


  • Bias in AI models amplifying injustice.

  • Agentic AI making decisions beyond human oversight.

  • Environmental costs of energy-hungry models accelerating the climate crisis.

  • A widening tech gap, potentially splitting society into posthuman elites and those left behind.


Different countries are taking radically different approaches:


  • The EU is focusing on heavy regulation.

  • China is making AI education mandatory from primary school.

  • The UAE just made ChatGPT freely available to all citizens.

  • Singapore, India, Japan—all fostering fast, adaptive innovation with light regulation.


So is modernity shifting eastward?


These developments also challenge the very foundations of our thinking—our epistemology (how we know) and ontology (what we believe to be real). What happens when knowledge is AI-generated? When truth becomes fluid? When cognition itself is outsourced?


And yet, there are opportunities:


  • AI can supercharge sustainability solutions, science, healthcare, education.

  • It can help predict, model, and even preempt systemic failures.

  • With the right governance, it could support a more equitable and resilient world.


But that’s a big if.


3. Geopolitical Shocks



We’re entering a multipolar world with rising tensions. Imperial ambitions are back. Cyberwarfare, infrastructure sabotage, and disinformation campaigns are daily realities. The US might not always be Europe’s security blanket. Meanwhile, authoritarian regimes are growing bolder.


All this erodes trust—between nations, institutions, and citizens.


4. Inequality, Populism, and the Fracturing of Democracy



Globalisation created wealth, yes—but also massive inequality. Many feel left behind by automation, inflation, and political alienation. Populist movements feed on this resentment. Authoritarianism gains ground.


Just look at the US:


  • Threats to democracy.

  • Tech oligarchs wielding unregulated power.

  • Project 2025 proposing a total executive branch overhaul aligned with far-right ideology.

  • The Dark Enlightenment movement rejecting democracy in favour of techno-monarchism.

  • TESCREAL (Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, Longtermism): a cocktail of elite techno-futures often blind to real-world power dynamics.


Are we witnessing a new dark age, where knowledge is under attack and a few rule over many?


History tells us that development isn’t linear. It’s cyclical, spiral, sometimes pendular. So the real question is: what cycle are we entering now?


Space as the Escape Hatch?


Elon Musk wants to go to Mars. Is expansion into space the only way to avoid collapse on Earth? Or just a high-tech version of escapism for the privileged few?


What’s at Stake? The Social Contract & Capitalism Itself



Capitalism is facing a legitimation crisis. So is liberal democracy. So is the very idea of progress.


How do we redesign the social contract? What should the next version of capitalism look like—if there is one?


Strategic Foresight Can Help—But Only If We Use It


It’s easy to map scenarios using Jim Dator’s four archetypes:


  • Continued Growth

  • Collapse

  • Discipline (green, equitable restraint)

  • Transformation (post-capitalist, post-human futures)


But that’s not enough.


We must ask: Are our institutions even ready to engage with these futures? Or are they paralysed by complexity, addicted to short-termism, and too afraid to lose what they have?


This is where futures literacy, strategic foresight, and anticipatory governance come in.


A Few Practical Tips:


  • Invest in horizon scanning and early warning systems.

  • Build scenario-thinking capacity across leadership, not just in innovation teams.

  • Use cross-impact analysis to identify ripple effects and interconnections.

  • Enable safe spaces for radical imagination and systemic experimentation.

  • Educate citizens not just in tech skills, but in complexity, ethics, and future thinking.
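The cross-impact idea in the tips above can be made concrete with a toy model. The sketch below is purely illustrative: the four drivers mirror the threads of this article, but the influence weights, the damping factor, and the `propagate` helper are all hypothetical choices of mine, not an established method or empirical data. The point is only to show how a shock to one driver ripples through the others when the connections between parts are taken seriously.

```python
# Toy cross-impact sketch. Drivers echo this article's four threads;
# all weights are illustrative assumptions, not empirical estimates.

drivers = ["climate", "ai", "geopolitics", "inequality"]

# impact[i][j]: how strongly pressure on driver i pushes driver j (0..1)
impact = [
    [0.0, 0.1, 0.4, 0.5],  # climate stress raises geopolitical and social strain
    [0.3, 0.0, 0.3, 0.6],  # AI's energy use and labour effects feed back
    [0.2, 0.2, 0.0, 0.4],  # conflict disrupts cooperation and economies
    [0.1, 0.2, 0.5, 0.0],  # inequality fuels instability
]

def propagate(shock, impact, steps=3, damping=0.5):
    """Propagate an initial shock vector through the cross-impact matrix.

    At each step, every driver accumulates a damped share of the pressure
    the other drivers exert on it, so second- and third-order ripple
    effects show up in the final totals.
    """
    state = list(shock)
    for _ in range(steps):
        ripple = [
            damping * sum(state[i] * impact[i][j] for i in range(len(state)))
            for j in range(len(state))
        ]
        state = [s + r for s, r in zip(state, ripple)]
    return state

# A unit shock to the climate driver alone:
result = propagate([1.0, 0.0, 0.0, 0.0], impact)
for name, pressure in sorted(zip(drivers, result), key=lambda p: -p[1]):
    print(f"{name:12s} {pressure:.2f}")
```

Even this crude version makes the essay's point: optimising a single driver in isolation misses the indirect pressure it receives back through the web of interconnections.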


Remember: systems behave as systems. Optimising one part doesn’t help if the connections between parts are broken.


As I often say: you can write with your hand; cut the hand off and put it on the table, and see what it can do…


Toward a New Universal Framework?


Do we need a new “theory of everything” for our human systems? Possibly. Some thinkers propose frameworks like:


  • Integral Theory (Ken Wilber): bridging science, culture, systems, and personal experience.

  • Metamodernism: transcending postmodern cynicism toward a constructive, layered future.

  • Design for Transitions: a systems approach to navigating complex societal shifts.

  • Earth System Governance: planetary-scale coordination for a just Anthropocene.


None are complete. But they start the conversation.


Final Thought


The polycrisis isn’t a problem to be solved. It’s a condition to be lived with, navigated, and shaped. With foresight, we can do it consciously, ethically, and collaboratively.


But we can’t do it by pretending tomorrow will look like today.


So—how prepared are you?


