Shaking Things Up
Happy autumn, or spring, depending on your orientation to the equator. We last dropped a note to you earlier in the year, so there’s a lot to finally catch up on.
Start at the (new) beginning
Some of you may know we have been in the process of shifting our practice, and ourselves, over the past year or more. After nearly a decade in the Netherlands, we decided to make a geographic move to suit a disciplinary evolution. Changeist has re-homed to Barcelona, Spain, in the multidisciplinary, multicultural and multi-meteorological center of Catalunya.
After spending some summers here over the past decade, helping in part to seed the local futures community, we’ve made the shift toward the sun, looking to tap into both the creative and technical energy that this region is known for. We’re making a bet that Barcelona can provide a good setting for our next professional pursuits, with its deep pool of talent, open culture, interesting spaces, social embrace and open horizon — all things we felt we needed.
Starting in October, Changeist SL, a Spanish company, will be active as a base for our more experimental pursuits, including new work in this region. Consider it a satellite. Partners already working with the mothership will continue to do so. Whichever door you choose, you’ll still find us. :) Changeist SL will physically take shape over time — we’re actively looking for the appropriate space to unfold in.
From a business point of view, our Dutch office will cease to operate shortly. So long, and thanks for all the appelflappen!
Next?
In recent years, we’ve been focusing on the sharp end of the future—emerging frontier risks that compel us to make strategic choices in the absence of historical experience. Lately, we’ve been thinking about how we bring our wide spectrum of experience together with a diverse toolset—including scenarios, gaming, narrative design, and experience design—to create fresh, immersive experiences that build on aspects of social play and competition that strengthen critical decision-making capabilities.
We want to provide time and space to help decision-makers, current or future, explore the types of choices that might be made within unpredictable, unfamiliar systems when stakeholders face unprecedented challenges. While challenges may be technological, biophysical, social, political, or other types that move and metastasize quickly, we’re designing a format that helps explore the possibilities inherent in forcing critical decisions based on fluid information in high uncertainty.
This work was the focus of our Visiting Research Fellowship at MOD/UniSA in Adelaide last November, and Scott’s been back at MOD this summer to playtest the resulting experience.
So 🥁 the first product of that research and development…
Foom: A New Strategic Simulation for Critically Exploring AI Futures
In recent years, organizations of all sizes have faced increasing pressure to figure out how, or even if, they will respond to the rapid advancements in AI — from machine learning to generative tools like LLMs, video editing and everyday desktop apps. With competitive forces, unpredictable roadmaps and an atmosphere filled with uncertainty, making sound decisions in the AI space feels like a shot in the dark — even for the developers building AI tools. The choices made today, particularly around Artificial General Intelligence (AGI), will profoundly shape our future, for better or worse.
Yet, too often, organizations lack the opportunity to "play through" these crucial decisions in a safe, informed environment — a sandbox. Without this, it’s difficult to gain a critical, nuanced understanding of AI's societal and ethical dimensions. And hiding from it, yelling “Nope!” isn’t a strategy, but a static position.
That's why we created Foom — an immersive strategic simulation that puts participants into a world where AI progress brings both promise and pitfalls. Unlike traditional workshops or tabletop exercises, Foom is designed for players to experience the immediate consequences of their choices, with AI development scenarios powered by the very generative tools that are shaping aspects of AI’s future. Drawing on our experience, and informed by the insights of expert colleagues, we've carefully designed Foom to be a thought-provoking experience that’s not only engaging but, we think, genuinely innovative.
In Foom, players representing diverse sectors—business, government, AI developers, activists, and the public—navigate rounds of complex, evolving scenarios. They make critical decisions that directly influence the trajectory of AI’s development, balancing competing desires: to accelerate progress, impose necessary guardrails, or even block certain developments.
At the heart of the experience is the tension between aligning with ethical concerns and pushing for breakthroughs, while potentially heading toward the risky leap into full, independent AGI. In each experience, the content is bespoke, shaped around the players—including location, role, and strategic challenges—and responds to their choices with often unforeseen outcomes.
What sets Foom apart internally is its embrace of constructive uncertainty. Just as real-world AI development is riddled with unpredictability, Foom harnesses the power of generative AI to introduce unexpected twists, reflecting the genuine unpredictability of AGI progress. This approach mirrors the complex, non-linear nature of real-world decision-making, where outcomes are rarely clear-cut, and it encourages collaborative exploration of this challenging landscape.
Externally, Foom distinguishes itself in three ways. First, through its unique approach to generating valuable insights about the topic of AI in the “real” world—for example, what may be coming down the pipeline at us—and about what’s known and unknown within an organization. Second, it provides insights into a given organization’s strategic approach to AI and whether it has thought through the if-then elements. Finally, it reveals an organization’s culture of decision-making under both uncertainty and compressed time—two conditions we’re experiencing frequently but for which we lack tools to make us sharper and more reflective.
Foom offers a dynamic, evolving world shaped by participants’ choices. It provides a unique platform for organizations to explore how their actions might influence AI’s future. Whether deciding how much autonomy to grant AI, when to impose or adjust regulations, or how to address societal concerns, Foom helps participants develop a reflective, hands-on perspective on the critical issues shaping AI’s path forward.
Foom is designed for a wide range of users — from corporations and industry groups to policy-setting bodies, educational institutions and cultural organizations. It can also be deployed at conferences, exhibitions or internal events to engage teams and spark meaningful dialogue about AI's future. Foom can also form the centerpiece of a multi-day learning event — a way to shake up thinking before going deeper into applied strategy.
As a recent participant from a governmental AI task force summed it up: Foom enabled “the chance to explore options without breaking something in real life.” Another participant told us: “Foom gave me a much deeper understanding of the challenges AGI presents while offering a rare opportunity to connect with peers who are equally invested in the future of AI.”
Ready to experience Foom for yourself?
Contact us to learn how this immersive, strategic simulation can be tailored to help your organization navigate the critical decisions that lie ahead in its work, market, communities, or world.
Foom is the first of what we hope will be a series of topical modules dealing with new frontier risks, all running on the same lightweight underlying simulation process. We’re already looking at issues like climate engineering, new pandemics and similar challenges. If this sounds interesting, let’s discuss. All of these modules are adapted to the context of the players, so customization to specific audiences is a key capability.
Breaking Another Future
Four years ago next month, we organized a special event. Amid the cloud of uncertainty hovering over the outcome of the 2020 election—still two months before January 6th—we convened three of the smartest people we know: author Christopher Brown, author and academic Malka Older, and futurist Jake Dunagan to ruminate on the vote, speculate on outcomes, and illustrate some possible futures (video) for US democracy and the role of America in the world. We called this special session Breaking Futures.
These last four years have been a wild ride, but possibly not the weirdest timeline that could have been. This past July, Scott happened to be in Adelaide at the same time as Jake, who was researching governance futures at MOD as a visiting research fellow. This quirk of timing meant they watched the unprecedented reconfiguration of the U.S. Democratic presidential ticket—and subsequently the race—from adjoining offices. This rapid evolution from the politically near-impossible, through the plausible, to the probable over a period of three weeks sparked the idea for a Breaking Futures reunion.
So, mark your calendars, it’s happening again. On the day after US Election Day, Wednesday, 6 November, at 2PM EDT/11AM PDT/8PM CET, we are running Breaking Futures 2, streaming live on YouTube at youtube.com/@breakingfutures.
As happened last time, we don’t expect a clear result by the morning after. We don’t even know what we’ll feel like the morning after, but we will take a swing at the trajectory America might be on, what the future might hold, and whether we expect a Breaking Futures 3 in four years’ time.
Drop us an email at breakingfutures@changeist.com to RSVP, and we will send you a reminder closer to the date.
And if you are an American registered to vote, VOTE! The futures depend on it.
Incidentally
To get the ball rolling locally, this past week we were at two events. The first was Jornada Demà Futur 2024, hearing about what’s important for the future of Catalunya. Thanks to the Direcció General d’Anàlisi i Prospectiva of the Generalitat de Catalunya and partners for convening a valuable discussion.
The next day we joined a fantastic lineup as part of the Marató DHub 2024, or DHub Marathon, at the Disseny Hub in Barcelona on 28 September. This program, which we also attended last year, is an eclectic all-day series of 15-minute talks loosely connected by a thematic thread. This year’s thread is “time”. We talked about games, time, probability and how the future is circular. And made our best effort at being trilingual!
Raksha Launches
Over the past six years, we have been privileged to work with Aarathi Krishnan as a client, collaborator, friend and now partner. This week marks the launch of Raksha, a futures intelligence firm founded by Aarathi to provide “quantitative, AI-powered data analysis and modeling with field-tested human expert analysis to make sense of complex issues and risks” to organizations around the world facing emerging risks.
Aarathi has pulled together a unique mix of people, data and capabilities, underpinned by her considerable field experience, to provide a fresh approach to risk anticipation. Changeist is happy to be among Raksha’s strategic partners, which include the Center for Existential Risk, Superflux, Demos Helsinki, Overwatch, Rockefeller Philanthropy Advisors, Global Nation and others. While we couldn’t be in NYC for the launch of Raksha this week, we’re popping some cava from afar to wish the team much success!
Last Call
A few things to highlight that we’ve found interesting lately:
Paul Graham Raven’s two-part interview with Dr. Georgina Voss about her work, Systems Ultra.
Friends Igor Schwarzmann and Johannes Kleske are using their spare cycles to think out loud about culture, and to do it in a different way, through knownunknowns.xyz, which lives somewhere between Instagram and YouTube, appropriately. Give it a listen.
Scott has gotten back to short-form writing outside of Changeist, including thoughts on books, film, politics and culture, over at Collision Detector, which also holds an archive of past essays rescued from the teeth of Medium. Posting will become more consistent over time, so drop in if you need even more of these reads.