Emotional Swarms
With the news that Simile.AI has raised $100M to simulate human behavior at scale, I’m sharing the following excerpt from my book, “A Mind Made of Math.” (Note that you can download the book for free from my website at stevej.ca.)
Emotional Swarms
In previous chapters, we discussed the ability of AI agents to engage in reasoning and to coordinate in swarms. One use case we envisioned was simulating entire populations of humans. For those simulations to be useful, the agents need to behave like actual humans rather than optimal rational reasoning machines. Imagine a quiet simulation lab, where a swarm of artificial citizens starts its day. Thousands of AI-driven agents wake up in a virtual city, check the news, chat with neighbors, and go about their routines. Each agent carries the quirks and biases that real humans bring to public life. Some cling to the familiar (a touch of status quo bias), some dread losses more than they value gains (classic loss aversion), some discount the future for immediate gratification (time inconsistency), and many can’t help but notice what their peers are doing before deciding for themselves (a nod to social proof and herd behavior). This rich mix of personalities swirling together is precisely the point. By letting these diverse simulated people interact, policymakers can watch entire miniature societies unfold and react when a new public policy drops into their world. The goal is to anticipate surprises: Will the policy spark cooperation, indifference, or open rebellion? Who benefits, who loses out, and how do those perceptions ripple through the population? The simulation becomes a proxy for reality: messy, unpredictable, and occasionally wiser than our theories.
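To make that picture concrete, here is a minimal sketch of what one such biased agent might look like. The parameter names (loss_aversion, status_quo_bias, discount_rate, social_weight) and their ranges are illustrative assumptions, not the architecture of any particular simulation platform:

```python
import random
from dataclasses import dataclass

@dataclass
class Citizen:
    # All four parameters are illustrative stand-ins, not values drawn
    # from any published model.
    loss_aversion: float    # >1 means losses loom larger than equal gains
    status_quo_bias: float  # flat penalty for abandoning the default option
    discount_rate: float    # per-year discounting of future payoffs
    social_weight: float    # pull toward whatever peers are visibly doing

    def will_switch(self, upfront_cost, annual_benefit, years, peer_share):
        # Discount the stream of future benefits back to the present.
        gains = sum(annual_benefit / (1 + self.discount_rate) ** t
                    for t in range(1, years + 1))
        # The upfront cost is a sure loss, so it is weighted extra heavily.
        losses = self.loss_aversion * upfront_cost
        # Visible peer adoption offsets the comfort of the status quo.
        social_pull = self.social_weight * peer_share
        return gains + social_pull > losses + self.status_quo_bias

def random_population(n, seed=0):
    rng = random.Random(seed)
    return [Citizen(loss_aversion=rng.uniform(1.5, 3.0),
                    status_quo_bias=rng.uniform(0, 2000),
                    discount_rate=rng.uniform(0.05, 0.30),
                    social_weight=rng.uniform(0, 5000))
            for _ in range(n)]
```

The point of the sketch is that the heterogeneity lives in the parameters: hand the same policy numbers to ten thousand randomly drawn citizens and you get a distribution of responses rather than a single rational verdict.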
Consider a climate policy scenario. In the real world, getting millions of households to switch to green technologies is infamously slow. People aren’t purely profit-maximizing calculators. They worry about upfront costs, unfamiliar tech, and whether their neighbors approve. To capture this, one simulation populated its agents with loss aversion, the tendency to fear losses more than equivalent gains. The result was eye-opening. When these agents were deciding whether to replace their trusty gas heaters with efficient electric heat pumps, they showed the same hesitancy as ordinary consumers. In fact, the study found that models assuming perfectly rational, eager-to-upgrade behavior wildly overestimated how fast clean energy would catch on. (“Modelling the Effectiveness of Climate Policies: How Important Is Loss Aversion by Consumers?,” Renewable and Sustainable Energy Reviews, accessed May 26, 2025, https://doi.org/10.1016/j.rser.2019.109419.) Once loss aversion was included, adoption rates in the simulation plummeted, mirroring the stubborn persistence of old technologies in many communities. Policies that looked sufficient on paper suddenly faltered in the synthetic society. To achieve the same climate targets, the government in the simulation had to double the financial incentives: for example, a carbon tax of €200 per ton was needed to drive the equivalent emissions cuts that €100 per ton was supposed to achieve under the naive rational model. In other words, ignoring the public’s aversion to loss led to overly optimistic plans, whereas baking in that realistic bias showed that much stronger measures (or more time) would be required to shift behavior. By witnessing these dynamics play out among thousands of simulated families, officials can calibrate their expectations and craft strategies that respect the psychological hurdles, perhaps by framing changes in terms of potential losses avoided (like future climate disasters) rather than immediate sacrifices.
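The mechanism behind that gap is easy to demonstrate. Below is a sketch using the standard Tversky-Kahneman prospect-theory value function; the heater figures (upfront_cost, fuel_savings, tons_avoided) are invented for illustration and are not calibrated to the cited study, so the output shows the direction of the effect rather than its exact size:

```python
def prospect_value(x, lam, alpha=0.88):
    # Tversky-Kahneman value function: concave for gains, convex for
    # losses, with losses scaled up by the loss-aversion factor lam.
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def switching_pays_off(carbon_tax, lam,
                       upfront_cost=4000.0,   # illustrative figures, not
                       fuel_savings=2500.0,   # taken from the cited paper
                       tons_avoided=15.0):
    # Framing: the upfront cost is a sure loss; fuel savings plus the
    # carbon tax avoided by switching count as the gain.
    gain = fuel_savings + carbon_tax * tons_avoided
    return prospect_value(gain, lam) + prospect_value(-upfront_cost, lam) >= 0

for lam in (1.0, 2.25):  # 1.0 = rational baseline; 2.25 = the TK estimate
    tax = next(t for t in range(0, 1001, 10) if switching_pays_off(t, lam))
    print(f"lambda={lam}: switching first pays off near EUR {tax}/ton")
```

With these made-up numbers the gap comes out even wider than the study’s doubling, but the qualitative lesson is the same: a carbon price calibrated for a rational population underprices the real one.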
Zooming in from the global to the local, another experiment explored how social influence and status quo bias can lock a community into inaction on environmental issues. It’s often observed that everyone waiting for someone else to go first can paralyze change, a phenomenon sometimes called pluralistic ignorance. Researchers created a virtual town where each agent deeply underestimated how much its neighbors cared about sustainability. Each family assumed others wouldn’t bother with things like solar panels or electric cars, so they hesitated to do those things themselves. (Tabea Hoffmann et al., “Overcoming Inaction: An Agent-Based Modelling Study of Social Interventions That Promote Systematic Pro-Environmental Change,” Journal of Environmental Psychology 94 (2024): 102221, https://doi.org/10.1016/j.jenvp.2023.102221.) In reality, many neighbors privately were environmentally conscious, but since no one was acting on it, the impression of apathy became a self-fulfilling prophecy. This social feedback loop, with pessimistic assumptions reinforcing the stagnant status quo, was vividly recreated in the simulation. The breakthrough came when the virtual town tried an intervention: making pro-environmental behavior more visible. In one scenario, agents suddenly found it easier to see evidence of eco-friendly actions by others. Solar panels, once hidden in backyards, became as conspicuous as shiny new cars in driveways; conversations about recycling and efficient appliances started cropping up. That tweak caused a remarkable shift. Seeing a few agents adopt green habits gave others the confidence to follow suit, and before long the entire town “tipped” into a new normal of widespread sustainability. What had been an idle stalemate turned into a cascade of change. This virtual example underscores the real-world power of social proof: people are profoundly influenced by the visible behavior of their peers. A policy that relies on voluntary public action, such as a program to conserve water or reduce waste, might fail if everyone is privately on board but publicly hesitant. The simulation suggests a remedy: find ways to broadcast early adopters and positive deviants. When agents (and by extension, people) realize “hey, people like me are actually doing this,” the herd instinct can flip from impeding change to driving it. It’s a reminder that public behavior can sometimes change not gradually but all at once, once a critical mass becomes visible. Policymakers who grasp this might focus less on one-size-fits-all incentives and more on seeding a trend, knowing that human beings often act like birds in a flock, banking and swerving in unison once the flock decides where to go.
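The tipping dynamic itself can be reproduced with a classic Granovetter-style threshold model; the version below is a stand-in for the dynamics described above, not the cited paper’s actual code. Each agent joins once the share of adopters it can see crosses its private threshold, and the visibility parameter scales how much of the true adoption rate is observable (solar panels hidden in backyards versus on display in driveways):

```python
import random

def simulate_town(n=1000, visibility=0.1, steps=50, seed=1):
    rng = random.Random(seed)
    # Private willingness is widespread: most thresholds are low, like the
    # town whose residents quietly care but assume nobody else does.
    thresholds = [rng.betavariate(2, 5) for _ in range(n)]
    # A small minority of unconditional early adopters seeds the process.
    adopted = [rng.random() < 0.05 for _ in range(n)]
    for _ in range(steps):
        # Agents only react to the *observable* share of adopters.
        visible_share = visibility * sum(adopted) / n
        adopted = [a or visible_share >= t
                   for a, t in zip(adopted, thresholds)]
    return sum(adopted) / n

for vis in (0.1, 1.0):
    print(f"visibility={vis}: final adoption {simulate_town(visibility=vis):.0%}")
```

With low visibility the town stays frozen near its handful of pioneers; raise visibility and the same population, with the same private preferences, cascades to near-universal adoption. Nothing about the agents changes between the two runs; only what they can see of one another does.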
One could argue that this approach marks a subtle turning point in how we think about governing and planning. We’ve always had theories and we’ve always had data, but now we have these living laboratories where theories and data mix with a bit of imagination to produce experiential forecasts. It’s as if we’re no longer limited to reading history; we can create miniature histories of our own and learn from them. A swarm of AI agents can simulate in minutes what might take years to unfold in reality, giving us a chance to see pitfalls ahead of time. Will people embrace a drastic climate policy or revolt? Will a public health campaign save lives or be met with skepticism? Instead of guessing, we can observe a proxy version of the public grappling with those questions, not perfectly, but with enough fidelity to yield insights. The implications are quietly revolutionary: as AI and computing power progress, our ability to rehearse societal change in silico will only grow more sophisticated.
Policymakers of the future might routinely consult these AI societies the way we consult opinion polls today, treating them as another tool, one that captures emergent, collective behavior rather than just individual attitudes. The broader thesis, if there is one, is that transformative technologies like AI are not only about automation or efficiency; they’re also about understanding complexity at a depth we never could before. We’ve begun to decode the patterns of society by literally growing patterns in a computer simulation, and that offers a perspective that is both novel and sorely needed. In an age when small policy decisions can have vast, global ripple effects, having a sandbox in which to anticipate consequences is invaluable. It’s a way to be prudent without being paralyzed: to try bold ideas in a safe virtual space, see how the crowd might react, and adjust accordingly. And perhaps most intriguing, these simulations remind us of something fundamentally humbling: even in artificial worlds populated by code, the hardest force to tame is human nature. The true power of these AI agent swarms is that they force us to confront our own biases and collective behaviors more honestly. They hold up a mirror, and sometimes what we see is a populace resistant to change for reasons that run deeper than ignorance or ill will, reasons rooted in the very wiring of how we decide. Recognizing that, through the prism of a simulation, might just help society inch toward policies that are not only smart on paper but psychologically savvy and resilient in practice. In the end, the swarm of AI agents is not an oracle telling us what to do, but a collaborator helping us explore the vast design space of our shared future, one plausible social drama at a time.


