
Why do we cooperate?

Alexander J Stewart
Mathematical Biologist, University of Pennsylvania

Why do people cooperate? This isn’t a question anyone seriously asks. The answer is obvious: we cooperate because doing so is usually synergistic. It creates more benefit for less cost and makes our lives easier and better.

Maybe it’s better to ask why people don’t always cooperate. But the answer here seems obvious too. We don’t cooperate when we think we can get away with it – when we can save ourselves the effort of working with someone else but still gain the benefits of others’ cooperation. And, perhaps, we withhold cooperation as punishment for others’ past refusal to collaborate with us.

Since there are good reasons to cooperate – and good reasons not to do so – we are left with a question without an obvious answer: under what conditions will people cooperate?

Despite its seeming simplicity, this question is very complicated, from both a theoretical and an experimental point of view. The answer matters a great deal to anyone trying to create an environment that fosters cooperation, from corporate managers and government bureaucrats to parents of unruly siblings.

New research into game theory I’ve conducted with Joshua Plotkin offers some answers – but raises a lot of questions of its own too.

Traditionally, research into game theory – the study of strategic decision making – focused either on whether a rational player should cooperate in a one-off interaction or on looking for the “winning solutions” that allow an individual who wants to cooperate to make the best decisions across repeated interactions.

Our more recent inquiries aim to understand the subtle dynamics of behavioral change when there are an infinite number of potential strategies (much like life) and the game payoffs are constantly shifting (also much like life).

By investigating this in more detail, we can better learn how to incentivize people to cooperate – whether by setting the allowance we give kids for doing chores, by rewarding teamwork in school and at work or even by how we tax to pay for public benefits such as healthcare and education.

What emerges from our studies is a complex and fascinating picture: the amount of cooperation we see in large groups is in constant flux, and well-intentioned incentives can inadvertently lead to less, rather than more, cooperative behavior.

But first, let’s learn a little bit more about game theory.

Cooperation and game theory

Game theory, first developed in the 1930s but with origins reaching all the way back to Plato, is a tool for studying cooperation. It tackles the question of when cooperation will occur by imagining players engaged in a game. The game has rules, and the players have strategies. The problem is to figure out, for a given set of rules, which strategies players will use.

Let’s consider the simplest possible cooperation game. Two players each have a choice: to cooperate or not. Depending on their own choice, and the choice of their opponent, they each receive a “payoff,” or the amount of benefit they get from the interaction. A player’s strategy is whether or not to cooperate, and may depend on their past experience as well as their expected gains.

The first question to ask is: which strategy should each player use? Presumably a player should do whatever will result in the largest payoff.

Yet in the prisoner’s dilemma, the most famous example of this simple two-person game of cooperation, the answer – based on playing the game just once – is that neither should cooperate. Ever.

Briefly, imagine two members of a gang locked away in solitary confinement, each given an offer: betray the other and go free, while the partner gets three years in jail, or stay silent and serve only one year. If both players betray each other, they both get two years.

A purely rational person – again playing the game just one time – should choose to betray the other (or defect, as we game theorists put it) in hopes of going free, but the end result of both behaving rationally is that both get two years in jail. It would be better for them to “cooperate” and in this case stay silent (giving them each a one-year sentence).
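
To make that logic concrete, here is a minimal sketch in Python (an illustration added for this piece, not code from the research) that encodes the sentences above and checks each player’s best reply:

```python
# Payoffs are jail sentences in years (lower is better), taken from the
# scenario above: go free if you betray a silent partner, one year each
# if both stay silent, two years each if both betray, and three years if
# you stay silent while being betrayed.
SILENT, BETRAY = 0, 1

# years[my_choice][partner_choice] = my sentence
years = [
    [1, 3],  # I stay silent: 1 year if partner is silent, 3 if betrayed
    [0, 2],  # I betray: go free if partner is silent, 2 years if both betray
]

for partner in (SILENT, BETRAY):
    print(f"Partner plays {partner}: silent costs {years[SILENT][partner]}y, "
          f"betray costs {years[BETRAY][partner]}y")
# Whichever move the partner makes, betraying costs less, so a one-shot
# rational player always defects.
```

Whatever the partner does, betrayal yields the shorter sentence – which is exactly why defection dominates the one-off game, even though mutual silence would leave both players better off.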

But while these prisoners have to make a one-time choice whether to cooperate – and neither has any knowledge of the other’s past behavior or can imagine an impact on future choices – in real life we play these cooperation games over and over. The choices we make are informed by our past experience and our expectation of future interactions. For example, I am less likely to cooperate with someone who has betrayed me in the past, and I am less likely to betray someone who might have the opportunity to return a future favor.

This difference is reflected in experiments with actual people playing the prisoner’s dilemma, who often choose to “cooperate” (that is, stay silent). And so, to understand anything about when real people might cooperate, we must think about how they decide when to cooperate – and which strategy to choose – and how this changes over time.
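
As a rough illustration of how repetition changes things, the sketch below simulates a repeated prisoner’s dilemma in which strategies can react to an opponent’s previous move. It uses the classic tit-for-tat strategy (cooperate first, then copy the opponent’s last move) and conventional illustrative payoff values; it is a toy, not the model from our research:

```python
C, D = "C", "D"

# Conventional illustrative payoffs (higher is better here): mutual
# cooperation pays 3 each, mutual defection 1 each, and a lone defector
# gets 5 while the exploited cooperator gets 0.
PAYOFF = {(C, C): (3, 3), (C, D): (0, 5), (D, C): (5, 0), (D, D): (1, 1)}

def tit_for_tat(opponent_last):
    # Cooperate in the first round, then copy the opponent's last move.
    return C if opponent_last is None else opponent_last

def always_defect(opponent_last):
    return D

def play(strat_a, strat_b, rounds=10):
    last_a = last_b = None
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strat_a(last_b), strat_b(last_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        last_a, last_b = move_a, move_b
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): sustained cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): exploited once, then punished
```

Against itself, tit-for-tat sustains full cooperation; against an unconditional defector, it loses only the first round before punishing. That is the sense in which memory of past play changes which strategies do well.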

Since our behavior depends on our experience of interacting with many different people, we have to look at games played not just between an individual pair but among many players. All of which leads us to think about populations of players, and the dynamics of players’ strategies in evolving games. As the complexity grows, so does the utility.

Cooperation in the long run

In an evolving game, we think about players who interact with each other many times – which makes the game resemble life a lot more and makes its study far more practically useful. Players change their strategies over time: they try out many different types, and they copy the strategies of other players who are more successful.

So how do those strategies change over time? Will certain ones evolve and take hold? And, in particular, will cooperation be the norm? If so, when?

This evolutionary approach to game theory has already led to many useful insights about how to incentivize cooperation. And it has long been known that by punishing defectors (or those who don’t cooperate) appropriately, specific cooperative strategies can do well in an evolutionary setting.

But recently, researchers have begun to think about a much wider range of strategies, and a more complex picture has emerged.

Our research doesn’t ask which strategy “wins” in a population because, with so many options available, it turns out that no single strategy is always best. In fact, in the long run, no one behavior (cooperate or defect) dominates forever.

Instead, when we focus on the dynamics of strategies over time, what emerges is a picture of constant flux. People may choose cooperative strategies, but these are slowly replaced by defector or selfish strategies, which in turn are eroded and replaced.

The reason for this flux is a naturally emerging complacency: when everybody cooperates, there is no need to worry about these defectors (call them rebels without a cause) who go against the grain. Players are free to try out new strategies – such as never punishing a defector – and in the short term they suffer no cost. But when such a complacent strategy takes hold, the whole population is open to exploitation by defectors, and so cooperation is lost.
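
This cycle can be reproduced in a toy simulation. The sketch below is an illustrative imitation-dynamics model with assumed parameters (not the exact model from our paper): it pits tit-for-tat, which punishes defection, against “always cooperate,” the complacent strategy that never punishes, and “always defect,” with players occasionally copying better-scoring players or mutating to a random strategy:

```python
import math, random

C, D = 1, 0
# (my move, partner's move) -> my payoff, using conventional values.
PAYOFF = {(1, 1): 3, (1, 0): 0, (0, 1): 5, (0, 0): 1}

def next_move(strategy, opp_last):
    if strategy == "ALLC":
        return C
    if strategy == "ALLD":
        return D
    return C if opp_last in (None, C) else D  # tit-for-tat

def match(s1, s2, rounds=50):
    """Average per-round payoff of strategy s1 against s2."""
    last1 = last2 = None
    total = 0
    for _ in range(rounds):
        m1, m2 = next_move(s1, last2), next_move(s2, last1)
        total += PAYOFF[(m1, m2)]
        last1, last2 = m1, m2
    return total / rounds

STRATS = ["ALLC", "ALLD", "TFT"]
SCORE = {(a, b): match(a, b) for a in STRATS for b in STRATS}

N, STEPS, MUTATION = 100, 30000, 0.005  # illustrative parameters
pop = ["TFT"] * N
for t in range(STEPS):
    i = random.randrange(N)
    if random.random() < MUTATION:
        pop[i] = random.choice(STRATS)  # try out a random new strategy
    else:
        j = random.randrange(N)
        fit_i = sum(SCORE[(pop[i], s)] for s in pop) / N
        fit_j = sum(SCORE[(pop[j], s)] for s in pop) / N
        # Fermi rule: imitate player j with probability rising in payoff gap.
        if random.random() < 1 / (1 + math.exp(fit_i - fit_j)):
            pop[i] = pop[j]
    if t % 5000 == 0:
        print(t, {s: pop.count(s) for s in STRATS})
```

Because always-cooperate earns exactly the same payoff as tit-for-tat in a population with no defectors, the complacent strategy can spread at no short-term cost – and once it does, mutant defectors find easy prey, producing the turnover described above.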

Despite this constant turnover, we can still try to determine what kinds of behavior dominate on average. Fortunately for society, what we find is that much of the time it is cooperation that will dominate. The turnover between cooperators and defectors may be unavoidable, but still cooperation is the rule. However, this depends critically on keeping the costs and benefits of cooperation fixed. And in general, they are not.

When cooperation falls apart

We constantly change the way we incentivize cooperation: a new government comes to power, a new manager wants to make their mark, a parent reads a new book on childrearing.

In the simple prisoner’s dilemma game, shorter jail sentences would incentivize the players to keep their mouths shut, and thus achieve an optimal result. In everyday life, cooperation between people involves some cost – such as work effort – and comes with some reward – a better product than anyone could have created alone. The incentives are the rewards; the costs are what individuals contribute to attaining them.

Typically, costs and rewards vary together: the more effort people put into cooperating, the greater the rewards they get from the interaction. In an evolving game, this leads players to change not only their strategies but also the effort they put in when they do choose to cooperate.

This might seem like a good thing – members of a team not just cooperating but going that extra mile to get the best results possible. Unfortunately, once strategies, costs and benefits start to co-evolve, something counterintuitive can happen: cooperation starts to collapse.

The collapse of cooperation occurs when the ratio of costs to benefits becomes too high.

Suppose everyone in the team really does go the extra mile when they work on a project. Then every member of the team knows he or she has relatively little to lose by slacking off, because everyone else’s extra effort will still carry them.

This is exactly what we see in evolving games – cooperating players contribute ever greater effort to cooperation, only to make it easier for defectors to take hold. This presents something of a paradox, because it means the more we cooperate, the less likely others are to do the same.
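
A classic back-of-the-envelope condition captures the threshold. In a repeated “donation game” where cooperating costs c and confers a benefit b on the partner, and interactions continue with probability w, direct reciprocity can sustain cooperation only while w ≥ c/b. That is a standard textbook result rather than a finding specific to our study, but the sketch below shows how cooperation crosses the threshold as effort – and therefore cost – creeps upward:

```python
# Classic direct-reciprocity threshold for the repeated donation game:
# reciprocators resist defectors only while the continuation probability
# w is at least the cost-benefit ratio c/b. The numbers here are
# assumptions chosen for illustration.
w = 0.9   # probability the interaction continues for another round
b = 3.0   # benefit a cooperator confers on their partner
for c in [0.5, 1.0, 2.0, 2.7, 2.9]:
    status = "stable" if w >= c / b else "collapses"
    print(f"cost c = {c:.1f}, ratio c/b = {c / b:.2f}: cooperation {status}")
```

Once the cost of going the extra mile pushes c/b above w, even reciprocators can no longer hold defectors at bay.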

All of which raises questions about how to incentivize cooperation. On the one hand, we find that it is impossible to guarantee that members of a group will always cooperate in the long run, though we can often ensure a lot of cooperation on average if we get the payoffs right. On the other hand, if we incentivize cooperation too much, we paradoxically encourage defection at the same time.

Games like the prisoner’s dilemma are overly simple, especially when it comes to capturing the complexity of human interactions.

The evolutionary approach to game theory analysis cannot tell us precisely how to get the right balance between encouraging cooperation and defection, but it does reveal that there are steep costs to over-incentivizing.

This article is published in collaboration with The Conversation.

Publication does not imply endorsement of views by the World Economic Forum.


Author: Alexander J Stewart is a Postdoctoral Fellow in Mathematical Biology at the University of Pennsylvania.

Image: Swiss President Samuel Schmid (R) and U.N. Secretary-General Kofi Annan (L) shake hands during an official working visit in Kehrsatz near Berne, Switzerland. REUTERS/Pascal Lauener
