How did cooperation evolve in humans?
Why do we cooperate? This topic has always intrigued me. All the great achievements of mankind have largely occurred because we work collectively, often subordinating individual interests to the collective good. No other species on earth is as organized as humans, at every level from tribes, kingdoms and nations to supra-national entities like the United Nations. Unlike ants and other species that exhibit an orderly, seemingly purposeful social organization, we are unique in our intelligence and our ability to comprehend self-interest. We humans are gifted with intellectual prowess, marked by high cognition, motivation and self-awareness, which helps us recognize the possibility of benefiting from an interaction and quantify that benefit, along with the mental and physical dexterity to act (cooperatively, deceptively, forcefully) to realize it, often to the detriment of others.
Darwin’s theory of evolution is popularly summarized as “survival of the fittest” – so why shouldn’t I seek sole personal benefit at the expense of others? Yet the overwhelming majority of our interactions are cooperative, not adversarial. How is this possible?
It is very interesting to look at the problem from the perspective of game theory. Mathematics and algorithms provide a lot of insight into strategies for human interaction; a completely different approach from the humanities, which approach the question through ethics, philosophy and morality. I’d like to introduce this approach briefly, along with the striking implications it has for “ideal” human behaviour in seeking cooperation.
The simplest of human interactions is a 2-person zero-sum game: there is one winner and one loser, e.g. a game of chess. The total winnings are fixed; if one player wins, the other loses. No cooperation is possible. Your aim is to go out and win, else you’re going to lose. A more complex game would be “how do you divide a piece of cake between 2 children” to their satisfaction. Rational solutions are possible for such problems, and game theory searches for such rational outcomes. A simple solution is “let the first child cut the cake into two, and let the second child pick the piece he wants”. Thus, two rational human beings, with their interests completely opposed, can settle on a rational course of action, each confident that the other will do the same. Von Neumann generalized this to the general 2-person zero-sum game in his “Minimax Theorem”, which provides an elegant solution to such interactions. John Nash extended the theory to non-zero-sum games, i.e. games where “win-win” and “lose-lose” outcomes are possible in addition to the simple “win-lose” and “lose-win” ones. However, Nash’s solution fails to provide a “rational outcome” in all cases, and one such case mirrors a very common human interaction, dubbed “The Prisoner’s Dilemma”.
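To make the minimax idea concrete, here is a tiny sketch in Python. The 2x2 payoff matrix is my own illustrative example, not one from von Neumann's work. The row player maximizes his worst-case payoff, the column player minimizes his worst-case loss, and when the two security levels coincide the game has a "saddle point" that neither rational player can improve on unilaterally. (Von Neumann's theorem guarantees the two values always coincide once mixed, i.e. randomized, strategies are allowed; this pure-strategy example happens to have a saddle point already.)

```python
# A hypothetical 2-person zero-sum game. M[i][j] is the row player's payoff
# when row plays strategy i and column plays strategy j; the column player
# receives -M[i][j], so the total winnings are fixed at zero.
M = [[3, 1],
     [0, -1]]

# Row player's security level: pick the row whose worst outcome is best.
maximin = max(min(row) for row in M)

# Column player's security level: pick the column whose best outcome for the
# row player (i.e. worst for the column player) is smallest.
minimax = min(max(M[i][j] for i in range(len(M)))
              for j in range(len(M[0])))

# Here maximin == minimax == 1: a saddle point at (row 0, column 1).
# Like the cake-cutting procedure, each player can announce this strategy
# in advance and still lose nothing.
print(maximin, minimax)
```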
Consider the following game, called the “Prisoner’s Dilemma”. Two players make a simultaneous, independent choice between DEFECT and COOPERATE. No communication of any sort is allowed between the players. The payoffs associated with the moves are shown below (each cell lists the payoff to player-1, then to player-2):

|                     | player-2 COOPERATES | player-2 DEFECTS |
|---------------------|---------------------|------------------|
| player-1 COOPERATES | (3, 3)              | (0, 5)           |
| player-1 DEFECTS    | (5, 0)              | (1, 1)           |
The rational outcome, from the perspective of each player, is to DEFECT, because whatever the other player does, the payoff for DEFECT is higher than for COOPERATE. Consider player-1: if he defects, he earns 5 or 1 (for player-2 cooperating or defecting, respectively), whereas if he cooperates he earns only 3 or 0 in the same situations. From his perspective, it does not make sense to cooperate. No matter what the other player plays, it pays player-1 to DEFECT. The same analysis holds for player-2. The result is that both players DEFECT, locking them into the low (1, 1) payoff. Individual rationality leads to a worse outcome, whereas both would benefit if both COOPERATED and earned the (3, 3) payoff. This is the “Prisoner’s Dilemma”. Nash’s solution yields the sub-optimal (1, 1) payoff of mutual defection.
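The dominance argument above can be sketched in a few lines of Python, using this article's payoff numbers:

```python
# Payoff values from the discussion above: each key is (my_move, their_move),
# each value is (my_score, their_score). "C" = COOPERATE, "D" = DEFECT.
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_reply(their_move):
    """Return the move that maximizes my payoff against a fixed opponent move."""
    return max("CD", key=lambda my: PAYOFF[(my, their_move)][0])

# DEFECT dominates COOPERATE: it is the best reply whatever the opponent plays,
# yet mutual defection (1, 1) is worse for both than mutual cooperation (3, 3).
print(best_reply("C"), best_reply("D"))  # D D
```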
We encounter such situations very often in our lives. Should I pump as much groundwater as I can, without caring for the groundwater level? If I don’t do it, somebody else will anyway. Should I catch as much fish as I can in the pond? If not me, somebody else will do so anyway. Why shouldn’t I throw trash on the roads … after all, everybody does it? What difference does it make if I hold back? It makes sense to DEFECT and act purely in self-interest. This leads to the “tragedy of the commons”, where individuals acting independently in their own self-interest end up harming the common good. We in India are no strangers to such tragedies!
How do we get out of such a situation? What kind of strategy can break out of mutual defection, i.e. non-cooperation? It turns out that the key to the solution is repeated interaction: when the players play multiple games in sequence, that alone (along with a few other conditions) is sufficient for cooperation to emerge spontaneously! The “iterations” provide a plausible solution to the dilemma.
Robert Axelrod, a professor of political science and public policy at the University of Michigan, took an innovative approach to this problem. He held a competition in 1980, inviting participants to submit computer programs to play the iterated “Prisoner’s Dilemma”. There were 15 entries, including RANDOM, ALWAYS-DEFECT, TIT-FOR-TAT and other, more sophisticated algorithms. The programs played repeatedly against each other and against themselves, five matches of 200 moves each. When the resulting scores were compared, the simplest program of all, the 5-line TIT-FOR-TAT, emerged as the winner. All it does is cooperate on the first move, and then copy what the other player did in the previous round: if the other player COOPERATED in the previous round, it COOPERATES next time; if the other player DEFECTED, it DEFECTS next time.
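TIT-FOR-TAT really is only a few lines. Here is a minimal sketch of the strategy and one iterated match, using this article's payoff numbers and Axelrod's 200-move match length (ALWAYS-DEFECT is included for contrast; the function names are my own):

```python
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    """Cooperate on the first move, then copy the opponent's previous move."""
    return their_history[-1] if their_history else "C"

def always_defect(my_history, their_history):
    """Defect unconditionally."""
    return "D"

def play_match(strat_a, strat_b, rounds=200):
    """Score one iterated Prisoner's Dilemma match between two strategies."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a(hist_a, hist_b)   # moves are chosen simultaneously,
        move_b = strat_b(hist_b, hist_a)   # based only on past rounds
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play_match(tit_for_tat, tit_for_tat))    # (600, 600)
print(play_match(tit_for_tat, always_defect))  # (199, 204)
```

Against itself, TIT-FOR-TAT scores 600 per player (200 rounds of mutual cooperation). Against ALWAYS-DEFECT it narrowly loses the match (199 vs 204), but it gets exploited only once before retaliating, which is exactly the behaviour discussed below.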
A follow-up second round of the contest was held after the detailed results and the code for all 15 entries were published. 63 entries were submitted in the second round, and once again TIT-FOR-TAT emerged as the winner! Axelrod wrote a seminal paper and a book analysing the contest and why TIT-FOR-TAT kept winning. The conclusions are worth their weight in gold, and offer immense insight into the behaviour and strategies required for cooperation. The key attributes of a cooperative, winning strategy are listed below. It is striking indeed to see how much of this is “common sense” that falls under the realm of “ethics” and “morality” when viewed this way. I view these as no different from “life lessons”:
- The key to cooperation is repeated interaction. When interactions repeat, each move casts a shadow onto the future, and the players learn about each other and can predict future behaviour. If I deceive you this time, I’m sure to receive payback in kind in the future. A one-off interaction leaves no such possibility and hence leads to the sub-optimal (DEFECT, DEFECT) equilibrium. That is why you buy from the same shop, the same service provider and the same vegetable vendor … you have created a historical context, and can better predict the behaviour of the other player.
- Almost all of the top-scoring strategies were “nice”, meaning they never DEFECTed first. A DEFECTION sets off a cycle of mistrust and locks both parties into mutual defection and distrust. The temporary benefit accrued from a DEFECTION causes irreparable damage to your reputation, invokes retaliatory defections from your counterparts, and produces a cumulative loss in the future.
- TIT-FOR-TAT and the other high-scoring strategies are “retaliatory”, meaning they punish a DEFECTION by retaliating with DEFECT in the next round. This is very important: any strategy that fails to do so exposes itself to exploitation. Ignore defections at your peril, as non-retaliation invites further defections from the counterpart. You should send a message that makes it “unprofitable” for the other party to exploit you. The “turn the other cheek” advice doesn’t work in the real world!
- A variation of TIT-FOR-TAT that was “forgiving”, i.e. that tolerated an occasional DEFECTION, was found to be even more effective than regular TIT-FOR-TAT. Forgiveness is important for escaping defection “echo” effects, where both parties get locked into mutual DEFECTION because of a single past defection. A forgiving strategy can occasionally overlook a defection and still COOPERATE, restoring the cycle of trust and cooperation. And while it is easy to advise and hard to act on, “giving way” once in a while when arguing with your spouse, or when dealing with a customer, may soothe nerves and restore the relationship.
- The most important property of successful strategies like TIT-FOR-TAT is “clarity”, or “predictability”. Because the strategy is so simple (no randomness or devious probabilistic behaviour), it is very easy for your opponent to figure out how you will react and respond to their moves. This helps them set their own strategy, establish cooperation, and avoid getting surprised. The lesson for us mortals: it does not pay to hold your cards close to your chest. Your principles in life and your interactions with your counterparts should be simple, consistent and easy for others to figure out. The resulting ability to “read you” inspires confidence in the other party and helps them cooperate. In fact, the RANDOM strategy had the biggest problem invoking COOPERATION from other strategies precisely because it was utterly unpredictable … the opponent had no basis on which to set up a consistent playing strategy, because RANDOM could not be relied upon 🙂 Avoid such randomness in your interpersonal and professional interactions.
- One clear indication of how healthy or effective a strategy is, is how it behaves when paired with itself. It is easy to see why TIT-FOR-TAT enjoys great success against itself: both copies COOPERATE on the first move, which sets off a virtuous cycle of cooperation. A RANDOM or exploitative strategy that surprises the opponent with occasional DEFECTIONS disturbs this harmony and engenders non-cooperation. Don’t be too competitive. Imagine a world filled with people like you (your thinking, your strategy). If you can do well against them, you are onto something positive. Do unto others as you would have them do unto you!
- Finally, the most important insight of all: TIT-FOR-TAT never outscores its opponent in any individual match (since it never DEFECTS first), yet it emerged as the overall winner among all strategies. It did this not by beating other strategies in individual interactions, but by eliciting cooperation from whichever strategy it was paired with. You don’t have to be better than everybody, or beat everybody … all you need is to encourage others to cooperate with you for mutual benefit. This is highly counter-intuitive at first, and dead against the conventional wisdom that “you need to be better than the rest to excel”, but it is the truth! The winner is on the podium because he/she benefited from cooperation with everyone (and benefited them in turn), not because he/she “beat” the other person through superior ability. I urge you to reflect on this…
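The defection “echo” and the value of forgiveness discussed in the list above are easy to see in code. In this sketch, one “accidental” defection on the first move (a contrived trigger of my own) locks two strict TIT-FOR-TAT players into an endless echo, while TIT-FOR-TWO-TATS, one concrete forgiving variant that retaliates only after two consecutive defections, absorbs the slight and restores cooperation:

```python
def tit_for_tat(their_history):
    """Strict: copy the opponent's previous move (cooperate on move one)."""
    return their_history[-1] if their_history else "C"

def tit_for_two_tats(their_history):
    """Forgiving: defect only after two consecutive opponent defections."""
    return "D" if their_history[-2:] == ["D", "D"] else "C"

def echo_after_accident(strategy, rounds=6):
    """Both players use `strategy`; player A's first move is forced to "D"
    (an accidental defection). Return the sequence of (a, b) move pairs."""
    hist_a, hist_b, trace = [], [], []
    for i in range(rounds):
        move_a = "D" if i == 0 else strategy(hist_b)
        move_b = strategy(hist_a)          # chosen simultaneously with move_a
        hist_a.append(move_a)
        hist_b.append(move_b)
        trace.append((move_a, move_b))
    return trace

# Strict TIT-FOR-TAT bounces the lone defection back and forth forever:
# (D,C), (C,D), (D,C), (C,D), ...
print(echo_after_accident(tit_for_tat))

# The forgiving variant overlooks it, and mutual cooperation resumes at once:
# (D,C), (C,C), (C,C), ...
print(echo_after_accident(tit_for_two_tats))
```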