The pros and cons of collaboration

2022-08-22 17:17:38

Author: Vitalik Buterin

Original Title: "Coordination, Good and Bad"

Published on: September 11, 2020

Coordination, the ability of large groups of actors to work together for their common benefit, is one of the most powerful forces in the universe. It is the difference between a king comfortably ruling a country as an oppressive dictator and the people rising up together to overthrow him. It is the difference between letting global temperatures rise by 3-5°C and coordinating to keep the rise far smaller. Coordination is what allows companies, nations, and any sufficiently large social organization to function at all.

Coordination can be improved in many ways: faster information dissemination, better norms for identifying which behaviors count as cheating along with more effective punishments, stronger and smarter organizations, tools like smart contracts that enable interaction in low-trust environments, governance technologies (voting, shares, decision markets...), and more. Indeed, we see progress on coordination problems every decade.

But coordination also has a philosophically counterintuitive dark side: while it is far better for everyone if everyone coordinates with everyone else than if everyone acts only for themselves, it does not follow that each individual step toward more coordination is beneficial. If coordination increases in an unbalanced way, the result can easily be harmful.

We can picture this as a map, though in reality the map has many, many dimensions rather than the two drawn here.

[image]

The lower-left corner, everyone for themselves, is where we do not want to be. The upper-right corner, total cooperation, is ideal but likely unattainable. The vast middle ground, however, is far from a smooth slope upward: it contains many reasonably safe and productive places where we might want to settle, and many deep, dark pits to avoid.

Note: "Hobbesian" refers to the view that human behavior is driven by selfishness and that society in its natural state is an unrestrained, brutal war of all against all. The term comes from 'Leviathan' by the 17th-century English political philosopher Thomas Hobbes.

What are the dangerous forms of partial coordination, where some coordinate with a particular group but not with everyone else, that lead us into one of those pits? This is best illustrated with examples:

  • Citizens of a country bravely sacrificing for their nation's interests during a war… and that country is Germany or Japan during World War II.
  • Lobbyists bribing politicians in exchange for the adoption of the lobbyists' biased policies.
  • Someone selling their vote in an election.
  • All product sellers in a market colluding to raise prices simultaneously.
  • The large miners of a blockchain colluding to launch a 51% attack.

In all these cases, we see a group of people coming together and cooperating, but at great cost to a group outside their circle of coordination, and thus at great net cost to the world. In the first case, the victims are everyone who suffered the aggression of the countries in question; they sat outside the circle of coordination and bore tremendous losses. In the second and third cases, it is the voters and the people affected by the decisions of the corrupt politicians; in the fourth, the customers; in the fifth, the non-participating miners and the users of the blockchain. This is not an individual defecting against a group; it is a group defecting against a broader group, often the whole world.

This kind of local coordination is often called collusion, but note that the range of behaviors we are discussing is quite broad. In ordinary usage, "collusion" tends to describe relatively symmetrical relationships, but many of the cases above are strongly asymmetrical. Even an extortionate relationship ("vote for my preferred policy or I will publicly expose your affair") is collusion in this sense. In the rest of this article, we will use "collusion" (or perhaps "conspiracy" fits better) to refer to this kind of unwanted coordination.

Assess Intent, Not Action

An important property of the milder cases of collusion is that one cannot determine whether an action is part of an unwanted collusion merely by looking at the action itself. The reason is that a person's action is the result of their internal knowledge, goals, and preferences interacting with externally imposed incentives; as a result, the actions people take when colluding often overlap with the actions people take voluntarily (or while cooperating in benign ways).

For example, consider the case of collusion among sellers (an antitrust violation). Operating independently, three sellers might each price a product somewhere between $5 and $10, the range reflecting the sellers' internal costs, customers' differing willingness to pay, supply-chain issues, and other factors. But if the sellers collude, they might set prices between $8 and $13, a range that again reflects different possible internal costs and other hard-to-see variables. If you see someone selling the product for $8.75, have they done something wrong? Without knowing whether they coordinated with the other sellers, you cannot tell! Passing a law forbidding prices above $8 would be a bad idea: perhaps there are legitimate reasons prices must be high right now. But a law against collusion, if successfully enforced, yields the ideal outcome: you get the $8.75 price if the price really needs to be that high to cover sellers' costs, but not if the factors driving prices up are naturally low.
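The ambiguity can be shown with a toy simulation (all numbers invented for illustration): a high-cost honest seller and a low-cost colluding seller can land on exactly the same price, so observing the price alone distinguishes nothing:

```python
import random

def observed_price(colluding: bool, cost: float) -> float:
    """Toy pricing model: a seller charges cost plus a competitive markup,
    and colluders add a coordinated premium on top. The numeric ranges
    are illustrative, not taken from the article."""
    markup = random.uniform(0.0, 2.0)
    premium = 3.0 if colluding else 0.0
    return cost + markup + premium

random.seed(0)
# An honest seller with cost $7 prices in [$7, $9]; a colluding seller
# with cost $5 prices in [$8, $10]. The ranges overlap around $8-$9.
independent = [observed_price(False, cost=7.0) for _ in range(1000)]
colluding = [observed_price(True, cost=5.0) for _ in range(1000)]

overlap = [p for p in independent if 8.0 <= p <= 9.0]
print(f"honest prices that also fit the collusion range: {len(overlap)} of 1000")
```

An outside observer who sees a price of $8.75 cannot tell which regime produced it; only knowledge of the coordination (or its absence) resolves the question.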

The same applies to bribery and vote-selling: very likely some people vote for the orange party legitimately, while others vote for the orange party because they were paid to. From the perspective of those setting the rules of the voting mechanism, they do not know in advance whether the orange party is good or bad. What they do know is that votes reflecting voters' honest opinions tend to work out well, while votes that voters can freely buy and sell work out very poorly. This is because vote-selling is a tragedy of the commons: each voter captures only a tiny share of the benefit of voting correctly, but receives the full bribe if they vote the way the briber wants. Hence the bribe needed to sway each voter is far smaller than the actual cost the briber's preferred policy imposes on the public. Voting systems that allow vote-selling therefore quickly collapse into plutocracy (rule by the wealthy).
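The arithmetic behind this tragedy of the commons can be sketched in a few lines (all numbers are illustrative, not from the article):

```python
def min_total_bribe(num_voters: int, public_harm: float, epsilon: float = 0.01) -> float:
    """Toy model of vote buying. Each voter internalizes only
    public_harm / num_voters of the damage from the briber's policy,
    so any bribe slightly above that share is individually rational
    to accept; the briber only needs a bare majority."""
    per_voter_stake = public_harm / num_voters
    voters_needed = num_voters // 2 + 1  # bare majority
    return voters_needed * (per_voter_stake + epsilon)

# Even in the worst case for the briber, where all of the harm falls on
# the voters themselves, a policy causing $1,000,000 of harm among
# 10,000 voters can be bought for roughly half its cost to the public;
# if most of the harm falls on people who cannot vote, it is far cheaper.
total = min_total_bribe(10_000, 1_000_000.0)
print(f"total bribe needed: ${total:,.2f} vs public harm: $1,000,000")
```

The gap widens as the affected population grows relative to the electorate, which is exactly why vote markets price policies far below their social cost.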

Understanding Game Theory

We can go deeper and view this issue through the lens of game theory. In the version of game theory that assumes individual choice, where each participant is assumed to decide independently and there is no possibility of groups of agents working as one for their mutual benefit, there are mathematical proofs that at least one stable Nash equilibrium must exist in any game, and mechanism designers have very wide latitude to design games that achieve specific outcomes. But in the version of game theory that allows coalitions to cooperate (that is, to collude), called cooperative game theory, we can prove that there is a large class of games with no stable outcome (no "core", in the theory's terminology). In such games, whatever the current state of affairs, some coalition can always profitably deviate from it.

Note: The conditions under which a cooperative game has a nonempty core are characterized by the Bondareva–Shapley theorem.

An important class of these inherently unstable games is majority games. A majority game is formally described as a game of N agents in which any subset of more than half of them can capture a fixed reward and split it among themselves, a setup eerily similar to many situations in corporate governance, politics, and other spheres of human life. That is, if there is a fixed pool of resources and some currently established mechanism for distributing them, it is inevitable that 51% of the participants can conspire to seize control: whatever the current configuration, some conspiracy is always profitable for its participants. Yet that conspiracy is in turn vulnerable to new conspiracies, possibly combining previous conspirators with previous victims... and so on.
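A minimal sketch of why the core of a majority game is empty, using three players and a reward of 1 (a toy model, not from the article):

```python
from itertools import combinations

REWARD = 1.0  # the fixed pool any majority coalition can seize

def blocking_coalition(allocation):
    """In a 3-player majority game, any pair of players can capture the
    full reward for themselves. A pair therefore 'blocks' an allocation
    whenever its two members jointly receive less than REWARD."""
    for pair in combinations(range(3), 2):
        if sum(allocation[i] for i in pair) < REWARD - 1e-9:
            return pair
    return None

# The whole allocation sums to 1, so at least one of the three pairs
# gets at most 2/3 and can profitably deviate: no allocation is stable.
for alloc in [(1/3, 1/3, 1/3), (0.5, 0.5, 0.0), (1.0, 0.0, 0.0)]:
    print(alloc, "blocked by coalition", blocking_coalition(alloc))
```

Every candidate allocation is blocked by some pair, which is the instability the text describes: each winning conspiracy invites a new conspiracy of insiders and victims.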

[image]

This fact, the instability of majority games under cooperative game theory, is arguably a severely underrated simplified general mathematical model of why there may well be no "end of history" in politics, and no system that proves fully satisfactory; I personally believe it is far more useful than the more famous Arrow's theorem.

Note: Arrow's Theorem, also known as Arrow's Paradox, states that there is no ideal election mechanism that simultaneously satisfies three fairness principles: Pareto efficiency, non-dictatorship, and independence of irrelevant alternatives.

Please note again that the core dichotomy here is not "individual vs. group"; for a mechanism designer, "individual vs. group" is surprisingly easy to handle. The challenge lies in "group vs. broader group."

Decentralization as Anti-Collusion

However, this line of thought also yields a brighter and more actionable conclusion: if we want to create stable mechanisms, one important ingredient is finding ways to make collusion, especially large-scale collusion, harder to organize and harder to maintain. In voting, we have the secret ballot: ensuring voters have no way to prove to a third party how they voted, even if they want to (MACI is a project attempting to use cryptography to extend secret-ballot principles to online environments[1]). This breaks trust between voters and bribers and severely limits the scope for unwanted collusion. In antitrust and other cases of corporate wrongdoing, we often rely on whistleblowers, even rewarding them, explicitly incentivizing participants in a harmful collusion to defect. And in broader public infrastructure, we have the very important concept of decentralization.

A naive view is that decentralization is valuable because it reduces the risk of single points of technical failure. In traditional "enterprise" distributed systems this is often true, but in many other cases we know it is not enough to explain what is going on. Blockchains are instructive here. A large mining pool publicly showing how it internally distributes its nodes and network dependencies does little to calm community fears about mining centralization. But the picture below, showing 90% of Bitcoin's hash power sitting on the same conference panel, is indeed quite frightening:

[image]

But why is this picture scary? From a "decentralization as fault tolerance" perspective, large miners being able to talk to each other causes no harm. But if we view "decentralization" as a barrier against harmful collusion, the picture becomes quite alarming, because it shows that those barriers are weaker than we might have imagined. In reality, the barriers are still far from zero: those miners can easily coordinate technically and are likely all in the same WeChat group, but this does not mean Bitcoin is "practically no better than a centralized company."

So, what are the remaining barriers to collusion? Some major obstacles include:

  • Moral barriers: In 'Liars and Outliers,' Bruce Schneier reminds us that many "security systems" (locks, warning signs threatening punishment...) also serve a moral function, reminding potential wrongdoers that they are about to commit a serious crime and that, if they want to be good people, they should not. Decentralization arguably serves this function too.
  • Internal negotiation failures: Individual companies may start demanding concessions in exchange for joining the collusion, which can cause the negotiation to stall outright (see the "holdout problem" in economics).
  • Counter-coordination: Because the system is decentralized, participants not involved in the collusion can easily fork away from the colluding attackers and continue running the system from there. The barrier for users to join the fork is low, and the intent behind decentralization creates moral pressure in favor of joining it.
  • Risk of defection: It is much harder for five companies to join together to do something harmful than to join together for an uncontroversial or benign purpose. The five companies do not know each other well, so there is a risk that one of them refuses to participate and blows the whistle quickly, and the participants have a hard time assessing that risk. Individual employees within the companies may blow the whistle too.

In summary, these barriers are indeed substantial, often substantial enough to stop potential attacks, even when the same five companies are perfectly capable of quickly coordinating to do something legitimate. Ethereum miners, for example, are perfectly capable of coordinating to raise the gas limit, but that does not mean they could just as easily collude to attack the chain.

The experience of blockchains shows that designing protocols as institutionally decentralized architectures is often very valuable, even when most activity is known to be dominated by a handful of companies. The idea is not limited to blockchains and can be applied in other contexts as well (e.g., see the antitrust applications[2]).

Forking as Counter-Coordination

But we cannot always prevent harmful collusion from occurring in the first place. To handle the cases where it does occur, it would be better to build systems that are more robust against it: costlier for the colluders, and easier for the system to recover from.

We can achieve this through two core operating principles: (1) supporting counter-coordination, and (2) skin in the game. The idea behind counter-coordination is this: we know we cannot design systems to be passively robust against collusion, in large part because there are extremely many ways to organize a collusion and no passive mechanism can detect them all; but what we can do is respond to collusions actively and counterattack.

Note: The term "skin in the game" is often traced to horse racing, where the owner of a horse has "skin" (a stake) in the race: a direct personal stake in the outcome.

In digital systems such as blockchains (this also applies to more mainstream systems like DNS), a major and crucially important form of counter-coordination is forking.

[image]

If a system is taken over by a harmful coalition, the dissenters can come together and create an alternative version of the system with (mostly) the same rules, minus the attacking coalition's power over it. In an open-source software context, forking is very easy; the main challenge in creating a successful fork is usually gathering the legitimacy (a kind of game-theoretic "common knowledge") needed to get everyone who disagrees with the main coalition's direction to follow you.

This is not merely theoretical; it has been done successfully, most famously in the Steem community's resistance to a hostile takeover attempt, which led to a new blockchain called Hive in which the original adversaries hold no power.

Markets and Skin in the Game

Another class of strategies for resisting collusion is the concept of "skin in the game." Here, "skin in the game" essentially refers to any mechanism that holds individual contributors to a decision personally accountable for their contribution. If a group makes a bad decision, those who approved it must suffer more than those who tried to oppose it. This avoids the "tragedy of the commons" inherent in voting systems.
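As a hypothetical sketch (the names, vote structure, and 20% penalty rate are invented for illustration), a skin-in-the-game rule might slash the stake of everyone who approved a decision that later turned out badly, while leaving objectors untouched:

```python
from dataclasses import dataclass, field

@dataclass
class Voter:
    stake: float
    votes: dict = field(default_factory=dict)  # proposal_id -> True (approve) / False (object)

def settle(voters, proposal_id: str, outcome_good: bool, penalty_rate: float = 0.2):
    """Hypothetical accountability rule: if the decision turns out badly,
    slash the stake of those who approved it; objectors keep everything."""
    if outcome_good:
        return
    for v in voters:
        if v.votes.get(proposal_id) is True:
            v.stake *= 1.0 - penalty_rate

alice = Voter(stake=100.0, votes={"prop-1": True})   # approved the bad decision
bob = Voter(stake=100.0, votes={"prop-1": False})    # objected
settle([alice, bob], "prop-1", outcome_good=False)
print(alice.stake, bob.stake)  # approvers pay; objectors do not
```

The design choice here is that losses are conditioned on one's individual vote, not just on group membership, which is exactly what removes the free-riding incentive in the vote-selling analysis above.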

Forking is a powerful form of counter-coordination precisely because it brings skin in the game. In Hive, the community fork of Steem that cast off the hostile takeover attempt, the coins that had been used to vote in favor of the takeover were largely removed in the new fork, and the key individuals behind the attack were personally affected.

Markets in general are very powerful tools precisely because they maximize skin in the game. Decision markets (prediction markets used to guide decisions, also called futarchy[3]) attempt to extend this benefit of markets to organizational decision-making. That said, decision markets only solve part of the problem; in particular, they cannot tell us which variables to optimize for in the first place.
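A decision market can be caricatured in a few lines: run one conditional prediction market per option and pick the option whose market forecasts the best value of the chosen metric. The prices below are made-up inputs; a real futarchy also needs a welfare metric, market settlement, and the unwinding of trades on the decision not taken:

```python
def futarchy_choice(conditional_prices: dict) -> str:
    """Pick the decision whose conditional prediction market implies the
    highest expected value of the target metric. A toy sketch: real
    decision markets must also define the metric being forecast and
    revert all trades in the market for the option not chosen."""
    return max(conditional_prices, key=conditional_prices.get)

# Market-implied value of a welfare metric, conditional on each decision
# (illustrative numbers):
prices = {"adopt policy A": 0.62, "keep status quo": 0.55}
print("decision:", futarchy_choice(prices))
```

Note the caveat from the text survives in code form: the function chooses between options given a metric, but nothing in the mechanism says which metric to use.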

Note: Futarchy is a new form of government proposed by economist Robin Hanson, where elected officials set policies, and the public bets on different policies through speculative markets to produce the most effective choices. See V. Buterin's article 'On Collusion'[4].

Structured Cooperation

All of this gives us an interesting view of what people who build social systems are doing. One of the goals of building an effective social system is, in large part, determining the structure of cooperation: which groups, in which configurations, can come together to advance their collective goals, and which cannot?

[image]

[image]

Different cooperation structures yield different outcomes

Sometimes, more cooperation is good: when people can work together to solve their problems collectively, everyone is better off. Sometimes, more cooperation is dangerous: a subset of participants may coordinate to disenfranchise everyone else. And sometimes, more cooperation is necessary for yet another reason: to enable the broader community to counter-coordinate against a collusion attacking the system.

In all three cases, different mechanisms can serve these ends. Of course, preventing communication outright is very hard, and making cooperation work perfectly is also very hard; but between the two extremes there are many options that can produce powerful effects.

Here are some possible techniques for structuring cooperation:

  • Privacy-protecting technologies and norms.
  • Techniques that make it difficult to prove how you acted (secret voting, MACI, and similar technologies).
  • Deliberate decentralization, distributing control of a mechanism to a widely known group that does not coordinate well.
  • Decentralization of physical space, separating different functions (or different shares of the same function) to different locations (e.g., see Samo Burja on the connection between urban decentralization and political decentralization).
  • Decentralization between constituencies based on roles, separating different functions (or different shares of the same function) to different types of participants (e.g., in blockchain, "core developers," "miners," "token holders," "application developers," "users").
  • Schelling points, allowing large groups to quickly coordinate around a common path forward. Complex Schelling Points may even be implemented in code (e.g., how to recover from a 51% attack).
  • Using a common language (or splitting control among multiple supporters using different languages).
  • Per-person voting rather than per-(coin/share) voting, to greatly increase the number of people who would need to collude to sway a decision.
  • Encouraging and relying on defectors to alert the public to impending collusion.

Note: Schelling points were introduced by the American economist Thomas Schelling in 'The Strategy of Conflict': without prior communication, people tend to converge on a prominent focal point if they know others are trying to do the same. For example, two people who must meet in New York without communicating are very likely to choose Grand Central Station, a natural Schelling point.
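The idea of a Schelling point "implemented in code," as in recovering from a 51% attack, can be caricatured as a deterministic fork-choice rule that every honest node applies independently. The rule and the `attacker` flag below are hypothetical, and identifying which fork is attacker-controlled is of course the hard part in practice:

```python
def canonical_fork(forks):
    """Toy 'Schelling point in code': every honest node applies the same
    deterministic rule to the set of competing forks after an attack, so
    all nodes converge on one fork without communicating. Hypothetical
    rule: prefer forks not flagged as attacker-controlled, then the
    longest, then the lexicographically smallest id as a tiebreaker."""
    return min(forks, key=lambda f: (f["attacker"], -f["length"], f["id"]))

forks = [
    {"id": "chain-a", "length": 1002, "attacker": True},   # attacker's longer chain
    {"id": "chain-b", "length": 1000, "attacker": False},  # honest minority chain
]
print(canonical_fork(forks)["id"])  # every node independently picks chain-b
```

Because the rule is common knowledge, each node can follow it confident that every other honest node will land on the same choice, which is exactly what makes it a Schelling point.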

None of these strategies is perfect, but each can be used in various situations with varying degrees of success. They can and should also be combined with mechanism designs that make harmful collusion less profitable and riskier; here, "skin in the game" is a very powerful tool. Which combination works best ultimately depends on your specific use case.
