My Research Papers:
- [Job Market Paper] Framing Matters: Sanctioning in Public Good Games with Parallel Bilateral Relationships. Find the current version here.
- Catherine Moon and Vincent Conitzer. Role Assignment for Game-Theoretic Cooperation. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI-16), pp. 416-423, New York City, NY, USA, 2016. Find the official conference version here; the long version here.
- Catherine Moon and Vincent Conitzer. Maximal Cooperation in Repeated Games on Social Networks. In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence (IJCAI-15), pp. 216-223, Buenos Aires, Argentina, 2015. Find the official conference version here; the long version here.
Cooperation allows society as a whole to attain better social outcomes than its members can achieve individually. Even though proactive action would generate long-term collective benefits internationally, individual countries struggle to act cooperatively in pollution abatement agreements, for reasons such as short-term upfront costs and the fear of being the workhorse. Oftentimes, identifying potential cooperators and carefully designing structural arrangements can promote cooperation. For example, countries trading carbon permits can be viewed as a pairing in which low-cost polluters make sacrifices within the pollution reduction game in return for high-cost polluters making sacrifices within the monetary payment game. I use game theory and computational methods to explore how to identify and promote sustainable cooperation in a multiagent system. Broadly, my research has three wings: (1) analyzing the collective-level dynamics that unravel from a given situation, (2) finding arrangements that are conducive to cooperation, and (3) understanding (and eventually, I hope, filling) the gap between game-theoretically rational behaviors and observed behaviors affected by confounding factors. Specifically, with game theory as a common lens through which to view and simplify the conflicting priorities each agent faces, I use the actions of rational, self-interested agents as a guide to the incentives underlying potential actions in the real world. This understanding enables me to algorithmically explore institutional arrangements under which robust, sustainable cooperation is possible. Further, I aim to understand how insights from game theory extend to behavior in the real world: as a first step, I conduct a human subject experiment in which agents face decisions between self- and group-interested behaviors, analyzed both through quantitative methods from statistics and machine learning and through qualitative measures from welfare economics.
Below, I describe my work (with Vincent Conitzer) in detail and outline my future research objectives, which center around extending game-theoretic insights to the real world and improving models by analyzing both experimental human behavior data and big data from online sources that record people's behavioral choices.
Understanding Cooperation and Collective Level Dynamics
Members of a society face many potentially competing priorities; for example, their own self-interest often competes against societally beneficial group interest. Rational, self-interested agents, as studied in traditional game theory, want to maximize their own utility and cooperate only when group-interested actions align with that objective. When there is an immediate conflict between self- and group-interested behaviors, agents cooperate only if they face a high enough threat of subsequent punishment for uncooperative behavior. The first wing of my research is dedicated to understanding the collective-level dynamics of cooperative behavior that unfold in a given situation.
Understanding an agent’s incentive to cooperate becomes more complex when punishment is not as easy, due to characteristics such as delays in information transfer caused by social distance, and asymmetries across social relationships in the costs, benefits, and information associated with each interaction. In my article, Maximal Cooperation in Repeated Games on Social Networks, published at IJCAI’15, Vincent Conitzer and I study relationships with these two characteristics, represented by a directed network of agents, and use game theory and algorithmic iterative elimination to efficiently identify the set of agents with sufficient incentive to cooperate. Understanding the interdependencies and the consequent collective dynamics helps negotiation among the enforceable parties reach a cooperative agreement. For illustration, consider China, South Korea, and Japan and a pollution abatement agreement. Geographically, South Korea and Japan share the East Sea, and China lies to the west of both countries. Consequently, China’s pollution affects both Korea and Japan, but, due to the dominant west wind, China is not affected by theirs. Korea and Japan, however, share a sea, so their pollution does affect each other. In this setting, since China receives no benefit but only pays a cost for the cooperative action of reducing pollution, China would not cooperate. Korea and Japan, on the other hand, depending on the cost and benefit structure, may sustain cooperation. Identifying the set of cooperating agents (either nobody, or Korea and Japan, in this case) provides pivotal information for interested agents to form coalitions effectively.
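The iterative elimination idea can be illustrated with a minimal sketch. The payoff numbers below are hypothetical, and the paper's model is a repeated game with discounting, which this one-shot simplification omits: here an agent is simply removed when the benefit it receives from the remaining cooperators no longer covers its cost of cooperating, and removals repeat until a fixed point is reached.

```python
def maximal_cooperators(benefit, cost):
    """Iteratively eliminate agents who lack an incentive to cooperate.

    benefit[i][j]: benefit agent j receives when agent i cooperates
                   (directed: i -> j need not equal j -> i)
    cost[j]:      agent j's cost of taking the cooperative action

    Simplified rule: agent j stays only if the total benefit received
    from the other remaining cooperators covers its own cost.
    """
    agents = set(cost)
    changed = True
    while changed:
        changed = False
        for j in list(agents):
            received = sum(benefit.get(i, {}).get(j, 0)
                           for i in agents if i != j)
            if received < cost[j]:
                agents.remove(j)   # insufficient incentive: eliminate
                changed = True     # re-check everyone after any removal
    return agents

# The China / Korea / Japan illustration from the text:
benefit = {
    "China": {"Korea": 3, "Japan": 3},  # west wind: China's abatement helps both
    "Korea": {"Japan": 3},              # shared sea: Korea and Japan
    "Japan": {"Korea": 3},              # affect each other
}
cost = {"China": 2, "Korea": 2, "Japan": 2}
print(sorted(maximal_cooperators(benefit, cost)))  # -> ['Japan', 'Korea']
```

China receives no benefit from anyone, so it is eliminated in the first pass; Korea and Japan each still receive enough from the other to stay, matching the outcome described above.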
Through computer simulations varying the value of cooperative relationships and the degree of discounting, we observe a phase transition: a sharp drop in cooperative behavior. We mathematically derive an analytical expression approximating this fine line between cooperative and uncooperative societies. This insight into the subtleties of cooperation naturally leads to the second wing of my research: how to structure arrangements for sustainable cooperation.
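A textbook-style illustration of such a knife-edge (not the paper's exact model or expression) already appears in a two-agent repeated game with grim-trigger punishment: cooperation is sustainable exactly when the discounted stream of future benefits outweighs the one-shot gain from deviating, giving a critical discount factor of c / (b + c) for an illustrative per-period benefit b and deviation gain c.

```python
def cooperation_sustainable(b, c, delta):
    """Grim-trigger style condition (textbook simplification): the
    one-shot deviation gain c must not exceed the discounted future
    benefit stream b * delta / (1 - delta)."""
    return delta * b / (1 - delta) >= c

b, c = 3.0, 2.0
threshold = c / (b + c)  # analytic critical discount factor: 0.4 here
for delta in [0.30, 0.35, 0.39, 0.41, 0.45, 0.50]:
    print(delta, cooperation_sustainable(b, c, delta))
```

Sweeping the discount factor shows the sharp switch at the threshold: everything below 0.4 fails, everything above sustains cooperation, with no gradual middle ground.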
Institutional Arrangements for Sustainable Cooperation
To ensure cooperation among self-interested agents, it is crucial to build into the relationship a sufficiently high threat of punishment for uncooperative behavior. Oftentimes, in the literature, games are studied in isolation, even when, in reality, the same set of agents interacts in multiple different games together. Looking at a problem in isolation can be limiting. Even if no single game independently gives an agent sufficient incentive to play the "cooperative" action, there may be hope for cooperation when multiple games with compensating asymmetries are put together. In this situation, agents have an incentive to cooperate in all the games they participate in, as long as the losses in some games are offset by gains in the others, and cooperation in the games with gains is conditional on cooperative actions in the games with losses. The second wing of my research aims to discover optimal institutional structures and arrangements that promote sustainable cooperation.
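The linking logic reduces to a simple aggregate check, sketched below with hypothetical net payoffs: a game that is unattractive in isolation can become acceptable once it is bundled with a compensating game and cooperation across the bundle is made conditional.

```python
def linked_cooperation_ok(net_payoffs_by_game):
    """Illustrative linking condition (hypothetical simplification):
    an agent accepts the bundle of games iff losses in some games are
    offset by gains in the others, i.e., the agent's total net payoff
    from cooperating across all linked games is nonnegative."""
    return sum(net_payoffs_by_game) >= 0

# In isolation, a game with net payoff -1 gives no incentive to cooperate;
# bundled with a game yielding +2, cooperation in both can be sustained.
print(linked_cooperation_ok([-1]))     # False: the game alone
print(linked_cooperation_ok([-1, 2]))  # True: the linked bundle
```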
In Role Assignment for Game-Theoretic Cooperation, published at IJCAI’16, we formalize this setup as a problem of assigning roles within multiple projects to an overlapping set of agents. Using game-theoretic methods, we reduce the problem to a computational one and establish a worst-case bound on how long it takes to find a solution. We also provide an empirically useful integer program that solves role assignment instances very quickly. This project and its algorithms answer two important questions: (1) what institutional arrangement guarantees the most robust cooperation, and (2) what minimum subsidy is needed to encourage cooperation when it is not naturally possible.
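A toy brute-force version of the assignment question makes the subsidy output concrete. The paper's formulation is an integer program; the payoff matrix and helper below are purely illustrative: enumerate assignments of roles to agents and report the arrangement minimizing the subsidy needed to make every agent's total net payoff nonnegative, where a subsidy of zero means cooperation is naturally sustainable.

```python
from itertools import product

def min_subsidy_assignment(net_payoff, n_agents):
    """Toy brute force over role assignments (illustrative only).

    net_payoff[r][a]: agent a's net payoff from taking role r.
    An arrangement is sustainable when every agent's total across its
    assigned roles is nonnegative; any shortfall is the subsidy needed
    to make that agent willing to participate.

    Returns (assignment, subsidy), where assignment[r] is the agent
    given role r and subsidy is the minimum total top-up required.
    """
    best = None
    for assign in product(range(n_agents), repeat=len(net_payoff)):
        totals = [0.0] * n_agents
        for r, a in enumerate(assign):
            totals[a] += net_payoff[r][a]
        subsidy = sum(-t for t in totals if t < 0)
        if best is None or subsidy < best[1]:
            best = (assign, subsidy)
    return best

# Two roles, two agents, compensating asymmetries: giving both roles to
# agent 0 bundles a loss (-1) with a gain (+2), so no subsidy is needed.
print(min_subsidy_assignment([[-1, 3], [2, -2]], n_agents=2))
```

Replacing the exhaustive `product` loop with an integer program is what makes the real problem tractable at scale; the brute force here only serves to show what is being optimized.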
To understand the implications of extending these insights to real-world implementation, the final wing of my research attempts to experimentally understand the gap between the behavior of game-theoretically rational agents and that of agents in the real world.
Connecting Theory and Data on Agent Behavior
Real-world agents can (and often do) behave differently from the completely self-interested agents studied in game theory. Understanding this gap, and bringing that understanding back into the model, is important for better approximating cooperative dynamics.
In Framing Matters: Sanctioning in Public Good Games with Parallel Bilateral Relationships, Vincent Conitzer and I run a human subject experiment designed to better understand the behavioral relationship between sanctioning and cooperation in public good games. While prior studies find that sanctioning is a robust mechanism for enforcing cooperation in public good games, they only study structures that add external punishment stages. In real-world settings, however, adding an additional domain of interaction purely for punishment purposes, on top of existing relationships, can be difficult, and sanctioning may instead take place informally through other domains. For example, while it would be nearly impossible to introduce a new domain of interaction for punishment purposes among countries in international climate agreements, sanctioning could take place informally through trade. Though a simple Folk Theorem argument would predict cooperation as the likely outcome, it is important to ask whether these other domains, such as trade relationships, can still act as effective sanctioning tools, given that they are important in their own right.
To capture an important distinction in how potential sanctioning happens in the real world, we introduce a new experimental setting in which agents have "local" bilateral public good games representing other domains of interaction. Surprisingly, we find that "framing", that is, pointing out the possible use of these other domains as sanctioning tools, actually lowered cooperation in a statistically significant way, unlike in previous studies where the addition of explicit sanctioning stages led to higher contributions. Analyses based on machine learning and welfare economics corroborated that this framing led to a less cooperative mindset in the experiment. This suggests that understanding non-game-theoretical aspects, such as how societal norms develop, and planning how to present situations accordingly, is important on the road to sustainable cooperation. Results from this experimental work also open exciting paths toward future research. One is modifying the simpler game-theoretical model: for instance, what would happen in a setting where agents make costly punishment decisions while facing noise in observing intentions? By using data to verify insights and allowing interplay between data and theoretical models, I would like to better approximate real-world collective-level dynamics of cooperative behavior, which will in turn generate better insights for decisions toward sustainable cooperation.
Future Research: More on Cooperation and Reaching into Big Data
Much of my Ph.D. work views cooperation from a computational game theory perspective and provides insights on ways to promote cooperation. In the future, while keeping the cooperation theme, I intend to (1) further improve theoretical models of cooperation by examining the interplay between data and theory and (2) study how to incorporate non-game-theoretical factors (such as social norms) into models, so that I can reduce the gap between theory and practice and build theoretical models that better inform collective dynamics within organizations. I plan to pursue these goals not only by analyzing experimental data on how people behave in a given situation, but also by analyzing big data from online traces of how people have behaved.
1. Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence (IJCAI’15). IJCAI is a rank A* / rank 1 conference in computer science, specifically in artificial intelligence. In computer science, conferences are regarded more highly than journals, a characteristic specific to the field.
2. Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI’16). Details on IJCAI are provided in Footnote 1.