It comes with colossal opportunities, but also threats that are difficult to predict.[32] Notably, discussions among U.S. policymakers to block Chinese investment in U.S. AI companies also began at this time.[33] In our everyday lives, we store AI technology as voice assistants in our pockets[12] and as vehicle controllers in our garages.[11] It is the goal of this paper to shed some light on these, particularly on how the structure of preferences that results from states' understandings of the benefits and harms of AI development leads to varying prospects for coordination. Combining both countries' economic and technical ecosystems with government pressures to develop AI, it is reasonable to conceive of an AI race primarily dominated by these two international actors.

When looking at these components in detail, however, we see that the anticipated benefits and harms are linked to whether the actors cooperate with or defect from an AI Coordination Regime. Meanwhile, the harm that each actor can expect to receive from an AI Coordination Regime consists of both the likelihood that the actor itself will develop a harmful AI times that harm, as well as the expected harm of its opponent developing a harmful AI. This section defines suggested payoff variables that impact the theory and simulates the theory for each representative model based on a series of hypothetical scenarios. One significant limitation of this theory is that it assumes that the AI Coordination Problem will involve two key actors. The following subsection further examines these relationships and simulates scenarios in which each coordination model would be most likely.

In a security dilemma, each state cannot trust the other to cooperate. If security increases cannot be distinguished as purely defensive, this increases instability. The second technological revolution, it has been argued, caused World War II. An example of norm enforcement provided by Axelrod (1986: 1100) is of a man hit in the face with a bottle for failing to support a lynching in the Jim Crow South. In this example, each player has a dominant strategy. But, at various critical junctures, including the country's highly contentious presidential elections in 2009 and 2014, rivals have ultimately opted to stick with the state rather than contest it.

The Stag Hunt is a game in which the players must cooperate in order to hunt larger game, and with higher participation they are able to get a better dinner. In the most common account of this dilemma, which is quite different from Rousseau's, two hunters must decide separately, and without the other knowing, whether to hunt a stag or a hare. One hunter can catch a hare alone with less effort and less time, but it is worth far less than a stag and has much less meat. If they are discovered, or do not cooperate, the stag will flee, and all will go hungry. A day passes. From that moment on, the tenuous bonds keeping together the larger band of weary, untrusting hunters will break and the stag will be lost. This is taken to be an important analogy for social cooperation. This variant of the game may end with the trust rewarded, but it may also result in the trusting party alone receiving the full penalty, leading to a new game of revenge. One example, from Hume, addresses two individuals who must row a boat.

Table 14. Payoff variables for simulated Stag Hunt.
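To make the normal-form setup concrete, the sketch below encodes a two-player Stag Hunt in Python with illustrative payoff numbers (assumptions chosen only to respect the usual ordering, not values from the paper's tables) and enumerates its pure-strategy Nash equilibria.

```python
from itertools import product

# Illustrative Stag Hunt payoffs (row player, column player).
# Placeholder numbers satisfying: mutual stag > hare against stag >= mutual hare > stag against hare.
payoffs = {
    ("Stag", "Stag"): (4, 4),
    ("Stag", "Hare"): (0, 3),
    ("Hare", "Stag"): (3, 0),
    ("Hare", "Hare"): (2, 2),
}
strategies = ["Stag", "Hare"]

def is_nash(profile):
    """A profile is a pure Nash equilibrium if neither player can gain
    by unilaterally switching strategies."""
    r, c = profile
    r_pay, c_pay = payoffs[profile]
    best_row = all(payoffs[(alt, c)][0] <= r_pay for alt in strategies)
    best_col = all(payoffs[(r, alt)][1] <= c_pay for alt in strategies)
    return best_row and best_col

equilibria = [p for p in product(strategies, repeat=2) if is_nash(p)]
print(equilibria)  # [('Stag', 'Stag'), ('Hare', 'Hare')]
```

The game has two pure-strategy equilibria, the payoff-dominant (Stag, Stag) outcome and the safer (Hare, Hare) outcome, which is what distinguishes it from a Prisoner's Dilemma, where defection is a dominant strategy.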
The Stag Hunt is a story that became a game. The 18th-century political philosopher Jean-Jacques Rousseau famously described a dilemma that arises when a group of hunters sets out in search of a stag: to catch the prized male deer, they must cooperate, waiting quietly in the woods for its arrival. The Assurance Game is a generic name for the game more commonly known as the Stag Hunt, and many situations can be viewed through its lens. If both choose to leave the hedge, it will grow tall and bushy, but neither will be wasting money on the services of a gardener. The article states that the only difference between the two scenarios is that the localized group decided to hunt hares more quickly.

He found various theories being proposed, suggesting a levels-of-analysis problem. War is anarchic, and intervening actors can sometimes help to mitigate the chaos. The ultimate resolution of the war in Afghanistan will involve a complex set of interlocking bargains, and the presence of U.S. forces represents a key political instrument in those negotiations. Civilians and civilian objects are protected under the laws of armed conflict by the principle of distinction.

In the US, the military and intelligence communities have a long-standing history of supporting transformative technological advancements such as nuclear weapons, aerospace technology, cyber technology and the Internet, and biotechnology.[8] If truly present, a racing dynamic[9] between these two actors is a cause for alarm and should inspire strategies to develop an AI Coordination Regime between them. Put another way, the development of AI under international racing dynamics could be compared to two countries racing to finish a nuclear bomb if the actual development of the bomb (and not just its use) could result in unintended, catastrophic consequences.[25] In a particularly telling quote, Stephen Hawking, Stuart Russell, Max Tegmark, and Frank Wilczek foreshadow this stark risk: "One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand."

In short, the theory suggests that the variables that affect the payoff structure of cooperating with or defecting from an AI Coordination Regime determine which model of coordination we see arise between the two actors (modeled after normal-form game setups). As such, it will be useful to consider each model using a traditional normal-form game setup, as seen in Table 1. As a result, there is no conflict between self-interest and mutual benefit, and the dominant strategy of both actors would be to cooperate. Here, values are measured in utility. Together, the likelihood of winning and the likelihood of lagging sum to 1. This additional benefit is expressed here as P_(b|A)(A) · b_A.
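A minimal numerical sketch of this benefit term follows. All names and values in it are hypothetical placeholders, and the weighting of the benefit by the chance of winning or lagging in the race is one possible interpretation of the payoff variables described here, not the paper's exact formula.

```python
# Hypothetical payoff variables for Actor A; every value here is an assumption.
p_win = 0.6            # likelihood A "wins" (leads) the development race
p_lag = 1 - p_win      # likelihood A lags; the two sum to 1 by construction
p_beneficial_A = 0.7   # P_(b|A): chance that A's AI turns out beneficial
b_A = 10.0             # utility A gains from a beneficial AI it develops

# The additional-benefit term written as P_(b|A)(A) · b_A in the text.
expected_benefit_A = p_beneficial_A * b_A

# One possible (assumed) way to weight that benefit by the race outcome:
# winning yields the full benefit, lagging yields only a share of it.
share_if_lagging = 0.3
expected_utility_A = (p_win * expected_benefit_A
                      + p_lag * share_if_lagging * expected_benefit_A)

print(expected_benefit_A, expected_utility_A)
```

Under these assumed numbers the benefit term alone is worth 7 units of utility, which the race-outcome weighting then discounts to roughly 5.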
The French philosopher Jean-Jacques Rousseau presented the following dilemma. It involves a group of hunters. Each can individually choose to hunt a stag or hunt a hare. However, anyone who hunts a rabbit can do so successfully by themselves, but with a smaller meal. The dilemma is that if one hunter waits, he risks one of his fellows killing the hare for himself, sacrificing everyone else. Both games are games of cooperation, but in the Stag Hunt there is hope you can get to the "good" outcome. The stag hunters are likely to interact with other stag hunters to seek mutual benefit, while hare hunters rarely care with whom they interact, since they would rather not depend on others for success.[7] Aumann concluded that in this game "agreement has no effect, one way or the other."

Under this principle, parties to an armed conflict must always distinguish between civilians and civilian objects on the one hand, and combatants and military targets on the other. Structural conflict prevention refers to long-term interventions that aim to transform the key socioeconomic, political, and institutional factors that could lead to conflict. The metaphors that populate game theory models include images such as prisoners and stag hunts. Within these levels of analysis, there are different theories that could be considered. Why do trade agreements even exist? The Stag Hunt also represents an example of compensation structure in theory.

In 2016, the Obama Administration developed two reports on the future of AI.[49] For example, by defecting from an arms-reduction treaty to develop more weapons, an actor can gain the upper hand on an opponent who decides to uphold the treaty by covertly continuing or increasing arms production. This technological shock factor leads actors to increase weapons research and development and to maximize their overall arms capacity to guard against uncertainty.[9] That is, the extent to which competitors prioritize speed of development over safety (Bostrom 2014: 767). See Carl Shulman, "Arms Control and Intelligence Explosions," 7th European Conference on Computing and Philosophy, Bellaterra, Spain, July 24, 2009: 6.

Here, both actors demonstrate a high degree of optimism in both their own and their opponent's ability to develop a beneficial AI, while this likelihood would be only slightly greater under a cooperation regime. The payoff matrix is displayed as Table 12. The corresponding payoff matrix is displayed as Table 14.

Table 2. Payoff variables for simulated Prisoner's Dilemma.

As will hold for the following tables, the most preferred outcome is indicated with a 4, and the least preferred outcome is indicated with a 1. Actor A's preference order: DC > CC > DD > CD. Actor B's preference order: CD > CC > DD > DC. Depending on the payoff structures, we can anticipate different likelihoods of and preferences for cooperation or defection on the part of the actors.
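These ordinal rankings can be turned into a payoff matrix mechanically. The sketch below assigns 4 to each actor's most preferred outcome and 1 to its least preferred, then checks for strictly dominant strategies; the orderings are the ones just quoted, and everything else is illustrative.

```python
# Outcomes are written (Actor A's move, Actor B's move); C = cooperate, D = defect.
# Ordinal preference orders quoted above, most preferred first.
pref_A = [("D", "C"), ("C", "C"), ("D", "D"), ("C", "D")]
pref_B = [("C", "D"), ("C", "C"), ("D", "D"), ("D", "C")]

def to_payoffs(pref):
    """Map a ranking of the four outcomes to ordinal payoffs: 4 = best, 1 = worst."""
    return {outcome: 4 - rank for rank, outcome in enumerate(pref)}

u_A, u_B = to_payoffs(pref_A), to_payoffs(pref_B)

def dominant_strategy(u, player):
    """Return a strictly dominant move for player (0 = Actor A, 1 = Actor B), if any."""
    moves = ("C", "D")
    def pay(mine, other):
        # Build the outcome tuple in (A's move, B's move) order.
        return u[(mine, other)] if player == 0 else u[(other, mine)]
    for mine in moves:
        alt = "D" if mine == "C" else "C"
        if all(pay(mine, other) > pay(alt, other) for other in moves):
            return mine
    return None

print(dominant_strategy(u_A, 0), dominant_strategy(u_B, 1))  # D D: both defect
```

With these particular orderings each actor's dominant strategy is to defect, which matches the Prisoner's Dilemma structure; other orderings, such as those of the Stag Hunt or Deadlock, yield different equilibria under the same procedure.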
[13] And impressive victories over humans in chess by AI programs[14] are being dwarfed by AI's ability to compete with and beat humans at exponentially more difficult strategic endeavors, like the games of Go[15] and StarCraft. [24] Defined by Bostrom as "an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills." Nick Bostrom, "How Long Before Superintelligence?" Linguistic and Philosophical Investigations 5, 1 (2006): 11-30. [25] For more on the existential risks of superintelligence, see Bostrom (2014), Chapters 6 and 8.

Within the arms race literature, scholars have distinguished between types of arms races depending on the nature of arming.[58] Downs et al., Arms Races and Cooperation, 143-144. Finally, Jervis[40] also highlights the security dilemma, in which increases in one actor's security can inherently lead to the decreased security of a rival state. By failing to agree to a Coordination Regime at all [D,D], we can expect the chance of developing a harmful AI to be highest, as both actors are sparing in applying safety precautions to development. For example, Stag Hunts are likely to occur when the perceived harm of developing a harmful AI is significantly greater than the perceived benefit that comes from a beneficial AI. The paper proceeds as follows.

Despite the damage it could cause, the impulse to go it alone has never been far off, given the profound uncertainties that define the politics of any war-torn country. If the United States beats a quick path to the exits, the incentives for Afghan power brokers to go it alone and engage in predatory, even cannibalistic behavior may prove irresistible. But the moral is not quite so bleak.

The area of international relations theory that is most characterized by overt metaphorical imagery is that of game theory. Although the imagery of game theory would suggest that the games were outgrowths of metaphorical thinking, the origins of game theory are actually to be found in mathematics. In game theory, the stag hunt, sometimes referred to as the assurance game, trust dilemma, or common interest game, describes a conflict between safety and social cooperation. Hunting stags is quite challenging and requires mutual cooperation. If a hunter leaps out and kills the hare, he will eat, but the trap laid for the stag will be wasted and the other hunters will starve. Formally, writing a for mutual stag hunting, b for hunting hare while the other hunts stag, d for mutual hare hunting, and c for hunting stag alone, the stag hunt requires a > b ≥ d > c: hunting stag is successful only if both hunters hunt stag, while each hunter can catch a less valuable hare on his own. The matrix above provides one example. The best response correspondences are pictured here. The payoff matrix would need adjusting if players who defect against cooperators might be punished for their defection.

Table 10. Payoff variables for simulated Deadlock.

The dynamics change once the players learn with whom to interact.
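One simple way to formalize that observation is to let matching be assortative. The sketch below is an illustrative toy model, not one taken from the paper: with probability r an agent meets a partner playing its own strategy, and otherwise meets a random member of the population.

```python
# Illustrative stag-hunt payoffs: (my move, partner's move) -> my payoff.
# "S" = hunt stag, "H" = hunt hare; numbers are assumptions, not the paper's.
PAYOFF = {("S", "S"): 4, ("S", "H"): 0, ("H", "S"): 3, ("H", "H"): 2}

def expected_payoff(my_move, share_stag, r):
    """Expected payoff of my_move when a fraction share_stag of the population
    hunts stag and matching is assortative with degree r (r = 0: random matching,
    r = 1: always matched with someone playing my own strategy)."""
    p_meet_stag = r * (1.0 if my_move == "S" else 0.0) + (1 - r) * share_stag
    return (p_meet_stag * PAYOFF[(my_move, "S")]
            + (1 - p_meet_stag) * PAYOFF[(my_move, "H")])

# Stag hunters are a 30% minority; vary how assortative the matching is.
for r in (0.0, 0.5, 0.9):
    stag = expected_payoff("S", share_stag=0.3, r=r)
    hare = expected_payoff("H", share_stag=0.3, r=r)
    print(f"r={r:.1f}  stag={stag:.2f}  hare={hare:.2f}")
```

With no assortment (r = 0) hare hunting is the better reply while stag hunters remain scarce, but once matching becomes sufficiently assortative, hunting stag pays even for a minority, which captures the idea that stag hunters seek out other stag hunters.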
Moreover, the usefulness of this model requires accurately gauging or forecasting variables that are hard to work with. These strategies are not meant to be exhaustive by any means, but they hopefully show how the outlined theory might provide practical use and motivate further research and analysis.

We have recently seen an increase in media acknowledgement of the benefits of artificial intelligence (AI), as well as of the negative social implications that can arise from its development. As a result, concerns have been raised that such a race could create incentives to skimp on safety. [2] Tom Simonite, "Artificial Intelligence Fuels New Global Arms Race," Wired, September 8, 2017, https://www.wired.com/story/for-superpowers-artificial-intelligence-fuels-new-global-arms-race/. [3] Elon Musk, Twitter post, September 4, 2017, https://twitter.com/elonmusk/status/904638455761612800.

As stated before, achieving a scenario where both actors perceive themselves to be in a Stag Hunt is the most desirable situation for maximizing safety from an AI catastrophe, since both actors are primed to cooperate and will maximize their benefits from doing so. Continuous coordination through negotiation in a Prisoner's Dilemma is somewhat promising, although a cooperating actor runs the risk of a rival defecting if there is not an effective way to ensure and enforce cooperation in an AI Coordination Regime. Namely, the probability of developing a harmful AI is greatest in a scenario where both actors defect, while the probability of developing a harmful AI is lowest in a scenario where both actors cooperate.

Throughout history, armed force has been a ubiquitous characteristic of the relations between independent polities, be they tribes, cities, nation-states or empires. A major terrorist attack launched from Afghanistan would represent a kind of equal-opportunity disaster and should make a commitment to establishing and preserving a capable state of ultimate value to all involved. The four mass-atrocity crimes are genocide, crimes against humanity, war crimes, and ethnic cleansing. 'The "liberal democratic peace" thesis puts the nail into the coffin of Kenneth Waltz's claim that wars are principally caused by the anarchical nature of the international system.'

An hour goes by, with no sign of the stag. Instead, each hunter should separately choose the more ambitious and far more rewarding goal of getting the stag, thereby giving up some autonomy in exchange for the other hunter's cooperation and added might. The game is a prototype of the social contract. We are all familiar with the basic Prisoner's Dilemma: for example, one prisoner may seemingly betray the other, but without losing the other's trust. It is also the case that some human interactions that seem like prisoner's dilemmas may in fact be stag hunts. Each player must choose an action without knowing the choice of the other. But what is even more interesting (even despairing) is that, when the situation is more localized and involves a smaller network of acquainted people, most players still choose to hunt the hare rather than work together to hunt the stag.

Payoff matrix for simulated Prisoner's Dilemma.

This equilibrium depends on the payoffs, but the risk dominance condition places a bound on the mixed-strategy Nash equilibrium.
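For the generic stag hunt introduced above (payoffs a > b ≥ d > c), both the mixed-strategy equilibrium and the risk-dominance comparison can be computed directly. The sketch below reuses the same illustrative numbers as the earlier examples; they are assumptions, not values from the paper.

```python
# Generic symmetric stag hunt: a = both hunt stag, b = hare against stag,
# d = both hunt hare, c = stag against hare, with a > b >= d > c.
# Same illustrative numbers as above (assumptions, not the paper's values).
a, b, c, d = 4, 3, 0, 2

# In the mixed-strategy Nash equilibrium each player is indifferent between
# stag and hare, which happens when the opponent hunts stag with probability p*:
#   p*a + (1 - p*)c = p*b + (1 - p*)d
p_star = (d - c) / ((a - b) + (d - c))
print(f"stag is a best reply only if the opponent hunts stag with p >= {p_star:.2f}")

# Risk dominance (Harsanyi and Selten): (Stag, Stag) risk-dominates (Hare, Hare)
# exactly when a - b >= d - c, i.e. when p* <= 1/2.
risk_dominant = "stag" if (a - b) >= (d - c) else "hare"
print("risk-dominant equilibrium:", risk_dominant)  # "hare" for these numbers
```

With these numbers stag hunting is payoff-dominant but hare hunting is risk-dominant: stag is only a best reply if each player believes the other will hunt stag with probability of at least about two thirds, which is the bound on the mixed-strategy equilibrium referred to above.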
On the other hand, real-life examples of poorly designed compensation structures that create organizational inefficiencies and hinder success are not uncommon. If participation is not universal, they cannot surround the stag and it escapes, leaving everyone who hunted stag hungry. As we discussed in class, the catch is that the players involved must all work together in order to successfully hunt the stag and reap the rewards; once one person leaves the hunt for a hare, the stag hunt fails and those involved in it wind up with nothing. However, a hare is seen by all hunters moving along the path. Two players, simultaneous decisions. The hedge is shared, so both parties are responsible for maintaining it.

Is human security a useful approach to security? Some argue that territorial conflicts in international relations follow a strategic logic, but one defined by cost-benefit calculations. As of 2017, there were 193 member states of the international system as recognized by the United Nations. The complex machinations required to create a lasting peace may well be under way, but any viable agreement, and the eventual withdrawal of U.S. forces that would entail, requires an Afghan government capable of holding its ground on behalf of its citizens and in the ongoing struggle against violent extremism.

In addition to boasting the world's largest economies, China and the U.S. also lead the world in AI. Next, I outline my theory to better understand the dynamics of the AI Coordination Problem between two opposing international actors. Additionally, the feedback, discussion, resource recommendations, and inspiring work of friends, colleagues, and mentors in several time zones, especially Amy Fan, Carrick Flynn, Will Hunt, Jade Leung, Matthijs Maas, Peter McIntyre, Professor Nuno Monteiro, Gabe Rissman, Thomas Weng, Baobao Zhang, and Remco Zwetsloot, were vital to this paper and are profoundly appreciated.

Actor A's preference order: DC > CC > CD > DD. Actor B's preference order: CD > CC > DC > DD. As a result, this could reduce a rival actor's perceived relative benefits gained from developing AI. In this scenario, however, both actors can also anticipate receiving additional harm from the defector pursuing its own AI development outside of the regime. Here, this is expressed as P_(h|A or B)(A) · h_(A or B).
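The harm side of the payoff can be sketched in the same way as the benefit side. The snippet below illustrates the expected-harm term described earlier, namely the actor's own chance of producing a harmful AI times that harm, plus the expected harm from the opponent doing so; all probabilities and magnitudes are placeholder assumptions.

```python
# Hypothetical harm-side variables for Actor A; every value is an assumption.
p_harm_A = 0.10   # chance A itself produces a harmful AI inside the regime
p_harm_B = 0.15   # chance the opponent B produces a harmful AI inside the regime
h = 50.0          # utility lost if a harmful AI emerges, from either source

# Expected harm to A inside the Coordination Regime: A's own risk times the harm,
# plus the expected harm of the opponent developing a harmful AI.
expected_harm_in_regime = p_harm_A * h + p_harm_B * h

# If B defects and develops outside the regime, A anticipates additional harm,
# because B is assumed to apply fewer safety precautions on its own.
p_harm_B_defecting = 0.30
expected_harm_if_B_defects = p_harm_A * h + p_harm_B_defecting * h

print(expected_harm_in_regime, expected_harm_if_B_defects)  # 12.5 20.0
```

Comparing the two totals shows why a defecting opponent raises an actor's anticipated harm even when its own behavior is unchanged.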
In addition to the example suggested by Rousseau, David Hume provides a series of examples that are stag hunts. As discussed, there are both great benefits and great harms to developing AI, and because of the relevance AI development has to national security, it is likely that governments will take over this development (specifically the US and China). Individuals, factions, and coalitions previously on the same pro-government side have begun to trade accusations with one another. She argues that states are no longer ... [21] Jackie Snow, "Algorithms Are Making American Inequality Worse," MIT Technology Review, January 26, 2018, https://www.technologyreview.com/s/610026/algorithms-are-making-american-inequality-worse/; The Boston Consulting Group & Sutton Trust, "The State of Social Mobility in the UK," July 2017, https://www.suttontrust.com/wp-content/uploads/2017/07/BCGSocial-Mobility-report-full-version_WEB_FINAL-1.pdf. Although the development of AI has not yet led to a clear and convincing military arms race (although this has been suggested to be the case[43]), the elements of the arms race literature described above suggest that AI's broad and wide-encompassing capacity can lead actors to see AI development as a threatening technological shock worth responding to with reinforcements or augmentations of their own security, perhaps by bolstering their own AI development programs.