Four Battlegrounds by Paul Scharre

Author: Paul Scharre
Language: eng
Format: epub
Publisher: W. W. Norton & Company
Published: 2023-01-18T00:00:00+00:00


34

RESTRAINT

In militarizing AI, nations are using for destructive purposes a technology that, even in its current form today, is both powerful and difficult to control. Even if none of the risks of more advanced forms of AI ever come to pass, the militarization of AI today poses serious risks to international peace and stability. While the current state of military AI competition does not meet the definition of an arms race, there are risks in how militaries might employ AI. Near-term AI could cause harm in a number of ways, including autonomous weapons, humans overtrusting prediction algorithms, and ways in which AI could upset nuclear stability. Nations should come together to help limit some of the worst dangers of military AI.

In addition to the risks of a race to the bottom on AI safety, there are a number of specific AI applications that could be dangerous to international stability. Lethal autonomous weapons, in which a machine is able to search for, select, and engage targets on its own without human intervention, are one potential risk. Nations have come together at the United Nations since 2014 to discuss the risks of lethal autonomous weapons, but diplomatic progress has moved slowly while the technology continues to advance. The intersection of AI and cyber systems could present other risks, further exacerbating competition in cyberspace.

Even nonlethal autonomous robotics could undermine stability among nations. Over the last decade, as countries have incorporated more and more uninhabited robotic vehicles into their military forces, they have increasingly been used in militarized disputes. Aerial drones have been used in contested regions around the globe and have been tools of provocation or targets for attack. Today, these systems are largely remote controlled. If they have automation at all, it is simple rule-based automation, such as the ability to navigate along preprogrammed GPS waypoints or return to a position if they lose communication with the remote human pilot. Yet over time, these robotic vehicles will incorporate increasingly capable automation and AI, enabling more autonomous operation. Machine learning will help autonomous systems perceive their environment and identify objects. One risk is that such systems have accidents, leading to collisions of robotic ships or aerial drones that go astray, perhaps flying into another country’s territory. Such actions, which would not be intended by human operators, could escalate tensions among nations. Another kind of risk stems from the ambiguity about whether a robotic vehicle’s actions are intended by humans. A country may not know whether another country’s drone encroaching on sensitive territory or taking self-defense measures is acting consistent with human intent. This ambiguity could complicate escalation dynamics among nations, leading to misperceptions and miscalculation.

AI prediction systems that do not directly take any actions on their own could also complicate crisis dynamics. In the 1980s, the Soviet Union developed a computer program called RYaN (an acronym of its name in Russian, which translates to “Nuclear Missile Attack”) to detect the impending launch of a surprise nuclear attack by the United States. The program

