Challenges of AI: Military Applications

Ethical Challenges: The Moral Dilemma of AI in the Military Sphere

By: Ricky Juarez

Summary

As AI emerges as a revolutionary technology, nations are racing to integrate it into military applications. Many areas of application are under discussion, ranging from data analysis and decision-making to cyber operations and precision targeting. AI is also being used to augment intelligence gathering, enhance security, and improve surveillance capabilities. These developments have the potential to reshape the strategy of deterrence, which has traditionally relied on mutual vulnerability: each party understands that it is exposed to the actions of the other.

Analysis

Zachary Davis’ article, “Artificial Intelligence on the Battlefield: Implications for Deterrence and Surprise,” highlights several advantages of implementing AI in the military context. These include AI-enhanced ISR (intelligence, surveillance, and reconnaissance), improved targeting and navigation systems, and enhanced cyber operations, which involve tasks like probing, mapping, and exploiting weaknesses in an adversary’s computer network. These advancements encourage international collaboration among allies, with the aim of maintaining strategic stability and preventing any single power from gaining a lasting advantage. However, AI in its current state is unpredictable and could lead to inadvertent escalation, particularly when the AI systems of different nations interact with one another.

In Eric Lipton’s article “A.I. Brings the Robot Wingman to Aerial Combat,” a recent military application of AI is discussed. The prototype, known as the XQ-58A Valkyrie, is presented as being at the “forefront of efforts by the U.S. military to harness the capacities of an emerging technology.” This experimental aircraft is propelled by a rocket engine and is designed to carry missiles capable of engaging targets beyond visual range. The key innovation of this pilotless aircraft is its autonomy: it relies on sensors and AI for threat identification and assessment, with human approval required before engagement. The U.S. Air Force plans to produce a significant number of these “collaborative combat aircraft” (AI drones) to complement traditional fighter jets. Mary Wareham, advocacy director of the arms division at Human Rights Watch, expresses concern, stating, “You’re crossing a moral line by outsourcing killing to machines—by allowing computer sensors, rather than humans, to take human life.”

Discussion Questions

  • What are the risks associated with the unpredictability of AI systems, especially when various foreign AI systems interact with each other?
  • To what extent should we grant autonomy to AI-powered weapons?
  • How can governments and international organizations address the ethical challenges associated with AI in military applications? What policies and regulations should be in place to ensure responsible and ethical use of AI in military contexts?
