Autonomous Weapons: Should AI Be Allowed to Decide Life and Death?
Abstract
The rise of autonomous weapons powered by artificial intelligence has raised critical ethical, legal, and security concerns. This article examines whether AI should be allowed to make life-and-death decisions on the battlefield. It explores the risks of removing human judgment from lethal actions, including accountability gaps, algorithmic bias, and the potential for misuse. At the same time, it considers arguments in favor of such systems, including greater precision and fewer human casualties. By analyzing global debates and emerging policies, the article highlights the urgent need for regulation and human oversight. Ultimately, it argues that while AI can support decision-making, the authority to take human life must remain under meaningful human control.
License
Copyright (c) 2026 PakTech Today

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.