AI and the risks of modern warfare

AI could set off a nuclear war

On modern battlefields, artificial intelligence is increasingly common. In the Russia-Ukraine war, it is used in geospatial intelligence, where AI traces and analyzes open-source information to identify enemy installations, and in coordinating drone attacks. Yet even in this high-tech war, final targeting decisions have deliberately been kept under human control. Advanced algorithms are also at work in the Middle East: US tech giants are supplying cloud and AI services to the Israeli military, and Palantir has entered a strategic partnership to provide AI-enabled targeting systems. These developments show that AI excels at rapid analysis and enhanced detection, but they also expose profound threats. The fate of humanity cannot be left to an algorithm, as UN Secretary-General António Guterres has warned.

Large technology companies, among them Google, Amazon, and Palantir, are supplying Israel with cloud computing and artificial intelligence. These cases show that AI-based targeting has become a reality, and that accountability in such circumstances is a serious ethical and legal problem. As a nuclear-armed state in a troubled region, Pakistan has been vocal about these very dangers. In its April submission to the UN, Islamabad warned that introducing AI into nuclear command, control, and communications would create strategic risks that could lead to miscalculation, accidents, and catastrophic consequences.

Nuclear deterrence has always rested on the human capacity for rational choice and restraint. Automating those processes would eliminate or greatly diminish the human factor, and events could then easily spin out of control. Pakistan therefore calls on all nuclear-armed states to declare that they will maintain meaningful human control over their weapons. Indeed, a conventional AI-directed strike in South Asia could quickly escalate to the nuclear level. A spoofed signal or a compromised system, whether through cyber attack or sensor failure, would have disastrous consequences.

Beyond the nuclear dimension, Pakistan stresses that AI pushes the fog of war to machine speed. Automated decision-support can compress crisis timelines so drastically that the window for negotiation and de-escalation disappears. Commanders who rely excessively on AI-generated recommendations risk losing context and nuance. The April submission notes that militaries eager to gain an edge may deploy AI so widely that the threshold for armed conflict is lowered.

Technical vulnerabilities compound these dangers. Most AI tools are black boxes whose inner workings cannot be inspected, so a malfunction or a miscalculation is likely to be detected only when it is already too late. Machine translation has already produced deadly mistakes, as has mislabelled imagery. Any AI malfunction that generates a false intelligence signal, or any cyber attacker who plants one, could set off a domino effect that no one foresees until the alarms sound.

The legal and ethical implications of AI are equally serious. International Humanitarian Law is built on human conscience and judgment: it requires distinguishing between soldiers and civilians and demands proportionality. Such judgment is absent in an autonomous weapon operating at machine speed. Pakistan rightly warns that allowing AI to select and engage targets may violate the fundamental principles of IHL. If a killer robot makes a mistake, who is responsible? Soldiers could claim that the computer made them do it, dragging commanders into legal terrain previously unknown to humanity. The UN Secretary-General has echoed this concern, urging a moratorium on fully autonomous lethal weapons and insisting that any decision to use nuclear weapons must rest with human beings, not machines. Pakistan therefore supports a new international convention, perhaps a treaty guaranteeing meaningful human control over all weapon systems, above all those that decide matters of life and death.

Pakistan envisions a systematic, UN-sponsored process. In its submission, it asks all relevant UN bodies to address the strategic risks AI poses in the nuclear domain. Such a multi-forum approach would prevent the disjointed forum-shopping that leaves these debates incoherent. More importantly, it would give developing countries a voice; as recent UNGA debates have shown, the AI revolution must not deepen the divide between rich and poor nations. Pakistan regards the UN's universal membership as its greatest strength: it is the one place where the interests of all states, large and small, are heard together.

The current global trend is both promising and dangerous. AI's usefulness in development and security is no myth: it can help predict pandemics and optimize disaster management. But its weaponised form could destroy decades of strategic stability. Pakistan urges the UN and all states to grasp this. It is high time to demand legally binding limits: a ban on fully autonomous weapons, treaty commitments to keep human beings in control, and norms against using AI for nuclear or cyber attacks. The world should treat military AI as the strategic priority it is. Recent high-level dialogues are a beginning, but only inclusive, multilateral frameworks will endure. Let the voices of the world's peoples, through the UN disarmament institutions, shape rules that make AI an instrument of peace rather than a source of instability.

Abu Hurairah Abbasi
The writer works as a researcher with an Islamabad-based policy think tank, the Institute of Strategic Studies Islamabad. He is also a Research Fellow at the Hanns Seidel Foundation Pakistan. He can be reached at [email protected]
