Killing decisions without humans: Artificial intelligence is changing the rules of war
May 29, 2025
In a world where the pace of technological advancement is accelerating, artificial intelligence has become a key component of modern warfare, not merely a support tool but a strategic weapon whose threat may surpass that of conventional and even nuclear arms. Armies no longer need to cross borders with firearms: an AI-supported cyber attack can disable entire power grids, penetrate air defense systems, or sway public opinion through coordinated disinformation campaigns.

Dr. Mohamed Mohsen Ramadan, an expert in cybersecurity and combating cybercrime, warned that future wars will rely increasingly on "killer algorithms": intelligent systems that make lethal battlefield decisions without direct human supervision, a concept known as "autonomous lethal decision-making." Removing humans from the life-and-death equation raises unprecedented ethical and legal questions.

Dr. Ramadan explained that artificial intelligence is already being used to pilot drones, analyze big data in real time to predict enemy movements accurately, launch adaptive cyber attacks that penetrate protection systems before they are detected, and spread fake news through chatbots and social platforms to destabilize the internal fronts of targeted countries. He noted that several parties now rely on it to direct drones, penetrate digital infrastructure, disrupt military communication and control systems, manage information influence campaigns, and pinpoint aerial strike targets with high precision.

"Algorithms will be faster than bullets," he stressed, warning that artificial intelligence may make decisive decisions without human intervention, which makes it an extremely dangerous threat. Many technology experts have voiced similar concerns: left without clear regulations and legislation, artificial intelligence could become more dangerous to humanity than nuclear weapons.
The fear is not of the machine itself, but of the human who programs it to kill without ethical constraints or oversight. Dr. Ramadan concluded by stressing that cybersecurity has become a national necessity, urging countries to invest in advanced encryption, develop quantum computing capabilities, and build electronic armies capable of deterrence and defense, alongside an international legal framework regulating the use of artificial intelligence in armed conflicts. The crucial question is no longer just how to fight, but how to keep an emotionless machine from igniting an endless war.