“This technology is our future threat,” warns Serhiy Beskrestnov as he studies a newly captured Russian drone. It is unlike anything he has seen before. Driven by artificial intelligence, it can find and strike its targets entirely on its own.
Beskrestnov, a consultant to Ukraine’s defence forces, has examined countless drones since the invasion began. But this one is different. It neither sends nor receives radio signals, which makes it impossible to jam and very hard to detect with radio-frequency equipment.
Both Russian and Ukrainian militaries are now testing artificial intelligence in combat. They use it to locate enemy units, analyse intelligence, and clear mines—tasks once done slowly and manually.
Artificial intelligence becomes Ukraine’s invisible ally
For Ukraine’s army, AI has become a critical tool. “Our forces receive more than 50,000 video streams from the front every month,” says Deputy Defence Minister Yuriy Myronenko. “Artificial intelligence analyses the footage, identifies threats, and places them on a map.”
The technology saves time, improves precision, and helps protect lives. Yet its influence reaches beyond data analysis. Ukrainian troops now deploy drones that use AI to lock onto a target and fly the final stretch of an attack on their own.
Because they need no radio link in that terminal phase, these drones cannot be jammed, and they are difficult to shoot down. Experts believe they are only the beginning of a new generation of fully autonomous weapons that can search for and destroy targets without human control.
Drones that fight without pilots
“All a soldier needs to do is press a button on a smartphone,” explains Yaroslav Azhnyuk, chief executive of the Ukrainian tech company The Fourth Law. “The drone will find its target, drop explosives, assess the damage, and return to base. It doesn’t even require piloting skills.”
Azhnyuk says these drones could dramatically improve Ukraine’s air defence against Russia’s long-range weapons, including the Shahed drones. “A computer-guided system can outperform humans,” he says. “It reacts faster, sees more clearly, and adapts instantly.”
Myronenko admits such a system is not yet complete but says Ukraine is close. “We have partially integrated the technology into some devices,” he confirms. Azhnyuk predicts that by the end of 2026, thousands of these autonomous drones could be active on the battlefield.
The thin line between innovation and danger
Ukrainian developers remain cautious about handing full control to machines. “AI might not recognise who is friend or foe,” warns Vadym, a defence engineer who asked for his surname to be withheld. “A Ukrainian and a Russian soldier might look the same in uniform.”
Vadym’s company, DevDroid, builds remotely controlled machine guns that use AI to detect and track movement. However, they have disabled automatic firing because of the risk of friendly fire. “We could enable it,” he says, “but we need more data and feedback from field units before trusting it completely.”
The ethical questions are immense. Can autonomous weapons follow the laws of war? Will they spare civilians or recognise soldiers who surrender? Myronenko believes a human should always make the final decision, even if AI assists. But he admits that not every military force will act responsibly.
A new global race for control
AI’s rapid advance has sparked a new arms race that many experts fear could spiral beyond human control. How can any defence stop a swarm of intelligent drones that can think, adapt, and evade traditional countermeasures?
Ukraine’s “Spider Web” operation last June, when more than 100 drones struck Russian air bases, was likely guided by AI systems. Many in Ukraine now worry Moscow will replicate the tactic, not only at the front but deep inside the country.
President Volodymyr Zelensky recently warned the United Nations that artificial intelligence is driving “the most destructive arms race in human history.” He urged world leaders to create binding global rules for AI weapons, calling the challenge “as urgent as preventing nuclear proliferation.”
