AI Algorithm Detects Man-In-The-Middle Attacks On Unmanned Military Vehicles In Seconds

In a paper published by the University of South Australia and Charles Sturt University, professors have developed an algorithm to detect and intercept man-in-the-middle (MitM) attacks on unmanned military robots. Such attacks aim to interrupt operations, modify transmitted instructions, and ultimately assume control of the robots, instructing them to take malicious actions.

The technical paper details how the robot operating system is extremely susceptible to data breaches and electronic hijacking because it is so highly networked: it can be compromised at multiple levels, from the core system down to sub-components of sub-systems. Meanwhile, crewless vehicle systems operate in fault-tolerant modes, which further complicates MitM detection.

Using machine learning techniques, the university researchers developed an algorithm to detect these attempts. They then tested it on a replica of a bot used by the U.S. Army and recorded successful attack prevention 99% of the time, with false positives occurring in less than 2% of the tested cases.
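
The paper's code is not reproduced here, but the general shape of the approach — extracting statistics from the robot's network traffic and training a classifier to separate normal operation from tampered flows — can be sketched briefly. Everything below is illustrative: the synthetic features, their distributions, and the choice of scikit-learn's RandomForestClassifier are assumptions made for the sketch, not the authors' published model, and a toy example like this says nothing about the 99%/2% figures reported from their experiments on real hardware.

```python
# Illustrative sketch only -- not the published algorithm.
# Train a classifier on per-window network-traffic statistics
# (packet rate, mean packet size, inter-arrival jitter), labelled
# "normal" or "MitM", then flag suspicious windows at runtime.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def traffic_windows(n, mitm):
    """Synthetic stand-in for features extracted from robot telemetry."""
    pkt_rate = rng.normal(100, 5, n) + (30 if mitm else 0)    # packets/s
    mean_size = rng.normal(256, 20, n) + (80 if mitm else 0)  # bytes
    jitter = rng.normal(2.0, 0.3, n) + (1.5 if mitm else 0)   # ms
    return np.column_stack([pkt_rate, mean_size, jitter])

X = np.vstack([traffic_windows(5000, False), traffic_windows(5000, True)])
y = np.array([0] * 5000 + [1] * 5000)  # 0 = normal, 1 = MitM

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

pred = clf.predict(X_te)
det = ((pred == 1) & (y_te == 1)).sum() / (y_te == 1).sum()  # detection rate
fpr = ((pred == 1) & (y_te == 0)).sum() / (y_te == 0).sum()  # false-positive rate
print(f"detection rate: {det:.1%}, false positives: {fpr:.1%}")
```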

“The advent of Industry 4.0, marked by the evolution in robotics, automation, and the Internet of Things, has demanded that robots work collaboratively, where sensors, actuators, and controllers need to communicate and exchange information with one another via cloud services,” comments Professor Anthony Finn, who participated in the study.

Ted Miracco, CEO of Approov Mobile Security, had this comment:

   “Using AI to address security concerns in military robots raises significant concerns and warrants critical examination. While the development of an algorithm to detect and intercept man-in-the-middle (MitM) attacks is a commendable effort, relying on AI for such critical tasks may not be the most responsible approach. A 99% success rate in preventing attacks may initially sound impressive, but when it comes to matters of national security and potential harm caused by compromised military robots, even a 1% failure rate is unacceptable if you are on the receiving end of the attack. MitM attacks can have severe consequences, including the potential for loss of life and significant damage, and AI algorithms are probabilistic by nature, making them inherently fallible. There is always a risk of false positives or the much more disconcerting false negatives, where attacks go undetected. In the context of military operations, these errors can lead to disastrous outcomes.

   “To ensure the security and integrity of military robots, deterministic solutions that provide 100% accuracy should be prioritized. While AI can play a role in augmenting security measures, it should be used as a supportive tool rather than the primary line of defense. Incorporating reliable, deterministic protocols and encryption techniques that leave no room for ambiguity or uncertainty should be the foundation of any security framework for military robots. It is imperative to prioritize deterministic solutions that eliminate any margin for error and take a comprehensive approach to security to ensure the safety and effectiveness of unmanned military systems.”
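
For context on what Miracco means by a deterministic check: a keyed message authentication code (MAC) over each command detects any in-transit modification outright rather than probabilistically, since a tampered message cannot carry a valid tag without the key. Below is a minimal sketch using Python's standard hmac module; the shared key, the message format, and the omission of key distribution and replay protection are all simplifying assumptions, not a fielded design.

```python
# Minimal sketch of deterministic tamper detection with HMAC-SHA256.
# Key provisioning, replay protection, and framing are out of scope;
# the shared key and command format here are hypothetical.
import hmac
import hashlib

SHARED_KEY = b"pre-provisioned-secret"  # placeholder, never hard-code in practice

def sign(command: bytes) -> bytes:
    """Append a 32-byte HMAC tag so the receiver can verify integrity."""
    tag = hmac.new(SHARED_KEY, command, hashlib.sha256).digest()
    return command + tag

def verify(packet: bytes) -> bytes:
    """Return the command if the tag checks out, else reject it."""
    command, tag = packet[:-32], packet[-32:]
    expected = hmac.new(SHARED_KEY, command, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):  # constant-time comparison
        raise ValueError("integrity check failed: possible MitM tampering")
    return command

packet = sign(b"MOVE waypoint=12")
print(verify(packet))                           # b'MOVE waypoint=12'

tampered = b"MOVE waypoint=99" + packet[-32:]   # attacker alters the command
try:
    verify(tampered)
except ValueError as e:
    print(e)                                    # tampering detected, every time
```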

Given that these are military vehicles, I have to admit I’m concerned that they might not be secure. If they aren’t, I hope measures are in place to change that, because every possible step should be taken to make these vehicles as secure as possible.
