Written by: Sam Orlando
PENSACOLA, FL - The United States Air Force Research Laboratory has achieved a breakthrough in AI-controlled flight, according to a press release dated July 25, 2023. An artificial intelligence (AI) agent successfully flew an XQ-58A Valkyrie uncrewed aircraft for three hours, a first in the history of aviation technology. The test, conducted at the Eglin Test and Training Complex in Florida, marks a major milestone in the Skyborg Vanguard program, a two-year endeavor spearheaded by the Air Force.
"This mission proved out a multi-layer safety framework on an AI/Machine Learning (ML) flown uncrewed aircraft and demonstrated an AI/ML agent solving a tactically relevant 'challenge problem' during airborne operations," Col. Tucker Hamilton, Air Force AI Test and Operations chief and 96th Operations Group commander, was quoted in the release.
However, the implications of AI-controlled military tech have raised concerns among scientists and technology pioneers. The late physicist Stephen Hawking and Tesla CEO Elon Musk have notably warned about the potential risks of unregulated AI, cautioning it could lead to the creation of autonomous weapons, potentially igniting an AI arms race.
Musk, in particular, has been vocal about the dangers of AI, stating in a tweet from 2017, "If you're not concerned about AI safety, you should be. Vastly more risk than North Korea."
While proponents argue that AI can make warfare more precise and reduce human casualties, critics underline the paradox that the technology aimed at saving lives could also impersonally and efficiently end them.
Moreover, critics point out the intrinsic risks of delegating decisions to AI. Unlike human judgment, AI decision-making rests solely on algorithms and data inputs, without emotion, ethical deliberation, or mercy. In a situation as complex as war, an AI could misinterpret data, leading to catastrophic errors. These systems could also be vulnerable to hacking or manipulation.
The advancement made by the Air Force Research Laboratory is undeniably impressive, but it also highlights the urgent need for comprehensive international legislation and rigorous ethical debates regarding AI in military contexts.
As we move forward into this new era of AI technology, the warnings of experts like Hawking and Musk should guide our approach. Their caution about the potential dangers of AI underscores the importance of developing and deploying this technology responsibly, with stringent safety, ethical, and oversight standards in place.
As we chart the course for this brave new frontier, perhaps Musk and Hawking were right: humanity's focus should be on using AI to preserve peace and on preventing our creations from controlling our fate.