PENTAGON PROGRAM SEEKS TO REPLACE HUMAN HACKERS WITH AI
- By The Financial District
- Sep 16, 2020
- 2 min read
If the US Joint Operations Center is the physical embodiment of a new era in cyber warfare — the art of using computer code to attack and defend targets ranging from tanks to email servers — IKE is the brains. It tracks every keystroke made by the 200 fighters working on computers below the big screens and churns out predictions about the likelihood of success on individual cyber missions. It can automatically run strings of programs, adjusting constantly as it absorbs information, wrote Zachary Fryer-Biggs of the Center for Public Integrity (CPI) for Yahoo News.

IKE is a far cry from the prior decade of cyber operations, a period of manual combat that relied on the most mundane of tools. The hope for cyber warfare is that it won’t merely take control of an enemy’s planes and ships but will disable military operations by commandeering the computers that run the machinery, obviating the need for bloodshed. The concept has evolved since the infamous American and Israeli strike against Iran’s nuclear program with malware known as Stuxnet, which temporarily paralyzed uranium enrichment starting in 2005. However, the code that made the attack successful somehow escaped from that system and started popping up across the internet, revealing America’s handiwork to the security researchers who discovered the worm in 2010. That led to strict rules governing how and when cyber weapons could be used.
IKE hasn’t been turned into a fully autonomous cyber engine, and there’s no chance nuclear weapons would ever be added to its arsenal of hacking tools, but it’s laying the groundwork for computers to take over more of the decision making for cyber combat. U.S. commanders have had a persistent fear of falling behind rivals like China and Russia, both of which are developing AI cyber weapons.
While these growing autonomous cyber capabilities are largely untested, there are no legal barriers to their deployment. What worries some experts, however, is that artificial intelligence systems don’t always act predictably, and glitches could put lives at risk. The computer “brains” making decisions also don’t fret about collateral damage: if allowing US troops to be killed would give the system a slight advantage, the computer would let those troops die.