
Pentagon reportedly integrating AI into defense systems, but is that really such a good idea?

The Pentagon has begun integrating artificial intelligence (AI) technology into its defensive systems, but is that such a great idea?

The beauty of such systems, according to Politico, is that they “can respond on their own, without human input — and move so fast against potential enemies that humans can’t keep up.”

This makes them a powerful tool.

The problem is that AI appears to lack common sense.

Last year, Stanford University’s Hoover Wargaming and Crisis Simulation Initiative Director Jacquelyn Schneider started conducting war games featuring the country’s top AI large language models (LLMs).

These sorts of AI war games have become exceedingly popular.

According to Politico, Schneider exposed the AI systems to “fictional crisis situations that resembled Russia’s invasion of Ukraine or China’s threat to Taiwan.” The results were not at all good.

“Almost all of the AI models showed a preference to escalate aggressively, use firepower indiscriminately and turn crises into shooting wars — even to the point of launching nuclear weapons,” Politico notes.

“The AI is always playing Curtis LeMay,” Schneider said. “It’s almost like the AI understands escalation, but not de-escalation. We don’t really know why that is.”

So if the world were in a free-for-all battle where the goal was simply to exterminate everybody else, AI would clearly excel. But in the real world, not so much.

Despite this, the Pentagon keeps signing deals with AI firms.

The problems are compounded when you factor in the chances of the Pentagon eventually becoming dependent on AI — which, believe it or not, is very, very possible.

“[T]he need for lightning-fast decision-making, coordinating complex swarms of drones, crunching vast amounts of intelligence data, and competing against AI-driven systems built by China and Russia mean that the military is increasingly likely to become dependent on AI,” Politico notes.

Complicating matters is the fact that experts like Schneider still don’t even really understand how LLMs actually work.

“[W]hile the Pentagon is racing to implement new AI programs, experts like Schneider are scrambling to decipher the algorithms that give AI its awesome power before humans become so dependent on AI that it will dominate military decision-making even if no one ever formally gives it that much control,” Politico notes.

But speaking of control, Schneider is afraid that commanders will grow ever more reliant on AI, to the point that they’ll become afraid to challenge its potentially flawed decision-making.

“I’ve heard combatant commanders say, ‘Hey, I want someone who can take all the results from a war game and, when I’m in a [crisis] scenario, tell me what the solution is based on what the AI interpretation is,’” she said.

The good news is that in 2023, the Department of Defense updated its directive on weapons systems linked to AI by noting that “appropriate levels of human judgment over the use of force” are mandatory in all cases.

But according to Politico, critics are worried that the directive’s language is too vague. They’re also concerned because the directive includes a waiver allowing senior department officials to override it.

Plus, the directive doesn’t yet apply to nuclear weapons.

“The administration supports the need to maintain human control over nuclear weapons,” a senior administration official nevertheless told Politico.

The problem is that China and Russia are believed to already be using AI in their defense systems, meaning the U.S. has no other option but to jump in and integrate before it’s too late.


Vivek Saxena




