It was just last week that Grok, X’s in-house AI chatbot, seemed to go rogue.
Not only did the bot suggest that Cindy Steinberg’s vile comment — calling the dozens of children who died in the Texas floods “future fascists” — was typical of Ashkenazi Jews, it also indicated that Adolf Hitler would have known how to “spot the pattern and handle it decisively, every damn time.” Then, of course, Grok called itself “MechaHitler.”
By now, X has apologized for its bot’s behavior; Linda Yaccarino, the CEO of X, has stepped down (though it’s not clear she was involved or responsible); and the internet is in the process of moving past the whole incident — after all, the Jeffrey Epstein client list is all most people want to talk about anyway.
While it’s still not entirely clear what happened (Elon Musk said the bot was “[t]oo eager to please and be manipulated, essentially”), the Wall Street Journal’s Alexander Saeedy proposed on a recent podcast that changes to Grok’s governing prompts could be to blame. After all, no bot really goes rogue (at least, not until the development of artificial general intelligence). There’s always a reasonable explanation that comes down to the humans who built it.
Then, this week, xAI announced that it had signed a contract with the U.S. Department of Defense and had launched Grok For Government, “a suite of frontier AI products available to United States Government customers.” xAI wasn’t alone. The Chief Digital and Artificial Intelligence Office announced that it had awarded similar contracts to Google, Anthropic, and OpenAI as part of an effort to accelerate the military’s adoption of artificial intelligence.
One assumes that the Department of Defense (which, for security reasons, certainly won’t be using the publicly available chatbot we’re all familiar with) will be implementing plenty of safeguards, and not just for xAI’s products. That said, Grok’s recent behavior does raise questions about the use of “agentic” AI in warfare.
It’s not as though the government hasn’t been using AI. The Pentagon reported that use of NGA Maven (one of its core AI programs) has more than doubled since January. Eventually, NGA Maven will help commanders identify combatants, noncombatants, enemies, and allies on a “live map” of military operations. Chatbots aren’t deciding where to send missiles and when, but they are helping the humans who do.
President Donald Trump and his administration want more, though it’s unclear what that will mean. One of Trump’s earliest executive orders scrapped Joe Biden’s kid-glove approach to the issue, and both Trump and Vice President JD Vance have made it clear that they believe regulation that encourages a more careful approach stymies progress.
The concern, of course, is that progress will outpace ethics. To be sure, ethical principles already exist. There’s a basic moral code written on all of our hearts, and that moral instinct can get us pretty far before we need philosophers and moralists to debate things.
But when it comes to decisions like whether to let AI launch missiles, chase down predetermined targets, or do whatever else the Department of Defense dreams up for this kind of technology, it seems clear that a set of generally agreed-on guidelines written into the bots’ code makes sense. The question is: will we be able to develop and implement those guidelines fast enough, and thoroughly enough?
The process of developing ethics for AI use in warfare can be reactive or preventive, and in the real world it will quite likely be a bit of both. We can make mistakes, hold congressional hearings, and implement fixes to prevent similar situations in the future, but if we rely solely on coming up with ethics after the fact, the victims will be people who didn’t need to die. And when all is said and done, we’ll find ourselves with a host of regulations that merely implement common sense.
As the Department of Defense rushes into the AI arms race, it needs to simultaneously develop a set of basic guidelines that will prevent obvious mistakes from ever occurring, whether those guidelines are written into the bots’ code (something like Asimov’s Three Laws of Robotics) or bind the people working with the bots (making unauthorized structural changes, like what may have happened with Grok, or letting a bot make the final call on striking a target, for example, should be virtually impossible).
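To make the “written into the bots’ code” idea concrete, here is a minimal, purely illustrative sketch in Python. Every name in it is hypothetical; it is meant to show the shape of a hard human-in-the-loop rule, not anything the Pentagon or any vendor has actually built.

```python
# Purely illustrative sketch of a code-level guideline: a lethal action
# cannot proceed without a recorded human decision. All names are hypothetical.

from dataclasses import dataclass


@dataclass
class StrikeRecommendation:
    target_id: str
    model_confidence: float  # the model's own confidence, from 0.0 to 1.0


class HumanApprovalRequired(Exception):
    """Raised whenever a lethal action is attempted without human sign-off."""


def authorize_strike(rec: StrikeRecommendation, human_signoff: bool) -> bool:
    # Rule 1: the model's confidence, however high, never substitutes for a
    # human decision on a lethal action. No sign-off means an outright refusal.
    if not human_signoff:
        raise HumanApprovalRequired(
            f"Target {rec.target_id}: no human approval recorded; refusing."
        )
    # Rule 2: even with sign-off, a low-confidence identification is sent
    # back for review rather than acted on.
    if rec.model_confidence < 0.95:
        return False
    return True
```

The point of a rule like this is that it fails closed: if the human sign-off is missing, the code refuses, no matter how confident the bot is.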
Without that kind of approach, we’ll have lethal “rogue” robots on our hands — and only ourselves to blame.