A central question for any social contract conception of government is the optimal balance between liberty and safety. In this era of agentic AI bots (which use reasoning, planning, and tools) and physical AI robots (AI systems embedded in hardware), can our moral judgment keep pace?
Several companies provide their AI technologies to departments of the U.S. government, including intelligence agencies and the Department of War. Three of them (there are others) are OpenAI, Palantir, and Anthropic. OpenAI remains open to the government’s needs. Palantir, led by Alex Karp, is gung-ho patriotic.
By contrast, Anthropic is putting restrictions on the government’s use of its AI technology. Specifically, the company insists that its AI stacks not be used for autonomous weapons or domestic mass surveillance. That may be a conscientious position, but the government didn’t take kindly to it. In fact, the Pentagon has slapped Anthropic with the very rare designation of supply-chain risk. That jeopardizes its future revenues and may eventually restrict its market penetration (though, as discussed below, its technology is being implemented elsewhere). At a minimum, being blacklisted threatens Anthropic’s lucrative business with the government.
Sam Altman, head of competing OpenAI, is indeed more open. At an all-hands meeting he put the kibosh on any sanctimonious staff stirrings, making it clear that the company doesn’t “get to make operational decisions” about how the government uses their technologies. If OpenAI’s technology can help win the Iran war, or at least help uncover security threats, then let circumspect experts who are privy to intelligence make that determination.
Ah, then there’s Palantir. CEO Karp is unapologetically pro-American; he is proud to help our war effort. During a recent interview, he exulted that AI gives the U.S. and its allies an edge in the Iran war, crucially through the surveillance capabilities of Project Maven that aid targeting decisions. Interestingly, Palantir deploys Anthropic’s Claude AI models because they are proficient at identifying patterns, revealing “data-driven insights,” and supporting “informed decisions in time-sensitive situations.”
It seems a bit convoluted given their rare supply-chain risk designation, but, even if indirectly, Anthropic (for now) is helping in the Iran war effort. But what about domestic surveillance, especially with the increased threats of domestic terrorism, sleeper cells, and upcoming sporting extravaganzas like the World Cup (with Iraq, of all countries, potentially replacing Iran, who pulled out)?
It’s prudent to be wary of potential privacy infringements. Indeed, Benjamin Franklin’s statement that “Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety” still resonates. However, his concern was not with balancing government power against civil liberties as we currently define them, but with effective self-government in matters of collective security. As another historical curiosity, there are serious doubts as to whether Patrick Henry ever uttered the famous phrase “Give me liberty, or give me death.”
Social contract theories of government (including our own) concede that in order to provide security, a smidgeon of privacy may be curtailed under the “consent of the governed.” After all, in a state of nature, complete “freedom” may run roughshod over the meek as “might makes right.” The relationship between liberty and safety is symbiotic: if humans can restrain AI from running amok, it has the potential to help ensure our safety, thus allowing more freedom of choice. With war raging and terroristic activity on the rise, I’m open to OpenAI, and I cheer Palantir.
READ MORE from Noel Williams: