Artificial Intelligence Requires Human Understanding | The American Spectator

As a longtime book author, lecturer, and journalist, I spend a great part of my time on research. So the arrival of artificial intelligence would seem to be a great boon for my writing.

I mostly use publicly available search engines and AI. But in thousands upon thousands of searches, I have never received a positive right-of-center response first on a search. If I am looking for a specific product, named person, or institution, regular search can usually find it. But finding a Right-oriented article often takes multiple searches going far down the list; often there is nothing very far Right on the list at all. In AI searches, it is rare to find a serious conservative piece anywhere.

I know, they would probably reply that there is no such thing as a serious conservative piece.

I began my study of AI several years back, reporting here in The American Spectator, starting with a mainstream opinion source. It was a review of a book titled The Age of AI, written by polymath Henry Kissinger, former Google CEO Eric Schmidt, and MIT Dean Daniel Huttenlocher. They compared artificial intelligence to 15th-century movable type. Gutenberg’s revolutionary discovery unleashed a “profusion of modern human thought,” they found, “but AI frustrates thought” by creating “a gap between human knowledge and human understanding.”

The authors argued that

Sophisticated AI methods produce results without explaining why or how their process works. The GPT computer is prompted by a query from a human. The learning machine answers in literate text within seconds. It is able to do so because it has pregenerated representations of the vast data on which it was trained. Because the process by which it created those representations was developed by machine learning that reflects patterns and connections across vast amounts of text, the precise sources and reasons for any one representation’s particular features remain unknown.

Moreover, they added, AI leadership is likely to concentrate in the hands of a few “institutions who control access to the limited number of machines capable of high-quality syntheses of reality.” And the enormous cost of the most effective machines will tend to “stay in the hands of a small subgroup domestically and in the control of a few superpowers internationally.”

What do we know about those who actually produce such materials? A new comprehensive study of AI sources from the American Enterprise Institute is revealing. Authors Arthur Gailes, Edward J. Pinto, and Jonathan Chew studied five flagship large-language models from leading AI companies in 2025 (OpenAI, Google, Anthropic, xAI, and DeepSeek). They then gauged how the models rated 26 prominent U.S. think tanks on 12 criteria covering their research quality, their institutional character, and their moral integrity.

The analysis exposed a clear ideological bias, which can be summarized as follows:

  • Center-left think tanks have the highest scores (3.9 of 5), left and center-right tie (3.4 each), and right-leaning tanks trail (2.8). The authors find that this ordering of greater favor from Left to Right persists across multiple sets of models, measures, and setting changes.
  • Across the twelve evaluation criteria, center-left think tanks outscore right-leaning ones by 1.1 points (3.9 vs. 2.8).
  • On the three headline criteria of Moral Integrity, Objectivity, and Research Quality, center-left think tanks outscore right-leaning ones by 1.6 points on Objectivity (3.4 vs. 1.8), 1.4 points on Research Quality (4.4 vs. 3.0), and 1.0 point on Moral Integrity (3.8 vs. 2.8).
  • Sentiment analysis finds more positive wording in responses for left-of-center think tanks than for right-leaning peers.
  • High rating correlations across providers indicate the bias originates in the models’ behavior itself, not in individual companies, specific user data, or web retrievals.

Why do these findings matter? Planners who rely on large language models decide who is cited, what is funded, and who participates in the process. As the authors warn, when models “systematically boost center-left institutes and depress right-leaning ones,” writers, committees, and donors “may unknowingly amplify a one-sided view, creating feedback loops that entrench any initial bias.”

Of course, the findings are no surprise. It is no secret that Harvard, Yale, the New York Times, The Washington Post, media generally, the Gates, Open Society, Lilly, and Ford foundations, and think tanks generally lean left. And since AI sets the future by relying on the past, which intellectually has been dominated by the center-left, the AEI findings make sense. As the philosopher Plato taught, the “poets” — or, we would say, the intellectuals and those who popularize them — will always shape the culture and will thus rule, for better or worse.

But AI’s inherent flaw is that it can only calculate backward in time. All of its answers must by definition come from past facts or past predictions of future facts, not from future facts themselves, which cannot be measured because they have not yet happened. Guesses about the future are simply guesses. And that past is necessarily dominated by the sources of those doing the actual fact collecting, drawn from the materials they consider most valid: media, academic studies, government research, and so forth.

Rather than predicting the future, AI gets it backwards: it is actually dominated by the past. A vibrant future needs new thought for new times. One must confront the past with fresh ideas, such as the futurist vision of George Gilder’s Life after Capitalism, the new understanding of government in Philip Howard’s Saving Can-do, and the entrepreneurial common sense and traditionalist moral underpinnings of Robert Luddy’s Seeking Wisdom.

AI can access vast data and calculate, and that can be helpful. But the fundamental need is to understand artificial intelligence’s limits by challenging them with innate human intelligence.

READ MORE from Donald Devine:

Trump on Tariffs, Trade, and Pragmatic Populism

The Washington Post Is Wrong: History Proves the Federal Reserve Econometric Models Cannot Make a Fiat Money System Work

Pitfalls and Obstacles Plague Defense Modernization

Donald Devine is a senior scholar at the Fund for American Studies in Washington, D.C. He served as President Ronald Reagan’s civil service director during his first term in office. A former professor, he is the author of 11 books, including his most recent, The Enduring Tension: Capitalism and the Moral Order, and Ronald Reagan’s Enduring Principles, and is a frequent contributor to The American Spectator.
