President Trump last week issued several executive orders that command AI developers to refrain from building “ideological biases or social agendas” into their artificial intelligence services. The orders deploy the federal government’s procurement power to enforce ideological neutrality on AI Large Language Models (LLMs). The president’s concern is understandable.

Google’s AI image-generator Gemini, in an effort to provide racial and gender balance, portrayed both the American founders and Nazi soldiers as Black, the Pope as female, and National Hockey League players as women. What was going on here? “Well, it turned out that Google was aware that Gemini’s data, which draws from the entire internet, was flawed,” said Bobby Allyn, an NPR technology correspondent. “It perpetuated stereotypes. There are more images of male doctors than female doctors. There are more photos of white CEOs than executives of color. So every time someone asked for an image, Google placed secret code into the request that basically said, make the images more diverse.” What had begun as a commendable tweak turned into a monstrous distortion of reality.

Conservatives note that LLMs are also apt to give progressive answers on controversial social issues. One conservative reported that ChatGPT refused to create an argument for how fossil fuels enhance human civilization. It was willing to write poems extolling former President Biden but not one about President Trump.

On the other hand, when we asked ChatGPT about the 1619 Project, which asserts that America’s true founding began with the importation of slaves rather than with the Declaration of Independence in 1776, it answered: “The claim that America's ‘real founding’ was in 1619 is a provocative reinterpretation, not a universally accepted historical fact.” We would score that as a solid and unbiased answer. So what is going on with the more biased answers?
LLMs compose responses extracted from terabytes of public and copyrighted material vacuumed up from the internet. It is all too easy to forget that humans originate AI’s raw material. It necessarily absorbs the biases of society, of the right as well as the left. The old saying about computer programming – “garbage in, garbage out” – applies to the contents of the human brain as well.

Nor should we overlook the built-in biases the president’s executive orders target. Whether the platform is ChatGPT or X, design parameters often reflect the views of their creators.

Though the president’s executive orders aim at a real problem, his approach has two flaws. The first is that when the government uses its purchasing power to manage speech, it is clearly encroaching on First Amendment territory. Over the years, government influence over AI could bend this technology in one political direction or another.

This points to the second problem – that language is slippery. The 20th-century philosopher Ludwig Wittgenstein warned against the “bewitchment of our intelligence” by language. Words are simply too malleable and subject to interpretation to codify under an order. What are the boundaries of “woke”? Where do the ideological hobbyhorses of the DEI movement end and a simple dedication to racial fairness begin? These are real questions that no government regulation could or should answer. Only time, criticism, free markets, and free speech can do that.