Grok Shows How Centralized Tech Can Be Manipulated ⇥ techdirt.com
Mike Masnick, Techdirt, reacting to Grok’s Nazi turn:
We need to take back control over the tools that we use.
Especially these days, as so many people have started (dangerously) treating AI tools as “objective” sources of truth, people need to understand that they are all subject to biases. Some of these biases are in their training data. Some are in their weights. And some are, as is now quite clear, directly in their system prompts.
The problem isn’t just bias — it’s whose bias gets embedded in the system. When a centralized AI reflects the worldview of tech billionaires rather than the diverse perspectives of its users, we’re not getting artificial intelligence. We’re getting artificial ideology.
I am half compelled by this argument and half concerned by it. I obviously believe we should be skeptical of how much trust we place in corporations. After all, they have given us ample reason to be suspicious of them.
Even before it became “X”, Twitter did not have the best reputation for quality discussion, and then it was bought by Elon Musk. I still do not believe there is sufficient evidence of bias in users’ feeds during the recent U.S. presidential election, but the anti-“political correctness” instruction written into Grok’s system prompt is a plainly obvious problem. Then, just this week, a new version of Grok was launched that consults Musk’s tweets when it gets stuck on a query. All of this should undermine whatever little trust anyone might have left in X and xAI.
A company with a much better reputation, historically, is Google. Even though it has faced decades of scrutiny and questions about the secrecy of its search rankings, it has generally gotten things more right than not. To be clear, I can point to dozens of times it has been bad at search, especially in the last five years, but it remains what most people think of when they think of searching the web. Yet, because A.I. feels to some like it works by magic, that reputation is now on the line, subject to good criticisms and very dumb ones alike. The Attorney General of Missouri, the state that nearly prosecuted a journalist for viewing the source of a website, is investigating Google, Meta, Microsoft, and OpenAI for being insufficiently supportive of the president’s record on Israel–U.S. relations. The Attorney General approvingly cites Missouri v. Biden, a case the state lost.
Yet, even with all this in mind, we need to be able to trust institutions to some extent. This is the half of me that is concerned by Masnick’s piece. I think it is a great suggestion that we should control our own tools, where anyone can “choose your own values, your own sources, and your own filters”. However, most people are unlikely to do these things. Most of us will probably use something from some big company we do not really trust because it ships with the system, or is built into the apps we use most, or whatever. We need to ensure the areas where we have little control are trustworthy, too.
What that probably means is some kind of oversight, akin to what we have in other areas where we have little control. This is how we have some trust in the water we drink, the air we breathe, the medicine we take, and the planes we fly in. Consumer protection laws give us something to stand on when we are taken advantage of. Yes, some places do this better than others, and I think we should learn from them instead of throwing up our hands and pretending this problem will be solved on an individual basis. To be clear, I do not read Masnick’s piece as some kind of libertarian fantasy or an anti-regulation screed, nor do I read Alex Komoroske’s manifesto that way. But I also believe there should be some regulation, because we need to be realistic about the practical limits of how much time and effort people will invest in controlling their own experience.