Grok This ⇥ web.archive.org
If you want to know what absolutely disingenuous A.I. regulation looks like, look no further than Elon Musk, who, in March, was among the signatories of a plea to pause development of these technologies. Thanks to Mike Masnick for this first link.
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” said the letter issued by the Future of Life Institute.
The Musk Foundation is a major donor to the non-profit, as well as London-based group Founders Pledge, and Silicon Valley Community Foundation, according to the European Union’s transparency register.
“AI stresses me out,” Musk said earlier this month. He is one of the co-founders of industry leader OpenAI and his carmaker Tesla uses AI for an autopilot system.
Even at the time, as this Reuters story acknowledges, Musk’s concerns about A.I. rang hollow since he is so eager to avoid responsibility for the autonomous systems in Teslas.
I suppose Musk must have had a change of heart because his xAI venture launched the Grok language model; I am linking to an Internet Archive capture because this website does not have permalinks. Grok was introduced like so:
Grok is designed to answer questions with a bit of wit and has a rebellious streak, so please don’t use it if you hate humor!
A unique and fundamental advantage of Grok is that it has real-time knowledge of the world via the 𝕏 platform. It will also answer spicy questions that are rejected by most other AI systems.
If you want to be pedantic, this does not breach the declarations in the open letter from the Future of Life Institute, signed by Musk, for two reasons:
1. The letter-writers demanded a six-month pause and, to be fair, it has been seven months.
2. The letter-writers were worried about “systems more powerful than GPT-4”. Grok, according to xAI, is somewhere between GPT-3.5 and GPT-4.
But come on; I am no fool. If Musk truly endorsed the intent of the open letter and shared its concern over a “dangerous race to ever-larger unpredictable black-box models with emergent capabilities”, it is idiotic to launch something which offers “spicy” and “rebellious” answers.
Regulators are not complete dummies. They are surely aware of efforts by A.I. businesses to write favourable laws to govern the industry. My hope is for there to be a framework which encourages cautious advancement, with more oversight required as risks become more severe. That sure sounds a lot like the E.U.’s proposal to me, and it seems quite reasonable.