OpenAI Is Everything It Promised Not to Be

Chloe Xiang, Vice:

This blog post and OpenAI’s recent actions — all happening at the peak of the ChatGPT hype cycle — are a reminder of how much OpenAI’s tone and mission have changed from its founding, when it was exclusively a nonprofit. While the firm has always looked toward a future where [Artificial General Intelligence] exists, it was founded on commitments including not seeking profits and even freely sharing code it develops, which today are nowhere to be seen.


Will this AI be shared responsibly, developed openly, and without a profit motive, as the company originally envisioned? Or will it be rolled out hastily, with numerous unsettling flaws, and for a big payday benefitting OpenAI primarily? Will OpenAI keep its sci-fi future closed-source?

This was published February 28, roughly two weeks before GPT-4 was launched.

Ben Schmidt, of Nomic AI, on Twitter:

I think we can call it shut on ‘Open’ AI: the 98 page paper introducing GPT-4 proudly declares that they’re disclosing *nothing* about the contents of their training set.

James Vincent, The Verge:

Speaking to The Verge in an interview, Ilya Sutskever, OpenAI’s chief scientist and co-founder, expanded on this point. Sutskever said OpenAI’s reasons for not sharing more information about GPT-4 — fear of competition and fears over safety — were “self evident”:

“On the competitive landscape front — it’s competitive out there,” said Sutskever. “GPT-4 is not easy to develop. It took pretty much all of OpenAI working together for a very long time to produce this thing. And there are many many companies who want to do the same thing, so from a competitive side, you can see this as a maturation of the field.”

In addition to effort and competition, Sutskever also raises questions about what it would mean for safety if the company were more transparent, a point Schmidt pushes back on, while Vincent documents potential legal liability. But were these complications not foreseeable, at least on the competition and safety fronts? Why maintain the artifice of the OpenAI non-profit and its suggestive name? A growing problem for companies like this is trustworthiness; why not pick a new name that is not, you know, objectively incorrect?