About That OpenAI ‘Breakthrough’ theverge.com

You are probably sick of hearing about OpenAI palace intrigue; I am, too, but I have a reputation to correct. I linked favourably to something published at Fast Company recently, and I must repent. I have let you down and I have let myself down and, happily, I can fix that.

On Monday, which only just happened earlier this week, Fast Company’s Mark Sullivan asked the question “Is an AGI breakthrough the cause of the OpenAI drama?”; here is the dek, with emphasis added:

Some have theorized that Sam Altman and the OpenAI board fell out over differences on how to safeguard an AI capable of performing a wide variety of tasks better than humans.

Who are these “some”, you might be asking? Well, here is how the second paragraph begins:

One popular theory on X posits that there’s an unseen factor hanging in the background, animating the players in this ongoing drama: the possibility that OpenAI researchers have progressed further than anyone knew toward artificial general intelligence (AGI) […]

Yes, some random people are tweeting, and that is worthy of a Fast Company story. And, yes, that is the only source in this story — there is not even a link to the speculative tweets.

While stories based on tweeted guesswork are never redeemable, the overall thrust of Sullivan’s story appeared to be confirmed yesterday in a paywalled Information report and by Anna Tong, Jeffrey Dastin, and Krystal Hu of Reuters:

Ahead of OpenAI CEO Sam Altman’s four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.

[…]

The sources cited the letter as one factor among a longer list of grievances by the board leading to Altman’s firing, among which were concerns over commercializing advances before understanding the consequences. Reuters was unable to review a copy of the letter. The staff who wrote the letter did not respond to requests for comment.

But Alex Heath, of the Verge, reported exactly the opposite:

Separately, a person familiar with the matter told The Verge that the board never received a letter about such a breakthrough and that the company’s research progress didn’t play a role in Altman’s sudden firing.

Heath’s counterclaim relies on a single source compared to Reuters’ two — I am not sure how many the Information has — but note that none of them require that you believe OpenAI has actually made a breakthrough in artificial general intelligence. This is entirely about whether the board received a letter making that as-yet unproven claim and, if that letter was received, whether it played a role in this week of drama.

Regardless, any story based on random internet posts should be canned by an editor before anyone has a chance to publish it. Even if OpenAI really has made such a breakthrough and there really was a letter that really caused concern for the company’s board, that Sullivan article is still bad — and Fast Company should not have published it.

Update: In a lovely coincidence, I used the same title for this post as Gary Marcus did for an excellent exploration of how seriously we ought to take this news. (Via Charles Arthur.)