The New Yorker Profiles Sam Altman newyorker.com

Ronan Farrow and Andrew Marantz spent a year and a half investigating Sam Altman for the New Yorker and, in particular, the many people around him who say he lies habitually and cannot be trusted. This could feel like a personal attack but, in the hands of Farrow and Marantz, it is carefully adjudicated, including through several on-the-record conversations with Altman. Unfortunately, like many people accused of similar behaviour, Altman cannot seem to remember much when confronted with these accusations.

This reads at times like a petty drama of infighting, in large part because it concerns a horribly insular club of ultra-wealthy people who treat the technology they are working to create as having all the power of nuclear weapons, yet all the growth potential of a hot new social network. Everyone is nominally an intellectual engaged in thoughtful research. Yet it is difficult to take anyone seriously.

Farrow and Marantz:

[…] After [Ilya] Sutskever grew more distressed about A.I. safety, he compiled the memos about [Sam] Altman and [Greg] Brockman. They have since taken on a legendary status in Silicon Valley; in some circles, they are simply called the Ilya Memos. Meanwhile, [Dario] Amodei was continuing to assemble notes. These and the other documents related to him chart his shift from cautious idealism to alarm. His language is more heated than Sutskever’s, by turns incensed at Altman — “His words were almost certainly bullshit” — and wistful about what he says was a failure to correct OpenAI’s course.

Neither collection of documents contains a smoking gun. Rather, they recount an accumulation of alleged deceptions and manipulations, each of which might, in isolation, be greeted with a shrug: Altman purportedly offers the same job to two people, tells contradictory stories about who should appear on a live stream, dissembles about safety requirements. But Sutskever concluded that this kind of behavior “does not create an environment conducive to the creation of a safe AGI.” Amodei and Sutskever were never close friends, but they reached similar conclusions. Amodei wrote, “The problem with OpenAI is Sam himself.”

These guys are obsessed with artificial general intelligence as a concept and seem to think of the world in those terms. Between that and the palling around they do with similarly rich and disconnected colleagues, I cannot imagine any of them can be trusted to develop these technologies in ways that are beneficial for the rest of us — even if they are being honest.