Month: November 2023

A couple of days ago, I wrote up a short article about what I thought was a neat little workaround for being required to quit Safari to update a Safari app extension. This is a disruptive step for which developers have no workaround; it is just how Safari works.

But even though everything seemed to be okay on a surface level, something told me I should consult with an expert about whether this was a good idea.

So I emailed Jeff Johnson:

Following Nick Heer’s workaround, when you subsequently reenable StopTheMadness after updating to the latest version in the App Store while Safari is still open, Safari injects the updated extension’s content script and style sheet into open web pages that the extension has permission to access, which is typically all of them, including the pages with leftover content scripts from the previous version of the extension. Consequently, an App Store update can leave you with two different versions of the extension’s content script running simultaneously in the same web pages! This is a very undesirable situation, because the two competing scripts could conflict in unpredictable ways. You’ve suddenly gone from stopping the madness to starting the madness. In hindsight, therefore, quitting Safari in order to update extensions seems like a good idea, and that’s what I recommend to avoid potential issues and weird behavior.

After Johnson told me about this issue, it was obvious that posting my workaround was a bad idea, so I scrapped the post. But I think Johnson’s piece is worth reading to understand why the design of Safari extensions requires users to quit the browser to install updates. Of course I do not think it should, but what can you do?1

  1. If you work at Apple, see FB11882565 and FB12202841. ↥︎
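Johnson’s scenario is concrete enough to sketch in a few lines. Purely as an illustration (this is not StopTheMadness’s actual code, and every name here is made up), a content script could stamp the page with a marker and refuse to run if one already exists. Note that even this guard only stops the second copy; the page is stuck with the stale version until a reload, which is part of why quitting Safari is the cleaner fix:

```typescript
// Hypothetical duplicate-injection guard for an extension content script.
// The idea: stamp the page with a version marker, and bail out if any
// copy of the script is already active, so two versions never fight
// over the same page.
type Page = Record<string, unknown>;
const MARKER = "__extensionContentScriptVersion";

function injectOnce(page: Page, version: string, run: () => void): boolean {
  if (page[MARKER] !== undefined) {
    // A copy, possibly an older version, is already running here.
    return false;
  }
  page[MARKER] = version;
  run();
  return true;
}

// In a real content script, `page` would be `window`; a plain object
// stands in here so the sketch is self-contained.
const page: Page = {};
let runs = 0;
injectOnce(page, "1.0", () => runs++); // first copy runs
injectOnce(page, "2.0", () => runs++); // updated copy bails out
console.log(runs); // 1
```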

Matt Growcoot, PetaPixel:

Standing in front of two large mirrors, Tessa Coates’ reflection does not return the same pose that she is making, and not only that, but both reflections are different from each other and different from the pose Coates was actually holding.


The photo is not a “Live Photo” nor is it a “Burst” — it’s just a normal picture taken on an iPhone. Understandably, Coates was freaked out.

Photography is real strange these days.

Apparently, an Apple Store employee told Coates that the company is testing a feature which sounds similar to Google’s Best Take. This is, as far as I can find, the first mention of this claim, but I would not give it too much credence. Apple retail employees, in my experience, are often barely aware of the features of the current developer beta, let alone an internal build, and they are not briefed on unannounced features. To be clear, I would not be surprised if Apple were working on something like this, but I would not bet on the reliability of this particular account.

Update: MKBHD researcher David Imel, on Threads, says it is unlikely this photo is being depicted accurately, pointing out how different the arm positions are in each pose given the narrow exposure window. The metadata posted by Coates does not disprove this; it says the exposure time was 1/100 of a second. If three photos were shot in rapid succession at that shutter speed, they would be captured in a total of about 1/33 of a second, assuming no lag between frames. I have seen some bizarre computational stuff from my iPhone, but nothing like this.
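Imel’s arithmetic checks out. As a quick sanity check of the timing claim, assuming three back-to-back frames with zero gap between them:

```typescript
// Back-of-envelope check of the timing argument, using the metadata
// Coates posted: a 1/100 s exposure is 10 ms per frame.
const shutterMs = 1000 / 100;        // 10 ms per frame
const frames = 3;                    // three distinct poses in the image
const totalMs = frames * shutterMs;  // 30 ms if there were no gap at all

console.log(`${totalMs} ms, i.e. about 1/${Math.round(1000 / totalMs)} of a second`);
// prints "30 ms, i.e. about 1/33 of a second"
```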

Update: It may not be a panoramic photo, but it sure looks like an iPhone photo taken in panorama mode, according to Faruk of iPhonedo.

Sergiu Gatlan, Bleeping Computer:

Apple released emergency security updates to fix two zero-day vulnerabilities exploited in attacks and impacting iPhone, iPad, and Mac devices, reaching 20 zero-days patched since the start of the year.

Both of these are WebKit bugs.

According to Project Zero’s spreadsheet, Apple patched ten zero-days in 2022, thirteen in 2021, three in 2020, two in 2019, three in 2016, and none in 2018, 2017, 2015, and 2014. It seems like a similar story across the board: the 2014 spreadsheet contains just eleven entries total, while the 2023 sheet contains fifty-six so far.

It is surely impossible to know, but one wonders how much of this is caused by vendors and exploiters alike getting better at finding zero-days, and how much can be blamed on worsening security in software. The latter seems hard to believe given increased restrictions on how much data is simply lying around to be leaked, but perhaps that is itself a driver of the increasing number of reports: when you build more walls, there are more opportunities to find cracks.

Patrick Howell O’Neill reported for MIT Technology Review in 2021 that the escalating number of exploits is driven primarily by state warfare, then by criminals, and that a combination of increased vigilance and bug bounty programs seems to have improved discovery. Kevin Poireault, in Infosecurity Magazine earlier this year, reports that it is a sign of better security against more straightforward exploits, necessitating the use of more advanced techniques by adversaries.

Michael Geist, responding to the news that Google and the Canadian government have struck a deal for the Online News Act:

I’ve been asked several times today if the Canadian approach will be a model for other countries. While I suspect that many may be tempted by the prospect of new money for media, the Canadian experience will more likely be a cautionary tale of how government and industry ignored the obvious risks of its legislative approach and was ultimately left desperate for a deal to salvage something for a sector that is enormously important to a free and open democracy.

For how critical journalism is, it is myopic to keep pushing its business model from one unsustainable option to another. At least Canadian publications will not be losing Google referral traffic, but this seems like a bad compromise for everyone.

Daniel Thibeault and David Cochrane, CBC News:

Google and the federal government have reached an agreement in their dispute over the Online News Act that would see Google continue to share Canadian news online in return for the company making annual payments to news companies in the range of $100 million.

Sources told Radio-Canada and CBC News earlier Wednesday that an agreement had been reached. Heritage Minister Pascale St-Onge confirmed the news Wednesday afternoon.

It was only a couple of months ago that Google said it would be unable to comply with this legislation. According to the CBC, changes were recently made to permit Google to negotiate with publishers through a single representative instead of individually.

As I have written many times here, losing referrals from Meta was irritating but not that meaningful for publishers; at least, not until Threads was launched. But losing Google referrals would have been a blow.

I remain skeptical of the utility of this law, and worried about the precedent it sets. A link tax, even when restricted to just very large companies of a specific type, undermines the way the web works. Also, even though the government looks like it won — in the eyes of those who treat politics like sports — this law has produced less than half its total predicted effect. I would normally celebrate a democratic policy win, but this is a poor law whose effects are not good, merely not as bad as they could have been.

Harley Charlton, MacRumors:

Apple today rolled out the Apple Music Replay experience for 2023, allowing subscribers to see their top artists, songs, albums, genres, playlists, and stations of the year.


Apple Music Replay is Apple’s answer to Spotify Wrapped, but Apple Music Replay remains a web browser-only experience. The Music app itself can only show and play a basic playlist of your top songs for the year, ranked by most played, once it has been added via the Replay webpage.

Every year, millions of people give Spotify free marketing by sharing their music listening habits, and that must drive someone at Apple absolutely bananas. It is still a website, unlike Apple Books, and based on my searches of Twitter and Instagram, it seems many people miss the sharing button below each Replay section and just screenshot the page. Spotify Wrapped is an obviously better and more sharing-friendly product.

But you know what is cooler than either of these things? It is when you separate analyzing your music habits from how you listen to music.

John Voorhees, MacStories:

Spotify does a better job at surfacing interesting data with Wrapped, but if you’re like me and prefer other aspects of Apple Music, sign up for Last.fm, use one of the many excellent indie apps, like Marvis Pro, Soor, Albums, Longplay, Doppler, and Air Scrobble, that support the service, and then enjoy your weekly, monthly, and annual reports in Last.fm’s app or on its website.

When I like a record, I buy it — often from Bandcamp, but sometimes from iTunes or elsewhere — and, in the process, cut off the Apple Music connection, which makes my Replay stats non-reflective of my actual listening habits. I am not someone who feels the need to quantitatively analyze my entire life, but I do appreciate the way Last.fm collects information from many of the places I listen to music: in Music on my Mac, and in a variety of apps on my iPhone. And Voorhees points to AirScrobble as a way to fill in the gaps for when I am listening to a record or one of the mixes in each edition of Web Curios on my stereo.

If Apple Music Replay or Spotify Wrapped work for you, that is great; I have no reason to try to change your mind. But if you want to move between different listening sources and retain some control of what you entrust to any one service, I think Last.fm remains a great option.

Update: Joe Rosensteel:

This whole thing feels like someone was very excited to animate things, move album artwork around, and transform data, but no one really gave much thought to what this whole thing is supposed to mean to someone. How it makes someone feel.

Could not have written this any better myself. For anyone who loves music, seeing an album cover probably conjures up memories of a time when it was playing. It should transport me through a year of what I put into my ear. Does Apple love music? It used to.

Maggie Harrison, Futurism:

According to a second person involved in the creation of the Sports Illustrated content who also asked to be kept anonymous, that’s because it’s not just the authors’ headshots that are AI-generated. At least some of the articles themselves, they said, were churned out using AI as well.

“The content is absolutely AI-generated,” the second source said, “no matter how much they say that it’s not.”

After we reached out with questions to the magazine’s publisher, The Arena Group, all the AI-generated authors disappeared from Sports Illustrated’s site without explanation.

As Harrison notes, articles from people with fake names and headshots also appeared on another Arena Group site, the Street, which offers financial and investing advice. One of those authors, according to Harrison, is “Nicole Merrifield”. Harrison pointed to one article with personal finance tips; within that article are links to lists of recommended books about personal finance and investing — these picks are again credited to Merrifield.

David Roth, Defector:

The assurance from Sports Illustrated’s brass that what Futurism flagged as AI-generated content was in fact the product of 100 percent all-natural free-range human spammers is not only not reassuring, but just a restatement of the wild insult and threat running through all of this. It doesn’t matter if they are lying, or selling, or in earnest. They are in the business of selling noise, and they are selling a lot of it. What sounds like a metaphor — the wailing and yammering and relentless showboating salesmanship of the rich making every other sound indistinct or inaudible — is in point of fact just a description.

We are going to see a lot more of this.

So many of the sites with names you recognize have spent the past couple of decades juicing their search rankings and ad revenue by publishing as much stuff as possible, regardless of its quality. Remember when every news story was split across two or more pages, or was inexplicably delivered in the form of a slideshow?1 The success of sites like the Huffington Post and Business Insider proved how much value there was in simply retyping and summarizing someone else’s news story, with only a cursory source link. Then structured lists became popular because the format broke long articles into sections with keyword-dense titles. The success of the Wirecutter encouraged more opportunities for affiliate linking across the web through product recommendations.

None of this made the web better for people. This formula of insubstantial content already reeks of something generated by a system rather than written by people, and that was true before any of it was machine-produced.

  1. One of the headline features of Safari’s Reader mode was its ability to combine pages of articles like these. ↥︎

I am with Mike Rockwell in my confusion about the limitations of Lock Screen widgets in iOS. It seems plausible for there to be design intent, but it is hard to see why I should not be able to place a rectangular widget on the right-hand side and place an Obscura camera button to its left. Surely it should be possible to improve accessibility for left-handed users without restricting them from using certain types of widgets in certain ways.

Juli Clover, MacRumors:

Apple with iOS 17.1 and watchOS 10.1 introduced a new NameDrop feature that is designed to allow users to place Apple devices near one another to quickly exchange contact information. Sharing contact information is done with explicit user permission, but some news organizations and police departments have been spreading misinformation about how it functions.

I cannot imagine how someone could surreptitiously activate this feature, but I can see how someone might get confused if they only watched a demo. In Apple’s support video, it almost looks as though the recipient will see the contact card as soon as the two devices are touched, perhaps because of the animation. But that is not how the feature works. When two devices are brought in close proximity, each person first sees their own contact card; from there, they can choose whether they want to share the card. Still, it is irresponsible for police departments and news organizations to imply that anything is revealed without explicit permission.

Recently, you may recall, Elon Musk amplified some antisemitic conspiracy theories on the social media platform he owns, where he is, notably, the most popular user, and that caused widespread outrage. Which conspiracy theory? Which backlash? Well, it depends on how far back you want to look — but you need not rewind the clock very much at all.

David Gilbert, Vice:

Musk was repeating an oft-repeated and widely debunked claim that [George] Soros is attempting to help facilitate the replacement of Western civilization with immigrant populations, a conspiracy known as the Great Replacement Theory.


Musk also responded to tweets spreading other Soros conspiracy theories, including false claims that Soros, a Holocaust survivor, helped round up Jews for the Nazis, and claims that Soros is somehow linked to the Rothschilds, an entirely separate antisemitic conspiracy theory about Jewish bankers which the Soros conspiracies have largely replaced.

This was from six months ago. I think that qualifies as “recent”. If I were a major advertiser, I would still be hesitant to write cheques today to promote my products in the vicinity of posts like these and others far, far worse.

So that is May; in June, Musk decided to reply to an explicitly antisemitic tweet — an action which, due to Twitter’s design, would have pushed both the reply and the context of the original tweet into some number of users’ feeds.

Which brings us to September.

Judd Legum and Tesnim Zekeria, Popular Information:

Musk quickly lost interest in banning the ADL and began discussing suing the organization. In a series of posts, Musk said the ADL “has been trying to kill this platform by falsely accusing it & me of being anti-Semitic” and “almost succeeded.” He claimed that the ADL was “responsible for most of our revenue loss” and said he was considering suing them for $4 billion. In a subsequent post, he upped the figure to $22 billion.

“To clear our platform’s name on the matter of anti-Semitism, it looks like we have no choice but to file a defamation lawsuit against the Anti-Defamation League … oh the irony!,” Musk said.

The ADL, however, never accused Musk or X of being anti-Semitic. The group reported, correctly, that X was hosting anti-Semitic content and Musk had rolled back efforts to combat hate speech. And the ADL, exercising its First Amendment rights, encouraged advertisers to spend their money elsewhere unless and until Musk changed course. The notion that the ADL, a Jewish group, has the power to force corporations to bend to its will is rooted in anti-Semitic tropes about Jewish power over the business world.

Perhaps you feel like being charitable to Musk, for some reason, and would like to assume that he does not understand the tropes and innuendo with which he has engaged. That seems overly kind to me, and I am impressed you are more willing than I to give him the benefit of the doubt. But it sure seems like Musk took the condemnation of his tweets seriously, as he hosted Benjamin Netanyahu, the prime minister of Israel, in San Francisco in an attempt to smooth things over. How did that go?

Well, on November 15, Musk doubled down.

Lora Kolodny, CNBC:

Musk, who has never reserved his social media posts for business matters alone, drew attention to a tweet that said Jewish people “have been pushing the exact kind of dialectical hatred against whites that they claim to want people to stop using against them.”

Musk replied to that tweet in emphatic agreement, “You have said the actual truth.”

That, and several other things, likely explains why major advertisers decided to pause or stop spending on the platform. On Friday, Ryan Mac and Kate Conger of the New York Times reported that Twitter may miss up to $75 million in ad revenue this year as a result of these withdrawals; Twitter disputes that number. Some companies have also stopped posting.

Clearly, this is all getting out of hand for Musk. But his big dumb posting fingers have gotten him into trouble before, and he knows just what to do: an apology tour.

Jenna Moon, Semafor:

Elon Musk toured the site of the Oct. 7 massacre by Hamas in southern Israel on Monday, as the billionaire made a wartime visit to the nation amid allegations of antisemitism.

How long are the remaining advertisers on Musk’s platform going to keep propping it up? How many times do they need to see that he is openly broadcasting agreement with disturbing and deeply bigoted views? I selected just the stories with an antisemitic component, and only those from this year; Musk routinely dips his fingers into other extremist views in a way that can most kindly be compared to a crappy edgelord.

I will leave you with the story of what happened when Henry Ford bought the Dearborn Independent.

For the seventh year, the Pudding is holding its Pudding Cup for cool non-commercial web projects. Three winners get $1,500 each, and the submission deadline is this coming Thursday. If you made something on the web this year that you thought was even a little bit neat, you should enter it.

Kate Knibbs, of Wired, profiled Matthew Butterick — whom you probably know from his “Practical Typography” online book — about a series of lawsuits against major players in generative A.I.:

Yet when generative AI took off, he [Matthew Butterick] dusted off a long-dormant law degree specifically to fight this battle. He has now teamed up with Saveri as co-counsel on four separate cases, starting with a lawsuit filed in November 2022 against GitHub, claiming that the Microsoft subsidiary’s AI coding tool, Copilot, violates open-source licensing agreements. Now, the pair represent an array of programmers, artists, and writers, including comedian Sarah Silverman, who allege that generative AI companies are infringing upon their rights by training on their work without their consent.


But, again, fair use is a nebulous concept. “Early on, we heard from opponents that the Authors Guild v. Google case would be determinative,” Butterick says. If the courts said Google could do it, why couldn’t they scrape millions of books too? He’s unconvinced. “The point of the Google Books project was to point you to the original books, right? ‘Here’s where you can find this book in a library.’ Generative AI doesn’t do any of that. It doesn’t point you to the original work. The opposite — it competes with that work.”

I am not a lawyer and so my opinion on this holds no weight. Even as a layperson, though, I am conflicted in how I feel about these massive and well-funded businesses scraping the sum total of human creativity and feeding it to machines in an attempt to replicate it.

In general, I am frustrated by the state of intellectual property law, which too often benefits the richest and most powerful entities instead of individuals. We end up in ridiculous situations where works do not enter the public domain for many decades after the creator’s death; in the United States, public domain status can be avoided for around a century. But everything is a remix anyway, and we would not have the art of today without reinterpretation, sampling, referencing, or outright copying. Need an example? I copied the above copyrighted text from the Wired article because I wanted to comment on and build upon it.

Generative “A.I.” tools are, in some ways, an extension of this tradition, except they also have significant corporate weight behind them. There is an obvious power imbalance when a large company copies an artist — either directly or indirectly — compared to the inverse. Apple’s website, for example, is routinely reinterpreted by other businesses — Pixelmator and DJI come to mind — as well as plenty of individuals, and I see no reason to be upset about that. It is very different when a big company like Zara rips off artists’ work. What some of these generative products enable feels to me like the Zara example.

If generative A.I. is deemed to be fair dealing or fair use without a need to compensate individuals or even ask for permission, a small amount of power can be restored to individuals by allowing them to opt out. Search engines already do this: they provide a mechanism to signal you do not want your website to be indexed. This should be true in all cases; it should not be a requirement that you provide training data to a large language model in order to use a social media application, for example.
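For what it’s worth, the web’s existing opt-out convention is robots.txt, and the same mechanism is already being extended to training-data crawlers. As a sketch, a site that wanted to stay in search results while refusing known A.I. crawlers could publish something like this (GPTBot is OpenAI’s documented crawler token and CCBot is Common Crawl’s; compliance with robots.txt is entirely voluntary):

```
# Refuse known training-data crawlers, but allow everything else,
# including ordinary search engine crawlers.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
```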

For further reading, I thought Innovation, Science and Economic Development Canada produced a good discussion paper regarding these issues. The government is asking for public feedback until December 4.

You are probably sick of hearing about OpenAI palace intrigue; I am, too, but I have a reputation to correct. I linked favourably to something published at Fast Company recently, and I must repent. I have let you down and I have let myself down and, happily, I can fix that.

On Monday, which only just happened earlier this week, Fast Company’s Mark Sullivan asked the question “Is an AGI breakthrough the cause of the OpenAI drama?”; here is the dek, with emphasis added:

Some have theorized that Sam Altman and the OpenAI board fell out over differences on how to safeguard an AI capable of performing a wide variety of tasks better than humans.

Who are these “some”, you might be asking? Well, here is how the second paragraph begins:

One popular theory on X posits that there’s an unseen factor hanging in the background, animating the players in this ongoing drama: the possibility that OpenAI researchers have progressed further than anyone knew toward artificial general intelligence (AGI) […]

Yes, some random people are tweeting and that is worthy of a Fast Company story. And, yes, that is the only source in this story — there is not even a link to the speculative tweets.

While stories based on tweeted guesswork are never redeemable, the overall thrust of Sullivan’s story appeared to be confirmed yesterday in a paywalled Information report and by Anna Tong, Jeffrey Dastin, and Krystal Hu of Reuters:

Ahead of OpenAI CEO Sam Altman’s four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.


The sources cited the letter as one factor among a longer list of grievances by the board leading to Altman’s firing, among which were concerns over commercializing advances before understanding the consequences. Reuters was unable to review a copy of the letter. The staff who wrote the letter did not respond to requests for comment.

But Alex Heath, of the Verge, reported exactly the opposite:

Separately, a person familiar with the matter told The Verge that the board never received a letter about such a breakthrough and that the company’s research progress didn’t play a role in Altman’s sudden firing.

Heath’s counterclaim relies on a single source compared to Reuters’ two — I am not sure how many the Information has — but note that none of them require that you believe OpenAI has actually made a breakthrough in artificial general intelligence. This is entirely about whether the board received a letter making that as-yet unproven claim and, if that letter was received, whether it played a role in this week of drama.

Regardless, any story based on random internet posts should be canned by an editor before anyone has a chance to publish it. Even if OpenAI really has made such a breakthrough and there really was a letter that really caused concern for the company’s board, that Sullivan article is still bad — and Fast Company should not have published it.

Update: In a lovely coincidence, I used the same title for this post as Gary Marcus did for an excellent exploration of how seriously we ought to take this news. (Via Charles Arthur.)

Jessica Lyons Hardcastle, the Register:

FBI director Christopher Wray made yet another impassioned plea to US lawmakers to kill a proposed warrant requirement for so-called “US person queries” of data collected via the Feds’ favorite snooping tool, FISA Section 702.


“A warrant requirement would amount to a de facto ban, because query applications either would not meet the legal standard to win court approval; or because, when the standard could be met, it would be so only after the expenditure of scarce resources, the submission and review of a lengthy legal filing, and the passage of significant time — which, in the world of rapidly evolving threats, the government often does not have,” Wray said.

Wray just came out and said it: many of the highly invasive searches the FBI performs could not stand the basic legal scrutiny required for a warrant. The lesson his organization learned from this is not that it should be more careful and targeted, but that it needs a way to make illegal surveillance legal on paper — and that is the power Section 702 grants.

Ina Fried, Axios:

OpenAI said late Tuesday that it had reached a deal in principle for Sam Altman to return as CEO, with a new board chaired by former Salesforce co-CEO Bret Taylor.

The last five days of news sounds like the setup to a crappy rendition of the “Aristocrats” joke, and maybe we all need to rethink some things.

Eight Sleep CEO Matteo Franceschetti:

Breaking news: The OpenAI drama is real.

We checked our data and last night, SF saw a spike in low-quality sleep. There was a 27% increase in people getting under 5 hours of sleep. We need to fix this.

Source: @eightsleep data

Eight Sleep’s “Pod” mattress topper costs at least $1,795 in the United States, plus a $15 per month subscription which is required for the first year of use, so this is what you might call a limited sample.

Jason Koebler, 404 Media:

Franceschetti’s tweet reminds us that The Pod is essentially a mattress with both a privacy policy and a terms of service, and that the data Eight Sleep collects about its users can and is used to further its business goals. It’s also a reminder that many apps, smart devices, and apps for smart devices collect a huge amount of user data that they can then directly monetize or deploy for marketing or Twitter virality purposes whenever they feel like it.

Everyone deserves privacy, including people who buy $2,000 mattress toppers. But if I ever get to a point where I am signing off on a legal contract for my bed, please kick me in the head.

I would like to think I try to keep up with Canadian news, but the progress of Bill C-244 escaped my attention — and it seems like I am not the only one. Introduced last February, the bill passed unanimously in October, a legislative milestone celebrated by the likes of Collision Repair magazine, a Canadian automaker trade group, and the law firm Norton Rose Fulbright.

The progress of Bill C-244 has been so poorly reported that one of the few stories I found reads like it was written by a racist tractor. I found virtually no mainstream coverage of this important bill, so here is my attempt to rectify my own oversight.

In October 2022, Michael Geist explained why the bill was so important in Canada, in particular:

One of the biggest differences between Canada and the U.S. is that the U.S. conducts a review every three years to determine whether new exceptions to a general prohibition on circumventing digital locks are needed. This has led to the adoption of several exceptions to [Technological Protection Measures] for innovative activities such as automotive security research, repairs and maintenance, archiving and preserving video games, and for remixing from DVDs and Blu-Ray sources. Canada has no such system as the government instead provided assurances that it could address new exceptions through a regulation-making power. In the decade since the law has been in effect, successive Canadian governments have never done so. This is particularly problematic where the rules restrict basic property rights by limiting the ability to repair products or ensure full interoperability between systems.

As Geist explains, Canadians are legally prohibited from repairing anything they own if such an operation would necessitate bypassing any digital “lock”. This bill would “allow the circumvention of a technological protection measure if the circumvention is solely for the purpose of the diagnosis, maintenance or repair”, and its passage would fulfill a 2021 Liberal Party of Canada campaign pledge.

Earlier this year, a peculiar privilege was added to the bill, as spotted by the Institute for Research on Public Policy:

But a recent amendment to the bill risks leaving it without much reason for optimism. In a meeting of the standing committee on industry and technology in late March, members agreed to amend Bill C-244 to create a “carve-out” for devices with embedded sound recordings. In other words, it’s an exemption to Bill C-244’s exception. There is very little information on the reasons for this amendment and almost no discussion took place before it was adopted.

That text was retained in the version which was passed in October. In the “What On Earth” newsletter, Rachel Sanders of CBC News covered the bill earlier this month:

This change is part of what’s known as “right to repair” legislation, a broad spectrum of laws aimed at making goods more durable and fixable. In March, the federal government announced as part of Budget 2023 that it would work to implement a right to repair framework in 2024.

In an email to CBC, the Department of Innovation, Science and Economic Development said the government is doing pre-consultation work and that the right to repair in Canada could consist of different measures, including at the provincial and territorial levels.

To wit, Sanders notes Bill 29 was passed in Quebec last month, pushing the province even further ahead in consumer protections compared to the rest of Canada.

Chris Hannah:

We’ve gone from having small local communities to what can feel like, at times, having the entire world in your living room.

It’s probably why some people just make their online presence completely private. Because then they can control the scope of their interaction, and avoid an abundance of negativity in the case where something was picked up by an algorithm and shown to a huge number of people.

This post has been rattling around in my head since it was published about a week ago, but I was reminded of it again this weekend when I saw Alec Watson’s frustration with the replies he sees on Mastodon. It seems audience and scale can change the experience dramatically. Hannah suggests Mastodon has a higher quality of interaction and, in my own use, I largely agree: a reply to a post I make is most often useful and interesting. But Watson has over thirty thousand followers, and I can see how that could quickly become a problem.

A couple of years ago, Chris Hayes wrote for the New Yorker about how everyone is a little famous on the internet. It is not the first article to have made such an observation, but it is the one that has stuck with me since I linked to it. It still surprises me that social networks overwhelmingly default to public visibility and, most often, changing that affects everything in your account.

Eric Newcomer:

My understanding is that some members of the board genuinely felt Altman was dishonest and unreliable in his communications with them, sources tell me. Some members of the board believe that they couldn’t oversee the company because they couldn’t believe what Altman was saying. And yet, the existence of a nonprofit board was a key justification for OpenAI’s supposed trustworthiness.


Altman had been given a lot of power, the cloak of a nonprofit, and a glowing public profile that exceeds his more mixed private reputation.

He lost the trust of his board. We should take that seriously.

This is the most nuanced and careful interpretation of this weekend’s high-level corporate drama I have read so far.


The board of directors of OpenAI, Inc, the 501(c)(3) that acts as the overall governing body for all OpenAI activities, today announced that Sam Altman will depart as CEO and leave the board of directors. Mira Murati, the company’s chief technology officer, will serve as interim CEO, effective immediately.


Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.

This news comes less than two weeks after Altman spoke at OpenAI’s first-ever developer presentation, and even this announcement seems unusually brusque for a corporate press release.

Update: Alex Heath and Nilay Patel, of the Verge, are reporting one day later that the board wants Altman back. If Kara Swisher’s reporting from last night is true, it seems like this could be a hard sell. I appreciate the board’s apparently cautious approach to product development, but firing the CEO without investor knowledge and then trying to undo that decision a day later is not a good look.

Update: It is now Monday morning, and Sam Altman and Greg Brockman — and colleagues — are reportedly joining Microsoft. If their ouster from OpenAI was due to an ideological split, their new employer makes sense: Microsoft is all-in on machine generated stuff regardless of quality and risk. Good luck to OpenAI, which is also now being run by Twitch’s former CEO, not Murati.