I am with Mike Rockwell in my confusion about the limitations of Lock Screen widgets in iOS. It seems plausible for there to be design intent, but it is hard to see why I should not be able to place a rectangular widget on the right-hand side and place an Obscura camera button to its left. Surely it should be possible to improve accessibility for left-handed users without restricting them from using certain types of widgets in certain ways.

Juli Clover, MacRumors:

Apple with iOS 17.1 and watchOS 10.1 introduced a new NameDrop feature that is designed to allow users to place Apple devices near one another to quickly exchange contact information. Sharing contact information is done with explicit user permission, but some news organizations and police departments have been spreading misinformation about how it functions.

I cannot imagine how someone could surreptitiously activate this feature, but I can see how someone might get confused if they only watched a demo. In Apple’s support video, it almost looks as though the recipient will see the contact card as soon as the two devices are touched, perhaps because of the animation. But that is not how the feature works. When two devices are brought into close proximity, each person first sees their own contact card; from there, they can choose whether to share it. Still, it is irresponsible for police and news organizations to imply that anything is revealed without explicit permission.

Recently, you may recall, Elon Musk amplified some antisemitic conspiracy theories on the social media platform he owns, of which he is also, notably, the most popular user, and that caused widespread outrage. Which conspiracy theory? Which backlash? Well, it depends on how far back you want to look — but you need not rewind the clock very much at all.

David Gilbert, Vice:

Musk was repeating an oft-repeated and widely debunked claim that [George] Soros is attempting to help facilitate the replacement of Western civilization with immigrant populations, a conspiracy known as the Great Replacement Theory.


Musk also responded to tweets spreading other Soros conspiracy theories, including false claims that Soros, a Holocaust survivor, helped round up Jews for the Nazis, and claims that Soros is somehow linked to the Rothschilds, an entirely separate antisemitic conspiracy theory about Jewish bankers which the Soros’ conspiracies have largely replaced.

This was from six months ago. I think that qualifies as “recent”. If I were a major advertiser, I would still be hesitant to write cheques today to promote my products in the vicinity of posts like these and others far, far worse.

So that is May; in June, Musk decided to reply to an explicitly antisemitic tweet — an action which, due to Twitter’s design, would have pushed both the reply and the context of the original tweet into some number of users’ feeds.

Which brings us to September.

Judd Legum and Tesnim Zekeria, Popular Information:

Musk quickly lost interest in banning the ADL and began discussing suing the organization. In a series of posts, Musk said the ADL “has been trying to kill this platform by falsely accusing it & me of being anti-Semitic” and “almost succeeded.” He claimed that the ADL was “responsible for most of our revenue loss” and said he was considering suing them for $4 billion. In a subsequent post, he upped the figure to $22 billion.

“To clear our platform’s name on the matter of anti-Semitism, it looks like we have no choice but to file a defamation lawsuit against the Anti-Defamation League … oh the irony!,” Musk said.

The ADL, however, never accused Musk or X of being anti-Semitic. The group reported, correctly, that X was hosting anti-Semitic content and Musk had rolled back efforts to combat hate speech. And the ADL, exercising its First Amendment rights, encouraged advertisers to spend their money elsewhere unless and until Musk changed course. The notion that the ADL, a Jewish group, has the power to force corporations to bend to its will is rooted in anti-Semitic tropes about Jewish power over the business world.

Perhaps you feel like being charitable to Musk, for some reason, and would like to assume that he does not understand the tropes and innuendo with which he has engaged. That seems overly kind to me, and I am impressed you are more willing than I to give him the benefit of the doubt. But it sure seems like Musk took the condemnation of his tweets seriously, as he hosted Benjamin Netanyahu, the prime minister of Israel, in San Francisco in an attempt to smooth things over. How did that go?

Well, on November 15, Musk doubled down.

Lora Kolodny, CNBC:

Musk, who has never reserved his social media posts for business matters alone, drew attention to a tweet that said Jewish people “have been pushing the exact kind of dialectical hatred against whites that they claim to want people to stop using against them.”

Musk replied to that tweet in emphatic agreement, “You have said the actual truth.”

That, and several other things, is a likely explanation for why major advertisers decided to pause or stop spending on the platform. On Friday, Ryan Mac and Kate Conger of the New York Times reported that Twitter may miss up to $75 million in ad revenue this year as a result of these withdrawals; Twitter disputes that number. Some companies have also stopped posting.

Clearly, this is all getting out of hand for Musk. But his big dumb posting fingers have gotten him into trouble before, and he knows just what to do: an apology tour.

Jenna Moon, Semafor:

Elon Musk toured the site of the Oct. 7 massacre by Hamas in southern Israel on Monday, as the billionaire made a wartime visit to the nation amid allegations of antisemitism.

How long are the remaining advertisers on Musk’s platform going to keep propping it up? How many times do they need to see that he is openly broadcasting agreement with disturbing and deeply bigoted views? I selected just the stories with an antisemitic component, and only those from this year; Musk routinely dips his fingers into other extremist views in a way that can most kindly be compared to a crappy edgelord.

I will leave you with the story of what happened when Henry Ford bought the Dearborn Independent.

For the seventh year, the Pudding is holding its Pudding Cup for cool non-commercial web projects. Three winners get $1,500 each, and the submission deadline is this coming Thursday. If you made something on the web this year that you thought was even a little bit neat, you should enter it.

Kate Knibbs, of Wired, profiled Matthew Butterick — who you probably know from his “Practical Typography” online book — about a series of lawsuits against major players in generative A.I.:

Yet when generative AI took off, he [Matthew Butterick] dusted off a long-dormant law degree specifically to fight this battle. He has now teamed up with Saveri as co-counsel on four separate cases, starting with a lawsuit filed in November 2022 against GitHub, claiming that the Microsoft subsidiary’s AI coding tool, Copilot, violates open-source licensing agreements. Now, the pair represent an array of programmers, artists, and writers, including comedian Sarah Silverman, who allege that generative AI companies are infringing upon their rights by training on their work without their consent.


But, again, fair use is a nebulous concept. “Early on, we heard from opponents that the Authors Guild v. Google case would be determinative,” Butterick says. If the courts said Google could do it, why couldn’t they scrape millions of books too? He’s unconvinced. “The point of the Google Books project was to point you to the original books, right? ‘Here’s where you can find this book in a library.’ Generative AI doesn’t do any of that. It doesn’t point you to the original work. The opposite — it competes with that work.”

I am not a lawyer and so my opinion on this holds no weight. Even as a layperson, though, I am conflicted in how I feel about these massive and well-funded businesses scraping the sum total of human creativity and feeding it to machines in an attempt to replicate it.

In general, I am frustrated by the state of intellectual property law, which too often benefits the richest and most powerful entities instead of individuals. We end up in ridiculous situations where works do not enter the public domain for many decades after the creator’s death; in the United States, public domain status can be avoided for around a century. But everything is a remix anyway, and we would not have the art of today without reinterpretation, sampling, referencing, or outright copying. Need an example? I copied the copyrighted text above from the Wired article because I wanted to comment on and build upon it.

Generative “A.I.” tools are, in some ways, an extension of this tradition, except they also have significant corporate weight behind them. There is an obvious power imbalance when a large company copies an artist — either directly or indirectly — compared to the inverse. Apple’s website, for example, is routinely reinterpreted by other businesses — Pixelmator and DJI come to mind — as well as plenty of individuals, and I see no reason to be upset about that. It is very different when a big company like Zara rips off artists’ work. What some of these generative products enable feels to me like the Zara example.

If generative A.I. is deemed to be fair dealing or fair use without a need to compensate individuals or even ask for permission, a small amount of power can be restored to individuals by allowing them to opt out. Search engines already do this: they provide a mechanism to signal you do not want your website to be indexed. This should be true in all cases; it should not be a requirement that you provide training data to a large language model in order to use a social media application, for example.
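The opt-out signal search engines already honour is the Robots Exclusion Protocol — a plain-text robots.txt file at the root of a site — and OpenAI has published a “GPTBot” token so sites can block its crawler the same way. As a minimal sketch of how such a rule behaves (the site URL and rules here are illustrative), Python’s standard urllib.robotparser can evaluate one:

```python
from urllib import robotparser

# A robots.txt that opts the whole site out of one crawler,
# while leaving every other crawler unaffected.
rules = [
    "User-agent: GPTBot",   # OpenAI's published crawler token
    "Disallow: /",          # block everything for that agent
]

rp = robotparser.RobotFileParser()
rp.parse(rules)

# The named crawler is refused; anyone else is allowed by default.
print(rp.can_fetch("GPTBot", "https://example.com/essay"))        # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/essay"))  # True
```

The asymmetry is the point: a site owner writes two lines once, and every well-behaved crawler is expected to check them before fetching anything.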

For further reading, I thought Innovation, Science and Economic Development Canada produced a good discussion paper regarding these issues. The government is asking for public feedback until December 4.

You are probably sick of hearing about OpenAI palace intrigue; I am, too, but I have a reputation to correct. I linked favourably to something published at Fast Company recently, and I must repent. I have let you down and I have let myself down and, happily, I can fix that.

On Monday, which only just happened earlier this week, Fast Company’s Mark Sullivan asked the question “Is an AGI breakthrough the cause of the OpenAI drama?”; here is the dek, with emphasis added:

Some have theorized that Sam Altman and the OpenAI board fell out over differences on how to safeguard an AI capable of performing a wide variety of tasks better than humans.

Who are these “some”, you might be asking? Well, here is how the second paragraph begins:

One popular theory on X posits that there’s an unseen factor hanging in the background, animating the players in this ongoing drama: the possibility that OpenAI researchers have progressed further than anyone knew toward artificial general intelligence (AGI) […]

Yes, some random people are tweeting and that is worthy of a Fast Company story. And, yes, that is the only source in this story — there is not even a link to the speculative tweets.

While stories based on tweeted guesswork are never redeemable, the overall thrust of Sullivan’s story appeared to be confirmed yesterday in a paywalled Information report and by Anna Tong, Jeffrey Dastin, and Krystal Hu of Reuters:

Ahead of OpenAI CEO Sam Altman’s four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.


The sources cited the letter as one factor among a longer list of grievances by the board leading to Altman’s firing, among which were concerns over commercializing advances before understanding the consequences. Reuters was unable to review a copy of the letter. The staff who wrote the letter did not respond to requests for comment.

But Alex Heath, of the Verge, reported exactly the opposite:

Separately, a person familiar with the matter told The Verge that the board never received a letter about such a breakthrough and that the company’s research progress didn’t play a role in Altman’s sudden firing.

Heath’s counterclaim relies on a single source compared to Reuters’ two — I am not sure how many the Information has — but note that none of them require that you believe OpenAI has actually made a breakthrough in artificial general intelligence. This is entirely about whether the board received a letter making that as-yet unproven claim and, if that letter was received, whether it played a role in this week of drama.

Regardless, any story based on random internet posts should be canned by an editor before anyone has a chance to publish it. Even if OpenAI really has made such a breakthrough and there really was a letter that really caused concern for the company’s board, that Sullivan article is still bad — and Fast Company should not have published it.

Update: In a lovely coincidence, I used the same title for this post as Gary Marcus did for an excellent exploration of how seriously we ought to take this news. (Via Charles Arthur.)

Jessica Lyons Hardcastle, the Register:

FBI director Christopher Wray made yet another impassioned plea to US lawmakers to kill a proposed warrant requirement for so-called “US person queries” of data collected via the Feds’ favorite snooping tool, FISA Section 702.


“A warrant requirement would amount to a de facto ban, because query applications either would not meet the legal standard to win court approval; or because, when the standard could be met, it would be so only after the expenditure of scarce resources, the submission and review of a lengthy legal filing, and the passage of significant time — which, in the world of rapidly evolving threats, the government often does not have,” Wray said.

Wray just came out and said it: many of the highly invasive searches the FBI performs could not withstand the basic legal scrutiny required for a warrant. The lesson his organization learned from this is not that it should be more careful and targeted, but that it needs a way to make illegal surveillance legal on paper — and that is the power Section 702 grants.

Ina Fried, Axios:

OpenAI said late Tuesday that it had reached a deal in principle for Sam Altman to return as CEO, with a new board chaired by former Salesforce co-CEO Bret Taylor.

The last five days of news sound like the setup to a crappy rendition of the “Aristocrats” joke, and maybe we all need to rethink some things.

CEO of Eight Sleep Matteo Franceschetti:

Breaking news: The OpenAI drama is real.

We checked our data and last night, SF saw a spike in low-quality sleep. There was a 27% increase in people getting under 5 hours of sleep. We need to fix this.

Source: @eightsleep data

Eight Sleep’s “Pod” mattress topper costs at least $1,795 in the United States, plus a $15 per month subscription which is required for the first year of use, so this is what you might call a limited sample.

Jason Koebler, 404 Media:

Franceschetti’s tweet reminds us that The Pod is essentially a mattress with both a privacy policy and a terms of service, and that the data Eight Sleep collects about its users can and is used to further its business goals. It’s also a reminder that many apps, smart devices, and apps for smart devices collect a huge amount of user data that they can then directly monetize or deploy for marketing or Twitter virality purposes whenever they feel like it.

Everyone deserves privacy, including people who buy $2,000 mattress toppers. But if I ever get to a point where I am signing off on a legal contract for my bed, please kick me in the head.

I would like to think I try to keep up with Canadian news, but the progress of Bill C-244 slipped my attention — and it seems like I am not the only one. Introduced last February, the bill passed unanimously in October, a legislative milestone celebrated by the likes of Collision Repair magazine, a Canadian automaker trade group, and the law firm Norton Rose Fulbright.

The progress of Bill C-244 has been so poorly reported that one of the few stories I found reads like it was written by a racist tractor. I found virtually no mainstream coverage of this important bill, so here is my attempt to rectify my own oversight.

In October 2022, Michael Geist explained why the bill was so important in Canada, in particular:

One of the biggest differences between Canada and the U.S. is that the U.S. conducts a review every three years to determine whether new exceptions to a general prohibition on circumventing digital locks are needed. This has led to the adoption of several exceptions to [Technological Protection Measures] for innovative activities such as automotive security research, repairs and maintenance, archiving and preserving video games, and for remixing from DVDs and Blu-Ray sources. Canada has no such system as the government instead provided assurances that it could address new exceptions through a regulation-making power. In the decade since the law has been in effect, successive Canadian governments have never done so. This is particularly problematic where the rules restrict basic property rights by limiting the ability to repair products or ensure full interoperability between systems.

As Geist explains, Canadians are legally prohibited from repairing anything they own if such an operation would necessitate bypassing any digital “lock”. This bill would “allow the circumvention of a technological protection measure if the circumvention is solely for the purpose of the diagnosis, maintenance or repair”, and its passage would fulfill a 2021 Liberal Party of Canada campaign pledge.

Earlier this year, a peculiar privilege was added to the bill, as spotted by the Institute for Research on Public Policy:

But a recent amendment to the bill risks leaving it without much reason for optimism. In a meeting of the standing committee on industry and technology in late March, members agreed to amend Bill C-244 to create a “carve-out” for devices with embedded sound recordings. In other words, it’s an exemption to Bill C-244’s exception. There is very little information on the reasons for this amendment and almost no discussion took place before it was adopted.

That text was retained in the version which was passed in October. In the “What On Earth” newsletter, Rachel Sanders of CBC News covered the bill earlier this month:

This change is part of what’s known as “right to repair” legislation, a broad spectrum of laws aimed at making goods more durable and fixable. In March, the federal government announced as part of Budget 2023 that it would work to implement a right to repair framework in 2024.

In an email to CBC, the Department of Innovation, Science and Economic Development said the government is doing pre-consultation work and that the right to repair in Canada could consist of different measures, including at the provincial and territorial levels.

To wit, Sanders notes Bill 29 was passed in Quebec last month, pushing the province even further ahead in consumer protections compared to the rest of Canada.

Chris Hannah:

We’ve gone from having small local communities to what can feel like, at times, having the entire world in your living room.

It’s probably why some people just make their online presence completely private. Because then they can control the scope of their interaction, and avoid an abundance of negativity in the case where something was picked up by an algorithm and shown to a huge number of people.

This post has been rattling around in my head since it was published about a week ago, but I was reminded of it again this weekend when I saw Alec Watson’s frustration with the replies he sees on Mastodon. It seems audience and scale can change that dramatically. Hannah suggests Mastodon has a higher quality of interaction and, in my own use, I largely agree; a reply to a post I make is most often useful and interesting. But Watson has over thirty thousand followers and I can see how that could quickly become a problem.

A couple of years ago, Chris Hayes wrote for the New Yorker about how everyone is a little famous on the internet. It is not the first article to have made such an observation, but it is the one that has stuck with me since I linked to it. It still surprises me that social networks overwhelmingly default to public visibility and, most often, changing that affects everything in your account.

Eric Newcomer:

My understanding is that some members of the board genuinely felt Altman was dishonest and unreliable in his communications with them, sources tell me. Some members of the board believe that they couldn’t oversee the company because they couldn’t believe what Altman was saying. And yet, the existence of a nonprofit board was a key justification for OpenAI’s supposed trustworthiness.


Altman had been given a lot of power, the cloak of a nonprofit, and a glowing public profile that exceeds his more mixed private reputation.

He lost the trust of his board. We should take that seriously.

This is the most nuanced and careful interpretation of this weekend’s high-level corporate drama I have read so far.


The board of directors of OpenAI, Inc, the 501(c)(3) that acts as the overall governing body for all OpenAI activities, today announced that Sam Altman will depart as CEO and leave the board of directors. Mira Murati, the company’s chief technology officer, will serve as interim CEO, effective immediately.


Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.

This news comes less than two weeks after Altman spoke at OpenAI’s first-ever developer presentation, and even this announcement seems unusually brusque for a corporate press release.

Update: Alex Heath and Nilay Patel, of the Verge, are reporting one day later that the board wants Altman back. If Kara Swisher’s reporting from last night is true, it seems like this could be a hard sell. I appreciate the board’s apparently cautious approach to product development, but firing the CEO without investor knowledge and then trying to undo that decision a day later is not a good look.

Update: It is now Monday morning, and Sam Altman and Greg Brockman — and colleagues — are reportedly joining Microsoft. If their ouster from OpenAI was due to an ideological split, their new employer makes sense: Microsoft is all-in on machine generated stuff regardless of quality and risk. Good luck to OpenAI, which is also now being run by Twitch’s former CEO, not Murati.

Ina Fried, of Axios, with the scoop:

Apple is pausing all advertising on X, the Elon Musk-owned social network, sources tell Axios.

We have all had the experience where we have paused something and forgotten to resume it. These things can happen — and, in this case, this is something that should happen.

Update: Media Matters is keeping track of businesses that are pausing or ending their ad spend. Only four names on there right now but a couple of them look pretty big to me.

Professional editor John Buck spoke to former members of Apple’s Advanced Technology Group to gain insight into the inner workings of this secretive team. The result is a compelling 400-page book called “Inventing the Future, Bit by Bit”.

“You got everything exactly right which is rare since few writers have the technical chops or interest in putting in the effort for that.”

— Steve Edelman, Founder, SuperMac

The book is filled with extraordinary detail and first-hand recollections, some shared for the very first time. It covers A/UX, HyperCard, LisaDraw, QuickScan, QuickTime, TrueType, and QuickDraw 24/32 as well as Projects Oreo, Warhol, Bass, Carnac, Spider, YACCintosh, Touchstone, Road Pizza, PDM, Milwaukee, 8*24GC, and Möbius. There’s also insight into the early development of Sarah, Jonathan, Lisa, and Macintosh computers.

ATG scientist Jean-Charles Mourey:

“We had a purpose and felt that we could change the world and have a positive impact on millions of people around the world. My friends thought I was in a cult.”

And the book is getting positive reviews from those who were there.

  • “You have a great skill in ‘telling the story’ with multiple players and simultaneous events.”

    — George Cossey, former Senior Programmer, Apple.

  • “So many stories I’ll never forget and stories I never knew!”

    — Mike Potel, former Head of Software Engineering, Apple.

  • “This is a ton of work. It’s great to have all the stories told!”

    — Steve Perlman, former Principal Scientist, Apple.

For readers of this blog, use coupon code ENVY for $5 off. The price includes shipping worldwide. An epub version is due in July 2024.

On October 27, 2022, Elon Musk acquired Twitter. Last month, CEO Linda Yaccarino acknowledged her employer’s contributions in a blog post titled “One Year in, the Future of X Is Bright”:

October 27 marks the one-year anniversary of this platform under new ownership and management.

While the headline is optimistic, this opening sentence has the tone and structure of recognizing the one-year anniversary of a natural disaster.

I am incredibly proud of the work our team has been doing to accelerate the future of X.

So let me share with you where we stand today:

Yaccarino remains surprised by how often a team of people can rush to build features announced on a whim. Also, she insists on calling this platform “X” instead of its real full name “X, formerly known as Twitter” — or “Twitter”, for short.

Here are some key points from this twenty-three item list:

Freedom of expression. X is now a place where everyone can freely express themselves, so long as they do so within the bounds of the law. We believe open and respectful discourse is the single best way for humanity to thrive.

Yaccarino is proud that Twitter extends its permissiveness to the limit of local laws, which means it would rather censor users in Turkey than withdraw its services in protest. Also, it is only too happy to censor, worldwide, posts critical of Indian prime minister Narendra Modi. Also, its owner threatens lawsuits in the U.S. against legal speech. That is the kind of free expression Yaccarino is proud of for Twitter.

Safety. Safety on X remains a critical priority – our work is not done, but we are making real progress. Our trust and safety team is working around the clock to combat bad actors and consistently enforce our rules in areas such as hate speech, platform manipulation, child safety, impersonation, civic integrity and more. We also remain committed to privacy and data protection.

Unless hate speech, civic integrity, or privacy violations are committed by Twitter’s owner.

Partnerships. Our team also has ongoing dialogue with external groups to keep up to date with potential risks and support the safety of the platform – partners like the Technology Coalition, Anti-Defamation League, American Jewish Committee and Global Internet Forum to Counter Terrorism.

Yaccarino’s reference to a “dialogue” with the Anti-Defamation League includes legal threats.

User base. Over half a billion of the world’s most informed and influential people come to X every month. That’s inclusive of our efforts to aggressively remove spam and inauthentic accounts – a step we believe is critical to improve the X user experience and grow the platform. We continue to see sign-ups average around 1.5 million per day.

Yaccarino is proud to consider Cat Turd informed, influential, and a person.

Brand safety and suitability. X advertisers now have a level of control that largely did not exist one year ago. Thanks to new products like Adjacency Controls, Sensitivity Settings and third party measurement partnerships with industry leaders Integral Ad Science and DoubleVerify, the average brand safety score on X is now >99%, and we are now seeing brand suitability scores at >97% when these controls are applied.

Twitter’s brand unsuitability is about three percent, and its safety controls have not prevented ads from Apple and Xfinity from appearing in explicitly pro-Nazi feeds. Ads from the University of Calgary are nestled between white supremacist tweets on a verified account which could plausibly be participating in revenue sharing.

Yaccarino, formerly the chair of global advertising for NBCUniversal, surely understands that it looks bad when her boss promotes antisemitic conspiracy theories, and can probably sympathize with IBM’s decision to pull a million dollars in advertising from the platform because it turns out the apparently small amount of brand risk is really bad. Yaccarino hopes Apple does not also pull its hefty ad spend or use its considerable platform to denounce her boss’ increasingly vocal endorsements of vile, hateful, and conspiratorial worldviews.

From Twitter to X. We transformed Twitter into X, the everything app, where everyone is increasingly connected to everything they care about. This move enabled us to evolve past a legacy mindset and reimagine how users around the world consume, interact, watch and, soon, transact – all in one seamless interface. We have become the modern global town square.

Yaccarino, who does not keep the company’s app on the first home screen of her iPhone, also does not open the App Store.

And if we can achieve all of this in just 12 months, just imagine the scope of our ambition for next year – from expanded search to newswires to payments, we are just getting started.

This is a threat.

One year in, the future of X is bright.

Not as bright as that fucking sign, but just as dangerous if you look at it for too long.

Lance Ulanoff, TechRadar:

RCS or Rich Communication Services, a communications standard developed by the GSM Association and adopted by much of the Android ecosystem, is designed to universally elevate messaging communication across mobile devices. Even though Apple has been working with the group, it has until this moment steadfastly refused to add RCS support to iPhones. Now, however, Apple is changing its tune.

“Later next year, we will be adding support for RCS Universal Profile, the standard as currently published by the GSM Association. We believe the RCS Universal Profile will offer a better interoperability experience when compared to SMS or MMS. This will work alongside iMessage, which will continue to be the best and most secure messaging experience for Apple users,” said an Apple spokesperson.

Just last year, Tim Cook demurred on a question about support for the standard. For what it is worth, I am expecting an updated SMS-like experience, but I will be pleasantly surprised if it is more full featured. As Ulanoff notes, RCS does not itself support end-to-end encryption. The latest spec, released in 2019, does not even mention end-to-end encryption, nor does it prohibit text message bubbles from having a green background.

I am not sure when iOS 17.2 will be released, but it comes with a great new capability for the Action Button: it can trigger translation. It uses the most recent language selection from the Translate app; there is no way to choose the target language from within the widget, so it is effectively limited to always translating into one other language. If you are travelling, this will likely be an excellent use of that button.

Ted Johnson, Deadline:

Members of a special House committee fired off a letter to Apple, questioning whether the decision to end The Problem with Jon Stewart was due to concerns over the company’s relationship with China.


The committee is asking for a briefing by the company by Dec. 15, and they also plan to speak to Stewart’s representatives.

I also found Apple’s apparent distaste for then-upcoming topics on Stewart’s show to be concerning, if those reports are accurate. It is hard for me to believe there would not have been a discussion about control over topics of investigation when this show was pitched. Perhaps Apple executives became more sensitive — it is still unclear.

Even if reports are true that the show ended over whether Stewart could investigate topics about China, it seems disproportionate for U.S. federal government officials to be looking into it. Hollywood productions trying to appeal to the Chinese market is a well-documented phenomenon. But it is not really Chinese influence on movies so much as it is that major studios want to make money, so they want to release their movies in the world’s largest market for them, which is now China. But they are under no obligation to do so.

What these lawmakers appear to be mad about is the free market. Apple wants to make a lot of money selling devices it mostly makes in China, and it also wants to make a lot of recurring revenue by selling a streaming video subscription. It does not take a Congressional investigation; any idiot could see there would be a collision. But investigating incidents like this is apparently Rep. Gallagher’s role.


In an interview with Deadline, Gallagher said one of the problems the committee sees is “self-censorship on the front end.”

“What choices are they already making, knowing that they don’t want to offend China, when they decide to embark on a project? Ask yourself: When was the last time a movie featured a Chinese villain? I can’t think of one. Maybe that’s evidence that self-censorship is happening.”

Abram Dylan, writing for the Awl in 2012:

When American TV networks cut scenes it’s “edited,” and when China does it it’s “censored.” When Hollywood adds Hispanic characters and shies away from Mexican stereotypes, it’s catering to a growing demographic. When it changes the artistic integrity of “Transformers 2,” “Battleship,” and — upcoming Chinese-financed, future Criterion Collection standout — “Iron Man 3,” it’s “gripped” by a “pressure system.”

Transformers, Iron Man and Battleship are all three franchises that gave the U.S. military script oversight in exchange for cooperation. Was it a “pressure system” that “gripped” director Peter Berg when he cut a character from “Battleship” because the Navy thought a sailor looked too fat?

Note again the date; this is from eleven years ago, and we are still having the same complaints. As Dylan writes, there remain plenty of mainstream U.S. films with Chinese antagonists, if that is something you are concerned about; there are some even more recent, too.

The problem with the free market is that it rarely rewards artistic integrity, and that maximizing revenue often means trying to appease the widest possible audience. If a studio wants to reach a market of a billion people where personal expression is not held to the same standards as in the U.S., it may need to make some changes. That is especially true for Apple, as it risks losing access to its manufacturing engine. The problem these lawmakers have is with capitalism, not Apple specifically, but that is not really something they are able to admit.

Eyal Press, the New Yorker:

In June, an appellate court ordered the N.Y.P.D. to turn over detailed information about a facial-recognition search that had led a Queens resident named Francisco Arteaga to be charged with robbing a store. The court requested both the source code of the software used and information about its algorithm. Because the technology was “novel and untested,” the court held, denying defendants access to such information risked violating the Brady rule, which requires prosecutors to disclose all potentially exculpatory evidence to suspects facing criminal charges. Among the things a defendant might want to know is whether the photograph that had been used in a search leading to his arrest had been digitally altered. DataWorks Plus notes on its Web site that probe images fed into its software “can be edited using pose correction, light normalization, rotation, cropping.” Some systems enable the police to combine two photographs; others include 3-D-imaging tools for reconstructing features that are difficult to make out.

This example is exactly why artificial intelligence needs regulation. There are many paragraphs in this piece which contain alarming details about overconfidence in facial recognition systems, but proponents of letting things play out under current laws might chalk that up to human fallibility. Yes, software might present a too-rosy impression of its capabilities, one might argue, but it is ultimately the operator’s responsibility to cross-check things before executing an arrest and putting an innocent person in jail. After all, there are similar problems with lots of forensic tools.

Setting aside how much incentive there is for makers of facial recognition software to be overconfident in their products, and how much leeway law enforcement seems to give them — agencies kept signing contracts with Clearview, for example, even after stories of false identifications and arrests based on its technology — one could at least believe searches use photographs. But that is not always the case. DataWorks Plus markets tools which allow searches using synthesized faces based on real images, as Press reports — but you will not find that on its website. When I went looking, DataWorks Plus seemed to have pulled the page where it appeared; happily, the Internet Archive captured it. You can see in its examples how it fills in the entire right-hand side of someone’s face in a “pose correction” feature.

It is plausible to defend this as just a starting point for an investigation, and a way to generate leads. If it does not pan out, no harm, right? But it does seem to this layperson like a computer making its best guess about someone’s facial features is not an ethical way of building a case. This is especially true when we do not know how systems like these work, and it does not inspire confidence that there are no specific standards with which “A.I.” tools must comply.