Link Log

David Pierce, of the Verge, obtained a statement from Nadine Haija at Apple acknowledging it was responsible for shutting down Beeper Mini’s reverse-engineered iMessage app:

At Apple, we build our products and services with industry-leading privacy and security technologies designed to give users control of their data and keep personal information safe. We took steps to protect our users by blocking techniques that exploit fake credentials in order to gain access to iMessage. These techniques posed significant risks to user security and privacy, including the potential for metadata exposure and enabling unwanted messages, spam, and phishing attacks. We will continue to make updates in the future to protect our users.

Pierce:

This statement suggests a few things. First, that Apple did in fact shut down Beeper Mini, which uses a custom-built service to connect to iMessage through Apple’s own push notification service — all iMessage messages travel over this protocol, which Beeper effectively intercepts and delivers to your device. To do so, Beeper had to convince Apple’s servers that it was pinging the notification protocols from a genuine Apple device, when it obviously wasn’t. (These are the “fake credentials” Apple is talking about. Quinn Nelson at Snazzy Labs made a good video about how it all works.)

I am not surprised Apple leaned on privacy and security. Though I framed it as “shakier” than a business defence — since Beeper Mini apparently used the same standards as Apple’s own iMessage client — I did write that it would likely “treat this reverse engineering exercise as a security problem”, which is exactly what happened. Beeper Mini was a high-profile vulnerability proof-of-concept disguised as a neat new app.

Calling Nelson’s embargoed preview a “good video about how it all works” is a curious choice of words. I do not disagree that Nelson explained the mechanism successfully, but there is a whole chapter in it named “Apple isn’t likely to patch this ‘exploit’”. Nelson:

Needless to say, this doesn’t appear to be some easy thing that Apple can just turn off. It will require a complete redesign of their entire authentication and delivery strategy — not just for iMessage, but for Apple ID account access as a whole.

Maybe Apple really did redevelop its entire iMessage and Apple ID architecture in the three days between Beeper Mini’s public launch and when it was shut down — or, more charitably, between when the pypush demo was published in early August and now. But I do not think so; I think this was a relatively straightforward change. It seems like Nelson’s choice of language reflected Beeper’s overly confident explanation.

Pierce:

Since Apple cut off Beeper Mini, Beeper has been working feverishly to get it up and running again. On Saturday, the company said iMessage was working again in the original Beeper Cloud app, but Beeper Mini was still not functioning. Founder Eric Migicovsky said on Friday that he simply didn’t understand why Apple would block his app: “if Apple truly cares about the privacy and security of their own iPhone users, why would they stop a service that enables their own users to now send encrypted messages to Android users, rather than using unsecure SMS?”

Migicovsky says now that his stance hasn’t changed, even after hearing Apple’s statement. He says he’d be happy to share Beeper’s code with Apple for a security review, so that it could be sure of Beeper’s security practices. Then he stops himself. “But I reject that entire premise! Because the position we’re starting from is that iPhone users can’t talk to Android users except through unencrypted messages.”

I am not falling for Migicovsky’s play-dumb act here and, I am certain, neither are you. Of course Apple does not want some random company piggybacking on its iMessage infrastructure with an unofficial client. What part of Apple’s history since about 1997 would indicate that it would look at a reverse-engineered client for a rival operating system and say geez, thanks for helping out?

There are plenty of end-to-end encrypted messaging apps available for iOS and Android, like Signal and WhatsApp, so the premise that “iPhone users can’t talk to Android users except through unencrypted messages” is also complete nonsense. This is basically a U.S. problem, and the most common reasons cited for cross-platform compatibility — media quality, group chats, and privacy — are resolved for everyone if we choose a different app. I think it would be great if iMessage were available universally as it has been stable and reliable for me; I would also like some way for any messaging client to securely communicate with others.1 The reality is that iMessage is an Apple proprietary protocol and that is unlikely to change. Messaging is one area where there is no shortage of choice for users.


  1. Perhaps through a vendor-provided plugin system. Admittedly, even if there were some kind of universal messaging client on Android with a Facebook-created WhatsApp plugin and a Telegram-made Telegram plugin, would you bet on Apple building iMessage compatibility? I would not. ↥︎

Surely by now you have seen Google’s Gemini demo. The company opens the video with this description:

We’ve been testing the capabilities of Gemini, our new multimodal AI model.

We’ve been capturing footage to test it on a wide range of challenges, showing it a series of images, and asking it to reason about what it sees.

What follows is a series of split-screen demos with a video on the left, Gemini’s seemingly live interpretation on the right, and a voiceover conversation between — I assume — a Google employee and a robotic voice reading the Gemini interpretation.

Google acknowledges in the video description that “latency has been reduced and Gemini outputs have been shortened for brevity”. Other than that, you might expect the video to show a real experience, albeit sped up; that is how I interpreted it.

Parmy Olson, Bloomberg:

In reality, the demo also wasn’t carried out in real time or in voice. When asked about the video by Bloomberg Opinion, a Google spokesperson said it was made by “using still image frames from the footage, and prompting via text,” and they pointed to a site showing how others could interact with Gemini with photos of their hands, or of drawings or other objects. In other words, the voice in the demo was reading out human-made prompts they’d made to Gemini, and showing them still images. That’s quite different from what Google seemed to be suggesting: that a person could have a smooth voice conversation with Gemini as it watched and responded in real time to the world around it.

If you read the disclaimer at the beginning of the demo in its most literal sense, Google did not lie, but that does not mean it was fully honest. I do not get the need for trickery. The real story would undoubtedly have come to light, if not from an unnamed Google spokesperson, and it undermines how impressive this demo is. And it is remarkable — so why not make the true version part of the story? I do not think I would have found it any less amazing if I had first seen a real-time demonstration of Gemini processing still frames with its actual output, and then seen this simplified version.

Instead, I feel cheated.

Online privacy isn’t just something you should be hoping for — it’s something you should expect. You should ensure your browsing history stays private and is not harvested by ad networks.

By blocking ad trackers, Magic Lasso Adblock stops you being followed by ads around the web.

Magic Lasso Adblock privacy benefits

It’s a native Safari content blocker for your iPhone, iPad, and Mac that’s been designed from the ground up to protect your privacy.

Rely on Magic Lasso Adblock to:

  • Remove ad trackers, annoyances and background crypto-mining scripts

  • Browse common websites 2.0× faster

  • Double battery life during heavy web browsing

  • Lower data usage when on the go

So, join over 280,000 users and download Magic Lasso Adblock today.

My thanks to Magic Lasso Adblock for sponsoring Pixel Envy this week.

Todd Vaziri:

In this day and age, when there are filmmakers out there like James Cameron, Martin Scorsese, David Fincher, Michael Bay, Zack Snyder and others proudly showing off the digital effects work in their movies, considering them valuable partners in the filmmaking process (and earning billions of dollars at the box office and awards and prestigious accolades in the meantime), it’s absolutely bizarre that certain studios and filmmakers steadfastly maintain the idea that marketing a modern movie means highlighting physical production while outright lying about their use of digital visual effects — and indirectly and directly insulting an entire craft in the process.

Vaziri links to the first part of a series that will eventually be four videos by Jonas Ussing, called “‘No CGI’ is really just invisible CGI”.1 Coincidentally, Ussing uploaded the second part today, and it is excellent.


  1. If it feels familiar on this website, it is because I linked to it in the context of Apple’s October event, which it shot with iPhones. ↥︎

Dan Milmo, reporting for the Guardian in 2021:

The head of safety at Facebook and Instagram’s parent company, Meta, announced that the encryption process would take place in 2023. The company had previously said the change would happen in 2022 at the earliest.

Loredana Crisan, vice president of Messenger, yesterday:

Today I’m delighted to announce that we are rolling out default end-to-end encryption for personal messages and calls on Messenger and Facebook, and a suite of new features that let you further control your messaging experience. We take our responsibility to protect your messages seriously and we’re thrilled that after years of investment and testing, we’re able to launch a safer, more secure and private service.

This news comes days after Meta announced it would separate the previously intertwined chat features of Messenger and Instagram. The company did not say why, leading some to speculate it was for E.U. regulatory compliance reasons.

Instagram and Messenger already have optional end-to-end encryption. Notably, Meta specifically says the default will be coming to Facebook and Messenger; “Instagram” is not mentioned anywhere in this announcement. It is only if you look toward the bottom of the engineering blog post that Meta says “additional testing” for end-to-end encryption in Instagram messaging is planned for “the next year”. In a Wired story, Lily Hay Newman reports “it will take some time for the rollout of full default end-to-end encryption to reach all Messenger and Instagram chat users”.

Kim Zetter says on Twitter that Meta briefed journalists last week about this news — which was supposed to be revealed tomorrow — at approximately the same time Joan Donovan filed a complaint against Harvard. Donovan claims the school forced her out after she tried to make public documents leaked by Frances Haugen. The Chan Zuckerberg Initiative pledged $500 million to Harvard around the same time and, Donovan alleges, that in part led to her eventual dismissal.

Sean Hollister, the Verge:

On Friday, Judge Donato vowed to investigate Google for intentionally and systematically suppressing evidence, calling the company’s conduct “a frontal assault on the fair administration of justice.” We were there in the courtroom for his explanation.

“I am going to get to the bottom of who is responsible,” he said, adding he would pursue these issues “on my own, outside of this trial.”

The incidents of apparent evidence destruction — which have surfaced during both recent Google trials — are matched only by very smart executives playing dumb in court, as though everyone involved simply could not know any better. Quite the audacious plan. Everyone knows judges love to have their patience tested.

U.S. Senator Ron Wyden:

In the spring of 2022, my office received a tip that government agencies in foreign countries were demanding smartphone “push” notification records from Google and Apple. My staff have been investigating this tip for the past year, which included contacting Apple and Google. In response to that query, the companies told my staff that information about this practice is restricted from public release by the government.

Raphael Satter, Reuters:

In a statement, Apple said that Wyden’s letter gave them the opening they needed to share more details with the public about how governments monitored push notifications.

“In this case, the federal government prohibited us from sharing any information,” the company said in a statement. “Now that this method has become public we are updating our transparency reporting to detail these kinds of requests.”

[…]

Wyden’s letter cited a “tip” as the source of the information about the surveillance. His staff did not elaborate on the tip, but a source familiar with the matter confirmed that both foreign and U.S. government agencies have been asking Apple and Google for metadata related to push notifications to, for example, help tie anonymous users of messaging apps to specific Apple or Google accounts.

This is an entire category of stuff the U.S. government has apparently prohibited Apple and Google from disclosing, and it is a good reminder that their transparency reports exist at the behest of governments, with their limitations imposed. But, also, Apple specifically blames the “federal government” — I take that to mean the U.S. federal government. Why would it be able to prevent Apple from disclosing this category of law enforcement requests from other countries?

Joseph Cox of 404 Media reviewed one warrant which mentioned push notifications in the case of an Ohio researcher, questioning whether it “is boilerplate language that has been included in the search warrant application”. I poked around on RECAP and found a lot of filings which include the same language, including a warrant (PDF) issued to Life360 for, among other things, push notifications if they are related to the geographic location history of a specific device. Both the one I found and the one Cox cites were issued by U.S. authorities for U.S. subjects. But in another warrant (PDF), this one issued to Google, there is a difference: the subjects are based in Mexico and Vietnam.

That raises questions for me about whether push notifications, having to go through servers from Apple and Google, are a vector for the U.S. surveillance campaign on the rest of the world. It is possible to encrypt notifications on iOS and Android; my understanding is that iMessage and Signal both do so. But some metadata, as noted by Wyden, remains in clear text.

Lorenzo Franceschi-Bicchierai, TechCrunch:

On Friday, genetic testing company 23andMe announced that hackers accessed the personal data of 0.1% of customers, or about 14,000 individuals. The company also said that by accessing those accounts, hackers were also able to access “a significant number of files containing profile information about other users’ ancestry.” But 23andMe would not say how many “other users” were impacted by the breach that the company initially disclosed in early October.

As it turns out, there were a lot of “other users” who were victims of this data breach: 6.9 million affected individuals in total.

The announcement Friday was made in a financial disclosure, and the company updated an old blog post a day after this TechCrunch article was published. According to 23andMe, the information disclosed by the “DNA Relatives” feature will at minimum include a display name derived from one’s (presumably real) name, recent site activity, and “predicted” relationship.

Jason Koebler, 404 Media:

Every few years, I write an article about how it is generally not a good idea to voluntarily give your immutable genetic code to a for-profit company (or any other genetic database, for that matter), and how it is an even worse deal to pay money to do so. It is also not wise or ethical to gift a 23andMe Saliva Collection Kit to your loved ones for Christmas, their birthday, or any other reason.

Give your family and friends the gift of not subjecting their genetics to businesses with a data breach record of, as of writing and I cannot stress this enough, half their customer base.
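The arithmetic behind that “half” figure follows from 23andMe’s own disclosures: if 14,000 directly accessed accounts are 0.1% of customers, the customer base is roughly 14 million, and the 6.9 million people exposed through “DNA Relatives” on top of that work out to about half of it. A quick back-of-envelope check:

```python
# Back-of-envelope figures from 23andMe's own disclosure.
directly_hacked = 14_000                    # accounts accessed directly ("0.1% of customers")
total_customers = directly_hacked / 0.001   # so the customer base is about 14 million
relatives_exposed = 6_900_000               # exposed via the "DNA Relatives" feature

share = (directly_hacked + relatives_exposed) / total_customers
print(f"about {share:.0%} of all customers")  # about 49%
```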

Update: A very important postscript, via Brian Sutorius. Matthew Cortland:

So what measures has 23andMe announced to mitigate the tremendous harm their negligence has caused? If you guessed, “updating their Terms of Service to force customers – including everyone who has used 23andMe since their first product became available in the United States in 2007 – into binding arbitration” you’d be correct. 23andMe is updating their TOS to strip victims of the company’s negligence of the right to seek justice in a court of law, instead forcing those harmed by 23andMe’s conduct into binding arbitration. […]

Notification of the updated Terms of Service was sent to 23andMe users one day before it disclosed the results of its investigation. If you are a user, there are specific steps you need to follow this month to opt out of binding arbitration. Read Cortland’s post in full for more information.

Bruce Schneier, Slate:

Knowing that they are under constant surveillance changes how people behave. They conform. They self-censor, with the chilling effects that brings. Surveillance facilitates social control, and spying will only make this worse. Governments around the world already use mass surveillance; they will engage in mass spying as well.

Corporations will spy on people. Mass surveillance ushered in the era of personalized advertisements; mass spying will supercharge that industry. Information about what people are talking about, their moods, their secrets — it’s all catnip for marketers looking for an edge. The tech monopolies that are currently keeping us all under constant surveillance won’t be able to resist collecting and using all of that data.

And Schneier on his blog, a republished transcript of a September talk at Harvard:

In this talk, I am going to make several arguments. One, that there are two different kinds of trust—interpersonal trust and social trust—and that we regularly confuse them. Two, that the confusion will increase with artificial intelligence. We will make a fundamental category error. We will think of AIs as friends when they’re really just services. Three, that the corporations controlling AI systems will take advantage of our confusion to take advantage of us. They will not be trustworthy. And four, that it is the role of government to create trust in society. And therefore, it is their role to create an environment for trustworthy AI. And that means regulation. Not regulating AI, but regulating the organizations that control and use AI.

If you only have time for one of these, I recommend the latter. It is more expansive and thoughtful, and it makes me reconsider how regulatory framing ought to work for these technologies.

Both are great, however, and worth your time.

Sony PlayStation:

As of 31 December 2023, due to our content licensing arrangements with content providers, you will no longer be able to watch any of your previously purchased Discovery content and the content will be removed from your video library.

The list of shows which users “purchased” is extensive and, because this is digital media, varies by country. The above link is to the Canadian list; the U.S. list is even longer.

Michael Tsai has a good roundup of articles, with most noting this is not an isolated incident and sharing other — often entirely different — examples. It is obviously an intolerable practice. Yet it is increasingly standard for digital purchases to be licensed in a way where access can be changed or revoked without consent.

I spot-checked the PlayStation list and found many of these shows are not officially available in a hard copy format. Sure, nobody is entitled to own them at all, but if you want to ensure you retain access for whatever reason, you often have no legal option. “Okay, well, you know what that means: steal it”.

Beeper:

Now you can send and receive blue bubble texts from your phone number. As soon as you install Beeper Mini, your Android phone number will be blue instead of green when your iPhone friends text you. It’s easy to join iPhone-only group chats, since people can add your phone number instead of your email address. All chat features like typing status, read receipts, full resolution images/video, emoji reactions, voice messages, editing, un-sending, and more are supported.

Beeper is charging two dollars per month for access.

This is all made possible by the frankly incredible work of the pypush project. Primarily, its author is “JJTech”, a high school student who reverse-engineered the way iMessage works:

One of the most foundational components of iMessage is Apple Push Notification Service (APNs). You might have encountered this before, as it is the same service that is used by applications on the App Store to receive realtime notifications and updates, even while the app is closed.

However, what you probably didn’t know about APNs is that it is bidirectional. That’s right, APNs can be used to send push notifications as well as receive them. You can probably already tell where this is going, right?

This overview is pretty good; I think I understand what is going on here, even if the specifics are flying right over my head. This is extremely clever. Unlike the catastrophic launch of Nothing’s messaging client and all other predecessors, Beeper Mini is not proxying iMessages through Apple devices. It is sending and receiving iMessages as though it is an Apple device. Regardless of how concerned you may feel about privacy and security, you have to admit that is pretty impressive. It has somehow taken eleven years to fully reverse-engineer iMessage and build a user-friendly client — but it seems it has been done.

Journalists have understandably raised questions about how long this app will be tolerated by Apple. The people behind it — including “JJTech” — believe Apple could not turn it off for technical reasons, but it seems like Apple is prepared to discontinue services on older devices at least. The Verge’s Nilay Patel noted on Threads the P.R. risk of shutting it down, while Sarah Perez of TechCrunch points to current antitrust investigations and E.U. regulations.

I am not so sure any of this would be a deterrent for Apple. It could be more restrictive on what it would portray as privacy, security, and business grounds. The privacy and security excuses could feel shakier, as it does seem messages sent through Beeper Mini fit the iMessage protocol without additional risk exposure — pending third-party auditing, of course — but the business case is more solid. As noted earlier, Beeper is selling iMessage access, but Apple does not charge for the service. It bakes the cost into device sales. Beeper gets to profit from Apple’s free-to-users network.

I am not defending Apple’s revenue or its likely stance; I do not much care either way. For what it is worth, I do not think Beeper Mini will actually make much of an impact because iMessage interoperability concerns are localized to the United States and a handful of other countries. But I do think Apple is protective of its network and will treat this reverse engineering exercise as a security problem. If it wants to launch iMessage on Android, it will do so on its own terms.

Some table-setting: I rarely need to note any conflicts of interest in the things I publish here, but repeat site sponsor — most recently this week — Magic Lasso is an ad blocker, and its developer must navigate YouTube’s crackdown. To be clear, this post is not informed by that sponsorship, and I am mindful of separating the part of this site that makes me money from the reason people read anything I write in the first place. (Sorry, Magic Lasso.)

Anthony Ha, Engadget:

As noted in a blog post by the ad- and tracker-blocking company Ghostery, YouTube employs a wide variety of techniques to circumvent ad blockers, such as embedding an ad in the video itself (so the ad blocker can’t distinguish between the two), or serving ads from the same domain as the video, fooling filters that have been set up to block ads served from third-party domains.

[…]

Keeping pace with YouTube will likely become even more challenging next year, when Google’s Chrome browser adopts the Manifest V3 standard, which significantly limits what extensions are allowed to do. Modras said that under Manifest V3, whenever an ad blocker wants to update its blocklist — again, something they may need to do multiple times a day — it will have to release a full update and undergo a review “which can take anywhere between [a] few hours to even a few weeks.”

The transition to Manifest V3 has been a long time coming, which means much has been written about it, and I question the more absolutist claims that its eventual rollout will destroy ad blockers. In 2019, Catalin Cimpanu of ZDNet reported that Apple rolled out similar restrictions, pointing to shutdown notices from uBlock Origin and AdGuard Pro. Four years later, uBlock remains unavailable, but there is still a version of AdGuard for Safari. I would bet on there being differences, but ad blockers exist for Safari, which surely means the kinds of restrictions Google is working on are not a death knell for the industry.
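Much of the constraint Modras describes comes from Manifest V3’s declarative model: rather than inspecting each request in extension code, ad blockers declare static rule lists that the browser itself evaluates, so refreshing a large blocklist generally means shipping a new extension version through store review. A minimal `declarativeNetRequest` rule looks something like this (the domain is illustrative):

```json
{
  "id": 1,
  "priority": 1,
  "action": { "type": "block" },
  "condition": {
    "urlFilter": "||ads.example.com^",
    "resourceTypes": ["script", "image", "xmlhttprequest"]
  }
}
```

Manifest V3 does permit a limited number of dynamic rules that can be updated without a review, but they are capped in quantity, which is why full filter lists of the size ad blockers maintain have to ship as bundled static rulesets.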

Unsurprisingly, the motivations for this feel different when it is being done by Google instead of Apple because Google’s whole business model is based on advertising. When it changes extensions in Chrome, the world’s most popular web browser, in ways that make ad blocking more difficult, people are going to view that as a conflict of interest. YouTube also happens to be the place where most of the world watches video. That gives Google an extraordinary advantage: it runs the browser, it hosts the video, and it powers the ads.

Craig Mod:

YouTube premium is the greatest deal on the internet, and all the work to block YouTube ads leaves me confused. Premium is the best kind of paid service upgrade: it makes the user experience perfect and you support creators.

It is nice for there to be options available to users instead of an expectation of advertising. If you watch a lot of YouTube, Premium looks like a great choice, though I find it requires a reorientation of your headspace: think of YouTube Premium as “YouTube”, and YouTube sans Premium as the “free trial” or “lite” version. That framing also puts Google’s strategy for YouTube into a more understandable context, I think. Google has increased the per-video ad load, delivers fewer skippable ads, and is becoming stricter about ad blocking in the same way many software companies limit free trials.

But I can understand why people block ads, too, because the quality of ads I get on YouTube sucks. Part of this is my fault because I am a more privacy-conscious user and, so, take steps to prevent specific targeting. That means I get an awful lot of ads with deep-faked celebrities hawking sketchy investments, garbage supplements, gambling, diet scammers, and other bottom-of-the-barrel crap. I understand my restrictions reduce my likelihood of seeing things which interest me. On the other hand, why is Google accepting ads like these in the first place?

Ha notes that Adblock Plus is not dedicating itself to YouTube ad blocking, as it says those ads fall under the acceptable ads criteria. Fine. I do not think the sort of ads I get are actually acceptable in a broader sense; they are the kinds of things that would rightfully be rejected by the sleaziest print publication. But if they are not seen as disruptive in the way, say, a popover or interstitial ad might be, I can understand that.

At least there are now options. You can grin and bear the nightmare surveillance ad machine, you can excise yourself from that targeting and still put up with ads, or you can pay to separate yourself from the results of that system while your data is still used to feed it. Or you can try to fight it. Just be prepared for Google to fight back.

You can also support your favourite video creators — or writers — on platforms like Patreon, too. But that cuts Google out of the revenue picture, so do not expect to see them pitching that as a legitimate alternative option. The main problem with YouTube is that it is a social network for some users, and a utility for others, and those perspectives are not always compatible.

Want to experience twice as fast load times in Safari on your iPhone, iPad, and Mac?

Then download Magic Lasso Adblock — the ad blocker designed for you. It’s easy to setup, blocks all ads, and doubles the speed at which Safari loads.

Magic Lasso Adblock is an efficient and high performance ad blocker for your iPhone, iPad, and Mac. It simply and easily blocks all intrusive ads, trackers and annoyances in Safari. Just enable to browse in bliss.

Magic Lasso Adblock screenshot

By cutting down on ads and trackers, common news websites load 2× faster and use less data.

Over 280,000 users rely on Magic Lasso Adblock to:

  • Improve their privacy and security by removing ad trackers

  • Block annoying cookie notices and privacy prompts

  • Double battery life during heavy web browsing

  • Lower data usage when on the go

And unlike some other ad blockers, Magic Lasso Adblock respects your privacy, doesn’t accept payment from advertisers, and is 100% supported by its community of users.

With over 5,000 five-star reviews, it’s simply the best ad blocker for your iPhone, iPad, and Mac.

Download today via the Magic Lasso website.

My thanks to Magic Lasso Adblock for sponsoring Pixel Envy this week.

Harris Brewis is back with a four-hour examination of plagiarism on YouTube. Yeah, it is a big one; I watched it in two parts because I needed to charge my headphones, because that is the world in which we now live, and it effectively monopolized my lazy Sunday.

Its subject matter is a little inside baseball — I do not watch nearly enough YouTube to know any of the creators examined — but it is a thoughtful look at what plagiarism on YouTube is like and a particularly damning exposé of one person specifically. It is extremely long but it kind of needs to be in order to accommodate examples in a fuller context.

As you may expect, there is a part of the conclusion which touches on generative media tools: how they can be used to disguise plagiarism in text, and how they themselves are farming the broader works of others. I found this interesting as I have been writing about intellectual property and generative tools a little bit. I am not sure if Brewis’ presentation of copyright is correct — that is to be determined — but it does seem like these tools do something more akin to plagiarism than strict copyright violation. Attorneys at Heer Law — no relation, as far as I know — note that plagiarism “is an ethical offence — rather than a legal offence — and it does not necessarily encompass copyright infringement”. That feels right to me as a description of what something like ChatGPT does in producing something nominally original.

Matthew Lane:

I’m not saying that there aren’t issues with AI that need to be addressed, especially worker exploitation. AI art generators can be especially infuriating for artists: they use a lot while giving back little. In fact, these generators are arguably being built to replace artists rather than to provide artists with new tools. It can be attractive to throw anything in the way to slow it down. But copyright, especially copyright maximalism, has done a terrible job of preventing artist exploitation.

As I wrote recently, this is an area of interest for me as both a technology-curious person and as an artist. There are as many uses of generative products that feel creative and encouraging as there are ones that feel kind of like a cheat — for example, duplicating a specific artist’s signature style. That is unfortunate; it is also not new.

I am not a lawyer — I have a fine arts degree — but I understand it is generally legal to replicate another artist’s work. You can repaint a painting or make a sculpture that is a duplicate of another work. (Update: To be clear: for personal use.) What is illegal is presenting that work as though it were a creation of the original artist. That is, you can sculpt a famous statue and maybe even sell it as a copy (Update: in some places, sometimes!), but you cannot claim it is a figure by that artist — and you especially cannot do so in conjunction with a sale. It also depends on where you live. This is not legal advice.

It is also perfectly legal to make things in the style of a specific artist. There are plenty of people who make music intended to sound like Joni Mitchell, take photos that look like something Edward Burtynsky would do, and make movies that imitate Wes Anderson’s style. Artists lift from each other all the time. (Update: This also depends on how a court interprets it; as I was reminded by email, a jury determined Robin Thicke’s “Blurred Lines” sounded too similar to Marvin Gaye’s “Got to Give It Up”.)

But there is an obvious power imbalance between some random person recording a note-for-note cover of a Lana Del Rey song without permission and Lana Del Rey duplicating an unknown artist’s song. That is the uncomfortable tension of what some of these generative tools enable. A contributor to a stock photo website is likely no Lana Del Rey of their own craft, but they get paid to produce flexible illustrative photos for ads and brochures. If these images can instead be generated by a rich company with money to burn, photographers no longer get paid for this kind of work. That goes, too, for people who have been expected to churn out website filler, or even application developers. We are not going to solve this with stricter copyright laws.

A couple of days ago, I wrote up a short article about what I thought was a neat little workaround for being required to quit Safari to update a Safari app extension. This is a disruptive step for which developers have no workaround; it is just how Safari works.

But even though everything seemed to be okay on a surface level, something told me I should consult with an expert about whether this was a good idea.

So I emailed Jeff Johnson:

Following Nick Heer’s workaround, when you subsequently reenable StopTheMadness after updating to the latest version in the App Store while Safari is still open, Safari injects the updated extension’s content script and style sheet into open web pages that the extension has permission to access, which is typically all of them, including the pages with leftover content scripts from the previous version of the extension. Consequently, an App Store update can leave you with two different versions of the extension’s content script running simultaneously in the same web pages! This is a very undesirable situation, because the two competing scripts could conflict in unpredictable ways. You’ve suddenly gone from stopping the madness to starting the madness. In hindsight, therefore, quitting Safari in order to update extensions seems like a good idea, and that’s what I recommend to avoid potential issues and weird behavior.

After Johnson told me about this issue, it was obvious that posting my workaround was a bad idea, so I scrapped the post. But I think Johnson’s piece is worth reading to understand why the design of Safari extensions requires users to quit the browser to install updates. Of course I do not think it should, but what can you do?1
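Johnson’s double-injection scenario hints at a partial mitigation a content script could attempt on its own. The sketch below is hypothetical (the marker name, version strings, and comparison helper are mine, not anything from StopTheMadness), and it only prevents a second copy of the script from doing duplicate work; it cannot unregister listeners a stale copy has already attached to the page, which is part of why quitting Safari remains the safer option.

```javascript
// Hypothetical guard against the double-injection problem: each copy of
// the content script records its version on a shared global object and
// bails out if it is not strictly newer than whatever is already there.
const EXT_VERSION = "2.0.0"; // hypothetical version string for this copy
const MARKER = "__myExtensionContentScriptVersion"; // hypothetical marker name

function shouldRun(globalObject, version) {
  const existing = globalObject[MARKER];
  if (existing !== undefined && !isNewer(version, existing)) {
    // Another copy with the same or a newer version is already present.
    return false;
  }
  globalObject[MARKER] = version;
  return true;
}

// Compare dotted version strings numerically, e.g. "1.10.0" > "1.9.0".
function isNewer(a, b) {
  const pa = a.split(".").map(Number);
  const pb = b.split(".").map(Number);
  for (let i = 0; i < Math.max(pa.length, pb.length); i++) {
    const x = pa[i] ?? 0;
    const y = pb[i] ?? 0;
    if (x !== y) return x > y;
  }
  return false;
}
```

In a real extension, `globalObject` would be the page’s `window`; the point of the sketch is only that the updated script can detect the leftover one, not undo it.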


  1. If you work at Apple, see FB11882565 and FB12202841. ↥︎

Matt Growcoot, PetaPixel:

Standing in front of two large mirrors, Tessa Coates’ reflection does not return the same pose that she is making, and not only that, but both reflections are different from each other and different from the pose Coates was actually holding.

[…]

The photo is not a “Live Photo” nor is it a “Burst” — it’s just a normal picture taken on an iPhone. Understandably, Coates was freaked out.

Photography is real strange these days.

Apparently, an Apple Store employee told Coates that the company is testing a feature which sounds similar to Google’s Best Take. This is, as far as I can find, the first mention of this claim, but I would not give it too much credibility. Apple retail employees, in my experience, are often barely aware of the features of the current developer beta, let alone an internal build. They are not briefed on unannounced features. To be clear, I would not be surprised if Apple were working on something like this, but I would not bet on the reliability of this specific mention.

Update: MKBHD researcher David Imel, on Threads, says it is unlikely this photo is being depicted accurately, pointing out how different the arm positions are in each pose given the narrow exposure window. The metadata posted by Coates does not disprove this, but it says the exposure time was 1/100 of a second. If three photos were shot in rapid succession at that shutter speed, the whole capture would occur in a total of about 1/33 of a second, assuming no lag between images. I have seen some bizarre computational stuff from my iPhone, but nothing like this.
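The back-of-the-envelope timing above can be made explicit in a couple of lines, under the same assumption of zero gap between exposures:

```javascript
// Sanity check of the timing estimate: three frames at a 1/100 s
// shutter speed, assuming no readout or processing lag between them.
const shutterSeconds = 1 / 100; // exposure time per frame, from the metadata
const frames = 3;               // one frame per distinct pose
const totalSeconds = shutterSeconds * frames; // 0.03 s, i.e. about 1/33 s
```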

Update: It may not be a panoramic photo, but it sure looks like an iPhone photo taken in panorama mode, according to Faruk of iPhonedo.

Sergiu Gatlan, Bleeping Computer:

Apple released emergency security updates to fix two zero-day vulnerabilities exploited in attacks and impacting iPhone, iPad, and Mac devices, reaching 20 zero-days patched since the start of the year.

Both of these are WebKit bugs.

According to Project Zero’s spreadsheet, Apple patched ten zero-days in 2022, thirteen in 2021, three in 2020, two in 2019, three in 2016, and none in 2018, 2017, 2015, and 2014. It seems like a similar story across the board: the 2014 spreadsheet contains just eleven entries total, while the 2023 sheet contains fifty-six so far.

It is surely impossible to know, but one wonders how much of this is caused by vendors and exploiters alike getting better at finding zero-days, and how much can be blamed on worsening security in software. The latter seems hard to believe given increased restrictions on how much data is simply lying around to be leaked, but perhaps that is a driver of the increasing number of reports: when you build more walls, there are more opportunities to find cracks.

Patrick Howell O’Neill reported for MIT Technology Review in 2021 that the escalating number of exploits is driven primarily by state actors, followed by criminals, and that a combination of increased vigilance and bug bounty programs seems to have improved discovery. Kevin Poireault, in Infosecurity Magazine earlier this year, reports that the rise may reflect better defences against more straightforward exploits, forcing adversaries to use more advanced techniques.

Michael Geist, responding to the news that Google and the Canadian government have struck a deal for the Online News Act:

I’ve been asked several times today if the Canadian approach will be a model for other countries. While I suspect that many may be tempted by the prospect of new money for media, the Canadian experience will more likely be a cautionary tale of how government and industry ignored the obvious risks of its legislative approach and was ultimately left desperate for a deal to salvage something for a sector that is enormously important to a free and open democracy.

Given how critical journalism is, it is myopic to keep pushing its business model from one unsustainable option to another. At least Canadian publications will not be losing Google referral traffic, but this seems like a bad compromise for everyone.