Month: November 2023

Ina Fried, of Axios, with the scoop:

Apple is pausing all advertising on X, the Elon Musk-owned social network, sources tell Axios.

We have all had the experience where we have paused something and forgotten to resume it. These things can happen — and, in this case, this is something that should happen.

Update: Media Matters is keeping track of businesses that are pausing or ending their ad spend. Only four names on there right now but a couple of them look pretty big to me.

Professional editor John Buck spoke to former members of Apple’s Advanced Technology Group to gain an insight into the inner workings of this secretive team. The result is a compelling 400-page book called “Inventing the Future, Bit by Bit”.

“You got everything exactly right which is rare since few writers have the technical chops or interest in putting in the effort for that.”

— Steve Edelman, Founder, SuperMac

The book is filled with extraordinary detail and first-hand recollections, some shared for the very first time. It covers A/UX, HyperCard, LisaDraw, QuickScan, QuickTime, TrueType, and QuickDraw 24/32 as well as Projects Oreo, Warhol, Bass, Carnac, Spider, YACCintosh, Touchstone, Road Pizza, PDM, Milwaukee, 8*24GC, and Möbius. There’s also insight into the early development of Sarah, Jonathan, Lisa, and Macintosh computers.

ATG scientist Jean-Charles Mourey:

“We had a purpose and felt that we could change the world and have a positive impact on millions of people around the world. My friends thought I was in a cult.”

And the book is getting positive reviews from those who were there.

  • “You have a great skill in ‘telling the story’ with multiple players and simultaneous events.”

    — George Cossey, former Senior Programmer, Apple.

  • “So many stories I’ll never forget and stories I never knew!”

    — Mike Potel, former Head of Software Engineering, Apple.

  • “This is a ton of work. It’s great to have all the stories told!”

    — Steve Perlman, former Principal Scientist, Apple.

For readers of this blog, use coupon code ENVY for $5 off. The price includes shipping worldwide. An epub version is due in July 2024.

On October 27, 2022, Elon Musk acquired Twitter. Last month, CEO Linda Yaccarino acknowledged her employer’s contributions in a blog post titled “One Year in, the Future of X Is Bright”:

October 27 marks the one-year anniversary of this platform under new ownership and management.

While the headline is optimistic, this opening sentence has the tone and structure of recognizing the one-year anniversary of a natural disaster.

I am incredibly proud of the work our team has been doing to accelerate the future of X.

So let me share with you where we stand today:

Yaccarino remains surprised by how often a team of people can rush to build features announced on a whim. Also, she insists on calling this platform “X” instead of its real full name “X, formerly known as Twitter” — or “Twitter”, for short.

Here are some key points from this twenty-three item list:

Freedom of expression. X is now a place where everyone can freely express themselves, so long as they do so within the bounds of the law. We believe open and respectful discourse is the single best way for humanity to thrive.

Yaccarino is proud that Twitter extends its permissiveness to the limit of local laws, which means it would rather censor users in Turkey than withdraw its services in protest. Also, it is only too happy to censor, worldwide, posts critical of Indian prime minister Narendra Modi. Also, its owner threatens lawsuits in the U.S. against legal speech. That is the kind of free expression Yaccarino is proud of for Twitter.

Safety. Safety on X remains a critical priority – our work is not done, but we are making real progress. Our trust and safety team is working around the clock to combat bad actors and consistently enforce our rules in areas such as hate speech, platform manipulation, child safety, impersonation, civic integrity and more. We also remain committed to privacy and data protection.

Unless hate speech, civic integrity, or privacy violations are committed by Twitter’s owner.

Partnerships. Our team also has ongoing dialogue with external groups to keep up to date with potential risks and support the safety of the platform – partners like the Technology Coalition, Anti-Defamation League, American Jewish Committee and Global Internet Forum to Counter Terrorism.

Yaccarino’s reference to a “dialogue” with the Anti-Defamation League includes legal threats.

User base. Over half a billion of the world’s most informed and influential people come to X every month. That’s inclusive of our efforts to aggressively remove spam and inauthentic accounts – a step we believe is critical to improve the X user experience and grow the platform. We continue to see sign-ups average around 1.5 million per day.

Yaccarino is proud to consider Cat Turd informed, influential, and a person.

Brand safety and suitability. X advertisers now have a level of control that largely did not exist one year ago. Thanks to new products like Adjacency Controls, Sensitivity Settings and third party measurement partnerships with industry leaders Integral Ad Science and DoubleVerify, the average brand safety score on X is now >99%, and we are now seeing brand suitability scores at >97% when these controls are applied.

Twitter’s brand unsuitability is about three percent, and its safety controls have not prevented ads from Apple and Xfinity from appearing in explicitly pro-Nazi feeds. Ads from the University of Calgary are nestled between white supremacist tweets on a verified account which could plausibly be participating in revenue sharing.

Yaccarino, formerly the chair of global advertising for NBCUniversal, surely understands that it looks bad when her boss promotes antisemitic conspiracy theories, and can probably sympathize with IBM’s decision to pull a million dollars in advertising from the platform because even an apparently small amount of brand risk turns out to be really bad. Yaccarino hopes Apple does not also pull its hefty ad spend or use its considerable platform to denounce her boss’ increasingly vocal endorsements of vile, hateful, and conspiratorial worldviews.

From Twitter to X. We transformed Twitter into X, the everything app, where everyone is increasingly connected to everything they care about. This move enabled us to evolve past a legacy mindset and reimagine how users around the world consume, interact, watch and, soon, transact – all in one seamless interface. We have become the modern global town square.

Yaccarino, who does not keep the company’s app on the first home screen of her iPhone, also does not open the App Store.

And if we can achieve all of this in just 12 months, just imagine the scope of our ambition for next year – from expanded search to newswires to payments, we are just getting started.

This is a threat.

One year in, the future of X is bright.

Not as bright as that fucking sign, but just as dangerous if you look at it for too long.

Lance Ulanoff, TechRadar:

RCS or Rich Communication Services, a communications standard developed by the GSM Association and adopted by much of the Android ecosystem, is designed to universally elevate messaging communication across mobile devices. Even though Apple has been working with the group, it has until this moment steadfastly refused to add RCS support to iPhones. Now, however, Apple is changing its tune.

“Later next year, we will be adding support for RCS Universal Profile, the standard as currently published by the GSM Association. We believe the RCS Universal Profile will offer a better interoperability experience when compared to SMS or MMS. This will work alongside iMessage, which will continue to be the best and most secure messaging experience for Apple users,” said an Apple spokesperson.

Just last year, Tim Cook demurred on a question about support for the standard. For what it is worth, I am expecting an updated SMS-like experience, but I will be pleasantly surprised if it is more full featured. As Ulanoff notes, RCS does not itself support end-to-end encryption. The latest spec, released in 2019, does not even mention end-to-end encryption, nor does it prohibit text message bubbles from having a green background.

I am not sure when iOS 17.2 will be released, but it comes with a great new feature for the Action Button — it can trigger a translation feature. It uses the most recent selection from the Translate app; there is no way to choose which language to convert to from within the widget, so it kind of feels limited to always translating to one other language. If you are travelling, this will likely be an excellent use of that button.

Ted Johnson, Deadline:

Members of a special House committee fired off a letter to Apple, questioning whether the decision to end The Problem with Jon Stewart was due to concerns over the company’s relationship with China.

[…]

The committee is asking for a briefing by the company by Dec. 15, and they also plan to speak to Stewart’s representatives.

I also found Apple’s apparent distaste for then-upcoming topics on Stewart’s show to be concerning, if those reports are accurate. It is hard for me to believe there would not have been a discussion about control over topics of investigation when this show was pitched. Perhaps Apple executives became more sensitive — it is still unclear.

Even if reports are true that the show ended over whether Stewart could investigate topics about China, it seems disproportionate for U.S. federal government officials to be looking into it. Hollywood productions trying to appeal to the Chinese market is a well-documented phenomenon. But it is not really Chinese influence on movies as much as it is that major studios want to make money, so they want to release their movies in the world’s largest market for them, which is now China. But they are under no obligation to do so.

What these lawmakers appear to be mad about is the free market. Apple wants to make a lot of money selling devices it mostly makes in China, and it also wants to make a lot of recurring revenue by selling a streaming video subscription. It does not take a Congressional investigation; any idiot could see there would be a collision. But investigating incidents like this is apparently Rep. Gallagher’s role.

Johnson:

In an interview with Deadline, Gallagher said one of the problems the committee sees is “self-censorship on the front end.”

“What choices are they already making, knowing that they don’t want to offend China, when they decide to embark on a project? Ask yourself: When was the last time a movie featured a Chinese villain? I can’t think of one. Maybe that’s evidence that self-censorship is happening.”

Abram Dylan, writing for the Awl in 2012:

When American TV networks cut scenes it’s “edited,” and when China does it it’s “censored.” When Hollywood adds Hispanic characters and shies away from Mexican stereotypes, it’s catering to a growing demographic. When it changes the artistic integrity of “Transformers 2,” “Battleship,” and — upcoming Chinese-financed, future Criterion Collection standout — “Iron Man 3,” it’s “gripped” by a “pressure system.”

Transformers, Iron Man and Battleship are all three franchises that gave the U.S. military script oversight in exchange for cooperation. Was it a “pressure system” that “gripped” director Peter Berg when he cut a character from “Battleship” because the Navy thought a sailor looked too fat?

Note again the date; this is from eleven years ago, and we are still having the same complaints. As Dylan writes, there remain plenty of mainstream U.S. films with Chinese antagonists, if that is something you are concerned about; there are some even more recent, too.

The problem with the free market is that it rarely rewards artistic integrity, and that maximizing revenue often means trying to appease the widest possible audience. If a studio wants to include a market of a billion people where personal expression is not held to the same standards as in the U.S., it may need to make some changes. That is especially true for Apple as it risks losing access to its manufacturing engine. The problem these lawmakers have is with capitalism, not Apple specifically, but that is not really something they are able to admit.

Eyal Press, the New Yorker:

In June, an appellate court ordered the N.Y.P.D. to turn over detailed information about a facial-recognition search that had led a Queens resident named Francisco Arteaga to be charged with robbing a store. The court requested both the source code of the software used and information about its algorithm. Because the technology was “novel and untested,” the court held, denying defendants access to such information risked violating the Brady rule, which requires prosecutors to disclose all potentially exculpatory evidence to suspects facing criminal charges. Among the things a defendant might want to know is whether the photograph that had been used in a search leading to his arrest had been digitally altered. DataWorks Plus notes on its Web site that probe images fed into its software “can be edited using pose correction, light normalization, rotation, cropping.” Some systems enable the police to combine two photographs; others include 3-D-imaging tools for reconstructing features that are difficult to make out.

This example is exactly why artificial intelligence needs regulation. There are many paragraphs in this piece which contain alarming details about overconfidence in facial recognition systems, but proponents of letting things play out under current legislation might chalk that up to human fallibility. Yes, software might present a too-rosy impression of its capabilities, one might argue, but it is ultimately the operator’s responsibility to cross-check things before executing an arrest and putting an innocent person in jail. After all, there are similar problems with lots of forensic tools.

Setting aside how much incentive there is for makers of facial recognition software to be overconfident in their products, and how much leeway law enforcement seems to give them — agencies kept signing contracts with Clearview, for example, even after stories of false identification and arrests based on its technology — one could at least believe searches use photographs. But that is not always the case. DataWorks Plus markets tools which allow searches using synthesized faces which are based on real images, as Press reports — but you will not find that on its website. When I went looking, DataWorks Plus seems to have pulled the page where it appeared; happily, the Internet Archive captured it. You can see in its examples how it is filling in the entire right-hand side of someone’s face in a “pose correction” feature.

It is plausible to defend this as just a starting point for an investigation, and a way to generate leads. If it does not pan out, no harm, right? But it does seem to this layperson like a computer making its best guess about someone’s facial features is not an ethical way of building a case. This is especially true when we do not know how systems like these work, and it does not inspire confidence that there are no specific standards with which “A.I.” tools must comply.

Harry McCracken wrote a great profile of Marques Brownlee for Fast Company:

Brownlee, who turns 30 in December, has come a long way since he began shooting videos about tech hardware and software in his family’s suburban New Jersey home at age 15. (He uploaded the results under the nom de YouTube of MKBHD — for “Marques Keith Brownlee” and “high definition” — a moniker that has been synonymous with his own name ever since.) By the time he was in college, he was a phenomenon: In 2013, Google’s then senior VP of engineering, Vic Gundotra, declared Brownlee “the best technology reviewer on the planet.”

[…]

A decade and a half into his career, Brownlee is still expanding his influence: He’s added 1.4 million subscribers in the past 12 months alone. “He’s only going to grow from here, especially as Gen Z and Gen Alpha become the largest purchasing group,” predicts creator trends analyst Fana Yohannes. An alumnus of Apple’s PR department, she helped get Brownlee invited to the company’s product launch events starting in 2015, once it got its head around the notion of a 21-year-old YouTuber being one of the world’s most important tech experts.

Serendipitously, I was thinking about this exact thing over the weekend, albeit on a tangent. When Apple launched the iPhone X in 2017, it represented a shift in its marketing strategy. There was a time when Apple review units were a scarce resource among tech journalists — limited first to a handful of print publications and tech sites, then to blogs and more niche sites, and then to a few YouTube producers. Now, though, Apple must loan out hundreds of prerelease iPhones given the number of reviews I see when the embargo drops.

There was a larger context in which I was thinking about this. Apple product launches used to be live stage affairs, which came with risks and rewards — occasionally at the same time. When John Ternus announced the thousand-dollar Pro Display stand at WWDC 2019, there was an amazing crowd reaction, which seems to have caused Ternus to slightly flub the next line. It is an electric moment.

Contrast that with the pre-recorded announcement of the new Mac Pro at WWDC earlier this year from a mix of Ternus and Jennifer Munn. A benefit of the video format is that Apple can switch out speakers more fluidly, and it opens up opportunities for people who may not be comfortable in a one-take live setting. But that safety and confidence also mean there is no feedback when presenters say something surprising — either positive or negative. When Ternus returned to announce its price — a thousand dollars more than its predecessor even with dramatically less expansion capability — it was like he was telling you the weather.

An Apple product announcement is now fully a sales pitch with none of the theatrical tension of demonstrating something live. If you are into the products, you might go from watching that video to watching coverage of the new products a week later on YouTube in video reviews that are often referencing those same talking points. In the hands of a reviewer less skilled than Brownlee, a YouTuber might also reuse Apple’s own promo footage. All of this is a lot of video that can often feel samey, albeit with different production values between some really successful YouTube producers and one of the world’s richest companies.

My lengthy digression over, I should point you back to McCracken’s article, which is rather excellent and worth your time. The thing it captures well is how Brownlee is so often a calming and rational presence in the too-dramatic world of YouTube. There is little spectacle in Brownlee’s videos and nothing is shouted. I feel respected as a viewer. I appreciate that.

Tripp Mickle, Ella Koeze, and Brian X. Chen, New York Times:

But since 2017, iPhone repairs have been a minefield. New batteries can trigger warning messages, replacement screens can disable a phone’s brightness settings, and substitute selfie cameras can malfunction.

The breakdowns are an outgrowth of Apple’s practice of writing software that gives it control over iPhones even after someone has bought one. Unlike cars, which can be repaired with generic parts by auto shops and do-it-yourself mechanics, new iPhones are coded to recognize the serial numbers for original components and may malfunction if the parts are changed.

But not all parts in all cars can be replaced with generic components, as these reporters later gesture at:

Using software to control repairs has become commonplace across electronics, appliances and heavy machinery as faster chips and cheaper memory turn everyday products into miniature computers. HP has used a similar practice to encourage people to buy its ink cartridges over lower-priced alternatives. Tesla has incorporated it into many cars. And John Deere has put it in farm equipment, disabling machines that aren’t fixed by company repair workers.

Tesla apparently builds its standard and long range cars with the same battery, and reduces the range of the less expensive cars in software. And, in a letter to Canadian officials (PDF), Tesla defended locking information down from some independent repair technicians on the basis of safety. But it is not the only auto manufacturer engaged in software-based limitations. Recent FCA vehicles have a “security gateway module” that prevents some diagnostic messages from being read by unauthorized users. Authorized users — in the cases of both FCA and Tesla — are required to pay up to each of those companies for access.

So long as everything we use moves closer to becoming a computer, this problem will grow because some legislation does not explicitly prohibit it while other laws have loopholes. Right to repair advocates and the Times have framed this as a financial issue. But I am not sure that is the case; as I have written before, it is much more likely that these companies simply do not prioritize repairability. To be clear, that is not an excuse. If anything, I think that is even worse; it implies a lack of caring in how something is built if it is not made with repair in mind. Remember Apple’s butterfly keyboard? Shipping a faulty family of keyboards for years was bad enough, but it was made a fiasco because of how it was assembled — it was often easier to replace the entire top case of an affected laptop, at a cost of hundreds of dollars, instead of changing individual keys.

To be fair, Apple also seems like it wants to take this more seriously than it used to, by making it easier to do common repairs on the iPhone, and its configuration process is done entirely by customers when using its Self-Service Repair website. But it should be possible to use this tool with authentic parts harvested from a different iPhone, too. This should be mandated across all industries. Most people do not buy things with an understanding of how they were built and how to fix them. Better legislation for third-party repair support would consider this for all kinds of products.

This book has no rumours in it, all the action takes place outside 9–5, and Jony Ive isn’t mentioned. However, it is assembled bit by bit, so this is the perfect home for an exclusive look at Apple and its secrets.

Inventing the Future book cover

Inventing the Future is based on never-before-conducted interviews with engineers and scientists at Apple who ushered in the multimedia revolutions. As you watch a YouTube video on your phone today, these are the people you need to thank.

— Hansen Hsu, Computer History Museum

Professional editor John Buck ‘lost three years’ after an ex-Apple engineer told him, ‘There’s a book full of stories about Apple’s Advanced Technology Group that has never been told’. Buck spoke to dozens of former ATG members, pored over hundreds of documents, patents, and emails to gain an insight into the inner workings of this elusive group. The result is a compelling book filled with extraordinary detail and first-hand recollections of invention.

For readers of this blog, use coupon code ENVY for $5 off. The price includes shipping worldwide. Books.by prints and ships locally.

Has the rapid availability of images generated by A.I. programs duped even mainstream news into unwittingly using them in coverage of current events?

That is the impression you might get if you read a report from Cam Wilson, at Crikey, about generated photorealistic graphics available from Adobe’s stock image service that purportedly depict real-life events. Wilson pointed to images which suggested they depict the war in Gaza despite being created by a computer. When I linked to it earlier this week, I found similar imagery ostensibly showing Russia’s war on Ukraine, the terrorist attacks of September 11, 2001, and World War II.

This story has now been widely covered but, aside from how offensive it seems for Adobe to be providing these kinds of images — more on that later — none of these outlets seem to be working hard enough to understand how these images get used. Some publications which referenced the Crikey story, like Insider and the Register, implied these images were being used in news stories without knowing or acknowledging they were generative products. This seemed to be, in part, based on a screenshot in that Crikey report of one generated image. But when I looked at the actual pages where that image was being used, it was a more complicated story: there were a couple of sketchy blog posts, sure, but a few of them were referencing an article which used it to show how generated images could look realistic.1

This is just one image and a small set of examples. There are thousands more A.I.-generated photorealistic images that apparently depict real tragedies, ongoing wars, and current events. So, to see if Adobe’s A.I. stock library is actually tricking newsrooms, I spent a few nights this week looking into this in the interest of constructive technology criticism.

Here is my methodology: on the Adobe Stock website, I searched for terms like “Russia Ukraine war”, “Israel Palestine”, and “September 11”. I filtered the results to only show images marked as A.I.-generated, then sorted the results by the number of downloads. Then, I used Google’s reverse image search with popular Adobe images that looked to me like photographs. This is admittedly not perfect and certainly not comprehensive, but it is a light survey of how these kinds of images are being used.

Then, I would contact people and organizations which had used these images and ask them if they were aware it was marked as A.I.-generated, and if they had any thoughts about using A.I. images.

I found few instances where a generated image was being used by a legitimate news organization in an editorial context — that is, an A.I.-generated image being passed off as a photo of an event described by a news article. I found no instances of this being done by high-profile publishers. This is not entirely surprising to me because none of these generated images are visible on Adobe Stock when images are filtered to Editorial Use only; and, also, because Adobe is not a major player in editorial photography to the same extent as, say, AP Photos or Getty Images.

I also found many instances of fake local news sites — similar to these — using these images, and examples from all over the web used in the same way as commercial stock photography.

This is not to suggest some misleading uses are okay, only to note a difference in gravity between egregious A.I. use and that which is a question of taste. It would be extremely deceptive for a publisher to use a generated image in coverage of a specific current event, as though the image truly represents what is happening. It seems somewhat less severe should that kind of image be used by a non-journalistic organization to illustrate a message of emotional support, to use a real example I found. And it seems less severe still for a generated image of a historic event to be used by a non-journalistic organization as a kind of stock photo in commemoration.

But these are distinctions of severity; it is never okay for media to mislead audiences into believing something is a photo related to the story when it is neither. For example, here are relevant guidelines from the Associated Press:

We avoid the use of generic photos or video that could be mistaken for imagery photographed for the specific story at hand, or that could unfairly link people in the images to illicit activity. No element should be digitally altered except as described below.

[…]

[Photo-based graphics] must not misrepresent the facts and must not result in an image that looks like a photograph – it must clearly be a graphic.

From the BBC:

Any digital manipulation, including the use of CGI or other production techniques (such as Photoshop) to create or enhance scenes or characters, should not distort the meaning of events, alter the impact of genuine material or otherwise seriously mislead our audiences. Care should be taken to ensure that images of a real event reflect the event accurately.

From the New York Times:

Images in our pages, in the paper or on the Web, that purport to depict reality must be genuine in every way. No people or objects may be added, rearranged, reversed, distorted or removed from a scene (except for the recognized practice of cropping to omit extraneous outer portions). […]

[…]

Altered or contrived photographs are a device that should not be overused. Taking photographs of unidentified real people as illustrations of a generic type or a generic situation (like using an editor or another model in a dejected pose to represent executives being laid off) usually turns out to be a bad idea.

And from NPR:

When packages call for studio shots (of actors, for example; or prepared foods) it will be obvious to the viewer and if necessary it will be made perfectly clear in the accompanying caption information.

Likewise, when we choose for artistic or other reasons to create fictional images that include photos it will be clear to the viewer (and explained in the caption information) that what they’re seeing is an illustration, not an actual event.

I have quoted generously so you can see a range of explanations of this kind of policy. In general, news organizations say that anything which looks like a photograph should be immediately relevant to the story, anything which is edited for creative reasons should be obviously differentiated both visually and in a caption, and that generic illustrative images ought to be avoided.

I started with searches for “Israel Palestine war” and “Russia Ukraine war”, and stumbled across an article from Now Habersham, a small news site based in Georgia, USA, which originally contained this image illustrating an opinion story. After I asked the paper’s publisher Joy Purcell about it, they told me they “overlooked the notation that it was A.I.-generated” and said they “will never intentionally publish A.I.-generated images”. The article was updated with a real photograph. I found two additional uses of images like this one by reputable if small news outlets — one also in the U.S., and one in Japan — and neither returned requests for comment.

I next tried some recent events, like wildfires in British Columbia and Hawaii, an “Omega Block” causing flooding in Greece and Spain, and aggressive typhoons this summer in East Asia. I found images marked as generated by A.I. in Adobe Stock used to represent those events, but not indicated as such in use — in an article in the Sheffield Telegraph; on Futura, a French science site; on a news site for the debt servicing industry; and on a page of the U.K.’s National Centre for Atmospheric Science. Claire Lewis, editor of the Telegraph’s sister publication the Sheffield Star, told me they “believe that any image which is AI generated should say that in the caption” and would “arrange for its removal”. Requests for comment from the other three organizations were not returned.

Next, I searched “September 11”. I found plenty of small businesses using generated images of first responders among destroyed towers and a firefighter in New York in commemorative posts. And seeing those posts changed my mind about the use of these kinds of images. When I first wrote about this Crikey story, I suggested Adobe ought to prohibit photorealistic images which claim to depict real events. But I can also see an argument that an image representative of a tragedy used in commemoration could sometimes be more ethical than a real photograph. It is possible the people in a photo do not want to be associated with a catastrophe, or that its circulation could be traumatizing.

It is Remembrance Day this weekend in Canada — and Veterans Day in the United States — so I reverse-searched a few of those images and spotted one on the second page of a recent U.S. Department of Veterans Affairs newsletter (PDF). Again, in this circumstance, it serves only as an illustration in the same way a stock photo would, but one could make a good argument that it should portray real veterans.

Requests for comment made to the small businesses which posted the September 11 images, and to Veterans Affairs, went unanswered.

As replacements for stock photos, A.I.-generated images are perhaps acceptable. There are plenty of photos representing firefighters and veterans posed by models, so it seems to make little difference if that sort of image is generated by a computer. But in a news media context these images seem like they are, at best, an unnecessary source of confusion, even if they are clearly labelled. Their use only perpetuates the impression that A.I. is everywhere and nothing can be verified.

It is offensive to me that any stock photo site would knowingly accept A.I.-generated graphics of current events. Adobe told PetaPixel that its stock site “is a marketplace that requires all generative AI content to be labeled as such when submitted for licensing”, but it is unclear to me how reliable that is. I found a few of these images for sale from other stock photo sites without any disclaimers. That means either these images were erroneously marked as A.I.-generated on Adobe Stock, or other providers are less stringent — and that people have been using generated images without any way of knowing. Neither option is great for public trust.

I do think there is more that Adobe could do to reduce the likelihood of A.I.-generated images used in news coverage. As I noted earlier, these images do not appear when the “Editorial” filter is selected. However, there is no way to configure an Adobe account to search this selection by default.2 Adobe could permit users to set a default set of search filters — to only show editorial photos, for example, or exclude generative A.I. entirely. Until that becomes possible from within Adobe Stock itself, I made a bookmark-friendly empty search which shows only editorial photographs. I hope it is helpful.

Update: On November 11, I updated the description of where one article appeared. It was in the Sheffield Telegraph, not its sister publication, the Sheffield Star.


  1. The website which published this article — Off Guardian — is a crappy conspiracy theory site. I am avoiding linking to it because I think it is a load of garbage and unsubtle antisemitism, but I do think its use of the image in question was, in a vacuum, reasonable. ↥︎

  2. There is also no way to omit editorial images by default, which makes Adobe Stock frustrating to use for creative or commercial projects, as editorial images are not allowed to be manipulated. ↥︎

Online privacy isn’t just something you should be hoping for — it’s something you should expect. You should ensure your browsing history stays private and is not harvested by ad networks.

By blocking ad trackers, Magic Lasso Adblock stops you being followed by ads around the web.

Magic Lasso Adblock privacy benefits

It’s a native Safari content blocker for your iPhone, iPad, and Mac that’s been designed from the ground up to protect your privacy.

Rely on Magic Lasso Adblock to:

  • Remove ad trackers, annoyances and background crypto-mining scripts

  • Browse common websites 2.0× faster

  • Double battery life during heavy web browsing

  • Lower data usage when on the go

So, join over 280,000 users and download Magic Lasso Adblock today.

My thanks to Magic Lasso Adblock for sponsoring Pixel Envy this week.

What was it that Steve Jobs said?

The only problem with Microsoft is that they just have no taste. They have absolutely no taste, and I don’t mean that in a small way, I mean that in a big way.

Imagine taking that as a compliment — as a directive.

Update: Microsoft has pulled this version while emphasizing that it was just a test. It is very important to gather user feedback on how intrusive a background syncing process can be.

It is not every day that a company launches a new kind of device, let alone one from high-profile ex-Apple employees. Well, after years of speculation and teasers, Humane is ready to begin selling its A.I. “pin”.1

You can think of it as the answer to the question “what if you could wear a smart kitchen speaker?”, and it sounds kind of compelling or, at least, not stupid. If a smartphone is a perfect convergence device, you can think of this as an attempt to move in the other direction.

Some people say they want to use their phone less, but a $700 device with a $24-per-month cell plan seems like an ambitious product for that niche. There are also plausible accessibility benefits to a mostly voice-controlled device for anyone who is able to clearly speak but maybe lacks fine motor control. Time will tell, but we can explore what has been revealed about the device so far.

In a video, Humane co-founder Imran Chaudhri asks the A.I. Pin when the next eclipse will occur and where it will be visible. It responded that it will be on April 8, 2024, and that the “best places to see it are Exmouth, Australia and East Timor”. So I looked it up, and that is not right at all. This solar eclipse will almost exclusively be visible across North America and it will not be seen anywhere near Australia. In fact, its path is so specific that there is a marketing campaign about the “Great American Eclipse”.

It is just one error in one promotional video; I do not want to put too much emphasis on it. But anything this pin does requires a high degree of trust by the nature of the product. I would not book flights on the basis of its reply, but there are lower risk purchasing decisions that could feel more comfortable. Later in the video, Chaudhri asks how much a book costs online by showing the pin its cover and, after hearing the price, directs the pin to buy it. That sounds great if you trust that you are getting the best price and you have no interest in shopping around. Maybe you are one of the apparently small handful who shops by voice.

Speaking of small handfuls, the pin also said the almonds in Chaudhri’s hand contained fifteen grams of protein, which was off by at least a factor of two.

There has clearly been a lot of thought put into this product and it seems like the company is very proud of it. It is a tall order to compete with established wearable products. I am, at the very least, intrigued by its premise, but I find these botched demos do not engender trust. Humane is a brand new company and this is a new kind of device, so it seems like it should be doing everything to build confidence right out of the gate.

Erin Griffith and Tripp Mickle, of the New York Times, got a preview of the device, and they seem skeptical. Chaudhri acknowledges the pin has not stopped team members from using their smartphones. Also, there is this:

Humane established a company culture that borrowed from Apple, including its secretiveness. During its experimentation phase, the start-up created intrigue by announcing high profile investors like Mr. Altman and making grandiose — if vague — public statements about building “the next shift between humans and computing.” […]

On the contrary, Humane’s approach of broadcasting its world-changing ambitions before showing the device is decidedly not Apple-like.

But what do I know? Time named it one of the best inventions of the year two weeks ago. I am sure that has nothing to do with the disclaimer that Time’s ownership is also an investor in Humane.


  1. Copy-editing note: Humane is spelling it “Ai Pin” in some places, “ai pin” in others, and also “AiPin” sometimes, but pronouncing it ay eye pin. I am using the house-styled “A.I.” because I think that is more consistent than what Humane is doing. ↥︎

Adam Gabbatt, the Guardian:

Jezebel, a feminist US news site, was shut down by its owners on Thursday, with 23 people laid off and no plans for the outlet to resume publication.

G/O Media, which owns Jezebel and other sites including Gizmodo and the Onion, announced the closure in a memo to staff, which was obtained by the Guardian.

“Unfortunately, our business model and the audiences we serve across our network did not align with Jezebel’s,” Jim Spanfeller, the chief executive of G/O Media, wrote in the memo, which was sent to staff on Thursday morning.

This sucks. It follows about seven years of parent company mismanagement, first at Univision and now at Great Hill Partners, a private equity firm. The ray of hope I am clinging to is that former colleagues have found success in independence.

Some ex-writers from Kotaku launched a new site:

Welcome to Aftermath, a worker-owned, reader-supported news site covering video games, the internet, and the cultures that surround them.

Aftermath exists in the same realm as Defector and the Autopian, both of which were created by ex-G/O Media writers and editors.

A brief aside: like Defector, Aftermath is built on Alley’s Lede stack, which builds upon a WordPress base with paid subscriptions, bulk email, and Vox Media’s Coral comments system. For its part, Vox announced earlier this year that it would be moving off Chorus and onto WordPress. WordPress is solidifying its place as the default CMS for the web. Aside over.

Rusty Foster:

When the platforms turned off the click hose and hedge fund herbs started buying the tattered remnants of 2010-era blog networks to chop them up for scrap, I wondered when we’d reach the Awl inflection point, where the tools to start a subscription-funded blog were cheap enough and the pool of unemployed reporter goblins was deep enough to start generating a new cohort of publications like this. I think I’m ready to say it’s now, and personally I love to see it.

I welcome this new era of worker-owned independent media outlets built mostly on the success of writers. The only reason I used to read Deadspin was because of who wrote for it; when they moved, so did I. And now they can be supported directly by a core group of readers who trust them.

A little over a year ago, the U.S. Federal Trade Commission sued Kochava, a data broker, for doing things deemed too creepy even for regulators in the United States. In May, a judge required the FTC to narrow its case.

Ashley Belanger, Ars Technica:

One of the world’s largest mobile data brokers, Kochava, has lost its battle to stop the Federal Trade Commission from revealing what the FTC has alleged is a disturbing, widespread pattern of unfair use and sale of sensitive data without consent from hundreds of millions of people.

US District Judge B. Lynn Winmill recently unsealed a court filing, an amended complaint that perhaps contains the most evidence yet gathered by the FTC in its long-standing mission to crack down on data brokers allegedly “substantially” harming consumers by invading their privacy.

[…]

Winmill said that for now, the FTC has provided enough support for its allegations against Kochava for the lawsuit to proceed.

And what a complaint it is. Even with the understanding that its claims are unproven and many are based on Kochava’s marketing documents, it is yet another reminder of how much user data is captured and resold by brokers like these with virtually no oversight or restrictions.

I noticed something notable on page 31 of the complaint. The FTC shows a screenshot of an app with a standard iOS location services consent dialog overtop. The name of the app is redacted, but it appears to be Ibotta based on a screenshot in this blog post. The FTC describes this as a “consent screen that Kochava provided”, and later says it is “not a Kochava app, nor is Kochava mentioned anywhere on the consent screen”. The FTC alleges location data collected through this app still made its way to Kochava without users’ awareness. While the FTC muddles the description of this consent screen, it is worth mentioning that a standard iOS consent screen appears to be, in this framing, inadequately informative.

I was reminded of Nikita Prokopov’s classic post today — “People Expect Technology to Suck Because It Actually Sucks” — in much the same way I think of it on many days, but especially today. These are all things which happened today, starting from when I woke up:

  • I grabbed my phone off my nightstand and launched the CBC News app. A scrolling gesture in the Top Stories feed was misinterpreted as a tap on an ad, which launched Safari. This is a constant problem in many apps but, particularly, in CBC News.

  • Next, I opened the New York Times app. I tapped on a story, then returned to the Today view, which immediately refreshed and showed some different stories.

  • I marked a story within the Times app to read it later. I assumed I would find this in the For You section of the app, but I was wrong. You actually need to be in the Today view and then you must tap the person icon in the upper-right. I am noting this because I will forget it again and refer back to this post.

  • Messaging a friend, I once again noticed that the autocorrect suggestion bubble is sometimes partially obscured by the keyboard. I have predictive text turned off so this is the old-style iOS autocorrect bubble.

  • Partway through my text to my friend, the predictive text bar appeared with no particular trigger — for example, I did not type anything like “my address is” — then disappeared, then reappeared with a button to send money via Apple Pay, which is not supported in Canada.

I brewed some coffee and started my day on my Mac:

  • There was intermittent lag in Bluetooth keyboard entry in MacOS. Running killall Dock seemed to fix it temporarily; connecting my keyboard via a wire and toggling its Bluetooth mode, then disconnecting the wire seems to have corrected it.

  • I was listening to a song in Music, then I paused it to watch a video on YouTube in Safari, then I closed the Safari tab and tapped the play/pause key on my keyboard, which did nothing because it was — according to the audio playback menubar item — still controlling that closed YouTube tab.

  • When performing ripple deletes in a simple Adobe Audition project, there is lag or delay which increases a little bit with each ripple delete. After ten minutes or so of work, it is necessary to restart Audition. I lost an hour today to tracking down and trying to diagnose this problem. It turns out many people have experienced this problem on MacOS and Windows for years, and there does not appear to be a fix.

    Interestingly, Audition does not consume a lot of resources. It uses less than a single CPU core even while doing complex editing, and its RAM consumption is similarly modest. It is just a really, really slow application.

  • OneDrive and fileproviderd put a combined 300% pressure on my CPU while syncing Audition’s temporary files. I do not necessarily need those temporary files to sync, so I pause OneDrive. Then a colleague asks me to share a link to a file and I find that OneDrive cannot generate links while syncing is paused. Resuming syncing causes high CPU consumption for several minutes.

  • I filed a bug report against this with Microsoft. (The relevant Apple one is FB13320112.) The text box was unresponsive, but in a new way compared to the keyboard entry problems I was having earlier.

  • I attempted to launch Audition from Spotlight by typing “aud”, which momentarily flashed Audition before changing to Audio MIDI Setup as I hit return.

  • I noticed my free disk space had dropped by over 20 GB in the span of an hour for no clear reason. A brief investigation did not reveal anything immediately, but I got sidetracked by…

  • …a 34 GB folder of cached Apple Music files sitting in a ~/Library/ folder labelled “com.apple.iTunes”. It appears to have been untouched since iTunes became Music but, for some reason, MacOS has not cleared it out.

  • I switched to my laptop to write a post through MarsEdit this evening. There was apparently a configuration change somewhere — probably at my web host — which causes it to return a “403 Forbidden” error when attempting to publish through MarsEdit. I have, as of writing, spent three hours trying to fix this. I finally gave up and asked my web host for help; they fixed it because their support is great.

  • I tried to AirDrop a website from Safari on my iPhone to my wife’s iPhone. It got stuck on “Waiting…”, so I cancelled the AirDrop. Then I navigated away from the page and the AirDrop occurred a beat later.

  • I dismissed a Time Machine notice that my MacBook Pro has not been backed up in about two months. The hard drive attached to my “server” seems to have a problematic connection or board or something else, and it is something I need to fix.

None of the problems above are life-changing, but this list is representative of the kinds of hiccups I experience more-or-less daily. It could be a different mix of things with more or less impact than those above, but these problems often require I spend time trying to diagnose and fix them. Sometimes I can; sometimes, as with the Adobe Audition problem, the tools just suck and I have no recourse.

I know there are real people working on these products, many of whom really do want to make them the very best. I am encouraged by stories like Mark Gurman’s report today in which it seems that Apple has spent a couple of weeks switching from feature development to bug fixing mode for its next major releases. I am grateful for how incredible most of this stuff often is, and I understand things occasionally need fixing. But not like this. The ways in which these things break rob me of confidence in everything I use. I cannot see a good reason I would want to introduce more computers into my life, like with “smart” home devices.

It is amazing what I do every day with the computer on my desk, the one on my lap, and the one in my pocket. But I wish they did everything more reliably, predictably, and consistently. I am prepared to fix things sometimes. I do not understand why I am tending to these things daily like they are made in a shed instead of by some of the world’s most valuable corporations. We, the users, deserve better than this.

U2 is not my band. I have never been to Las Vegas, and it does not seem like my kind of city. I am more of a standing-space-in-a-gross-nightclub kind of concert-goer.

But these photos from Manton Reece make me want to see just about any show at the Sphere. I had not heard of this venue before reading Reece’s post and it seems tailor-made for an extraordinary experience; I would love to see what other bands with interesting stage ideas — like Talking Heads or Tool — could do with this enormous wraparound screen.

Dan Milmo, the Guardian:

Microsoft’s news aggregation service published the automated poll next to a Guardian story about the death of Lilie James, a 21-year-old water polo coach who was found dead with serious head injuries at a school in Sydney last week.

The poll, created by an AI program, asked: “What do you think is the reason behind the woman’s death?” Readers were then asked to choose from three options: murder, accident or suicide.

This is far from the first time something like this has happened at MSN. An obituary published in September appeared to use some basic word substitution, and an August article promoted the Ottawa Food Bank as a tourism attraction — though a Microsoft spokesperson insisted articles are not published without human oversight.

Karl Bode, Techdirt:

While Microsoft executives have posted endlessly about the responsible use of AI, that apparently doesn’t include their own news website. MSN is routinely embedded as the unavoidable default launch page at a lot of enterprises and companies, ensuring this automated bullshit sees fairly widespread distribution even if users don’t actually want to read any of it.

If you believe Cloudflare’s data, MSN is the fifty-third most popular domain in the world.