Month: June 2024

Ina Fried, Axios:

Adobe on Tuesday updated its terms of service to make explicit that it won’t train AI systems using customer data.

The move follows an uproar over largely unrelated changes Adobe made in recent days to its terms of service — which contained wording that some customers feared was granting Adobe broad rights to customer content.

Again, I must ask whether businesses are aware of how little trust there currently is in technology firms’ A.I. use. People misinterpret legal documents all the time — a minor consequence of how we have normalized signing a non-negotiable contract every time we create a new account. Most people are not equipped to read and comprehend the consequences of those contracts, and it is unsurprising that they assume the worst.

The U.S. Federal Trade Commission:

The Federal Trade Commission is taking action against software maker Adobe and two of its executives, Maninder Sawhney and David Wadhwani, for deceiving consumers by hiding the early termination fee for its most popular subscription plan and making it difficult for consumers to cancel their subscriptions.

A federal court complaint filed by the Department of Justice upon notification and referral from the FTC charges that Adobe pushed consumers toward the “annual paid monthly” subscription without adequately disclosing that cancelling the plan in the first year could cost hundreds of dollars. Wadhwani is the president of Adobe’s digital media business, and Sawhney is an Adobe vice president.

The inclusion of two Adobe executives as co-defendants is notable, though not entirely unique — in September, the FTC added three executives to its complaint against Amazon, a move a judge recently upheld.

The contours of the case itself bear similarities to the Amazon Prime one, too. In both cases, customers are easily coerced into subscriptions which are difficult to cancel. Executives were aware of customer complaints, according to the FTC, yet they allegedly allowed or encouraged these practices. But there are key differences between these cases as well. Amazon Prime is a monthly cancel-anytime subscription — if you can navigate the company’s deliberately confusing process. Adobe, on the other hand, offers three ways to pay for many of its products: on a monthly basis which can be cancelled at any time, on an annual basis, or on a monthly basis locked into an annual contract. However, it predominantly markets its products with the latter option, and preselects it when subscribing. That is where the pain begins.

The difficulty and cost of cancelling an Adobe subscription is legendary. It is right up there with gyms for how badly it treats its customers. It has designed a checkout process that defaults people into an annual contract, and a cancellation workflow which makes extricating oneself from that contract tedious, time-consuming, and expensive. If Adobe wanted to make it obvious what users were opting into at checkout, and easy for them to end a subscription, it could have designed those screens in that way. Adobe did not.

Robb Knight blocked various web scrapers via robots.txt and through nginx. Yet Perplexity seemed to be able to access his site:

I got a perfect summary of the post including various details that they couldn’t have just guessed. Read the full response here. So what the fuck are they doing?

[…]

Before I got a chance to check my logs to see their user agent, Lewis had already done it. He got the following user agent string which certainly doesn’t include PerplexityBot like it should: […]

I am sure Perplexity will respond to this by claiming it was inadvertent, and it has fixed the problem, and it respects publishers’ choices to opt out of web scraping. What matters is how we have only a small amount of control over how our information is used on the web. It defaults to open and public — which is part of the web’s brilliance, until the audience is no longer human.

Unless we want to lock everything behind a login screen, the only mechanisms for control that we have are dependent on companies like Perplexity being honest about their bots. There is no chance this problem only affects the scraping of a handful of independent publishers; this is certainly widespread. Without penalty or legal reform, A.I. companies have little incentive not to do exactly the same as Perplexity.
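For reference, the two layers of blocking Knight describes look roughly like this. A robots.txt rule is purely advisory; an nginx rule actively refuses requests — but only those whose user agent admits to being the bot, which is precisely what a crawler sending a generic browser string sidesteps. This is a sketch, not Knight’s exact configuration; the bot list and server name are illustrative:

```nginx
server {
    listen 80;
    server_name example.com;

    # Refuse requests that self-identify as known A.I. crawlers.
    # A bot sending a generic browser user agent sails right past this,
    # which is exactly the behaviour Knight and Lewis observed.
    if ($http_user_agent ~* (PerplexityBot|GPTBot|CCBot)) {
        return 403;
    }
}
```

The corresponding robots.txt entry — `User-agent: PerplexityBot` followed by `Disallow: /` — depends entirely on the crawler choosing to honour it.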

Kashmir Hill, New York Times:

[Clearview AI] A facial recognition start-up, accused of invasion of privacy in a class-action lawsuit, has agreed to a settlement, with a twist: Rather than cash payments, it would give a 23 percent stake in the company to Americans whose faces are in its database.

This is an awful move by an awful company. It turns U.S. victims of its global privacy invasion into people who are invested and complicit in its success.

Pavan Davuluri, of Microsoft:

Today, we are communicating an additional update on the Recall (preview) feature for Copilot+ PCs. Recall will now shift from a preview experience broadly available for Copilot+ PCs on June 18, 2024, to a preview available first in the Windows Insider Program (WIP) in the coming weeks. Following receiving feedback on Recall from our Windows Insider Community, as we typically do, we plan to make Recall (preview) available for all Copilot+ PCs coming soon.

Microsoft has always struggled to name its products coherently, but Microsoft Copilot+ PCs with Recall (preview) available first through the Windows Insider Program (WIP) has to take the cake. Absolute gibberish.

Anyway, it is disappointing to see Microsoft botch the announcement of this feature so badly. Investors do not seem to care about how untrustworthy the company is because, face it, how many corporations big and small are going to abandon Windows and Office? As long as its leadership keeps saying the right things, it seems it is still comfortable to sit in the afterglow of its A.I. transformation.

Online privacy isn’t just something you should be hoping for — it’s something you should expect. You should ensure your browsing history stays private and is not harvested by ad networks.

By blocking ad trackers, Magic Lasso Adblock stops you being followed by ads around the web.

Screenshot of Magic Lasso Adblock

It’s a native Safari content blocker for your iPhone, iPad, and Mac that’s been designed from the ground up to protect your privacy.

Rely on Magic Lasso Adblock to:

  • Remove ad trackers, annoyances and background crypto-mining scripts

  • Browse common websites 2.0× faster

  • Double battery life during heavy web browsing

  • Lower data usage when on the go

So, join over 300,000 users and download Magic Lasso Adblock today.

My thanks to Magic Lasso Adblock for sponsoring Pixel Envy this week.

Eryk Salvaggio, Tech Policy Press:

People are growing ever more frustrated by the intrusiveness of tech. This frustration feeds a cycle of fear that can be quickly dismissed, but doing so strikes me as either foolish or cynical. I am not a lawyer, but lately I have been in a lot of rooms with lawyers discussing people’s rights in the spheres of art and AI. One of the things that has come up recently is the challenge of translating oftentimes unfiltered feelings about AI into a legal framework.

[…]

I would never claim to speak to the concerns of everyone I’ve spoken with about AI, but I have made note of a certain set of themes. I understand these as three C’s for data participation: Context, Consent, and Control.

This is a thoughtful essay about what it means for creation to be public, and the imbalanced legal architecture covering appropriation and reuse. I bet many people feel this in their gut — everything is a remix, yet there are vast differences between how intellectual property law deals with individuals compared to businesses.

If I were creating music by hand which gave off the same vibes as another artist, I would be worried about a resulting lawsuit, even if I did not stray into the grey area of sampling. And I would have to obtain everything legally — if I downloaded a song off the back of a truck, so to speak, I would be at risk of yet more legal jeopardy, even if it was for research or commentary. Yet an A.I. company can scrape all the music that has ever been published to the web, and create a paid product that will reproduce any song or artist you might like without credit or compensation; they are arguing this is fair use.

This does not seem like a fair situation, and it is not one that will be remedied by making copyright more powerful. I appreciated Salvaggio’s more careful assessment.

Renee Dudley and Doris Burke, reporting for ProPublica which is not, contrary to the opinion of one U.S. Supreme Court jackass justice, “very well-funded by ideological groups” bent on “look[ing] for any little thing they can find, and they try[ing] to make something out of it”, but is instead a distinguished publication of investigative journalism:

Microsoft hired Andrew Harris for his extraordinary skill in keeping hackers out of the nation’s most sensitive computer networks. In 2016, Harris was hard at work on a mystifying incident in which intruders had somehow penetrated a major U.S. tech company.

[…]

Early on, he focused on a Microsoft application that ensured users had permission to log on to cloud-based programs, the cyber equivalent of an officer checking passports at a border. It was there, after months of research, that he found something seriously wrong.

This is a deep and meaningful exploration of Microsoft’s internal response to the conditions that created 2020’s catastrophic SolarWinds breach. It seems that both Microsoft and the Department of Justice knew well before anyone else — perhaps as early as 2016 in Microsoft’s case — yet neither did anything with that information. Other things were deemed more important.

Perhaps this was simply a multi-person failure in which dozens of people at Microsoft could not see why Harris’ discovery was such a big deal. Maybe they all could not foresee this actually being exploited in the wild, or there was a failure to communicate some key piece of information. I am a firm believer in Hanlon’s razor.

On the other hand, the deep integration of Microsoft’s entire product line into sensitive systems — governments, healthcare, finance — magnifies any failure. The incompetence of a handful of people at a private corporation should not result in 18,000 infected networks.

Ashley Belanger, Ars Technica:

Microsoft is pivoting its company culture to make security a top priority, President Brad Smith testified to Congress on Thursday, promising that security will be “more important even than the company’s work on artificial intelligence.”

Satya Nadella, Microsoft’s CEO, “has taken on the responsibility personally to serve as the senior executive with overall accountability for Microsoft’s security,” Smith told Congress.

[…]

Microsoft did not dispute ProPublica’s report. Instead, the company provided a statement that almost seems to contradict Smith’s testimony to Congress today by claiming that “protecting customers is always our highest priority.”

Microsoft’s public relations staff can say anything they want. But there is plenty of evidence — contemporary and historic — showing this is untrue. Can it do better? I am sure Microsoft employs many intelligent and creative people who desperately want to change this corrupted culture. Will it? Maybe — but for how long is anybody’s guess.

The Asahi Shimbun, in a non-bylined report:

The new law designates companies that are influential in four areas: smartphone operating systems, app stores, web browsers and search engines.

The new law will prohibit companies from giving preferential treatment for the operator’s own payment system and from preventing third-party companies from launching new application stores.

[…]

The new legislation sets out exceptional rules in cases to protect security, privacy and youth users.

Penalties are 20–30% of Japanese revenue. Japan is one of very few countries in the world where the iPhone’s market share exceeds that of Android phones. I am interested to know if Apple keeps its policies for developers consistent between the E.U. and Japan, or if they will diverge.

“Conspirador Norteño” in January 2023:

BNN (the “Breaking News Network”, a news website operated by tech entrepreneur and convicted domestic abuser Gurbaksh Chahal) allegedly offers independent news coverage from an extensive worldwide network of on-the-ground reporters. As is often the case, things are not as they seem. A few minutes of perfunctory Googling reveals that much of BNN’s “coverage” appears to be mildly reworded articles copied from mainstream news sites. For science, here’s a simple technique for algorithmically detecting this form of copying.
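The linked post details its own method; as a rough sketch of the general idea — mine, not necessarily the author’s exact approach — reworded copying can be flagged by measuring how many word n-grams two articles share, since light rewording leaves most multi-word phrases intact:

```python
def ngrams(text, n=5):
    # Set of all n-word shingles in the text, lowercased.
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(a, b, n=5):
    # Fraction of the smaller article's shingles that also appear in the
    # other; near 1.0 suggests copying, near 0.0 suggests independence.
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / min(len(ga), len(gb))
```

Mild rewording lowers the score but rarely to zero, which is why even “mildly reworded” articles remain detectable.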

Kashmir Hill and Tiffany Hsu, New York Times:

Many traditional news organizations are already fighting for traffic and advertising dollars. For years, they competed for clicks against pink slime journalism — so-called because of its similarity to liquefied beef, an unappetizing, low-cost food additive.

Low-paid freelancers and algorithms have churned out much of the faux-news content, prizing speed and volume over accuracy. Now, experts say, A.I. could turbocharge the threat, easily ripping off the work of journalists and enabling error-ridden counterfeits to circulate even more widely — as has already happened with travel guidebooks, celebrity biographies and obituaries.

See, it is not just humans producing abject garbage; robots can do it, too — and way better. There was a time when newsrooms could be financially stable on display ads. Those days are over for a team of human reporters, even if all they do is rewrite rich guy tweets. But if you only need to pay a skeleton operations staff to ensure the robots continue their automated publishing schedule, well, that becomes a more plausible business venture.

Another thing of note from the Times story:

Before ending its agreement with BNN Breaking, Microsoft had licensed content from the site for MSN.com, as it does with reputable news organizations such as Bloomberg and The Wall Street Journal, republishing their articles and splitting the advertising revenue.

I have to wonder how much of an impact this co-sign had on the success of BNN Breaking. Syndicated articles on MSN like these are shown in various places on a Windows computer, and are boosted in Bing search results. Microsoft is increasingly dependent on A.I. for editing its MSN portal, with predictable consequences.

“Conspirador Norteño” in April:

The YouTube channel is not the only data point that connects Trimfeed to BNN. A quick comparison of the bylines on BNN’s and Trimfeed’s (plagiarized) articles shows that many of the same names appear on both sites, and several X accounts that regularly posted links to BNN articles prior to April 2024 now post links to Trimfeed content. Additionally, BNN seems to have largely stopped publishing in early April, both on its website and social media, with the Trimfeed website and related social media efforts activating shortly thereafter. It is possible that BNN was mothballed due to being downranked in Google search results in March 2024, and that the new Trimfeed site is an attempt to evade Google’s decision to classify Trimfeed’s predecessor as spam.

The Times reporters definitively linked the two and, after doing so, Trimfeed stopped publishing. Its domain, like BNN Breaking, now redirects to BNNGPT, which ostensibly uses proprietary technologies developed by Chahal. Nothing about this makes sense to me and it smells like bullshit.

Apple’s Human Interface Guidelines:

[Beginning in iOS 18 and iPadOS 18] People can customize the appearance of their app icons to be light, dark, or tinted. You can create your own variations to ensure that each one looks exactly the way you want. See Apple Design Resources for icon templates.

Design your dark and tinted icons to feel at home next to system app icons and widgets. You can preserve the color palette of your default icon, but be mindful that dark icons are more subdued, and tinted icons are even more so. A great app icon is visible, legible, and recognizable, even with a different tint and background.

Louie Mantia:

Apple’s announcement of “dark mode” icons has me thinking about how I would approach adapting “light mode” icons for dark mode. I grabbed 12 icons we made at Parakeet for our clients to illustrate some ways of going about it.

I appreciated this deep exploration of different techniques for adapting alternate icon appearances. Obviously, two days into the first preview build of a new operating system is not the best time to adjudicate its updates. But I think it is safe to say a quality app from a developer that cares about design will want to supply a specific dark mode icon instead of relying upon the system-generated one. Any icon with more detail than a glyph on a background will benefit.

Also, now that there are two distinct appearances, I think it would be great if icons which are very dark also had lighter alternates, where appropriate.

Jason Koebler, 404 Media:

Monday, Elon Musk tweeted a thing about Apple’s marketing event, an act that took Musk three seconds but then led to a large portion of the dwindling number of employed human tech journalists to spring into action and collectively spend many hours writing blogs about What This Thing That Probably Won’t Happen All Means.

Karl Bode, Techdirt:

Journalists are quick to insist that it’s their noble responsibility to cover the comments of important people. But journalism is about informing and educating the public, which isn’t accomplished by redirecting limited journalistic resources to cover platform bullshit that means nothing and will result in nothing meaningful. All you’ve done is made a little money wasting people’s time.

The speed at which some publishers insist these “articles” are posted combined with a lack of constraints in airtime or physical paper means the loudest people know they can draw attention by posting deranged nonsense. All those people who got into journalism because they thought they could make a difference are instead cajoled into adding something resembling substance to forty-four tweeted words from the fingers of a dipshit.

Daniel Jalkut, last month:

Which leads me to my somewhat far-fetched prediction for WWDC: Apple will talk about AI, but they won’t once utter the letters “AI”. They will allude to a major new initiative, under way for years within the company. The benefits of this project will make it obvious that it is meant to serve as an answer to comparable efforts being made by OpenAI, Microsoft, Google, and Facebook. During the crescendo to announcing its name, the letters “A” and “I” will be on all of our lips, and then they’ll drop the proverbial mic: “We’re calling it Apple Intelligence.” Get it?

Apple:

Apple today introduced Apple Intelligence, the personal intelligence system for iPhone, iPad, and Mac that combines the power of generative models with personal context to deliver intelligence that’s incredibly useful and relevant. Apple Intelligence is deeply integrated into iOS 18, iPadOS 18, and macOS Sequoia. It harnesses the power of Apple silicon to understand and create language and images, take action across apps, and draw from personal context to simplify and accelerate everyday tasks. With Private Cloud Compute, Apple sets a new standard for privacy in AI, with the ability to flex and scale computational capacity between on-device processing and larger, server-based models that run on dedicated Apple silicon servers.

To Apple’s credit, the letters “A.I.” were only enunciated a handful of times during its main presentation today, far less often than I had expected. Mind you, in sixty-odd places, “A.I.” was instead referred to by the branded “Apple Intelligence” moniker which is also “A.I.” in its own way. I want half-right points.

There are several concerns with features like these, and Apple answered two of them today: how it was trained, and the privacy and security of user data. The former was not explained during today’s presentation, nor in its marketing materials and developer documentation. But it was revealed by John Giannandrea, senior vice president of Machine Learning and A.I. Strategy, in an afternoon question-and-answer session hosted by Justine Ezarik, as live-blogged by Nilay Patel at the Verge:1

What have these models actually been trained on? Giannandrea says “we start with the investment we have in web search” and start with data from the public web. Publishers can opt out of that. They also license a wide amount of data, including news archives, books, and so on. For diffusion models (images) “a large amount of data was actually created by Apple.”

If publishers wish to opt out of Apple’s model training but continue to permit crawling for things like Siri and Spotlight, they should add a disallow rule for Applebot-Extended. Because of Apple’s penchant for secrecy, that usage control was not added until today. That means a site may have been absorbed into training data unless its owners opted out of all Applebot crawling. Hard to decline participating in something you do not even know about.
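Per Apple’s documentation of the new token, a publisher who wants to keep appearing in Siri and Spotlight while declining training use would publish robots.txt rules along these lines. A sketch: Applebot-Extended does not crawl separately, it only governs whether content fetched by Applebot may be used for training:

```
# Keep allowing Applebot for Siri and Spotlight features
User-agent: Applebot
Allow: /

# Decline use of this site's content for training Apple's models
User-agent: Applebot-Extended
Disallow: /
```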

Additionally, in April, Katie Paul and Anna Tong reported for Reuters that Apple struck a licensing agreement with Shutterstock for image training purposes.

Apple is also, unsurprisingly, promoting heavily the privacy and security policies it has in place. It noted some of these attributes in its presentation — including some auditable code and data minimization — and elaborated on Private Cloud Compute on its security blog:

With services that are end-to-end encrypted, such as iMessage, the service operator cannot access the data that transits through the system. One of the key reasons such designs can assure privacy is specifically because they prevent the service from performing computations on user data. Since Private Cloud Compute needs to be able to access the data in the user’s request to allow a large foundation model to fulfill it, complete end-to-end encryption is not an option. Instead, the PCC compute node must have technical enforcement for the privacy of user data during processing, and must be incapable of retaining user data after its duty cycle is complete.

[…]

  • User data is never available to Apple — even to staff with administrative access to the production service or hardware.

Apple can make all the promises it wants, and it appears it does truly want to use generative A.I. in a more responsible way. For example, the images you can make using Image Playground cannot be photorealistic and — at least for those shown so far — are so strange you may avoid using them. Similarly, though I am not entirely sure, it seems plausible the query system is designed to be more private and secure than today’s Siri.

Yet, as I wrote last week, users may not trust any of these promises. Many of these fears are logical: people are concerned about the environment, creative practices, and how their private information is used. But some are more about the feel of it — and that is okay. Even if all the training data were fully licensed and user data is as private and secure as Apple says, there is still an understandable ick factor for some people. The way companies like Apple, Google, and OpenAI have trained their A.I. models on the sum of human creativity represents a huge imbalance of power, and the only way to control Apple’s public data use was revealed yesterday. Many of the controls Apple has in place are policies which can be changed.

Consider how, so far as I can see, there will be no way to know for certain if your Siri query is being processed locally or by Apple’s servers. You do not know that today when using Siri, though you can infer it based on what you are doing and if something does not work when Apple’s Siri service is down. It seems likely that will be the case with this new version, too.

Then there are questions about the ethos of generative intelligence. Apple has long positioned its products as tools which enable people to express themselves creatively. Generative models have been pitched as almost the opposite: now, you do not have to pay for someone’s artistic expertise. You can just tell a computer to write something and it will do so. It may be shallow and unexciting, but at least it was free and near-instantaneous. Apple notably introduced its set of generative services only a month after it embarrassed itself by crushing analogue tools into an iPad. Happily, it seems this first set of generative features is more laundry and less art — making notifications less intrusive, categorizing emails, making Siri not-shit. I hope I can turn off things like automatic email replies.

You will note my speculative tone. That is because Apple’s generative features have not been made available yet, including in developer beta builds of its new operating system. None of us have any idea how useful these features are, nor what limitations they have. All we can see are Apple’s demonstrations and the metrics it has shared. So, we will see how any of this actually pans out. I have been bamboozled by this same corporation making similar promises before.

“May you live in interesting times”, indeed.


  1. The Verge’s live blog does not have per-update permalinks so you will need to load all the messages and find this for yourself. ↥︎

Celine Nguyen:

But this isn’t really about the software. It’s about what software promises us — that it will help us become who we want to be, living the lives we find most meaningful and fulfilling. The idea of research as leisure activity has stayed with me because it seems to describe a kind of intellectual inquiry that comes from idiosyncratic passion and interest. It’s not about the formal credentials. It’s fundamentally about play. It seems to describe a life where it’s just fun to be reading, learning, writing, and collaborating on ideas.

This is a wonderful essay, albeit one which leaves me with a question of how a reader distinguishes between an amateur’s interpretation of what they read, and an expert’s more considered exploration of a topic — something I have wondered about before.

The amateur or non-professional has their place, of course; I am staking mine on this very website. The expert may not always be correct. But adjudicating the information from each is not a realistic assignment for a layperson. Consider the vast genre of multi-hour YouTube essays, or even short but seemingly authoritative TikTok digests of current events. We are ingesting more information than ever before with fewer gatekeepers — for good and otherwise.

Want to experience twice as fast load times in Safari on your iPhone, iPad, and Mac?

Then download Magic Lasso Adblock — the ad blocker designed for you. It’s easy to set up, blocks all ads, and doubles the speed at which Safari loads.

Screenshot of Magic Lasso Adblock

Magic Lasso Adblock is an efficient and high performance ad blocker for your iPhone, iPad, and Mac. It simply and easily blocks all intrusive ads, trackers and annoyances in Safari. Just enable it to browse in bliss.

By cutting down on ads and trackers, common news websites load 2x faster and use less data.

Over 300,000 users rely on Magic Lasso Adblock to:

  • Improve their privacy and security by removing ad trackers

  • Block annoying cookie notices and privacy prompts

  • Double battery life during heavy web browsing

  • Lower data usage when on the go

And unlike some other ad blockers, Magic Lasso Adblock respects your privacy, doesn’t accept payment from advertisers, and is 100% supported by its community of users.

With over 5,000 five-star reviews, it’s simply the best ad blocker for your iPhone, iPad and Mac.

Download today via the Magic Lasso website.

My thanks to Magic Lasso Adblock for sponsoring Pixel Envy this week.

Canadian Prime Minister Justin Trudeau appeared on the New York Times’ “Hard Fork” podcast for a discussion about artificial intelligence, election security, TikTok, and more.

I have to agree with Aaron Vegh:

[…] I loved his messaging on Canada’s place in the world, which is pragmatic and optimistic. He sees his job as ambassador to the world, and he plays the role well.

I just want to pull some choice quotes from the episode that highlight what I enjoyed about Trudeau’s position on technology. He’s not merely well-briefed; he clearly takes an interest in the technology, and has a canny instinct for its implications in society.

I understand Trudeau’s appearance serves as much to promote his government’s efforts in A.I. as it does to communicate any real policy positions — take a sip every time Trudeau mentions how we “need to have a conversation” about something. But I also think co-hosts Kevin Roose and Casey Newton were able to get a real sense of how the Prime Minister thinks about A.I. and Canada’s place in the global tech industry.

If you had just been looking at the headlines from major research organizations, you would see a lack of confidence from the public in big business, technology companies included. For years, poll after poll from around the world has found high levels of distrust in their influence, handling of private data, and new developments.

If these corporations were at all worried about this, they are not much showing it in their products — particularly the A.I. stuff they have been shipping. There has been little attempt at abating last year’s trust crisis. Google decided to launch overconfident summaries for a variety of search queries. Far from helping to sift through all that has ever been published on the web to mash together a representative summary, it was instead an embarrassing mess that made the company look ill prepared for the concept of satire. Microsoft announced a product which will record and interpret everything you do and see on your computer, and framed it as a good thing.

Can any of them see how this looks? If not — if they really are that unaware — why should we turn to them to fill gaps and needs in society? I certainly would not wish to indulge businesses which see themselves as entirely separate from the world.

It is hard to imagine they do not, though. Sundar Pichai, in an interview with Nilay Patel, recognised there were circumstances in which an A.I. summary would be inappropriate, and cautioned that the company still considers it a work in progress. Yet Google still turned it on by default in the U.S. with plans to expand worldwide this year.

Microsoft has responded to criticism by promising Recall will now be a feature users must opt into, rather than something they must turn off after updating Windows. The company also says there are more security protections for Recall data than originally promised but, based on its track record, maybe do not get too excited yet.

These product introductions all look like hubris. Arrogance, really — recognition of the significant power these corporations wield and the lack of competition they face. Google can poison its search engine because where else are most people going to go? How many people would turn off Recall, something which requires foreknowledge of its existence, under Microsoft’s original rollout strategy?

It is more or less an admission they are all comfortable gambling with their customers’ trust to further the perception they are at the forefront of the new hotness.

None of this is a judgement on the usefulness of these features or their social impact. I remain perplexed by the combination of a crisis of trust in new technologies, and the unwillingness of the companies responsible to engage with the public. There seems to be little attempt at persuasion. Instead, we are told to get on board because this rocket ship is taking off with or without us. Concerned? Too bad: the rocket ship is shaped like a giant middle finger.

What I hope we see Monday from Apple — a company which has portrayed itself as more careful and practical than many of its contemporaries — is a recognition of how this feels from outside the industry. Expect “A.I.” to be repeated in the presentation until you are sick of those two letters; investors are going to eat it up. When normal people update their phones in September, though, they should not feel like they are being bullied into accepting our A.I. future.

People need to be given time to adjust and learn. If the polls are representative, very few people trust giant corporations to get this right — understandably — yet these tech companies seem to believe we are as enthusiastic about every change they make as they are. Sorry, we are not, no matter how big a smile a company representative is wearing when they talk about it. Investors may not be patient but many of the rest of us need time.

Joanna Stern, Wall Street Journal:

Porn, violent images, illicit drugs. I could see it all by typing a special string of characters into the Safari browser’s address bar. The parental controls I had set via Apple’s Screen Time? Useless.

Security researchers reported this particular software bug to Apple multiple times over the past three years with no luck. After I contacted Apple about the problem, the company said it would release a fix in the next software update. The bug is a bad one, allowing users to easily circumvent web restrictions, although it doesn’t appear to have been well-known or widely exploited.

It seems lots of parents are frustrated by Screen Time. It is not reliable software but, for privacy reasons, it is hard for third parties to differentiate themselves, as their apps rely on the same framework.

Stern:

  • Screen usage chart. Want to see your child’s screen usage for the day? The chart is often inaccurate or just blank.

I find this chart is always wildly disconnected from actual usage figures for my own devices. My iMac recently reported a week straight of 24-hour screen-on time per day, including through a weekend when I was out of town, because of a web browser tab I left open in the background.

One could reasonably argue nobody should entirely depend on software to determine how devices are used by themselves or their children, but I do not think many people realistically do. It is part of a combination of factors. Screen Time should perform the baseline functions it promises. It sucks how common problems are basically ignored until Stern writes about them.

Howard Oakley:

Prior to Mac OS X, Adobe Acrobat, both in its free viewer form and a paid-for Pro version, were the de facto standard for reading, printing and working with PDF documents on the Mac. The Preview app had originated in NeXTSTEP in 1989 as its image and PDF viewer, and was brought across to early versions of Mac OS X, where it has remained ever since.

The slow decline of Preview — and Mac PDF rendering in general — since MacOS Sierra is one of the more heartbreaking products of Apple’s annual software churn cycle. To be entirely fair, many of the worst bugs have been fixed, but some remain: sometimes, highlights and notes stop working; search is a mess; copying text is unreliable.

Unfortunately, the apps which render PDF files the most predictably and consistently are Adobe Acrobat and Reader. Both became hideous Chromium-based apps at some point and, so, are gigantic packages which behave nothing like Mac software. It is all pretty disappointing.

Update: A Hacker News commenter rightly pointed out that Acrobat and Reader are not truly Electron apps, and are instead Chromium-based apps. That is to say both are generic-brand shitty instead of the name-brand stuff.