Month: January 2024

David Heinemeier Hansson:

But unfortunately there is no rule of law with the app stores, except that of the jungle, and Apple is the 800 lbs gorilla, ruling as it sees fit. So now HEY is back on trial in their kangaroo court. This time with our new calendar feature, HEY Calendar, which we dared make a separate app in service of users.

After spending 19 days to review our submission, causing us to miss a long-planned January 2nd launch date, Apple rejected our stand-alone free companion app “because it doesn’t do anything”. That is because users are required to login with an existing account to use the functionality.

This feels familiar because the exact same thing happened just before WWDC 2020 with the Hey email client: Apple rejected it because it showed only a login screen and did not permit users to register within the app. This is common among many types of applications available in the App Store but, apparently, this was not allowed for the specific category into which Hey fit.

After some bad press, Apple added a new App Store rule permitting free client apps for paid web services, “(eg. VOIP, Cloud Storage, Email Services, Web Hosting)”.

Michael Tsai:

The plain reading of this is that the items in parentheses are examples, not an exhaustive list.

That is the only logical reading. After all, what rule would permit a free frontend for a paid email service, but not for a calendar? Alas, the Hey Calendar client was rejected by Apple for — the company says — the same reason as the email client, even though the App Store guidelines now contain a specific exemption for clients like Hey’s email app.

Anyway, Hansson says a new build of Hey Calendar has been submitted purely to address the complaint that it does not do anything unless someone logs in; the thing the app will now do is show anniversaries of notable dates in Apple’s history.

Stephen Hackett, creator of the Apple History Calendar:

For each of my three Kickstarters, I’ve included digital versions of the highlighted dates for people to import into their calendar apps.

Coincidences happen, right? It is not like Hackett owns these dates and, according to Hackett, Hey’s data appears to be entirely its own rather than a duplicate of the Apple History Calendar. But this is Hansson, who reverted to type to make it clear that, yes, this was a spiritual ripoff:

This is essentially a digital version of the 2024 Apple History Calendar that raised over $40,000 on Kickstarter. Apple has a rich history that lots of people want to relive, and we’re giving them that inside the beautiful HEY Calendar app. For free!

What a dick.

The primary story remains Apple’s unpredictable policing of the App Store, capriciously rejecting apps from even well-known developers. But the secondary narrative here is one of bullies: Apple, yes, but also Hansson. It should have been easy for both Apple and Hansson to come out of yet another dumb App Review move looking good, but neither chose that route.

David McCabe and Tripp Mickle, New York Times:

The Justice Department is in the late stages of an investigation into Apple and could file a sweeping antitrust case taking aim at the company’s strategies to protect the dominance of the iPhone as soon as the first half of this year, said three people with knowledge of the matter.

The agency is focused on how Apple has used its control over its hardware and software to make it more difficult for consumers to ditch the company’s devices, as well as for rivals to compete, said the people, who spoke anonymously because the investigation was active.

I would not trust commentary before details are made public — though I imagine those specifics are foreshadowed beginning on page 330 of the Investigation of Competition in Digital Markets report (PDF) of 2020 — but there is one thing in this Times story I feel compelled to highlight:

Apple’s new privacy tool, App Tracking Transparency, which allows iPhone users to explicitly choose whether an app can track them, drew scrutiny because of its curtailing of user data collection by advertisers. Advertising companies have said that the tool is anticompetitive.

This is why a baseline level of privacy expectations ought to be regulated in law, not by individual companies. You could quibble with the wisdom of potentially investigating Apple’s offering of privacy protections — perhaps you believe it should be able to compete on those grounds — but, as a major platform operator, it undeniably has leverage which could be construed as illegally anticompetitive.

Will Oremus and Elahe Izadi, Washington Post:

AI systems are typically “trained” on gargantuan data sets that include vast amounts of published material, much of it copyrighted. Through this training, they come to recognize patterns in the arrangement of words and pixels, which they can then draw on to assemble plausible prose and images in response to just about any prompt.

Some AI enthusiasts view this process as a form of learning, not unlike an art student devouring books on Monet or a news junkie reading the Times cover-to-cover to develop their own expertise. But plaintiffs see a more quotidian process at work beneath these models’ hood: It’s a form of copying, and unauthorized copying at that.

David Karpf, in an opinion piece for Foreign Policy:

The story that I often hear from AI evangelists is that technologies such as ChatGPT are here, and they are inevitable. You can’t put this genie back in the bottle. If outdated copyright laws are at odds with the scraping behavior of large language models, then our copyright law will surely need to bend as a result.

And to them I can only say: Remember the Ghost of Napster. We do not live in the future that seemed certain during the Napster era. We need not live in the future that seems certain to AI evangelists today. […]

Karpf’s comparison to Napster is apt, but the lesson drawn from it seems wrong or, at least, incomplete. To avoid quoting paragraph after paragraph, here is my short summary: after Napster’s launch, many pundits assumed it would effectively end copyright, but they were wrong: after the RIAA made itself look like a jerk by filing a series of lawsuits, iTunes and Spotify gave people a copyright-respecting way to get music online while artists got a little bit of income. Karpf writes correctly that “[c]opyright law did not bend to accommodate new technologies”, but laments — in the same paragraph — how this “new status quo hasn’t been great for musicians or artists”, and compares this to the liberal use of copyrighted works as generative training data. The failure of music piracy to establish a post-copyright legal business model is, Karpf says, an indication that this iteration of generative “A.I.” products could also fail commercially by falling victim to the same copyright roadblock.

This feels so close, but it is not the full story. Streaming music services like Spotify show that compliance with the law is not necessarily lucrative for the creators of a work; what matters most is the effect forces like Napster have had on changing our perceptions. Karpf quotes John Perry Barlow, from a 2000 Wired article, to support his thesis that many viewed copyright as good as dead in the wake of Napster. But I do not think Karpf fully acknowledged what that means; Barlow:

No law can be successfully imposed on a huge population that does not morally support it and possesses easy means for its invisible evasion.

Copyright itself may not have disappeared — quite the opposite, thanks to term extensions passed in Canada and the E.U. — but one cannot deny its effects have been blunted. Piracy was popular before the internet, so of course it did not stop when Spotify launched; downloading of movies and television shows has been increasing. Copyright violations are so common among normal people that YouTube automatically identifies background music in uploads and allows rights-holders to monetize it.

All of this is to say there are options for large language models and machine learning which do not require a wholesale rejection of copyright. That is what the legacy of Napster suggests. There may be, as Karpf intimates, a licensing scheme which would compensate those who own the intellectual property — though, it is worth pointing out, these companies are negotiating with publishers, not writers. Such a scheme would not necessarily be the death of ChatGPT. One significant difference between Napster and generative “A.I.” is that the latter is not an underground effort: machine learning is a deep-pocketed push by powerful technology companies, and they are determined to make it work.

A grey market for models based on copyrighted works is a plausible parallel. It may not be monetized in the same way as a commercial model, but it would make sense for people to build such a thing.

Thomas Germain, Gizmodo:

Facebook recently rolled out a new “Link History” setting that creates a special repository of all the links you click on in the Facebook mobile app. Users can opt-out, but Link History is turned on by default, and the data is used for targeted ads. As lawmakers introduce tech regulations and Apple and Google beef up privacy restrictions, Meta is doubling down and searching for new ways to preserve its data harvesting empire.

This is a confusing feature and an even more confusing story. It sounds like a new vector for data harvesting but, as Germain writes a few paragraphs later, Facebook has long tracked what users do across the web — when they click on a link from Facebook, and also when they visit a webpage anywhere containing Facebook’s tracking tools. Meta tracks your activity across the web because users nominally agreed to it in a legal contract they signed but did not read, and because the administrators of many websites, big and small, want to advertise on Meta’s platforms and have been deputized by Meta to help build audience profiles. This is no longer newsworthy. It sucks.

Also, Link History is not brand new, as far as I can tell. The Internet Archive saved a copy of the documentation for this feature in September. (Due to some quirk in how Facebook serves these pages and how the Archive saves them, the snapshot will appear blank. However, if you view the HTML source, you will see it is the same page with the same text.)

Finally, it is not clear to me that turning this feature off delivers the privacy protection it may seem to. The documentation says that, after toggling it off, Facebook “won’t save your link history or use it to improve your ads across Meta technologies”, but that does not necessarily mean the pages you visit will not inform which ads you see if you have not also changed your off-Meta activity settings. The kindest interpretation of such granular and distinct settings is that they allow people to make more specific choices. The realistic explanation is that it is all very confusing, and most people will just stick with the defaults anyway.

Max Tani, Semafor:

The board of the startup news organization The Messenger weighed shutting the publication down at a meeting on Friday, after learning that the company is on track to run out of cash at the end of January.

The New York Times earlier reported Wednesday that The Messenger, launched last May as a politically centrist, wide-ranging bid for big web traffic and advertising dollars, is laying off nearly two dozen staffers out of a total of around 300.

Looks like I can delete the reminder I set myself for November to check if this thing is still up. I feel bad for the reporters and employees who thought they would be joining a well-funded publication, only to find a boss whose big idea was for it to become one of the most popular news websites in the U.S. within a year by using ancient growth tactics, and who would pay for everything with crappy display ads and chum boxes.

Update: As of January 3, a Messenger spokesperson told Tani “the notion of us discussing closure is beyond absurd” since they had, they said, just raised more money. As of January 31, The Messenger has been shut down.

When I was much younger, I assumed people who were optimistic must have misplaced confidence. How anyone could see a future so bright was a complete mystery to me, I reasoned, when what we are exposed to is a series of mistakes followed by attempts at correction from public officials, corporate executives, and others. That is not conducive to building hope — until I spotted the optimistic part: it lives in the efforts to correct those problems and, ideally, to prevent the same things from happening again.

If you measure your level of optimism by how much course-correction has been working, then 2023 was a pretty hopeful year. In the span of about a decade, a handful of U.S. technology firms have solidified their place among the biggest and most powerful corporations in the world, so nobody should be surprised by a parallel increase in pushback for their breaches of public trust. New regulations and court decisions are part of a democratic process which is giving more structure to the ways in which high technology industries are able to affect our lives. Consider:

That is a lot of change in one year, and not all of it has been good. The Canadian government went all-in on the Online News Act, which became a compromised disaster; there are plenty of questions about the specific ways the DMA and DSA will be enforced; Montana legislators tried to ban TikTok.

It is also true and should go without saying that technology companies have done plenty of interesting and exciting things in the past year; they are not cartoon villains in permanent opposition to the hero regulators. But regulators are also not evil. New policies and legal decisions which limit the technology industry — like those above — are not always written by doddering out-of-touch bureaucrats and, just as importantly, businesses are not often trying to be malevolent. For example, Apple has arguably good reasons for software validation of repairs; it may not be intended to prevent users from easily swapping parts, but that is the effect its decision has in the real world. What matters most to users is not why a decision was made but how it is experienced. Regulators should anticipate problems before they arise and correct course when new ones show up.

This back-and-forth is something I think will ultimately prove beneficial, though it will not happen in a straight line. It has encouraged a more proactive dialogue about limiting known negative consequences in nascent technologies, like avoiding gender and racial discrimination in generative models, and about building new social environments with less concentrated power. Many in the tech industry love to be the disruptor; now, the biggest among them are being disrupted, and it is making things weird and exciting.

These changes do not necessarily need to come from the actions of regulatory bodies. Businesses are able to make things more equitable on their own, should they so choose. They can be more restrictive about what is permitted on their platforms. They can empower trust and safety teams to assess how their products and services are being used in the real world, and adjust them to make things better.

Mike Masnick, Techdirt:

Let’s celebrate actual tech optimism in the belief that through innovation we can actually seek to minimize the downsides and risks, rather than ignore them. That we can create wonderful new things in a manner that doesn’t lead many in the world to fear their impact, but to celebrate the benefits they bring. The enemies of techno optimism are not things like “trust and safety,” but rather the naive view that if we ignore trust and safety, the world will magically work out just fine.

There are those who believe “the arc of the universe […] bends toward justice” is a law which will inevitably hold regardless of our actions, but it is more realistic to view it as a call to action: people need to bend that arc in the right direction. There are many who believe corporations can generally regulate themselves on these kinds of issues, and I do too — to an extent. But I also believe the conditions under which corporations are able to operate are an ongoing negotiation with the public. In a democracy, we should feel like regulators are operating on our behalf, and much of the policy and legal progress made last year certainly feels that way. This year can be more of the same if we want it to be. We do not need to wait for Meta or TikTok to get better at privacy on their own terms, for example. We can just pass laws.

As I wrote at the outset, the way I choose to be optimistic is to look at all of the things being done to correct imbalances and repair injustices. Some of those corrections are being made by businesses big and small; many of them have advertising and marketing budgets celebrating their successes to the point where it is almost unavoidable. But I also look at the improvements made by those working on behalf of the public, like the list above. The main problem I have with most of them is that they have been developed on a case-by-case basis which, while setting precedent, is a fragile process open to frequent changes.

That is true, too, of self-initiated changes. Take Apple’s self-repair offerings, which it seems to have introduced in response to years of legislative pressure. It has made parts, tools, and guides available in the United States and, in a more limited capacity, across the E.U., but not elsewhere. Information and kits are available not from Apple’s own website, but from a janky-looking third party. Apple can stop making this stuff available at any time in areas where it is not legally obligated to provide these resources, which is another reason why it sucks for parts to require software activation. In 2023, Apple made its configuration tools more accessible, but only in regions where its self-service repair program is offered.

People ought to be able to have expectations — for repairs, privacy, security, product reliability, and more. The technology industry today is so far removed from its hackers-in-a-garage lore. Its biggest players are among the most powerful businesses in the world, and should be regulated in that context. That does not necessarily mean a whole bunch of new rules and bureaucratic micromanagement, but we ought to advocate for structures which balance the scales in favour of the public good.

If there is one technology story we will remember from 2023, it is undeniably the near-vertical growth trajectory of generative “artificial intelligence” products. It is everywhere, and it is being used by normal people globally. Yet it is, for all intents and purposes, a nascent sector, and that makes this a great time to set some standards for its responsible development and, more importantly, its use. Nobody is going to respond to this perfectly — not regulators, and not the companies building these tools. But they can work together to set expectations and standards for known and foreseeable problems. It seems like that is what is happening in the E.U. and the United States.

That is how I am optimistic about technology now.