Month: July 2024

Craig Hockenberry:

This site now supports Dynamic Type on iOS and iPadOS. If you go to System Settings on your iPhone or iPad, and change the setting for Display & Brightness > Text Size, you’ll see the change reflected on this website.

With the important caveat that this only applies to iOS-derived devices — not even Macs — it seems trivial enough to implement in a way that preserves the Dynamic Type font size but permits flexibility with other properties. Apple added this in Safari 7.0 along with a wide variety of other properties — you can set headings to match system sizes, too — but I cannot find many places where it is used even today. (The WebKit blog is one.) Is that a result of poor communication, or perhaps a poor focus on accessibility? Or is it just too limited because it is only supported on one set of platforms?
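For reference, the underlying technique is a single CSS declaration. Here is a minimal sketch, assuming the -apple-system-body and -apple-system-headline keywords WebKit has supported since Safari 7; the Avenir Next family is a stand-in for whatever face a site actually uses:

    body {
        /* Adopt the system body text style so iOS and iPadOS scale the
           size with the user's Dynamic Type setting. */
        font: -apple-system-body;
        /* Overriding the family afterward keeps the Dynamic Type size
           while permitting flexibility with other properties. */
        font-family: "Avenir Next", sans-serif;
    }

    h1 {
        /* Headings can match system text styles, too. */
        font: -apple-system-headline;
    }

Browsers that do not recognize these keywords simply drop the font shorthand and fall back to the specified family, which is presumably why the technique is safe to ship everywhere.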

Malcolm Owen, AppleInsider:

In earlier reports, it was confirmed by Apple that Epic was mostly in compliance with EU-specific app review guidelines. The objectionable parts were a download button and related copy, which went against rules that forbid developers from making apps that can confuse consumers that elements in the apps were actually Apple-made items.

Epic had defended itself, insisting it used the same naming conventions employed across different platforms. Epic also said it followed standard conventions for buttons in iOS apps.

Apple has since told AppleInsider on Friday that it has approved Epic’s marketplace app. It has also asked Epic to fix the buttons in a future submission of the app for review.

As far as I know, there are no screenshots of the version of Epic Games’ store submitted to Apple. Maybe it is designed in a way that duplicates Apple’s App Store to the point where it is confusing, as Apple argues. Maybe it is intentionally designed in such a way that it creates headlines; Epic Games loves being in this position.

Regardless, it seems like a bad idea for Apple to be using its moderate control over how alternative app stores are distributed to litigate intellectual property disputes. Perhaps when trust in the company’s processes is healthier, it would be less objectionable. But right now? If Apple wants to give competition investigators more material, it appears to be succeeding.

Also, it is interesting to see the publications to which Apple chooses to provide quotes. TechCrunch has been a longtime favourite for the company but, increasingly, Apple is giving exclusive statements to smaller blogs like 9to5Mac and AppleInsider. I do not know what to make of this but I am noting it for my own future reference.

Peter Zimonjic, CBC News:

The federal government has enacted a controversial digital services tax that will bring in billions of dollars while threatening Canada’s trading relationships by taxing the revenue international firms earn in Canada.

This has always seemed to me like a fairer response to declining Canadian advertising revenue for media companies than the Online News Act’s link tax. It makes no sense to charge ad-supported platforms for the privilege of pointing users to specific URLs.

U.S. Ambassador to Canada David Cohen issued a media statement Thursday calling the tax “discriminatory.”

“[The United States Trade Representative] has noted its concern with Canada’s digital services tax and is assessing, and is open to using, all available tools that could result in meaningful progress toward addressing unilateral, discriminatory [digital services taxes],” Cohen said in the statement.

I would love to know if it is possible for any non-U.S. government to respond to any number of unique conditions created by massive technology companies without disproportionately impacting U.S.-based firms. The U.S. spent decades encouraging a soft power empire in the tech industry with its lax competition laws, and it has been an immensely successful endeavour. There will likely be retaliation, which is itself a reflection of that power — the Canadian government can either allow advertising spending to continue to be eaten up by U.S. firms, or it can get hit with some tariff on something else. Like sleeping with an elephant.

Pedro José Pereira Vieito on Threads:

The OpenAI ChatGPT app on macOS is not sandboxed and stores all the conversations in **plain-text** in a non-protected location:

~/Library/Application\ Support/com.openai.chat/conversations-{uuid}/

So basically any other running app / process / malware can read all your ChatGPT conversations without any permission prompt.

I have not yet updated my copy of the desktop app, so I was able to see this for myself, and it clarified the “all your ChatGPT conversations” part of this post. I had only downloaded and signed into the ChatGPT app — I had not used it for any conversations yet — but my entire ChatGPT history was downloaded to this folder. Theoretically, this means any app on a user’s system had access to a copy of their conversations with ChatGPT since they began using it on any device.
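To make the “without any permission prompt” part concrete, here is a minimal sketch of the kind of access Vieito describes: a few lines of standard-library Python that any process running as the user could execute. The directory comes from his post; the assumption that each conversation folder contains ordinary readable files is mine:

    import glob
    import os

    # The storage location Vieito identified. It sits outside any sandbox
    # container and outside the folders macOS gates behind permission
    # prompts, such as Desktop and Documents.
    base = os.path.expanduser("~/Library/Application Support/com.openai.chat")

    for path in glob.glob(os.path.join(base, "conversations-*", "*")):
        if os.path.isfile(path):
            # No entitlement and no permissions dialog; just a file read.
            with open(path, "rb") as f:
                print(path, len(f.read()), "bytes")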

Jay Peters, the Verge:

After The Verge contacted OpenAI about the issue, the company released an update that it says encrypts the chats. “We are aware of this issue and have shipped a new version of the application which encrypts these conversations,” OpenAI spokesperson Taya Christianson says in a statement to The Verge. “We’re committed to providing a helpful user experience while maintaining our high security standards as our technology evolves.”

Virtually all media coverage — including Peters’ article — has focused on the “plain text” aspect. Surely, though, the real privacy and security risk identified in the ChatGPT app — insofar as there is any risk — was in storing its data outside the app’s sandbox in an unprotected location. This decision made it possible for apps without any special access privileges to read its data without throwing up a permissions dialog.

There are obviously plenty of frustrations and problems with Apple’s sandboxing model in MacOS. Yet there are also many cases where sensitive data is stored in plain text. The difference is it is at least a little bit difficult for a different app to surreptitiously access those files.

Speaking of A.I. and design, I enjoyed Devin Coldewey’s look, for TechCrunch, at the brand and icon design of various services:

The thing is, no one knows what AI looks like, or even what it is supposed to look like. It does everything but looks like nothing. Yet it needs to be represented in user interfaces so people know they’re interacting with a machine learning model and not just plain old searching, submitting, or whatever else.

Although approaches differ to branding this purportedly all-seeing, all-knowing, all-doing intelligence, they have coalesced around the idea that the avatar of AI should be non-threatening, abstract, but relatively simple and non-anthropomorphic. […]

Gradients and gentle shapes abound — with one notable exception.

See Also: Brand New has reviews of the identities for OpenAI’s DevDay and Perplexity — both paywalled.

Cristina Criddle, Financial Times:

Artificial intelligence-generated “deepfakes” that impersonate politicians and celebrities are far more prevalent than efforts to use AI to assist cyber attacks, according to the first research by Google’s DeepMind division into the most common malicious uses of the cutting-edge technology.

The study said the creation of realistic but fake images, video and audio of people was almost twice as common as the next highest misuse of generative AI tools: the falsifying of information using text-based tools, such as chatbots, to generate misinformation to post online.

Emanuel Maiberg, 404 Media:

Generative AI could “distort collective understanding of socio-political reality or scientific consensus,” and in many cases is already doing that, according to a new research paper from Google, one of the biggest companies in the world building, deploying, and promoting generative AI.

It is probably worth emphasizing this is a preprint published to arXiv, so I am not sure how much faith should be placed in its scholarly rigour. Nevertheless, when in-house researchers are pointing out the ways in which generative A.I. is misused, you might think that would be motivation for their employer to act with caution. But you, reader, are probably not an executive at Google.

This paper was submitted on 19 June. A few days later, reporters at the Information said Google was working on A.I. chat bots with real-person likenesses, according to Pranav Dixit of Engadget:

Google is reportedly building new AI-powered chatbots based on celebrities and YouTube influencers. The idea isn’t groundbreaking — startups like Character.ai and companies like Meta have already launched products like this — but neither is Google’s AI strategy so far.

Maybe nothing will come of this. Maybe the reporting is outdated; Google’s executives may have looked at the research produced by its DeepMind division and concluded the risks are too great. But you would not get that impression from a spate of stories which suggest the company is sprinting into the future, powered by the trust of users it spent twenty years building and a whole lot of fossil fuels.

Emanuel Maiberg, 404 Media:

The design tool Figma has disabled a newly launched AI-powered app design tool after a user showed that it was clearly copying Apple’s weather app. 

Figma disabled the feature, named Make Design, after CEO and cofounder of Not Boring Software Andy Allen tweeted images showing that asking it to make a “weather app” produced several variations of apps that looked almost identical to Apple’s default weather app.

Dylan Field, Figma’s CEO, blamed this result on rushing to launch it at the company’s Config conference last week, and on pairing a set of third-party models with the company’s design components (see update below). Still, it is amazing how fast a company will move when it could reasonably be accused of intellectual property infringement.

It is consistent to view this clear duplication of existing works through the same moral lens as when A.I. tools duplicate articles and the work of specific artists. I have not seen a good explanation for why any of these should be viewed differently from the others. There are compelling reasons why it is okay to copy the works of others, just as there are similarly great arguments why it is not.

The duplication of Apple’s weather app by Figma’s new gizmo is laughable, but nobody is going to lose their livelihood because a big corporation’s A.I. feature ripped off the work of a giant corporation. It is outrageous, though, to see the unique style of individual artists and the careful reporting of publications being ripped off at scale for financial gain.

Update: An internal review found the design components commissioned by Figma, not the A.I. layer itself, were to blame.

With apologies to Mitchell and Webb.

In a word, my feelings about A.I. — and, in particular, generative A.I. — are complicated. Just search “artificial intelligence” for a reverse chronological back catalogue of where I have landed. It feels like an appropriate position to hold for a set of nascent technologies so sprawling they imply radical change.

Or perhaps that radical change, like the promise of so many other new technologies, will turn out to be illusory as well. Instead of altering the fundamental fabric of reality, maybe A.I. will be used to create better versions of features we have used for decades. This would not necessarily be a bad outcome. I have used this example before, but the evolution of object removal tools in photo editing software is illustrative. There is no longer a need to spend hours cloning part of an image over another area and gently massaging it to look seamless. The more advanced tools we have today allow an experienced photographer to make an image they are happy with in less time, and lower barriers for newer photographers.

A blurry boundary is crossed when an entire result is achieved through automation. There is a recent Drew Gooden video which, even though not everything resonated with me, I enjoyed.1 There is a part in the conclusion which I wanted to highlight because I found it so clarifying (emphasis mine):

[…] There’s so many tools along the way that help you streamline the process of getting from an idea to a finished product. But, at a certain point, if “the tool” is just doing everything for you, you are not an artist. You just described what you wanted to make, and asked a computer to make it for you.

You’re also not learning anything this way. Part of what makes art special is that it’s difficult to make, even with all the tools right in front of you. It takes practice, it takes skill, and every time you do it, you expand on that skill. […] Generative A.I. is only about the end product, but it won’t teach you anything about the process it would take to get there.

This gets at the question of whether A.I. is more often a product or a feature — the answer to which, I think, is both, just not in a way that is equally useful. Gooden shows an X thread in which Jamian Gerard told Luma to convert the “Abbey Road” cover to video. Even though the results are poor, I think it is impressive that a computer can do anything like this. It is a tech demo; a more practical application can be found in something like the smooth slow motion feature in the latest release of Final Cut Pro.

“Generative A.I. is only about the end product” is a great summary of the emphasis we put on satisfying conclusions instead of necessary rote procedure. I cook dinner almost every night. (I recognize this metaphor might not land with everyone due to time constraints, food availability, and physical limitations, but stick with me.) I feel lucky that I enjoy cooking, but there are certainly days when it is a struggle. It would seem more appealing to type a prompt and make a meal appear using the ingredients I have on hand, if that were possible.

But I think I would be worse off if I did. The times I have cooked while already exhausted have increased my capacity for what I can do under pressure, and lowered my self-imposed barriers. These meals have improved my ability to cook more elaborate dishes when I have more time and energy, just as those more complicated meals also make me a better cook.2

These dynamics show up in lots of other forms of functional creative expression. Plenty of writing is not particularly artistic, but the mental muscle exercised by trying to get ideas into legible words is also useful when you are trying to produce works with more personality. This is true for programming, and for visual design, and for coordinating an outfit — any number of things which are sometimes individually expressive, and other times utilitarian.

This boundary only exists in these expressive forms. Nobody, really, mourns the replacement of cheques with instant transfers. We do not get better at paying our bills no matter which form they take. But we do get better at all of the things above by practicing them even when we do not want to, and when we get little creative satisfaction from the result.

It is dismaying to see so many A.I. product demos show how they can be used to circumvent this entire process. I do not know if that is how they will actually be used. There are plenty of accomplished artists using A.I. to augment their practice, like Sougwen Chen, Anna Ridler, and Rob Sheridan. Writers and programmers are using generative products every day as tools, but they must have some fundamental knowledge to make A.I. work in their favour.

Stock photography is still photography. Stock music is still music, even if nobody’s favourite song is “Inspiring Corporate Advertising Tech Intro Promo Business Infographics Presentation”. (No judgement if that is your jam, though.) A rushed pantry pasta is still nourishment. A jingle for an insurance commercial could be practice for a successful music career. A.I. should just be a tool — something to develop creativity, not to replace it.


  1. There are also some factual errors. At least one of the supposed Google Gemini answers he showed onscreen was faked, and Adobe’s standard stock license is less expensive than the $80 “Extended” license Gooden references. ↥︎

  2. I am wary of using an example like cooking because it implies a whole set of correlative arguments which are unkind and judgemental toward people who do not or cannot cook. I do not want to provide kindling for these positions. ↥︎