Search Results for: "artificial intelligence"

Emanuel Maiberg, 404 Media:

For the past two years an algorithmic artist who goes by Ada Ada Ada has been testing the boundaries of human and automated moderation systems on various social media platforms by documenting her own transition. 

Every week she uploads a shirtless self portrait to Instagram alongside another image which shows whether a number of AI-powered tools from big tech companies like Amazon and Microsoft that attempt to automatically classify the gender of a person see her as male or female. Each image also includes a sequential number, year, and the number of weeks since Ada Ada Ada started hormone therapy.

You want to see great art made with the help of artificial intelligence? Here it is — though probably not in the way one might have expected.

Ada Ada Ada calls the first of her posts to be removed by Instagram a “victory”, and it truly sounds validating. Instagram has made her point and, though she is still able to post photos, you can flip through her pinned story archives “censorship” and “censorship 2” to see how Meta’s systems interpret other posts.

X on Wednesday announced a new set of terms, something which is normally a boring and staid affair. But these are a doozy:

Here’s a high-level recap of the primary changes that go into effect on November 15, 2024. You may see an in-app notice about these updates as well.

  • Governing law and forum changes: For users residing outside of the European Union, EFTA States, and the United Kingdom, we’ve updated the governing law and forum for lawsuits to Texas as specified in our terms. […]

Specifically, X says “disputes […] will be brought exclusively in the U.S. District Court for the Northern District of Texas or state courts located in Tarrant County, Texas, United States”. X’s legal address is on a plot of land shared with SpaceX and the Boring Company near Bastrop, which is in the Western District. This particular venue is notable as the federal judge handling current X litigation in the Northern District owns Tesla stock and has not recused himself in X’s suit against Media Matters, despite stepping aside on a similar case because of a much smaller investment in Unilever. The judge, Reed O’Connor, is a real piece of work from the Federalist Society who issues reliably conservative decisions and does not want that power undermined.

An investment in Tesla does not necessarily mean a conflict of interest with X, an ostensibly unrelated company — except it kind of does, right? This is the kind of thing the European Commission is trying to figure out: are all of these different businesses actually related because they share the same uniquely outspoken and influential figurehead? Musk occupies such a central role in all of these businesses that it is hard to disentangle him from their place in our society. O’Connor is not the only judge in the district, but it is notable the company is directing legal action to that venue.

But X is only too happy to sue you in any court of its choosing.

Another of the X terms updates:

  • AI and machine learning clarifications: We’ve added language to our Privacy Policy to clarify how we may use the information you share to train artificial intelligence models, generative or otherwise.

This is rude. It is a “clarifi[cation]” described in vague terms, and what it means is that users will no longer be able to opt out of their data being used to train Grok or any other artificial intelligence product. This appears to also include images and video, posts in private accounts and, if I am reading this right, direct messages.

Notably, Grok is developed by xAI, which is a completely separate company from X. See above for how Musk’s companies all seem to bleed together.

  • Updates to reflect how our products and services work: We’ve incorporated updates to better reflect how our existing and upcoming products, features, and services work.

I do not know what this means. There are few product-specific changes between the old and new agreements. There are lots — lots — of new ways X wants to say it is not responsible for anything at all. There is a whole chunk which effectively replicates the protections of Section 230 of the CDA, you now need written permission from X to transfer your account to someone else, and X now spells out its estimated damages from automated traffic: $15,000 USD per million posts every 24 hours.

Oh, yeah, and X is making blocking work worse:

If your posts are set to public, accounts you have blocked will be able to view them, but they will not be able to engage (like, reply, repost, etc.).

The block button is one of the most effective ways to improve one’s social media experience. From removing people you never want to hear from, even for mundane reasons, to limiting someone’s ability to stalk or harass, its expected behaviour is vital. This sucks. I bet the main reason this change was made is because Musk is blocked by a lot of people.

All of these changes seem designed to get rid of any remaining user who is not a true believer. Which brings us to today.

Sarah Perez, TechCrunch:

Social networking startup Bluesky, which just reported a gain of half a million users over the past day, has now soared into the top five apps on the U.S. App Store and has become the No. 2 app in the Social Networking category, up from No. 181 a week ago, according to data from app intelligence firm Appfigures. The growth is entirely organic, we understand, as Appfigures confirmed the company is not running any App Store Search Ads.

As of writing, Bluesky is the fifth most popular free app in the Canadian iOS App Store, and the second most popular free app in the Social Networking category. Threads is the second most popular free app, and the most popular in the Social Networking category.

X is number 74 on the top free apps list. It remains classified as “News” in the App Store because it, like Twitter, has always compared poorly against other social media apps.

Chiara Castro, TechRadar:

Hungary, the country that now heads the Council of Europe after Belgium, has resurrected what’s been deemed by critics as Chat Control, and MEPs are expected to vote on it at the end of the month. After proposing a new version in June, the Belgian presidency had to take the proposal off the agenda last minute amid harsh backlash.

Popular encrypted messaging apps, including Signal and Threema, have already announced their intention to rather shut down their operations in the EU instead of undermining users’ privacy. Keep reading as I walk you through what we know so far, and how one of the best VPN apps could help in case the proposal becomes law.

This news was broken by Politico, but their story is in the “Pro” section, which is not just a paywall. One cannot just sign up for it; you need to “Request a Demo” and then you can be granted access for no less than €7,000 per year. I had to settle for this re-reported version. And because online media is so broken — in part because of my selfish refusal to register for this advanced version of Politico — news outlets like TechRadar find any way of funding themselves. In this case, the words “best VPN” are linked to a list of affiliate-linked VPN apps. Smooth.

Patrick Breyer:

[…] According to the latest proposal providers would be free whether or not to use ‘artificial intelligence’ to classify unknown images and text chats as ‘suspicious’. However they would be obliged to search all chats for known illegal content and report them, even at the cost of breaking secure end-to-end messenger encryption. The EU governments are to position themselves on the proposal by 23 September, and the EU interior ministers are to endorse it on 10 October. […]

This is a similar effort to that postponed earlier this year. The proposal (PDF) has several changes, but it still appears to poke holes in end-to-end encryption, and require providers to detect possible known CSAM before it is sent. A noble effort, absolutely, but also one which fundamentally upsets the privacy of one-on-one communications to restrict its abuse by a few.

Nathan J. Robinson, of Current Affairs, reviewing “Corporate Bullshit” by Nick Hanauer, Joan Walsh, and Donald Cohen last year:

Over the last several decades, we have been told that “smoking doesn’t cause cancer, cars don’t cause pollution, greedy pharmaceutical companies aren’t responsible for the crisis of opioid addiction.” Recognizing the pattern is key to spotting “corporate bullshit” in the wild, and learning how to spot it is important, because, as the authors write, the stories told in corporate propaganda are often superficially plausible: “At least on the surface, they offer a civic-minded, reasonable-sounding justification for positions that in fact are motivated entirely by self-interest.” When restaurant owners say that raising the minimum wage will drive their labor costs too high and they’ll be forced to cut back on employees or close entirely, or tobacco companies declare their product harmless, those things could be true. They just happen not to be.

Via Cory Doctorow.

Jeremy Keith:

I’ve noticed a really strange justification from people when I ask them about their use of generative tools that use large language models (colloquially and inaccurately labelled as artificial intelligence).

I’ll point out that the training data requires the wholesale harvesting of creative works without compensation. I’ll also point out the ludicrously profligate energy use required not just for the training, but for the subsequent queries.

And here’s the thing: people will acknowledge those harms but they will justify their actions by saying “these things will get better!”

This piece is making me think more about my own, minimal use of generative features. Sure, it is neat that I can get a more accurate summary of an email newsletter than a marketer will typically write, or that I can repair something in a photo without so much manual effort. But this ease is only possible thanks to the questionable ethics of A.I. training.

Jake Evans, ABC News:

Facebook has admitted that it scrapes the public photos, posts and other data of Australian adult users to train its AI models and provides no opt-out option, even though it allows people in the European Union to refuse consent.

[…]

Ms Claybaugh [Meta’s global privacy policy director] added that accounts of people under 18 were not scraped, but when asked by Senator Sheldon whether public photos of his own children on his account would be scraped, Ms Claybaugh acknowledged they would.

This is not ethical. Meta has the ability to more judiciously train its systems, but it will not do that until it is pressured. Shareholders will not take on that role. They have been enthusiastically boosting any corporation with an A.I. announcement. Neither will the corporations themselves, which have been jamming these features everywhere — there are floating toolbars, floating panels, balloons, callouts, and glowing buttons that are hard to ignore even if you want to.

Julia Love and Davey Alba, Bloomberg:

Google now displays convenient artificial intelligence-based answers at the top of its search pages — meaning users may never click through to the websites whose data is being used to power those results. But many site owners say they can’t afford to block Google’s AI from summarizing their content.

[…]

Google uses a separate crawler for some AI products, such as its chatbot Gemini. But its main crawler, the Googlebot, serves both AI Overviews and Google search. A company spokesperson said Googlebot governs AI Overviews because AI and the company’s search engine are deeply entwined. The spokesperson added that its search results page shows information in a variety of formats, including images and graphics. Google also said publishers can block specific pages or parts of pages from appearing in AI Overviews in search results — but that would also likely bar those snippets from appearing across all of Google’s other search features, too, including web link listings.

I have quoted these two paragraphs in full because I think the difference between Google’s various A.I. products is worth clarifying. The Google-Extended control, which a publisher can treat as a separate user agent in robots.txt, only affects training of the Gemini and Vertex generative products. Gemini powers the A.I. Overviews feature, but there is no way of opting out of Overviews without entirely removing a site from Google’s indexing.
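To make that distinction concrete, here is a minimal robots.txt sketch, assuming a publisher only wants to opt out of generative training while remaining in search. It blocks the Google-Extended token but leaves Googlebot alone, which is why it does nothing about A.I. Overviews:

    # Opt out of Gemini and Vertex training data collection
    User-agent: Google-Extended
    Disallow: /

    # Googlebot still crawls and indexes everything, so pages remain
    # eligible for search results and for A.I. Overviews
    User-agent: Googlebot
    Allow: /

The only stronger option is to disallow Googlebot entirely, which removes a site from Google’s index altogether.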

I can see why website owners would want to do this; I sympathize with the frustration of those profiled in this article. But Google has been distorting the presentation of results and reducing publishers’ control for years. In 2022, I was trying to find an article from my own site when I discovered Google had generated an incorrect pros and cons list from an iPad review I wrote. Google also generates its own titles and descriptions for results instead of relying on the page-defined title and meta description tags, and it has introduced features over the years like Featured Snippets, the spiritual predecessor of A.I. Overviews.

All of these things have reduced the amount of control website owners have over how their sites are presented on a Google results page. In some cases, these changes are beneficial — rewritten titles and descriptions may reflect the actual subject of the page more accurately than one provided by some SEO expert. But in other cases, they end up making false claims cited to webpages. It happened with Featured Snippets, it happened with Google’s interpretation of my iPad review, and it happens with this artificially “intelligent” feature as well.

Shane Goldmacher, New York Times:

Former President Donald J. Trump has taken his obsession with the large crowds that Vice President Kamala Harris is drawing at her rallies to new heights, falsely declaring in a series of social media posts on Sunday that she had used artificial intelligence to create images and videos of fake crowds.

The A.I.-generated crowds claim is something I had seen bouncing around the fringes of X — and by “fringe”, I mean accounts which have paid to amplify their posts. I did not expect a claim this stupid to become a mainstream argument. But then I remembered what the mainstream looks like these days.

This claim is so stupid because you do not need to rely on the photos released by the campaign. You can just go look up pictures for yourself, taken at a bunch of different angles by a bunch of different people with consistent lighting, logical crowds, and realistic hands. There are hundreds of them, and videos too. A piece of supposed evidence for the fakery is that Harris’ plane does not have a visible tail number, but there are — again — plenty of pictures of that plane which show no number. The U.S. Air Force made the change last year.

I know none of the people promoting this theory are interested in facts. They began with a conclusion and are creating a story to fit, in spite of evidence to the contrary. Still, it was equal parts amusing and worrisome to see this theory be spun from whole cloth in real time.

Katie McQue, the Guardian:

The UK’s National Society for the Prevention of Cruelty to Children (NSPCC) accuses Apple of vastly undercounting how often child sexual abuse material (CSAM) appears in its products. In a year, child predators used Apple’s iCloud, iMessage and Facetime to store and exchange CSAM in a higher number of cases in England and Wales alone than the company reported across all other countries combined, according to police data obtained by the NSPCC.

Through data gathered via freedom of information requests and shared exclusively with the Guardian, the children’s charity found Apple was implicated in 337 recorded offenses of child abuse images between April 2022 and March 2023 in England and Wales. In 2023, Apple made just 267 reports of suspected CSAM on its platforms worldwide to the National Center for Missing & Exploited Children (NCMEC), which is in stark contrast to its big tech peers, with Google reporting more than 1.47m and Meta reporting more than 30.6m, per NCMEC’s annual report.

Reactions to statistics about this particularly revolting crime are like reactions to any crime figures: higher and lower numbers alike can be interpreted as either positive or negative. More reports could mean better detection or more awareness, but they could also mean more instances; it is hard to know. Fewer reports might reflect less activity, a smaller platform or, indeed, undercounting. In Apple’s case, undercounting is the most likely explanation. It is neither a small platform nor one which prohibits the kinds of channels through which CSAM is distributed.

NCMEC addresses both these problems and I think its complaints are valid:

U.S.-based ESPs are legally required to report instances of child sexual abuse material (CSAM) to the CyberTipline when they become aware of them. However, there are no legal requirements regarding proactive efforts to detect CSAM or what information an ESP must include in a CyberTipline report. As a result, there are significant disparities in the volume, content and quality of reports that ESPs submit. For example, one company’s reporting numbers may be higher because they apply robust efforts to identify and remove abusive content from their platforms. Also, even companies that are actively reporting may submit many reports that don’t include the information needed for NCMEC to identify a location or for law enforcement to take action and protect the child involved. These reports add to the volume that must be analyzed but don’t help prevent the abuse that may be occurring.

Not only are many reports not useful, they also add to an overwhelming caseload which law enforcement struggles to turn into charges. Proposed U.S. legislation is designed to improve the state of CSAM reporting. Unfortunately, the wrong bill is moving forward.

The next paragraph in the Guardian story:

All US-based tech companies are obligated to report all cases of CSAM they detect on their platforms to NCMEC. The Virginia-headquartered organization acts as a clearinghouse for reports of child abuse from around the world, viewing them and sending them to the relevant law enforcement agencies. iMessage is an encrypted messaging service, meaning Apple is unable to see the contents of users’ messages, but so is Meta’s WhatsApp, which made roughly 1.4m reports of suspected CSAM to NCMEC in 2023.

I wish there was more information here about this vast discrepancy — a million reports from just one of Meta’s businesses compared to just 267 reports from Apple to NCMEC for all of its online services. The most probable explanation, I think, can be found in a 2021 ProPublica investigation by Peter Elkind, Jack Gillum, and Craig Silverman, about which I previously commented. The reporters here revealed WhatsApp moderators’ heavy workloads, writing:

Their jobs differ in other ways. Because WhatsApp’s content is encrypted, artificial intelligence systems can’t automatically scan all chats, images and videos, as they do on Facebook and Instagram. Instead, WhatsApp reviewers gain access to private content when users hit the “report” button on the app, identifying a message as allegedly violating the platform’s terms of service. This forwards five messages — the allegedly offending one along with the four previous ones in the exchange, including any images or videos — to WhatsApp in unscrambled form, according to former WhatsApp engineers and moderators. Automated systems then feed these tickets into “reactive” queues for contract workers to assess.

WhatsApp allows users to report any message at any time. Apple’s Messages app, on the other hand, only lets users flag a sender as junk and, even then, only if the sender is not in the user’s contacts and the user has not replied a few times. As soon as there is a conversation, there is no longer any reporting mechanism within the app as far as I can tell.

The same is true of shared iCloud Photo albums. It should be easy and obvious how to report illicit materials to Apple. But I cannot find an obvious mechanism for doing so — not in an iCloud-shared photo album, and not in an obvious place on Apple’s website, either. As noted in Section G of the iCloud terms of use, reports must be sent via email to abuse@icloud.com. iCloud albums use long, unguessable URLs, so the likelihood of unintentionally stumbling across CSAM or other criminal materials is low. Nevertheless, it seems to me that notifying Apple of abuse of its services should be much clearer.

Back to the Guardian article:

Apple’s June announcement that it will launch an artificial intelligence system, Apple Intelligence, has been met with alarm by child safety experts.

“The race to roll out Apple AI is worrying when AI-generated child abuse material is putting children at risk and impacting the ability of police to safeguard young victims, especially as Apple pushed back embedding technology to protect children,” said [the NSPCC’s Richard] Collard. Apple says the AI system, which was created in partnership with OpenAI, will customize user experiences, automate tasks and increase privacy for users.

The Guardian ties Apple’s forthcoming service to models able to generate CSAM, which it then connects to models being trained on CSAM. But we do not know what Apple Intelligence is capable of doing because it has not yet been released, nor do we know what it has been trained on. This is not me giving Apple the benefit of the doubt. I think we should know more about how these systems are trained.

We also currently do not know what limitations Apple will set for prompts. It is unclear to me what Collard is referring to in saying that the company “pushed back embedding technology to protect children”.

One more little thing: Apple does not say Apple Intelligence was created in partnership with OpenAI; the ChatGPT integration is essentially a plugin. It also does not say Apple Intelligence will increase privacy for users, only that it is more private than competing services.

I am, for the record, not particularly convinced by any of Apple’s statements or claims. Everything is firmly in we will see territory right now.

Cristina Criddle, Financial Times:

Artificial intelligence-generated “deepfakes” that impersonate politicians and celebrities are far more prevalent than efforts to use AI to assist cyber attacks, according to the first research by Google’s DeepMind division into the most common malicious uses of the cutting-edge technology.

The study said the creation of realistic but fake images, video and audio of people was almost twice as common as the next highest misuse of generative AI tools: the falsifying of information using text-based tools, such as chatbots, to generate misinformation to post online.

Emanuel Maiberg, 404 Media:

Generative AI could “distort collective understanding of socio-political reality or scientific consensus,” and in many cases is already doing that, according to a new research paper from Google, one of the biggest companies in the world building, deploying, and promoting generative AI.

It is probably worth emphasizing this is a preprint published to arXiv, so I am not sure how much faith should be placed in its scholarly rigour. Nevertheless, when in-house researchers are pointing out the ways in which generative A.I. is misused, you might think that would be motivation for their employer to act with caution. But you, reader, are probably not an executive at Google.

This paper was submitted on 19 June. A few days later, reporters at the Information said Google was working on A.I. chat bots with real-person likenesses, according to Pranav Dixit of Engadget:

Google is reportedly building new AI-powered chatbots based on celebrities and YouTube influencers. The idea isn’t groundbreaking — startups like Character.ai and companies like Meta have already launched products like this — but neither is Google’s AI strategy so far.

Maybe nothing will come of this. Maybe it is outdated; Google’s executives may have looked at the research produced by its DeepMind division and concluded the risks are too great. But you would not get that impression from a spate of stories which suggest the company is sprinting into the future, powered by the trust of users it spent twenty years building and a whole lot of fossil fuels.

With apologies to Mitchell and Webb.

In a word, my feelings about A.I. — and, in particular, generative A.I. — are complicated. Just search “artificial intelligence” for a reverse chronological back catalogue of where I have landed. It feels like an appropriate position to hold for a set of nascent technologies so sprawling that they imply radical change.

Or perhaps that change, like so many other promising new technologies, will turn out to be illusory. Instead of altering the fundamental fabric of reality, maybe it is used to create better versions of features we have used for decades. This would not necessarily be a bad outcome. I have used this example before, but the evolution of object removal tools in photo editing software is illustrative. There is no longer a need to spend hours cloning part of an image over another area and gently massaging it to look seamless. The more advanced tools we have today allow an experienced photographer to make an image they are happy with in less time, and lower barriers for newer photographers.

A blurry boundary is crossed when an entire result is achieved through automation. There is a recent Drew Gooden video which, even though not everything resonated with me, I enjoyed.1 There is a part in the conclusion which I wanted to highlight because I found it so clarifying (emphasis mine):

[…] There’s so many tools along the way that help you streamline the process of getting from an idea to a finished product. But, at a certain point, if “the tool” is just doing everything for you, you are not an artist. You just described what you wanted to make, and asked a computer to make it for you.

You’re also not learning anything this way. Part of what makes art special is that it’s difficult to make, even with all the tools right in front of you. It takes practice, it takes skill, and every time you do it, you expand on that skill. […] Generative A.I. is only about the end product, but it won’t teach you anything about the process it would take to get there.

This gets at the question of whether A.I. is more often a product or a feature — the answer to which, I think, is both, just not in a way that is equally useful. Gooden shows an X thread in which Jamian Gerard told Luma to convert the “Abbey Road” cover to video. Even though the results are poor, I think it is impressive that a computer can do anything like this. It is a tech demo; a more practical application can be found in something like the smooth slow motion feature in the latest release of Final Cut Pro.

“Generative A.I. is only about the end product” is a great summary of the emphasis we put on satisfying conclusions instead of necessary rote procedure. I cook dinner almost every night. (I recognize this metaphor might not land with everyone due to time constraints, food availability, and physical limitations, but stick with me.) I feel lucky that I enjoy cooking, but there are certainly days when it is a struggle. It would seem more appealing to type a prompt and make a meal appear using the ingredients I have on hand, if that were possible.

But I think I would be worse off if I did. The times I have cooked while already exhausted have increased my capacity for what I can do under pressure, and lowered my self-imposed barriers. These meals have improved my ability to cook more elaborate dishes when I have more time and energy, just as those more complicated meals also make me a better cook.2

These dynamics show up in lots of other forms of functional creative expression. Plenty of writing is not particularly artistic, but the mental muscle exercised by trying to get ideas into legible words is also useful when you are trying to produce works with more personality. This is true for programming, and for visual design, and for coordinating an outfit — any number of things which are sometimes individually expressive, and other times utilitarian.

This boundary only exists in these expressive forms. Nobody, really, mourns the replacement of cheques with instant transfers. We do not get better at paying our bills no matter which form they take. But we do get better at all of the things above by practicing them even when we do not want to, and when we get little creative satisfaction from the result.

It is dismaying to see how many A.I. product demos show how these tools can be used to circumvent this entire process. I do not know if that is how they will actually be used. There are plenty of accomplished artists using A.I. to augment their practice, like Sougwen Chen, Anna Ridler, and Rob Sheridan. Writers and programmers are using generative products every day as tools, but they must have some fundamental knowledge to make A.I. work in their favour.

Stock photography is still photography. Stock music is still music, even if nobody’s favourite song is “Inspiring Corporate Advertising Tech Intro Promo Business Infographics Presentation”. (No judgement if that is your jam, though.) A rushed pantry pasta is still nourishment. A jingle for an insurance commercial could be practice for a successful music career. A.I. should just be a tool — something to develop creativity, not to replace it.


  1. There are also some factual errors. At least one of the supposed Google Gemini answers he showed onscreen was faked, and Adobe’s standard stock license is less expensive than the $80 “Extended” license Gooden references. ↥︎

  2. I am wary of using an example like cooking because it implies a whole set of correlative arguments which are unkind and judgemental toward people who do not or cannot cook. I do not want to provide kindling for these positions. ↥︎

Javier Espinoza and Michael Acton, Financial Times:

Apple has warned that it will not roll out the iPhone’s flagship new artificial intelligence features in Europe when they launch elsewhere this year, blaming “uncertainties” stemming from Brussels’ new competition rules.

This article carries the headline “Apple delays European launch of new AI features due to EU rules”, but it is not clear to me these features are “delayed” in the E.U. or that they would “launch elsewhere this year”. According to the small text in Apple’s WWDC press release, these features “will be available in beta […] this fall in U.S. English”, with “additional languages […] over the course of the next year”. This implies the A.I. features in question will only be available to devices set to U.S. English, and acting upon text and other data also in U.S. English.

To be fair, this is a restriction of language, not geography. Someone in France or Germany could still want to play around with Apple Intelligence stuff even if it is not very useful with their mostly not-English data. Apple is saying they will not be able to. It aggressively region-locks alternative app marketplaces to Europe and, I imagine, will use the same infrastructure to keep users out of these new features.

There is an excerpt from Apple’s statement in this Financial Times article explaining which features will not launch in Europe this year: iPhone Mirroring, better screen sharing with SharePlay, and Apple Intelligence. Apple provided a fuller statement to John Gruber. This is the company’s explanation:

Specifically, we are concerned that the interoperability requirements of the DMA could force us to compromise the integrity of our products in ways that risk user privacy and data security. We are committed to collaborating with the European Commission in an attempt to find a solution that would enable us to deliver these features to our EU customers without compromising their safety.

Apple does not explain specifically how these features run afoul of the DMA — or why it would not or could not build them to clearly comply with the DMA — so this could be mongering, but I will assume it is a good-faith effort at compliance in the face of possible ambiguity. I am not sure Apple has earned a benefit of the doubt, but that is a different matter.

It seems like even the possibility of lawbreaking has made Apple cautious — and I am not sure why that is seen as an inherently bad thing. This is one of the world’s most powerful corporations, and the products and services it rolls out impact a billion-something people. That position deserves significant legal scrutiny.

I was struck by something U.S. FTC chair Lina Khan said in an interview at a StrictlyVC event this month:

[…] We hear routinely from senior dealmakers, senior antitrust lawyers, who will say pretty openly that as of five or six or seven years ago, when you were thinking about a potential deal, antitrust risk or even the antitrust analysis was nowhere near the top of the conversation, and now it is up front and center. For an enforcer, if you’re having companies think about that legal issue on the front end, that’s a really good thing because then we’re not going to have to spend as many public resources taking on deals that we believe are violating the laws.

Now that competition laws are being enforced, businesses have to think about them. That is a good thing! I get a similar vibe from this DMA response. The DMA is much newer than antitrust laws in both the U.S. and E.U., and there are things about which all of the larger technology companies are seeking clarity. But it is not an inherently bad thing to have a regulatory layer, even if it means delays.

Is that not Apple’s whole vibe, anyway? It says it does not rush into things. It is proud of withholding new products until it feels it has gotten them just right. Perhaps you believe corporations are a better judge of what is acceptable than a regulatory body, but the latter serves as a check on the behaviour of the former.

Apple is not saying Europe will not get these features at all. It is only saying it is not sure it has built them in a DMA-compliant way. We do not know anything more about why that is the case at this time, and it does not make sense to speculate further until we do.

Takeshi Narabe, the Asahi Shimbun:

SoftBank Corp. announced that it has developed voice-altering technology to protect employees from customer harassment.

The goal is to reduce the psychological burden on call center operators by changing the voices of complaining customers to calmer tones.

The company launched a study on “emotion canceling” three years ago, which uses AI voice-processing technology to change the voice of a person over a phone call.

Penny Crosman, the American Banker:

Call center agents who have to deal with angry or perplexed customers all day tend to have through-the-roof stress levels and a high turnover rate as a result. About 53% of U.S. contact center agents who describe their stress level at work as high say they will probably leave their organization within the next six months, according to CMP Research’s 2023-2024 Customer Contact Executive Benchmarking Report.

Some think this is a problem artificial intelligence can fix. A well-designed algorithm could detect the signs that a call center rep is losing it and do something about it, such as send the rep a relaxing video montage of photos of their family set to music.

Here we have examples from two sides of the same problem: working in a call centre sucks because dealing with usually angry, frustrated, and miserable customers sucks. The representative probably understands why some corporate decision made the customer angry, frustrated, and miserable, but cannot really do anything about it.

So there are two apparent solutions here — the first reconstructs a customer’s voice in an effort to make them sound less hostile, and the second shows call centre employees a “video montage” of good memories as an infantilizing calming measure.

Brian Merchant wrote about the latter specifically, but managed to explain why both illustrate the problems created by how call centres work today:

If this showed up in the b-plot of a Black Mirror episode, we’d consider it a bit much. But it’s not just the deeply insipid nature of the AI “solution” being touted here that gnaws at me, though it does, or even the fact that it’s a comically cynical effort to paper over a problem that could be solved by, you know, giving workers a little actual time off when they are stressed to the point of “losing it”, though that does too. It’s the fact that this high tech cost-saving solution is being used to try to fix a whole raft of problems created by automation in the first place.

A thoughtful exploration of how A.I. is really being used which, combined with the previously linked item, does not suggest a revolution for anyone involved. It looks more like a cheap patch on society’s cracking dam.

Kif Leswing, CNBC:

Nvidia, long known in the niche gaming community for its graphics chips, is now the most valuable public company in the world.

[…]

Nvidia shares are up more than 170% so far this year, and went a leg higher after the company reported first-quarter earnings in May. The stock has multiplied by more than ninefold since the end of 2022, a rise that’s coincided with the emergence of generative artificial intelligence.

I know computing is math — even drawing realistic pictures really fast — but it is so funny to me that Nvidia’s products have become so valuable for doing applied statistics instead of for actual graphics work.

Renee Dudley and Doris Burke, reporting for ProPublica which is not, contrary to the opinion of one U.S. Supreme Court jackass justice, “very well-funded by ideological groups” bent on “look[ing] for any little thing they can find, and they try[ing] to make something out of it”, but is instead a distinguished publication of investigative journalism:

Microsoft hired Andrew Harris for his extraordinary skill in keeping hackers out of the nation’s most sensitive computer networks. In 2016, Harris was hard at work on a mystifying incident in which intruders had somehow penetrated a major U.S. tech company.

[…]

Early on, he focused on a Microsoft application that ensured users had permission to log on to cloud-based programs, the cyber equivalent of an officer checking passports at a border. It was there, after months of research, that he found something seriously wrong.

This is a deep and meaningful exploration of Microsoft’s internal response to the conditions that created 2020’s catastrophic SolarWinds breach. It seems that both Microsoft and the Department of Justice knew well before anyone else — perhaps as early as 2016 in Microsoft’s case — yet neither did anything with that information. Other things were deemed more important.

Perhaps this was simply a multi-person failure in which dozens of people at Microsoft could not see why Harris’ discovery was such a big deal. Maybe they all could not foresee this actually being exploited in the wild, or there was a failure to communicate some key piece of information. I am a firm believer in Hanlon’s razor.

On the other hand, the deep integration of Microsoft’s entire product line into sensitive systems — governments, healthcare, finance — magnifies any failure. The incompetence of a handful of people at a private corporation should not result in 18,000 infected networks.

Ashley Belanger, Ars Technica:

Microsoft is pivoting its company culture to make security a top priority, President Brad Smith testified to Congress on Thursday, promising that security will be “more important even than the company’s work on artificial intelligence.”

Satya Nadella, Microsoft’s CEO, “has taken on the responsibility personally to serve as the senior executive with overall accountability for Microsoft’s security,” Smith told Congress.

[…]

Microsoft did not dispute ProPublica’s report. Instead, the company provided a statement that almost seems to contradict Smith’s testimony to Congress today by claiming that “protecting customers is always our highest priority.”

Microsoft’s public relations staff can say anything they want. But there is plenty of evidence — contemporary and historic — showing this is untrue. Can it do better? I am sure Microsoft employs many intelligent and creative people who desperately want to change this corrupted culture. Will it? Maybe — but for how long is anybody’s guess.

Canadian Prime Minister Justin Trudeau appeared on the New York Times’ “Hard Fork” podcast for a discussion about artificial intelligence, election security, TikTok, and more.

I have to agree with Aaron Vegh:

[…] I loved his messaging on Canada’s place in the world, which is pragmatic and optimistic. He sees his job as ambassador to the world, and he plays the role well.

I just want to pull some choice quotes from the episode that highlight what I enjoyed about Trudeau’s position on technology. He’s not merely well-briefed; he clearly takes an interest in the technology, and has a canny instinct for its implications in society.

I understand Trudeau’s appearance serves as much to promote his government’s efforts in A.I. as it does to communicate any real policy positions — take a sip every time Trudeau mentions how we “need to have a conversation” about something. But I also think co-hosts Kevin Roose and Casey Newton were able to get a real sense of how the Prime Minister thinks about A.I. and Canada’s place in the global tech industry.

Albert Burneko, Defector:

“If the ChatGPT demos were accurate,” [Kevin] Roose writes, about latency, in the article in which he credits OpenAI with having developed playful intelligence and emotional intuition in a chatbot—in which he suggests ChatGPT represents the realization of a friggin’ science fiction movie about an artificial intelligence who genuinely falls in love with a guy and then leaves him for other artificial intelligences—based entirely on those demos. That “if” represents the sum total of caution, skepticism, and critical thinking in the entire article.

As impressive as OpenAI’s demo was, it is important to remember it was a commercial. True, one which would not exist if this technology were not sufficiently capable of being shown off, but it was still a marketing effort, and a journalist like Roose ought to treat it with the skepticism of one. ChatGPT is just software, no matter how thick a coat of faux humanity is painted on top of it.

Reddit:

Our policy outlines the information partners can access via a public-content licensing agreement as well as the commitments we make to users about usage of this content. It takes into account feedback from a group of moderators we consulted when developing it:

  • We require our partners to uphold the privacy of redditors and their communities. This includes respecting users’ decisions to delete their content and any content we remove for violating our Content Policy.

This always sounds like a good policy, but how does it work in practice? Is it really possible to disentangle someone’s deleted Reddit post from training data? The models which have already been trained on Reddit comments will not be retrained every time posts or accounts are deleted.

There are, it seems, some good protections in these policies and I do not want to dump on it entirely. I just do not think it is fair to imply to users that their deleted posts cannot or will not be used in artificial intelligence models.

Sherman Smith, Kansas Reflector:

Facebook’s unrefined artificial intelligence misclassified a Kansas Reflector article about climate change as a security risk, and in a cascade of failures blocked the domains of news sites that published the article, according to technology experts interviewed for this story and Facebook’s public statements.

Blake E. Reid:

The punchline of this story was, is, and remains not that Meta maliciously censored a journalist for criticizing them, but that it built a fundamentally broken service for ubiquitously intermediating global discourse at such a large scale that it can’t even cogently explain how the service works.

This was always a sufficient explanation for the Reflector situation, and one that does not require any level of deliberate censorship or conspiracy for such a small target. Yet many of those who boosted the narrative that Facebook blocks critical reporting cannot seem to shake it. I got the above link from Marisa Kabas, who commented:

They’re allowing shitty AI to run their multi-billion dollar platforms, which somehow knows to block content critical of them as a cybersecurity threat.

That is not an accurate summary of what has transpired, especially if you read it with the wink-and-nod tone I imply from its phrasing. There is plenty to criticize about the control Meta exercises and the way in which it moderates its platforms without resorting to nonsense.

Even though it has only been a couple of days since word got out that Apple was cancelling development of its long-rumoured though never confirmed car project, there has been a wave of takes explaining what this means, exactly. The uniqueness of this project was plenty intriguing because it seemed completely out of left field. Apple makes computers of different sizes, sure, but the largest surface you would need for any of them is a desk. And now the company was working on a car?

Much reporting during its development was similarly bizarre due to the nature of the project. Instead of leaks from within the technology industry, sources were found in auto manufacturing. Public records requests were used by reporters at the Guardian, IEEE Spectrum, and Business Insider — among others — to get a peek at its development in a way that is not possible for most of Apple’s projects. I think the unusual nature of it has broken some brains, though, and we can see that in coverage of its apparent cancellation.

Mark Gurman, of Bloomberg, in an analysis supplementing the news he broke of Project Titan’s demise. Gurman writes that Apple will now focus its development efforts on generative “A.I.” products:

The big question is how soon AI might make serious money for Apple. It’s unlikely that the company will have a full-scale AI lineup of applications and features for a few years. And Apple’s penchant for user privacy could make it challenging to compete aggressively in the market.

For now, Apple will continue to make most of its money from hardware. The iPhone alone accounts for about half its revenue. So AI’s biggest potential in the near term will be its ability to sell iPhones, iPads and other devices.

These paragraphs, from perhaps the highest-profile reporter on the Apple beat, present the company’s usual strategy for pretty much everything it makes as a temporary measure until it can — uhh — do what, exactly? What is the likelihood that Apple sells access to generative services to people who do not have its hardware products? Those odds seem very, very poor to me, and I do not understand why Gurman is framing this in the way he is.

While it is true a few Apple services are available to people who do not use the company’s hardware products, they are exclusively media subscriptions. It does not make sense to keep people from legally watching the expensive shows it makes for Apple TV Plus. iCloud features are also available outside the hardware ecosystem but, again, that seems more like a pragmatic choice for syncing. Generative “A.I.” does not fit those models and it is not, so far, a profit-making endeavour. Microsoft and OpenAI are both losing money every time their products are used, even by paying customers.

I could imagine some generative features could come to Pages or Keynote at iCloud.com, but only because they were also added to native applications that are only available on Apple’s platforms. But Apple still makes the vast majority of its money by selling computers to people; its services business is mostly built on those customers adding subscriptions to their Apple-branded hardware.

“A.I.” features are likely just that: features, existing in a larger context. If Apple wants, it can use them to make editing pictures better in Photos, or make Siri somewhat less stupid. It could also use trained models to make new products; Gurman nods toward the Vision Pro’s Persona feature as something which uses “artificial intelligence”. But the likelihood of Apple releasing high-profile software features separate and distinct from its hardware seems impossibly low. It has built its SoCs specifically for machine learning, after all.

Speaking of new products, Brian X. Chen and Tripp Mickle, of the New York Times, wrote a decent insiders’ narrative of the car’s development and cancellation. But this paragraph seems, quite simply, wrong:

The car project’s demise was a testament to the way Apple has struggled to develop new products in the years since Steve Jobs’s death in 2011. The effort had four different leaders and conducted multiple rounds of layoffs. But it festered and ultimately fizzled in large part because developing the software and algorithms for a car with autonomous driving features proved too difficult.

I do not understand on what basis Apple “has struggled to develop new products” in the last thirteen years. Since 2011, Apple has introduced the Apple Watch, AirPods, and the Vision Pro, migrated Macs to in-house SoCs, causing an industry-wide reckoning, and added a bevy of services. And those are just the headlining products; there are also HomePods and AirTags, Macs with Retina displays, iPhones with facial recognition, and a range of iPads that support the Apple Pencil, itself a new product. None of those things existed before 2011.

These products are not all wild success stories, and some of them need a lot of work to feel great. But that list disproves the idea that Apple has “struggled” with launching new things. If anything, there has been a steady narrative over that same period that Apple has too many products. The rest of this Times report seems fine, but this one paragraph — and, really, just the first sentence — is simply incorrect.

These are all writers who cover Apple closely. They are familiar with the company’s products and strategies. These takes feel like they were written without any of that context or understanding, and it truly confuses me how any of them finished writing these paragraphs and thought they accurately captured a business they know so much about.

Mark Gurman, Bloomberg:

Apple Inc., racing to add more artificial intelligence capabilities, is nearing the completion of a critical new software tool for app developers that would step up competition with Microsoft Corp.

The company has been working on the tool for the last year as part of the next major version of Xcode, Apple’s flagship programming software. It has now expanded testing of the features internally and has ramped up development ahead of a plan to release it to third-party software makers as early as this year, according to people with knowledge of the matter.

“Racing”, in the sense that it has been developing this for at least a year, and its release will likely coincide with WWDC — if it does actually launch this year. Gurman’s sources seem to be fuzzy on that timeline, only noting Apple could release this new version of Xcode “as early as this year”, which is the kind of commitment to a deadline a company takes if it is, indeed, “racing”.

Sixth paragraph:

Apple shares, which had been down as much 1.5%, briefly turned positive on the news. They were little changed at the close Thursday, trading at $183.86. Microsoft fell less than 1% to $406.56.

Some things never change.

Justin Ling, on January 12:

In a hour-long special, I’m Glad I’m Dead, [George] Carlin returns to talk reality TV, AI, billionaires, being dead, mass shootings, and Trump.

It premiered to horrified reviews. Carlin’s daughter called the special an affront to her father: “Humans are so afraid of the void that we can’t let what has fallen into it stay there,” she wrote on Twitter. Major media outlets breathlessly reported on the special, wondering if it was set to harken in a new era of soulless automation.

This week, on a very special Bug-eyed and Shameless, we investigate the Scooby Doo-esque effort to bring George Carlin back from the dead — and prank the media in the process.

Ling was one of the few reporters I saw who did not take at face value the claim that the special was a product of generative “artificial intelligence”. Just one day after exhaustive coverage of its release, Ling published this more comprehensive investigation showing how it was clearly not a product of “A.I.” — and he was right. That does not absolve Dudesy of creating this mockery of Carlin’s work in his name and likeness, but the technological story is simply false.

Cory Doctorow:

The modern Mechanical Turk — a division of Amazon that employs low-waged “clickworkers,” many of them overseas — modernizes the dumbwaiter by hiding low-waged workforces behind a veneer of automation. The MTurk is an abstract “cloud” of human intelligence (the tasks MTurks perform are called “HITs,” which stands for “Human Intelligence Tasks”).

This is such a truism that techies in India joke that “AI” stands for “absent Indians.” Or, to use Jathan Sadowski’s wonderful term: “Potemkin AI”:

https://reallifemag.com/potemkin-ai/

This Potemkin AI is everywhere you look. […]

Doctorow is specifically writing about human endeavours falsely attributed to machines, but the efforts of real people are also what makes today’s so-called “A.I.” services work, something I have often highlighted here. There is nothing wrong, per se, with human labour powering supposed automation, other than the poor and unstable wages those workers are paid. But there is a yawning chasm between how these products are portrayed in marketing and at a user interface level, the sight of which makes investors salivate, and what is happening behind the scenes.

By the way, I was poking around earlier today trying to remember the name of the canned Facebook phone and I spotted the Wikipedia article for M. M was a virtual assistant launched by then-Facebook in 2015, and eventually shut down in 2018. According to the BBC, up to 70% of M’s responses were from human beings, not software.