Search Results for: "artificial intelligence"

Katie McQue, Laís Martins, Ananya Bhattacharya, and Carien du Plessis, Rest of World:

Brazil’s AI bill is one window into a global effort to define the role that artificial intelligence will play in democratic societies. Large Silicon Valley companies involved in AI software — including Google, Microsoft, Meta, Amazon Web Services, and OpenAI — have mounted pushback to proposals for comprehensive AI regulation in the EU, Canada, and California. 

Hany Farid, former dean of the UC Berkeley School of Information and a prominent regulation advocate who often testifies at government hearings on the tech sector, told Rest of World that lobbying by big U.S. companies over AI in Western nations has been intense. “They are trying to kill every [piece of] legislation or write it in their favor,” he said. “It’s fierce.”

Meanwhile, outside the West, where AI regulations are often more nascent, these same companies have received a red-carpet welcome from many politicians eager for investment. As Aakrit Vaish, an adviser to the Indian government’s AI initiative, told Rest of World: “Regulation is actually not even a conversation.”

It sure seems as though competition is so intense among the biggest players that concerns about risk have been suspended. It is an unfortunate reality that “business friendliness” is code for a lax regulatory environment, since we all have to endure the products of these corporations. It is not as though Europe and Canada have not produced successful A.I. companies, either.

Cristina Criddle and Hannah Murphy, Financial Times:

Meta is betting that characters generated by artificial intelligence will fill its social media platforms in the next few years as it looks to the fast-developing technology to drive engagement with its 3bn users.

[…]

“They’ll have bios and profile pictures and be able to generate and share content powered by AI on the platform … that’s where we see all of this going,” he [Meta’s Connor Hayes] added.

Imagine opening any of Meta’s products after this has taken over. Imagine how little you will see from the friends and family members you actually care about. Imagine how much slop you will be greeted with — a feed alternating between slop, suggested posts, and ads, with just enough of what you actually opened the app to see sprinkled in. Now consider how this will affect people who are more committed to Meta’s products, whether for economic reasons or social cohesion.

A big problem for Meta is that it is institutionally very dumb. I do not want to oversell this too much, but I truly believe this is the case. There are lots of smart people working there and its leadership clearly understands something about how people use social media.

But there is a vast sense of dumb in its attempts to deliver the next generation of its products. Its social media products are dependent on “engagement”, which is sometimes a product of users’ actual interest and, at other times, an artifact of Meta’s success or failure in controlling what they see. Maybe its “metaverse” will be interesting one day, but it seems deeply embarrassing so far.

Emily Rae Pasiuk, CBC News:

Celebrity investor Kevin O’Leary says he is planning to bankroll and build what he says will be the world’s largest artificial intelligence data centre.

The proposal — named Wonder Valley — is slated to be built in the District of Greenview, near Grande Prairie, Alta.

If you believe this pitch for a seventy billion dollar A.I. data centre campus, you are a mark. A sucker. You have been duped. Bamboozled. Hoodwinked.

Our premier is very excited.

Max Fawcett, the National Observer:

The website in question features a video that has to be seen to be believed — and even then, only as a punchline. The imagined bucolic landscape of rolling hills, gentle streams and friendly wildlife feels like a Disney-fied version of northern California rather than northern Alberta. The campus itself is straight out of Silicon Valley.

The narration is even more hilariously unrealistic, if that’s even possible. “There’s a valley,” we’re told, “where technology and nature will come together in perfect harmony. Where the future will pulse at every turn. Bursting with nature and wildlife, nurturing the present as it embraces the future. The valley where the brightest minds unite, where electrons, machines and data come together in an algorithmic dance.”

If moneyed knuckleheads want to try building this thing, that should be between them and the people of the Municipal District of Greenview — but leave the rest of us out of this. The province should not subsidize or contribute to it, but it is obviously enamoured of the new natural gas power plants to be constructed (emphasis mine):

“The GIG’s ideal cold-weather climate, a highly skilled labor force, Alberta’s pro-business policies and attractive tax regime make the GIG the perfect site for this project. We want to deliver transformative economic impact and the lowest possible carbon emissions afforded to us by the quality of gas in the area, our efficient design and the potential to add Geothermal power as well. Together, these factors create a blueprint for sustainability and success that can be recognized worldwide. This is the Greenview Model” said Mr. O’Leary.

Transparently dishonest.

Wired has been publishing a series of predictions about the coming state of the world. Unsurprisingly, most concern artificial intelligence — how it might impact health, music, our choices, the climate, and more. It is an issue of the magazine Wired describes as its “annual trends briefing”, but it is also kind of like a hundred-page op-ed section. It is a mixed bag.

A.I. critic Gary Marcus contributed a short piece about what he sees as the pointlessness of generative A.I. — and it is weak. That is not necessarily because of any specific argument, but because of how unfocused it is despite its brevity. It opens with a short history of OpenAI’s models, with Marcus writing “Generative A.I. doesn’t actually work that well, and maybe it never will”. Thesis established, he begins building the case:

Fundamentally, the engine of generative AI is fill-in-the-blanks, or what I like to call “autocomplete on steroids.” Such systems are great at predicting what might sound good or plausible in a given context, but not at understanding at a deeper level what they are saying; an AI is constitutionally incapable of fact-checking its own work. This has led to massive problems with “hallucination,” in which the system asserts, without qualification, things that aren’t true, while inserting boneheaded errors on everything from arithmetic to science. As they say in the military: “frequently wrong, never in doubt.”

Systems that are frequently wrong and never in doubt make for fabulous demos, but are often lousy products in themselves. If 2023 was the year of AI hype, 2024 has been the year of AI disillusionment. Something that I argued in August 2023, to initial skepticism, has been felt more frequently: generative AI might turn out to be a dud. The profits aren’t there — estimates suggest that OpenAI’s 2024 operating loss may be $5 billion — and the valuation of more than $80 billion doesn’t line up with the lack of profits. Meanwhile, many customers seem disappointed with what they can actually do with ChatGPT, relative to the extraordinarily high initial expectations that had become commonplace.

Marcus’ financial figures here are bizarre and incorrect. He quotes a Yahoo News-syndicated copy of a PC Gamer article, which references a Windows Central repackaging of a paywalled report by the Information — a wholly unnecessary game of telephone when the New York Times obtained financial documents reaching the same conclusion, which were confirmed by CNBC. The summary of that Information article — that OpenAI “may run out of cash in 12 months, unless they raise more [money]”, as Marcus wrote on X — is somewhat irrelevant now after OpenAI proceeded to raise $6.6 billion at a staggering valuation of $157 billion.

I will leave analysis of these financials to MBA types. Maybe OpenAI is like Amazon, which took eight years to turn its first profit, or Uber, which took fourteen years. Maybe it is unlike either and there is no way to make this enterprise profitable.

None of that actually matters, though, when considering Marcus’ actual argument. He posits that OpenAI is financially unsound as-is, and that Meta’s language models are free. Unless OpenAI “come outs [sic] with some major advance worthy of the name of GPT-5 before the end of 2025”, the company will be in a perilous state and, “since it is the poster child for the whole field, the entire thing may well soon go bust”. But hold on: we have gone from ChatGPT disappointing “many customers” — no citation provided — to the entire concept of generative A.I. being a dead end. None of this adds up.

The most obvious problem is that generative A.I. is not just ChatGPT or other similar chat bots; it is an entire genre of features. I wrote earlier this month about some of the features I use regularly, like Generative Remove in Adobe Lightroom Classic. As far as I know, this is no different than something like OpenAI’s DALL-E in concept: it has been trained on a large library of images to generate something new. Instead of responding to a text-based prompt, it predicts how it should replicate textures and objects in an arbitrary image. It is far from perfect, but it is dramatically better than the healing brush tool before it, and clone stamping before that.

There are other examples of generative A.I. as features of creative tools. It can extend images and replace backgrounds pretty well. The technology may be mediocre at making video on its own terms, but it is capable of improving the quality of interpolated slow motion. In the technology industry, it is good at helping developers debug their work and generate new code.

Yet, if you take Marcus at his word, these features, and everything else built on generative A.I., “might turn out to be a dud”. Why? Marcus does not say. He does, however, keep underscoring how shaky he finds OpenAI’s business situation. But this Wired article is ostensibly about generative A.I.’s usefulness — or, in Marcus’ framing, its lack thereof — which is completely irrelevant to this one company’s financials. Unless, that is, you believe the reason OpenAI will lose five billion dollars this year is because people are unhappy with it, which is not the case. It simply costs a fortune to train and run.

The one thing Marcus keeps coming back to is the lack of a “moat” around generative A.I., which is not an original position. Even if this is true, I do not see this as evidence of a generative A.I. bubble bursting — at least, not in the sense of how many products it is included in or what capabilities it will be trained on.

What this looks like, to me, is commoditization. If there is a financial bubble, this might mean it bursts, but it does not mean the field is wiped out. Adobe is not disappearing; neither are Google or Meta or Microsoft. While I have doubts about whether chat-like interfaces will continue to be a way we interact with generative A.I., it continues to find relevance in things many of us do every day.

Graham Fraser, BBC News:

Apple Intelligence, launched in the UK earlier this week, uses artificial intelligence (AI) to summarise and group together notifications.

This week, the AI-powered summary falsely made it appear BBC News had published an article claiming Luigi Mangione, the man arrested following the murder of healthcare insurance CEO Brian Thompson in New York, had shot himself. He has not.

Fraser also points to an inaccurate summary attributed to the New York Times. Even scrolling through my notifications right now, I can see misleading summaries. One, which Apple Intelligence summarized as “vaccine lawyer helps pick health officials”, actually refers to an anti-vaccine lawyer who thinks immunizing against polio is dangerous. I have seen far dumber examples since this feature became part of beta builds over the past few months.

I am not opposed to the concept of summarized text. It can, in theory, be helpful to glance at my phone and know whether I need to respond to something sooner rather than later. But every error chips away at a user’s trust, to the point where they need to double-check for accuracy — at which point, the summary is creating additional work.

I can see why the BBC is upset about this, particularly after years of declining trust in media. I had notification summaries switched on for only a select few apps. I have now ensured they are turned off for news apps.

Update: Markus Müller-Simhofer:

If you are using macOS 15.2, please be careful with those priority alerts. It listed a fraud email as priority for me.

This is not just a one-off; I have seen it in my own use, too. Mail also repeatedly decided to prioritize “guest contributor” spam emails over genuinely useful messages like shipping notifications. Sometimes, it works as expected and is useful; sometimes, the priority message feature simply does not show up. It is bizarre.

Cade Metz, New York Times:

Mr. [Sam] Altman said he was “tremendously sad” about the rising tensions between the two one-time collaborators.

“I grew up with Elon as like a mega hero,” he said.

But he rejected suggestions that Mr. Musk could use his increasingly close relationship with President-elect Trump to harm OpenAI.

“I believe pretty strongly that Elon will do the right thing and that it would be profoundly un-American to use political power to the degree that Elon would hurt competitors and advantage his own businesses,” he said.

Alex Heath, the Verge:

Jeff Bezos and President-elect Donald Trump famously didn’t get along the last time Trump was in the White House. This time, Bezos says he’s “very optimistic” and even wants to help out.

“I’m actually very optimistic this time around,” Bezos said of Trump during a rare public appearance at The New York Times DealBook Summit on Wednesday. “He seems to have a lot of energy around reducing regulation. If I can help him do that, I’m going to help him.”

Emily Swanson, the Guardian:

“Mark Zuckerberg has been very clear about his desire to be a supporter of and a participant in this change that we’re seeing all around America,” Stephen Miller, a top Trump deputy, told Fox.

Meta’s president of global affairs, Nick Clegg, agreed with Miller. Clegg said in a recent press call that Zuckerberg wanted to play an “active role” in the administration’s tech policy decisions and wanted to participate in “the debate that any administration needs to have about maintaining America’s leadership in the technological sphere,” particularly on artificial intelligence. Meta declined to provide further comment.

There are two possibilities. The first is that these CEOs are all dummies with memory no more capacious than that of an earthworm. The second is that these people all recognize the transactional and mercurial nature of the incoming administration, and they have begun their ritualistic grovelling. Even though I do not think money and success are evidence of genius, I do not think these CEOs are so dumb they actually believe in the moral fortitude of these goons.

Leah Nylen, Josh Sisco, and Dina Bass, Bloomberg:

The US Federal Trade Commission has opened an antitrust investigation of Microsoft Corp., drilling into everything from the company’s cloud computing and software licensing businesses to cybersecurity offerings and artificial intelligence products.

Seems like a lot of people who thought Microsoft would escape antitrust investigations in the U.S. might have been a little too eager.

This kind of scrutiny is a good thing, and long overdue. Yet one of the unavoidable problems of reducing the influence of these giant corporations now is the pain it is going to cause — almost by definition. If a corporation is abusing its power and scale to such a degree that the FTC initiates an investigation, unwinding that will have — to put it mildly — an effect. We are seeing this in the Google case. This is true for any situation where a business or a group of people with too much influence needs correcting. That does not mean it should not happen.

It is true that Microsoft’s products and services are the backbone of businesses and governments the world over. These are delivered through tight integrations, all of which encourage further fealty to this singular solution. For example, it used its dominant position with Office 365 to distribute Teams for free, thereby making it even harder for other businesses to compete. It then leveraged Outlook and Teams to boost its web browser, after doing the same with Windows. If it had charged for Teams out of the gate, we would be having a different discussion.

Obviously, the FTC’s concerns with Microsoft’s business practices stretch well beyond bundling Teams. According to this Bloomberg report, the Commission is interested in cloud and identity tying, too. On the one hand, it is enormously useful to businesses to have a suite of products with a single point of management and shared credentials. On the other hand, it is a monolithic system that is a non-starter for potential competitors.

The government is understandably worried about the security and stability risks of global dependence on Microsoft, too, but this is odd:

The CrowdStrike crash that affected millions of devices operating on Microsoft Windows systems earlier this year was itself a testament to the widespread use of the company’s products and how it directly affects the global economy.

This might just be Bloomberg’s contextualizing more than a reflection of the government’s position. But, still, it seems wrong to me to isolate Windows as the problem instead of CrowdStrike itself, especially with better examples to be found in the SolarWinds breach and Microsoft’s track record with first-party security.

Emanuel Maiberg, 404 Media:

For the past two years an algorithmic artist who goes by Ada Ada Ada has been testing the boundaries of human and automated moderation systems on various social media platforms by documenting her own transition. 

Every week she uploads a shirtless self portrait to Instagram alongside another image which shows whether a number of AI-powered tools from big tech companies like Amazon and Microsoft that attempt to automatically classify the gender of a person see her as male or female. Each image also includes a sequential number, year, and the number of weeks since Ada Ada Ada started hormone therapy.

You want to see great art made with the help of artificial intelligence? Here it is — though probably not in the way one might have expected.

When Instagram removed one of her posts for the first time, Ada Ada Ada called it a “victory”, and it truly sounds validating. Instagram has made her point and, though she is still able to post photos, you can flip through her pinned story archives “censorship” and “censorship 2” to see how Meta’s systems interpret other posts.

X on Wednesday announced a new set of terms, something which is normally a boring and staid affair. But these are a doozy:

Here’s a high-level recap of the primary changes that go into effect on November 15, 2024. You may see an in-app notice about these updates as well.

  • Governing law and forum changes: For users residing outside of the European Union, EFTA States, and the United Kingdom, we’ve updated the governing law and forum for lawsuits to Texas as specified in our terms. […]

Specifically, X says “disputes […] will be brought exclusively in the U.S. District Court for the Northern District of Texas or state courts located in Tarrant County, Texas, United States”. X’s legal address is on a plot of land shared with SpaceX and the Boring Company near Bastrop, which is in the Western District. This particular venue is notable as the federal judge handling current X litigation in the Northern District owns Tesla stock and has not recused himself in X’s suit against Media Matters, despite stepping aside on a similar case because of a much smaller investment in Unilever. The judge, Reed O’Connor, is a real piece of work from the Federalist Society who issues reliably conservative decisions and does not want that power undermined.

An investment in Tesla does not necessarily mean a conflict of interest with X, an ostensibly unrelated company — except it kind of does, right? This is the kind of thing the European Commission is trying to figure out: are all of these different businesses actually related because they share the same uniquely outspoken and influential figurehead? Musk occupies such a central role in all of these businesses that it is hard to disentangle him from their place in our society. O’Connor is not the only judge in the district, but it is notable the company is directing legal action to that venue.

But X is only too happy to sue you in any court of its choosing.

Another of the X terms updates:

  • AI and machine learning clarifications: We’ve added language to our Privacy Policy to clarify how we may use the information you share to train artificial intelligence models, generative or otherwise.

This is rude. It is a “clarifi[cation]” described in vague terms, and what it means is that users will no longer be able to opt out of their data being used to train Grok or any other artificial intelligence product. This appears to also include images and video, posts in private accounts and, if I am reading this right, direct messages.

Notably, Grok is developed by xAI, which is a completely separate company from X. See above for how Musk’s companies all seem to bleed together.

  • Updates to reflect how our products and services work: We’ve incorporated updates to better reflect how our existing and upcoming products, features, and services work.

I do not know what this means. There are few product-specific changes between the old and new agreements. There are lots — lots — of new ways X wants to say it is not responsible for anything at all. There is a whole chunk which effectively replicates the protections of Section 230 of the CDA, you now need written permission from X to transfer your account to someone else, and X now spells out its estimated damages from automated traffic: $15,000 USD per million posts every 24 hours.

Oh, yeah, and X is making blocking work worse:

If your posts are set to public, accounts you have blocked will be able to view them, but they will not be able to engage (like, reply, repost, etc.).

The block button is one of the most effective ways to improve one’s social media experience. From removing from your orbit people you never want to hear from, even for mundane reasons, to limiting someone’s ability to stalk or harass, its expected behaviour is vital. This sucks. I bet the main reason this change was made is because Musk is blocked by a lot of people.

All of these changes seem designed to get rid of any remaining user who is not a true believer. Which brings us to today.

Sarah Perez, TechCrunch:

Social networking startup Bluesky, which just reported a gain of half a million users over the past day, has now soared into the top five apps on the U.S. App Store and has become the No. 2 app in the Social Networking category, up from No. 181 a week ago, according to data from app intelligence firm Appfigures. The growth is entirely organic, we understand, as Appfigures confirmed the company is not running any App Store Search Ads.

As of writing, Bluesky is the fifth most popular free app in the Canadian iOS App Store, and the second most popular free app in the Social Networking category. Threads is the second most popular free app, and the most popular in the Social Networking category.

X is number 74 on the top free apps list. It remains classified as “News” in the App Store because it, like Twitter, has always compared poorly against other social media apps.

Chiara Castro, TechRadar:

Hungary, the country that now heads the Council of Europe after Belgium, has resurrected what’s been deemed by critics as Chat Control, and MEPs are expected to vote on it at the end of the month. After proposing a new version in June, the Belgian presidency had to take the proposal off the agenda last minute amid harsh backlash.

Popular encrypted messaging apps, including Signal and Threema, have already announced their intention to rather shut down their operations in the EU instead of undermining users’ privacy. Keep reading as I walk you through what we know so far, and how one of the best VPN apps could help in case the proposal becomes law.

This news was broken by Politico, but their story is in the “Pro” section, which is not just a paywall. One cannot just sign up for it; you need to “Request a Demo” and then you can be granted access for no less than €7,000 per year. I had to settle for this re-reported version. And because online media is so broken — in part because of my selfish refusal to register for this advanced version of Politico — news outlets like TechRadar find any way of funding themselves. In this case, the words “best VPN” are linked to a list of affiliate-linked VPN apps. Smooth.

Patrick Breyer:

[…] According to the latest proposal providers would be free whether or not to use ‘artificial intelligence’ to classify unknown images and text chats as ‘suspicious’. However they would be obliged to search all chats for known illegal content and report them, even at the cost of breaking secure end-to-end messenger encryption. The EU governments are to position themselves on the proposal by 23 September, and the EU interior ministers are to endorse it on 10 October. […]

This is a similar effort to that postponed earlier this year. The proposal (PDF) has several changes, but it still appears to poke holes in end-to-end encryption, and require providers to detect possible known CSAM before it is sent. A noble effort, absolutely, but also one which fundamentally upsets the privacy of one-on-one communications to restrict its abuse by a few.

Nathan J. Robinson, of Current Affairs, reviewing “Corporate Bullshit” by Nick Hanauer, Joan Walsh, and Donald Cohen last year:

Over the last several decades, we have been told that “smoking doesn’t cause cancer, cars don’t cause pollution, greedy pharmaceutical companies aren’t responsible for the crisis of opioid addiction.” Recognizing the pattern is key to spotting “corporate bullshit” in the wild, and learning how to spot it is important, because, as the authors write, the stories told in corporate propaganda are often superficially plausible: “At least on the surface, they offer a civic-minded, reasonable-sounding justification for positions that in fact are motivated entirely by self-interest.” When restaurant owners say that raising the minimum wage will drive their labor costs too high and they’ll be forced to cut back on employees or close entirely, or tobacco companies declare their product harmless, those things could be true. They just happen not to be.

Via Cory Doctorow.

Jeremy Keith:

I’ve noticed a really strange justification from people when I ask them about their use of generative tools that use large language models (colloquially and inaccurately labelled as artificial intelligence).

I’ll point out that the training data requires the wholesale harvesting of creative works without compensation. I’ll also point out the ludicrously profligate energy use required not just for the training, but for the subsequent queries.

And here’s the thing: people will acknowledge those harms but they will justify their actions by saying “these things will get better!”

This piece is making me think more about my own, minimal use of generative features. Sure, it is neat that I can get a more accurate summary of an email newsletter than a marketer will typically write, or that I can repair something in a photo without so much manual effort. But this ease is only possible thanks to the questionable ethics of A.I. training.

Jake Evans, ABC News:

Facebook has admitted that it scrapes the public photos, posts and other data of Australian adult users to train its AI models and provides no opt-out option, even though it allows people in the European Union to refuse consent.

[…]

Ms Claybaugh [Meta’s global privacy policy director] added that accounts of people under 18 were not scraped, but when asked by Senator Sheldon whether public photos of his own children on his account would be scraped, Ms Claybaugh acknowledged they would.

This is not ethical. Meta has the ability to more judiciously train its systems, but it will not do that until it is pressured. Shareholders will not take on that role. They have been enthusiastically boosting any corporation with an A.I. announcement. Neither will the corporations themselves, which have been jamming these features everywhere — there are floating toolbars, floating panels, balloons, callouts, and glowing buttons that are hard to ignore even if you want to.

Julia Love and Davey Alba, Bloomberg:

Google now displays convenient artificial intelligence-based answers at the top of its search pages — meaning users may never click through to the websites whose data is being used to power those results. But many site owners say they can’t afford to block Google’s AI from summarizing their content.

[…]

Google uses a separate crawler for some AI products, such as its chatbot Gemini. But its main crawler, the Googlebot, serves both AI Overviews and Google search. A company spokesperson said Googlebot governs AI Overviews because AI and the company’s search engine are deeply entwined. The spokesperson added that its search results page shows information in a variety of formats, including images and graphics. Google also said publishers can block specific pages or parts of pages from appearing in AI Overviews in search results — but that would also likely bar those snippets from appearing across all of Google’s other search features, too, including web link listings.

I have quoted these two paragraphs in full because I think the difference between Google’s various A.I. products is worth clarifying. The effect of the Google-Extended control, which a publisher can treat as a separate user agent in robots.txt, is only relevant to training the Gemini and Vertex generative products. Gemini powers the A.I. Overviews feature, but there is no way of opting out of Overviews without entirely removing a site from Google’s indexing.
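To make that distinction concrete, here is a minimal sketch of the relevant robots.txt rules. Google-Extended is the token Google documents for opting out of Gemini and Vertex training, while ordinary Googlebot rules continue to govern both search indexing and A.I. Overviews:

  # Opt out of Gemini and Vertex generative training only
  User-agent: Google-Extended
  Disallow: /

  # Googlebot still crawls for search results and A.I. Overviews alike
  User-agent: Googlebot
  Allow: /

A publisher can serve the first rule and stay out of Google’s generative training sets, but the only way to keep pages out of A.I. Overviews is to disallow Googlebot itself, which also drops them from ordinary search results.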

I can see why website owners would want to do this; I sympathize with the frustration of those profiled in this article. But Google has been distorting the presentation of results and reducing publishers’ control for years. In 2022, I was trying to find an article from my own site when I discovered Google had generated an incorrect pros and cons list from an iPad review I wrote. Google also generates its own titles and descriptions for results instead of relying on the page-defined title and meta description tags, and it has introduced features over the years like Featured Snippets, the spiritual predecessor of A.I. Overviews.

All of these things have reduced the amount of control website owners can have over how their site is presented on a Google results page. In some cases, they are beneficial — rewritten titles and descriptions may reflect the actual subject of the page more accurately than one provided by some SEO expert. But in other cases, they end up making false claims cited to webpages. It happened with Featured Snippets, it happened with Google’s interpretation of my iPad review, and it happens with this artificially “intelligent” feature as well.

Shane Goldmacher, New York Times:

Former President Donald J. Trump has taken his obsession with the large crowds that Vice President Kamala Harris is drawing at her rallies to new heights, falsely declaring in a series of social media posts on Sunday that she had used artificial intelligence to create images and videos of fake crowds.

The A.I.-generated crowds claim is something I had seen bouncing around the fringes of X — and by “fringe”, I mean accounts which have paid to amplify their posts. I did not expect a claim this stupid to become a mainstream argument. But then I remembered what the mainstream looks like these days.

This claim is so stupid because you do not need to rely on the photos released by the campaign. You can just go look up pictures for yourself, taken at a bunch of different angles by a bunch of different people with consistent lighting, logical crowds, and realistic hands. There are hundreds of them, and videos too. A piece of supposed evidence for the fakery is that Harris’ plane does not have a visible tail number, but there are — again — plenty of pictures of that plane which show no number. The U.S. Air Force made the change last year.

I know none of the people promoting this theory are interested in facts. They began with a conclusion and are creating a story to fit, in spite of evidence to the contrary. Still, it was equal parts amusing and worrisome to see this theory be spun from whole cloth in real time.

Katie McQue, the Guardian:

The UK’s National Society for the Prevention of Cruelty to Children (NSPCC) accuses Apple of vastly undercounting how often child sexual abuse material (CSAM) appears in its products. In a year, child predators used Apple’s iCloud, iMessage and Facetime to store and exchange CSAM in a higher number of cases in England and Wales alone than the company reported across all other countries combined, according to police data obtained by the NSPCC.

Through data gathered via freedom of information requests and shared exclusively with the Guardian, the children’s charity found Apple was implicated in 337 recorded offenses of child abuse images between April 2022 and March 2023 in England and Wales. In 2023, Apple made just 267 reports of suspected CSAM on its platforms worldwide to the National Center for Missing & Exploited Children (NCMEC), which is in stark contrast to its big tech peers, with Google reporting more than 1.47m and Meta reporting more than 30.6m, per NCMEC’s annual report.

The reactions to statistics related to this particularly revolting crime are similar to all crime figures: higher and lower numbers alike can be interpreted as both positive and negative. More reports could mean better detection or more awareness, but they could also mean more instances; it is hard to know. Fewer reports might reflect less activity, a smaller platform size or, indeed, undercounting. In Apple’s case, it is likely the latter. It is neither a small platform nor one which prohibits the kinds of channels through which CSAM is distributed.

NCMEC addresses both these problems and I think its complaints are valid:

U.S.-based ESPs are legally required to report instances of child sexual abuse material (CSAM) to the CyberTipline when they become aware of them. However, there are no legal requirements regarding proactive efforts to detect CSAM or what information an ESP must include in a CyberTipline report. As a result, there are significant disparities in the volume, content and quality of reports that ESPs submit. For example, one company’s reporting numbers may be higher because they apply robust efforts to identify and remove abusive content from their platforms. Also, even companies that are actively reporting may submit many reports that don’t include the information needed for NCMEC to identify a location or for law enforcement to take action and protect the child involved. These reports add to the volume that must be analyzed but don’t help prevent the abuse that may be occurring.

Not only are many reports not useful, they are also part of an overwhelming caseload which law enforcement struggles to turn into charges. Proposed U.S. legislation is designed to improve the state of CSAM reporting. Unfortunately, the wrong bill is moving forward.

The next paragraph in the Guardian story:

All US-based tech companies are obligated to report all cases of CSAM they detect on their platforms to NCMEC. The Virginia-headquartered organization acts as a clearinghouse for reports of child abuse from around the world, viewing them and sending them to the relevant law enforcement agencies. iMessage is an encrypted messaging service, meaning Apple is unable to see the contents of users’ messages, but so is Meta’s WhatsApp, which made roughly 1.4m reports of suspected CSAM to NCMEC in 2023.

I wish there was more information here about this vast discrepancy — a million reports from just one of Meta’s businesses compared to just 267 reports from Apple to NCMEC for all of its online services. The most probable explanation, I think, can be found in a 2021 ProPublica investigation by Peter Elkind, Jack Gillum, and Craig Silverman, about which I previously commented. The reporters here revealed WhatsApp moderators’ heavy workloads, writing:

Their jobs differ in other ways. Because WhatsApp’s content is encrypted, artificial intelligence systems can’t automatically scan all chats, images and videos, as they do on Facebook and Instagram. Instead, WhatsApp reviewers gain access to private content when users hit the “report” button on the app, identifying a message as allegedly violating the platform’s terms of service. This forwards five messages — the allegedly offending one along with the four previous ones in the exchange, including any images or videos — to WhatsApp in unscrambled form, according to former WhatsApp engineers and moderators. Automated systems then feed these tickets into “reactive” queues for contract workers to assess.

WhatsApp allows users to report any message at any time. Apple’s Messages app, on the other hand, only lets users flag a sender as junk and, even then, only if the sender is not in the user’s contacts and the user has not replied a few times. As soon as there is a conversation, there is no longer any reporting mechanism within the app as far as I can tell.

The same is true of shared iCloud Photo albums. It should be easy and obvious how to report illicit materials to Apple, but I cannot find a mechanism for doing so — not in a shared iCloud photo album, and not anywhere prominent on Apple’s website, either. As noted in Section G of the iCloud terms of use, reports must be sent via email to abuse@icloud.com. iCloud albums use long, unguessable URLs, so the likelihood of unintentionally stumbling across CSAM or other criminal materials is low. Nevertheless, it seems to me that notifying Apple of abuse of its services should be much clearer.

Back to the Guardian article:

Apple’s June announcement that it will launch an artificial intelligence system, Apple Intelligence, has been met with alarm by child safety experts.

“The race to roll out Apple AI is worrying when AI-generated child abuse material is putting children at risk and impacting the ability of police to safeguard young victims, especially as Apple pushed back embedding technology to protect children,” said [the NSPCC’s Richard] Collard. Apple says the AI system, which was created in partnership with OpenAI, will customize user experiences, automate tasks and increase privacy for users.

The Guardian ties Apple’s forthcoming service to models able to generate CSAM, which it then connects to models being trained on CSAM. But we do not know what Apple Intelligence is capable of doing because it has not yet been released, nor do we know what it has been trained on. This is not me giving Apple the benefit of the doubt. I think we should know more about how these systems are trained.

We also currently do not know what limitations Apple will set for prompts. It is unclear to me what Collard is referring to in saying that the company “pushed back embedding technology to protect children”.

One more little thing: Apple does not say Apple Intelligence was created in partnership with OpenAI; the ChatGPT integration is basically a plugin. It also does not say Apple Intelligence will increase privacy for users, only that it is more private than competing services.

I am, for the record, not particularly convinced by any of Apple’s statements or claims. Everything is firmly in we will see territory right now.

Cristina Criddle, Financial Times:

Artificial intelligence-generated “deepfakes” that impersonate politicians and celebrities are far more prevalent than efforts to use AI to assist cyber attacks, according to the first research by Google’s DeepMind division into the most common malicious uses of the cutting-edge technology.

The study said the creation of realistic but fake images, video and audio of people was almost twice as common as the next highest misuse of generative AI tools: the falsifying of information using text-based tools, such as chatbots, to generate misinformation to post online.

Emanuel Maiberg, 404 Media:

Generative AI could “distort collective understanding of socio-political reality or scientific consensus,” and in many cases is already doing that, according to a new research paper from Google, one of the biggest companies in the world building, deploying, and promoting generative AI.

It is probably worth emphasizing this is a preprint published to arXiv, so I am not sure how much faith should be placed in its scholarly rigour. Nevertheless, when in-house researchers are pointing out the ways in which generative A.I. is misused, you might think that would be motivation for their employer to act with caution. But you, reader, are probably not an executive at Google.

This paper was submitted on 19 June. A few days later, reporters at the Information said Google was working on A.I. chat bots with real-person likenesses, according to Pranav Dixit of Engadget:

Google is reportedly building new AI-powered chatbots based on celebrities and YouTube influencers. The idea isn’t groundbreaking — startups like Character.ai and companies like Meta have already launched products like this — but neither is Google’s AI strategy so far.

Maybe nothing will come of this. Maybe it is outdated; Google’s executives may have looked at the research produced by its DeepMind division and concluded the risks are too great. But you would not get that impression from a spate of stories which suggest the company is sprinting into the future, powered by the trust of users it spent twenty years building and a whole lot of fossil fuels.

With apologies to Mitchell and Webb.

In a word, my feelings about A.I. — and, in particular, generative A.I. — are complicated. Just search “artificial intelligence” for a reverse chronological back catalogue of where I have landed. It feels like an appropriate position to hold for a set of nascent technologies so sprawling that they imply radical change.

Or perhaps A.I., like so many other promising new technologies, will turn out to be illusory as well. Instead of altering the fundamental fabric of reality, maybe it will be used to create better versions of features we have used for decades. This would not necessarily be a bad outcome. I have used this example before, but the evolution of object removal tools in photo editing software is illustrative. There is no longer a need to spend hours cloning part of an image over another area and gently massaging it to look seamless. The more advanced tools we have today allow an experienced photographer to make an image they are happy with in less time, and lower barriers for newer photographers.

A blurry boundary is crossed when an entire result is achieved through automation. There is a recent Drew Gooden video which, even though not everything resonated with me, I enjoyed.1 There is a part in the conclusion which I wanted to highlight because I found it so clarifying (emphasis mine):

[…] There’s so many tools along the way that help you streamline the process of getting from an idea to a finished product. But, at a certain point, if “the tool” is just doing everything for you, you are not an artist. You just described what you wanted to make, and asked a computer to make it for you.

You’re also not learning anything this way. Part of what makes art special is that it’s difficult to make, even with all the tools right in front of you. It takes practice, it takes skill, and every time you do it, you expand on that skill. […] Generative A.I. is only about the end product, but it won’t teach you anything about the process it would take to get there.

This gets at the question of whether A.I. is more often a product or a feature — the answer to which, I think, is both, just not in a way that is equally useful. Gooden shows an X thread in which Jamian Gerard told Luma to convert the “Abbey Road” cover to video. Even though the results are poor, I think it is impressive that a computer can do anything like this. It is a tech demo; a more practical application can be found in something like the smooth slow motion feature in the latest release of Final Cut Pro.

“Generative A.I. is only about the end product” is a great summary of the emphasis we put on satisfying conclusions instead of necessary rote procedure. I cook dinner almost every night. (I recognize this metaphor might not land with everyone due to time constraints, food availability, and physical limitations, but stick with me.) I feel lucky that I enjoy cooking, but there are certainly days when it is a struggle. It would seem more appealing to type a prompt and make a meal appear using the ingredients I have on hand, if that were possible.

But I think I would be worse off if I did. The times I have cooked while already exhausted have increased my capacity for what I can do under pressure, and lowered my self-imposed barriers. These meals have improved my ability to cook more elaborate dishes when I have more time and energy, just as those more complicated meals also make me a better cook.2

These dynamics show up in lots of other forms of functional creative expression. Plenty of writing is not particularly artistic, but the mental muscle exercised by trying to get ideas into legible words is also useful when you are trying to produce works with more personality. This is true for programming, and for visual design, and for coordinating an outfit — any number of things which are sometimes individually expressive, and other times utilitarian.

This boundary only exists in these expressive forms. Nobody, really, mourns the replacement of cheques with instant transfers. We do not get better at paying our bills no matter which form they take. But we do get better at all of the things above by practicing them even when we do not want to, and when we get little creative satisfaction from the result.

It is dismaying to see so many A.I. product demos showing how the technology can be used to circumvent this entire process. I do not know if that is how they will actually be used. There are plenty of accomplished artists using A.I. to augment their practice, like Sougwen Chen, Anna Ridler, and Rob Sheridan. Writers and programmers are using generative products every day as tools, but they must have some fundamental knowledge to make A.I. work in their favour.

Stock photography is still photography. Stock music is still music, even if nobody’s favourite song is “Inspiring Corporate Advertising Tech Intro Promo Business Infographics Presentation”. (No judgement if that is your jam, though.) A rushed pantry pasta is still nourishment. A jingle for an insurance commercial could be practice for a successful music career. A.I. should just be a tool — something to develop creativity, not to replace it.


  1. There are also some factual errors. At least one of the supposed Google Gemini answers he showed onscreen was faked, and Adobe’s standard stock license is less expensive than the $80 “Extended” license Gooden references. ↥︎

  2. I am wary of using an example like cooking because it implies a whole set of correlative arguments which are unkind and judgemental toward people who do not or cannot cook. I do not want to provide kindling for these positions. ↥︎

Javier Espinoza and Michael Acton, Financial Times:

Apple has warned that it will not roll out the iPhone’s flagship new artificial intelligence features in Europe when they launch elsewhere this year, blaming “uncertainties” stemming from Brussels’ new competition rules.

This article carries the headline “Apple delays European launch of new AI features due to EU rules”, but it is not clear to me these features are “delayed” in the E.U. or that they would “launch elsewhere this year”. According to the small text in Apple’s WWDC press release, these features “will be available in beta […] this fall in U.S. English”, with “additional languages […] over the course of the next year”. This implies the A.I. features in question will only be available to devices set to U.S. English, and acting upon text and other data also in U.S. English.

To be fair, this is a restriction of language, not geography. Someone in France or Germany could still want to play around with Apple Intelligence stuff even if it is not very useful with their mostly not-English data. Apple is saying they will not be able to. It aggressively region-locks alternative app marketplaces to Europe and, I imagine, will use the same infrastructure to keep users out of these new features.

There is an excerpt from Apple’s statement in this Financial Times article explaining which features will not launch in Europe this year: iPhone Mirroring, better screen sharing with SharePlay, and Apple Intelligence. Apple provided a fuller statement to John Gruber. This is the company’s explanation:

Specifically, we are concerned that the interoperability requirements of the DMA could force us to compromise the integrity of our products in ways that risk user privacy and data security. We are committed to collaborating with the European Commission in an attempt to find a solution that would enable us to deliver these features to our EU customers without compromising their safety.

Apple does not explain specifically how these features run afoul of the DMA — or why it would not or could not build them to clearly comply with the DMA — so this could be fear-mongering, but I will assume it is a good-faith effort at compliance in the face of possible ambiguity. I am not sure Apple has earned the benefit of the doubt, but that is a different matter.

It seems like even the possibility of lawbreaking has made Apple cautious — and I am not sure why that is seen as an inherently bad thing. This is one of the world’s most powerful corporations, and the products and services it rolls out impact a billion-something people. That position deserves significant legal scrutiny.

I was struck by something U.S. FTC chair Lina Khan said in an interview at a StrictlyVC event this month:

[…] We hear routinely from senior dealmakers, senior antitrust lawyers, who will say pretty openly that as of five or six or seven years ago, when you were thinking about a potential deal, antitrust risk or even the antitrust analysis was nowhere near the top of the conversation, and now it is up front and center. For an enforcer, if you’re having companies think about that legal issue on the front end, that’s a really good thing because then we’re not going to have to spend as many public resources taking on deals that we believe are violating the laws.

Now that competition laws are being enforced, businesses have to think about them. That is a good thing! I get a similar vibe from this DMA response. It is much newer than antitrust laws in both the U.S. and E.U. and there are things about which all of the larger technology companies are seeking clarity. But it is not an inherently bad thing to have a regulatory layer, even if it means delays.

Is that not Apple’s whole vibe, anyway? It says it does not rush into things. It is proud of withholding new products until it feels it has gotten them just right. Perhaps you believe corporations are a better judge of what is acceptable than a regulatory body, but the latter serves as a check on the behaviour of the former.

Apple is not saying Europe will not get these features at all. It is only saying it is not sure it has built them in a DMA compliant way. We do not know anything more about why that is the case at this time, and it does not make sense to speculate further until we do.

Takeshi Narabe, the Asahi Shimbun:

SoftBank Corp. announced that it has developed voice-altering technology to protect employees from customer harassment.

The goal is to reduce the psychological burden on call center operators by changing the voices of complaining customers to calmer tones.

The company launched a study on “emotion canceling” three years ago, which uses AI voice-processing technology to change the voice of a person over a phone call.

Penny Crosman, the American Banker:

Call center agents who have to deal with angry or perplexed customers all day tend to have through-the-roof stress levels and a high turnover rate as a result. About 53% of U.S. contact center agents who describe their stress level at work as high say they will probably leave their organization within the next six months, according to CMP Research’s 2023-2024 Customer Contact Executive Benchmarking Report.

Some think this is a problem artificial intelligence can fix. A well-designed algorithm could detect the signs that a call center rep is losing it and do something about it, such as send the rep a relaxing video montage of photos of their family set to music.

Here we have examples from two sides of the same problem: working in a call centre sucks because dealing with customers who are usually angry, frustrated, and miserable sucks. The representative probably understands why some corporate decision made the customer angry, frustrated, and miserable, but cannot really do anything about it.

So there are two apparent solutions here — the first reconstructs a customer’s voice in an effort to make them sound less hostile, and the second shows call centre employees a “video montage” of good memories as an infantilizing calming measure.

Brian Merchant wrote about the latter specifically, but managed to explain why both illustrate the problems created by how call centres work today:

If this showed up in the b-plot of a Black Mirror episode, we’d consider it a bit much. But it’s not just the deeply insipid nature of the AI “solution” being touted here that gnaws at me, though it does, or even the fact that it’s a comically cynical effort to paper over a problem that could be solved by, you know, giving workers a little actual time off when they are stressed to the point of “losing it”, though that does too. It’s the fact that this high tech cost-saving solution is being used to try to fix a whole raft of problems created by automation in the first place.

A thoughtful exploration of how A.I. is really being used which, combined with the previously linked item, does not suggest a revolution for anyone involved. It looks more like a cheap patch on society’s cracking dam.

Kif Leswing, CNBC:

Nvidia, long known in the niche gaming community for its graphics chips, is now the most valuable public company in the world.

[…]

Nvidia shares are up more than 170% so far this year, and went a leg higher after the company reported first-quarter earnings in May. The stock has multiplied by more than ninefold since the end of 2022, a rise that’s coincided with the emergence of generative artificial intelligence.

I know computing is math — even drawing realistic pictures really fast — but it is so funny to me that Nvidia’s products have become so valuable for doing applied statistics instead of for actual graphics work.

Renee Dudley and Doris Burke, reporting for ProPublica which is not, contrary to the opinion of one U.S. Supreme Court jackass justice, “very well-funded by ideological groups” bent on “look[ing] for any little thing they can find, and […] try[ing] to make something out of it”, but is instead a distinguished publication of investigative journalism:

Microsoft hired Andrew Harris for his extraordinary skill in keeping hackers out of the nation’s most sensitive computer networks. In 2016, Harris was hard at work on a mystifying incident in which intruders had somehow penetrated a major U.S. tech company.

[…]

Early on, he focused on a Microsoft application that ensured users had permission to log on to cloud-based programs, the cyber equivalent of an officer checking passports at a border. It was there, after months of research, that he found something seriously wrong.

This is a deep and meaningful exploration of Microsoft’s internal response to the conditions that created 2020’s catastrophic SolarWinds breach. It seems that both Microsoft and the Department of Justice knew about the weakness well before anyone else — perhaps as early as 2016 in Microsoft’s case — yet neither did anything with that information. Other things were deemed more important.

Perhaps this was simply a multi-person failure in which dozens of people at Microsoft could not see why Harris’ discovery was such a big deal. Maybe they all could not foresee this actually being exploited in the wild, or there was a failure to communicate some key piece of information. I am a firm believer in Hanlon’s razor.

On the other hand, the deep integration of Microsoft’s entire product line into sensitive systems — governments, healthcare, finance — magnifies any failure. The incompetence of a handful of people at a private corporation should not result in 18,000 infected networks.

Ashley Belanger, Ars Technica:

Microsoft is pivoting its company culture to make security a top priority, President Brad Smith testified to Congress on Thursday, promising that security will be “more important even than the company’s work on artificial intelligence.”

Satya Nadella, Microsoft’s CEO, “has taken on the responsibility personally to serve as the senior executive with overall accountability for Microsoft’s security,” Smith told Congress.

[…]

Microsoft did not dispute ProPublica’s report. Instead, the company provided a statement that almost seems to contradict Smith’s testimony to Congress today by claiming that “protecting customers is always our highest priority.”

Microsoft’s public relations staff can say anything they want. But there is plenty of evidence — contemporary and historic — showing this is untrue. Can it do better? I am sure Microsoft employs many intelligent and creative people who desperately want to change this corrupted culture. Will it? Maybe — but for how long is anybody’s guess.