Search Results for: "artificial intelligence"

Mike Masnick, Techdirt, reacting to Grok’s Nazi turn:

We need to take back control over the tools that we use.

Especially these days, as so many people have started (dangerously) treating AI tools as “objective” sources of truth, people need to understand that they are all subject to biases. Some of these biases are in their training data. Some are in their weights. And some are, as is now quite clear, directly in their system prompts.

The problem isn’t just bias — it’s whose bias gets embedded in the system. When a centralized AI reflects the worldview of tech billionaires rather than the diverse perspectives of its users, we’re not getting artificial intelligence. We’re getting artificial ideology.

I am half compelled by this argument, and half concerned. I obviously believe we should be skeptical of how much trust we place in corporations. After all, they have given us ample reason to be suspicious of them.

Even before it was “X”, Twitter did not have the best reputation for quality discussion. And then it was bought by Elon Musk. I still do not believe there is sufficient evidence for bias in users’ feeds during the recent U.S. presidential election, but the anti-“political correctness” written into Grok is a plainly obvious problem. Even so, a new version of Grok was launched this week, which consults Musk’s tweets when it gets stuck on a query. All of this should undermine the little bit of trust anyone might have left in X and xAI.

A company with a much better reputation, historically, is Google. Even though it has faced decades of scrutiny and questions about its secretive website rankings, it has generally gotten things more right than not. To be clear, I can point to dozens of times when it has been bad at search — especially in the last five years — but it remains what most people think of when they think of searching the web. Yet, because A.I. feels to some like magic, that reputation is now on the line, subject to both good criticisms and very dumb ones. The Attorney General of Missouri — the state that nearly prosecuted a journalist for viewing the source of a website — is investigating Google, Meta, Microsoft, and OpenAI for being insufficiently supportive of the president’s record on Israel–U.S. relations. The Attorney General approvingly cites Missouri v. Biden, which the state lost.

Yet, even with all this in mind, we need to be able to trust institutions to some extent. This is the part of me concerned about Masnick’s piece. I think it is a great suggestion that we should control our own tools, where anyone can “choose your own values, your own sources, and your own filters”. However, most people are unlikely to do these things. Most of us will probably use something from some big company we do not really trust, but it is what ships with the system or is built into the apps we use most, or whatever. We need to ensure the areas where we have little control are trustworthy, too.

What that probably means is some kind of oversight, akin to what we have for other areas of little control. This is how we have some trust in the water we drink, the air we breathe, the medicine we take, and the planes we fly in. Consumer protection laws give us something to stand on when we are taken advantage of. Yes, some places do this better than others, and I think we should learn from them instead of throwing up our hands and pretending this problem will be solved on an individual basis. To be clear, I am not reading Masnick’s writing as some kind of libertarian fantasy or an anti-regulation screed, nor do I read Alex Komoroske’s manifesto that way. But I also believe there should be some regulation, because we need to be realistic about how much time and effort people will actually invest in controlling their experience.

Jonathan Vanian, CNBC:

Mark Zuckerberg said Monday that he’s creating Meta Superintelligence Labs, which will be led by some of his company’s most recent hires, including Scale AI ex-CEO Alexandr Wang and former GitHub CEO Nat Friedman.

Zuckerberg said the new AI superintelligence unit, MSL, will house the company’s various teams working on foundation models such as the open-source Llama software, products and Fundamental Artificial Intelligence Research projects, according to an internal memo obtained by CNBC.

Kyle Orland, Ars Technica:

When I hear Zuckerberg talk about the promise of AI these days, it’s hard not to hear echoes of his monumental vision for the metaverse from 2021. If anything, Zuckerberg’s vision of our AI-powered future is even more grandiose than his view of the metaverse.

Orland allows for key differences, like how people actually use A.I. products, including those from Meta — Zuckerberg says “more than 1 billion monthly actives”. That seems, to me, to be a pretty big caveat. The series 404 Media has been running about A.I. slop on Facebook looks bad, but it at least suggests people are using A.I. in connection with Meta’s products, something nobody can say about the metaverse it decided to use as the foundation for rebranding itself. Embarrassing.

A good faith read of Orland’s argument is that Meta is taking advantage of — and growing — the hype around A.I. in the same way as it attempted to do with the metaverse. This is obviously not a new thing for tech companies. They routinely proclaim world-changing advancements without earning it, and Meta is a particularly poor narrator of its own supposed brilliance. I would not trust it — but not because this all sounds a bit like the metaverse. Neither Meta nor Zuckerberg personally has demonstrated a capacity for being visionary. The company has a knack for acquisitions and an ability to retain users’ attention. It has not shown an ability to invent the future.

All of these stories were published yesterday. I am linking to them in chronological order.

Alex Kantrowitz, in his Big Technology newsletter:

Tim Cook ought to call Perplexity CEO Aravind Srinivas and offer him $30 billion for his AI search engine. And he should do it right away.

[…]

“Not likely!” Perplexity chief business officer Dmitry Shevelenko told me of a potential tie-up with Apple. “But Meta-Scale is so unlikely that I feel we aren’t living in a world of likelies.”

Apple and Perplexity have had no M&A discussions to date, Shevelenko added, not even a wink.

Mark Gurman, Katie Roof, and Riley Griffin, Bloomberg:

Meta Platforms Inc. held discussions with artificial intelligence search startup Perplexity AI Inc. about a possible takeover before moving ahead with a multibillion-dollar investment in Scale AI, according to people familiar with the matter.

Deirdre Bosa and Ashley Capoot, of CNBC, confirmed Bloomberg’s reporting, adding that one source “said Perplexity walked away from a potential deal”.

Mark Gurman, Bloomberg:

Apple Inc. executives have held internal discussions about potentially bidding for artificial intelligence startup Perplexity AI, seeking to address the need for more AI talent and technology.

You will note the day began with Kantrowitz’s article calling for Apple to buy Perplexity. It was not a reaction to Gurman’s report, which was published late in the afternoon and came after a different story about another possible Perplexity acquisition, to which Gurman also contributed. Heck of a coincidence all of these dropped on the same day.

Christine Wang, Axios:

Ten years ago, Google crawled two pages for every visitor it sent a publisher, per [Cloudflare CEO Matthew] Prince.

[…]

Now:

  • For Google, it’s 18:1

  • For OpenAI, it’s 1,500:1

  • For Anthropic, it’s 60,000:1

It is a curious side effect of Cloudflare’s size and position that it is among a true handful of companies with this kind of visibility into a meaningful slice of global web traffic.

In an alternate world, these artificial intelligence businesses may have tried to work with publishers. Perhaps they would have given greater prominence to references, self-policed the amount of summarization they would offer, and provided some kind of financial kickback. Instead, they have trained their systems on publishers’ vast libraries without telling them until it is far too late for it to matter. They take so much while providing so little in return. This will surely accelerate the walling-off of the necessarily paid web, further affirming what I have taken to calling “Robinson’s Law”. This helps explain the increasingly unethical means of acquiring this training data.

I, like millions of people, spend a good chunk of my Mondays through Fridays in Microsoft 365 (formerly Office) applications, many of which I partly or wholly access through a browser. For years, I could just type portal.office.com in my browser and get links to Word, Excel, Outlook — all the classics — plus some organization-specific applications.

Microsoft being Microsoft, it has decided to bury all that. We are in the artificial intelligence era now, baby. Instead of seeing the applications millions of people rely on, Microsoft has decided the default view should be a Copilot chat. The familiar suite of apps is on another page accessible from the sidebar; each app can, at least, be pinned to that same sidebar. This was barely announced and seems to have taken many by surprise.

But, hey, Copilot should at least make me more productive, right? It is Microsoft’s own service — albeit backed by OpenAI’s work — and integrated, in theory, with our organization’s applications and files. Maybe it could function as an application launcher. So I told Copilot to “launch Word”. Copilot:

I can’t directly launch Microsoft Word from here, but I can help you open it on your device. Here’s how you can do it:

On Windows:

  1. Press the Windows key on your keyboard or click the Start menu.

  2. Type “Word” in the search bar.

  3. Click on Microsoft Word from the search results.

On macOS:

  1. Open Finder or press Command + Space to open Spotlight Search.

  2. Type “Word” and press Enter when Microsoft Word appears.

Truly embarrassing stuff: interrupting users’ learned workflows with something worse because it is important for Microsoft to be seen as a cutting-edge innovator. It has made Copilot part of its branding — so much so that it will advertise Copilot even where it is not available, for “consistency”.

This, from Adam Newbold, is a perfect encapsulation of a bunch of ethical problems related to artificial intelligence. The prompt:

Generate an image for a Calvin & Hobbes strip. Four panels. Calvin and Hobbes are walking through the woods, talking to each other, both holding smart phones and looking at them intently the entire time.

Panel 1: Calvin says to Hobbes, “This strip was made entirely with ChatGPT, which should be impossible given the strict intellectual property rights restrictions on Calvin & Hobbes content.”

Panel 2: Hobbes responds to Calvin, “Oh? Then how did it make it?”

Panel 3: Calvin responds to Hobbes, “Some guy just typed this into a box and clicked a button. That’s all it took.”

Panel 4: Hobbes responds to Calvin, “That’s so fucked up.”

This is entirely doable without generative artificial intelligence, but it requires far more skill. The ease of this duplication is maddening. I find this offensive in exactly the way Newbold intended it to be.

More important, I think, is the control exercised over the likenesses of Calvin and Hobbes by the strip’s creator Bill Watterson, as Newbold noted in the strip. Watterson famously rejected all but a handful of licensed merchandising ideas. But the mechanism for how he might protect this is the same as the one used by Disney when it fights parody and reinterpretation of its vast intellectual property, even though the motivations are different. Watterson’s protective quality is admirable, driven by artistic integrity to the extent he has left many millions of dollars’ worth of tchotchkes on the table to retain the spirit of the strips. Disney’s is entirely business motivated, evidenced by the tens of billions of dollars in licensed tchotchkes sold last year alone.

This is not the first “Calvin & Hobbes” strip made with generative A.I., nor does generative A.I. begin and end at self-referential prompts like these. Some assholes have created plugins — more-or-less — to badly emulate Watterson’s unique style in generative A.I. programs. It is awful.

I want to live in a world where we can permit the necessary reinterpretation of intellectual property while respecting the wishes of artists. This is a tricky line, I know. It requires us — individually, but also the organizations responsible for generative A.I. stuff — to think about who is making such a request in good faith, and decide whether we are going to honour that.

One more thing: Watterson is a pretty private person, rarely giving interviews. But, right above this paragraph, I think we can get a sense of how he might feel about this.

Drew DeVault:

Now it’s LLMs. If you think these crawlers respect robots.txt then you are several assumptions of good faith removed from reality. These bots crawl everything they can find, robots.txt be damned, including expensive endpoints like git blame, every page of every git log, and every commit in every repo, and they do so using random User-Agents that overlap with end-users and come from tens of thousands of IP addresses – mostly residential, in unrelated subnets, each one making no more than one HTTP request over any time period we tried to measure – actively and maliciously adapting and blending in with end-user traffic and avoiding attempts to characterize their behavior or block their traffic.

As curious and fascinating as I find many applications of generative artificial intelligence, I find it difficult to square with the flagrantly unethical way it has been trained. Server admins have to endure and pay for massive amounts of traffic from well-funded corporations, without compensation, all of which treat robots.txt as something to be worked around. Add to that the kind of copyright infringement that would cost users thousands of dollars per file, and it is clear the whole system is morally bankrupt.
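For anyone unfamiliar, robots.txt is nothing more than a plain-text list of requests published at a site’s root; it has no enforcement power. A site trying to opt out of A.I. crawling might publish something like the sketch below, using the crawler tokens OpenAI, Google, and Common Crawl document for their own bots — the /git/ path is just an illustrative stand-in for the kind of expensive endpoint DeVault mentions. The point of his post is that plenty of crawlers simply ignore all of it.

```
# robots.txt is a request, not an access control mechanism
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

# Everyone else: please stay out of expensive endpoints
User-agent: *
Disallow: /git/
```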

Do not get me wrong — existing intellectual property law is in desperate need of reform. Big, powerful corporations have screwed us all over by extending copyright terms. In Canada, the number of works in the public domain will be stagnant for the next eighteen years after we signed onto the Canada–United States–Mexico Agreement. But what artificial intelligence companies are proposing with their training practices is a worst-of-both-worlds situation, in which some big businesses get to retain a tight grip on artists’ works, and others get to assume anything remotely public is theirs to seize.

Nicholas Chrastil, the Guardian:

State officials have praised Butler Snow for its experience in defending prison cases – and specifically William Lunsford, head of the constitutional and civil rights litigation practice group at the firm. But now the firm is facing sanctions by the federal judge overseeing Johnson’s case after an attorney at the firm, working with Lunsford, cited cases generated by artificial intelligence – which turned out not to exist.

It is one of a growing number of instances in which attorneys around the country have faced consequences for including false, AI-generated information in official legal filings. A database attempting to track the prevalence of the cases has identified 106 instances around the globe in which courts have found “AI hallucinations” in court documents.

The database is now up to 120 cases, including some fairly high-profile ones like that against Timothy Burke.

Here is a little behind-the-scenes from this weekend’s piece about “nimble fingers” and Apple’s supply chain. The claim, as framed by Tripp Mickle, in the New York Times, is that “[y]oung Chinese women have small fingers, and that has made them a valuable contributor to iPhone production because they are more nimble at installing screws and other miniature parts”. This sounded suspicious to me because I thought about it for five seconds. There are other countries where small objects are carefully assembled by hand, for example, and attributing a characteristic like “small fingers” to hundreds of millions of “young Chinese women” seems reductive, to put it mildly. But this assumption had to come from somewhere, especially since Patrick McGee also mentioned it.

So I used both DuckDuckGo and Google to search for relevant keywords within a date range of the last fifteen years and excluding the past month or so. I could not quickly find anything of relevance; both thought I was looking for smartphones for use with small hands. So I thought this might be a good time to try ChatGPT. It immediately returned a quote from a 2014 report from an international labour organization, but did not tell me the title of the report or give me a link. I asked it for the title. ChatGPT responded it was actually a 2012 report that mentioned “nimble fingers” of young women being valuable, and gave me the title. But when I found copies of the report, there was no such quote or anything remotely relevant. I did, however, get the phrase “nimble fingers”, which sent me down the correct search path to finding articles documenting this longstanding prejudice.

Whether because of time crunch or laziness, it baffles me how law firms charging as much as they do have repeatedly failed to verify the claims generated by artificial intelligence tools.

Remember how, in 2023, the U.S. Office of the Director of National Intelligence published a report acknowledging mass stockpiling of third-party data it had purchased? It turns out there is so much private information about people it is creating a big headache for the intelligence agencies — not because of any laws or ethical qualms, but simply because of the sheer volume.

Sam Biddle, the Intercept:

The Office of the Director of National Intelligence is working on a system to centralize and “streamline” the use of commercially available information, or CAI, like location data derived from mobile ads, by American spy agencies, according to contract documents reviewed by The Intercept. The data portal will include information deemed by the ODNI as highly sensitive, that which can be “misused to cause substantial harm, embarrassment, and inconvenience to U.S. persons.” The documents state spy agencies will use the web portal not just to search through reams of private data, but also run them through artificial intelligence tools for further analysis.

Apparently, the plan is to feed all this data purchased from brokers and digital advertising companies into artificial intelligence systems. The DNI says it has rules about purchasing and using this data, so there is nothing to worry about.

By the way, the DNI’s Freedom of Information Act page was recently updated to remove links to released records and FOIA logs. They were live on May 5 but, as of May 16, those pages have been removed, and direct links no longer resolve either. Strange.

Update: The ODNI told me its “website is currently under construction”.

Thinking about the energy “footprint” of artificial intelligence products makes it a good time to re-link to Mark Kaufman’s excellent 2020 Mashable article in which he explores the idea of a carbon footprint:

The genius of the “carbon footprint” is that it gives us something to ostensibly do about the climate problem. No ordinary person can slash 1 billion tons of carbon dioxide emissions. But we can toss a plastic bottle into a recycling bin, carpool to work, or eat fewer cheeseburgers. “Psychologically we’re not built for big global transformations,” said John Cook, a cognitive scientist at the Center for Climate Change Communication at George Mason University. “It’s hard to wrap our head around it.”

Ogilvy & Mather, the marketers hired by British Petroleum, wove the overwhelming challenges inherent in transforming the dominant global energy system with manipulative tactics that made something intangible (carbon dioxide and methane — both potent greenhouse gases — are invisible), tangible. A footprint. Your footprint.

The framing of most of the A.I. articles I have seen thankfully shies away from ascribing individual blame; instead, they point to systemic flaws. This is preferable, but it still does little at the scale of electricity generation worldwide.

Casey Crownhart, MIT Technology Review:

Today, new analysis by MIT Technology Review provides an unprecedented and comprehensive look at how much energy the AI industry uses — down to a single query — to trace where its carbon footprint stands now, and where it’s headed, as AI barrels towards billions of daily users.

We spoke to two dozen experts measuring AI’s energy demands, evaluated different AI models and prompts, pored over hundreds of pages of projections and reports, and questioned top AI model makers about their plans. Ultimately, we found that the common understanding of AI’s energy consumption is full of holes.

This robust story comes on the heels of a series of other discussions about how much energy is used by A.I. products and services. Last month, for example, Andy Masley published a comparison of using ChatGPT against other common activities. The Economist ran another, and similar articles have been published before. As far as I can tell, they all come down to the same general conclusion: training A.I. models is energy-intensive, using A.I. products is not, lots of things we do online and offline have a greater impact on the environment, and the current energy use of A.I. is the lowest it will be from now on.

There are lots of good reasons to critique artificial intelligence. I am not sure its environmental impact is a particularly strong one; I think the true energy footprint of tech companies, of which A.I. is one part, is more relevant. Even more pressing, however, is our need to electrify our world as much as we can, and that will require a better and cleaner grid.

Albert Burneko, Defector:

Over this past weekend, the Chicago Sun-Times and Philadelphia Inquirer’s weekend editions included identical huge “Best of Summer” inserts; in the Inquirer’s digital edition the insert runs 54 pages, while the entire rest of the paper occupies 36. Before long, readers began noticing something strange about the “Summer reading list for 2025” section of the insert. Namely, that while the list includes some very well-known authors, most of the books listed in it do not exist.

This is the kind of fluffy insert long purchased by publishers to pad newspapers. In this case, it appears to be produced by Hearst Communications, which feels about right for something with Hearst’s name on it. I cannot imagine most publishers read these things very carefully; adding more work or responsibility is not the point of buying a guide like this.

What I found very funny today was watching the real-time reporting of this story in parallel with Google’s I/O presentation, at which it announced one artificial intelligence feature after another. On the one hand, A.I. features can help you buy event tickets or generate emails offering travel advice based on photos from trips you have taken. On the other, it is inventing books, experts, and diet advice.

Over the weekend, Mark Gurman and Drake Bennett, of Bloomberg, published a lengthy examination of Apple’s fumbling history of artificial intelligence features. It is surprisingly warm to John Giannandrea, who is portrayed as someone who tried hard to build talent and resources at Apple, only to hit walls imposed by other senior leadership figures.

There is a fair bit of news about Siri and Apple Intelligence, which you would know if you read MacRumors because it built four separate articles from various pieces of it. I guess rewriting the whole thing as a single article would have been unethical. Anyway, I thought the one about changing the default virtual assistant was notable:

Apple is planning to give users in the EU the ability to set a default voice assistant other than Siri, according to Bloomberg’s Mark Gurman and Drake Bennett.

[…]

Apple is working on this change in response to expanding EU regulations, the report said.

The way Joe Rossignol phrased this surprised me because I had assumed the DMA already covered default virtual assistants, but it seems none were designated. I can imagine how difficult it will be for third-party services to act as a drop-in replacement for Siri, too.

Federico Viticci, of MacStories, writing last month about Perplexity’s Voice Assistant feature, a piece which I am chopping up here to make a point but you should read in full anyhow:

[…] Perplexity’s iOS voice assistant isn’t using any “secret” tricks or hidden APIs: they’re simply integrating with existing frameworks and APIs that any third-party iOS developer can already work with. […]

[…] Then there are all the integrations that are exclusive to Siri, which Perplexity can’t implement because Apple doesn’t offer related developer APIs. Only Siri can run shortcuts, set timers, call App Intents, send messages, create notes, open and change device settings, and more. […]

There is a long way to go from this to a full Siri replacement, but I will be hugely envious of those who will be able to take advantage of changing the default. The state of Siri was embarrassing ten years ago. The condition it is in today is a testament to the power of unchangeable defaults and a lack of competition within the iOS universe.
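To make the asymmetry concrete, here is a minimal sketch — the intent and the app are hypothetical, but the pattern is the standard App Intents one — of how a developer exposes an action that Siri and Shortcuts can invoke. There is no equivalent hook a third-party assistant like Perplexity’s can call to trigger the same action in someone else’s app.

```swift
import AppIntents

// Hypothetical example: a timer app exposing a "start focus session" action.
struct StartFocusSessionIntent: AppIntent {
    static var title: LocalizedStringResource = "Start Focus Session"
    static var description = IntentDescription("Starts a focus timer of a given length.")

    @Parameter(title: "Minutes", default: 25)
    var minutes: Int

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // A real app would start the timer in its model layer here.
        return .result(dialog: "Started a \(minutes)-minute focus session.")
    }
}

// Registering a phrase makes the intent invocable by voice — but only through Siri.
struct FocusShortcuts: AppShortcutsProvider {
    static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: StartFocusSessionIntent(),
            phrases: ["Start a focus session in \(.applicationName)"],
            shortTitle: "Start Focus Session",
            systemImageName: "timer"
        )
    }
}
```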

Make no mistake, however: Apple is hardly about to “let” E.U. users switch from Siri to something else, as the MacRumors headline claims. It is doing so with the reasonable anticipation that Siri’s in-universe monopoly will fall on the wrong side of regulations already established. Good.

Austin Carr and Dina Bass, in a largely defensive but, somehow, still pretty positive profile of Microsoft CEO Satya Nadella in Bloomberg:

Working out of offices in London and Mountain View, California, Suleyman’s employees set about creating a version of Copilot designed for life away from the office. As they’d done at Inflection, they taught this Copilot empathy, humor and kindness. Do people want an emotional connection with the company that makes Excel? Suleyman thinks so. Or at least he believes it’ll be harder to switch to a competitor if the user sees Copilot as a friend or therapist.

For those in the camp seeing artificial intelligence technologies as one tool among many to be incorporated into larger products and services — and I count myself among them — the idea of a faux emotional companion as a standalone goal appears utterly ludicrous. I do not think we should view these tools in this way. They do not substitute for real-life friendships. And why would anyone trust something which can be so easily swayed by bad actors? Using an emotional connection as a product differentiator or a matter of lock-in seems entirely unethical.

Also:

Nadella points out that Microsoft was once less relevant in videoconferencing too. “Everybody would say, ‘Hey, Zoom, Zoom, Zoom,’” Nadella says. “We won that in the enterprise.” (A Zoom spokesperson declined to comment.)

It did so by illegally advantaging Teams, and what Nadella seems to be saying here is that it would again be happy to use its disproportionate power to push its version of other new technologies.

Mark Gurman, Bloomberg:

The company is planning an AI-powered battery management mode for iOS 19, an iPhone software update due in September, according to people with knowledge of the matter. The enhancement will analyze how a person uses their device and make adjustments to conserve energy, said the people, who asked not to be identified because the service hasn’t been announced.

Set aside for now the question of how the notoriously power-hungry group of technologies we call “artificial intelligence” can improve battery life. Gurman says this is “part of the Apple Intelligence platform”, but also says it “will be available for all iPhones that have iOS 19”. This is confusing. Apple has so far marketed Apple Intelligence as being available on only a subset of devices supporting iOS 18. Either Apple’s delineation of “Apple Intelligence” features is about to get even fuzzier, or one of the two statements Gurman made is going to be wrong.

Also:

Besides the AI additions and interface changes, Apple is pushing engineers to ensure that this year’s releases are more functional and less glitchy. Past upgrades were criticized for bugs and features that sometimes didn’t work properly.

I will believe it when I see it. But how is this not the highest priority every year?

Kurt Wagner and Katie Roof, Bloomberg:

Elon Musk said his xAI artificial intelligence startup has acquired the X platform, which he also controls, at a valuation of $33 billion, marking a surprise twist for the social network formerly known as Twitter.

This feels like it has to be part of some kind of financial crime, right? Like, I am sure it is not; I am sure this is just a normal thing businesses do that only feels criminal, like how they move money around the world to avoid taxes.

Wagner and Roof:

The deal gives the new combined entity, called XAI Holdings, a value of more than $100 billion, not including the debt, according to a person familiar with the arrangement, who asked not to be identified because the terms weren’t public. Morgan Stanley was the sole banker on the deal, representing both sides, other people said.

For perspective, that is roughly the current value of Lockheed Martin, Rio Tinto — one of the world’s largest mining businesses — and Starbucks. All of those companies make real products with real demand — unfortunately so, in the case of the first. xAI has exactly one external customer today. And it is not like unpleasant social media seems to be a booming business.

Kate Conger and Lauren Hirsch, New York Times:

This month, X continued to struggle to hit its revenue targets, according to an internal email seen by The New York Times. As of March 3, X had served $91 million of ads this year, the message said, well below its first-quarter target of $153 million.

That figure includes the spending of several large advertisers. For comparison, in the same quarter in the pre-Musk era, Twitter generated over a billion dollars in advertising revenue.

I am begging for Matt Levine to explain this to me.

There is a long line of articles questioning Apple’s ability to deliver on artificial intelligence because of its position on data privacy. Today, we got another in the form of a newsletter.

Reed Albergotti, Semafor:

Meanwhile, Apple was focused on vertically integrating, designing its own chips, modems, and other components to improve iPhone margins. It was using machine learning on small-scale projects, like improving its camera algorithms.

[…]

Without their ads businesses, companies like Google and Meta wouldn’t have built the ecosystems and cultures required to make them AI powerhouses, and that environment changed the way their CEOs saw the world.

Again, I will emphasize this is a newsletter. It may seem like an article from a prestige publisher that prides itself on “separat[ing] the facts from our views”, but you might notice how, aside from citing some quotes and linking to ads, none of Albergotti’s substantive claims are sourced. This is just riffing.

I remain skeptical. Albergotti frames this as both a mindset shift and a necessity for advertising companies like Google and Meta. But the company synonymous with the A.I. boom, OpenAI, does not have the same business model. Besides, Apple behaves like other A.I. firms by scraping the web and training models on massive amounts of data. The evidence for this theory seems pretty thin to me.

But perhaps a reluctance to be invasive and creepy is one reason why personalized Siri features have been delayed. I hope Apple does not begin to mimic its peers in this regard; privacy should not be sacrificed. I think it is silly to be dependent on corporate choices rather than legislation to determine this, but that is the world some of us live in.

Let us concede the point anyhow, since it suggests a role Apple could fill by providing an architecture for third-party A.I. on its products. It does not need to deliver everything to end users; it can focus on building a great platform. Albergotti might sneeze at “designing its own chips […] to improve iPhone margins”, which I am sure was one goal, but it has paid off in ridiculously powerful Macs perfect for A.I. workflows. And, besides, it has already built some kind of plugin architecture into Apple Intelligence because it has integrated ChatGPT. There is no way for other providers to add their own extension — not yet, anyhow — but the system is there.
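To be clear about what I mean by a plugin architecture, here is a rough sketch of the kind of interface Apple could expose so providers other than OpenAI could slot into the same hand-off Apple Intelligence already performs for ChatGPT. This is entirely hypothetical — it is not an actual Apple API, and every name in it is invented.

```swift
import Foundation

// Entirely hypothetical sketch; Apple has published no such API.
// It only illustrates the shape a third-party "model provider"
// extension point might take.

struct AssistantRequest {
    let prompt: String
    let attachments: [Data]   // e.g. screenshots or documents the user shared
}

struct AssistantResponse {
    let text: String
}

protocol AssistantModelProvider {
    /// User-visible name, e.g. "ChatGPT" or "Claude".
    var displayName: String { get }

    /// Whether requests stay on the device. The system could surface this
    /// in a consent prompt before routing anything to the provider.
    var processesRequestsOnDevice: Bool { get }

    /// Handle a request the system has decided to hand off.
    func respond(to request: AssistantRequest) async throws -> AssistantResponse
}
```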

Gus Mueller:

The crux of the issue in my mind is this: Apple has a lot of good ideas, but they don’t have a monopoly on them. I would like some other folks to come in and try their ideas out. I would like things to advance at the pace of the industry, and not Apple’s. Maybe with a blessed system in place, Apple could watch and see how people use LLMs and other generative models (instead of giving us Genmoji that look like something Fisher-Price would make). And maybe open up the existing Apple-only models to developers. There are locally installed image processing models that I would love to take advantage of in my apps.

Via Federico Viticci, MacStories:

Which brings me to my second point. The other feature that I could see Apple market for a “ChatGPT/Claude via Apple Intelligence” developer package is privacy and data retention policies. I hear from so many developers these days who, beyond pricing alone, are hesitant toward integrating third-party AI providers into their apps because they don’t trust their data and privacy policies, or perhaps are not at ease with U.S.-based servers powering the popular AI companies these days. It’s a legitimate concern that results in lots of potentially good app ideas being left on the table.

One of Apple’s specialties is in improving the experience of using many of the same technologies as everyone else. I would like to see that in A.I., too, but I have been disappointed by its lacklustre efforts so far. Even long-running projects where it has had time to learn and grow have not paid off, as anyone can see in Siri’s legacy.

What if you could replace these features? What if Apple’s operating systems were great platforms by which users could try third-party A.I. services and find the ones that fit them best? What if Apple could provide certain privacy promises, too? I bet users would want to try alternatives in a heartbeat. Apple ought to welcome the challenge.

Benedict Evans:

That takes us to xR, and to AI. These are fields where the tech is fundamental, and where there are real, important Apple kinds of questions, where Apple really should be able to do something different. And yet, with the Vision Pro Apple stumbled, and then with AI it’s fallen flat on its face. This is a concern.

The Vision Pro shipped as promised and works as advertised. But it’s also both too heavy and bulky and far too expensive to be a viable mass-market consumer product. Hugo Barra called it an over-engineered developer kit — you could also call it an experiment, or a preview or a concept. […]

The main problem, I think, with the reception of the Vision Pro is that it was passed through the same marketing lens as Apple uses to frame all its products. I have no idea if Apple considers the sales of this experiment acceptable, the tepid developer adoption predictable, or the skeptical press understandable. However, if you believe the math on display production and estimated sales figures, they more-or-less match.

Of course, as Evans points out, Apple does not ship experiments:

The new Siri that’s been delayed this week is the mirror image of this. […]

However, it clearly is a problem that the Apple execution machine broke badly enough for Apple to spend an hour at WWDC and a bunch of TV commercials talking about vapourware that it didn’t appear to understand was vapourware. The decision to launch the Vision Pro looks like a related failure. It’s a big problem that this is late, but it’s an equally big problem that Apple thought it was almost ready.

Unlike the Siri feature delay, I do not think the Vision Pro’s launch affects the company’s credibility at all. It can keep pushing that thing and trying to turn it into something more mass-market. This Siri stuff is going to make me look at WWDC in a whole different light this year.

Mark Gurman, Bloomberg:

Chief Executive Officer Tim Cook has lost confidence in the ability of AI head John Giannandrea to execute on product development, so he’s moving over another top executive to help: Vision Pro creator Mike Rockwell. In a new role, Rockwell will be in charge of the Siri virtual assistant, according to the people, who asked not to be identified because the moves haven’t been announced.

[…]

Rockwell is known as the brains behind the Vision Pro, which is considered a technical marvel but not a commercial hit. Getting the headset to market required a number of technical breakthroughs, some of which leveraged forms of artificial intelligence. He is now moving away from the Vision Pro at a time when that unit is struggling to plot a future for the product.

If you had no context for this decision, it looks like Rockwell is being moved off Apple’s hot new product and onto a piece of software that perennially disappoints. It looks like a demotion. That is how badly Siri needs a shakeup.

Giannandrea will remain at the company, even with Rockwell taking over Siri. An abrupt departure would signal publicly that the AI efforts have been tumultuous — something Apple is reluctant to acknowledge. Giannandrea’s other responsibilities include oversight of research, testing and technologies related to AI. The company also has a team reporting to Giannandrea investigating robotics.

I figured as much. Gurman does not clarify in this article how much of Apple Intelligence falls under Giannandrea’s purview, and how much is part of the “Siri” stuff that is being transferred to Rockwell. It does not sound as though Giannandrea will have no further Apple Intelligence responsibilities — yet — but the high-profile public-facing stuff is now overseen by Rockwell and, ultimately, Craig Federighi.

Molly White:

Instead of worrying about “wait, not like that”, I think we need to reframe the conversation to “wait, not only like that” or “wait, not in ways that threaten open access itself”. The true threat from AI models training on open access material is not that more people may access knowledge thanks to new modalities. It’s that those models may stifle Wikipedia and other free knowledge repositories, benefiting from the labor, money, and care that goes into supporting them while also bleeding them dry. It’s that trillion dollar companies become the sole arbiters of access to knowledge after subsuming the painstaking work of those who made knowledge free to all, killing those projects in the process.

This is such a terrific and thoughtful essay. I am suspicious of using more aggressive intellectual property laws to contain artificial intelligence companies, but there is a clear power imbalance between individuals and the businesses helping themselves to their — oh, who am I kidding? Our — work in bulk.

Josh Sisco and Davey Alba, Bloomberg, earlier this week:

Google is urging officials at President Donald Trump’s Justice Department to back away from a push to break up the search engine company, citing national security concerns, according to people familiar with the discussions.

[…]

Google’s argument isn’t new, and it has previously raised these concerns in public in response to antitrust pressure from regulators and lawmakers. But the company is re-upping the issue in discussions with officials at the department under Trump because the case is in its second stage, known as the “remedy” phase, during which the court can impose sweeping changes on Google’s business.

Ryan Whitwam, Ars Technica:

The government’s 2024 request also sought to have Google’s investment in AI firms curtailed even though this isn’t directly related to search. If, like Google, you believe leadership in AI is important to the future of the world, limiting its investments could also affect national security. But in November, Mehta suggested he was open to considering AI remedies because “the recent emergence of AI products that are intended to mimic the functionality of search engines” is rapidly shifting the search market.

Jody Godoy, Reuters:

The U.S. Department of Justice on Friday dropped a proposal to force Alphabet’s Google to sell its investments in artificial intelligence companies, including OpenAI competitor Anthropic, to boost competition in online search.

[…]

Many of the measures prosecutors proposed in November remain intact with a few tweaks.

For example, a requirement that Google share search query data with competitors now says that Google can charge a marginal fee for access and that the competitors must not pose a national security risk.

The Department of Justice included in its filings today a version of the proposed judgement with revisions shown (PDF). Google’s proposed judgement (PDF) is, rather predictably, much shorter. It sounds like its national security arguments swayed the prosecution, however.