Search Results for: "artificial intelligence"

Kurt Wagner and Katie Roof, Bloomberg:

Elon Musk said his xAI artificial intelligence startup has acquired the X platform, which he also controls, at a valuation of $33 billion, marking a surprise twist for the social network formerly known as Twitter.

This feels like it has to be part of some kind of financial crime, right? Like, I am sure it is not; I am sure this is just a normal thing businesses do that only feels criminal, like how they move money around the world to avoid taxes.

Wagner and Roof:

The deal gives the new combined entity, called XAI Holdings, a value of more than $100 billion, not including the debt, according to a person familiar with the arrangement, who asked not to be identified because the terms weren’t public. Morgan Stanley was the sole banker on the deal, representing both sides, other people said.

For perspective, that is around the current value of Lockheed Martin, Rio Tinto — one of the world’s largest mining businesses — and Starbucks. All of those companies make real products with real demand — unfortunately so, in the case of the first. xAI has exactly one external customer today. And it is not as though unpleasant social media seems to be a booming business.

Kate Conger and Lauren Hirsch, New York Times:

This month, X continued to struggle to hit its revenue targets, according to an internal email seen by The New York Times. As of March 3, X had served $91 million of ads this year, the message said, well below its first-quarter target of $153 million.

That figure includes the spending of several large advertisers. For comparison, in the same quarter of the pre-Musk era, Twitter generated over a billion dollars in advertising revenue.

I am begging for Matt Levine to explain this to me.

There is a long line of articles questioning Apple’s ability to deliver on artificial intelligence because of its position on data privacy. Today, we got another in the form of a newsletter.

Reed Albergotti, Semafor:

Meanwhile, Apple was focused on vertically integrating, designing its own chips, modems, and other components to improve iPhone margins. It was using machine learning on small-scale projects, like improving its camera algorithms.

[…]

Without their ads businesses, companies like Google and Meta wouldn’t have built the ecosystems and cultures required to make them AI powerhouses, and that environment changed the way their CEOs saw the world.

Again, I will emphasize this is a newsletter. It may seem like an article from a prestige publisher that prides itself on “separat[ing] the facts from our views”, but you might notice how, aside from citing some quotes and linking to ads, none of Albergotti’s substantive claims are sourced. This is just riffing.

I remain skeptical. Albergotti frames this as both a mindset shift and a necessity for advertising companies like Google and Meta. But the company synonymous with the A.I. boom, OpenAI, does not have the same business model. Besides, Apple behaves like other A.I. firms by scraping the web and training models on massive amounts of data. The evidence for this theory seems pretty thin to me.

But perhaps a reluctance to be invasive and creepy is one reason why personalized Siri features have been delayed. I hope Apple does not begin to mimic its peers in this regard; privacy should not be sacrificed. I think it is silly to be dependent on corporate choices rather than legislation to determine this, but that is the world some of us live in.

Let us concede the point anyhow, since it suggests a role Apple could fill by providing an architecture for third-party A.I. on its products. It does not need to deliver everything to end users; it can focus on building a great platform. Albergotti might sneeze at “designing its own chips […] to improve iPhone margins”, which I am sure was one goal, but it has paid off in ridiculously powerful Macs perfect for A.I. workflows. And, besides, it has already built some kind of plugin architecture into Apple Intelligence because it has integrated ChatGPT. There is no way for other providers to add their own extension — not yet, anyhow — but the system is there.

Gus Mueller:

The crux of the issue in my mind is this: Apple has a lot of good ideas, but they don’t have a monopoly on them. I would like some other folks to come in and try their ideas out. I would like things to advance at the pace of the industry, and not Apple’s. Maybe with a blessed system in place, Apple could watch and see how people use LLMs and other generative models (instead of giving us Genmoji that look like something Fisher-Price would make). And maybe open up the existing Apple-only models to developers. There are locally installed image processing models that I would love to take advantage of in my apps.

Via Federico Viticci, MacStories:

Which brings me to my second point. The other feature that I could see Apple market for a “ChatGPT/Claude via Apple Intelligence” developer package is privacy and data retention policies. I hear from so many developers these days who, beyond pricing alone, are hesitant toward integrating third-party AI providers into their apps because they don’t trust their data and privacy policies, or perhaps are not at ease with U.S.-based servers powering the popular AI companies these days. It’s a legitimate concern that results in lots of potentially good app ideas being left on the table.

One of Apple’s specialties is in improving the experience of using many of the same technologies as everyone else. I would like to see that in A.I., too, but I have been disappointed by its lacklustre efforts so far. Even long-running projects where it has had time to learn and grow have not paid off, as anyone can see in Siri’s legacy.

What if you could replace these features? What if Apple’s operating systems were great platforms by which users could try third-party A.I. services and find the ones that fit them best? What if Apple could provide certain privacy promises, too? I bet users would want to try alternatives in a heartbeat. Apple ought to welcome the challenge.

Benedict Evans:

That takes us to xR, and to AI. These are fields where the tech is fundamental, and where there are real, important Apple kinds of questions, where Apple really should be able to do something different. And yet, with the Vision Pro Apple stumbled, and then with AI it’s fallen flat on its face. This is a concern.

The Vision Pro shipped as promised and works as advertised. But it’s also both too heavy and bulky and far too expensive to be a viable mass-market consumer product. Hugo Barra called it an over-engineered developer kit — you could also call it an experiment, or a preview or a concept. […]

The main problem, I think, with the reception of the Vision Pro is that it was passed through the same marketing lens as Apple uses to frame all its products. I have no idea if Apple considers the sales of this experiment acceptable, the tepid developer adoption predictable, or the skeptical press understandable. However, if you believe the math on display production and estimated sales figures, they more-or-less match.

Of course, as Evans points out, Apple does not ship experiments:

The new Siri that’s been delayed this week is the mirror image of this. […]

However, it clearly is a problem that the Apple execution machine broke badly enough for Apple to spend an hour at WWDC and a bunch of TV commercials talking about vapourware that it didn’t appear to understand was vapourware. The decision to launch the Vision Pro looks like a related failure. It’s a big problem that this is late, but it’s an equally big problem that Apple thought it was almost ready.

Unlike the Siri feature delay, I do not think the Vision Pro’s launch affects the company’s credibility at all. It can keep pushing that thing and trying to turn it into something more mass-market. This Siri stuff is going to make me look at WWDC in a whole different light this year.

Mark Gurman, Bloomberg:

Chief Executive Officer Tim Cook has lost confidence in the ability of AI head John Giannandrea to execute on product development, so he’s moving over another top executive to help: Vision Pro creator Mike Rockwell. In a new role, Rockwell will be in charge of the Siri virtual assistant, according to the people, who asked not to be identified because the moves haven’t been announced.

[…]

Rockwell is known as the brains behind the Vision Pro, which is considered a technical marvel but not a commercial hit. Getting the headset to market required a number of technical breakthroughs, some of which leveraged forms of artificial intelligence. He is now moving away from the Vision Pro at a time when that unit is struggling to plot a future for the product.

If you had no context for this decision, it looks like Rockwell is being moved off Apple’s hot new product and onto a piece of software that perennially disappoints. It looks like a demotion. That is how badly Siri needs a shakeup.

Giannandrea will remain at the company, even with Rockwell taking over Siri. An abrupt departure would signal publicly that the AI efforts have been tumultuous — something Apple is reluctant to acknowledge. Giannandrea’s other responsibilities include oversight of research, testing and technologies related to AI. The company also has a team reporting to Giannandrea investigating robotics.

I figured as much. Gurman does not clarify in this article how much of Apple Intelligence falls under Giannandrea’s purview, and how much is part of the “Siri” stuff being transferred to Rockwell. It does not sound as though Giannandrea will have no further Apple Intelligence responsibilities — yet — but the high-profile, public-facing work is now overseen by Rockwell and, ultimately, Craig Federighi.

Molly White:

Instead of worrying about “wait, not like that”, I think we need to reframe the conversation to “wait, not only like that” or “wait, not in ways that threaten open access itself”. The true threat from AI models training on open access material is not that more people may access knowledge thanks to new modalities. It’s that those models may stifle Wikipedia and other free knowledge repositories, benefiting from the labor, money, and care that goes into supporting them while also bleeding them dry. It’s that trillion dollar companies become the sole arbiters of access to knowledge after subsuming the painstaking work of those who made knowledge free to all, killing those projects in the process.

This is such a terrific and thoughtful essay. I am suspicious of using more aggressive intellectual property laws to contain artificial intelligence companies, but there is a clear power imbalance between individuals and the businesses helping themselves to their — oh, who am I kidding? Our — work in bulk.

Josh Sisco and Davey Alba, Bloomberg, earlier this week:

Google is urging officials at President Donald Trump’s Justice Department to back away from a push to break up the search engine company, citing national security concerns, according to people familiar with the discussions.

[…]

Google’s argument isn’t new, and it has previously raised these concerns in public in response to antitrust pressure from regulators and lawmakers. But the company is re-upping the issue in discussions with officials at the department under Trump because the case is in its second stage, known as the “remedy” phase, during which the court can impose sweeping changes on Google’s business.

Ryan Whitwam, Ars Technica:

The government’s 2024 request also sought to have Google’s investment in AI firms curtailed even though this isn’t directly related to search. If, like Google, you believe leadership in AI is important to the future of the world, limiting its investments could also affect national security. But in November, Mehta suggested he was open to considering AI remedies because “the recent emergence of AI products that are intended to mimic the functionality of search engines” is rapidly shifting the search market.

Jody Godoy, Reuters:

The U.S. Department of Justice on Friday dropped a proposal to force Alphabet’s Google to sell its investments in artificial intelligence companies, including OpenAI competitor Anthropic, to boost competition in online search.

[…]

Many of the measures prosecutors proposed in November remain intact with a few tweaks.

For example, a requirement that Google share search query data with competitors now says that Google can charge a marginal fee for access and that the competitors must not pose a national security risk.

The Department of Justice included in its filings today a version of the proposed judgement with revisions shown (PDF). Google’s proposed judgement (PDF) is, rather predictably, much shorter. It sounds like its national security arguments swayed the prosecution, however.

Sean Monahan:

People love to quote the third of Arthur C. Clarke’s laws — “Any sufficiently advanced technology is indistinguishable from magic.” And almost always with a positive spin. The Disneyfied language makes it sound wondrous. But let’s reconsider the quote with a different word:

Any sufficiently advanced technology is indistinguishable from witchcraft.

This is far from the first article aligning new technologies — particularly artificial intelligence — with magical thinking. Just searching some related keywords yields at least three academic papers, plus loads of articles with similar framing. Some include this same Clarke quote for obvious reasons.

Monahan, though, ties it to a broader social condition in which magical thinking is on the rise, and there is something to think about in this. I am not necessarily in agreement — I think this is more of an interesting exercise in language than it is describing a broader trend — but I have also just finished reading “If It Sounds Like a Quack” and, so, my head might be in a funny space.

Apple in 2017:

Apple has committed to investing at least $1 billion with US-based companies as part of the [Advanced Manufacturing Fund], which is designed to foster innovative production and highly skilled jobs that will help lay the foundation for a new era of technology-driven manufacturing in the US.

This note was part of a press release announcing a large investment in Corning from this fund, and it began something of a tradition. While Apple has occasionally issued press announcements for similar investments at other times of the year, this one — from the early days of the first Trump administration — was the first in a two-part debut of what has become a predictable update.

Apple in 2018 issued what is effectively the second part:

Combining new investments and Apple’s current pace of spending with domestic suppliers and manufacturers — an estimated $55 billion for 2018 — Apple’s direct contribution to the US economy will be more than $350 billion over the next five years, not including Apple’s ongoing tax payments, the tax revenues generated from employees’ wages and the sale of Apple products.

Planned capital expenditures in the US, investments in American manufacturing over five years and a record tax payment upon repatriation of overseas profits will account for approximately $75 billion of Apple’s direct contribution.

This one was issued a few months after the Tax Cuts and Jobs Act took effect, containing tax repatriation policies for which Apple successfully lobbied.

Apple in April 2021, not long after the inauguration of Joe Biden:

Apple today announced an acceleration of its US investments, with plans to make new contributions of more than $430 billion and add 20,000 new jobs across the country over the next five years. Over the past three years, Apple’s contributions in the US have significantly outpaced the company’s original five-year goal of $350 billion set in 2018. Apple is now raising its level of commitment by 20 percent over the next five years, supporting American innovation and driving economic benefits in every state. This includes tens of billions of dollars for next-generation silicon development and 5G innovation across nine US states.

Apple today:

Apple today announced its largest-ever spend commitment, with plans to spend and invest more than $500 billion in the U.S. over the next four years. This new pledge builds on Apple’s long history of investing in American innovation and advanced high-skilled manufacturing, and will support a wide range of initiatives that focus on artificial intelligence, silicon engineering, and skills development for students and workers across the country.

These press releases are not for the wider public; they are not even directly for investors. They are for politicians and, in particular, whoever is the U.S. president. Apple issued nothing like these releases in 2019 or 2020, and not again from 2022–2024. It is plausible all of these announced investments would have happened regardless of who was president. The press releases are post-election reminders that Apple is a large business with the power to shape swathes of the U.S. and world economy, and it would prefer its products avoid regulations and tariffs.

It would be useful if somebody were keeping track of all the company’s promises, though.

Remember when new technology felt stagnant? All the stuff we use — laptops, smartphones, watches, headphones — coalesced around a similar design language. Everything became iterative or, in more euphemistic terms, mature. Attempts to find a new thing to excite people mostly failed. Remember how everything would change with 5G? How about NFTs? How is your metaverse house? The world’s most powerful hype machine could not make any of these things stick.

This is not necessarily a problem in the scope of the world. There should be a point at which any technology settles into a recognizable form and function. These products are, ideally, utilitarian — they enable us to do other stuff.

But here we are in 2025 with breakthroughs in artificial intelligence and, apparently, quantum computing and physics itself. The former is something I have written about at length already because it has become adopted so quickly and so comprehensively — whether we like it or not — that it is impossible to ignore. But the news in quantum computers is different because it is much, much harder for me to grasp. I feel like I should be fascinated, and I suppose I am, but mainly because I find it all so confusing.

This is not an explainer-type article. This is me working things out for myself. Join me. I will not get far.

Hartmut Neven, of Google, in December:

Today I’m delighted to announce Willow, our latest quantum chip. Willow has state-of-the-art performance across a number of metrics, enabling two major achievements.

  • The first is that Willow can reduce errors exponentially as we scale up using more qubits. This cracks a key challenge in quantum error correction that the field has pursued for almost 30 years.

  • Second, Willow performed a standard benchmark computation in under five minutes that would take one of today’s fastest supercomputers 10 septillion (that is, 10²⁵) years — a number that vastly exceeds the age of the Universe.

Catherine Bolgar, Microsoft:

Microsoft today introduced Majorana 1, the world’s first quantum chip powered by a new Topological Core architecture that it expects will realize quantum computers capable of solving meaningful, industrial-scale problems in years, not decades.

It leverages the world’s first topoconductor, a breakthrough type of material which can observe and control Majorana particles to produce more reliable and scalable qubits, which are the building blocks for quantum computers.

Microsoft says it created a new state of matter and observed a particular kind of particle, both for the first time. In a twelve-minute video, the company defines this new era — called the “quantum age” — as a literal successor to the Stone Age and the Bronze Age. Jeez.

There is hype, and then there is hype. This is the latter. Even if it is backed by facts — I have no reason to suspect Microsoft is lying in large part because, to reiterate, I do not know anything about this — and even if Microsoft deserves this much attention, it is a lot. Maybe I have become jaded by one too many ostensibly world-changing product launches.

There is good reason to believe the excitement shown by Google and Microsoft is not pure hyperbole. The problem is neither company is effective at explaining why. As of writing the first sentence of this piece, my knowledge of quantum computers was only that they can be much, much, much faster than any computer today, thanks to the unique properties of quantum mechanics and, specifically, quantum bits. That is basically all. But what does a wildly fast computer enable in the real world? My brain can only grasp the consumer-level stuff I use, so I am reminded of something I wrote when the first Mac Studio was announced a few years ago: what utility does speed have?

I am clearly thinking in terms far too small. Domenico Vicinanza wrote a good piece for the Conversation earlier this year:

Imagine being able to explore every possible solution to a problem all at once, instead of one at a time. It would allow you to navigate your way through a maze by simultaneously trying all possible paths at the same time to find the right one. Quantum computers are therefore incredibly fast at finding optimal solutions, such as identifying the shortest path, the quickest way.

This explanation helped me — not a lot, but a little bit. What I remain confused by are the examples in the announcements from Google and Microsoft. Why quantum computing could help “discover new medicines” or “lead to self-healing materials” seems like it should be obvious to anyone reading, but I do not get it.

I am suspicious in part because technology companies routinely draw links between some new buzzy thing they are selling and globally significant effects: alleviating hunger, reducing waste, fixing our climate crisis, developing alternative energy sources, and — most of all — revolutionizing medical care. Search the web for (hyped technology) cancer and you can find this kind of breathless revolutionary language drawing a clear line between cancer care and 5G, 6G, blockchain, DAOs, the metaverse, NFTs, and Web3 as a whole. This likely says as much about the insidious industries that take advantage of legitimate qualms with the medical system and fears of cancer as it does about the technologies themselves, but it is nevertheless a pattern with these new technologies.

I am not even saying these promises are always wrong. Technological advancement has surely led to improvements in cancer care, among other kinds of medical treatment.

I have no big goal for this post — no grand theme or message. I am curious about the promises of quantum computers for the same reason I am curious about all kinds of inventions. I hope they work in the way Google, Microsoft, and other inventors in this space seem to believe. It would be great if some of the world’s neglected diseases can be cured and we could find ways to fix our climate.

But — and this is a true story — I read through Microsoft’s various announcement pieces and watched that video while I was waiting on OneDrive to work properly. I struggle to understand how the same company that makes a bad file syncing utility is also creating new states of matter. My brain is fully cooked.

Katie McQue, Laís Martins, Ananya Bhattacharya, and Carien du Plessis, Rest of World:

Brazil’s AI bill is one window into a global effort to define the role that artificial intelligence will play in democratic societies. Large Silicon Valley companies involved in AI software — including Google, Microsoft, Meta, Amazon Web Services, and OpenAI — have mounted pushback to proposals for comprehensive AI regulation in the EU, Canada, and California. 

Hany Farid, former dean of the UC Berkeley School of Information and a prominent regulation advocate who often testifies at government hearings on the tech sector, told Rest of World that lobbying by big U.S. companies over AI in Western nations has been intense. “They are trying to kill every [piece of] legislation or write it in their favor,” he said. “It’s fierce.”

Meanwhile, outside the West, where AI regulations are often more nascent, these same companies have received a red-carpet welcome from many politicians eager for investment. As Aakrit Vaish, an adviser to the Indian government’s AI initiative, told Rest of World: “Regulation is actually not even a conversation.”

It sure seems as though competition is so intense among the biggest players that concerns about risk have been suspended. It is an unfortunate reality that business friendliness is code for a lax regulatory environment since we all have to endure the products of these corporations. It is not as though Europe and Canada have not produced successful A.I. companies, either.

Cristina Criddle and Hannah Murphy, Financial Times:

Meta is betting that characters generated by artificial intelligence will fill its social media platforms in the next few years as it looks to the fast-developing technology to drive engagement with its 3bn users.

[…]

“They’ll have bios and profile pictures and be able to generate and share content powered by AI on the platform … that’s where we see all of this going,” he [Meta’s Connor Hayes] added.

Imagine opening any of Meta’s products after this has taken over. Imagine how little you will see from the friends and family members you actually care about. Imagine how much slop you will be greeted with — a feed alternating between slop, suggested posts, and ads, with just enough of what you actually opened the app to see sprinkled in. Now consider how this will affect people who are more committed to Meta’s products, whether for economic reasons or social cohesion.

A big problem for Meta is that it is institutionally very dumb. I do not want to oversell this too much, but I truly believe this is the case. There are lots of smart people working there and its leadership clearly understands something about how people use social media.

But there is a vast sense of dumb in its attempts to deliver the next generation of its products. Its social media products are dependent on “engagement”, which is sometimes a product of users’ actual interest and, at other times, an artifact of Meta’s success or failure in controlling what they see. Maybe its “metaverse” will be interesting one day, but it seems deeply embarrassing so far.

Emily Rae Pasiuk, CBC News:

Celebrity investor Kevin O’Leary says he is planning to bankroll and build what he says will be the world’s largest artificial intelligence data centre.

The proposal — named Wonder Valley — is slated to be built in the District of Greenview, near Grande Prairie, Alta.

If you believe this pitch for a seventy billion dollar A.I. data centre campus, you are a mark. A sucker. You have been duped. Bamboozled. Hoodwinked.

Our premier is very excited.

Max Fawcett, the National Observer:

The website in question features a video that has to be seen to be believed — and even then, only as a punchline. The imagined bucolic landscape of rolling hills, gentle streams and friendly wildlife feels like a Disney-fied version of northern California rather than northern Alberta. The campus itself is straight out of Silicon Valley.

The narration is even more hilariously unrealistic, if that’s even possible. “There’s a valley,” we’re told, “where technology and nature will come together in perfect harmony. Where the future will pulse at every turn. Bursting with nature and wildlife, nurturing the present as it embraces the future. The valley where the brightest minds unite, where electrons, machines and data come together in an algorithmic dance.”

If moneyed knuckleheads want to try building this thing, that should be between them and the people of the Municipal District of Greenview — but leave the rest of us out of this. The province should not subsidize or contribute to it, but it is obviously enamoured with the new natural gas power plants to be constructed (emphasis mine):

“The GIG’s ideal cold-weather climate, a highly skilled labor force, Alberta’s pro-business policies and attractive tax regime make the GIG the perfect site for this project. We want to deliver transformative economic impact and the lowest possible carbon emissions afforded to us by the quality of gas in the area, our efficient design and the potential to add Geothermal power as well. Together, these factors create a blueprint for sustainability and success that can be recognized worldwide. This is the Greenview Model” said Mr. O’Leary.

Transparently dishonest.

Wired has been publishing a series of predictions about the coming state of the world. Unsurprisingly, most concern artificial intelligence — how it might impact health, music, our choices, the climate, and more. It is an issue of the magazine Wired describes as its “annual trends briefing”, but it is also kind of like a hundred-page op-ed section. It is a mixed bag.

A.I. critic Gary Marcus contributed a short piece about what he sees as the pointlessness of generative A.I. — and it is weak. That is not necessarily because of any specific argument, but because of how unfocused it is despite its brevity. It opens with a short history of OpenAI’s models, with Marcus writing “Generative A.I. doesn’t actually work that well, and maybe it never will”. Thesis established, he begins building the case:

Fundamentally, the engine of generative AI is fill-in-the-blanks, or what I like to call “autocomplete on steroids.” Such systems are great at predicting what might sound good or plausible in a given context, but not at understanding at a deeper level what they are saying; an AI is constitutionally incapable of fact-checking its own work. This has led to massive problems with “hallucination,” in which the system asserts, without qualification, things that aren’t true, while inserting boneheaded errors on everything from arithmetic to science. As they say in the military: “frequently wrong, never in doubt.”

Systems that are frequently wrong and never in doubt make for fabulous demos, but are often lousy products in themselves. If 2023 was the year of AI hype, 2024 has been the year of AI disillusionment. Something that I argued in August 2023, to initial skepticism, has been felt more frequently: generative AI might turn out to be a dud. The profits aren’t there — estimates suggest that OpenAI’s 2024 operating loss may be $5 billion — and the valuation of more than $80 billion doesn’t line up with the lack of profits. Meanwhile, many customers seem disappointed with what they can actually do with ChatGPT, relative to the extraordinarily high initial expectations that had become commonplace.

Marcus’ financial figures here are bizarre and incorrect. He quotes a Yahoo News-syndicated copy of a PC Gamer article, which references a Windows Central repackaging of a paywalled report by the Information — a wholly unnecessary game of telephone when the New York Times obtained financial documents with the same conclusion, and which were confirmed by CNBC. The summary of that Information article — that OpenAI “may run out of cash in 12 months, unless they raise more [money]”, as Marcus wrote on X — is somewhat irrelevant now after OpenAI proceeded to raise $6.6 billion at a staggering valuation of $157 billion.

I will leave analysis of these financials to MBA types. Maybe OpenAI is like Amazon, which took eight years to turn its first profit, or Uber, which took fourteen years. Maybe it is unlike either and there is no way to make this enterprise profitable.

None of that actually matters, though, when considering Marcus’ actual argument. He posits that OpenAI is financially unsound as-is, and that Meta’s language models are free. Unless OpenAI “come outs [sic] with some major advance worthy of the name of GPT-5 before the end of 2025”, the company will be in a perilous state and, “since it is the poster child for the whole field, the entire thing may well soon go bust”. But hold on: we have gone from ChatGPT disappointing “many customers” — no citation provided — to the entire concept of generative A.I. being a dead end. None of this adds up.

The most obvious problem is that generative A.I. is not just ChatGPT or other similar chat bots; it is an entire genre of features. I wrote earlier this month about some of the features I use regularly, like Generative Remove in Adobe Lightroom Classic. As far as I know, this is no different in concept from something like OpenAI’s Dall-E: it has been trained on a large library of images to generate something new. Instead of responding to a text-based prompt, it predicts how it should replicate textures and objects in an arbitrary image. It is far from perfect, but it is dramatically better than the healing brush tool before it, and clone stamping before that.

There are other examples of generative A.I. as features of creative tools. It can extend images and replace backgrounds pretty well. The technology may be mediocre at making video on its own terms, but it is capable of improving the quality of interpolated slow motion. In the technology industry, it is good at helping developers debug their work and generate new code.

Yet, if you take Marcus at his word, these things — and everything else generative A.I. — “might turn out to be a dud”. Why? Marcus does not say. He does, however, keep underscoring how shaky he finds OpenAI’s business situation. But this Wired article is ostensibly about generative A.I.’s usefulness — or, in Marcus’ framing, its lack thereof — which is completely irrelevant to this one company’s financials. Unless, that is, you believe the reason OpenAI will lose five billion dollars this year is because people are unhappy with it, which is not the case. It simply costs a fortune to train and run.

The one thing Marcus keeps coming back to is the lack of a “moat” around generative A.I., which is not an original position. Even if this is true, I do not see this as evidence of a generative A.I. bubble bursting — at least, not in the sense of how many products it is included in or what capabilities it will be trained on.

What this looks like, to me, is commoditization. If there is a financial bubble, this might mean it bursts, but it does not mean the field is wiped out. Adobe is not disappearing; neither are Google or Meta or Microsoft. While I have doubts about whether chat-like interfaces will continue to be a way we interact with generative A.I., it continues to find relevance in things many of us do every day.

Graham Fraser, BBC News:

Apple Intelligence, launched in the UK earlier this week, uses artificial intelligence (AI) to summarise and group together notifications.

This week, the AI-powered summary falsely made it appear BBC News had published an article claiming Luigi Mangione, the man arrested following the murder of healthcare insurance CEO Brian Thompson in New York, had shot himself. He has not.

Fraser also points to an inaccurate summary attributed to the New York Times. Even scrolling through my notifications right now, I can see misleading summaries. One, which Apple Intelligence rendered as “vaccine lawyer helps pick health officials”, actually refers to an anti-vaccine lawyer who thinks immunizing against polio is dangerous. I have seen far dumber examples since this feature became part of beta builds over the past months.

I am not opposed to the concept of summarized text. It can, in theory, be helpful to glance at my phone and know whether I need to respond to something sooner rather than later. But every error chips away at a user’s trust, to the point where they need to double-check for accuracy — at which point, the summary is creating additional work.

I can see why the BBC is upset about this, particularly after years of declining trust in media. I had notification summaries switched on for only a select few apps. I have now ensured they are turned off for news apps.

Update: Markus Müller-Simhofer:

If you are using macOS 15.2, please be careful with those priority alerts. It listed a fraud email as priority for me.

This is not just a one-off. I have also seen this in my own use. Mail also repeatedly decided to prioritize “guest contributor” spam emails over genuinely useful messages like shipping notifications. Sometimes, it works as expected and is useful; sometimes, the priority message feature simply does not show up. It is bizarre.

Cade Metz, New York Times:

Mr. [Sam] Altman said he was “tremendously sad” about the rising tensions between the two one-time collaborators.

“I grew up with Elon as like a mega hero,” he said.

But he rejected suggestions that Mr. Musk could use his increasingly close relationship with President-elect Trump to harm OpenAI.

“I believe pretty strongly that Elon will do the right thing and that it would be profoundly un-American to use political power to the degree that Elon would hurt competitors and advantage his own businesses,” he said.

Alex Heath, the Verge:

Jeff Bezos and President-elect Donald Trump famously didn’t get along the last time Trump was in the White House. This time, Bezos says he’s “very optimistic” and even wants to help out.

“I’m actually very optimistic this time around,” Bezos said of Trump during a rare public appearance at The New York Times DealBook Summit on Wednesday. “He seems to have a lot of energy around reducing regulation. If I can help him do that, I’m going to help him.”

Emily Swanson, the Guardian:

“Mark Zuckerberg has been very clear about his desire to be a supporter of and a participant in this change that we’re seeing all around America,” Stephen Miller, a top Trump deputy, told Fox.

Meta’s president of global affairs, Nick Clegg, agreed with Miller. Clegg said in a recent press call that Zuckerberg wanted to play an “active role” in the administration’s tech policy decisions and wanted to participate in “the debate that any administration needs to have about maintaining America’s leadership in the technological sphere,” particularly on artificial intelligence. Meta declined to provide further comment.

There are two possibilities. The first is that these CEOs are all dummies with memory no more capacious than that of an earthworm. The second is that these people all recognize the transactional and mercurial nature of the incoming administration, and they have begun their ritualistic grovelling. Even though I do not think money and success are evidence of genius, I do not think these CEOs are so dumb they actually believe in the moral fortitude of these goons.

Leah Nylen, Josh Sisco, and Dina Bass, Bloomberg:

The US Federal Trade Commission has opened an antitrust investigation of Microsoft Corp., drilling into everything from the company’s cloud computing and software licensing businesses to cybersecurity offerings and artificial intelligence products.

Seems like a lot of people who thought Microsoft would escape antitrust investigations in the U.S. might have been a little too eager.

This kind of scrutiny is a good thing, and long overdue. Yet one of the unavoidable problems of reducing the influence of these giant corporations now is the pain it is going to cause — almost by definition. If a corporation is abusing its power and scale to such a degree the FTC initiates an investigation, unwinding that will have — to put it mildly — an effect. We are seeing this in the Google case. This is true for any situation where a business or a group of people with too much influence needs correcting. That does not mean it should not happen.

It is true that Microsoft’s products and services are the backbone of businesses and governments the world over. These are delivered through tight integrations, all of which encourage further fealty to this singular solution. For example, it used its dominant position with Office 365 to distribute Teams for free, thereby making it even harder for other businesses to compete. It then leveraged Outlook and Teams to boost its web browser, after doing the same with Windows. If it had charged for Teams out of the gate, we would be having a different discussion.

Obviously, the FTC’s concerns with Microsoft’s business practices stretch well beyond bundling Teams. According to this Bloomberg report, the Commission is interested in cloud and identity tying, too. On the one hand, it is enormously useful to businesses to have a suite of products with a single point of management and shared credentials. On the other hand, it is a monolithic system that is a non-starter for potential competitors.

The government is understandably worried about the security and stability risks of global dependence on Microsoft, too, but this is odd:

The CrowdStrike crash that affected millions of devices operating on Microsoft Windows systems earlier this year was itself a testament to the widespread use of the company’s products and how it directly affects the global economy.

This might just be Bloomberg’s contextualizing more than it is relevant to the government’s position. But, still, it seems wrong to me to isolate Windows as the problem instead of CrowdStrike itself, especially with better examples to be found in the SolarWinds breach and Microsoft’s own track record with first-party security.

Emanuel Maiberg, 404 Media:

For the past two years an algorithmic artist who goes by Ada Ada Ada has been testing the boundaries of human and automated moderation systems on various social media platforms by documenting her own transition. 

Every week she uploads a shirtless self portrait to Instagram alongside another image which shows whether a number of AI-powered tools from big tech companies like Amazon and Microsoft that attempt to automatically classify the gender of a person see her as male or female. Each image also includes a sequential number, year, and the number of weeks since Ada Ada Ada started hormone therapy.

You want to see great art made with the help of artificial intelligence? Here it is — though probably not in the way one might have expected.

In the first post to be removed by Instagram, Ada Ada Ada calls it a “victory”, and it truly sounds validating. Instagram has made her point and, though she is still able to post photos, you can flip through her pinned story archives “censorship” and “censorship 2” to see how Meta’s systems interpret other posts.

X on Wednesday announced a new set of terms, something which is normally a boring and staid affair. But these are a doozy:

Here’s a high-level recap of the primary changes that go into effect on November 15, 2024. You may see an in-app notice about these updates as well.

  • Governing law and forum changes: For users residing outside of the European Union, EFTA States, and the United Kingdom, we’ve updated the governing law and forum for lawsuits to Texas as specified in our terms. […]

Specifically, X says “disputes […] will be brought exclusively in the U.S. District Court for the Northern District of Texas or state courts located in Tarrant County, Texas, United States”. X’s legal address is on a plot of land shared with SpaceX and the Boring Company near Bastrop, which is in the Western District. This particular venue is notable as the federal judge handling current X litigation in the Northern District owns Tesla stock and has not recused himself in X’s suit against Media Matters, despite stepping aside on a similar case because of a much smaller investment in Unilever. The judge, Reed O’Connor, is a real piece of work from the Federalist Society who issues reliably conservative decisions and does not want that power undermined.

An investment in Tesla does not necessarily mean a conflict of interest with X, an ostensibly unrelated company — except it kind of does, right? This is the kind of thing the European Commission is trying to figure out: are all of these different businesses actually related because they share the same uniquely outspoken and influential figurehead? Musk occupies such a central role in each of them that it is hard to disentangle him from their place in our society. O’Connor is not the only judge in the district, but it is notable the company is directing legal action to that venue.

But X is only too happy to sue you in any court of its choosing.

Another of the X terms updates:

  • AI and machine learning clarifications: We’ve added language to our Privacy Policy to clarify how we may use the information you share to train artificial intelligence models, generative or otherwise.

This is rude. It is a “clarifi[cation]” described in vague terms, and what it means is that users will no longer be able to opt out of their data being used to train Grok or any other artificial intelligence product. This appears to also include images and video, posts in private accounts and, if I am reading this right, direct messages.

Notably, Grok is developed by xAI, which is a completely separate company from X. See above for how Musk’s companies all seem to bleed together.

  • Updates to reflect how our products and services work: We’ve incorporated updates to better reflect how our existing and upcoming products, features, and services work.

I do not know what this means. There are few product-specific changes between the old and new agreements. There are lots — lots — of new ways X wants to say it is not responsible for anything at all. There is a whole chunk which effectively replicates the protections of Section 230 of the CDA, you now need written permission from X to transfer your account to someone else, and X now spells out its estimated damages from automated traffic: $15,000 USD per million posts every 24 hours.

Oh, yeah, and X is making blocking work worse:

If your posts are set to public, accounts you have blocked will be able to view them, but they will not be able to engage (like, reply, repost, etc.).

The block button is one of the most effective ways to improve one’s social media experience. From removing from your orbit people you never want to hear from, even for mundane reasons, to reducing someone’s ability to stalk or harass, it is vital the button works as expected. This sucks. I bet the main reason this change was made is because Musk is blocked by a lot of people.

All of these changes seem designed to get rid of any remaining user who is not a true believer. Which brings us to today.

Sarah Perez, TechCrunch:

Social networking startup Bluesky, which just reported a gain of half a million users over the past day, has now soared into the top five apps on the U.S. App Store and has become the No. 2 app in the Social Networking category, up from No. 181 a week ago, according to data from app intelligence firm Appfigures. The growth is entirely organic, we understand, as Appfigures confirmed the company is not running any App Store Search Ads.

As of writing, Bluesky is the fifth most popular free app in the Canadian iOS App Store, and the second most popular free app in the Social Networking category. Threads is the second most popular free app, and the most popular in the Social Networking category.

X is number 74 on the top free apps list. It remains classified as “News” in the App Store because it, like Twitter, has always compared poorly against other social media apps.

Chiara Castro, TechRadar:

Hungary, the country that now heads the Council of Europe after Belgium, has resurrected what’s been deemed by critics as Chat Control, and MEPs are expected to vote on it at the end of the month. After proposing a new version in June, the Belgian presidency had to take the proposal off the agenda last minute amid harsh backlash.

Popular encrypted messaging apps, including Signal and Threema, have already announced their intention to rather shut down their operations in the EU instead of undermining users’ privacy. Keep reading as I walk you through what we know so far, and how one of the best VPN apps could help in case the proposal becomes law.

This news was broken by Politico, but their story is in the “Pro” section, which is not just a paywall. One cannot simply sign up for it; you must “Request a Demo”, after which access can be granted for no less than €7,000 per year. I had to settle for this re-reported version. And because online media is so broken — in part because of my selfish refusal to register for this advanced version of Politico — news outlets like TechRadar find any way of funding themselves they can. In this case, the words “best VPN” are linked to a list of affiliate-linked VPN apps. Smooth.

Patrick Breyer:

[…] According to the latest proposal providers would be free whether or not to use ‘artificial intelligence’ to classify unknown images and text chats as ‘suspicious’. However they would be obliged to search all chats for known illegal content and report them, even at the cost of breaking secure end-to-end messenger encryption. The EU governments are to position themselves on the proposal by 23 September, and the EU interior ministers are to endorse it on 10 October. […]

This is a similar effort to that postponed earlier this year. The proposal (PDF) has several changes, but it still appears to poke holes in end-to-end encryption, and require providers to detect possible known CSAM before it is sent. A noble effort, absolutely, but also one which fundamentally upsets the privacy of one-on-one communications to restrict its abuse by a few.

Nathan J. Robinson, of Current Affairs, reviewing “Corporate Bullshit” by Nick Hanauer, Joan Walsh, and Donald Cohen last year:

Over the last several decades, we have been told that “smoking doesn’t cause cancer, cars don’t cause pollution, greedy pharmaceutical companies aren’t responsible for the crisis of opioid addiction.” Recognizing the pattern is key to spotting “corporate bullshit” in the wild, and learning how to spot it is important, because, as the authors write, the stories told in corporate propaganda are often superficially plausible: “At least on the surface, they offer a civic-minded, reasonable-sounding justification for positions that in fact are motivated entirely by self-interest.” When restaurant owners say that raising the minimum wage will drive their labor costs too high and they’ll be forced to cut back on employees or close entirely, or tobacco companies declare their product harmless, those things could be true. They just happen not to be.

Via Cory Doctorow.

Jeremy Keith:

I’ve noticed a really strange justification from people when I ask them about their use of generative tools that use large language models (colloquially and inaccurately labelled as artificial intelligence).

I’ll point out that the training data requires the wholesale harvesting of creative works without compensation. I’ll also point out the ludicrously profligate energy use required not just for the training, but for the subsequent queries.

And here’s the thing: people will acknowledge those harms but they will justify their actions by saying “these things will get better!”

This piece is making me think more about my own, minimal use of generative features. Sure, it is neat that I can get a more accurate summary of an email newsletter than a marketer will typically write, or that I can repair something in a photo without so much manual effort. But this ease is only possible thanks to the questionable ethics of A.I. training.

Jake Evans, ABC News:

Facebook has admitted that it scrapes the public photos, posts and other data of Australian adult users to train its AI models and provides no opt-out option, even though it allows people in the European Union to refuse consent.

[…]

Ms Claybaugh [Meta’s global privacy policy director] added that accounts of people under 18 were not scraped, but when asked by Senator Sheldon whether public photos of his own children on his account would be scraped, Ms Claybaugh acknowledged they would.

This is not ethical. Meta has the ability to more judiciously train its systems, but it will not do that until it is pressured. Shareholders will not take on that role. They have been enthusiastically boosting any corporation with an A.I. announcement. Neither will the corporations themselves, which have been jamming these features everywhere — there are floating toolbars, floating panels, balloons, callouts, and glowing buttons that are hard to ignore even if you want to.

Julia Love and Davey Alba, Bloomberg:

Google now displays convenient artificial intelligence-based answers at the top of its search pages — meaning users may never click through to the websites whose data is being used to power those results. But many site owners say they can’t afford to block Google’s AI from summarizing their content.

[…]

Google uses a separate crawler for some AI products, such as its chatbot Gemini. But its main crawler, the Googlebot, serves both AI Overviews and Google search. A company spokesperson said Googlebot governs AI Overviews because AI and the company’s search engine are deeply entwined. The spokesperson added that its search results page shows information in a variety of formats, including images and graphics. Google also said publishers can block specific pages or parts of pages from appearing in AI Overviews in search results — but that would also likely bar those snippets from appearing across all of Google’s other search features, too, including web link listings.

I have quoted these two paragraphs in full because I think the difference between Google’s various A.I. products is worth clarifying. The Google-Extended control, which a publisher can address as a separate user agent in robots.txt, is relevant only to the training of the Gemini and Vertex generative products. Gemini powers the A.I. Overviews feature, but there is no way of opting out of Overviews without entirely removing a site from Google’s indexing.
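To make that distinction concrete, here is a minimal robots.txt sketch. Google-Extended is the product token Google documents for generative training; the rest is generic, and how it plays out for any particular site is worth verifying against Google’s own documentation:

    # Opt out of Gemini and Vertex model training
    User-agent: Google-Extended
    Disallow: /

    # Keep ordinary crawling — this same crawler feeds
    # both search results and A.I. Overviews
    User-agent: Googlebot
    Allow: /

Note what is missing: there is no token for A.I. Overviews alone. Disallowing Googlebot is the only lever, and it takes the whole site out of search.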

I can see why website owners would want to do this; I sympathize with the frustration of those profiled in this article. But Google has been distorting the presentation of results and reducing publishers’ control for years. In 2022, I was trying to find an article from my own site when I discovered Google had generated an incorrect pros and cons list from an iPad review I wrote. Google also generates its own titles and descriptions for results instead of relying on the page-defined title and meta description tags, and it has introduced features over the years like Featured Snippets, the spiritual predecessor of A.I. Overviews.
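For anyone unfamiliar, these are the page-defined tags in question — a generic sketch, not markup from any real page:

    <!-- the title Google may rewrite on its results page -->
    <title>Example article title</title>
    <!-- the description Google may replace with its own snippet -->
    <meta name="description" content="A publisher-written summary of the page.">

Google treats both as suggestions rather than instructions, which is precisely the loss of control at issue here.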

All of these things have reduced the amount of control website owners have over how their sites are presented on a Google results page. Sometimes, these changes are beneficial — rewritten titles and descriptions may reflect the actual subject of the page more accurately than one provided by some SEO expert. But, in other cases, they end up making false claims cited to webpages. It happened with Featured Snippets, it happened with Google’s interpretation of my iPad review, and it happens with this artificially “intelligent” feature as well.