
Hundreds of experts in artificial intelligence — including several executives and developers in the field — issued a brief and worrying statement via the Center for AI Safety:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

The Center calls out Meta by name for not signing onto the letter; Elon Musk also did not endorse it.

OpenAI’s Sam Altman was among the hundreds of signatories, after earlier feigning an absolute rejection of any regulations he and his peers did not have a role in writing. Perhaps that is an overly cynical take, but it is hard to read this statement with the gravity it suggests.

Martin Peers, the Information:

Perhaps instead of issuing a single-sentence statement meant to freak everyone out, AI scientists should use their considerable skills to figure out a solution to the problem they have wrought.

I believe the researchers, academics, and ethicists are earnest in their endorsement of this statement. I do not believe the corporate executives who simultaneously claim artificial intelligence is a threat to civilization itself while rapidly deploying their latest developments in the field. Their obvious hypocrisy makes it hard to take them seriously.

Last month, we caught glimpses of Imran Chaudhri’s preview of the device being developed by Humane. Chaudhri presented this sneak peek at the TED conference in Vancouver, and the full presentation was published today.

While it is important not to rush to judge a device which none of us has used, I find its debut in a TED Talk worrisome. TED has become synonymous with catchy hacks and superficially counterintuitive thinking with little basis in reality. That is not a great sign.

Also, and this is a little thing, TED says (PDF) “speakers may never use the TED or TEDx stage to pitch their products or services”, and that “if it feels like an advertisement, it probably is”. Chaudhri’s talk is all about how great artificial intelligence is going to be, but it is all structured around a device debut which feels like a supercut of iPhone launch presentations.

Nilay Patel, the Verge:

Google Search is so useful and so pervasive that its overwhelming influence on our lives is also strangely invisible: Google’s grand promise was to organize the world’s information, but over the past quarter century, an enormous amount of the world’s information has been organized for Google — to rank in Google results. Almost everything you encounter on the web — every website, every article, every infobox — has been designed in ways that makes them easy for Google to understand. In many cases, the internet has become more parseable by search engines than it is by humans.

I am reminded of Goodhart’s law in thinking about the adversarial relationship between Google and search optimization experts, which will presumably morph into a similar dynamic between artificial intelligence services and self-proclaimed experts in that field. Because it has been an unrivalled default for so long, there is no reason for the most popular parts of the web to work as anything better than a feed for Google’s consumption and, hopefully, its users’ attention. All of this has broken the web as much as it has broken Google Search.

Patel wrote this as an introduction to a series of articles the Verge is running this year in recognition of Google’s twenty-fifth anniversary. The first, about Accelerated Mobile Pages, is a good summary of the kind of control Google has over publishers.

Ted Chiang, the New Yorker:

Today, we find ourselves in a situation in which technology has become conflated with capitalism, which has in turn become conflated with the very notion of progress. If you try to criticize capitalism, you are accused of opposing both technology and progress. But what does progress even mean, if it doesn’t include better lives for people who work? What is the point of greater efficiency, if the money being saved isn’t going anywhere except into shareholders’ bank accounts? We should all strive to be Luddites, because we should all be more concerned with economic justice than with increasing the private accumulation of capital. We need to be able to criticize harmful uses of technology — and those include uses that benefit shareholders over workers — without being described as opponents of technology.

The whole article is terrific — as the headline alludes to, an imagining of artificial intelligence technologies performing a sort of McKinsey-like role in executing the worst impulses of our economic system — but this paragraph is damn near perfect.

Not too long ago, in the era of gadget blogs and technology enthusiasm gone mainstream, there was a specific kind of optimism where every new product or service was imagined as beneficial. Tides turned, and criticism is the current default position. I think that is a healthier and more realistic way of viewing this market, even as it feels more negative. What good is thinking about new technologies if they are not given adequate context? We have decades of personal computing to draw from, plus hundreds of years of efficiency gains. On the cusp of another vast transformation, we should put that knowledge to use.

Odanga Madung, Nation:

At a meeting held in Nairobi on Monday, 200 content moderators from Sama and Majorel — the firms that serve Facebook, YouTube, TikTok and Chat GPT — took a stand against tech giants’ mistreatment of their workers by coming together to lobby for their rights.

In a first-of-its-kind event, moderators covering 14 different African languages came together on Labour Day to vote for establishing a union to address issues including mistreatment of workers.

Majorel is based in Luxembourg; Teleperformance, based in France, recently offered to buy it. Sama is based in the United States and was sued last year by a former moderator. The vast distance between these companies, their employees in Kenya, and their clients mostly located in Silicon Valley is not only geographic. These are some of the people who remove the worst of the web and make artificial intelligence work better.

Good for them.

Normally, I would not link to something for which I have not read the source story. In this case, I will make an exception, as the original is by Wayne Ma of the Information, who has a solid track record. I hope these two summaries are accurate reflections of Ma’s reporting.

Hartley Charlton, MacRumors:

The extensive paywalled report explains why former Apple employees who worked in the company’s AI and machine learning groups believe that a lack of ambition and organizational dysfunction have hindered Siri and the company’s AI technologies. Apple’s virtual assistant is apparently “widely derided” inside the company for its lack of functionality and minimal improvement over time.

[…]

Apple executives are said to have dismissed proposals to give Siri the ability to conduct extended back-and-forth conversations, claiming that the feature would be difficult to control and gimmicky. Apple’s uncompromising stance on privacy has also created challenges for enhancing Siri, with the company pushing for more of the virtual assistant’s functions to be performed on-device.

Samuel Axon, Ars Technica:

For example, it reveals that the team that has been working on Apple’s long-in-development mixed reality headset was so frustrated with Siri that it considered developing a completely separate, alternative voice control method for the headset.

But it goes beyond just recounting neutral details; rather, it lays all that information out in a structured case to argue that Apple is ill-prepared to compete in the fast-moving field of AI.

By the sound of that, Ma is making an argument similar to the one reported by Brian X. Chen, Nico Grant, and Karen Weise in the New York Times last month. I linked to it noting two things: first, that the headline’s proclamation that Apple has “lost the A.I. race” is premature; second, that the vignette in the lede is factually incorrect. But there was a detail I think is worth mentioning in the context of Siri’s capabilities:

Siri also had a cumbersome design that made it time-consuming to add new features, said [former Apple employee John] Burkey, who was given the job of improving Siri in 2014. Siri’s database contains a gigantic list of words, including the names of musical artists and locations like restaurants, in nearly two dozen languages.

That made it “one big snowball,” he said. If someone wanted to add a word to Siri’s database, he added, “it goes in one big pile.”

This is a claim sourced to a single person, but it would not surprise me if the entire Siri backend really is a simple database of known queries and expected responses. Sources the Times reporters spoke to say this structure cannot be adapted to fit a large language model system and, so, Apple is far behind.
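That kind of query-and-response database can be pictured with a toy sketch. This is purely illustrative, with invented patterns and handlers; it does not reflect Siri’s actual implementation:

```python
import re

# Purely illustrative: a query-and-response assistant is, in effect, a
# fixed table mapping recognized query patterns to canned handlers.
# These patterns and responses are made up; they do not reflect how Siri
# or any real assistant is built.
HANDLERS = {
    r"what's the weather in (?P<city>.+?)\??": lambda m: f"Fetching weather for {m.group('city')}",
    r"turn on the (?P<room>.+?) lights?": lambda m: f"Turning on the {m.group('room')} lights",
}

def respond(query: str) -> str:
    """Answer a query if it matches a known pattern; otherwise refuse."""
    for pattern, handler in HANDLERS.items():
        match = re.fullmatch(pattern, query.strip(), re.IGNORECASE)
        if match:
            return handler(match)
    # Anything outside the table simply cannot be handled.
    return "Sorry, I can't help with that."
```

Adding a capability means adding another row to the table, which is the “one big pile” problem in miniature: the assistant’s limits are exactly the limits of its pattern list, and nothing generalizes.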

Maybe all that is true. But what I cannot understand is why anyone would think users would want to have a conversation with Siri, when many would probably settle for a version of that basic database association schema working correctly.

Siri is infamously frustrating to use. It has unknowable limits to its capabilities — for example, requesting a scoreboard works for some sports but not others, and translations are available only between a small number of languages. Like other voice assistants, it assumes a stage-practiced speech cadence, which impairs its usability for people with atypical speech and for queries with pauses or corrections. But the things which bum me out in my own use of Siri are the ways in which it does not seem to be built by the same people who made the phone it runs on.

I know reading a list of bugs is boring, so here are two small examples:

  1. My wife, driving home, texts me while I am making dinner to ask if there is anything she should pick up. I see the notification come in on the Lock Screen, but my hands are dirty, so I say “hey Siri, reply to [her name]”. Instead of the prompt asking “okay, what would you like to say?”, I am instead asked “okay, which one should I use?” with the list of phone numbers from her contact card.

    There are three things wrong with this: my query uses the word “reply”, so it should compose a message to whatever contact method she sent the message from; for several versions of iOS now, Messages has consolidated conversations from the same contact, so Siri’s behaviour should work the same way; and I am trying to send something to one of my most-messaged contacts, so it feels particularly dumb.

  2. Siri is, as of a recent version of iOS, hardwired to associate music-related commands with Apple Music. It will sometimes ask if the user wants an alternative app. But this also means it does not reliably play music from a local library, and it has no awareness of whether one has turned off cellular data use for Music.

    So if you are driving along, with a local library full of songs, and you ask Siri to play one of them, it will stream it from Apple Music instead; or, if you have cellular data off for Music, it will read out an error message. Meanwhile, the songs are sitting right there, in the library.

Neither of these examples, as far as I can see, should require a humanlike level of deep language understanding. In fact, both of these queries used to work as expected before becoming broken. It seems likely to me the latter was a deliberate change made to promote Apple’s services. In a similar vein, Ma, via Charlton, reports “specific decisions [were made] to exclude information such as iPhone prices from Siri to push users directly to Apple’s website instead”. If true, it is a cynical decision that has no benefit to users. The first problem I listed is simply baffling.

Perhaps these kinds of bugs would be less common if Siri were based on large language models — this is completely outside my field and my inbox is open — but I find that hard to believe. It is not the case that Siri is failing to understand what I am asking it to do. Rather, it is faltering at simple hurdles and functioning as an ad for other Apple services. I would be fine with Siri if it were a database that performed reliably and expectedly, and excited for the possibilities of one fronted by more capable artificial intelligence. What I am, though, is doubtful — doubtful that basic tasks like these will become meaningfully better, instead of a different set of bugs and obstacles I will need to learn.

Ma reports, via Charlton, that some people working on Siri left because there was too much human intervention. I wish it felt anything like that.

Brian X. Chen, Nico Grant, and Karen Weise, New York Times:

On a rainy Tuesday in San Francisco, Apple executives took the stage in a crowded auditorium to unveil the fifth-generation iPhone. The phone, which looked identical to the previous version, had a new feature that the audience was soon buzzing about: Siri, a virtual assistant.

This first paragraph vignette has problems — and, no, I cannot help myself. The iPhone 4S and Siri were unveiled on October 4, 2011 at Apple’s campus in Cupertino, not in San Francisco, and it did not rain until that night in San Francisco. It was a Tuesday, though.

Please note there are three bylines on this story.

Anyway, the authors of this Times story attempt to illustrate how voice assistants, like Siri and Alexa, have been outdone by products like OpenAI’s ChatGPT:

The assistants and the chatbots are based on different flavors of A.I. Chatbots are powered by what are known as large language models, which are systems trained to recognize and generate text based on enormous data sets scraped off the web. They can then suggest words to complete a sentence.

In contrast, Siri, Alexa and Google Assistant are essentially what are known as command-and-control systems. These can understand a finite list of questions and requests like “What’s the weather in New York City?” or “Turn on the bedroom lights.” If a user asks the virtual assistant to do something that is not in its code, the bot simply says it can’t help.

The article’s conclusion? The architecture of voice assistants has precluded them from becoming meaningful players in artificial intelligence. They have “squandered their lead in the A.I. race”; the headline outright says they have “lost”. But hold on — it seems pretty early to declare outright winners and losers, right?

John Voorhees of MacStories sure thinks so:

It’s not surprising that sources have told The New York Times that Apple is researching the latest advances in artificial intelligence. All you have to do is visit the company’s Machine Learning Research website to see that. But to declare a winner in ‘the AI race’ based on the architecture of where voice assistants started compared to today’s chatbots is a bit facile. Voice assistants may be primitive by comparison to chatbots, but it’s far too early to count Apple, Google, or Amazon out or declare the race over, for that matter.

“Siri” and “Alexa” are just marketing names. The underlying technologies can change. It is naïve to think Google is not working to integrate something like the Bard system into its Assistant. I have no idea if any of these companies will be able to iterate as quickly as OpenAI has been doing — I have been wrong about this before — but to count them out now, mere months after ChatGPT’s launch, is ridiculous, especially as Siri, alone, is in a billion pockets.

Kirby Ferguson’s latest is not to be missed: a thoughtful exploration of artificial intelligence within his “Everything is a Remix” framework. Great soundtrack, too.

It is also, Ferguson says, his last video. In an April email to subscribers — which I cannot figure out how to link to — Ferguson says further works will be more likely written rather than videos, owing in part to time constraints. If this is indeed the end of Ferguson’s personal video career, it is a beautiful way to bow out. If you have not checked out his back catalogue, it would be worth your time.

Thanks, Kirby.

Miles Kruppa and Sam Schechner, Wall Street Journal:

Now Google, the company that helped pioneer the modern era of artificial intelligence, finds its cautious approach to that very technology being tested by one of its oldest rivals. Last month Microsoft Corp. announced plans to infuse its Bing search engine with the technology behind the viral chatbot ChatGPT, which has wowed the world with its ability to converse in humanlike fashion. Developed by a seven-year-old startup co-founded by Elon Musk called OpenAI, ChatGPT piggybacked on early AI advances made at Google itself.

[…]

Google’s approach could prove to be prudent. Microsoft said in February it would put new limits on its chatbot after users reported inaccurate answers, and sometimes unhinged responses when pushing the app to its limits.

Many in the tech commentariat have predicted victory for Microsoft products so often it has become a running joke in some circles. Microsoft released one of the first folding phones, which it eventually unloaded on Woot at a 70% discount; it was one of many instances where the company’s entry into a market was prematurely championed.

Even after all the embarrassing problems in Google’s Bard presentation and demos and the pressure the company is now facing, I imagine its management is feeling somewhat vindicated by Microsoft’s rapid dampening of the sassier side of its assistant. While Microsoft could use its sheer might to rapidly build a user base — as it did with Teams — Google could also flip a switch to do the same because it is the world’s most popular website.

Christopher Mims, Wall Street Journal:

Technologists broadly agree that the so-called generative AI that powers systems like ChatGPT has the potential to change how we live and work, despite the technology’s clear flaws. But some investors, chief executives and engineers see signs of froth that remind them of the crypto boom that recently fizzled.

[…]

“The people talking about generative AI right now were the people talking about Web3 and blockchain until recently—the Venn diagram is a circle,” says Ben Waber, chief executive of Humanyze, a company that uses AI and other tools to analyze work behavior. “People have just rebranded themselves.”

One difference between this wave of hype and those which came before it is that anyone can imagine how they might use these services. Who gives a shit about virtual reality meetings? Pretty much nobody so far. Generative services seem like a more lasting proposition, assuming it is possible to make them more accurate and comprehensive.

There are many things which I was considering linking to in recent weeks, but it is easier to do so via a story told through block quotes. They share a common thread and theme.

I begin with Can Duruk, at Margins:

I’m old enough to remember when Google came out, which makes me old enough to remember at least 20 different companies that were touted as Google-killers. I applaud every single one of them! A single American company operating as a bottleneck behind the world’s information is a dangerous, and inefficient proposition and a big theme of the Margins is that monopolies are bad so it’s also on brand.

But for one reason and another, none of the Google competitors have seemed to capture the world’s imagination.

Until now, that is. Yes, sorry, I’m talking about that AI bot.

Dave Winer:

I went to ChatGPT and entered “Simple instructions about how to send email from a Node.js app?” What came back was absolutely perfect, none of the confusing crap and business models you see in online instructions in Google. I see why Google is worried.

Michael Tsai has a collection of related links. All of the above were posted within the month of January. It really felt like the narrative of competition between Google and ChatGPT was reaching some kind of peak.

Owen Yin last week:

Gone are the days of typing in specific keywords and struggling to find the information you need. Microsoft Bing is on the cusp of releasing its ChatGPT integration, which will allow users to ask questions in a natural way and get a tailored search experience that will reshape how we explore the internet.

I got a preview of Bing’s ChatGPT integration and managed to get some research in before it was shut off.

That is right: Microsoft was first to show a glimpse of the future of search engines with Bing, but only for a few people briefly. I tried to URL hack Bing to see if I could find any remnant of this and I think I got close: visiting bing.com/chat will show search results for “Bing AI”. If nothing else, it is a very good marketing tease.

Sundar Pichai today:

We’ve been working on an experimental conversational AI service, powered by LaMDA, that we’re calling Bard. And today, we’re taking another step forward by opening it up to trusted testers ahead of making it more widely available to the public in the coming weeks.

[…]

Now, our newest AI technologies — like LaMDA, PaLM, Imagen and MusicLM — are building on this, creating entirely new ways to engage with information, from language and images to video and audio. We’re working to bring these latest AI advancements into our products, starting with Search.

Via Andy Baio:

Google used to take pride in minimizing time we spent there, guiding us to relevant pages as quickly as possible. Over time, they tried to answer everything themselves: longer snippets, inline FAQs, search results full of knowledge panels.

Today’s Bard announcement feels like their natural evolution: extracting all value out of the internet for themselves, burying pages at the bottom of each GPT-generated essay like footnotes.

Google faced antitrust worries due, in part, to its Snippets feature, which automatically excerpts webpage text to answer queries without the searcher having to click. As of 2019, over half of Google searches ended without the user clicking through to another website.

The original point of search engines was to be directed to websites of interest. But that has not been the case for years. People are not interested in visiting websites about a topic; they, by and large, just want answers to their questions. Google has been strip-mining the web for years, leveraging its unique position as the world’s most popular website and its de facto directory to replace what made it great with what allows it to retain its dominance. Artificial intelligence — or some simulation of it — really does make things better for searchers, and I bet it could reduce some tired search optimization tactics. But it comes at the cost of making us all into uncompensated producers for the benefit of trillion-dollar companies like Google and Microsoft.

Baio:

Personally, I wish that the “code red” response that ChatGPT inspired at Google wasn’t to launch a dozen AI products that their red teams and AI ethicists have warned them not to release, but to combat the tsunami of AI-generated SEO spam bullshit that’s in the process of destroying their core product. Instead, they’re blissfully launching new free tools to generate even more of it.

It is fascinating to see Google make its “Bard” announcement in the weeks following CNet’s embarrassing and lucrative generated articles.

Jon Christian, Futurism:

Looking at the outraged public reaction to the news about CNET‘s AI articles, [director of search engine optimization, Jake] Gronsky realized that the company might have gone too far. In fact, he explained, he saw the whole scandal as a cautionary tale illustrating that Red Ventures shouldn’t mark its AI-generated content for readers at all.

“Disclosing AI content is like telling the IRS you have a cash-only business,” he warned.

Gronsky wasn’t just concerned about CNET and Bankrate, though. He was also worried that a Google crackdown could impact Red Ventures’ formidable portfolio of sites that target prospective college students, known internally as Red Ventures EDU.

To be clear, it seems like saner and more ethically conscious heads prevailed and CNet articles generated by automated means are marked.

The web is corrupt and it is only getting worse. Websites supported by advertising — itself an allegedly illegal Google monopoly — rely on clicks from searchers, so they have compromised their integrity to generate made-for-Google articles, many of which feature high-paying affiliate links. Search optimization experts have spent years in an adversarial relationship with Google in an attempt to get their clients’ pages to the coveted first page of results, often through means which make results worse for searchers. Artificial intelligence is, it seems, a way out of this mess — but the compromise is that search engines get to take from everyone while giving nothing back. Google has been taking steps in this direction for years: its results page has been increasingly filled with ways of discouraging people from leaving its confines. A fully automated web interpreter is a more fully realized version of this initiative. Searchers get the results they are looking for and, in the process, undermine the broader web. The price we all pay is the worsening of the open web for Google’s benefit.

New world, familiar worries.

Patrick McGee, Financial Times:

Apple is taking steps to separate its mobile operating system from features offered by Google parent Alphabet, making advances around maps, search and advertising that has created a collision course between the Big Tech companies.

[…]

One of these people said Apple is still engaged in a “silent war” against its arch-rival. It is doing so by developing features that could allow the iPhone-maker to further separate its products from services offered by Google. Apple did not respond to requests for comment.

This is a strange article. The thesis, above, is that Apple is trying to reduce its dependence on Google’s services. But McGee cannot seem to decide whether Apple’s past, present, or future changes are directly relevant, so he kind of posits that they all are. Here, look:

The first front of this battle is mapping, which started in 2012 when Apple released Maps, displacing its Google rival as a pre-downloaded app.

The move was supposed to be a shining moment for Apple’s software prowess but the launch was so buggy — some bridges, for example, appeared deformed and sank into oceans — that chief executive Tim Cook said he was “extremely sorry for the frustration this has caused our customers”.

Apple Maps turns eleven years old in 2023, so it is safe to say that Apple adequately distanced itself from its reliance upon Google for maps, oh, about eleven years ago. Whether users have is another question entirely. The 3D rendering problems may have been the most memorable glitches, but the biggest day-to-day problems for users were issues with bad data.

So what is new?

Apple’s Maps has improved considerably in the past decade, however. Earlier this month it announced Business Connect, a feature that lets companies claim their digital location so they can interact with users, display photos and offer promotions.

While businesses have been able to claim their listing and manage its details for years, the recently launched Business Connect is a more comprehensive tool. That has advantages for businesses and users alike, as there may be better point-of-interest data, though it is another thing businesses need to pay attention to. But as far as ways for Apple to distance itself from Google, I am not sure I see the connection.

McGee:

The second front in the battle is search. While Apple rarely discusses products while in development, the company has long worked on a feature known internally as “Apple Search”, a tool that facilitates “billions of searches” per day, according to employees on the project.

Now I am confused: is this a service which is in development, or is it available to users? To fit his thesis, McGee appears to want it both ways:

Apple’s search team dates back to at least 2013, when it acquired Topsy Labs, a start-up that had indexed Twitter to enable searches and analytics. The technology is used every time an iPhone user asks Apple’s voice assistant Siri for information, types queries from the home screen, or uses the Mac’s “Spotlight” search feature.

Once again, I have to ask how a feature eight years old means Apple is only now in the process of disentangling itself from Google. Apparently, it is because of speculation in the paragraphs which follow the one above:

Apple’s search offering was augmented with the 2019 purchase of Laserlike, an artificial intelligence start-up founded by former Google engineers that had described its mission as delivering “high quality information and diverse perspectives on any topic from the entire web”.

Josh Koenig, chief strategy officer at Pantheon, a website operations platform, said Apple could quickly take a bite out of Google’s 92 per cent share of the search market by not making Google the default setting for 1.2bn iPhone users.

There is no segue here, and no indication that Apple is actually working to make such a change. Koenig insinuates it could be beneficial to users, but McGee acknowledges it “would be expensive” because Apple would lose its effort-free multibillion-dollar annual payout from Google.

As an aside: an Apple search engine to rival Google’s has long been rumoured. If it is a web search engine, I have long thought Apple could use the siri.com domain it already owns. But it may not have to be web-based — it is plausible that searching the web would display results like a webpage in Safari, but it would only be accessible from within that browser, kind of like the existing Siri Suggestions feature. An idle thought as I write this but, as I said, the article provides no indication that Apple is pursuing this.

McGee:

The third front in Apple’s battle could prove the most devastating: its ambitions in online advertising, where Alphabet makes more than 80 per cent of its revenues.

This is the “future” part of the thesis. Based on job ads, it appears Apple is working on its own advertising system, as first reported by Shoshana Wodinsky at Marketwatch in August. As I wrote then, it looks bad that Apple is doing this in the wake of App Tracking Transparency, and I question the likely trajectory of this. But this is, again, not something which Apple is doing to distance its platform from Google’s products and services, unless you seriously believe Apple will prohibit Google’s ads on its platforms. So long as Google is what it is to internet ads — by the way, stay tuned on that front — Apple may only hope to be a little thorn in Google’s side.

These three examples appear to fit into categories which seem similar but are very different. Business Connect for Apple Maps is not a competitor to Google Business Profile; any business is going to have to maintain both. There are no concrete details provided about Apple’s search ambitions, but it is the only thing here which would reduce Apple’s dependence on Google. Another advertising platform would give Google some competition and put more money in Apple’s pocket, but it may only slightly reduce how much advertisers rely on Google. It seems to me there are pro-competition examples here and there are anti-anti-competition arguments: the U.S. Department of Justice sued Google in September over its exclusivity agreements.

Anyway, speaking of Apple’s contracts with Google, whatever happened to Project McQueen?

Leyland Cecco, the Guardian:

Apple has quietly launched a catalogue of books narrated by artificial intelligence in a move that may mark the beginning of the end for human narrators. The strategy marks an attempt to upend the lucrative and fast-growing audiobook market – but it also promises to intensify scrutiny over allegations of Apple’s anti-competitive behaviour.

[…]

On the company’s Books app, searching for “AI narration” reveals the catalogue of works included in the scheme, which are described as being “narrated by digital voice based on a human narrator”.

Apple says it is starting with fiction and romance titles and, of course, in English only.

In addition to the examples on Apple’s website, I listened to a random selection of previews in Apple Books. They are good and often convincing. But I do not think this is what listeners ought to receive when they spend real money on an audiobook. These voices are only a little better than Apple’s screen reading voices, available in each platform’s Accessibility preferences. “Better than nothing” is not the most compelling argument for me, but I suppose it is inevitable; Google has offered a similar service since last year.

Cristiano Lima, Washington Post:

An academic study finding that Google’s algorithms for weeding out spam emails demonstrated a bias against conservative candidates has inflamed Republican lawmakers, who have seized on the results as proof that the tech giant tried to give Democrats an electoral edge.

[…]

That finding has become the latest piece of evidence used by Republicans to accuse Silicon Valley giants of bias. But the researchers said it’s being taken out of context.

[Muhammad] Shahzad said while the spam filters demonstrated political biases in their “default behavior” with newly created accounts, the trend shifted dramatically once they simulated having users put in their preferences by marking some messages as spam and others as not.
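The dynamic Shahzad describes — a shipped default model whose apparent skew washes out once individual users mark messages as spam or not spam — can be illustrated with a toy word-weight scorer. To be clear, this is a hypothetical sketch of the general mechanism, not a representation of how Gmail’s filter actually works:

```python
# Toy illustration: a provider ships default spam weights, and per-user
# feedback ("not spam") overrides that default behaviour over time.
from collections import Counter

class ToyFilter:
    def __init__(self):
        # Hypothetical "default" weights before any user feedback exists.
        self.spam_words = Counter({"donate": 3, "act": 2, "now": 2})
        self.ham_words = Counter()  # learned from user feedback

    def score(self, message):
        # Positive score means more spam-like; Counter returns 0 for
        # unseen words, so unknown vocabulary contributes nothing.
        words = message.lower().split()
        spam = sum(self.spam_words[w] for w in words)
        ham = sum(self.ham_words[w] for w in words)
        return spam - ham

    def mark_not_spam(self, message):
        # User feedback strengthens the "ham" weight of each word.
        for w in message.lower().split():
            self.ham_words[w] += 3

f = ToyFilter()
msg = "Donate now to our campaign"
before = f.score(msg)   # default model flags the message (score 5)
f.mark_not_spam(msg)    # one round of user feedback
after = f.score(msg)    # the same message now scores as ham (-10)
```

The point of the sketch is that measuring only the default behaviour of a personalized system, as the study did with newly created accounts, says little about what real users with real feedback histories experience.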

Shahzad and the other researchers who authored the paper have disputed the sweeping conclusions of bias drawn by lawmakers. Their plea for nuance has been ignored. Earlier this month, a group of senators introduced legislation to combat this apparent bias. It would prohibit email providers from automatically flagging any political messages as spam, and would require providers to publish quarterly reports detailing how many emails from political parties were filtered.

According to reporting from Mike Masnick at Techdirt, it looks like this bill was championed by Targeted Victory, which also promoted the study to conservative media channels. You may remember Targeted Victory from their involvement in Meta’s campaign against TikTok.

Masnick:

Anyway, looking at all this, it is not difficult to conclude that the digital marketing firm that Republicans use all the time was so bad at its job spamming people, that it was getting caught in spam filters. And rather than, you know, not being so spammy, it misrepresented and hyped up a study to pretend it says something it does not, blame Google for Targeted Victory’s own incompetence, and then have its friends in the Senate introduce a bill to force Google to not move its own emails to spam.

I am of two minds about this. A theme you may have noticed developing on this website over the last several years is a deep suspicion of automated technologies, however they are branded — “machine learning”, “artificial intelligence”, “algorithmic”, and the like. So I do think some scrutiny may be warranted in understanding how automated systems determine a message’s routing.

But it does not seem at all likely to me that a perceived political bias in filtering algorithms is deliberate, so any public report indicating the number or rate of emails from each political party being flagged as spam is wildly unproductive. It completely de-contextualizes these numbers and ignores decades of spam filters being inaccurate from time to time for no good reason.

A better approach to transparency around automated systems is one that helps the public understand how these decisions are made without playing to the perceived bias of parties with a victim complex. Simply counting the number of emails from each party flagged as spam is an idiotic approach. I, too, would like to know why so many of the recommendations algorithms serve me are entirely misguided. This is not the way.

By the way, politicians have a long and proud history of exempting themselves from unfavourable regulations. Insider trading laws virtually do not apply to U.S. congresspersons, even with regulations that ostensibly rein it in. In Canada, politicians excluded themselves from laws governing unsolicited communications by phone and email. Is it any wonder polls have shown declining trust in institutions for decades?

Thomas Brewster, Forbes:

[…] On Wednesday, deputy prime minister and head of the Digital Transformation Ministry in Ukraine, Mykhailo Fedorov, confirmed on his Telegram profile that surveillance technology was being used in this way, a matter of weeks after Clearview AI, the New York-based facial recognition provider, started offering its services to Ukraine for those same purposes. Fedorov didn’t say what brand of artificial intelligence was being used in this way, but his department later confirmed to Forbes that it was Clearview AI, which is providing its software for free. They’ll have a good chance of getting some matches: In an interview with Reuters earlier this month, Clearview CEO Hoan Ton-That said the company had a store of 10 billion users’ faces scraped from social media, including 2 billion from Russian Facebook alternative Vkontakte. Fedorov wrote in a Telegram post that the ultimate aim was to “dispel the myth of a ‘special operation’ in which there are ‘no conscripts’ and ‘no one dies.’”

Tim Cushing, Techdirt:

Or maybe it’s just Clearview jumping on the bandwagon by supporting a country that already has the support of the most powerful governments in the world. Grabbing onto passing coattails and contacting journalists to get the word out about the company’s reverse-heel turn is savvy marketing. But it’s little more than that. The tech may prove useful (if the Ukraine government is even using it), but that shouldn’t be allowed to whitewash Clearview’s (completely earned) terrible reputation. Even if it’s useful, it’s only useful because the company was willing to do what no other company was: scrape millions of websites and sell access to the scraped data to anyone willing to pay for it.

It has been abundantly clear for a long time that accurate facial recognition can have its benefits, just as recording everyone’s browser history could make it easier to investigate crime. Even if it seems helpful, it is still an uneasy technology developed by an ethically bankrupt company. It is hard for me to see this as much more than Clearview cynically using a war as a marketing opportunity, given that it spread news of its participation weeks before anyone in the Ukrainian government confirmed it.

The marketplace for exploits and software of an ethically questionable nature is a controversial one, but something even I can concede has value. If third-party vendors are creating targeted surveillance methods, it means that the vast majority of us can continue to have secure and private systems without mandated “back doors”. It seems like an agreeable compromise so long as those vendors restrict their sales to governments and organizations with good human rights records.

NSO Group, creators of Pegasus spyware, seems to agree. Daniel Estrin, reporting last month at NPR:

NSO says it has 60 customers in 40 countries, all of them intelligence agencies, law enforcement bodies and militaries. It says in recent years, before the media reports, it blocked its software from five governmental agencies, including two in the past year, after finding evidence of misuse. The Washington Post reported the clients suspended include Saudi Arabia, Dubai in the United Arab Emirates and some public agencies in Mexico.

Pegasus can have legitimate surveillance use, but it has great potential for abuse. NSO Group would like us to believe that it cares deeply about selling only to clients that will use the software to surveil possible terrorists and valuable criminal targets. So, how is that going?

Bill Marczak, et al., Citizen Lab:

We identified nine Bahraini activists whose iPhones were successfully hacked with NSO Group’s Pegasus spyware between June 2020 and February 2021. Some of the activists were hacked using two zero-click iMessage exploits: the 2020 KISMET exploit and a 2021 exploit that we call FORCEDENTRY.

[…]

At least four of the activists were hacked by LULU, a Pegasus operator that we attribute with high confidence to the government of Bahrain, a well-known abuser of spyware. One of the activists was hacked in 2020 several hours after they revealed during an interview that their phone was hacked with Pegasus in 2019.

As Citizen Lab catalogues, Bahrain’s record of human rights failures and internet censorship should have indicated to NSO Group that misuse of its software was all but guaranteed.

NSO Group is just one company offering software with dubious ethics. Remember Clearview? When Buzzfeed News reported last year that the company was expanding internationally, Hoan Ton-That, Clearview’s CEO, brushed aside human rights concerns:

“Clearview is focused on doing business in USA and Canada,” Ton-That said. “Many countries from around the world have expressed interest in Clearview.”

Later last year, Clearview went a step further and said it would terminate private contracts, and its Code of Conduct promises that it only works with law enforcement entities and that searches must be “authorized by a supervisor”. You can probably see where this is going.

Ryan Mac, Caroline Haskins, and Antonio Pequeño IV, Buzzfeed News:

Like a number of American law enforcement agencies, some international agencies told BuzzFeed News that they couldn’t discuss their use of Clearview. For instance, Brazil’s Public Ministry of Pernambuco, which is listed as having run more than 100 searches, said that it “does not provide information on matters of institutional security.”

But data reviewed by BuzzFeed News shows that individuals at nine Brazilian law enforcement agencies, including the country’s federal police, are listed as having used Clearview, cumulatively running more than 1,250 searches as of February 2020. All declined to comment or did not respond to requests for comment.

[…]

Documents reviewed by BuzzFeed News also show that Clearview had a fledgling presence in Middle Eastern countries known for repressive governments and human rights concerns. In Saudi Arabia, individuals at the Artificial Intelligence Center of Advanced Studies (also known as Thakaa) ran at least 10 searches with Clearview. In the United Arab Emirates, people associated with Mubadala Investment Company, a sovereign wealth fund in the capital of Abu Dhabi, ran more than 100 searches, according to internal data.

As noted, this data only covers up until February last year; perhaps the policies governing acceptable use and clientele were only implemented afterward. But it is alarming to think that a company which bills itself as the world’s best facial recognition provider ever felt comfortable enabling searches by regimes with poor human rights records, private organizations, and individuals in non-supervisory roles. It does jibe with Clearview’s apparent origin story, and that should be a giant warning flag.

These companies can make whatever ethical promises they want, but money talks louder. Unsurprisingly, when faced with a choice about whether to allow access to their software judiciously, they choose to gamble that nobody will find out.

Katie Notopoulos, Buzzfeed News:

The Great Deplatforming was a response to a singular and extreme event: Trump’s incitement of the Capitol attack. As journalist Casey Newton pointed out in his newsletter, Platformer, it was notable how quickly the full stack of tech companies reacted. We shouldn’t assume that Amazon will just start taking down any site because it did it this time. This was truly an unprecedented event. On the other hand, do we dare think for a moment that Bad Shit won’t keep happening? Buddy, bad things are going to happen. Worse things. Things we can’t even imagine yet!

[…]

Long before Facebook, Twitter, and YouTube were excusing their moderation failures with lines like “there’s always more work to be done” and “if you only knew about all the stuff we remove before you see it,” Something Awful, the influential message board from the early internet, managed to create a healthy community by aggressively banning bozos. As the site’s founder, Rich “Lowtax” Kyanka, told the Outline in 2017, the big platforms might have had an easier time of it if they’d done the same thing, instead of chasing growth at any cost: […]

Ben Smith, New York Times:

This is the Oversight Board, a hitherto obscure body that will, over the next 87 days, rule on one of the most important questions in the world: Should Donald J. Trump be permitted to return to Facebook and reconnect with his millions of followers?

[…]

But the board has been handling pretty humdrum stuff so far. It has spent a lot of time, two people involved told me, discussing nipples, and how artificial intelligence can identify different nipples in different contexts. Board members have also begun pushing to have more power over the crucial question of how Facebook amplifies content, rather than just deciding on taking posts down and putting them up, those people said. In October, it took on a half-dozen cases, about posts by random users, not world leaders: Can Facebook users in Brazil post images of women’s nipples to educate their followers about breast cancer? Should the platform allow users to repost a Muslim leader’s angry tweet about France? It is expected to finally issue rulings at the end of this week, after what participants described as a long training followed by slow and intense deliberations.

That is certainly a range of topics, though one continues to wonder how much — if anything — can truly be offloaded to artificial intelligence.

Cory Doctorow:

[…] Our media, speech forums, and distribution systems are all run by cartels and monopolists whom governments can’t even tax – forget regulating them.

The most consequential regulation of these industries is negative regulation – a failure to block anticompetitive mergers and market-cornering vertical monopolies.

Doctorow calls this censorship; I disagree that what we have seen from tech giants truly amounts to that. Moderation is not synonymous with censorship. I do not think you can look at the vast landscape of publishing options available to just about anyone and conclude that speech is more restricted now than it was, say, ten or twenty years ago.

But Doctorow is right in observing that there are now bigger beasts that effectively function as unelected governments. They materialized because venture capital and regulators incentivized lower costs and faster growth. Our existing primary venues for online discussion are a product of this era, and it shows: Twitter’s newest solution for countering misinformation is adding comments from trusted users. It is such a Web 2.0 solution that it could have a wet floor effect logo and one of those shiny gummy-coloured “beta” callouts.

The way that we figure out how to create healthier communities is by trying new things in technology and antitrust policy. Ironically, I anticipate many of these new ideas will more closely model those from a previous generation, but necessarily updated for billions of people connected through the internet. It does seem unlikely to me that all of those people will be connected through the same platform — a new Facebook, for example — and more likely that they will be connected through the same protocols.

Julia Kollewe, the Guardian:

Uber has ditched efforts to develop its own self-driving car with the multibillion-dollar sale of its driverless car division to a Silicon Valley startup.

The ride-hailing company is selling the business, known as Advanced Technologies Group (ATG), for a reported $4bn (£3bn) to Aurora, a start-up that makes sensors and software for autonomous vehicles and is backed by Amazon and Sequoia Capital.

As part of the deal, Uber is investing $400m in Aurora in return for a minority stake of 26%. Uber’s chief executive, Dara Khosrowshahi, will join Aurora’s board. The deal will also give Aurora access to a carmaker, Japan’s Toyota, which has invested in ATG. ATG has grown to a venture with 1,200 employees.

This does not, by itself, mean Uber will never have autonomous vehicles — though I think that is unlikely for lots of reasons — but it surely does not indicate that the project is going well if it is being offloaded to a startup. Uber’s autonomous transportation effort was, according to its S-1 public offering document, key to its long-term success:

If we fail to develop and successfully commercialize autonomous vehicle technologies or fail to develop such technologies before our competitors, or if such technologies fail to perform as expected, are inferior to those of our competitors, or are perceived as less safe than those of our competitors or non-autonomous vehicles, our financial performance and prospects would be adversely impacted.

So that was yesterday’s shake-up; here is today’s, from Mark Gurman at Bloomberg:

Apple Inc. has moved its self-driving car unit under the leadership of top artificial intelligence executive John Giannandrea, who will oversee the company’s continued work on an autonomous system that could eventually be used in its own car.

The project, known as Titan, is run day-to-day by Doug Field. His team of hundreds of engineers have moved to Giannandrea’s artificial intelligence and machine-learning group, according to people familiar with the change. An Apple spokesman declined to comment.

Previously, Field reported to Bob Mansfield, Apple’s former senior vice president of hardware engineering. Mansfield has now fully retired from Apple, leading to Giannandrea taking over.

Mansfield, you may recall, retired in June 2012 only to be hired back just a few months later to oversee a new generically-titled Technologies group. Mansfield only committed to remaining at Apple through 2014 but stuck around: in 2016, Daisuke Wakabayashi of the Wall Street Journal reported that Mansfield was moved to run the autonomous car project:

Bob Mansfield had stepped back from a day-to-day role at the company a few years ago, after leading the hardware engineering development of products including the MacBook Air laptop computer, the iMac desktop computer, and the iPad tablet. Apple now has Mr. Mansfield running the company’s secret autonomous, electric-vehicle initiative, code-named Project Titan, the people said.

[…]

Mr. Mansfield’s reassignment brings a leader with a record of delivering challenging technical products to market to an effort that has been mired in problems, according to people familiar with the project.

Mansfield’s other major project in that time was the Apple Watch. I wonder if this is an indication that much of the hardware work is done and turning it over to Giannandrea is the remaining step in solving the “mother of all AI problems”.

Truly autonomous vehicles are, I continue to believe, a pipe dream for this generation. But if it is not — if self-driving cars really are within reach — I struggle to believe that the company that brought us Siri is capable of cracking this in the not-too-distant future. I would love to be proved wrong, but I have also wanted Siri to work as expected for nearly a decade now.

Yimou Lee and Ben Blanchard, Reuters:

Foxconn said on Thursday its investment plan did not depend on who the U.S. president was. It was, however, exploring the option of building a new production line there.

“We continue to push forward in Wisconsin as planned, but the product has to be in line with the market demand … there could be a change in what product we make there,” Chairman Liu Young-way said at an investor conference.

Possible new products include those related to servers, telecommunications and artificial intelligence, he later told reporters.

Like some sort of AI 8K+5G combination, perhaps? I am dying to know what, precisely, Foxconn believes that is.