Search Results for: "artificial intelligence"

Miles Kruppa and Sam Schechner, Wall Street Journal:

Now Google, the company that helped pioneer the modern era of artificial intelligence, finds its cautious approach to that very technology being tested by one of its oldest rivals. Last month Microsoft Corp. announced plans to infuse its Bing search engine with the technology behind the viral chatbot ChatGPT, which has wowed the world with its ability to converse in humanlike fashion. Developed by a seven-year-old startup co-founded by Elon Musk called OpenAI, ChatGPT piggybacked on early AI advances made at Google itself.

[…]

Google’s approach could prove to be prudent. Microsoft said in February it would put new limits on its chatbot after users reported inaccurate answers, and sometimes unhinged responses when pushing the app to its limits.

Many in the tech commentariat have predicted victory for Microsoft products so often that it has become a running joke in some circles. Microsoft released one of the first folding phones, for example, and eventually unloaded it on Woot at a 70% discount; it was one of many instances where Microsoft’s entry was prematurely championed.

Even after the embarrassing problems in Google’s Bard presentation and demos, and the pressure the company is now facing, I imagine its management is feeling somewhat vindicated by Microsoft’s rapid dampening of the sassier side of its assistant. While Microsoft could use its sheer might to rapidly build a user base — as it did with Teams — Google could also flip a switch to do the same because it is the world’s most popular website.

Christopher Mims, Wall Street Journal:

Technologists broadly agree that the so-called generative AI that powers systems like ChatGPT has the potential to change how we live and work, despite the technology’s clear flaws. But some investors, chief executives and engineers see signs of froth that remind them of the crypto boom that recently fizzled.

[…]

“The people talking about generative AI right now were the people talking about Web3 and blockchain until recently—the Venn diagram is a circle,” says Ben Waber, chief executive of Humanyze, a company that uses AI and other tools to analyze work behavior. “People have just rebranded themselves.”

One difference between this wave of hype and those that came before it is that anyone can imagine how they might use these services. Who gives a shit about virtual reality meetings? Pretty much nobody so far. Generative services seem like a more lasting proposition, assuming it is possible to make them more accurate and comprehensive.

There are many things I considered linking to in recent weeks, but it is easier to do so through a story told in block quotes; they share a common thread and theme.

I begin with Can Duruk, at Margins:

I’m old enough to remember when Google came out, which makes me old enough to remember at least 20 different companies that were touted as Google-killers. I applaud every single one of them! A single American company operating as a bottleneck behind the world’s information is a dangerous, and inefficient proposition and a big theme of the Margins is that monopolies are bad so it’s also on brand.

But for one reason and another, none of the Google competitors have seemed to capture the world’s imagination.

Until now, that is. Yes, sorry, I’m talking about that AI bot.

Dave Winer:

I went to ChatGPT and entered “Simple instructions about how to send email from a Node.js app?” What came back was absolutely perfect, none of the confusing crap and business models you see in online instructions in Google. I see why Google is worried.
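For context, the kind of answer Winer describes would likely resemble the following sketch, which uses the popular third-party nodemailer package (an assumption on my part — he does not say which library ChatGPT suggested). The host, account, and password are placeholders, not real services.

```javascript
// A rough sketch of sending email from a Node.js app with nodemailer
// (npm install nodemailer). All credentials below are placeholders.
async function sendEmail({ to, subject, text }) {
  const nodemailer = require("nodemailer"); // third-party dependency

  // Create a reusable transport pointed at your provider's SMTP server
  const transporter = nodemailer.createTransport({
    host: "smtp.example.com",
    port: 587,
    secure: false, // the connection is upgraded with STARTTLS
    auth: { user: "you@example.com", pass: "your-app-password" },
  });

  // Send a plain-text message and return the delivery info
  return transporter.sendMail({
    from: '"You" <you@example.com>',
    to,
    subject,
    text,
  });
}
```

The appeal Winer identifies is that this is the whole answer: no interstitial ads, no upsells for an email-sending API, just the few lines a working example needs.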

Michael Tsai has a collection of related links. All of the above were posted within the month of January. It really felt like the narrative of competition between Google and ChatGPT was reaching some kind of peak.

Owen Yin last week:

Gone are the days of typing in specific keywords and struggling to find the information you need. Microsoft Bing is on the cusp of releasing its ChatGPT integration, which will allow users to ask questions in a natural way and get a tailored search experience that will reshape how we explore the internet.

I got a preview of Bing’s ChatGPT integration and managed to get some research in before it was shut off.

That is right: Microsoft was first to show a glimpse of the future of search engines with Bing, but only briefly, and only to a few people. I tried URL-hacking Bing to see if I could find any remnant of this, and I think I got close: visiting bing.com/chat will show search results for “Bing AI”. If nothing else, it is a very good marketing tease.

Sundar Pichai today:

We’ve been working on an experimental conversational AI service, powered by LaMDA, that we’re calling Bard. And today, we’re taking another step forward by opening it up to trusted testers ahead of making it more widely available to the public in the coming weeks.

[…]

Now, our newest AI technologies — like LaMDA, PaLM, Imagen and MusicLM — are building on this, creating entirely new ways to engage with information, from language and images to video and audio. We’re working to bring these latest AI advancements into our products, starting with Search.

Via Andy Baio:

Google used to take pride in minimizing time we spent there, guiding us to relevant pages as quickly as possible. Over time, they tried to answer everything themselves: longer snippets, inline FAQs, search results full of knowledge panels.

Today’s Bard announcement feels like their natural evolution: extracting all value out of the internet for themselves, burying pages at the bottom of each GPT-generated essay like footnotes.

Google faced antitrust worries due, in part, to its Snippets feature, which automatically excerpts webpage text to answer queries without the searcher having to click. As of 2019, over half of Google searches returned a result without a user clicking out.

The original point of search engines was to be directed to websites of interest. But that has not been the case for years. People are not interested in visiting websites about a topic; they, by and large, just want answers to their questions. Google has been strip-mining the web for years, leveraging its unique position as the world’s most popular website and the web’s de facto directory to replace what made it great with what allows it to retain its dominance. Artificial intelligence — or some simulation of it — really does make things better for searchers, and I bet it could reduce some tired search optimization tactics. But it comes at the cost of making us all into uncompensated producers for the benefit of trillion-dollar companies like Google and Microsoft.

Baio:

Personally, I wish that the “code red” response that ChatGPT inspired at Google wasn’t to launch a dozen AI products that their red teams and AI ethicists have warned them not to release, but to combat the tsunami of AI-generated SEO spam bullshit that’s in the process of destroying their core product. Instead, they’re blissfully launching new free tools to generate even more of it.

It is fascinating to see Google make its “Bard” announcement in the weeks following CNET’s embarrassing, and lucrative, machine-generated articles.

Jon Christian, Futurism:

Looking at the outraged public reaction to the news about CNET‘s AI articles, [director of search engine optimization, Jake] Gronsky realized that the company might have gone too far. In fact, he explained, he saw the whole scandal as a cautionary tale illustrating that Red Ventures shouldn’t mark its AI-generated content for readers at all.

“Disclosing AI content is like telling the IRS you have a cash-only business,” he warned.

Gronsky wasn’t just concerned about CNET and Bankrate, though. He was also worried that a Google crackdown could impact Red Ventures’ formidable portfolio of sites that target prospective college students, known internally as Red Ventures EDU.

To be clear, it seems saner and more ethically conscious heads prevailed: CNET articles generated by automated means are now marked as such.

The web is corrupt and it is only getting worse. Websites supported by advertising — itself an allegedly illegal Google monopoly — rely on clicks from searchers, so they have compromised their integrity to generate made-for-Google articles, many of which feature high-paying affiliate links. Search optimization experts have spent years in an adversarial relationship with Google in an attempt to get their clients’ pages to the coveted first page of results, often through means which make results worse for searchers. Artificial intelligence is, it seems, a way out of this mess — but the compromise is that search engines get to take from everyone while giving nothing back. Google has been taking steps in this direction for years: its results page has been increasingly filled with ways of discouraging people from leaving its confines. A fully automated web interpreter is a more fully realized version of this initiative. Searchers get the results they are looking for and, in the process, undermine the broader web. The price we all pay is the worsening of the open web for Google’s benefit.

New world, familiar worries.

Patrick McGee, Financial Times:

Apple is taking steps to separate its mobile operating system from features offered by Google parent Alphabet, making advances around maps, search and advertising that has created a collision course between the Big Tech companies.

[…]

One of these people said Apple is still engaged in a “silent war” against its arch-rival. It is doing so by developing features that could allow the iPhone-maker to further separate its products from services offered by Google. Apple did not respond to requests for comment.

This is a strange article. The thesis, above, is that Apple is trying to reduce its dependence on Google’s services. But McGee cannot seem to decide whether Apple’s past, present, or future changes are directly relevant, so he kind of posits that they all are. Here, look:

The first front of this battle is mapping, which started in 2012 when Apple released Maps, displacing its Google rival as a pre-downloaded app.

The move was supposed to be a shining moment for Apple’s software prowess but the launch was so buggy — some bridges, for example, appeared deformed and sank into oceans — that chief executive Tim Cook said he was “extremely sorry for the frustration this has caused our customers”.

Apple Maps turns eleven years old in 2023, so it is safe to say that Apple adequately distanced itself from its reliance upon Google for maps, oh, about eleven years ago. Whether users have is another question entirely. The 3D rendering problems may have been the most memorable glitches, but the biggest day-to-day problems for users were issues with bad data.

So what is new?

Apple’s Maps has improved considerably in the past decade, however. Earlier this month it announced Business Connect, a feature that lets companies claim their digital location so they can interact with users, display photos and offer promotions.

While businesses have been able to claim their listing and manage its details for years, the recently launched Business Connect is a more comprehensive tool. That has advantages for businesses and users alike, as there may be better point-of-interest data, though it is another thing businesses need to pay attention to. But as far as ways for Apple to distance itself from Google, I am not sure I see the connection.

McGee:

The second front in the battle is search. While Apple rarely discusses products while in development, the company has long worked on a feature known internally as “Apple Search”, a tool that facilitates “billions of searches” per day, according to employees on the project.

Now I am confused: is this a service which is in development, or is it available to users? To fit his thesis, McGee appears to want it both ways:

Apple’s search team dates back to at least 2013, when it acquired Topsy Labs, a start-up that had indexed Twitter to enable searches and analytics. The technology is used every time an iPhone user asks Apple’s voice assistant Siri for information, types queries from the home screen, or uses the Mac’s “Spotlight” search feature.

Once again, I have to ask how an eight-year-old feature means Apple is only now in the process of disentangling itself from Google. Apparently, it is because of speculation in the paragraphs which follow the one above:

Apple’s search offering was augmented with the 2019 purchase of Laserlike, an artificial intelligence start-up founded by former Google engineers that had described its mission as delivering “high quality information and diverse perspectives on any topic from the entire web”.

Josh Koenig, chief strategy officer at Pantheon, a website operations platform, said Apple could quickly take a bite out of Google’s 92 per cent share of the search market by not making Google the default setting for 1.2bn iPhone users.

There is no segue here, and no indication that Apple is actually working to make such a change. Koenig insinuates it could be beneficial to users, but McGee acknowledges it “would be expensive” because Apple would lose its effort-free multibillion-dollar annual payout from Google.

As an aside: an Apple search engine to rival Google’s has long been rumoured. If it is a web search engine, I have long thought Apple could use the siri.com domain it already owns. But it may not have to be web-based — it is plausible that searching the web would display results like a webpage in Safari, but it would only be accessible from within that browser, kind of like the existing Siri Suggestions feature. An idle thought as I write this but, as I said, the article provides no indication that Apple is pursuing this.

McGee:

The third front in Apple’s battle could prove the most devastating: its ambitions in online advertising, where Alphabet makes more than 80 per cent of its revenues.

This is the “future” part of the thesis. Based on job ads, it appears Apple is working on its own advertising system, as first reported by Shoshana Wodinsky at Marketwatch in August. As I wrote then, it looks bad that Apple is doing this in the wake of App Tracking Transparency, and I question the likely trajectory of this. But this is, again, not something which Apple is doing to distance its platform from Google’s products and services, unless you seriously believe Apple will prohibit Google’s ads on its platforms. So long as Google is what it is to internet ads — by the way, stay tuned on that front — Apple may only hope to be a little thorn in Google’s side.

These three examples appear to fit into categories which seem similar but are very different. Business Connect for Apple Maps is not a competitor to Google Business Profile; any business is going to have to maintain both. There are no concrete details provided about Apple’s search ambitions, but it is the only thing here which would reduce Apple’s dependence on Google. Another advertising platform would give Google some competition and put more money in Apple’s pocket, but it may only slightly reduce how much advertisers rely on Google. It seems to me there are pro-competition examples here and there are anti-anti-competition arguments: the U.S. Department of Justice sued Google in September over its exclusivity agreements.

Anyway, speaking of Apple’s contracts with Google, whatever happened to Project McQueen?

Leyland Cecco, the Guardian:

Apple has quietly launched a catalogue of books narrated by artificial intelligence in a move that may mark the beginning of the end for human narrators. The strategy marks an attempt to upend the lucrative and fast-growing audiobook market – but it also promises to intensify scrutiny over allegations of Apple’s anti-competitive behaviour.

[…]

On the company’s Books app, searching for “AI narration” reveals the catalogue of works included in the scheme, which are described as being “narrated by digital voice based on a human narrator”.

Apple says it is starting with fiction and romance titles and, of course, in English only.

In addition to the examples on Apple’s website, I listened to a random selection of previews in Apple Books. They are good and often convincing. But I do not think this is what listeners ought to receive when they spend real money on an audiobook. These voices are only a little better than Apple’s screen reading voices, available in each platform’s Accessibility preferences. “Better than nothing” is not the most compelling argument for me, but I suppose it is inevitable; Google has offered a similar service since last year.

Cristiano Lima, Washington Post:

An academic study finding that Google’s algorithms for weeding out spam emails demonstrated a bias against conservative candidates has inflamed Republican lawmakers, who have seized on the results as proof that the tech giant tried to give Democrats an electoral edge.

[…]

That finding has become the latest piece of evidence used by Republicans to accuse Silicon Valley giants of bias. But the researchers said it’s being taken out of context.

[Muhammad] Shahzad said while the spam filters demonstrated political biases in their “default behavior” with newly created accounts, the trend shifted dramatically once they simulated having users put in their preferences by marking some messages as spam and others as not.

Shahzad and the other researchers who authored the paper have disputed the sweeping conclusions of bias drawn by lawmakers. Their plea for nuance has been ignored: earlier this month, a group of senators introduced legislation to combat this apparent bias. It would prohibit email providers from automatically flagging political messages as spam and require them to publish quarterly reports detailing how many emails from political parties were filtered.
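Shahzad’s distinction between “default behavior” and behavior after users mark messages is inherent to how statistical spam filters work: a filter’s judgment is dominated by its priors until user labels shift them. A toy illustration of that dynamic (my sketch, not the study’s methodology):

```javascript
// Toy Bayesian-style spam scorer: starts from a neutral prior and updates
// as the user marks messages. It shows why a brand-new account's filtering
// says little about how the filter behaves after personalization.
class ToyFilter {
  constructor() {
    // Laplace-smoothed counts per word: [timesInSpam, timesInHam]
    this.counts = new Map();
  }
  learn(words, isSpam) {
    for (const w of words) {
      const [s, h] = this.counts.get(w) ?? [1, 1];
      this.counts.set(w, isSpam ? [s + 1, h] : [s, h + 1]);
    }
  }
  spamProbability(words) {
    // Naively combine per-word spam/ham ratios (independence assumption)
    let logOdds = 0;
    for (const w of words) {
      const [s, h] = this.counts.get(w) ?? [1, 1];
      logOdds += Math.log(s / h);
    }
    return 1 / (1 + Math.exp(-logOdds));
  }
}

const filter = new ToyFilter();
// Before any user feedback, an unseen message scores the neutral 0.5.
const before = filter.spamProbability(["donate", "now"]);
// After the user marks similar messages as not-spam, the score drops.
filter.learn(["donate", "now"], false);
filter.learn(["donate", "today"], false);
const after = filter.spamProbability(["donate", "now"]);
// before = 0.5, after < 0.5
```

Real filters are far more sophisticated, but the shape is the same: counting what a fresh account flags measures the defaults, not what users experience once the filter has learned their preferences.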

According to reporting from Mike Masnick at Techdirt, it looks like this bill was championed by Targeted Victory, which also promoted the study to conservative media channels. You may remember Targeted Victory from their involvement in Meta’s campaign against TikTok.

Masnick:

Anyway, looking at all this, it is not difficult to conclude that the digital marketing firm that Republicans use all the time was so bad at its job spamming people, that it was getting caught in spam filters. And rather than, you know, not being so spammy, it misrepresented and hyped up a study to pretend it says something it does not, blame Google for Targeted Victory’s own incompetence, and then have its friends in the Senate introduce a bill to force Google to not move its own emails to spam.

I am of two minds about this. A theme you may have noticed developing on this website over the last several years is a deep suspicion of automated technologies, however they are branded — “machine learning”, “artificial intelligence”, “algorithmic”, and the like. So I do think some scrutiny may be warranted in understanding how automated systems determine a message’s routing.

But it does not seem at all likely to me that a perceived political bias in filtering algorithms is deliberate, so any public report indicating the number or rate of emails from each political party being flagged as spam is wildly unproductive. It completely de-contextualizes these numbers and ignores decades of spam filters being inaccurate from time to time for no good reason.

A better approach to transparency around automated systems is one that helps the public understand how these decisions are made without playing to the perceived bias of parties with a victim complex. Simply counting the number of emails flagged as spam from each party is an idiotic approach. I, too, would like to know why many of the things recommended to me by algorithms are entirely misguided. This is not the way.

By the way, politicians have a long and proud history of exempting themselves from unfavourable regulations. Insider trading laws virtually do not apply to U.S. congresspersons, even with regulations ostensibly meant to rein the practice in. In Canada, politicians excluded themselves from laws governing unsolicited communications by phone and email. Is it any wonder that polls have shown declining trust in institutions for decades?

Thomas Brewster, Forbes:

[…] On Wednesday, deputy prime minister and head of the Digital Transformation Ministry in Ukraine, Mykhailo Fedorov, confirmed on his Telegram profile that surveillance technology was being used in this way, a matter of weeks after Clearview AI, the New York-based facial recognition provider, started offering its services to Ukraine for those same purposes. Fedorov didn’t say what brand of artificial intelligence was being used in this way, but his department later confirmed to Forbes that it was Clearview AI, which is providing its software for free. They’ll have a good chance of getting some matches: In an interview with Reuters earlier this month, Clearview CEO Hoan Ton-That said the company had a store of 10 billion users’ faces scraped from social media, including 2 billion from Russian Facebook alternative Vkontakte. Fedorov wrote in a Telegram post that the ultimate aim was to “dispel the myth of a ‘special operation’ in which there are ‘no conscripts’ and ‘no one dies.’”

Tim Cushing, Techdirt:

Or maybe it’s just Clearview jumping on the bandwagon by supporting a country that already has the support of the most powerful governments in the world. Grabbing onto passing coattails and contacting journalists to get the word out about the company’s reverse-heel turn is savvy marketing. But it’s little more than that. The tech may prove useful (if the Ukraine government is even using it), but that shouldn’t be allowed to whitewash Clearview’s (completely earned) terrible reputation. Even if it’s useful, it’s only useful because the company was willing to do what no other company was: scrape millions of websites and sell access to the scraped data to anyone willing to pay for it.

It has been abundantly clear for a long time that accurate facial recognition can have its benefits, just as recording everyone’s browser history could make it easier to investigate crime. Even if it seems helpful, it is still an uneasy technology developed by an ethically bankrupt company. It is hard for me to see this as much more than Clearview cynically using a war as a marketing opportunity, given that it spread news of its participation weeks before anyone in the Ukrainian government confirmed it.

The marketplace for exploits and software of an ethically questionable nature is a controversial one, but something even I can concede has value. If third-party vendors are creating targeted surveillance methods, it means that the vast majority of us can continue to have secure and private systems without mandated “back doors”. It seems like an agreeable compromise so long as those vendors restrict their sales to governments and organizations with good human rights records.

NSO Group, creators of Pegasus spyware, seems to agree. Daniel Estrin, reporting last month at NPR:

NSO says it has 60 customers in 40 countries, all of them intelligence agencies, law enforcement bodies and militaries. It says in recent years, before the media reports, it blocked its software from five governmental agencies, including two in the past year, after finding evidence of misuse. The Washington Post reported the clients suspended include Saudi Arabia, Dubai in the United Arab Emirates and some public agencies in Mexico.

Pegasus can have legitimate surveillance use, but it has great potential for abuse. NSO Group would like us to believe that it cares deeply about selling only to clients that will use the software to surveil possible terrorists and valuable criminal targets. So, how is that going?

Bill Marczak, et al., Citizen Lab:

We identified nine Bahraini activists whose iPhones were successfully hacked with NSO Group’s Pegasus spyware between June 2020 and February 2021. Some of the activists were hacked using two zero-click iMessage exploits: the 2020 KISMET exploit and a 2021 exploit that we call FORCEDENTRY.

[…]

At least four of the activists were hacked by LULU, a Pegasus operator that we attribute with high confidence to the government of Bahrain, a well-known abuser of spyware. One of the activists was hacked in 2020 several hours after they revealed during an interview that their phone was hacked with Pegasus in 2019.

As Citizen Lab catalogues, Bahrain’s record of human rights failures and internet censorship should have indicated to NSO Group that misuse of its software was all but guaranteed.

NSO Group is just one company offering software with dubious ethics. Remember Clearview? When Buzzfeed News reported last year that the company was expanding internationally, Hoan Ton-That, Clearview’s CEO, brushed aside human rights concerns:

“Clearview is focused on doing business in USA and Canada,” Ton-That said. “Many countries from around the world have expressed interest in Clearview.”

Later last year, Clearview went a step further and said it would terminate private contracts, and its Code of Conduct promises that it only works with law enforcement entities and that searches must be “authorized by a supervisor”. You can probably see where this is going.

Ryan Mac, Caroline Haskins, and Antonio Pequeño IV, Buzzfeed News:

Like a number of American law enforcement agencies, some international agencies told BuzzFeed News that they couldn’t discuss their use of Clearview. For instance, Brazil’s Public Ministry of Pernambuco, which is listed as having run more than 100 searches, said that it “does not provide information on matters of institutional security.”

But data reviewed by BuzzFeed News shows that individuals at nine Brazilian law enforcement agencies, including the country’s federal police, are listed as having used Clearview, cumulatively running more than 1,250 searches as of February 2020. All declined to comment or did not respond to requests for comment.

[…]

Documents reviewed by BuzzFeed News also show that Clearview had a fledgling presence in Middle Eastern countries known for repressive governments and human rights concerns. In Saudi Arabia, individuals at the Artificial Intelligence Center of Advanced Studies (also known as Thakaa) ran at least 10 searches with Clearview. In the United Arab Emirates, people associated with Mubadala Investment Company, a sovereign wealth fund in the capital of Abu Dhabi, ran more than 100 searches, according to internal data.

As noted, this data only covers up until February last year; perhaps the policies governing acceptable use and clientele were only implemented afterward. But it is alarming to think that a company which bills itself as the world’s best facial recognition provider ever felt comfortable enabling searches by regimes with poor human rights records, private organizations, and individuals in non-supervisory roles. It does jibe with Clearview’s apparent origin story, and that should be a giant warning flag.

These companies can make whatever ethical promises they want, but money talks louder. Unsurprisingly, when faced with a choice about whether to allow access to their software judiciously, they choose to gamble that nobody will find out.

Katie Notopoulos, Buzzfeed News:

The Great Deplatforming was a response to a singular and extreme event: Trump’s incitement of the Capitol attack. As journalist Casey Newton pointed out in his newsletter, Platformer, it was notable how quickly the full stack of tech companies reacted. We shouldn’t assume that Amazon will just start taking down any site because it did it this time. This was truly an unprecedented event. On the other hand, do we dare think for a moment that Bad Shit won’t keep happening? Buddy, bad things are going to happen. Worse things. Things we can’t even imagine yet!

[…]

Long before Facebook, Twitter, and YouTube were excusing their moderation failures with lines like “there’s always more work to be done” and “if you only knew about all the stuff we remove before you see it,” Something Awful, the influential message board from the early internet, managed to create a healthy community by aggressively banning bozos. As the site’s founder, Rich “Lowtax” Kyanka, told the Outline in 2017, the big platforms might have had an easier time of it if they’d done the same thing, instead of chasing growth at any cost: […]

Ben Smith, New York Times:

This is the Oversight Board, a hitherto obscure body that will, over the next 87 days, rule on one of the most important questions in the world: Should Donald J. Trump be permitted to return to Facebook and reconnect with his millions of followers?

[…]

But the board has been handling pretty humdrum stuff so far. It has spent a lot of time, two people involved told me, discussing nipples, and how artificial intelligence can identify different nipples in different contexts. Board members have also begun pushing to have more power over the crucial question of how Facebook amplifies content, rather than just deciding on taking posts down and putting them up, those people said. In October, it took on a half-dozen cases, about posts by random users, not world leaders: Can Facebook users in Brazil post images of women’s nipples to educate their followers about breast cancer? Should the platform allow users to repost a Muslim leader’s angry tweet about France? It is expected to finally issue rulings at the end of this week, after what participants described as a long training followed by slow and intense deliberations.

That is certainly a range of topics, though one continues to wonder how much — if anything — can truly be offloaded to artificial intelligence.

Cory Doctorow:

[…] Our media, speech forums, and distribution systems are all run by cartels and monopolists whom governments can’t even tax – forget regulating them.

The most consequential regulation of these industries is negative regulation – a failure to block anticompetitive mergers and market-cornering vertical monopolies.

Doctorow calls this censorship; I disagree that what we have seen from tech giants truly amounts to that. Moderation is not synonymous with censorship. I do not think you can look at the vast landscape of publishing options available to just about anyone and conclude that speech is more restricted now than it was, say, ten or twenty years ago.

But Doctorow is right in observing that there are now bigger beasts that effectively function as unelected governments. They materialized because venture capital and regulators incentivized lower costs and faster growth. Our existing primary venues for online discussion are a product of this era, and it shows: Twitter’s newest solution for countering misinformation is adding comments from trusted users. It is such a Web 2.0 solution that it could have a wet floor effect logo and one of those shiny gummy-coloured “beta” callouts.

The way that we figure out how to create healthier communities is by trying new things in technology and antitrust policy. Ironically, I anticipate many of these new ideas will more closely model those from a previous generation, but necessarily updated for billions of people connected through the internet. It does seem unlikely to me that all of those people will be connected through the same platform — a new Facebook, for example — and more likely that they will be connected through the same protocols.

Julia Kollewe, the Guardian:

Uber has ditched efforts to develop its own self-driving car with the multibillion-dollar sale of its driverless car division to a Silicon Valley startup.

The ride-hailing company is selling the business, known as Advanced Technologies Group (ATG), for a reported $4bn (£3bn) to Aurora, a start-up that makes sensors and software for autonomous vehicles and is backed by Amazon and Sequoia Capital.

As part of the deal, Uber is investing $400m in Aurora in return for a minority stake of 26%. Uber’s chief executive, Dara Khosrowshahi, will join Aurora’s board. The deal will also give Aurora access to a carmaker, Japan’s Toyota, which has invested in ATG. ATG has grown to a venture with 1,200 employees.

This sale does not, on its own, mean that Uber will never have autonomous vehicles — though I think that outcome is unlikely for lots of reasons — but offloading the project to a startup surely does not indicate that it is going well. Uber’s autonomous transportation effort was, according to its S-1 public offering document, key to its long-term success:

If we fail to develop and successfully commercialize autonomous vehicle technologies or fail to develop such technologies before our competitors, or if such technologies fail to perform as expected, are inferior to those of our competitors, or are perceived as less safe than those of our competitors or non-autonomous vehicles, our financial performance and prospects would be adversely impacted.

So that was yesterday’s shake-up; here is today’s, from Mark Gurman at Bloomberg:

Apple Inc. has moved its self-driving car unit under the leadership of top artificial intelligence executive John Giannandrea, who will oversee the company’s continued work on an autonomous system that could eventually be used in its own car.

The project, known as Titan, is run day-to-day by Doug Field. His team of hundreds of engineers have moved to Giannandrea’s artificial intelligence and machine-learning group, according to people familiar with the change. An Apple spokesman declined to comment.

Previously, Field reported to Bob Mansfield, Apple’s former senior vice president of hardware engineering. Mansfield has now fully retired from Apple, leading to Giannandrea taking over.

Mansfield, you may recall, retired in June 2012 only to be hired back just a few months later to oversee a new generically-titled Technologies group. Mansfield only committed to remaining at Apple through 2014 but stuck around: in 2016, Daisuke Wakabayashi of the Wall Street Journal reported that Mansfield was moved to run the autonomous car project:

Bob Mansfield had stepped back from a day-to-day role at the company a few years ago, after leading the hardware engineering development of products including the MacBook Air laptop computer, the iMac desktop computer, and the iPad tablet. Apple now has Mr. Mansfield running the company’s secret autonomous, electric-vehicle initiative, code-named Project Titan, the people said.

[…]

Mr. Mansfield’s reassignment brings a leader with a record of delivering challenging technical products to market to an effort that has been mired in problems, according to people familiar with the project.

Mansfield’s other major project in that time was the Apple Watch. I wonder if this is an indication that much of the hardware work is done and turning it over to Giannandrea is the remaining step in solving the “mother of all AI problems”.

Truly autonomous vehicles are, I continue to believe, a pipe dream for this generation. But if that is not the case — if self-driving cars really are within reach — I struggle to believe that the company that brought us Siri is capable of cracking this in the not-too-distant future. I would love to be proved wrong, but I have also wanted Siri to work as expected for nearly a decade now.

Yimou Lee and Ben Blanchard, Reuters:

Foxconn said on Thursday its investment plan did not depend on who the U.S. president was. It was, however, exploring the option of building a new production line there.

“We continue to push forward in Wisconsin as planned, but the product has to be in line with the market demand … there could be a change in what product we make there,” Chairman Liu Young-way said at an investor conference.

Possible new products include those related to servers, telecommunications and artificial intelligence, he later told reporters.

Like some sort of AI 8K+5G combination, perhaps? I am dying to know what, precisely, Foxconn believes that is.

Yesterday, Tim Bradshaw and Patrick McGee of the Financial Times reported that Apple is ostensibly building a rival to Google’s search engine. You can find a syndicated copy of the article at Ars Technica. It left me scratching my head because it undermines its own premise on two fronts: it claims that Apple is building a true rival to Google’s search engine, and it implies that Apple does not already have a search engine. The first claim does not seem to be substantiated, and the second is contradicted by the article’s own reporting.

Let’s start with the headline:

Apple Develops Alternative to Google Search

“Develops” is a curious and ambiguous choice of word. It leaves the impression that Apple is either currently working on a true Google Search competitor, or that it has already built one. I am not sure which is the case; let’s find out. Here’s the lede:

Apple is stepping up efforts to develop its own search technology as US antitrust authorities threaten multibillion-dollar payments that Google makes to secure prime placement of its engine on the iPhone.

That indicates, to me, that this search engine is something new or more directly opposing Google’s efforts. But it is followed by this paragraph:

In a little-noticed change to the latest version of the iPhone operating system, iOS 14, Apple has begun to show its own search results and link directly to websites when users type queries from its home screen.

This seems to refer to Siri web suggestions that used to only display within the Safari address bar but are now in Spotlight. As far as I can tell, these are exactly the same suggestions but surfaced in a different place.

There are also keyword search suggestions in Spotlight. But tapping on any of those will boot you into the search engine of your choice — whichever you set in Safari preferences.

Both certainly point to Apple shipping a search engine today. It may not be a website with a list of links based on a query, but Google’s search engine is increasingly unlike that, too. So I am left with the impression that this is a service that currently exists, but then the article posits that it is merely a warm-up act:

That web search capability marks an important advance in Apple’s in-house development and could form the foundation of a fuller attack on Google, according to several people in the industry.

Here is where things become more speculative. Bradshaw and McGee make no reference to having any sources at Apple, only quotes from a handful of people in adjacent businesses. Maybe they have background information from people who are familiar with Apple’s efforts, but nothing is cited in this article. The claim that Apple is, perhaps, working on a direct competitor to Google’s web search engine appears to be nothing more than speculation about what Apple could do from people who believe that it is something Apple is doing. That position seems to be predicated on regulatory pressures and recent hires:

Two and a half years ago, Apple poached Google’s head of search, John Giannandrea. The hire was ostensibly to boost its artificial intelligence capabilities and its Siri virtual assistant, but also brought eight years of experience running the world’s most popular search engine.

[…]

“They [Apple] have a credible team that I think has the experience and the depth, if they wanted to, to build a more general search engine,” said Bill Coughran, Google’s former engineering chief, who is now a partner at Silicon Valley investor Sequoia Capital.

Apple’s interest in a search engine seems to be a regular rumour, but now that its contract with Google is attracting attention in the United States and United Kingdom, perhaps there is more substance this time around than in previous years. That raises more questions for me from an antitrust perspective: for example, would regulators who questioned the prominence of Siri on Apple’s devices find it equally dubious for the company to have its own search engine presumably set as the default?

Whatever the case, I am not sure this Financial Times piece sheds light on Apple’s path forward. The only substantive fact in this article is that Apple has expanded Safari’s Siri suggestions to Spotlight. Everything else appears to be speculative.

Al Root, writing in a Barron’s article confidently titled “Don’t Bet Against Musk”:

Now Musk plans to crack “level 5 autonomous driving” in the coming year. That’s the message Musk delivered Thursday at the World Artificial Intelligence Conference in China.

The society of automotive engineers defines five levels of autonomous driving from Level 1 to Level 5. A Level 1 feature is something like parking assistance. Level 5 means the driver — if you can still call the person behind the steering wheel a driver — doesn’t have to do anything in all driving conditions.

This is the same tune Musk has been playing for five years: in 2015, he said that fully autonomous vehicles would be on the road within two years. Credulous reporting everywhere from financial publications like Barron’s, above, and Forbes, to industry sites like Electrek — and even mainstream publications like the BBC — has, for years, helped cement Musk’s claim that Level 5 autonomy is just around the corner.

Tesla, of course, continues to market its driver aids as “Autopilot” while insinuating that a car bought from the company today has “Full Self-Driving Capability”. But these are exaggerations that I feel go beyond puffery. The features of the “Autopilot” system are exactly the same as the lane-keeping and radar-guided cruise control that have been available across the automotive industry for years now. For comparison, its namesake technology on aircraft can follow a predetermined flight path with speed and altitude adjustments. The things that make it “full[y] self-driv[ing]”, meanwhile, have fine print that clearly states that these features do not make the car drive itself:

The currently enabled features require active driver supervision and do not make the vehicle autonomous.

Jason Torchinsky, Jalopnik:

I’m pretty comfortable saying that, no, Tesla will not have full Level 5 autonomy solved by the end of the year, especially not with current hardware on their fleet of cars. I do not think we will see a software solution to L5 downloaded to Teslas any time soon.

I don’t really understand why Elon is pushing this narrative, either. Tesla has been saying they’re just about to release “Full Self-Driving” for years now, and they haven’t.

Elon’s remarks both suggest he’s aware of the scale of the problem, yet he seems to trivialize the issues or ignore them, anyway. While I think it’s possible that enough of the issues for Level 5 can be engineered away to be viable, we’re not really close yet.

You said it yourself, Elon: the world is complex and weird. You need to respect that, and be honest about the challenges of truly full autonomy.

I don’t think it is surprising that Musk is not honest about the likelihood of achieving full autonomy soon. I do expect the press to scrutinize such claims and correctly contextualize them, as Torchinsky has done here.

John Oliver:

This technology raises troubling philosophical questions about personal freedom, and, right now, there are also some very immediate practical issues. Even though it is currently being used, this technology is still very much a work in progress, and its error rate is particularly high when it comes to matching faces in real time. In fact, in the U.K., when human rights researchers watched police put one such system to the test, they found that only eight out of 42 matches were ‘verifiably correct’ — and that’s even before we get into the fact that these systems can have some worrying blind spots, as one researcher found out when testing numerous algorithms, including Amazon’s own Rekognition system:

At first glance, MIT researcher Joy Buolamwini says that the overall accuracy rate was high, even though all companies better detected men’s faces than women’s. But the error rate grew as she dug deeper.

“Lighter male faces were the easiest to guess the gender on, and darker female faces were the hardest.”

One system couldn’t even detect whether she had a face. The others misidentified her gender. White guy? No problem.

Yeah: “white guy? No problem” which, yes, is the unofficial motto of history, but it’s not like what we needed right now was to find a way for computers to exacerbate the problem. And it gets worse. In one test, Amazon’s system even failed on the face of Oprah Winfrey, someone so recognizable her magazine only had to type the first letter of her name and your brain autocompleted the rest.

Oliver covers the broad range of technologies that fit under the umbrella of “facial recognition” — everything from Face ID to police databases and Clearview AI.

Today, the RCMP and Clearview suspended their contract; the RCMP was, apparently, Clearview’s last remaining client in Canada.

Such a wide range of technologies raises complex questions about regulation. Sweeping bans could prohibit the use of something like Face ID or Windows Hello, but even restricting use based on consent would make it difficult to build something like the People library in Photos. Here’s how Apple describes it:

Face recognition and scene and object detection are done completely on your device rather than in the cloud. So Apple doesn’t know what’s in your photos. And apps can access your photos only with your permission.

Apple even put together a lengthy white paper (PDF) that, in part, describes how iOS and MacOS keep various features in Photos private to the user. However, in this case, the question is not about the privacy of one’s own data, but whether it is fair for someone to use facial recognition privately. It is a question of agency. Is it fair for anyone to have their face used, without their permission, to automatically associate pictures of themselves? Perhaps it is, but is it then fair to do so more publicly, as Facebook does? What is a comfortable line?

I don’t mean that as a rhetorical question. As Oliver often says, “the answer to the question of ‘where do we draw the line?’ is somewhere”, and I think there is a “somewhere” in the case of facial recognition. But the legislation to define it will need to be very nuanced.

Rebecca Heilweil, Vox:

So it seems that as facial recognition systems become more ambitious — as their databases become larger and their algorithms are tasked with more difficult jobs — they become more problematic. Matthew Guariglia, a policy analyst at the Electronic Frontier Foundation, told Recode that facial recognition needs to be evaluated on a “sliding scale of harm.”

When the technology is used in your phone, it spends most of its time in your pocket, not scanning through public spaces. “A Ring camera, on the other hand, isn’t deployed just for the purpose of looking at your face,” Guariglia said. “If facial recognition was enabled, that’d be looking at the faces of every pedestrian who walked by and could be identifying them.”

[…]

A single law regulating facial recognition technology might not be enough. Researchers from the Algorithmic Justice League, an organization that focuses on equitable artificial intelligence, have called for a more comprehensive approach. They argue that the technology should be regulated and controlled by a federal office. In a May proposal, the researchers outlined how the Food and Drug Administration could serve as a model for a new agency that would be able to adapt to a wide range of government, corporate, and private uses of the technology. This could provide a regulatory framework to protect consumers from what they buy, including devices that come with facial recognition.

This is such a complex field of technology that it will take a while to establish ground rules and expectations. Something like Clearview AI’s system should not be allowed; it is a heinous abuse of publicly-visible imagery. Real-time recognition is also extremely creepy and I believe should also be prohibited.

There are further complications: though the U.S. may be attempting to sort out its comfort level, those boundaries have been breached elsewhere.

Craig Silverman, BuzzFeed News:

Sensor Tower, a popular analytics platform for tech developers and investors, has been secretly collecting data from millions of people who have installed popular VPN and ad-blocking apps for Android and iOS, a BuzzFeed News investigation has found. These apps, which don’t disclose their connection to the company or reveal that they feed user data to Sensor Tower’s products, have more than 35 million downloads.

Since 2015, Sensor Tower has owned at least 20 Android and iOS apps. Four of these — Free and Unlimited VPN, Luna VPN, Mobile Data, and Adblock Focus — were recently available in the Google Play store. Adblock Focus and Luna VPN were in Apple’s App Store. Apple removed Adblock Focus and Google removed Mobile Data after being contacted by BuzzFeed News. The companies said they continue to investigate.

Once installed, Sensor Tower’s apps prompt users to install a root certificate, a small file that lets its issuer access all traffic and data passing through a phone. The company told BuzzFeed News it only collects anonymized usage and analytics data, which is integrated into its products. Sensor Tower’s app intelligence platform is used by developers, venture capitalists, publishers, and others to track the popularity, usage trends, and revenue of apps.

This is comparable to Facebook’s use of its Onavo VPN to spy on users’ app activity.

Joseph Cox and Jason Koebler, Vice:

Banjo, an artificial intelligence firm that works with police used a shadow company to create an array of Android and iOS apps that looked innocuous but were specifically designed to secretly scrape social media, Motherboard has learned.

[…]

Banjo did not have that sort of data access. So it created Pink Unicorn Labs, which one former employee described as a “shadow company,” that developed apps to harvest social media data.

[…]

But once users logged into the innocent looking apps via a social network OAuth provider, Banjo saved the login credentials, according to two former employees and an expert analysis of the apps performed by Kasra Rahjerdi, who has been an Android developer since the original Android project was launched. Banjo then scraped social media content, those two former employees added. The app also contained nonstandard code written by Pink Unicorn Labs: “The biggest red flag for me is that all the code related to grabbing Facebook friends, photos, location history, etc. is directly from their own codebase,” Rahjerdi said.

These are entirely separate events and companies, but the reports overlap in their descriptions of what can only be described as a worrying indifference to ethical norms. If the people running these companies have to cauterize their soul before work each day, perhaps they should treat that as a yelping klaxon that something is wildly wrong.

I expect to see more reports like these in the coming years, as the country where similar companies are headquartered — and whose laws, consequently, typically govern users’ rights by contract — has yet to enact and enforce meaningful privacy protections.

Will Oremus, writing for Medium’s OneZero publication:

But Amazon’s public image as a cheerfully dependable “everything store” belies the vast and secretive behemoth that it has become  —  and how the products it’s building today could erode our privacy not just online but also in the physical world. Even as rival tech companies reassess their data practices, rethink their responsibilities, and call for new regulations, Amazon is doubling down on surveillance devices, disclaiming responsibility for how its technology is used, and dismissing concerns raised by academics, the media, politicians, and its own employees.

[…]

While the outcome of that case remains to be seen, the complaint represents just the tip of the iceberg. The Amazon of today runs enormous swaths of the public internet; uses artificial intelligence to crunch data for many of the world’s largest companies and institutions, including the CIA; tracks user shopping habits to build detailed profiles for targeted advertising; and sells cloud-connected, A.I.-powered speakers and screens for our homes. It acquired a company that makes mesh Wi-Fi routers that have access to our private Internet traffic. Through Amazon’s subsidiary Ring, it is putting surveillance cameras on millions of people’s doorbells and inviting them to share the footage with their neighbors and the police on a crime-focused social network. It is selling face recognition systems to police and private companies.

I am shocked at how unregulated markets tend to produce monopolies operating in unethical but profitable business categories with impunity.

M. R. O’Connor, writing in the New Yorker, explored what it means to be a driver in an era when that role is shifting (the Roy here is Alex Roy, whom you may know for his Polizei 144 antics or for driving across the United States in just over 31 hours):

Finally, Roy points out that many of the problems autonomous cars promise to solve also have simpler, non-technological solutions. (This is true, of course, only if one assumes that driving isn’t a problem in itself.) To reduce traffic, governments can invest in mass-transit and road infrastructure. To diminish pollution, they can build bike lanes and encourage the adoption of electric cars. In Roy’s opinion, the best way to make driving safer has nothing to do with technology: it’s to raise licensing standards and improve driver education. Over lunch — a Niçoise salad — Roy argued that our fixation on driverless cars flows from our civic laziness. “It’s easier to imagine that technology can solve a problem that education or regulation could also fix,” he said. In place of the driverless utopia that technologists often picture, he asked me to consider another possibility: a congested urban hellscape in which autonomous vehicles are subsidized by companies that pump them full of advertising; in exchange for free rides, companies might require you to pass by particular stores or watch commercial messages displayed on the vehicles’ windows. (A future very much like this was recently imagined by T. Coraghessan Boyle, in his short story “Asleep at the Wheel.”) In such a world, Roy said, “The joy of the ride is taken away.”

[…]

Perhaps it was inevitable that a nascent right-to-drive movement would spring up in America, where — as fervent gun-rights advocates and anti-vaccinators have shown — we seem intent on preserving freedom of choice even if it kills us. “People outside the United States look at it with bewilderment,” Toby Walsh, an Australian artificial-intelligence researcher, told me. In his book “Machines That Think: The Future of Artificial Intelligence,” from 2018, Walsh predicts that, by 2050, autonomous vehicles will be so safe that we won’t be allowed to drive our own cars. Unlike Roy, he believes that we will neither notice nor care. In Walsh’s view, a constitutional amendment protecting the right to drive would be as misguided as the Second Amendment. “We will look back on this time in fifty years and think it was the Wild West,” he went on. “The only challenge is, how do we get to zero road deaths? We’re only going to get there by removing the human.”

I would love to hear from readers around the world whether Walsh’s perspective holds. Is apprehension about self-driving cars — or the desire for a human right to control autonomous vehicles — a mostly American stance? For what it’s worth, it was a software control that could not easily be overridden that brought down two 737 Max airplanes.

Also, I thought this was an insightful observation in the context of platform freedom, obfuscated code, and increasingly locked-down hardware:

In his book “Shop Class as Soulcraft: An Inquiry Into the Value of Work,” from 2009, the political philosopher and motorcycle mechanic Matthew B. Crawford argues that manual competence — our ability to repair the machines and devices in our lives—is a kind of ethical practice. Knowing how to fix things ourselves creates opportunities for meaningful work and individual agency; it allows us to grasp more deeply the built world around us. The mass-market economy, Crawford writes, produces devices that are practically impenetrable. If we try to repair our microwaves or printers, we’ll quickly be discouraged by their complexity; many cars produced today lack even dipsticks to check their oil levels. Driving the Tesla Model 3 has been compared to using a giant iPhone: instead of controlling the car directly, one seems to pilot it by means of a user interface.

This is a great essay.

Michelle Lou and Saeed Ahmed, CNN:

A global network of telescopes known as the Event Horizon Telescope project collected millions of gigabytes of data about M87 using a technique known as interferometry. However, there were still large gaps in the data that needed to be filled in.

That’s where [Katie Bouman’s] algorithm — along with several others — came in. Using imaging algorithms like Bouman’s, researchers created three scripted code pipelines to piece together the picture.

They took the “sparse and noisy data” that the telescopes spit out and tried to make an image. For the past few years, Bouman directed the verification of images and selection of imaging parameters.

It’s worth reading the 2016 press release announcing the development of this algorithm for a great explanation of how it works.

Steve Kovach, CNBC:

As iPhone sales continue to sink, Apple has made several key moves over the last year as it prepares new offerings to juice growth elsewhere in the business.

If you’ve been listening to CEO Tim Cook’s comments on earnings calls and in interviews recently, none of this should come as a surprise. The company has stopped reporting iPhone unit sales figures, and instead talks more about its growing base of active devices, which the company says can be used to squeeze out more revenue through its digital services like Apple Music, App Store sales and extra iCloud storage.

But it’s not just about those subscription services. Apple has made several shifts in recent months that signal it’s preparing to move beyond the iPhone in other ways, such as artificial intelligence, the growing smart home market and digital health monitoring.

A framing device I’ve seen a lot amongst tech analysts and journalists since Apple revised its first-quarter earnings forecast is the idea that the company’s increased push into services and other parts of its business is correlated with — or even caused by — lower iPhone sales. I think this is a myopic view of the company’s products.

Let’s think about this in the inverse: I don’t see anyone seriously making the argument that Apple would not have increased its investments in services and machine learning if iPhone sales had continued to grow.

More to the point, many of these service offerings have been rumoured for a long time. Steve Jobs was asked about an Apple Music-like service in 2007. Apple’s apparently-forthcoming Netflix competitor has been rumoured for years. Even its long-rumoured car project was reported as being approved around the same time that the iPhone 6 was launched.

These projects all take lots of time; they are not a result of less-dramatic iPhone sales figures. Apple has been highlighting its subscription services more for a few years now and, in that time, it had its biggest-ever quarter, largely on the back of iPhone revenue. Based on all of this, the most likely reason Apple is rumoured to be on the cusp of launching new services is simply that they are ready now. Is this release time frame any different than it would have been if the most recent holiday quarter had surpassed expectations instead of falling short of Apple’s forecast? I don’t think there’s any evidence to support that.