Month: November 2021

Tiffany Hsu and Marc Tracy, New York Times:

Marc Bernier, a talk radio host in Daytona Beach, Fla., whose show is available for download or streaming on iHeart’s and Apple’s digital platforms, was among the talk radio hosts who died of Covid-19 complications after expressing anti-vaccination views on their programs. The deaths made national news and set off a cascade of commentary on social media. What drew less attention was the industry that helped give them an audience.

[…]

Jimmy DeYoung Sr., whose program was available on iHeart, Apple and Spotify, died of Covid-19 complications after making his show a venue for false or misleading statements about vaccines. One of his frequent guests was Sam Rohrer, a former Pennsylvania state representative who likened the promotion of Covid-19 vaccines to Nazi tactics and made a sweeping false statement. “This is not a vaccine, by definition,” Mr. Rohrer said on an April episode. “It is a permanent altering of my immune system, which God created to handle the kinds of things that are coming that way.” Mr. DeYoung thanked his guest for his “insight.” Mr. DeYoung died four months later.

Hsu and Tracy report that iHeart, Apple, and Spotify have rules for podcasters that, if enforced, would require the removal of lies like the ones broadcast by these hosts. But the FCC, which regulates the public airwaves, cannot intervene: as a government agency, policing this speech would violate the First Amendment. The message of this article seems to be that podcast directories ought to do a better job of moderating their platforms because they are able to.

Let me set aside the technical requirements of doing so, and focus on the theory alone, because I think it strays into uncomfortable territory. For one, many of the hosts profiled in this piece are not podcast hosts — they are syndicated radio hosts who happen to also distribute their shows in podcast form. The biggest problem with these jackasses is how they exploit their platform to miseducate audiences. This has been an AM radio trope for decades. Is anyone surprised the same people continue to spread dumb contrarianism?

My view of podcasts differs subtly from my enthusiasm for moderation by Facebook or Twitter or YouTube of users’ posts. Social media posts are mostly written by a public that is ill prepared for celebrity. Many of the most popular podcasts, on the other hand, are from professional broadcasters who should be tempered by editors and management. I think it is funny that iHeart has a theoretically higher standard for what its hosts say in podcast form compared to radio. Apple and Spotify, meanwhile, have no mandate to carry these shows as podcasts, but it seems ridiculous to leave it to either company to moderate what radio hosts say.

Daniel Aleksandersen, developer of EdgeDeflector, a utility that opens microsoft-edge:// links as standard https:// links:

This brings us back to today. Windows 10 and 11 no longer care about the default web browser setting. Microsoft even removed the default web browser setting from Windows 11. Instead of a single setting for the default web browser, customers must set individual “link associations” for the http:// and https:// protocols; as well as file associations for the .html file type. This is a huge jump in complexity compared to the previous design. It’s clearly a user-hostile move that sees Windows compromise its own product usability in order to make it more difficult to use competing products.

According to a guide by Barbara Krasnoff, of the Verge, all file extensions that you might associate with a web browser — .htm, .html, .shtml — have individual default preferences. Presumably, this is for all of the exactly three people who wish to open each file format in a different web browser by default and not because Microsoft, the world’s most valuable publicly-traded company, cannot shake its anti-competitive habits.

Furthermore, Microsoft has added first-party experiences like News and Interests in Windows 10 and Widgets in Windows 11. It gave these features prominent positions on the taskbar. These “web experiences”, as Microsoft calls them, feature links to online news, weather, and other resources. Search result links in the Start menu and links sent to the device from a paired Samsung or Android device are also affected.

However, these features don’t use regular web links (https://). Instead, they use microsoft-edge:// links that only work with the company’s web browser. These links are also featured in other Microsoft apps and are found around the Windows shell. These special links only exist to force users into using Microsoft Edge. They serve no other purpose than to circumvent the user’s default browser preference to promote a Microsoft product.

In the latest build of Windows 11, Microsoft has blocked the methods used by EdgeDeflector and web browsers like Firefox. It is almost admirable how transparently anti-competitive Microsoft is being because, despite being the world’s most valuable company, it faces little scrutiny from regulators. Sucks for the users, but who gives a damn about them?
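For what it is worth, the redirection EdgeDeflector performed is conceptually simple: strip the proprietary scheme and recover the ordinary web URL inside it. Here is a minimal Python sketch of the idea — a hypothetical simplification, not the utility’s actual implementation, and the two link shapes handled below are assumptions based on how these links have been described:

```python
from typing import Optional
from urllib.parse import parse_qs, unquote

def deflect(link: str) -> Optional[str]:
    """Rewrite a microsoft-edge: link into the plain web URL it wraps.

    Handles two assumed shapes:
      microsoft-edge:https://example.com/
      microsoft-edge:?launchContext1=...&url=https%3A%2F%2Fexample.com%2F
    Returns None if no ordinary http(s) URL can be recovered.
    """
    prefix = "microsoft-edge:"
    if not link.startswith(prefix):
        return None
    rest = link[len(prefix):].lstrip("/")
    if rest.startswith("?"):
        # The real URL is percent-encoded inside a query parameter.
        params = parse_qs(rest[1:])
        urls = params.get("url", [])
        rest = urls[0] if urls else ""
    else:
        rest = unquote(rest)
    return rest if rest.startswith(("http://", "https://")) else None
```

A protocol-handler utility would register itself for the microsoft-edge scheme, run something like this on each incoming link, and hand the result to the user’s default browser — which is exactly the hand-off the latest Windows 11 build blocks.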

Update: On November 15, Microsoft confirmed to the Verge that this was a deliberate change to block utilities like EdgeDeflector.

Louise Matsakis, Wired:

In May, TikTok began rolling out another feature that’s less standard. By default, it now suggests your account “to people who sent links to you or opened links you sent to them,” even on other apps. That means if you share a random TikTok video with a stranger on a dating site, it will show them your TikTok profile without any warning, as long as they click on the link. In return, you will receive a notification that they watched it, prompting you to follow their account.

“Now when I share videos with people, I often get a notification telling me they’ve watched the video I’ve shared,” Eugene Wei, a former tech executive who previously published a series of viral essays about TikTok, wrote on his blog last month. “Often these notifications are the only way I know they even have a TikTok account and what their username is.”

Matsakis reports that TikTok also uses shared URLs to establish personal network connections and suggest accounts users can follow. Yet another violation of users’ trust by a big social media company that will have virtually no consequences for it or the managers who decided to make this change.

Hannah Murphy, Financial Times:

The man leading Facebook’s push into the metaverse has told employees he wants its virtual worlds to have “almost Disney levels of safety”, but also acknowledged that moderating how users speak and behave “at any meaningful scale is practically impossible”.

Andrew Bosworth, who has been steering a $10 billion-a-year budget to build “the metaverse”, warned that virtual reality can often be a “toxic environment” especially for women and minorities, in an internal memo from March seen by the Financial Times.

He added that this would be an “existential threat” to Facebook’s ambitious plans if it turned off “mainstream customers from the medium entirely”.

Perhaps the more personal qualities of augmented and virtual reality may pressure users into adopting more collegial and humane behaviour — but I have my doubts. This is debuting after an era where communications platforms like Facebook seemed to use 4chan as inspiration for their lenient community policies. It sure seems like in-person discussions are often modelled after more hostile online interactions. What is the interaction environment like in the uncanny valley?

It is unsettling to see how eager today’s technology and platform giants are to embrace mixed-reality interfaces for the future, while the software they ship today is buggy and the communications platforms implicitly encourage inflammatory discussions. Why should we trust any of them to get this right?

Kate Lindsay, Embedded:

I wrote about all this over at Study Hall, but it bears repeating here: the internet, it turns out, is not forever. It’s on more of like a 10-year cycle. It’s constantly upgrading and migrating in ways that are incompatible with past content, leaving broken links and error pages in its wake. In other instances, the sites simply shutter, or become so layered over that finding your own footprint is impossible — I have searched “Kate Lindsay Myspace” every which way and have concluded that my content from that platform must simply be lost to time, ingested by the Shai-Hulud of the internet.

It makes me wonder what will happen to a generation of materials shared over the internet but not on any website. The Internet Archive Wayback Machine is kind of a misnomer since it is only an archive of websites — and, even then, not all of them. But what about all of the stuff that only exists in apps? It is going to be a weird future which, I bet, will make Lindsay’s post feel almost quaint.

In separate recent appearances, both Craig Federighi and Tim Cook have voiced their opposition to sideloading. Both executives presumably timed their statements ahead of an E.U. leaders’ summit tomorrow which will, in part, include discussions of regulations impacting the App Store model of the iPhone and iPad.

It is notable how Apple executives are appearing publicly in an attempt to sway this regulation. It should not take guts for a company to put a face and name to the statements made on behalf of it but, well, Apple and many tech companies struggle with integrity and candour. It is nice to be able to hear what specific executives believe about policies like these.

Michael Tsai has a great roundup of commentary. The most common retort is, obviously, that the Mac allows applications from places other than a moderated App Store, and it is pretty secure. Why should non-Mac products be so much more constrained? Is there really that big a difference in terms of security and privacy?

Being the technologically confident person that I am, I find it hard to see the downside of a Gatekeeper-like approach on iOS. It is something I would enable immediately, and I am sure there are many developers who would happily reclaim their businesses.

At the same time, the number of people using non-Mac devices from Apple is many, many times greater than the Mac user base. It is also possible a smartphone differs from a personal computer in ways not clearly articulated. But those are not the arguments these Apple executives are making. They are claiming that people actively choose the iPhone over an Android phone because it is more locked down.

Joe Rossignol of MacRumors reporting on Federighi’s comments:

Federighi said that while the Digital Markets Act has an “admirable mission” to promote competition and ensure that users have choice, he believes that the provision requiring sideloading would be a “step backwards” that takes away a user’s choice of a “more secure platform” in iOS compared to Android.

Cook:

[…] That choice exists when you go into the carrier shop. If that is important to you, then you should buy an Android phone. From our point of view, it would be like if I were an automobile manufacturer telling [a customer] not to put airbags and seat belts in the car. He would never think about doing this in today’s time. It’s just too risky to do that […]

Apparently, over 40% of Americans want the smartphone equivalent of a car without seatbelts or airbags. That is clearly absurd, which makes me wonder whether Apple’s arguments hold up on their own merits.

For comparison, back when no iPhone had a display bigger than 3.5 inches — or, later, 4 inches — it was common to see people pointing to the iPhone’s display size as an advantage compared to larger-screened Android phones. The theory was that people bought iPhones, in part, because they were smaller. After the monster success of the iPhone 6 and 6 Plus — likely the most popular smartphones ever in terms of unit sales — it was plainly obvious people had been buying the iPhone despite its small screen, something Apple’s buyer surveys also revealed.

I think there is a market for devices that are very secure, and Apple generally delivers on that. But I wonder if the public views the App Store model as a component of that security architecture. If a future iOS update enabled some Gatekeeper-like features, would people hesitate to upgrade? I am not sure I accept Apple’s argument that it would put gaping holes in the iOS security model, but if we accept that sideloading marginally increases some security risks, would that affect the number of people buying iPhones?

I think people surely want a more secure phone over a less secure phone, but what they also want are acceptable compromises. Apple is claiming that it has made the right compromise since 2008; the E.U. is arguing this gives Apple and Google too much power for how ubiquitous smartphones are today. I do not know that iPhone users would generally put a lack of sideloading at the top of their list of complaints, but I do think there is a deserved skepticism of large tech companies that Apple must contend with. Loosening the reins just a little may ease that compromise in the right direction.

A multi-bylined report from Rest of World:

“Very little of what I have read about the Papers comes as a surprise,” said Rosemary Ajayi, a lead researcher at Digital Africa Research Lab. “It is, however, vindicating because a lot of people from developing nations have been working behind the scenes for years, investigating and flagging systemic failures, only to be gaslit by the platforms.”

[…]

In Africa, civil society figures worried that governments could use the Facebook Papers as an opportunity to limit freedom online, rather than address misinformation.

[…]

“Campaigns like #DeleteFacebook are highly unlikely to pick up steam on the continent,” said Ajayi of Digital Africa Research Lab. “How do you delete what many consider to be the internet?”

This article reflects an upsetting mix of understandable pessimism and deeper concerns. This product that the U.S. has exported and artificially advantaged in many developing regions is being used as a vehicle for authoritarianism. Internal discussions leaked from Facebook are only confirmation.

I still think these documents must be shared around the world. It is difficult for reporters in the United States or western Europe to accurately assess the importance of what they are reading about countries in Central and South America, Africa, and across Asia.

After iFixit explored the components of the new iPhone 13 and iPhone 13 Pro models, it tried making some standard repairs. It found that simply swapping one phone’s screen with the genuine display from another caused Face ID to stop working. This is due to a new serial number chip within the screen, which can be logged by software at Apple-authorized repair shops and the company’s own stores, but which is not available to unaffiliated repairers or device owners.

That sucks, since iPhone displays get replaced by independent shops all the time. Happily, the company tells the Verge that it will issue a software update permitting screen replacement.

What I found interesting is how this story has been framed. Kevin Purdy of iFixit:

It’s hard to believe, after years of repair-blocking issues with Touch ID, batteries, and cameras, that Apple’s latest iPhone part lock-out is accidental. As far as our engineers can tell, keeping Face ID working on the iPhone 13 after a screen swap should be easier than ever, since its scanner is wholly separate from the display. Technically, yes: Face ID failure could be a very specific hardware bug for one of the most commonly replaced components, one that somehow made it through testing, didn’t get fixed in a major software update, and just happens to lock out the kind of independent repair from which the company doesn’t profit.

More likely, though, is that this is a strategy, not an oversight. This situation makes AppleCare all but required for newer iPhones, unless you happen to know that your local repair shop is ready for the challenge. Or you simply plan to never drop your phone.

If today’s announcement from Apple pans out, I see Purdy’s speculation as more of the kind of fear mongering that makes it hard for me to trust iFixit and other right-to-repair advocates. Last year, Purdy hyped up problems with iPhone 12 camera swaps as “the end of the repairable iPhone”, but the problem was fixed in a January software update. As I wrote then, the problem was almost certainly not a deliberate tactic by Apple; instead, it reveals that the company prioritizes its own stores when considering device repairability.

The Verge chose to run this news today under the headline “Apple Backs Off of Breaking Face ID After DIY iPhone 13 Screen Replacements”. The “backs off” suggests this was a deliberate move by Apple to prevent independent shops’ repairs and it is reversing course, perhaps because of bad publicity. Again, this ascribes a motivation I do not believe you can see in available evidence. It is obviously a deliberate move for Apple to serialize displays and pair them to specific Face ID modules, but one could also assume this is because the displays and Face ID modules are now separate, and this pairing step is for security or calibration purposes.

Like last year, I feel compelled to mention that I support right-to-repair legislation. It makes complete sense for iPhone owners to get a new battery swapped or get their display fixed without having to bring their phone to an Apple Store or through an Apple-connected support channel. Given how essential the smartphone is, all of them should support easier repairs. Apple could do better and it should do better. Adequate repair legislation would require this software update to be released before the iPhone 13 could be offered for sale, for example, and ensure common repairs can be completed on new products. These are real concerns that can be addressed with care and sober thought.

Until something like that becomes reality, I think iFixit should ease off its doom and gloom narrative. Pointing out that iPhone 12 camera modules cannot be swapped, and the same for iPhone 13 displays, ought to be enough without inserting a more fictional narrative.

Sirin Kale, the Guardian:

On Street View, we have a panoptical view of the world and all the mysteries, non-sequiturs and idiocies that are part of everyday life. Here is Sherlock Holmes hailing a cab in Cambridge; a car submerged in a Michigan lake containing the body of a long-missing person; Mary Poppins waiting on the sidewalk at an amusement park; a caravan being stolen by a thief.

“I couldn’t believe it,” says David Soanes, a 56-year-old teacher from Linton, Derbyshire, and the owner of said caravan, which was stolen in June 2009. His son discovered the suspect on Street View and police were able to identify the man involved, although sadly this wasn’t sufficient evidence for a conviction. “I go back and look at it from time to time,” says Soanes, of the image of his former caravan mid-transfer to a new owner.

I acknowledge the privacy questions raised by capturing photography of the world’s roadsides from the perspective of a Subaru Impreza, but I find Street View’s benefits largely outweigh its costs. It is a remarkable invention, captured well here by Kale’s interview subjects.

John Paczkowski, Buzzfeed News:

Australia’s national privacy regulator has ordered controversial facial recognition company Clearview AI to destroy all images and facial templates belonging to individuals living in Australia, following a BuzzFeed News investigation.

On Wednesday, the Office of the Australian Information Commissioner (OAIC) said Clearview had violated Australians’ privacy by scraping their biometric information from the web and disclosing it via a facial recognition tool built on a vast database of photos scraped from Facebook, Instagram, LinkedIn, and other websites.

This sounds great, but it faces some of the same problems as removing Canadian faces. Does Clearview know which faces in its collection are Australian? In the Commissioner’s determination, Clearview told the office that it, in the words of the Commissioner, “collects images without regard to geography or source”. Presumably, it can tie some photos to people living in Australia. But does it reliably retain information about, say, an Australian living abroad? It seems like one of those edge cases that could affect about a million people.

Also, in that determination, I found it telling that Clearview “repeatedly asserted that it is not subject to the Privacy Act” largely because it is a company based in the U.S. and, under U.S. law, it is arguing its collection of biometric information from published images is legal. The U.S. is like a tax haven but for privacy abuses. (And taxes too.)

Amanda Mull, the Atlantic:

It sounds harmful and inefficient — all the box trucks and tractor trailers and cargo planes and container ships set in motion to deal with changed minds or misleading product descriptions, to say nothing of the physical waste of the products themselves, and the waste created to manufacture things that will never be used. That’s because it is harmful and inefficient. Retailers of all kinds have always had to deal with returns, but processing this much miscellaneous, maybe-used, maybe-useless stuff is an invention of the past 15 years of American consumerism. In a race to acquire new customers and retain them at any cost, retailers have taught shoppers to behave in ways that are bad for virtually all involved.

The supposedly efficient marketplace has produced a system where the cost of production has decreased so much — through deliberately seeking the lowest-wage factory workers and taking a hands-off approach to their safety — that it is trivial for some retailers to discard huge amounts of merchandise from returns and overstock. The incentives are all backwards.

Anja Karadeglija, National Post:

The Liberal government’s proposal to force online platforms to monitor all user content and take down posts they judge to be illegal “will result in the blocking of legitimate content,” says Google.

[…]

The Liberal government has promised to introduce legislation tackling online harms – defined as terrorist content, content that incites violence, hate speech, intimate images shared non-consensually and child sexual exploitation – within 100 days of Parliament’s return. The government outlined details of its proposal in July and asked for feedback, though it has refused to release the 423 submissions it received. Google is now the latest among a number of organizations and academics to make their own submissions public.

Interestingly, Google is not suggesting scrapping this bill, only modifying it to permit a more considered approach toward platform moderation. But I think repairing this mess will demand changes so fundamental that an entirely different bill would be needed.

I would love to see what is being suggested by other respondents but, as Karadeglija writes, the government has not released those records.

See Also: A summary of platform moderation legislation (PDF) proposals in various countries, as prepared by Reset for U.K. Parliament.

Taras Grescoe, Smithsonian Magazine:

While archaeologists have excavated concrete vats used for making garum from Tunisia to France, intact organic remains have proven harder to come by. A breakthrough occurred in 2009, when Italian researchers discovered six sealed dolia (large clay storage vessels) in a building that modern scholars have dubbed the Garum Shop at Pompeii. The eruption of Mount Vesuvius in A.D. 79 buried the building under several feet of ash, perfectly preserving a small factory just as it was salting down a late-summer catch of locally fished picarel to make liquamen.

Food technicians from the universities of Cádiz and Seville have analyzed the charred, powdered remains from Pompeii. Using that information, and guided by a liquamen recipe thought to have been written in the third century A.D. — it calls for heavily salted small fish to be fermented with dill, coriander, fennel and other dried herbs in a closed vessel for one week — the researchers produced what they claim is the first scientific recreation of the 2,000-year-old fish sauce.

Grescoe was able to make a Mason jar’s worth of garum, and a 2005 blog post gives a different recipe for recreating it. Fascinating stuff.

Rebecca Heilweil, Recode:

But Meta’s announcement comes with a couple of big caveats. While Meta says that facial recognition isn’t a feature on Instagram and its Portal devices, the company’s new commitment doesn’t apply to its metaverse products, Meta spokesperson Jason Grosse told Recode. In fact, Meta is already exploring ways to incorporate biometrics into its emerging metaverse business, which aims to build a virtual, internet-based simulation where people can interact as avatars. Meta is also keeping DeepFace, the sophisticated algorithm that powers its photo-tagging facial recognition feature.

Emily Baker-White, Buzzfeed News:

The fact is: Meta intends to collect unique, identifying information about its users’ faces. Last week, Facebook founder Mark Zuckerberg told Stratechery’s Ben Thompson that “one of the big new features” of Meta’s new Cambria headset “is around eye-tracking and face-tracking.” And while the platform has “turned off the service” that previously created facial profiles of Facebook users, the New York Times reported that the company is keeping the algorithm on which that service relied. A Meta spokesperson declined to answer questions from BuzzFeed News about how that algorithm remains in use today.

Meta may have shut down the facial recognition system on Facebook that raised so many concerns, but given that it intends to keep the algorithm that powered that system, there is no reason the company couldn’t “simply turn it on again later,” according to David Brody, senior counsel at the Lawyers’ Committee for Civil Rights Under Law.

Since so much of this vision is currently speculative, it may be tempting to write this off as a mongering load of fear, uncertainty, and doubt. But if you are at least a little bit intrigued by the direction Meta is taking — and I am — you should be similarly concerned about the privacy risks it creates. Just as it is probably not worth rushing out to buy a complete metaverse-ready setup today, it is not worth panicking about these concerns — but they are worth keeping an eye on.

Annia Ciezadlo, Wired:

The team that is running the consortium is making the same mistake that Facebook did: By excluding the non-Western world from a global discussion of its own human rights, they have been ensuring that those rights will continue to be violated. When authoritarian governments use Facebook to surveil, harass, and intimidate people, those people need to be at the table when those abuses are being documented. People from non-Western countries deserve a chance to report on those who have clearly been reporting on them. This is a moral imperative, but also a practical one. ​​If a consortium set up to expose Facebook’s global abuses treats institutions in the global south as irrelevant, second-string, or unworthy of equal access to the truth, how can it hold Facebook accountable for doing the same thing?

Ideally, the Facebook Papers — which are largely composed of internal discussions, and not necessarily final decisions — would be made freely available to anyone interested in their contents. But I understand the risks of doing so, and I hope that many more news outlets are granted access, especially those in developing nations where Meta’s products are exempt from metered mobile data.

Dan Moren, Six Colors:

Apple’s added a few features over the last couple years that help us cope with our current world situation, whether it be unlocking our iPhones with our Apple Watches or improvements to FaceTime. In iOS 15.1 last month, it rolled out the ability to store a digital version of your vaccine record in the Wallet app.

With more and more places requiring proof of vaccination, it seems like digital vaccine records would be the way to go — way better than trying to cram that huge card into your wallet. So I decided to give it a whirl.

Coincidentally, I just returned from a much-needed vacation and I had been meaning to write a similar article. So allow me to piggyback on Moren’s observations as I write about the experiences my partner and I had with our digital vaccine cards.

We arrived in Vancouver last Tuesday, October 26, which could not have been timed any better. On October 24, the British Columbian provincial government mandated full vaccination for patrons to be admitted at restaurants and bars. It would have been a little inconvenient if we had to pull up the PDF proof we had saved to our phones but, luckily, Apple released iOS 15.1 the day before we travelled, and the Alberta government began correctly signing records, both of which allowed us to add a vaccine card to Wallet.

That is the good news. The bad news is that QR codes for vaccine records from Alberta do not seem to be compatible with the scanners used in British Columbia. I do not know which party is at fault here, or if this is a temporary glitch in the early stages of this rollout. An App Store search for “QR vaccine scanner” reveals several province-specific verification apps, but plenty of others that seem to be more universal.
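For context, Canadian provincial proofs of vaccination generally use the SMART Health Card format: a QR code whose payload begins with shc:/ followed by pairs of digits, each pair decoding to one character of a signed JWS. Assuming both Alberta’s and B.C.’s codes follow that spec, the decoding step itself is trivial — the incompatibility presumably lies in which issuers each province’s scanner apps trust. A minimal sketch of that decoding step, for illustration only:

```python
def decode_shc_payload(shc: str) -> str:
    """Decode the numeric body of a `shc:/` QR payload into a JWS string.

    Per the SMART Health Cards spec, each two-digit pair is the
    character's ASCII code minus 45. This recovers the compact JWS;
    actual verification also requires checking the issuer's signature
    against a trusted key, which this sketch does not attempt.
    """
    prefix = "shc:/"
    if not shc.startswith(prefix):
        raise ValueError("not a SMART Health Card QR payload")
    digits = shc[len(prefix):]
    if len(digits) % 2 != 0 or not digits.isdigit():
        raise ValueError("payload must be an even number of digits")
    return "".join(
        chr(int(digits[i:i + 2]) + 45) for i in range(0, len(digits), 2)
    )
```

The JWS that falls out starts with the usual base64url-encoded header — which is why a scanner can read any province’s code, yet still reject it if the signing issuer is not on its trust list.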

In practice, this incompatibility meant explaining to restaurant greeters that our barcodes could not be scanned, and presenting our driving licenses to prove our identity. We did not encounter any problems and every place we visited happily accepted this as proof of vaccination.

Is this a fraud-proof system? Probably not. In an ideal world, a record that cannot be verified should be assumed to be fraudulent. But there are enough edge cases that such a system might be impractical, especially since proving vaccination status is a temporary measure that allows a return to near-normalcy for most people and encourages holdouts to get vaccinated. It is not a permanent measure that necessarily needs a permanent solution. Please spare me any conspiracy theories.

A return to near-normalcy is decidedly what it engendered. When we sat down, we could take off our masks and enjoy the company of those around us knowing that the likelihood of catching or spreading COVID-19 was reduced to an exception rather than a rule. Waitstaff continued to wear masks, which was the only thing to break the illusion. I worry about their ongoing exposure, and hope that these measures are enough to ensure their safety.

We flew back just a couple days after the federal government began requiring vaccinations aboard aircraft. This is where we again encountered some fracturing in the system. As of writing, Alberta is just one of two provinces without compatibility with the nationwide proof of vaccination. Right now, the provincial proof is still being accepted. However, we were not asked for any proof to board our flight; we were only asked to confirm our health status, not vaccinations, when checking in.

Moren:

This does, however, raise a second obstacle: uncertainty. I haven’t tried to use this digital vaccine card anywhere yet. Because even though the record is verifiable using a freely available app, it’s unclear which places are actually going to be checking digital records. […]

Of course, this isn’t all on Apple — after all, people on other platforms will surely have digital vaccine readers, and all those entities that want to check people’s vaccination status have a vested interest as well. But, again, that fractured system is what makes it so tough.

It looks like things are quickly coalescing around agreed-upon standards to prove vaccination, at least around here.

I am hopeful these restriction-easing procedures are effective in reducing this disease’s presence in our lives without sacrificing public health. As eager as I am for socializing in the way things once were — I really miss live music — I want to do so with care and without urgency. Being able to sit in the company of equally vaccinated people enjoying a meal and conversation truly feels good again, albeit with these reservations. It is imperative to make sure everyone can enjoy that, too. Vaccinations and proof of immunity are interlinked ways of getting there.

Instagram announced the change — naturally — on Twitter:

They said it would never happen… Twitter Card previews start rolling out TODAY.

Now, when you share an Instagram link on Twitter a preview of that post will appear.

In December 2012, Instagram removed Twitter previews to direct more traffic to its website. Today’s change adds a summary card, which shows a small version of the image and still requires that users click through to see any detail. It took Instagram nearly a decade to slightly compromise and make sharing on Twitter suck a little less.

There are a few things I appreciate about this announcement last week from Glass:

With Glass 1.2 coming out today, you can now share your Glass Profile anywhere! We’re excited to open up profiles, letting you truly make Glass your home for highlighting your photography. Whether you’re using it as a portfolio, a quick showcase of your best shots, or a space to experiment with a new style, we’ve been blown away with the quality of your profiles. Now you can share them anywhere.

I like that this is not an attempt to duplicate the app on the web. You can see users’ photos, technical data, and profile information, but comments are not displayed and there is — as far as I can tell — no way to participate from the web. That gives it a different focus and a different feel.

I also appreciate that these profiles are not switched on by default. Users have to manually enable them. It feels like a slower and more deliberate action — a way to give users more control.

Glass is doing many things a little bit differently. I like it.

Jerome Pesenti, VP at Meta:

In the coming weeks, we will shut down the Face Recognition system on Facebook as part of a company-wide move to limit the use of facial recognition in our products. As part of this change, people who have opted in to our Face Recognition setting will no longer be automatically recognized in photos and videos, and we will delete the facial recognition template used to identify them.

[…]

But like most challenges involving complex social issues, we know the approach we’ve chosen involves some difficult tradeoffs. For example, the ability to tell a blind or visually impaired user that the person in a photo on their News Feed is their high school friend, or former colleague, is a valuable feature that makes our platforms more accessible. But it also depends on an underlying technology that attempts to evaluate the faces in a photo to match them with those kept in a database of people who opted-in. The changes we’re announcing today involve a company-wide move away from this kind of broad identification, and toward narrower forms of personal authentication.

Good. Pesenti says this will affect over a billion users, or about one-third of Facebook’s user base. When it launched in 2010, users were opted into it by default; it took until 2019 for the company to require that users switch it on themselves. The risks of facial recognition — stalking, abuse, false matches, devalued privacy — are too great for ad hoc regulatory intervention.

The format of this announcement is interesting. It tries desperately to strike a positive tone, with several paragraphs citing specific examples of the benefits of facial recognition and only gesturing at the potential for harm and abuse. I am glad Facebook sees so many great uses for it; I see them, too. But I wish the company were anywhere near as specific in acknowledging its harms. As it is presented, it looks defensive.

Kashmir Hill and Ryan Mac, New York Times:

When the Federal Trade Commission fined Facebook a record $5 billion to settle privacy complaints in 2019, the facial recognition software was among the concerns. Last year, the company also agreed to pay $650 million to settle a class-action lawsuit in Illinois that accused Facebook of violating a state law that requires residents’ consent to use their biometric information, including their “face geometry.”

While Facebook says it is deleting the information used to recognize individual faces, it is not clear whether the products of that data will — or even can — be deleted. If the faces of a billion users have already been integrated into machine learning models, it seems likely their contribution is inseparable and irreversible.