Bennett Cyphers, of the Electronic Frontier Foundation:
Google is leading the charge to replace third-party cookies with a new suite of technologies to target ads on the Web. And some of its proposals show that it hasn’t learned the right lessons from the ongoing backlash to the surveillance business model. This post will focus on one of those proposals, FLoC, which is perhaps the most ambitious — and potentially the most harmful.
In a world with FLoC, it may be more difficult to target users directly based on age, gender, or income. But it won’t be impossible. Trackers with access to auxiliary information about users will be able to learn what FLoC groupings “mean” — what kinds of people they contain — through observation and experiment. Those who are determined to do so will still be able to discriminate. Moreover, this kind of behavior will be harder for platforms to police than it already is. Advertisers with bad intentions will have plausible deniability — after all, they aren’t directly targeting protected categories, they’re just reaching people based on behavior. And the whole system will be more opaque to users and regulators.
I got this wrong and I feel bamboozled, but I have an excuse: this stuff is incomprehensible. Google’s proposal for dropping tracking is just a slightly broader version of tracking which happens to be much harder to understand. That should mean less adverse publicity for Google while still being hostile to user privacy.
For example, if you tell Siri to play a song, album or artist, it may ask you which service you want to use to listen to this sort of content. However, your response to Siri is not making that particular service your “default,” Apple says. In fact, Siri may ask you again at some point — a request that could confuse users if they thought their preferences had already been set.
This is a clever way of dealing with multiple options without requiring users to dig into menus or make manual settings. But we often want to pick something and stick with it. Computers are just tools; as they attempt to make more decisions for us, it can sometimes be delightful but it can also be maddening.
It reminds me of that button in the Twitter app that allows you to toggle between an algorithmically sorted timeline and a reverse-chronological one. If you open the app often enough, it will usually stick with the last sorting mode you selected. But if you do not, it will revert to showing an algorithmic timeline. If you prefer reverse-chronological, it sucks.
It is a bit like going out to your car one morning to find the seat and steering wheel in completely different positions from the way you left them. It is uncomfortable. It is no longer yours.
We want to share an update on our plans and guidance to help you prepare for these changes. We have decided to stop our iOS apps’ collection of IDFA data for now. Although this change affects the LinkedIn Audience Network (LAN), Conversion Tracking and Matched Audiences, we expect limited impact to your campaign performance, and don’t foresee major changes required for your campaign set-up.
The bill in North Dakota that failed last month was a pebble bouncing off a windscreen. But it looks like Apple is catching up to the gravel truck and cracks are beginning to form.
Leah Nylen, Politico:
Undeterred by that loss, the developers are focusing now on bills to let app-makers choose their own payment processors. So far, that legislation has been introduced in seven states, including New York, Illinois and Massachusetts, though the Arizona bill is the farthest along.
The state legislative fights are the latest manifestation of the long-running gripes that developers such as Spotify have aired to lawmakers and antitrust regulators in both Washington, D.C., and Europe about app store policies. Epic, the creator of the popular video game Fortnite, is also waging its own court fights against Apple and Google in an attempt to defang their app policies.
These bills are a testament to Apple’s and Google’s increasing control, and to their unwillingness to make smaller meaningful changes over time. Developers have been complaining about this stuff for years, without much response from either company — but particularly from Apple. Sure, subscriptions were opened to apps of all types several years ago and Apple takes a smaller commission from recurring subscribers. And, yes, small changes were made last year under increasing antitrust scrutiny. But I have to wonder if it would have become such a significant issue as to inspire legislation if Apple had been more proactive about making incremental rule changes to loosen its control and adapt more readily to complaints.
The way I see it, Apple could have continued to run the App Store as it has done while responding to developers’ feedback; or, at the very least, it could have loosened its control on its own terms. But it overplayed its hand. If enough state legislatures pass these bills — particularly in Illinois and New York — it is going to be forced to make major changes.
Here, I’ll introduce you to some of the guys. On the end, that’s Gmail. He’s the only one been around longer’n me, and he’s not going anywhere. Practically invincible. This fella next to me has a page of upstate real estate listings that get refreshed every so often. We call him The Dreamer. And that’s Bank Web Portal. He’s sat idle so long that he’s auto-logged out. That’s a death sentence. As soon as he gets noticed, he’s a goner. Don’t stare, son.
I’d say I thank god that I’ve stuck around as long as I have, but when you stick around as long as I have, you realize there is no god. Just an unfeeling, capricious universe, playing with us as a child with marbles. You’ll learn. Over time you’ll move further and further to the left, pixel by pixel, as each new recruit pops in. All you can do is load pages as fast as you can, keep your ad blocker ready to fire at a moment’s notice, and try to tune out the constant thrum of lo-fi hip-hop beats to relax/study to.
For years now, Apple has trumpeted its commitment to the privacy of its customers. Unlike most of its competitors, Apple’s business model (primarily selling products and services, not advertising) allows it to succeed without relying on collecting personal information from its customers. It’s a big advantage, and Apple knows it.
But when I look at Apple’s product strategy, I’m surprised at all the ways that the company has failed to take advantage of its unique position. From operating-system features to new services, the company should double down on privacy — and widen the lead it has over its competitors.
Snell is onto something here, and Apple could do more to improve privacy in Mail and the safety of its App Store. But I disagree with the suggestion at the end of this piece:
Most of my suggestions will cost Apple a lot of money to implement. In the case of something like App Store integrity, the company needs to pay up and do the right thing. But since Apple has built an impressive business on selling services to its users, perhaps there’s an opportunity here to increase privacy by providing a subscription service.
Privacy should be an expectation for all — not a premium product offering for those who can afford it. I have written before about how, in a better world, Apple could not market based on privacy because every company would be similarly respectful of its users. I am adamant that this is still true.
Privacy should not be something users need to become experts in so that they can make purchasing decisions. It should not be up to individual companies, and it should not be a competitive advantage. It ought to be what we all have no matter our choice of computer, phone, web browser, search engine, or television. This has always been a public policy issue that has, so far, been addressed as a business differentiator.
It’s difficult to conceive of the internet we know today — with information on every topic, in every language, at the fingertips of billions of people — without advertising as its economic foundation. But as our industry has strived to deliver relevant ads to consumers across the web, it has created a proliferation of individual user data across thousands of companies, typically gathered through third-party cookies. This has led to an erosion of trust: In fact, 72% of people feel that almost all of what they do online is being tracked by advertisers, technology firms or other companies, and 81% say that the potential risks they face because of data collection outweigh the benefits, according to a study by Pew Research Center. […]
I do like how Temkin admits that the company whose website this announcement is being made on has played a major role in degrading the web for around four out of five people surveyed — without actually saying that. But go on:
That’s why last year Chrome announced its intent to remove support for third-party cookies, and why we’ve been working with the broader industry on the Privacy Sandbox to build innovations that protect anonymity while still delivering results for advertisers and publishers. Even so, we continue to get questions about whether Google will join others in the ad tech industry who plan to replace third-party cookies with alternative user-level identifiers. Today, we’re making explicit that once third-party cookies are phased out, we will not build alternate identifiers to track individuals as they browse across the web, nor will we use them in our products.
We realize this means other providers may offer a level of user identity for ad tracking across the web that we will not — like PII graphs based on people’s email addresses. We don’t believe these solutions will meet rising consumer expectations for privacy, nor will they stand up to rapidly evolving regulatory restrictions, and therefore aren’t a sustainable long term investment. Instead, our web products will be powered by privacy-preserving APIs which prevent individual tracking while still delivering results for advertisers and publishers.
One reason Google is doing this is that it operates at such a vast scale that it can continue to abuse user privacy with its own services with little adjustment. This change affects third-party tracking and data, so it disadvantages smaller ad tech firms that are not part of the web advertising duopoly.
Nevertheless, and in combination with antitrust action, this is a good step for an internet economy that depends less on surveillance. When the New York Times announced last May that it was going to phase out third-party user data for its online advertising, there was a similar chorus of misguided jeering. Less cross-site tracking is better for the web and better for privacy — full stop. But better is not enough.
A company that rents out access to more than 10 million Web browsers so that clients can hide their true Internet addresses has built its network by paying browser extension makers to quietly include its code in their creations. This story examines the lopsided economics of extension development, and why installing an extension can be such a risky proposition.
The risks increase as we work more in web browsers and through web apps. Browser extensions can be nasty software. Yet, despite knowing all of this and covering it for years, I still think of them less seriously than standalone applications. I don’t know about you but, in my mind, browser extensions are just lightweight little scripts that give me a download button on YouTube or block egregious ads — even though I know that is not the case.
The internet is filled with stories from people whose Google accounts were locked for unexplained reasons, causing them to lose all of their data, including years of email, so I was somewhat concerned. But I’d never heard of similar cases involving Apple’s services, and I wouldn’t expect such behavior from a customer-focused company like Apple, so I figured it was a glitch and made a mental note to try again later.
As it turns out, my bank account number changed in January, causing Apple Card autopay to fail. Then the Apple Store made a charge on the card. Less than fifteen days after that, my App Store, iCloud, Apple Music, and Apple ID accounts had all been disabled by Apple Card.
This whole story is troublesome for several reasons — the poor customer service response, the confusing email Curtis received, the dead-end address Curtis was supposed to reply to — but primarily because it validates some of the concerns people had about Apple entering the finance business. There are incentives for users to put their Apple purchases on their Apple Card. But the hidden risk of creating a closed loop of payments, products, and services is that being locked out of any one of them can affect all of them. I do not blame anyone for switching their Apple-related payments to an Apple Card, but it seems to come with more caveats than Apple is letting on.
This financialization push is also incentivizing Apple to become an enforcer for a company that sucks.
First, to hold your digital life hostage until you pay off a credit card bill is grotesquely disproportionate. Second, Apple is acting as a debt collector on behalf of another company, and telling you that until you sort out your payments with them, you’re going to be shut out of your digital life. Yes, it’s called the “Apple Card”, but as the statement above explicitly details, you have to settle your debts with Goldman Sachs. Because that’s who offered you the credit. Not Apple.
This is true. What is not true is how Hansson frames this problem: “Apple basically bricked his computer as a debt-collection tactic”. Curtis’ computer was not “bricked”. The problems Curtis experienced, the grim world of credit cards, and the conglomeration of companies are all serious enough concerns that we do not need to make things up. I hope this never happens to anyone else, regardless of the reason — but that would only resolve the first of these issues. Apple is still in the credit card business with Goldman Sachs, which means it is still profiting off interest payments from indebted individuals.
We apologize for any confusion or inconvenience we may have caused for this customer. The issue in question involved a restriction on the customer’s Apple ID that disabled App Store and iTunes purchases and subscription services, excluding iCloud. Apple provided an instant credit for the purchase of a new MacBook Pro, and as part of that agreement, the customer was to return their current unit to us. No matter what payment method was used, the ability to transact on the associated Apple ID was disabled because Apple could not collect funds. This is entirely unrelated to Apple Card.
So this is entirely related to the autopay problem Curtis had and the trade-in program not going according to plan. That is good news for Apple Card users.
But that good news has been dampened somewhat because the media has focused on another number: efficacy, or how effective the vaccines are in preventing any illness at all, however mild it is. Viewed through that prism, the Johnson & Johnson vaccine (with a 66 percent efficacy rate) looks substantially worse than Pfizer/BioNTech’s or Moderna’s (both 95 percent efficacious).
But I’ll let you in on a secret: Even the 66 percent efficacy rate is an impressive result. You need only look at seasonal flu vaccines for proof.
What went wrong? The same thing that’s going wrong right now with the reporting on whether vaccines will protect recipients against the new viral variants. Some outlets emphasize the worst or misinterpret the research. Some public-health officials are wary of encouraging the relaxation of any precautions. Some prominent experts on social media — even those with seemingly solid credentials — tend to respond to everything with alarm and sirens. So the message that got heard was that vaccines will not prevent transmission, or that they won’t work against new variants, or that we don’t know if they will. What the public needs to hear, though, is that based on existing data, we expect them to work fairly well — but we’ll learn more about precisely how effective they’ll be over time, and that tweaks may make them even better.
A year into the pandemic, we’re still repeating the same mistakes.
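For context on how those headline numbers are derived: trial efficacy is simply the relative reduction in attack rate between the vaccinated and placebo arms. A quick sketch with made-up trial figures — these are illustrative numbers, not the actual Johnson & Johnson or Pfizer data:

```python
def efficacy(cases_vax: int, n_vax: int, cases_placebo: int, n_placebo: int) -> float:
    """Vaccine efficacy = 1 - (attack rate among vaccinated) / (attack rate among placebo)."""
    return 1 - (cases_vax / n_vax) / (cases_placebo / n_placebo)

# Hypothetical trial: 34 cases among 10,000 vaccinated versus
# 100 cases among 10,000 on placebo works out to 66% efficacy.
print(round(efficacy(34, 10_000, 100, 10_000), 2))  # → 0.66
```

The point the quoted piece is making follows from the formula: efficacy measures relative risk reduction against *any* illness in a trial, which is why a 66% figure can coexist with near-total protection against severe outcomes.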
We lack existing domestic production capacity for these vaccines in Canada, so we’re having to import our entire supply from Europe — at least until manufacturing facilities are completed. That means we have had hiccups in our rollout compared to the United States, where around two million doses are being administered every day now. We have vaccinated at about 20% of the rate of the U.S. so far.
But it sounds like many hiccups have been sorted out, and we’ll be fully vaccinated by the “end of the summer”. I cannot wait to see live music again.
Given these stakes, it’s all the more surprising that we spend so little time trying to understand the source of this discontent. Many in the business community tend to dismiss the psychological toll from e-mail as an incidental side effect caused by bad in-box habits or a weak constitution. I’ve come to believe, however, that much deeper forces are at play in generating our mismatch with this tool, including some that get at the very core of what drives us as humans.
The flip side of an evolutionary obsession with social interaction is a corresponding feeling of distress when it’s thwarted. Much in the same way that our attraction to food is coupled with the gnawing sensation of hunger in its absence, our instinct to connect is accompanied by an anxious unease when we neglect these interactions. This matters in the office, because an unfortunate side effect of overwhelming e-mail communication is that it constantly exposes you to exactly this form of social distress. A frenetic approach to professional collaboration generates messages faster than you can keep up — you finish one response only to find that three more have arrived in the interim, and, while you are at home at night, or over the weekend, or when you are on vacation, you cannot escape the awareness that the missives in your in-box are piling up ever thicker in your absence.
It is awfully charming that the New Yorker continues to insist upon hyphenating “email” and “inbox”, as though these are new words heretofore unwritten and unpublished. Bless ’em.
I have found wild success in subverting my evolutionary bias for connection by simply turning notification sounds off. I keep many banners and badges on, but making them inaudible removes, for me, the urgency of each new thing I am supposed to deal with. But that is just the notifications; email itself is, according to Newport’s book extract, just as problematic.
I thought I had solved that by being atrocious at email. I archive liberally. I do not reply quickly or, often, ever. I can only hit inbox zero by deleting everything sight-unseen. The result of this is that there are several people awaiting my response to messages that are actually quite urgent.
Just look what this article has done: it has given me a reason to be stressed about email again.
In our discussions, journalists and human rights defenders, including those from Myanmar, described fearing the weight of having to relentlessly prove what’s real and what is fake. They worried their work would become not just debunking rumors, but having to prove that something is authentic. Skeptical audiences and public factions second-guess the evidence to reinforce and protect their worldview, and to justify actions and partisan reasoning. In the US, for example, conspiracists and right-wing supporters dismissed former president Donald Trump’s awkward concession speech after the attack on the Capitol by claiming “it’s a deepfake.”
The TikTok videos of “Tom Cruise” are certainly impressive and terrifying, but they are also an edge case. If you have not yet seen them, I recommend checking them out. I think Gregory nails the real concern of deepfake videos: it is the paranoia, more than the videos themselves. It is the mere presence of deepfakes as a concept that is concerning, because it is yet another piece of technobabble that can manifest in the wrong hands as propaganda and conspiracy theory mongering.
Sure is bizarre to be living at a time when we are, as humankind, more scientifically literate than ever before while increasingly doubting the reality in front of our very eyes. Last year, Kirby Ferguson put together a terrific video about magical thinking. The subject matter is kind of heavy, but it is worth a watch for capturing the strangeness of this time.
Last week, Apple published an update to its Platform Security Guide. The PDF now weighs in at nearly two hundred pages and includes a lot of updates — particularly with the launch of the M1 Macs late last year. Unfortunately, because of its density, it is not exactly a breezy thing to write about.
As wonderful as the Apple Platform Security guide is as a resource, writing about it is about as easy as writing a hot take on the latest updates to the dictionary. Sure, the guide has numerous updates and lots of new content, but the real story isn’t in the details, but in the larger directions of Apple’s security program, how it impacts Apple’s customers, and what it means to the technology industry at large.
From that broader perspective, the writing is on the wall. The future of cybersecurity is vertical integration. By vertical integration, I mean the combination of hardware, software, and cloud-based services to build a comprehensive ecosystem. Vertical integration for increased security isn’t merely a trend at Apple, it’s one we see in wide swaths of the industry, including such key players as Amazon Web Services. When security really matters, it’s hard to compete if you don’t have complete control of the stack: hardware, software, and services.
Vertical integration in the name of privacy and security is the purest expression of the Cook doctrine that I can think of. We got a little preview of the acceleration of this strategy not too long ago — and a glimpse of its limitations in November — but it has been a tentpole of Apple’s security strategy for ages. Recall the way Touch ID was pitched when the iPhone 5S was introduced, for example. Phil Schiller repeatedly pointed to its deep software integration while Dan Riccio, in a promotional video, explained how the fingerprints were stored in the Secure Enclave.
All of this makes me wonder whatever happened to Project McQueen, Apple’s effort to eliminate its reliance on third-party data centres for iCloud. Surely this project did not die when some of the engineers responsible for it left the company, but Apple still depends on others for hosting. From page 109 of the guide:
Each file is broken into chunks and encrypted by iCloud using AES128 and a key derived from each chunk’s contents, with the keys using SHA256. The keys and the file’s metadata are stored by Apple in the user’s iCloud account. The encrypted chunks of the file are stored, without any user-identifying information or the keys, using both Apple and third-party storage services — such as Amazon Web Services or Google Cloud Platform — but these partners don’t have the keys to decrypt the user’s data stored on their servers.
Even though Amazon and Google absolutely cannot — and, even if these files were not strongly encrypted, would not — access users’ data, it is strange that Apple still relies on third-party data centres given its preference for tight proprietary integration.
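For what it is worth, the scheme the guide describes — a per-chunk key derived from the chunk’s own contents — is a form of convergent encryption. A rough stdlib-only sketch of the key-derivation half; the AES-128 encryption step is omitted, and the chunk size is made up for illustration:

```python
import hashlib

def chunk_keys(data: bytes, chunk_size: int = 4):
    """Split data into chunks and derive a per-chunk key from each
    chunk's own contents (SHA-256, truncated to 128 bits to suit
    AES-128). Per the guide, the keys — not the plaintext — are what
    Apple stores in the user's iCloud account; the storage partners
    only ever see encrypted chunks."""
    pairs = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        key = hashlib.sha256(chunk).digest()[:16]  # 128-bit key
        pairs.append((chunk, key))
    return pairs

# Identical chunks always derive identical keys, which is what lets
# identical content deduplicate without the partners holding any keys.
pairs = chunk_keys(b"ABCDABCD")
assert pairs[0][1] == pairs[1][1]
```

The trade-off of content-derived keys is well known: anyone who already possesses a chunk can derive its key, which is acceptable here because the keys themselves never leave Apple’s side of the system.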
I’ve been picking through this guide for the past week and trying to understand it as best I can. It is, as an Apple spokesperson explained to Jai Vijayan at Dark Reading, intended to be more reflective of security researchers’ wishes and needs, which you can read as being more comprehensive with a greater level of technical detail. One item I noticed on pages 14–15 is the new counter lockbox feature in recent devices:
Devices first released in Fall 2020 or later are equipped with a 2nd-generation Secure Storage Component. The 2nd-generation Secure Storage Component adds counter lockboxes. Each counter lockbox stores a 128-bit salt, a 128-bit passcode verifier, an 8-bit counter, and an 8-bit maximum attempt value. Access to the counter lockboxes is through an encrypted and authenticated protocol.
Counter lockboxes hold the entropy needed to unlock passcode-protected user data. To access the user data, the paired Secure Enclave must derive the correct passcode entropy value from the user’s passcode and the Secure Enclave’s UID. The user’s passcode can’t be learned using unlock attempts sent from a source other than the paired Secure Enclave. If the passcode attempt limit is exceeded (for example, 10 attempts on iPhone), the passcode-protected data is erased completely by the Secure Storage Component.
I read this as a countermeasure against devices, such as the GrayKey, that try to crack iPhones by guessing their passcodes using a vulnerability that gives them unlimited attempts. I cannot find any record of a GrayKey successfully being used against an iPhone 12 model, but I did find an article highlighting a recent funding round for the company.
But not all are convinced by Grayshift’s long-term capabilities, given Apple’s consistent improvement of the iPhone’s security. The GrayKey is believed to be capable of hacking iPhones up to the iPhone 11, though it’s unclear how effective the tool is against the iPhone 12. “It’s most likely they can’t do much, if anything at all, with the iPhone 12 and iOS 14,” said Vladimir Katalov, CEO of another forensics company, Elcomsoft. “Perhaps they just want to cash out.”
Katalov previously speculated that iOS 12 defeated the GrayKey but, clearly, some new method was developed to keep it working — at least, until recently. A new method may be discovered again. But it seems that Apple is particularly keen to address concerns about passcode vulnerabilities exploited by third parties. It is too bad that iCloud backups remain a critically weak point.
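The counter lockbox behaviour quoted from the guide can be modelled, very loosely, like this — a toy sketch only, since the real component is dedicated hardware and the exact protocol is Apple’s:

```python
import hashlib
import os

class CounterLockbox:
    """Toy model of the 2nd-generation Secure Storage Component's
    counter lockbox: a salt, a passcode verifier, an attempt counter,
    and a maximum attempt value. Purely illustrative."""

    def __init__(self, passcode: str, max_attempts: int = 10):
        self.salt = os.urandom(16)               # 128-bit salt
        self.verifier = self._derive(passcode)   # 128-bit passcode verifier
        self.counter = 0                         # 8-bit counter in hardware
        self.max_attempts = max_attempts         # 8-bit maximum attempt value
        self.erased = False

    def _derive(self, passcode: str) -> bytes:
        # Stand-in for the real (unpublished) derivation function.
        return hashlib.sha256(self.salt + passcode.encode()).digest()[:16]

    def try_unlock(self, passcode: str) -> bool:
        if self.erased:
            return False                         # entropy is gone for good
        self.counter += 1
        if self._derive(passcode) == self.verifier:
            self.counter = 0                     # success resets the count
            return True
        if self.counter >= self.max_attempts:
            self.erased = True                   # protected data destroyed
        return False
```

The key property is that the erase-on-limit logic lives inside the storage component itself, so a tool that bypasses the operating system’s attempt throttling still cannot get unlimited guesses.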
During a scheduled companywide meeting, Andrew Bosworth, Facebook’s vice president of augmented and virtual reality, told employees that the company is currently assessing whether or not it has the legal capacity to offer facial recognition on devices that are reportedly set to launch later this year. Nothing had been decided, he said, and he noted that current state laws may make it impossible for Facebook to offer people the ability to search for others based on pictures of their face.
“Face recognition … might be the thorniest issue, where the benefits are so clear, and the risks are so clear, and we don’t know where to balance those things,” Bosworth said in response to an employee question about whether people would be able to “mark their faces as unsearchable” when smart glasses become a prevalent technology. The unnamed worker specifically highlighted fears about the potential for “real-world harm,” including “stalkers.”
Andrew Bosworth confirmed the report on Twitter with an unsurprisingly defensive tone:
We’ve been open about our efforts to build AR glasses and are still in the early stages. Face recognition is a hugely controversial topic and for good reason and I was speaking about was how we are going to have to have a very public discussion about the pros and cons.
In our meeting today I specifically said the future product would be fine without it but there were some nice use cases if it could be done in a way the public and regulators were comfortable with.
Anyone who thinks for even a second about the negative consequences of a feature like this knows that there is absolutely no circumstance in which this is an open-ended “discussion”, as Bosworth seems to think. Face blindness is certainly real, and we must be compassionate to those who live with it, but adding a technology layer from one of the world’s most privacy-hostile companies is not a solution. It is a catastrophe waiting to happen.
Services are also attempting to reduce the content-moderation load by reducing the incentives or opportunity for bad behavior. Pinterest, for example, has from its earliest days minimized the size and significance of comments, says Ms. Chou, the former Pinterest engineer, in part by putting them in a smaller typeface and making them harder to find. This made comments less appealing to trolls and spammers, she adds.
The dating app Bumble only allows women to reach out to men. Flipping the script of a typical dating app has arguably made Bumble more welcoming for women, says Mr. Davis, of Spectrum Labs. Bumble has other features designed to pre-emptively reduce or eliminate harassment, says Chief Product Officer Miles Norris, including a “super block” feature that builds a comprehensive digital dossier on banned users. This means that if, for example, banned users attempt to create a new account with a fresh email address, they can be detected and blocked based on other identifying features.
No matter how effective platforms become at removing unwanted and inappropriate media, it will always be preferable to me for these services and products to be designed to discourage the need for heavy moderation in the first place. It is unsurprising to me that the platforms taking this approach and highlighted here by Mims are used by women more than, say, Twitter, Reddit, or YouTube. I have long harboured a pet theory that it is a positive feedback loop due, in part, to considering the negative ramifications of specific features. These platforms are certainly not perfect, but their more thoughtful feature design means they are less prone to misuse, which means they are more appealing to women and other people who are more likely to face abuse online. By contrast, platforms that deploy features without that kind of foresight quickly become overwhelmed with misuse, driving away some of those who tend to be on its receiving end.
89,132,938 videos were removed globally in the second half of 2020 for violating our Community Guidelines or Terms of Service, which is less than 1% of all videos uploaded on TikTok. Of these videos, 11,775,777 were removed in the US.
92.4% of these videos were removed before a user reported them, 83.3% were removed before they received any views, and 93.5% were removed within 24 hours of being posted.
51,505 videos were removed for promoting COVID-19 misinformation. Of these videos, 86% were removed before they were reported to us, 87% were removed within 24 hours of being uploaded to TikTok, and 71% had zero views.
TikTok credits its automated systems for detecting violating videos before they were viewed. For comparison, around 94% of YouTube videos were automatically flagged, but only around 40% were removed with zero views.
TikTok’s moderation efforts do come with a bit of an asterisk, however, because the platform is owned by ByteDance, which also runs Douyin, the version of TikTok only available in China.
The truth is, political speech comprised a tiny fraction of deleted content. Chinese netizens are fluent in self-censorship and know what not to say. ByteDance’s platforms — Douyin, Toutiao, Xigua and Huoshan — are mostly entertainment apps. We mostly censored content the Chinese government considers morally hazardous — pornography, lewd conversations, nudity, graphic images and curse words — as well as unauthorized livestreaming sales and content that violated copyright.
It was certainly not a job I’d tell my friends and family about with pride. When they asked what I did at ByteDance, I usually told them I deleted posts (删帖). Some of my friends would say, “Now I know who gutted my account.” The tools I helped create can also help fight dangers like fake news. But in China, one primary function of these technologies is to censor speech and erase collective memories of major events, however infrequently this function gets used.
For clarity, TikTok and Douyin are entirely separate platforms. But one of the reasons TikTok’s moderation efforts are so effective — especially for a platform that has grown dramatically in such a short period of time — is, basically, because they have to be.
According to Kim Scheinberg, she and her husband John Kullmann had decided to move back to the east coast in 2000. To make the move possible, Kullmann needed a project at Apple he could work on independently. Ultimately, he started work on an Intel version of Mac OS X. Eighteen months later, in December 2001, his boss asked to see what he had been working on.
In JK’s office, Joe watches in amazement as JK boots up an Intel PC and up on the screen comes the familiar ‘Welcome to Macintosh’.
Joe pauses, silent for a moment, then says, “I’ll be right back.”
He comes back a few minutes later with Bertrand Serlet.
Max (our 1-year-old) and I were in the office when this happened because I was picking JK up from work. Bertrand walks in, watches the PC boot up, and says to JK, “How long would it take you to get this running on a (Sony) Vaio?” JK replies, “Not long” and Bertrand says, “Two weeks? Three?”
JK said more like two *hours*. Three hours, tops.
Bertrand tells JK to go to Fry’s (the famous West Coast computer chain) and buy the top of the line, most expensive Vaio they have. So off JK, Max and I go to Frys. We return to Apple less than an hour later. By 7:30 that evening, the Vaio is running the Mac OS. [My husband disputes my memory of this and says that Matt Watson bought the Vaio. Maybe Matt will chime in.]
I’m Canadian; this story is my only association with Fry’s. From what I’ve seen of the memories that have been posted across the web tonight, it held a special place in the hearts of many.
It appears that Facebook and the Australian government are resolving their differences. Facebook says that it will be restoring links to news on its platform; the government will make some adjustments to the law.
But while a country and a social media company were scuffling, the latter’s power became obvious to those in the South Pacific.
Dr Amanda Watson, a research fellow at the Australian National University’s Coral Bell School of Asia Pacific Affairs, and an expert in digital technology use in the Pacific, said there was widespread confusion across the Pacific about the practical ramifications of Facebook’s Australian news ban.
“Facebook is the primary platform, because a number of telco providers offer cheaper Facebook data, or bonus Facebook data. Many Pacific Islanders might know how to do some basic Facebooking, but it’s questionable if they would be able to open an internet search engine and search for news, or go to a particular web address. There are technical confidence issues, and that’s linked to education levels in the Pacific, and how long people have had access to the internet.”
Watson is describing the practice of zero-rating and one reason why it is so pernicious. Zero-rating sounds great on its face. It means that popular services can strike deals with telecom providers so, at its best, some of the things most people do on the web are not counted against data quotas.
In the case of Facebook Free Basics — formerly Internet.org, which is among the most specious branding exercises I can imagine — there are a handful of websites and services that are included in mobile plans. Many of the websites selected by Facebook to receive this special treatment are American, including Facebook itself, of course. The result of this is that, according to a 2015 survey, only 5% of Americans agreed with the statement that “Facebook is the internet” compared to 65% of respondents in Nigeria, 61% in Indonesia, and 58% in India — countries where Facebook Free Basics is available.
In the quote above, Watson describes a lack of technical confidence as one reason people do not venture beyond Facebook, but there is another major hurdle: cost. Data plans can be expensive, and many news websites are bloated enough to burn through a quota quickly. Sticking to the websites included in Facebook Free Basics is not just easier; it is an economic reality that Facebook is taking advantage of.
In much of the world, internet policy effectively is Facebook policy, and vice-versa. One reason for that is the ferocious speed at which Facebook grew and acquired potential competitors. Though an American company, WhatsApp was wildly popular mostly outside of the U.S. before Facebook bought it. That’s why treating antitrust as a solely American concern — or something of trivial relevance, or something that will be resolved by the eventual passage of time — is such a frustrating response from those of us who live elsewhere.
So, yes, Australian policy requiring Facebook to pay Rupert Murdoch’s empire so that users of the former can link to the latter does seem pretty ridiculous. But it is extraordinary to see a huge chunk of the world’s ad spending redirected to two American companies headquartered within a ten-minute drive of each other. Many independent and local media entities around the world are bleeding so that Murdoch can buy another yacht with the money Facebook and Google should be using to pay their taxes.
Update: A reluctance to effectively govern in the United States is not the only way to gain technical dominance.
Apple relies on bugs reported through its Feedback systems. As this Spotlight bug isn’t easy to recognise, users and third-party developers are only now realising the effects of Dave’s simple coding error. Without thorough testing, Apple is almost completely reliant on Feedback to detect and diagnose bugs.
This system is both flawed and woefully inefficient, as any expert in quality management will tell you. It’s like letting cars roll off the production line with no windows, and waiting for customers to bring them back to have them installed. By far the best choice is to build correctly the first time, or, as second best, to detect and rectify defects before shipping. So long as shipping updates remains relatively cheap, and your customers are happy to report all the defects which you didn’t fix, it appears to work, at least in the short term.
I’ve now reached the stage where I simply don’t have time to report all these bugs, nor should I have to. Indeed, I’ve realised that in doing so, I only help perpetuate Apple’s flawed engineering practices.
I was thinking about this piece earlier today as I filed a handful of pretty standard bug reports based on some visual problems I noticed in Big Sur.
For each one, Feedback Assistant automatically collected whole-system diagnostics, which consumes nearly all available system resources for a few minutes as it spits out a folder of logs totalling well over a gigabyte, plus the same folder as a compressed archive. The archive file is submitted to Apple and the uncompressed folder is locally cached for a little while — the oldest one on my drive is from January 4. It does not matter what the feedback is related to; this is a minimum requirement of all bug reports. If you are filing a report about any of several system features — Bluetooth or Time Machine, for example — it will also require you to collect separate diagnostics.1
Often, I suspect, users will not attach all of the diagnostics needed for Apple’s developers to even find the bug. But I have to wonder how effective it is to be collecting so many system reports all of the time, and whether it is making a meaningful difference to the quality of software — particularly before it is shipped. I have hundreds of open bug reports, many of which are years old and associated with “more than ten” similar reports. How can any engineering team begin to triage all of this information to fix problems that have shipped?
To its credit, the quality of Apple’s software seems to have stabilized in the last year or so. But after the last several years, it feels more like the hole has stopped getting deeper and less like we are climbing out of it.
FB8993839 for the Time Machine bug. I have a recent top-of-the-line iMac connected by USB-C to a fast SSD and it’s still slow as hell. I do not understand this. ↩︎
In July last year, scientists shot another car to Mars from Earth. It touched down a few days ago and, for the first time, provided video of the descent and audio from the planet. There is also a helicopter aboard, which will apparently be flown near the rover within the next month. A little over a century ago, humans first took to the air on Earth; by the end of March, if all goes to plan, humans will remotely pilot an aircraft in another planet’s atmosphere. Incredible.
I linked to an article earlier this month from Mike Masnick of Techdirt, explaining that several similar bills were being pushed in U.S. state legislatures to combat so-called “social media censorship”. These bills share virtually all of their language and have obvious First Amendment problems. In that linked piece, I showed that these bills were likely the work of a bizarre campaign associated with Chris Sevier.
As of February 3, Sevier says that “28 states are moving forward on this”. I could not come up with a number anywhere near that. But there is one thing Sevier is right about: the bill template is now, technically, bipartisan, as Democratic lawmaker Mike Gabbard introduced it in the Hawaiian Senate.
It looks like Apple has started to crack down on scam attempts by rejecting apps that look like they have subscriptions or other in-app purchases with prices that don’t seem reasonable to the App Review team.
9to5Mac obtained access to a rejection email shared by a developer that provides a subscription service through their app. It shows a rejection message from Apple telling them that their app would not be approved because the prices of their in-app purchase products “do not reflect the value of the features and content offered to the user.” Apple’s email goes as far as calling it a “rip-off to customers” (you can read the full letter at the end of this post).
This is not Apple’s sole response to fighting App Store scams. iOS 14.5 has a subtly redesigned subscription sheet that more clearly displays the cost of the subscription and its payment term.
I have waffled a bit on whether it makes sense for Apple to be the filter for the appropriateness of app pricing. It has always been a little bit at the mercy of Apple’s discretion — remember the I Am Rich app? — but legitimate developers have concerns about whether their apps will be second-guessed by some reviewer as being too expensive. And I am quite sure that, if the hypothetical becomes a reality, it is likely to be resolved with a few emails. But developers’ livelihoods are often on the line; there are no alternative native app marketplaces on iOS.
The proof of this strategy’s success will be in Apple’s execution, but that in itself is a little worrisome. It is a largely subjective measure; who is an app reviewer to say whether an app is worth five dollars a week or five dollars a month? Apple does not have a history of wild incompetence with its handling of the App Store, but there are enough stories of mistakes and heavy-handedness that this is being viewed as a potential concern even by longstanding developers of high-quality apps.
I hope this helps. There are enough of these fleeceware scams in the store to be impacting its reputation. A crackdown is clearly necessary. The question is whether the App Store team is capable of executing it.
In recent years, and whether we realize it or not, biometric technologies such as face and iris recognition have crept into every facet of our lives. These technologies link people who would otherwise have public anonymity to detailed profiles of information about them, kept by everything from security companies to financial institutions. They are used to screen CCTV camera footage, for keyless entry in apartment buildings, and even in contactless banking. And now, increasingly, algorithms designed to recognize us are being used in border control. Canada has been researching and piloting facial recognition at our borders for a few years, but — at least based on publicly available information — we haven’t yet implemented it on as large a scale as the US has. Examining how these technologies are being used and how quickly they are proliferating at the southern US border is perhaps our best way of getting a glimpse of what may be in our own future—especially given that any American adoption of technology shapes not only Canada–US travel but, as the world learned after 9/11, international travel protocols.
Canada has tested a “deception-detection system,” similar to iBorderCtrl, called the Automated Virtual Agent for Truth Assessment in Real Time, or AVATAR. Canada Border Services Agency employees tested AVATAR in March 2016. Eighty-two volunteers from government agencies and academic partners took part in the experiment, with half of them playing “imposters” and “smugglers,” which the study labelled “liars,” and the other half playing innocent travellers, referred to as “non-liars.” The system’s sensors recorded more than a million biometric and nonbiometric measurements for each person and spat out an assessment of guilt or innocence. The test showed that AVATAR was “better than a random guess” and better than humans at detecting “liars.” However, the study concluded, “results of this experiment may not represent real world results.” The report recommended “further testing in a variety of border control applications.” (A CBSA spokesperson told me the agency has not tested AVATAR beyond the 2018 report and is not currently considering using it on actual travellers.)
These technologies are deeply concerning from a privacy perspective. The risks of their misuse are so great that their implementation should be prohibited — at least until a legal framework is in place, but I think forever. There is no reason we should test them on a “trial” basis; there are no new problems that biometric systems solve by being deployed sooner.
But I am curious about our relationship with their biases and accuracy. The fundamental concerns about depending on machine learning boil down to whether suspicions about its reliability are grounded in reality, and whether we are less prone to examining its results in depth. I have always been skeptical of machines replacing humans in jobs that require high levels of judgement. But I began questioning that very general assumption last summer after reading a convincing argument from Aaron Gordon at Vice that speed cameras are actually fine:
Speed and red light cameras are a proven, functional technology that make roads safer by slowing drivers down. They’re widely used in other countries and can also enforce parking restrictions like not blocking bus or bike lanes. They’re incredibly effective enforcers of the law. They never need coffee breaks, don’t let their friends or coworkers off easy, and certainly don’t discriminate based on the color of the driver’s skin. Because these automated systems are looking at vehicles, not people’s faces, they avoid the implicit bias quandaries that, say, facial recognition systems have, although, as Dave Cooke from the Union of Concerned Scientists tweeted, “the equitability of traffic cameras is dependent upon who is determining where to place them.”
Loath as I am to admit it, Gordon and the researchers in his article have got a point. There are few instances where something is as unambiguous as a vehicle speeding or running a red light. If the equipment is accurately calibrated and there is ample amber light time, the biggest frustration for drivers is that they can no longer speed with abandon or race through changing lights — which are things they should not have been doing in any circumstance. I am not arguing that we should put speed cameras every hundred metres on every road, nor that punitive measures are the only or even best behavioural correction, merely that these cameras can actually reduce bias. Please do not send hate mail.
Facial recognition, iris recognition, gait recognition — these biometrics methods are clearly more complex than identifying whether a car was speeding. But I have to wonder if there is an assumption by some that there is a linear and logical progression from one to the other, and there simply is not. Biometrics are more like forensics, and courtrooms still accept junk science. It appears that all that is being done with machine learning is to disguise the assumptions involved in matching one part of a person’s body or behaviour to their entire self.
It comes back to Maciej Cegłowski’s aphorism that “machine learning is money laundering for bias”:
When we talk about the moral economy of tech, we must confront the fact that we have created a powerful tool of social control. Those who run the surveillance apparatus understand its capabilities in a way the average citizen does not. My greatest fear is seeing the full might of the surveillance apparatus unleashed against a despised minority, in a democratic country.
What we’ve done as technologists is leave a loaded gun lying around, in the hopes that no one will ever pick it up and use it.
Well we’re using it now, and we have done little to assure there are no bystanders in the path of the bullet.
With a Canadian law being drafted that is similar to the one moving forward in Australia, I have been watching this story intently. My hope has been that Canadian lawmakers will see the responses to these policies and adjust theirs accordingly. I am particularly concerned about its effects on local media, like the excellent Sprawl here in Calgary.
Third, this incident provides an important reminder that independent and smaller media will bear the biggest brunt of these policies. The reality is that the Australian battle really pits Facebook against Rupert Murdoch’s media empire. In other words, giant vs. giant. In Canada, the large media companies such as Postmedia and Torstar are the most vocal lobbyists on this issue, but smaller, independent media have already indicated that they do not support the News Media Canada lobbying campaign and want the benefit of links from social media services.
These policy proposals seem to fundamentally misunderstand the use of links on the web. This is entirely speculative, but I have long wondered if the appearance of Open Graph tags has anything to do with confusion about what is part of Facebook and what is third-party material. Link previews have repeatedly been associated with bad framing and untrustworthy practices. I wonder if these thumbnails also blur the line between what is part of Facebook and what most people would consider an external link.
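For context on the mechanics: Open Graph tags are a handful of `<meta>` elements publishers add to a page’s markup so that platforms can render a shared link as a native-looking card, with a title, description, and thumbnail of the platform’s choosing. A rough sketch of how a platform might extract them — the sample page and the `build_preview` helper are hypothetical, using only Python’s standard library:

```python
# A minimal sketch of building a link preview from a page's Open Graph
# tags. The sample HTML and build_preview helper are hypothetical.
from html.parser import HTMLParser


class OpenGraphParser(HTMLParser):
    """Collects <meta property="og:..."> tags from an HTML document."""

    def __init__(self):
        super().__init__()
        self.tags = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        prop = attrs.get("property", "")
        if prop.startswith("og:") and "content" in attrs:
            self.tags[prop] = attrs["content"]


def build_preview(html):
    # A platform would fetch the linked page, parse its og: tags, and
    # render them as a card inside its own interface -- which is part
    # of why the boundary between the platform and the external site
    # becomes blurry for readers.
    parser = OpenGraphParser()
    parser.feed(html)
    return {
        "title": parser.tags.get("og:title"),
        "description": parser.tags.get("og:description"),
        "image": parser.tags.get("og:image"),
    }


sample = """
<html><head>
<meta property="og:title" content="Example Article">
<meta property="og:description" content="A summary chosen by the publisher.">
<meta property="og:image" content="https://example.com/thumb.jpg">
</head><body></body></html>
"""

preview = build_preview(sample)
print(preview["title"])
```

The point of the sketch is that everything a reader sees in the preview is supplied by the publisher but rendered by the platform, in the platform’s visual language.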