Geofence location warrants and reverse search warrants such as the ones McCoy dealt with are increasingly becoming the tool of choice for law enforcement. Google revealed for the first time in August that it received 11,554 geofence location warrants from law enforcement agencies in 2020, up from 8,396 in 2019 and 982 in 2018.
It’s a concerning trend, argue experts and advocates. They worry the increase signals the start of a new era, one in which law enforcement agencies find ever more creative ways to obtain user information from data-rich tech companies. And they fear agencies and jurisdictions will use this relatively unchecked mechanism in the context of new and controversial laws such as the criminalization of nearly all abortions in Texas.
If this topic sounds familiar to you, thank you for being a regular reader. I think it is critical to understand how law enforcement, which is generally prohibited from monitoring large groups of people indiscriminately, is able to work around those pesky restrictions by subpoenaing advertisers and data brokers. Byron Tau of the Wall Street Journal has covered this extensively, as have Joseph Cox of Vice and reporters at Buzzfeed News. In some cases, law enforcement is able to collect information without a warrant at all, as Tau revealed in an article earlier this week.
Where I think this article jumps the rails is in its attempt to tie Apple’s proposed CSAM detecting efforts to the above warrantless data collection methods:
For tech companies that count advertising among their revenue streams – or as a major source of revenue, as is the case for Google, there’s no real technical solution to curbing government requests for their data. “It would be technically impossible to have this data available to advertisers in a way that police couldn’t buy it, subpoena it or take it with a warrant,” Cahn said.
That’s why Apple’s now-postponed plan to launch a feature that scans for CSAM caused such a furor. When the FBI in 2019 asked Apple to unlock the phone of the suspect in a mass shooting in San Bernardino, California, Apple resisted the request arguing the company couldn’t comply without building a backdoor, which it refused to do. Once Apple begins scanning and indexing the photos of anyone who uses its devices or services, however, there’s little stopping law enforcement from issuing warrants or subpoenas for those images in investigations unrelated to CSAM.
While I understand the concern, this is simply not how the proposed feature would work.
For one thing, Apple is already able to respond to warrants with photos stored in iCloud. The CSAM detection proposal would not change that.
For another, photos are not really being scanned or indexed. Instead, a hash of each photo is compared against hashes of known CSAM images, and the photo is flagged with information about whether a match was found. This would only apply to photos stored in iCloud, so someone could disable the feature entirely by disabling iCloud Photo Library.
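To make the distinction concrete, here is a minimal sketch of the hash-matching model. It is not Apple’s implementation — the real system uses a perceptual hash called NeuralHash, plus threshold secret sharing, so an exact SHA-256 stands in here purely to illustrate that only fingerprints are compared, never image contents inspected or indexed:

```python
import hashlib

# Hypothetical database of known-image fingerprints (stand-ins here).
# Apple's real system uses a perceptual hash (NeuralHash), so visually
# similar images map to the same fingerprint; SHA-256 is used in this
# sketch only to show the matching model.
KNOWN_HASHES = {
    hashlib.sha256(b"known-image-bytes").hexdigest(),
}

def check_photo(image_bytes: bytes) -> bool:
    """Return True if the photo's fingerprint matches a known entry.

    The photo itself is never inspected or indexed; only its
    fingerprint is compared against the database.
    """
    fingerprint = hashlib.sha256(image_bytes).hexdigest()
    return fingerprint in KNOWN_HASHES
```

Under this model, a warrant for “all flagged photos” could only ever surface matches against the known database, which is why tying it to dragnet warrants is a stretch.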
Perhaps I am missing something key here, but Bhuiyan’s attempt to connect this feature with dragnet warrants seems tenuous at best. When law enforcement subpoenas Apple, they ask for information connected to specific Apple IDs or iCloud accounts. That is very different from the much scarier warrants issued based on the devices connected to a location, or the users that are connected with search queries.
Ad tech companies and data brokers have so much information about individual users that their databases can be used as a proxy for mass surveillance — that is a more pressing ongoing concern.
It seems like so long ago that the iPhone 6 launched and, with it, the “really very pragmatic optimization” of the camera bump, and even longer still since the original iPhone presentation where Steve Jobs barely acknowledged that it had a “two megapixel camera built in”. Now look at the camera. It is less of a bump and more of a boulder.
Apple’s accessory design guidelines have not been updated with these phones yet. But if the webpage rendering is anything to go by, the bump is now over 50% of the width of the back glass and over 25% of its height. This is not a complaint, per se, as I appreciate the technical achievements of building so much camera into so little space. But I have to wonder how much farther this can go. Will a not-too-distant iPhone model simply make the whole phone as thick as the camera bump, letting the cycle start anew?
Apple is a company of shifting patterns. For years, it has been content with a tick-tock cycle in iPhone hardware. In one year, the flagship model will have a new visual design language with modest under-hood improvements. The following year, they will be replaced at the top end by phones that have a similar — if not identical — industrial design, but with bigger improvements to the processor, cameras, and other hardware elements.
Rinse and repeat annually until you have one of the most successful businesses the world has ever seen.
But we have not seen an S-branded iPhone since the XS of 2018. Its successor models — the 11 and 11 Pro — seem to have set a template for the iPhones of today: a shared industrial design language with subtle differences between the two product lines — and also some changes that set them apart from last year’s models — with one line that is more consumer-oriented, and another that has better cameras and nicer materials.
The iPhone 13 and iPhone 13 Pro models introduced today seem like a continuation of that pattern.1 But as you read through the press release or launch coverage, one thing that seems apparent is how many of the changes are on the inside. Sure, they have slightly smaller notches and the cameras on the back of the 13 are orientated differently — for technical reasons, Apple says — but the improvements are otherwise entirely about what the hardware can do, not what it looks like.
In the past, a faster processor, a radically improved camera system, a new display, and some new colours would surely have encouraged an “iPhone 12S” moniker. But the S-branded models generally receive worse coverage purely because of their looks. Instead of being seen as new iPhones, their updates are treated as more modest — even though their technical improvements have often eclipsed comparable changes in non-S models.
I still find it hilarious how the wise tech commentariat of Twitter and the mainstream press alike yawn at S-branded iPhones despite their internal improvements. It reveals so much about the often ridiculous way we consume products. But that reaction is no good for Apple. It wants people to pay just as much attention to its iPhone events even when it is not creating an entirely new industrial design language. Just look at the Cinematic mode in these new models, which allows users to change video focus after capturing it. If it works as well as we saw today, that is a huge leap forward.
From a marketing standpoint, I think the S-branded models are gone for good. The question is whether we can now expect the numerical branding to continue incrementing for the foreseeable future. The iPhone naming scheme is uniquely cumbersome for an Apple product, but it is hard to see how the company could change it without messing up its pricing strategy. In the U.S., base iPhone models are priced from $399 all the way up to $1,099 — and that is before you change storage options. There are nearly no gaps in the base price of an iPhone; the biggest jump is $200, between the 13 and the 13 Pro.
As long as Apple wants to continue offering such a wide range of prices while including previous years’ models, I think it will stick with this naming scheme. Dropping the “S” naming convention simplifies the lineup further: an S suffix means nothing on its own, but it is implicit that a higher number means a better model. And, if it means less chance of people dismissing a new phone as a tweaked version of last year’s, that is even better for Apple.
I am glad the Mini is sticking around for another year, too. ↩︎
The Securities and Exchange Commission announced Tuesday that it’s charging App Annie, the mobile app data provider, with securities fraud, accusing the company of “engaging in deceptive practices” and misrepresenting the origins of its data. App Annie will pay a $10 million settlement, according to the announcement, although the company has not admitted to any of the SEC’s findings.
The ability for companies to settle charges like these without admitting fault is a fascinating piece of legal spin I would love to learn more about. I looked at all of the press releases issued by the SEC since July 1. About one-third of them contained some variation of the phrase “without admitting or denying the SEC’s findings” — including settlements over Kraft Heinz’s inflated income reporting, Pearson’s misrepresentation of a security breach, Ernst & Young’s interference in auditing, and UBS’ failure to control for risky investments. Allegedly.
We all have to use the word “allegedly” because none of the above companies — including App Annie — admitted guilt, nor were found guilty. They all get to pretend as though they have not broken the law. This settlement process may be less expensive than taking these cases to trial, but the result is that fraud and systemic abuse is treated as a business expense. And remember: these press releases are all from the last ten weeks.
Anyway, all of that is surely beyond the scope of this little website. I wanted to look at that App Annie settlement in more detail and got sidetracked. Here:
[…] The order finds that App Annie and Schmitt understood that companies would only share their confidential app performance data with App Annie if it promised not to disclose their data to third parties, and as a result App Annie and Schmitt assured companies that their data would be aggregated and anonymized before being used by a statistical model to generate estimates of app performance. Contrary to these representations, the order finds that from late 2014 through mid-2018, App Annie used non-aggregated and non-anonymized data to alter its model-generated estimates to make them more valuable to sell to trading firms.
A reminder that App Annie’s data collection practices, like other similar companies, are horrible and creepy.
John Voorhees at MacStories has posted copies of all of the videos from today’s “California Streaming” launch of the iPhone 13 lineup, Apple Watch Series 7, and new iPad and iPad Mini models. Apple did not launch new AirPods — which is a bit embarrassing for me — but at least I was not peddling rumours about a big Apple Watch redesign or satellite connectivity in the iPhone.
To find these videos, you may have to look a little harder than simply visiting YouTube and searching for “Apple”. Right now, the top-ranked video for that query is a live stream of a cryptocurrency scam. When I checked earlier, over fifteen thousand people were watching, and the channel broadcasting it somehow has over a million subscribers. As of writing, it is still live, and the Bitcoin and Ethereum addresses associated with the scam have received over $170,000 in just a few hours today.
Google has not responded to my questions about how easy it is to hijack an obviously popular brand term on YouTube with a commonplace scam like this one.
Update: A Google spokesperson confirmed the channel was terminated.
How was your weekend? Mine was pretty quiet. I made a peach crumble that I daresay turned out real nice, even though my baking skills are terrible — which is why I make crumbles instead of pies.
Epic Games’ lawyers, on the other hand, were hard at work. The company paid its court-ordered six million dollar penalty — which CEO Tim Sweeney announced with a low-resolution Apple Pay logo for some reason — and filed its expected appeal.
The appellate court will revisit how Judge Gonzalez Rogers defined the market where Epic Games had argued Apple was acting as a monopolist. Contrary to both parties’ wishes, Gonzalez Rogers defined it as the market for “digital mobile gaming transactions” specifically. Though an appeal may or may not see the court shifting its opinion in Epic Games’ favor, a new ruling could potentially help to clarify the vague language used in the injunction to describe how Apple must now accommodate developers who want to point their customers to other payment mechanisms.
After some catch-up reading today, I think my takeaway on Friday stands. This ruling was well-written and well-articulated; but, while the intention of the injunction was implied, its implications for Apple and developers are still unclear.
As a developer, I’d love to see more changes to Apple’s control over iOS. But it’s hard to make larger changes without potentially harming much of what makes iOS great for both users and developers.
Judge Gonzalez Rogers got it right: we needed a minor course correction to address the most egregiously anticompetitive behavior, but most of the way Apple runs iOS is best left to Apple.
I still think there are more things that regulators ought to be looking into when it comes to the expansive offerings of companies like Apple, Google, and Microsoft. But I think Arment makes a good case for the almost status quo.
(Update: I keep thinking about the likelihood of the sideloading doomsday scenarios that Arment writes about. This next part of the parenthetical will only make sense if you read his post: I could see Facebook creating its own app marketplace for iOS, but I am unclear why developers would need to submit their apps to multiple marketplaces, so long as Apple gets to keep its first-party App Store. An adjacent anxiety is the piecemeal way application marketplaces are being regulated. If Apple would like to retain some level of control over the way the iOS app model works around the world, I hope it sees what regulators are looking into and is able to work with them to assuage their concerns, because a Facebook app marketplace is a worrisome prospect indeed.)
I also appreciated Ben Thompson’s take summarizing some of the court’s definitions and legal justifications; here, quoting Judge Gonzalez Rogers’ decision:
If Apple could no longer require developers to use IAP for digital transactions, Apple’s competitive advantage on security issues, in the broad sense, would be undermined and ultimately could decrease consumer choice in terms of smartphone devices and hardware…to a lesser extent, the use of different payment solutions for each app may reduce the quality of the experience for some consumers by denying users the centralized option of managing a single account through IAP. This would harm both consumers and developers by weakening the quality of the App Store to those that value this centralized system.
That was a lot of legalese, but this is the takeaway: IAP is distinct intellectual property from developer tools broadly; it is the entire set of app management tools, not just a payment processor; and Apple has legitimate competitive justification to require IAP be used for in-app purchases.
Interesting days ahead for the App Store. This modest corrective action is, I think, a good step toward a store that improves users’ experiences while opening up new possibilities. I still hope Apple goes further of its own accord, taking the opportunity to simultaneously release regulatory pressure and ease the hostility felt by developers.
Apple today released iOS 14.8, iPadOS 14.8, WatchOS 7.6.2, and MacOS updates to patch two vulnerabilities exploited in the wild, including one by NSO Group. Bill Marczak, et al., of Citizen Lab:
Because the format of the files matched two types of crashes we had observed on another phone when it was hacked with Pegasus, we suspected that the “.gif” files might contain parts of what we are calling the FORCEDENTRY exploit chain.
Citizen Lab forwarded the artifacts to Apple on Tuesday, September 7. On Monday, September 13, Apple confirmed that the files included a zero-day exploit against iOS and MacOS. They designated the FORCEDENTRY exploit CVE-2021-30860, and describe it as “processing a maliciously crafted PDF may lead to arbitrary code execution.”
The exploit works by exploiting an integer overflow vulnerability in Apple’s image rendering library (CoreGraphics). We are publishing limited technical information about CVE-2021-30860 at this time.
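Citizen Lab is withholding the technical details, but the bug class it names is well understood. Here is a minimal, illustrative sketch of an integer overflow in a size calculation — all names are hypothetical, and this is emphatically not the actual CoreGraphics code:

```python
# Emulate 32-bit unsigned arithmetic, as C code would perform it.
UINT32_MAX = 2**32 - 1

def allocation_size(width: int, height: int, bytes_per_pixel: int) -> int:
    """Hypothetical buffer-size calculation in an image parser."""
    return (width * height * bytes_per_pixel) & UINT32_MAX

# An attacker-controlled image header claims enormous dimensions...
size = allocation_size(width=0x4000_0000, height=4, bytes_per_pixel=4)

# ...and the multiplication wraps around to zero, so the parser would
# allocate a zero-byte buffer and then write image data past its end.
assert size == 0
```

A wrapped size like this is the classic setup for a heap overflow, which is how “processing a maliciously crafted PDF” can become arbitrary code execution.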
NSO Group’s spyware is almost always deployed in a highly targeted way but, now that some knowledge about this vulnerability is public, it is only a matter of time before it is exploited more broadly. Update your software today.
Mark Zuckerberg has publicly said Facebook Inc. allows its more than three billion users to speak on equal footing with the elites of politics, culture and journalism, and that its standards of behavior apply to everyone, no matter their status or fame.
In private, the company has built a system that has exempted high-profile users from some or all of its rules, according to company documents reviewed by The Wall Street Journal.
The program, known as “cross check” or “XCheck,” was initially intended as a quality-control measure for actions taken against high-profile accounts, including celebrities, politicians and journalists. Today, it shields millions of VIP users from the company’s normal enforcement process, the documents show. Some users are “whitelisted” — rendered immune from enforcement actions — while others are allowed to post rule-violating material pending Facebook employee reviews that often never come.
I do not think it is surprising that moderation of high-profile accounts is treated differently than that of average users, nor do I necessarily think it is wrong. Social media is all grown up, with celebrities and organizations treating it as an official broadcast system. The U.S. Securities and Exchange Commission treats Facebook posts as adequate investor disclosure.
But what Facebook has built, according to Horwitz, is not a system to protect the integrity and security of Facebook users with a large audience. It is an over-broad attempt to ward off what employees call “PR fires” — a side effect of which is that the users with the biggest megaphones are given another channel by which to spread whatever information they choose with little consequence.
Also, nearly six million users are enrolled in this thing?
The documents that describe XCheck are part of an extensive array of internal Facebook communications reviewed by The Wall Street Journal. They show that Facebook knows, in acute detail, that its platforms are riddled with flaws that cause harm, often in ways only the company fully understands.
Moreover, the documents show, Facebook often lacks the will or the ability to address them.
This is the first in a series of articles based on those documents and on interviews with dozens of current and former employees.
I recently finished “An Ugly Truth”. If you have been paying attention to reporting on Facebook recently, you likely will not be surprised by its contents, but it is worthwhile to have so much encapsulated in a single work.
“An Ugly Truth” is a deliberate summary of about the last five years of Facebook’s internal practices and external controversies. In a way, that is fair: some of the most consequential actions in the company’s history were made from the 2016 U.S. presidential election onward. But many of the problems raised by the book have their roots in decisions made years prior, when mainstream publications — like the one its authors work at — were more comfortable extolling the assumed virtues of connecting as many people on a single discussion platform.
The outcome of that election caused many publications to question those assumptions, as acknowledged by the book’s authors, and I think it tainted some of the investigations critical of Facebook as merely being “anti-Trump”. As much as Trump singlehandedly tested the limits of platform moderation, that should not be the case. Privacy advocates were raising similar concerns about Facebook for years before that election and, when mainstream outlets got more involved, they were able to use more resources to dig deeper.
Aside from the new information it may uncover, this Journal series may also be able to present its findings in a way that seems less politically charged. I welcome that.
Remember how, back in March, all links to Shortcuts just stopped working? I had a lot of guesses about why that was — an internal software update went poorly, perhaps? Or maybe a single server’s problems cascaded across an entire data centre? The truth is, as always, far more wild than you might expect.
Quite early on I noticed that a lot of Apple’s own apps used a technology called CloudKit and you could say it is Apple’s equivalent to Google’s Firebase. It has a database storage that is possible to authenticate to and directly fetch and save records from the client itself.
It was quite complex to understand all different authentication flows, and security roles, and this made me curious. Could it be that this was not only complex for me to understand, but also for teams using it internally at Apple? I started investigating where it was being used and for what.
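For a sense of what “directly fetch and save records from the client itself” looks like, here is a rough sketch against Apple’s CloudKit Web Services REST API. The container name and record type are hypothetical, and a real request also needs a signed server-to-server key or a web auth token — the point is only how thin the layer between a client and the database can be:

```python
import json
import urllib.request

# Hypothetical container identifier; real ones look similar.
CONTAINER = "iCloud.com.example.myapp"
URL = (
    "https://api.apple-cloudkit.com/database/1/"
    f"{CONTAINER}/production/public/records/query"
)

def query_public_records(record_type: str) -> dict:
    """Query a record type in the container's public database.

    Omits authentication for brevity; an unauthenticated request
    will be rejected unless the database is misconfigured -- which
    is precisely the kind of misconfiguration Rosén went looking for.
    """
    body = json.dumps({"query": {"recordType": record_type}}).encode()
    request = urllib.request.Request(
        URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)
```

Security roles decide who may read, write, and — crucially for this story — delete records in zones like the public `_defaultZone`.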
The climax of this post is a screenshot of an email Rosén sent to Apple’s security team with the subject line “Urgent – CloudKit issue, access misconfiguration with com.apple.shortcuts, accidentally deleted whole public _defaultZone and now gallery and all shared shortcuts for all users are gone”. I guess the answer to the earlier question is “yes”.
At this point, antitrust intervention in Europe, the U.S., or both is almost certain. By refusing to engage with the legitimate concerns of policymakers, Apple is risking its core security and privacy brand to protect business practices that are not essential to its future.
It is a strategic error for Apple’s lobbyists and surrogates in Washington to argue against every new antitrust law targeting the tech industry. Apple has made itself a target by being incredibly successful and by adopting communications strategies that mimic tech giants whose anticompetitive behavior is substantially more damaging. Apple is almost certain to lose something, but there is still room to protect your most valuable assets. There may also be an opportunity to gain competitive advantage. Google’s Android operating system has roughly 85% global share in smartphones and smart devices, so robust antitrust intervention against Google may give Apple an opportunity to gain market share in its most important business.
This was published yesterday; even though the judgement in Epic Games v. Apple was handed down today, I think it holds up well.
If there is some ambiguity as to what rules the permanent injunction permits Apple to create around in-app purchases, my hope is that the company uses this as an opportunity to ease off a little. I am not saying that I expect this to happen — today’s judgement indicates that Apple has little reason to stop pursuing its existing App Store strategy, with only the aforementioned exception. But a world in which Apple is not in an antagonistic role with developers is a better one for everyone, assuming that Apple can maintain or improve upon iOS’ privacy and security reputation. These fights are just noise.
A couple of weeks before WWDC this year, arguments wrapped in Epic Games v. Apple. Judge Yvonne Gonzalez Rogers took the summer to sort through the mountain of testimony, emails, and contracts and now, just a few days before Apple is set to launch new models of iPhone, Apple Watch, and AirPods, the judgement has been handed down.
You know what the weirdest thing about it is? The nearly two-hundred-page order is very readable and well-written, but the injunction ordering Apple to scrap the last sentence of the first bullet in App Store rule 3.1.1 leaves plenty of ambiguity over what developers can do and what Apple must allow. This will undoubtedly be clarified with time, but it is the only part of the result that creates more questions than it answers. Apple is apparently interpreting it as requiring the company to, in effect, apply its settlement with the Japan Fair Trade Commission to all apps, not just Apple’s “reader” app category. That means the anti-steering App Store policies will be removed within three months. But it may not mean that Apple must permit alternative in-app purchase options.
It is strange to see many stories framing this result as a win for Epic Games, too. It is undoubtedly big news that Apple’s anti-steering rules are going away, but that seems like a moderate sacrifice for the company to retain the vast majority of its App Store model — a real cut-off-the-nose-to-spite-the-face result. Apple is calling it an affirmation of the App Store’s success.
As for Epic’s other claims, Gonzalez Rogers said the company “overreached” and couldn’t prove that Apple was a monopolist. That doesn’t necessarily mean that Apple isn’t a monopoly, nor that another plaintiff couldn’t make a better argument that it is. Gonzalez Rogers added: “The trial record was not as fulsome with respect to antitrust conduct in the relevant market as it could have been.” The 30 percent commission Apple takes on most subscriptions and in-app purchases, she said, “appears inflated” and was “potentially anticompetitive.” But, since Epic wasn’t challenging the amount of the commission (only the fact that there was one), she wasn’t able to rule on it.
I will repeat what I wrote in May: Epic was a bad plaintiff. It did what plaintiffs do: go for the biggest plausible case and hope to settle somewhere in the middle. But Epic did not gamble well, and is unsatisfied with this ruling — understandably, as it now owes Apple several million dollars. I understand there are many developers who were hoping for an outcome more favourable to them, but a better case needs to be made.
The judge’s order shows the limitations in how competition law is currently interpreted by the courts. Apple may be operating almost entirely within those laws, but lawmakers seem increasingly keen to reduce the power of companies like Apple and Google. Expect more on this front, and not just because Epic will appeal this ruling.
Earlier this week, ProPublica caught some flak for an article it published about WhatsApp’s message flagging processes. In summary, ProPublica argued that WhatsApp’s marketing promises about end-to-end encryption were misleading because messages are forwarded to contract moderators when users report a chat. That obviously does not require encryption to be broken or undermine the promises of it being “end-to-end”, but the muddy messaging travelled.
I think ProPublica is accurate in calling this a clarification and not a retraction. Most of its original story remains intact, and the little that did change only emphasizes that the moderators only see and review messages that are reported. That detail was present in the original, but it was buried in a longer paragraph.
That is one of the problems with the story as a whole, in fact: it is, in the words of Ted Han, “trying to do too much”. Almost none of the story is about the encrypted contents of messages; instead, it is about their unencrypted metadata:
WhatsApp metadata was pivotal in the arrest and conviction of Natalie “May” Edwards, a former Treasury Department official with the Financial Crimes Enforcement Network, for leaking confidential banking reports about suspicious transactions to BuzzFeed News. The FBI’s criminal complaint detailed hundreds of messages between Edwards and a BuzzFeed reporter using an “encrypted application,” which interviews and court records confirmed was WhatsApp. “On or about August 1, 2018, within approximately six hours of the Edwards pen becoming operative — and the day after the July 2018 Buzzfeed article was published — the Edwards cellphone exchanged approximately 70 messages via the encrypted application with the Reporter-1 cellphone during an approximately 20-minute time span between 12:33 a.m. and 12:54 a.m.,” FBI Special Agent Emily Eckstut wrote in her October 2018 complaint. Edwards and the reporter used WhatsApp because Edwards believed the platform to be secure, according to a person familiar with the matter.
The strange thing is that there has long been a glaring privacy loophole in WhatsApp’s systems that these reporters could have touched on: chat backups are not encrypted. While an investigator with a search warrant may not be able to see the contents of WhatsApp messages from Facebook, they can absolutely gain access through Apple or Google. But that is changing soon with some news Facebook announced today.
In the “coming weeks,” users on WhatsApp will see an option to generate a 64-digit encryption key to lock their chat backups in the cloud. Users can store the encryption key offline or in a password manager of their choice, or they can create a password that backs up their encryption key in a cloud-based “backup key vault” that WhatsApp has developed. The cloud-stored encryption key can’t be used without the user’s password, which isn’t known by WhatsApp.
A reminder that iMessages may be end-to-end encrypted, but iCloud Backups contain the key to decrypt stored messages. A good rule of thumb remains that cloud storage should not be treated the same way you treat a local hard drive. If you have reason to be concerned that your cloud backups might be compromised — this does not have to be for illegal or nefarious reasons — use local backups only.
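The layering WhatsApp describes — a random backup key, wrapped by a key derived from the user’s password — can be sketched simply. The real system does the wrapping with authenticated encryption inside a hardware security module; PBKDF2 plus XOR stands in here purely to show the structure, and every name is illustrative:

```python
import hashlib
import secrets

def derive_wrapping_key(password: str, salt: bytes) -> bytes:
    # Slow, salted derivation so the vault cannot cheaply
    # brute-force user passwords.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

def wrap(key: bytes, wrapping_key: bytes) -> bytes:
    # Stand-in for proper authenticated key wrapping (e.g. AES-KW);
    # XOR is used only to illustrate the reversible layering.
    return bytes(a ^ b for a, b in zip(key, wrapping_key))

salt = secrets.token_bytes(16)
backup_key = secrets.token_bytes(32)  # the "64-digit" (64 hex chars) key

wrapped = wrap(backup_key, derive_wrapping_key("correct horse", salt))

# The vault stores only `wrapped` and `salt`. Without the password,
# neither WhatsApp nor the vault can recover the backup key.
recovered = wrap(wrapped, derive_wrapping_key("correct horse", salt))
assert recovered == backup_key
```

The design choice worth noticing is that the password never leaves the device; only the wrapped key does, which is what makes the cloud-stored copy useless to a subpoena on its own.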
Today we’re excited to launch Ray-Ban Stories: Smart glasses that give you an authentic way to capture photos and video, share your adventures, and listen to music or take phone calls — so you can stay present with friends, family, and the world around you. Starting at $299 USD and available in 20 style combinations, the smart glasses are available for purchase online and in select retail stores in the US as well as Australia, Canada, Ireland, Italy, and the UK.
To make it clear to bystanders that you’re taking a video with your camera glasses, there’s a small white LED light in the frame corner that lights up whenever the camera is on. However, the tiny light is far less obvious than Snapchat’s version, which had a larger swirling light ring while filming.
Although you can’t turn off the light on the glasses or through the app, I was able to do this the old fashion way: I put a tiny piece of masking tape over the LED light and colored the tape black with a Sharpie. It covered it up perfectly.
Alex Himel, VP of AR at Facebook Reality Labs, informed me over a Zoom chat that taping over the LED light was a violation of the terms of service of the glasses, which prohibit tampering with the device. Be warned.
I love the idea that the terms of service are a law or some kind of incantation that Facebook can recite to prevent people from doing obviously creepy things with these glasses.
Notopoulos reports that Facebook added the LED That Must Not Be Covered on the advice of privacy advocates. Apparently, this was not a thought that had independently occurred to those developing the product. Facebook is not a company that values privacy, and its internal culture reflects that.
Facebook launched a dedicated site that more-or-less acknowledges these risks by pleading with users to “wear [their] smart glasses responsibly” and turn them off in locker rooms and doctor’s offices. Maybe there is a certain amount of personal responsibility here, but maybe there is some corporate responsibility as well. For all of the benefits these kinds of glasses may create, they also make the world creepier for anyone who is not using them. Just because a camera can now fit into the frame of a pair of Wayfarers, that does not mean it should. I know that you can buy spy glasses, but there is a big difference when a corporate giant markets them as a headphone-like everyday gadget. This recontextualizes them in a way that denudes their invasive properties, and transforms them from an illicit-like purchase into something more socially acceptable.
Many viewers know that free streaming apps are most likely selling their personal information, but they may not realize that most paid subscription streaming apps are also selling users’ data. Even more expensive streaming plans with “no ads” or “limited ads” still collect viewing data from use of the app to track and serve users advertisements on other apps and services across the internet. Also, data brokers buy and sell users’ data and share it with other companies for data recombination purposes.
Our privacy evaluations of the top 10 streaming apps indicate that all streaming apps (except Apple TV+) have privacy practices that put consumers’ privacy at considerable risk including selling data, sending third‐party marketing communications, displaying targeted advertisements, tracking users across other sites and services, and creating advertising profiles for data brokers.
Some of the failures were downright ugly, like making no real exceptions for the data collection of children. Many of the issues revealed weren’t the end of the world, but they make it repeatedly clear that companies aren’t being transparent about what is collected, and often enjoy making opting out of data collection and monetization as cumbersome and annoying as possible.
Also remember that smart televisions are among the worst offenders when it comes to user privacy. Even if you use an Apple TV box and watch shows through Apple TV Plus, your television may still be automatically recognizing everything you watch.
This is U.S. specific, but it is a similar — though slightly less alarming — story here in Canada. A recent survey from Angus Reid and an online-first study published in the Lancet show a much lower willingness to vaccinate in Canada’s most conservative provinces. You can see the results in national coverage maps: in British Columbia, Manitoba, Ontario, and Quebec, three-quarters of all eligible people are fully vaccinated; in Alberta and Saskatchewan, the rate is under seventy percent, with predictable and often tragic results. There are many reasons for anti-vaccination beliefs, but let us not pretend that the overlap with certain political beliefs is coincidental.
Today, I rushed to finish Alec MacGillis’ Fulfillment. Partly, that is because it is a riveting series of vignettes ostensibly about the distorting effects of Amazon in America; partly, that is because the library needs this copy back to lend to someone else.
I cannot recommend this book highly enough. Do your best to set aside any thoughts you may have about antitrust and the kinds of big theoretical questions that a massive company like Amazon engenders. Try to read these stories as presented: many, many people who have found their lives turned upside down by the extraordinary influence of Amazon working in concert with lawmakers at all levels, for the economic advancement of the few. It is devastating.
I bet it is available at your local bookstore or library. But, if you cannot find it there and you enjoy living a life of irony, it is also available on Amazon.
Ford Motor said Tuesday it hired former Tesla and Apple executive Doug Field to lead its emerging technology efforts, a key focus for the automaker under its new Ford+ turnaround plan.
Field — who led development of Tesla’s Model 3 — most recently served as vice president of special projects at Apple, which reportedly included the tech giant’s Titan car project.
Last we heard — “we” being those of us who are not disclosed Apple employees — Field was reporting to John Giannandrea, who took on Project Titan after Bob Mansfield’s retirement.
Field said he decided to join Ford after speaking with company executives and realizing there’s a “deep desire” to remake the automotive industry, specifically with connected vehicles.
Connected vehicles are a key part of Ford’s new turnaround plan that’s designed to reposition the automaker to generate more recurring revenue through software services.
Hey, remember when you could just buy something? And you could just, like, own it, in perpetuity, without making monthly payments? It sounds like science fiction, but that is how the world used to work — really!
Seven years post-launch, new PYMNTS data shows that 93.9% of consumers with Apple Pay activated on their iPhones do not use it in-store to pay for purchases.
That means only 6.1% do.
That finding is based on PYMNTS’ national study of 3,671 U.S. consumers conducted between Aug. 3-10, 2021.
After seven years, Apple Pay’s adoption and usage isn’t much larger than it was in 2015 (5.1%), a year after its launch, and is the same as it was in 2019, the last full year before the pandemic.
If you had asked me, before I read this article, how many iPhone users I thought make payments in stores using Apple Pay, I am not sure what I would have guessed — but I think it would have been more than six percent. PYMNTS’ own stats from last year indicate that about eight percent use Apple Pay in-store. Either way, it seems remarkably low, especially for U.S. consumers. But there are some interesting takeaways from this survey, especially if you pair it with an analysis last February showing that around five percent of all card transactions worldwide were being made through Apple Pay.
This survey shows an approximately flat use rate from 2019 through 2021, down slightly from 2018. Webster writes that the pandemic ought to have “changed the trajectory of Apple Pay” as “contactless and touchless have become the consumer’s checkout mantra”. But anyone with a Face ID-equipped iPhone can tell you that wearing a mask requires you to authenticate by using your passcode, so it has been far easier for the past eighteen months to simply tap a card. That is probably true generally, as well; Apple Pay may have better privacy and security, but it is no easier to use than a card that supports tap to pay, even without the added complication of pandemic precautions.
If U.S. consumers are using Apple Pay infrequently, how does that square with the study from last year showing huge numbers of card transactions flowing through the service? Well, the PYMNTS survey does not cover the use of Apple Pay on websites or in apps, and I bet the latter represents an overwhelming volume. I would love to see a similar survey for online purchases.
The last mystery for me in the PYMNTS survey was the discrepancy between the number of users who have set up Apple Pay compared to the number who are actually using it. That can be explained by the iPhone’s setup process, which prompts users to add a credit card to Apple Pay. Given how much emphasis the screen’s design puts on setting up Apple Pay and how much iOS bugs you later if you do not add a card, I would not be surprised if many people set it up just to shut it up.
What a silly name. It is like they hate vowels or something, he wrote at pxlnv.com. ↩︎
Previously we announced plans for features intended to help protect children from predators who use communication tools to recruit and exploit them and to help limit the spread of Child Sexual Abuse Material. Based on feedback from customers, advocacy groups, researchers, and others, we have decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features.
It’s unclear at this point what specific changes Apple could make to satisfy its critics. Green and Pfefferkorn both suggest that the company could limit its scanning to shared iCloud albums rather than involving its customers’ devices. And Stamos says the NeuralHash issues reinforce the importance of incorporating the research community more fully from the start, especially for an untested technology.
Others remain steadfast that the company should make its pause permanent. “Apple’s plan to conduct on-device scanning of photos and messages is the most dangerous proposal from any tech company in modern history,” says Evan Greer, deputy director of digital rights nonprofit Fight for the Future. “It’s encouraging that the backlash has forced Apple to delay this reckless and dangerous surveillance plan, but the reality is that there is no safe way to do what they are proposing. They need to abandon this plan entirely.”
I doubt that this thing will be entirely scrapped, especially if — as Green hopes — Apple is on a path toward end-to-end encryption for iCloud storage. If you think Apple lacks the backbone to resist political pressure for expanding the CSAM matching database, you definitely cannot hope for wholly encrypted iCloud storage without any way of detecting abuse.
I am curious about the company’s next steps, though. This has been a contentious proposal — one that I have covered extensively and, as a result, found myself going from concerned to cautiously optimistic. I still think Apple bungled this announcement; its Child Safety page still reads as though these are finished products that will ship in this form, with the exception of the notice added today. This was a big public push that even media-trained executives struggled to explain in a clear way, and it relied too much on trust in Apple at a time when tech companies are facing increased public skepticism. I look forward to a solution that can alleviate many researchers’ concerns, but I suspect — as with the App Store — trust has been burned. Only Apple can rebuild it.
Using Keynote.app for the first time since macOS Big Sur. Boy, the shift to smaller all monochrome icons is a big usability loss.
Guess what, these cones in your eye are great things, you should use them.
I think about this every time I open the iWork apps in Big Sur. The icons in the toolbars are smaller, they are monochromatic, the button click target is barely bigger than the icon, and the click target only shows on hover after a delay. It is like this in every iWork app.
The change from the glossy and dimensional era of the Keynote toolbar to the post-Yosemite flatter appearance was a question of taste. This transformation is a usability regression.