Search Results for: Clearview

Bryan Carney, the Tyee, March 2019:

The RCMP has been quietly running an operation monitoring individuals’ Facebook, Twitter, Instagram and other social media activity for at least two years, The Tyee has learned.

[…]

“There is a position taken that this is public information and does not constitute private information, and that is an inaccurate assessment of the way that Canadian law assess public and private in this country as far as I’m concerned,” he [Chris Parsons of Citizen Lab] said.

Carney, of the Tyee, in a November 2020 followup article:

A 3,000-page batch of internal communications from the RCMP obtained by The Tyee provides a window into how the force builds its capabilities to spy on internet users and works to hide its methods from the public.

[…]

Back on Dec. 28, 2016, the RCMP ordered “optional goods” — extra software and features — in a Babel X contract found in the documents, but the list was blanked out. No contract or procurement documents naming Babel X appeared on Public Services and Procurement Canada websites until 2020.

Last year, the U.S. Office of the Director of National Intelligence published a report acknowledging it collects vastly more information than it needs for immediate investigative purposes.

Philippe Dufresne, Privacy Commissioner of Canada, in the introduction to a similarly scathing report about the RCMP’s Project Wide Awake program, published Thursday:

These issues are at the heart of the Office of the Privacy Commissioner of Canada’s (OPC) investigation into the Royal Canadian Mounted Police’s (RCMP) Project Wide Awake initiative.

The initiative uses privacy impactful third-party services to collect personal information from a range of sources, including social media, forums, the dark web, location-based services and fee-for-access private databases. The data is used for a variety of policing purposes, including investigating suspected unlawful activity, locating missing persons, identifying suspects, detecting threats at public events attended by high-profile individuals, and maintaining situational awareness during an active situation.

The OPC’s investigation identified concerns related to both accountability and transparency, namely that the RCMP did not take the necessary steps to ensure that the personal information collection practices of all of its service providers were compliant with Canadian privacy law.

The Commissioner found possible violations of privacy law, particularly with the use of Babel X, and says the office made three specific recommendations, “none of which were accepted by the RCMP”. Alas, this office has little recourse; Facebook and Clearview could simply ignore the results of similar investigations.

Eyal Press, the New Yorker:

In June, an appellate court ordered the N.Y.P.D. to turn over detailed information about a facial-recognition search that had led a Queens resident named Francisco Arteaga to be charged with robbing a store. The court requested both the source code of the software used and information about its algorithm. Because the technology was “novel and untested,” the court held, denying defendants access to such information risked violating the Brady rule, which requires prosecutors to disclose all potentially exculpatory evidence to suspects facing criminal charges. Among the things a defendant might want to know is whether the photograph that had been used in a search leading to his arrest had been digitally altered. DataWorks Plus notes on its Web site that probe images fed into its software “can be edited using pose correction, light normalization, rotation, cropping.” Some systems enable the police to combine two photographs; others include 3-D-imaging tools for reconstructing features that are difficult to make out.

This example is exactly why artificial intelligence needs regulation. Many paragraphs in this piece contain alarming details about overconfidence in facial recognition systems, but proponents of letting things play out under current legislation might chalk that up to human fallibility. Yes, software might present a too-rosy impression of its capabilities, one might argue, but it is ultimately the operator’s responsibility to cross-check things before executing an arrest and putting an innocent person in jail. After all, there are similar problems with lots of forensic tools.

Setting aside how much incentive there is for makers of facial recognition software to be overconfident in their products, and how much leeway law enforcement seems to give them — agencies kept signing contracts with Clearview, for example, even after stories of false identifications and arrests based on its technology — one could at least believe searches use photographs. But that is not always the case. DataWorks Plus markets tools which allow searches using synthesized faces based on real images, as Press reports — but you will not find that on its website. When I went looking, DataWorks Plus seemed to have pulled the page where it appeared; happily, the Internet Archive captured it. You can see in its examples how it fills in the entire right-hand side of someone’s face in a “pose correction” feature.

It is plausible to defend this as just a starting point for an investigation, and a way to generate leads. If it does not pan out, no harm, right? But it does seem to this layperson like a computer making its best guess about someone’s facial features is not an ethical way of building a case. This is especially true when we do not know how systems like these work, and it does not inspire confidence that there are no specific standards with which “A.I.” tools must comply.

It should have been a surprise to exactly nobody that Threads, Meta’s Twitter-alike, was going to seem hungrier for personal data than Bluesky or Mastodon.

That is how Meta makes its money: its products surveil as much of your behaviour as possible, and then it lets others buy targeted advertising. That differs from the other two services I mentioned. Bluesky says its business model “must be fundamentally different” because anyone will be able to spin up their own server; it has raised venture capital and is selling custom domain names. Mastodon is similarly free; its development is supported by various sponsors and it has a Patreon account pulling in nearly half a million dollars annually, while most individual servers are donationware.

Meta is not currently running advertising on Threads, but one day it will. Even so, its listings in Apple’s App Store and Google’s Play Store suggest a wide range of privacy infractions, and this has not gone unnoticed. Reece Rogers, of Wired, compared the privacy labels of major Twitter-like applications and services on iOS, and Tristan Louis did the same with a bunch of social apps on Android. Michael Kan, of PC Magazine, noticed before Threads launched that its privacy label indicated it collected all the same data as Twitter plus “Health & Fitness, Financial Info, Sensitive Info, and ‘Other Data’”. That seems quite thorough.

Just as quickly as these critical articles were published came those who rationalized and even defended the privacy violations implied by these labels. Dare Obasanjo — who, it should be noted, previously worked for Meta — said it was “a list of features used by the app framed in the scariest way possible”. Colin Devroe explained Threads had to indicate such a vibrant data collection scheme because Threads accounts are linked to Instagram accounts. My interpretation is that because you can, for example, shop within Instagram, it is possible for billing information to be linked to a profile.

That there is any coverage whatsoever of the specific privacy impacts of these applications is a testament to the direct language used in these labels. Even though Meta’s privacy policy and the supplemental policy for Threads have been written with comprehension in mind, they are nowhere near as readable as these simplified privacy labels.

Whether those labels are being accurately comprehended, though, is a different story, as indicated in the above examples. The number of apparent privacy intrusions in which Threads engages is alarming at first glance. But many of the categories, at least on iOS, require that users grant permission first, including health information, photos, contacts, and location. Furthermore, it is not clear to me how these data types are used. I only see a couple of passing references to the word “health” in Meta’s privacy policy, for example, and nothing says it communicates at all with the Health database in iOS. Notably, not only does Threads not ask for access to Health information, it also does not request access to any type of protected data — there is no way to look for contacts on Threads, for example — nor does it ask to track when launched. In covering all its bases, Meta has created a privacy label which suggests it is tracking possibly everything, but perhaps not, and there is no way to tell how close that is to reality nor exactly what is being done with that information.

This is in part because privacy labeling is determined by developers, and the consequences for misrepresentation are at the discretion of Apple and Google. Ironically, because all of the players involved are giant businesses, it seems to me like there are limits on the degree to which these privacy labels can be policed. If Apple or Google were to de-list or block Meta’s apps, you know some lawyers would be talking about possibly anti-competitive motives.

That is not to say privacy labels are useless. A notable difference between the privacy label for Threads and some other apps is not found in the list of categories of information collected. Instead, it is in the title of that list: “Data Linked to You”. It should not be worrisome for a developer to collect diagnostic information, for example, but is it necessary to associate it with a specific individual? Sure enough, while some apps — like Tapbots’ Ivory and the first-party Mastodon client — say they collect nothing, others, like Bluesky and Tooot — a Mastodon client — acknowledge collecting diagnostic information, but say they do not associate it with individual users. Apple is also pushing for greater transparency by requiring that developers disclose third-party SDKs which collect user data.

All of this, however, continues to place the onus of responsibility on individual users. Somehow, we must individually assess the privacy practices of the apps we use and the SDKs they use. We must be able to forecast how our granting of specific types of data access today will be abused tomorrow, and choose to avoid all features and apps which stray too far. Perhaps the most honest disclosure statements are the much-hated cookie consent screens which — at their best — give users the option of agreeing to each possible third-party disclosure, or agreeing or disagreeing in bulk. While they provide an aggressive freedom of choice, they are overwhelming and easily ignored.

A better option may be found in not giving users a choice.

The rate at which tech products have changed and evolved has made it impossible to foresee how today’s normal use becomes tomorrow’s privacy exploitation. Vacation photos and selfies posted ten or twenty years ago are now part of at least one massive facial recognition database. Doorbell cameras become a tool for vigilante justice. Using the web and everyday devices normally subjects everyone to unchecked surveillance, traces of which persist for years. The defaults are all configured against personal privacy, and it is up to individuals to find ways of opting out of this system where they can. Besides, blaming users for not fully comprehending all possible consequences of their actions is the weakest rebuttal to reasonable consumer protections.

Privacy labeling, which appeared first in the App Store before it was added to the Play Store, was inspired by food nutrition labels. I am happy to extend that metaphor. At the bottom of many restaurant menus is printed a statement which reads something like “eating raw or lightly cooked foods of animal origin may increase your risk of food poisoning”. There are good reasons (PDF) to be notified of that risk and make judgements based on your personal tolerance. But nobody expects the secret ingredient added by a restaurant to their hollandaise to be salmonella. This reasonable disclosure statement does not excuse kitchen staff from taking reasonable precautions to avoid poisoning patrons.

We can only guess at some pretty scary ways these everyday exploitations of our private data may be used, but we do not have to. We have plenty of evidence already that we need more protections against today’s giant corporations and tomorrow’s startups. It should not be necessary to compare ambiguous labels against company privacy policies and imagine what they could do with all that information just to have a text-based social media account. Frivolity should not be so poisoned.

From a press release issued by the Office of the Privacy Commissioner of Canada:

The privacy protection authorities for Canada, Québec, British Columbia and Alberta announced today that they will jointly investigate the short-form video streaming application TikTok.

[…]

The four privacy regulators will examine whether the organization’s practices are in compliance with Canadian privacy legislation and in particular, whether valid and meaningful consent is being obtained for the collection, use and disclosure of personal information. The investigation will also determine if the company is meeting its transparency obligations, particularly when collecting personal information from its users.

This comes after multiple European authorities have investigated TikTok or are in the process of doing so; the company has been fined for its practices in France and the Netherlands. It will be interesting to see what Canadian regulators can dig up.

A quirk of the OPC is how it can make recommendations but has no authority to prosecute. After a similar investigation into Clearview’s facial recognition systems, it concluded the company conducted “mass surveillance of Canadians”, but could not issue fines or order Clearview to make changes. The company’s response was predictably weak: it created a manual opt-out mechanism and pulled out of the Canadian market. But Clearview is still conducting mass surveillance on all Canadians who have not requested removal.

Similarly, while the OPC may find embarrassing and dangerous things TikTok could be doing in Canada, ByteDance can simply deny any wrongdoing and carry on — unless the OPC pursues the matter in court.

Gilad Edelman, Wired:

Now comes an even bigger surprise: A new version of the ADPPA has taken shape, and privacy advocates are mostly jazzed about it. It just might have enough bipartisan support to become law — meaning that, after decades of inaction, the United States could soon have a real federal privacy statute.

Perhaps the most distinctive feature of the new bill is that it focuses on what’s known as data minimization. Generally, companies would only be allowed to collect and make use of user data if it’s necessary for one of 17 permitted purposes spelled out in the bill — things like authenticating users, preventing fraud, and completing transactions. Everything else is simply prohibited. Contrast this with the type of online privacy regime most people are familiar with, which is all based on consent: an endless stream of annoying privacy pop-ups that most people click “yes” on because it’s easier than going to the trouble of turning off cookies. That’s pretty much how the European Union’s privacy law, the GDPR, has played out.

If this law is as described and passes more-or-less intact, it could fundamentally reshape the economy of the web and be a model for the rest of the world.

The Electronic Frontier Foundation is “disappointed”:

We have three initial objections to the version that the committee passed this week. Before a floor vote, we urge the House to fix the bill and use this historic opportunity to strengthen — not diminish — the country’s privacy landscape now and for years to come.

The Foundation is concerned about rollbacks of FCC authority, a weak private right of action, and the preemption of state laws by this national law. The latter is a particularly fraught matter: a federal regulation simplifies compliance, reduces reliance on weak state-level laws lobbied for by tech companies, and improves international competitiveness, but it could mean privacy rollbacks for those in states with more stringent laws. The Foundation points to a few examples, undermining Edelman’s claim that “it goes further than any of the state laws it would preempt — even California’s”.

Look out for reactions to this bill from technology company front groups like the Competitiveness Coalition and American Edge. Both have been focused on the American Innovation and Choice Online Act — perhaps an indication of tech companies’ priorities — but keep an eye out. The Interactive Advertising Bureau unsurprisingly opposes the law, saying it would “impose heavier regulations than any state currently does” — a demonstrably untrue claim.

Muyi Xiao, Paul Mozur, Isabelle Qian, and Alexander Cardia of the New York Times put together a haunting short documentary about the state of surveillance in China. It shows a complete loss of privacy, and any attempt to maintain one’s sense of self is regarded as suspicious. From my limited perspective, I cannot imagine making such a fundamental sacrifice.

This is why it is so important to match the revulsion we feel over things like that Cadillac Fairview surreptitious facial recognition incident or Clearview AI — in its entirety — with strong legislation. These early-stage attempts at building surveillance technologies that circumvent legal processes forecast an invasive future for everyone.

This settlement is significant, but perhaps not as triumphant as the ACLU makes it out to be:

The central provision of the settlement restricts Clearview from selling its faceprint database not just in Illinois, but across the United States. Among the provisions in the binding settlement, which will become final when approved by the court, Clearview is permanently banned, nationwide, from making its faceprint database available to most businesses and other private entities. The company will also cease selling access to its database to any entity in Illinois, including state and local police, for five years.

This does not eliminate the need for stronger privacy laws in the United States. Outside the U.S., it seems that Clearview AI is able to continue developing and selling its product under the cover of American jurisdiction, unless expressly prohibited by local laws. Clearview is still expanding.

This settlement does prohibit Clearview from providing free trial access — one of its biggest sales tactics — without supervisor approval. Good.

Matt O’Brien and Tali Arbel, the Associated Press:

A controversial face recognition company that’s built a massive photographic dossier of the world’s people for use by police, national governments and — most recently — the Ukrainian military is now planning to offer its technology to banks and other private businesses.

[…]

The new “consent-based” product would use Clearview’s algorithms to verify a person’s face, but would not involve its ever-growing trove of some 20 billion images, which [Clearview CEO Hoan] Ton-That said is reserved for law enforcement use. Such ID checks that can be used to validate bank transactions or for other commercial purposes are the “least controversial use case” of facial recognition, he said.

Remember when the company promised to only allow law enforcement uses? Ton-That killed that principle earlier this year. If Clearview could have operated with individual consent, it would have obtained it already.

Every day this company is allowed to keep operating represents an increasing policy failure.

Thomas Brewster, Forbes:

[…] On Wednesday, deputy prime minister and head of the Digital Transformation Ministry in Ukraine, Mykhailo Fedorov, confirmed on his Telegram profile that surveillance technology was being used in this way, a matter of weeks after Clearview AI, the New York-based facial recognition provider, started offering its services to Ukraine for those same purposes. Fedorov didn’t say what brand of artificial intelligence was being used in this way, but his department later confirmed to Forbes that it was Clearview AI, which is providing its software for free. They’ll have a good chance of getting some matches: In an interview with Reuters earlier this month, Clearview CEO Hoan Ton-That said the company had a store of 10 billion users’ faces scraped from social media, including 2 billion from Russian Facebook alternative Vkontakte. Fedorov wrote in a Telegram post that the ultimate aim was to “dispel the myth of a ‘special operation’ in which there are ‘no conscripts’ and ‘no one dies.’”

Tim Cushing, Techdirt:

Or maybe it’s just Clearview jumping on the bandwagon by supporting a country that already has the support of the most powerful governments in the world. Grabbing onto passing coattails and contacting journalists to get the word out about the company’s reverse-heel turn is savvy marketing. But it’s little more than that. The tech may prove useful (if the Ukraine government is even using it), but that shouldn’t be allowed to whitewash Clearview’s (completely earned) terrible reputation. Even if it’s useful, it’s only useful because the company was willing to do what no other company was: scrape millions of websites and sell access to the scraped data to anyone willing to pay for it.

It has been abundantly clear for a long time that accurate facial recognition can have its benefits, just as recording everyone’s browser history could make it easier to investigate crime. Even if this seems helpful, it is still an uneasy technology developed by an ethically bankrupt company. It is hard for me to see this as much more than Clearview cynically using a war as a marketing opportunity, given that it spread news of its participation weeks before anyone in the Ukrainian government confirmed it.

Kate Kaye, Protocol:

When it comes to today’s data-centric business models, algorithmic systems and the data used to build and train them are intellectual property, products that are core to how many companies operate and generate revenue. While in the past the FTC has required companies to disgorge ill-gotten monetary gains obtained through deceptive practices, forcing them to delete algorithmic systems built with ill-gotten data could become a more routine approach, one that modernizes FTC enforcement to directly affect how companies do business.

[…]

The winds inside the FTC seem to be shifting. “Commissioners have previously voted to allow data protection law violators to retain algorithms and technologies that derive much of their value from ill-gotten data,” former FTC Commissioner Rohit Chopra, now director of the Consumer Financial Protection Bureau, wrote in a statement related to the Everalbum case. He said requiring the company to “forfeit the fruits of its deception” was “an important course correction.”

As with authorities requiring Clearview to delete identification data, I am confused about how it is possible to extricate illegally-acquired materials from software like machine learning models. In the Everalbum case (PDF), for example, the FTC ordered the company to delete data derived from the collection of faces from users who deactivated their accounts, as well as any algorithms or models created from that information. But it is possible some or all of those photos were used to train the machine learning models used by all users. Without rolling the model back to a state before the creation of any data relevant to the FTC’s order, how is this possible? I am genuinely curious. This sounds like a fair way to treat businesses that have exploited illegally acquired data, but I am unclear how it works.
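For what it is worth, the only approach I know of with a clean guarantee is brute force: drop every covered record and retrain, either from scratch or from a checkpoint that predates the tainted data. Here is a minimal sketch of that exclusion-and-retrain idea, using a generic scikit-learn pipeline as a stand-in — the data, labels, and flagged_users set are all hypothetical, not anything from the Everalbum case:

```python
# A minimal sketch, not Everalbum's pipeline: the only deletion with a clean
# guarantee is to drop the covered records and retrain. Every name here
# (X, y, user_ids, flagged_users) is a hypothetical stand-in.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 128))            # e.g. face-embedding features
y = rng.integers(0, 2, size=1000)           # binary labels
user_ids = rng.integers(0, 200, size=1000)  # which user contributed each row

flagged_users = {3, 17, 42}                 # users whose data must be deleted

# Refit using only rows from users not covered by the deletion order.
keep = np.array([uid not in flagged_users for uid in user_ids])
clean_model = LogisticRegression(max_iter=1000).fit(X[keep], y[keep])

# A model already fit on the full dataset retains information about the
# deleted rows in its weights; there is no general way to subtract one
# sample's influence after the fact, which is why retraining is required.
```

The expense of that retraining — and the absence of any general shortcut — is precisely what makes me wonder how companies will comply in practice.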

The headline on this article is ridiculous, by the way. It claims this penalty strategy “spells death for algorithms”, but the article clarifies “the term ‘algorithm’ can cover any piece of code that can make a software application do a set of actions”. Whoever picked this headline entirely divorced it from Kaye’s excellent reporting.

Big scoop for Drew Harwell at the Washington Post:

The facial recognition company Clearview AI is telling investors it is on track to have 100 billion facial photos in its database within a year, enough to ensure “almost everyone in the world will be identifiable,” according to a financial presentation from December obtained by The Washington Post.

[…]

And the company wants to expand beyond scanning faces for the police, saying in the presentation that it could monitor “gig economy” workers and is researching a number of new technologies that could identify someone based on how they walk, detect their location from a photo or scan their fingerprints from afar.

Slides Harwell posted on Twitter reveal a stunning expansion of personal recognition. For example, Clearview is exploring ways of tying identity to license plates, movements, and locations. This is a brazen theft of personal liberty undertaken by a private company with virtually no accountability or regulation.

Harwell’s story is full of outrageous details, one paragraph after another. But one must be mindful they are sourced from an investor presentation that puts the company in the best possible light for people with deep pockets. While unearned optimism is allowed, outright lies are not supposed to be included — though some of the companies whose logos appear in the presentation, apparently as Clearview partners, denied to Harwell any connection.

With that in mind, Clearview says it has thousands of clients in the United States alone, even as it is increasingly banned in other countries. It says it is ingesting photos at a rate of one-and-a-half billion per month, and wants to identify pretty much everyone on Earth, though that presumably excludes children and everyone from Australia, Canada, and France. It should also be excluding images posted to major social media networks, many of which have told Clearview to cease and desist its scraping practices.

It has plenty of ideas for how it wishes to use its database and, according to the slides posted by Harwell, it also wants to license them to third parties for their own uses. How does that square with its promise to only permit law enforcement uses?

Clearview has dismissed criticism of its data collection and surveillance work by saying it is built exclusively for law enforcement and the public good. In an online “principles” pledge, the company said that it works only with government agencies and that it limits its technology to “lawful investigative processes directed at criminal conduct, or at preventing specific, substantial, and imminent threats to people’s lives or physical safety.”

[…]

In his statement to The Post, [founder Hoan Ton-That] said: “Our principles reflect the current uses of our technology. If those uses change, the principles will be updated, as needed.”

If your principles change because of financial incentives, you do not have principles. Clearview is sketchy as hell and has no place being in business.

Ton-That told The Post the document was shared with a “small group of individuals who expressed interest in the company.” It included proposals, he said, not just for its main facial-search engine but also for other business lines in which facial recognition could be useful, such as identity verification or secure-building access.

He said Clearview’s photos have “been collected in a lawful manner” from “millions of different websites” on the public Internet. A person’s “public source metadata” and “social linkage information,” he added, can be found on the websites that Clearview has linked to their facial photos.

I found the second sentence here confusing, but it seems to mean that a user of Clearview is able to see where an image in the database was sourced from. The way Ton-That phrases it makes it sound like a reassurance, but nothing could be further from the truth. The company still makes people individually opt out of the mass surveillance machine it is hoping to grow and license — and only if they happen to live in California or Illinois. Otherwise, Clearview says it will scrape anything it can access, and it is your responsibility to remove from the web anything you do not wish to contribute to its business.

Any reasonable nation should be working hard on legislation that would prevent anything like Clearview from being used, and level crippling penalties for any of its citizens’ data found to be in its systems. That is my baseline expectation. I am not optimistic it will be achieved.

Tonya Riley, CyberScoop:

In fact, CyberScoop identified more than 20 federal law enforcement contracts with a total overall ceiling of over $7 million that included facial recognition in the award description or to companies whose primary product is facial recognition technology since June, when a government watchdog released a report warning about the unmitigated technology. Even that number, which was compiled from a database of government contracts created by transparency nonprofit Tech Inquiry and confirmed with federal contracting records, is likely incomplete. Procurement awards often use imprecise descriptions and sometimes the true beneficiary of the award is obscured by subcontractor status.

Among the contracts CyberScoop cites is one between the FBI and Clearview AI. Quite a stark contrast with countries like Canada and France, which have banned the company from operating within their borders or using their citizens’ data.

Natasha Lomas, TechCrunch:

France’s privacy watchdog said today that Clearview has breached Europe’s General Data Protection Regulation (GDPR).

In an announcement of the breach finding, the CNIL also gives Clearview formal notice to stop its “unlawful processing” and says it must delete user data within two months.

Good; keep these orders coming. As with previous deletion demands, there are likely problems with ascertaining who in Clearview’s database is covered, but at least there is collective action by countries that have laws concerning individuals’ privacy. It is a stance that declares Clearview’s entire operation an unacceptable violation. I see nothing wrong with putting Clearview out of business and discouraging others from replicating it.

John Paczkowski, Buzzfeed News:

Australia’s national privacy regulator has ordered controversial facial recognition company Clearview AI to destroy all images and facial templates belonging to individuals living in Australia, following a BuzzFeed News investigation.

On Wednesday, the Office of the Australian Information Commissioner (OAIC) said Clearview had violated Australians’ privacy by scraping their biometric information from the web and disclosing it via a facial recognition tool built on a vast database of photos scraped from Facebook, Instagram, LinkedIn, and other websites.

This sounds great, but it faces some of the same problems as removing Canadian faces. Does Clearview know which faces in its collection are Australian? In the Commissioner’s determination, Clearview told the office that it, in the words of the Commissioner, “collects images without regard to geography or source”. Presumably, it can tie some photos to people living in Australia. But does it reliably retain information about, say, an Australian living abroad? It seems like one of those edge cases that could affect about a million people.

Also, in that determination, I found it telling that Clearview “repeatedly asserted that it is not subject to the Privacy Act” largely because it is a company based in the U.S. and, under U.S. law, it is arguing its collection of biometric information from published images is legal. The U.S. is like a tax haven but for privacy abuses. (And taxes too.)

Will Knight, Wired:

The company’s cofounder and CEO, Hoan Ton-That, tells WIRED that Clearview has now collected more than 10 billion images from across the web — more than three times as many as has been previously reported.

[…]

Some of Clearview’s new technologies may spark further debate. Ton-That says it is developing new ways for police to find a person, including “deblur” and “mask removal” tools. The first takes a blurred image and sharpens it using machine learning to envision what a clearer picture would look like; the second tries to envision the covered part of a person’s face using machine learning models that fill in missing details of an image using a best guess based on statistical patterns found in other images.

I am stunned Clearview is allowed to remain in business, let alone continue to collect imagery and advance new features, given how invasive, infringing, and dangerous its technology is.

Sometimes, it makes sense to move first and wait for laws and policies to catch up. Facial recognition is not one of those times. And, to make matters worse, policymakers have barely gotten started in many jurisdictions. We are accelerating toward catastrophe and Clearview is leading the way.
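Clearview has not said how its “deblur” and “mask removal” models work, but even classical inpainting makes the core problem visible. The sketch below uses OpenCV’s Telea algorithm as a deliberately simple stand-in for a learned model, with hypothetical file names:

```python
# Not Clearview's method — classical inpainting (OpenCV) as a stand-in to
# show what any "mask removal" tool fundamentally does: it invents the
# hidden pixels from their surroundings.
import cv2
import numpy as np

img = cv2.imread("face.jpg")  # hypothetical input photo

# Mark the region to reconstruct, e.g. where a face covering would be.
mask = np.zeros(img.shape[:2], dtype=np.uint8)
mask[200:320, 120:380] = 255

# Telea's algorithm propagates neighbouring pixels into the masked region.
# A learned model does the same thing with statistics drawn from millions
# of other faces — prettier output, but a guess either way, not evidence.
guess = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("face_guessed.jpg", guess)
```

Whatever comes out of either version is a plausible synthesis, not a record of anything that was actually photographed.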

The marketplace for exploits and software of an ethically questionable nature is a controversial one, but something even I can concede has value. If third-party vendors are creating targeted surveillance methods, it means that the vast majority of us can continue to have secure and private systems without mandated “back doors”. It seems like an agreeable compromise so long as those vendors restrict their sales to governments and organizations with good human rights records.

NSO Group, creators of Pegasus spyware, seems to agree. Daniel Estrin, reporting last month at NPR:

NSO says it has 60 customers in 40 countries, all of them intelligence agencies, law enforcement bodies and militaries. It says in recent years, before the media reports, it blocked its software from five governmental agencies, including two in the past year, after finding evidence of misuse. The Washington Post reported the clients suspended include Saudi Arabia, Dubai in the United Arab Emirates and some public agencies in Mexico.

Pegasus can have legitimate surveillance use, but it has great potential for abuse. NSO Group would like us to believe that it cares deeply about selling only to clients that will use the software to surveil possible terrorists and valuable criminal targets. So, how is that going?

Bill Marczak, et al., Citizen Lab:

We identified nine Bahraini activists whose iPhones were successfully hacked with NSO Group’s Pegasus spyware between June 2020 and February 2021. Some of the activists were hacked using two zero-click iMessage exploits: the 2020 KISMET exploit and a 2021 exploit that we call FORCEDENTRY.

[…]

At least four of the activists were hacked by LULU, a Pegasus operator that we attribute with high confidence to the government of Bahrain, a well-known abuser of spyware. One of the activists was hacked in 2020 several hours after they revealed during an interview that their phone was hacked with Pegasus in 2019.

As Citizen Lab catalogues, Bahrain’s record of human rights failures and internet censorship should have indicated to NSO Group that misuse of its software was all but guaranteed.

NSO Group is just one company offering software with dubious ethics. Remember Clearview? When Buzzfeed News reported last year that the company was expanding internationally, Hoan Ton-That, Clearview’s CEO, brushed aside human rights concerns:

“Clearview is focused on doing business in USA and Canada,” Ton-That said. “Many countries from around the world have expressed interest in Clearview.”

Later last year, Clearview went a step further and said it would terminate private contracts, and its Code of Conduct promises that it only works with law enforcement entities and that searches must be “authorized by a supervisor”. You can probably see where this is going.

Ryan Mac, Caroline Haskins, and Antonio Pequeño IV, Buzzfeed News:

Like a number of American law enforcement agencies, some international agencies told BuzzFeed News that they couldn’t discuss their use of Clearview. For instance, Brazil’s Public Ministry of Pernambuco, which is listed as having run more than 100 searches, said that it “does not provide information on matters of institutional security.”

But data reviewed by BuzzFeed News shows that individuals at nine Brazilian law enforcement agencies, including the country’s federal police, are listed as having used Clearview, cumulatively running more than 1,250 searches as of February 2020. All declined to comment or did not respond to requests for comment.

[…]

Documents reviewed by BuzzFeed News also show that Clearview had a fledgling presence in Middle Eastern countries known for repressive governments and human rights concerns. In Saudi Arabia, individuals at the Artificial Intelligence Center of Advanced Studies (also known as Thakaa) ran at least 10 searches with Clearview. In the United Arab Emirates, people associated with Mubadala Investment Company, a sovereign wealth fund in the capital of Abu Dhabi, ran more than 100 searches, according to internal data.

As noted, this data only covers up until February last year; perhaps the policies governing acceptable use and clientele were only implemented afterward. But it is alarming to think that a company which bills itself as the world’s best facial recognition provider ever felt comfortable enabling searches by regimes with poor human rights records, private organizations, and individuals in non-supervisory roles. It does jibe with Clearview’s apparent origin story, and that should be a giant warning flag.

These companies can make whatever ethical promises they want, but money talks louder. Unsurprisingly, when faced with a choice about whether to allow access to their software judiciously, they choose to gamble that nobody will find out.

Kashmir Hill, the New York Times:

Clearview AI is currently the target of multiple class-action lawsuits and a joint investigation by Britain and Australia. That hasn’t kept investors away.

The New York-based start-up, which scraped billions of photos from the public internet to build a facial-recognition tool used by law enforcement, closed a Series B round of $30 million this month.

The investors, though undeterred by the lawsuits, did not want to be identified. Hoan Ton-That, the company’s chief executive, said they “include institutional investors and private family offices.”

It makes sense that these investors would want their association with the company kept secret, since being identified as supporters of a creepy facial recognition company is more embarrassing than their inability to understand irony. Still, it shows how the free market is betting that this company will grow and prosper despite its disregard for existing laws, proposed legislation, and a general sense of humanity or ethics.

Dismantle this company and legislate its industry out of existence. Expose the investors who are propping it up.

Kashmir Hill has continued to report on Clearview AI after breaking the news of its existence early last year. Today, for the New York Times Magazine, she shared an update on the company:

It seemed entirely possible that Clearview AI would be sued, legislated or shamed out of existence. But that didn’t happen. With no federal law prohibiting or even regulating the use of facial recognition, Clearview did not, for the most part, change its practices. Nor did it implode. While it shut down private companies’ accounts, it continued to acquire government customers. Clearview’s most effective sales tool, at first, was a free trial it offered to anyone with a law-enforcement-affiliated email address, along with a low, low price: You could access Clearview AI for as little as $2,000 per year. Most comparable vendors — whose products are not even as extensive — charged six figures. The company later hired a seasoned sales director who raised the price. “Our growth rate is crazy,” Hoan Ton-That, Clearview’s chief executive, said.

Clearview has now raised $17 million and, according to PitchBook, is valued at nearly $109 million. As of January 2020, it had been used by at least 600 law-enforcement agencies; the company says it is now up to 3,100. […]

Any way you cut it, this is disturbing. The public’s reaction to news of Clearview’s existence was overwhelmingly negative, but police saw that article as an advertisement.

Shameless companies will not change from public pressure.

Hill:

Clearview is now fighting 11 lawsuits in the state [Illinois], including the one filed by the A.C.L.U. in state court. In response to the challenges, Clearview quickly removed any photos it determined came from Illinois, based on geographical information embedded in the files it scraped — but if that seemed on the surface like a capitulation, it wasn’t.

Clearview assumes that it can scrape, store, and transform anything in the public realm unless it is certain it would be prohibited from doing so. Data is inherently valuable to the company, so it is incentivized to capture as much as possible.

But that means there is likely a whole bunch of stuff in its systems that it cannot legally use but has no way of knowing that. For example, there are surely plenty of photos taken in Illinois that do not have GPS coordinates in their metadata. Why would any of those be cleared from Clearview’s inventory? Clearview also allows people to request removal from its systems, but there are surely photographs from those people that are not positively matched, so the company has no way of identifying them as part of a removal request.
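The geotag check Hill describes is trivial to reproduce, and so is its blind spot. A sketch using Pillow — assuming version 9.2 or later, with a hypothetical file path:

```python
# A sketch of the kind of EXIF geotag filter Clearview apparently relied
# on, using Pillow 9.2+. The blind spot is the point: a photo with no GPS
# metadata returns an empty dict and could never be flagged as, say, taken
# in Illinois.
from PIL import Image, ExifTags

def gps_info(path: str) -> dict:
    """Return an image's GPS EXIF tags, or an empty dict if there are none."""
    exif = Image.open(path).getexif()
    return dict(exif.get_ifd(ExifTags.IFD.GPSInfo))

print(gps_info("vacation.jpg"))  # hypothetical file; {} if stripped or untagged
```

Most social networks strip EXIF metadata from images on upload anyway, so it seems likely that much of what Clearview scraped carries no geotag at all.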

This is an aside, but that raises an interesting question: if images scraped without legal consent were used to train Clearview’s machine learning models, is it truly possible to remove those illegal images?

If Clearview were even slightly more ethical, it would only scrape the images it has explicit permission to access. I would still disagree with that on its face, but at least it would be done with permission. But this is the perhaps inevitable consequence of the Uber-like fuck your rules philosophy — as Hill writes, it is a “gamble that the rules would successfully be bent in their favor”.

Sadly, that Silicon Valley indifference to legality and ethics will not remain localized. There is no way to know for certain that Clearview has complied with the Privacy Commissioner’s recommendation that the company must delete all collected data on Canadians.

Hill digs into Clearview’s origin story, too, which of course involves Peter Thiel and someone who is even more detestable:

After I broke the news about Clearview AI, BuzzFeed and The Huffington Post reported that Ton-That and his company had ties to the far right and to a notorious conservative provocateur named Charles Johnson. I heard the same about Johnson from multiple sources. So I emailed him. At first, he was hesitant to talk to me, insisting he would do so only off the record, because he was still frustrated about the last time he talked to a New York Times journalist, when the media columnist David Carr profiled him in 2014.

“Provocateur” is an awfully kind description of Johnson, though Hill expands further in the successive paragraphs. Just so we’re clear here, Johnson is a hateful subreddit in human form; a moron attached to a megaphone. Johnson has a lengthy rap sheet of crimes against intelligence, decency, facts, and ethics. He has denied the Holocaust, and did Jacob Wohl’s dumb bit before Wohl was old enough to vote.

Johnson is, apparently, a sort of unofficial cofounder of Clearview, and he agreed to talk with Hill because he thought it would rehabilitate his image. Reading between the lines, as of earlier this month he still held shares in a company that seeks to eradicate privacy on a global scale, so I am not sure how that is supposed to make me think more highly of him.

I thought this was amusing:

Johnson believes that giving this superpower only to the police is frightening — that it should be offered to anyone who would use it for good. In his mind, a world without strangers would be a friendlier, nicer world, because all people would be accountable for their actions.

I thought “cancel culture” was a scourge; I guess some fairly terrible people want to automate it.

Let’s not give this “superpower” to anyone.

The Office of the Privacy Commissioner of Canada has been investigating Clearview’s behaviour since Kashmir Hill of the New York Times broke the story a little more than a year ago. In its overview, the Office said:

Clearview did not attempt to seek consent from the individuals whose information it collected. Clearview asserted that the information was “publicly available”, and thus exempt from consent requirements. Information collected from public websites, such as social media or professional profiles, and then used for an unrelated purpose, does not fall under the “publicly available” exception of PIPEDA, PIPA AB or PIPA BC. Nor is this information “public by law”, which would exempt it from Quebec’s Private Sector Law, and no exception of this nature exists for other biometric data under LCCJTI. Therefore, we found that Clearview was not exempt from the requirement to obtain consent.

Furthermore, the Offices determined that Clearview collected, used and disclosed the personal information of individuals in Canada for inappropriate purposes, which cannot be rendered appropriate via consent. We found that the mass collection of images and creation of biometric facial recognition arrays by Clearview, for its stated purpose of providing a service to law enforcement personnel, and use by others via trial accounts, represents the mass identification and surveillance of individuals by a private entity in the course of commercial activity. We found Clearview’s purposes to be inappropriate where they: (i) are unrelated to the purposes for which those images were originally posted; (ii) will often be to the detriment of the individual whose images are captured; and (iii) create the risk of significant harm to those individuals, the vast majority of whom have never been and will never be implicated in a crime. Furthermore, it collected images in an unreasonable manner, via indiscriminate scraping of publicly accessible websites.

The Office said that Clearview should entirely exit the Canadian market and remove data it collected about Canadians. But, as Kashmir Hill says, it is not a binding decision, and it is much easier said than done:

The commissioners, who noted that they don’t have the power to fine companies or make orders, sent a “letter of intention” to Clearview AI telling it to cease offering its facial recognition services in Canada, cease the scraping of Canadians’ faces, and to delete images already collected.

That is a difficult order: It’s not possible to tell someone’s nationality or where they live from their face alone.

The weak excuse for a solution that Clearview has come up with is to tell Canadians to individually submit a request to be removed from its products. To be removed, you must give Clearview your email address and a photo of your face. Clearview expects that it is allowed to process facial recognition for every single person for whom images are available unless they manually opt out. It insists that it does not need consent because the images it collects are public. But, as the Office correctly pointed out, the transformative use of these images requires explicit consent:

Beyond Clearview’s collection of images, we also note that its creation of biometric information in the form of vectors constituted a distinct and additional collection and use of personal information, as previously found by the OPC, OIPC AB and OIPC BC in the matter of Cadillac Fairview.

[…]

In our view, biometric information is sensitive in almost all circumstances. It is intrinsically, and in most instances permanently, linked to the individual. It is distinctive, unlikely to vary over time, difficult to change and largely unique to the individual. That being said, within the category of biometric information, there are degrees of sensitivity. It is our view that facial biometric information is particularly sensitive. Possession of a facial recognition template can allow for identification of an individual through comparison against a vast array of images readily available on the Internet, as demonstrated in the matter at hand, or via surreptitious surveillance.

The Office also found that scraping online profiles does not match the legal definition of “publicly accessible”.
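To make the “vectors” point concrete: a facial recognition template is an embedding, a photo mapped to a point in a space where distance approximates identity. Here is a sketch of the concept using the open-source face_recognition library — not Clearview’s proprietary model — with hypothetical file names:

```python
# What a "biometric vector" is, sketched with the open-source
# face_recognition library rather than Clearview's proprietary model.
# Each detected face becomes a 128-dimensional embedding; a small distance
# between two embeddings implies the same person.
import face_recognition
import numpy as np

profile = face_recognition.load_image_file("profile_photo.jpg")  # hypothetical
street = face_recognition.load_image_file("street_capture.jpg")  # hypothetical

# Assumes one face is detected in each image.
profile_vec = face_recognition.face_encodings(profile)[0]  # 128-d numpy array
street_vec = face_recognition.face_encodings(street)[0]

# The library's conventional match threshold is a Euclidean distance of 0.6.
distance = np.linalg.norm(profile_vec - street_vec)
print("likely the same person:", distance < 0.6)
```

That derived vector is a new piece of personal information created from the photo, which is exactly why the Offices treat it as a distinct and additional collection.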

This is such a grotesque violation of privacy that there is no question in my mind that Clearview and companies like it cannot continue to operate. United States law has an unsurprisingly permissive attitude towards this sort of thing, but the rest of the world should not be exposed to its failure to legislate at a national level.

Unfortunately, this requires global participation. Every country must have better regulation of this industry because, as Hill says, there is no way to determine nationality from a photo. If Clearview is outlawed in the U.S., what is there to stop it from registering in another country with similarly weak regulation?

Clearview is almost certainly not the only company scraping the web with the intent of eradicating privacy as we know it. Decades of insufficient regulation have brought us to this point. We cannot give up on the basic right to privacy. But I fear it has been sacrificed to a privatized version of the police state.

If a government directly created something like the Clearview system, it would be seen as a human rights violation. How is there any moral difference when it is instead created by private industry?

Jason Snell again graciously allowed me to participate in the annual Six Colors Apple report card, so I graded the performance of a multi-trillion-dollar company from my low-rent apartment. There simply aren’t enough column inches in his report card for all of my silly thoughts. I have therefore generously given myself some space here to share them with you.

As much as 2020 was a worldwide catastrophe, it was impressive to see Apple handle pandemic issues remarkably well and still deliver excellence in the hardware, software, and services that we increasingly depended on. If there had not been widespread disease, Apple’s year could have played out nearly identically, and I do not imagine it would have been received any differently.

Now, onto specific categories, graded from 1–5, 5 being best and 1 being Apple TV-iest. Spoiler alert!

Mac: 4

It will be a while before we know if 2020 was to personal computers what 2007 was to phones, but the M1 Macs feel similarly impactful on the industry at large. Apple demonstrated a scarcely-believable leap by delivering Macs powered by its own SoCs that got great battery life and outperformed just about any other Mac that has ever existed. And to make things even more wild, Apple shoehorned this combination into the least-expensive computers it makes. A holy crap revolutionary year, and it is only an appetizer for forthcoming iMac and MacBook Pro models.

Aside from the M1 models, Apple updated nearly all of its Mac product range; the Mac Pro went untouched, and the iMac Pro merely dropped its 8-core config, leaving pretty much everything else the same as when it debuted three years ago.

The best news, aside from the M1 lineup, is that the loathed butterfly keyboard was finally banished from the Mac. Good riddance.

MacOS Big Sur is a decent update by recent MacOS standards. The new design language is going in a good direction, but there are contrast and legibility problems. It is, thankfully, night-and-day more stable than Catalina, which I am thrilled I skipped on my iMac and annoyed I installed on a MacBook Air that will not get a Big Sur update. Fiddlesticks. But Big Sur has its share of new and old bugs that, while doing nothing so dramatic as forcing the system to reboot, indicate to me that the technical debt of years past is not being settled. More in the Software Quality section.

iPhone: 4

I picked a great year to buy a new iPhone; I picked a terrible year to buy a new iPhone. The five new phones released in 2020 made for the easiest product line to understand and the hardest to choose from. Do I get the 12 Mini, the size I have been begging Apple to make? Do I get the 12 Pro Max with its ridiculously good camera? How about one of the middle models? What about the great value of the SE? It was a difficult decision, but I got the Pro. And then, because I wish the Pro was lighter and smaller, I seriously considered swapping it for the Mini, but didn’t because ProRAW was released shortly after. Buying a telephone is just so hard.

iOS 14 is a tremendous update as well. Widgets are a welcome addition to everyone’s home screen and have spurred a joyous customization scene. ProRAW is a compelling feature for the iPhone 12 Pro models, and is implemented thoughtfully and simply. The App Library is excellent for a packrat like me.

2019 was a rough year for Apple operating system stability but, while iOS 13 was better for me than Catalina, iOS 14 has been noticeably less buggy and more stable. I hope this commitment to features and quality can be repeated every year.

Consider my 4-out-of-5 grade a very high 4, but not quite a 5. The iPhone XR remains in the lineup and feels increasingly out of place, and I truly wish the Pro came in a smaller and lighter package. I considered going for a perfect score but, well, it’s my report card.

iPad: 3

The thing the iPad lineup has needed most since the late 2010s was clarity; for the past few years, that is what it has gotten. 2020 brought good hardware updates that have made each iPad feel more accurately placed in the line — with the exception of the Mini, which remains a year behind its entry-level sibling.

But the biggest iPad updates this year were in accessories and in software. Trackpad and mouse compatibility adapted a legacy input method for a modern platform, and its introduction was complemented by the new Magic Keyboard case. iPadOS 14 brought further improvements like sidebars, pull-down menus, and components that no longer cover the entire screen.

Despite all of these changes, I remain hungry for more. This is only the second year the iPad has had “iPadOS” and, while it is becoming more of its own thing, its roots in a smartphone operating system are still apparent in a way that sometimes impedes its full potential.

After many difficult years, it seems like Apple is taking the iPad seriously again. I would like to see more steady improvements so that every version of iPadOS feels increasingly like its own operating system even if it continues to look largely like iOS. This one is tougher to grade. I have waffled between 3 and 4, but I settled on the lower number. Think of it as a positive and enthusiastic 3-out-of-5.

Wearables (including Apple Watch): 3

Grades were submitted separately for the Apple Watch and Wearables. I have no experience with the Apple Watch this year, so I did not submit a grade.

Only one new AirPods model was introduced in 2020 but it was big. The AirPods Max certainly live up to their name in weight alone.

Aside from that, rattling in the AirPods Pro was a common problem from the time they were released, and it took until October 2020 — a full year after the product’s launch — for Apple to correct it. Customers can exchange their problematic pair for free, but the environmental waste of even a small percentage of flawed models is hard to bat away.

AirPods continue to be the iconic wireless headphone in the same way that white earbuds were to the iPod. I wish they were less expensive, though, particularly since the batteries have a lifespan of only a couple of years of regular use.

Apple TV: 1

I guess my lowest grade must go to the product that seems like Apple’s lowest priority. It is kind of embarrassing at this point.

The two Apple TV models on sale today were released three and five years ago, and have remained unchanged since. It isn’t solely a problem of device age or cost; it is that these products feel like they were introduced for a different era. This includes the remote, by the way. I know it is repetitive to complain about, but it still sucks, and there appears to be no urgency to ship a better one.

On the software side, tvOS 14 contains few updates. It now supports 4K video in YouTube and through AirPlay, plus HomeKit camera monitoring. Meanwhile, the Music app still does not work well; screensavers no longer match the time of day, so a very bright screensaver sometimes appears at night; and the overuse of slow animations makes the entire system feel sluggish. None of these things are new in tvOS 14; they are all very old problems that remain unfixed.

The solution to a good television experience remains elusive — and not just for Apple.

Services: 4

Whether you look at Apple’s balance sheet or its product strategy, it is clear that Apple is now fully and truly a services company. That focus has manifested in an increasingly compelling range of things you can give Apple five or ten dollars a month for; or, if you are fully entranced, you can get the whole package at a healthy discount in the new Apple One bundle subscription. Cool.

It has also brought increased reliability to the service offerings. Apple’s internet products used to be a joke, but they have shown near-perfect stability in recent years. Cloud-based services across the industry had a rocky 2020, and iCloud was no exception around Christmastime, but the general reliability of these services instills confidence.

New for this year were the well-received Fitness+ workout service and a bevy of new TV+ shows. Apple also rolled out services to a bunch more countries. But this focus on services has not come without its foibles, as Apple aggressively promotes subscriptions throughout its products in advertisements, up-sells, and push notifications, to the irritation of anyone who wishes not to subscribe. Some of these services also introduce liabilities in antitrust and corporate behaviour, something I will explore later.

HomeKit

I have no experience with HomeKit so I did not grade it.

Hardware Reliability: 3

2020 was the year we bid farewell to the butterfly keyboard and, with it, the most glaring hardware reliability problem in Apple’s lineup. A quick perusal of Apple’s open repair programs and the “hardware” tag on Michael Tsai’s blog shows a few notable quality problems:

  • “Stained” appearance with the anti-reflective coating on Retina display-equipped notebooks

  • Display problems with iPhone 11 models manufactured up to May 2020

  • AirPods Pro crackling problems that were only resolved a full year after the product’s debut

Overall, an average year for hardware quality, but an improvement in the sense that you can no longer buy an Apple laptop with a defective keyboard design.

I suppose this score could have gone one notch higher.

Software Quality: 4

The roller coaster ride continues. 2019? Not good! 2020? Pretty good!

Big Sur is stable, but its redesign contains questionable choices that impair usability, some of which I strongly feel should not have shipped; to name two, notifications and the new alert style. Outside of redesign issues, I have seen new graphical glitches when editing images in Photos and when using Finder’s Quick Look feature on my iMac. The Music app, while better than the one in Catalina, is slower and buggier than iTunes used to be. There are older problems, too: with PDF rendering in Preview, with APFS containers in Finder (and Finder’s overall speed), and with data loss in Mail.

iOS 14 is much more stable and without major bugs; or, at least, none that I have seen. There are animation glitches here and there, and I wish Siri suggestions were better.

On the other end of the scale, tvOS 14 is mediocre, some first-party apps have languished, and using Siri in any context is an experience that still ranges from lacklustre to downright shameful. I hope software quality improves in the future, particularly on the Mac. MacOS has never seemed less like it will cause a whole-system crash, but the myriad bugs introduced in the last several years have made it feel brittle.

I am now thinking I mixed up the scores for software and hardware quality. Oops.

Developer Relations: 2

An absolutely polarized year for developer relations.

On the one hand, Apple introduced a new mechanism for developers to challenge App Store rulings, and added the Small Business Program, which reduces commissions to 15% for developers making less than $1 million annually. Running WWDC virtually was also a silver lining in a dark year; it was the first WWDC I have attended, since hotels cost thousands of dollars but my apartment carries no extra charge.
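To put that commission change in concrete terms, here is a rough sketch of the arithmetic. The 15% and 30% rates are Apple’s published figures, but the function name and example revenue are purely illustrative, and the program’s actual qualification rules — based on the prior calendar year’s proceeds — are simplified away:

```swift
// Illustrative only: compares net proceeds under the two published
// App Store commission tiers. Real qualification rules are more involved.
func netProceeds(gross: Double, qualifiesForSmallBusinessRate: Bool) -> Double {
    let commission = qualifiesForSmallBusinessRate ? 0.15 : 0.30
    return gross * (1.0 - commission)
}

// A developer grossing $800,000 keeps $680,000 at 15%, versus $560,000 at 30%.
print(netProceeds(gross: 800_000, qualifiesForSmallBusinessRate: true))   // 680000.0
print(netProceeds(gross: 800_000, qualifiesForSmallBusinessRate: false))  // 560000.0
```

That difference of $120,000 on $800,000 in sales is why the program was received as a meaningful concession, even by its critics.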

On the other — oh boy, where do we begin? Apple is being sued by Epic Games along antitrust lines; Epic’s arguments are being supported by Facebook, Microsoft, and plenty of smaller developers. One can imagine ulterior motives for the plaintiff’s side, but it does not speak well for Apple’s status among developers that it is being sued. Also, there was that matter of the Hey app’s rejection just days before WWDC, and the difficulty of trying to squeeze the streaming game app model into Apple’s App Store model. Documentation still stinks, and Apple still has communication problems with developers.

Apple’s relationship with developers hit its lowest point in recent memory in 2020, but it also spurred the company to make changes. Developers should be excited to build apps for the Mac instead of relying on shitty cross-platform frameworks like Electron. They should be motivated by the jewellery-like quality of the iPhone 12 models and build apps that match in fit and finish. But I have seen enough comments this year indicating that everyone, from one-person shops to mid-sized indies to big names, is worried that their app will be pulled from the store for some new interpretation of an old rule, or that Apple’s services push will raid their revenue model. There must be a better way.

Social/Societal Impact: 2

As with its developer relations, Apple’s 2020 environmental and social record sits at the extreme ends of the scale.

Apple’s response to the pandemic is commendable, from what I could see on the outside. Its store closures often outpaced restrictions from local health authorities in Canada and the U.S., but it kept retail staff on and found ways for them to work from home. It was also quick to allow corporate employees to work remotely, something it generally resists.

In a year of intensified focus on racial inequities, Apple pledged $100 million to projects intended to help right long-standing wrongs, and committed to diversity-supporting corporate practices. There is much more progress that it can make internally, particularly in leadership roles, but its recent hiring practices indicate that it is trying to do better.

Apple continues to invest in privacy and security features across its operating system and services lineup, like allowing users to decline third-party tracking in iOS apps. It also bucked another ridiculous request from the Justice Department and disabled an enterprise distribution certificate used by the creepy facial recognition company Clearview AI.
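For the technically curious, that opt-out is surfaced to developers through the App Tracking Transparency framework that debuted alongside iOS 14. Here is a minimal sketch of how an app requests permission; the wrapper function and call site are hypothetical:

```swift
import AppTrackingTransparency

// A minimal sketch of the App Tracking Transparency prompt.
// An app would typically call this once its UI is ready to show the dialog.
func askPermissionToTrack() {
    ATTrackingManager.requestTrackingAuthorization { status in
        switch status {
        case .authorized:
            // The user allowed tracking; the advertising identifier is available.
            break
        case .denied, .restricted, .notDetermined:
            // Tracking was declined or is unavailable; apps receive a
            // zeroed-out advertising identifier instead of the real one.
            break
        @unknown default:
            break
        }
    }
}
```

The important design choice is that declining is consequence-free for the user: the app keeps working, it just loses access to the cross-app identifier.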

But a report at the beginning of 2020 drew a connection between discussions with the FBI and Apple’s decision not to offer end-to-end encryption for iCloud backups. It remains unclear whether one directly followed the other. Apple’s encryption policies also make it confusing to know exactly which parties have access to what data. Still, Apple’s record on privacy sets a high standard that its peers will never meet unless they change their business models.

China remains Apple’s biggest liability on two fronts: its supply chain, and services like the App Store and Apple TV+. Several reports in 2020 said that Apple was uniquely deferential to Chinese government sensitivities in its App Store policies and its original media. Many other big-name companies, wary of being excluded from the Chinese market, have faced similar accusations. But it is hard to think of one other than Apple that must balance those demands against its entire manufacturing capability. No company should be complicit in the Chinese government’s inhumane treatment of Uyghurs.

Apple is also facing increased antitrust scrutiny around the world for the way it runs the App Store, the commissions it charges third-party developers, and the way it uses private APIs.

Apple’s environmental record is less of a mixed bag. It is recycling more of the materials used in its products, and new iPhones come in much smaller boxes containing nearly no plastic. Apple also says that its own operations are entirely carbon neutral, and that its supply chain will follow by 2030.

For environmental reasons, many new products no longer ship with AC adapters in the box, and to prove it wasn’t screwing around, Apple made Lisa Jackson announce this while standing on the roof of its headquarters. Reactions to this change were predictably mixed, but it seems plausible that this has a big impact at Apple’s scale. I’m still not convinced that it makes sense to sell its charging mat without one.

Apple still isn’t keen on third-party repairs of its products, but it did expand its independent repair shop program to allow servicing of Macs.

If this were two separate categories, I think Apple’s environmental record is a 4-out-of-5 and its social record is a 2-out-of-5 at best. I am not averaging those grades because I consider the liabilities with China and antitrust too significant.

Closing Remarks

As I wrote at the top, 2020 was a standout year in Apple’s history — even without considering the many obstacles created by this ongoing pandemic. As my workflow is dependent on these products and services, I appreciate the hard work that has gone into improving their features, but I am even happier that everything I use is, on the whole, more reliable.

What the heck is up with the Apple TV, though?