Yesterday I wrote about California’s AB 2273 bill and how it is impossible to comply with, censorial, and dangerous. From what I’ve heard it’s likely to pass today, and Governor Newsom may sign it soon. The bill seems to have taken many people by surprise, and at this late moment they’re asking how the hell such a bill could have come about. I’ve been wondering the same thing myself, and started digging — and am really confused. Because, as far as I can tell, THE BILL CAME FROM A UK BARONESS, and California politicians were like “ok, yeah, cool, we’ll just take your bill and make it law here.”
This is just one of three worrisome bills passed recently in California, all of which will likely be signed into law. But I wanted to focus on it for two reasons:
the origins of this bill are pretty wild, and
according to an interview, linked to by Masnick, with the Baroness in question, a copy of the bill is coming soon to Canadian lawmakers’ desks.
The bill seems painful for website operators to implement. To comply with its mandates, website operators whose services may be accessed by a minor must consider the health and well-being of children in administering their websites. That all sounds fine. But the terms of the bill are so vague as to potentially snare any website into creating child-friendly policies or requiring some form of age verification.
It is plausible to me that Canadian regulators will see both the child protective promises of the bill and a likely Canadian beneficiary as compelling reasons to at least seriously consider it. I hope that is not the case but, unfortunately, policymakers here have not had a good track record when it comes to internet regulation.
In the year since Apple CEO Tim Cook denounced ad-based business models as a source of real-world violence, Apple has ramped up plans to pop more ads into people’s iPhones and beef up the tech used to target those ads. And now, it looks like Apple’s looking to poach the small businesses that have relied almost entirely on Facebook’s ad platform for more than a decade.
MarketWatch found two recent job postings from Apple that suggest the company is looking to build out its burgeoning adtech team with folks who specialize in working with small businesses. Specifically, the company says it’s looking for two product managers who are “inspired to make a difference in how digital advertising will work in a privacy-centric world,” who want to “design and build consumer advertising experiences.” The ideal candidate, Apple said, won’t only have savvy around advertising, mobile tech, and advertising on mobile tech, but will also have experience with “performance marketing, local ads or enabling small businesses.”
Apple has spent years marketing itself as a privacy-focused business making its money the old-fashioned way: when its users are happiest with the products and services they buy. These as-yet unannounced ad slots may be more respectful of users’ information. Even so, I firmly believe an expansion of ads across its platforms concurrent with its efforts to rein in others’ tracking behaviour — and, by extension, impacting small business advertisers — will damage Apple’s credibility and users’ satisfaction. Nobody is going to not buy an iPhone because there are ads in Maps, for example, but plenty of people who use Maps are going to feel a little cheated.
The maxim “if you are not paying for the product, you are the product” is as inaccurate as it is a cliché. If Apple really is planning to put more ads in its products, it shows that you can pay thousands of dollars and still be the product — because the line on each chart must go up.
Couple that with what feels like ambulance chasing, a knock-on effect of what I am sure many believe is a legitimate expression of privacy ideals, and it reflects poorly on the company. One great reason strong privacy protections should be legislated by countries is to prevent businesses from twisting for their benefit something many, including Apple, consider a “fundamental human right”. Meta has spent years trying to redefine “privacy” for its own benefit. Apple’s definition may be closer to what you and I may think is truly private, but it should not get to make that decision.
The coveted blue tick can be difficult to obtain and is supposed to assure that anyone who bears one is who they claim to be. A ProPublica investigation determined that Jugenburg’s dubious alter ego was created as part of what appears to be the largest Instagram account verification scheme ever uncovered. With a generous greasing of cash, the operation transformed hundreds of clients into musical artists in an attempt to trick Meta, the owner of Instagram and Facebook, into verifying their accounts and hopefully paving the way to lucrative endorsements and a coveted social status.
Since at least 2021, at least hundreds of people — including jewelers, crypto entrepreneurs, OnlyFans models and reality show TV stars — were clients of a scheme to get improperly verified as musicians on Instagram, according to the investigation’s findings and information from Meta.
The scam required the creation of enough veneer of success to trick Meta’s verification deciders into giving these jokers a badge. It is hilarious to reflect on how successful someone would have to be to afford these services — tens of thousands of dollars, according to ProPublica’s reporting — yet still feel insufficiently notable without a blue badge. I guess fake clout is still clout until it all falls apart.
Hosted by comedian Wanda Sykes, the show originates from MGM Studios (itself a subsidiary of Amazon), and promises “friends and family a fun new way to enjoy time with one another” via doorbell cams, although the ensuing online reaction has been less than promising. Despite the rising popularity of smart home security systems such as Ring, it seems as though some audiences can see through the upcoming show’s premise to know it sounds less “family friendly” than a thinly-veiled surveillance state infomercial attempting to push more home monitoring products.
Just some lighthearted yuks in support of the private police state. Order your Ring today, brought to you by Amazon, and you, too, can help cops violate civil liberties while providing material for this long-form advertisement.
Mark noticed something amiss with his toddler. His son’s penis looked swollen and was hurting him. Mark, a stay-at-home dad in San Francisco, grabbed his Android smartphone and took photos to document the problem so he could track its progression.
After setting up a Gmail account in the mid-aughts, Mark, who is in his 40s, came to rely heavily on Google. He synced appointments with his wife on Google Calendar. His Android smartphone camera backed up his photos and videos to the Google cloud. He even had a phone plan with Google Fi.
Two days after taking the photos of his son, Mark’s phone made a blooping notification noise: His account had been disabled because of “harmful content” that was “a severe violation of Google’s policies and might be illegal.” A “learn more” link led to a list of possible reasons, including “child sexual abuse & exploitation.”
Hill is among the best reporters anywhere on these sensitive, nuanced topics, and this story is no exception. Through careful reporting, Hill writes about two people whose Google accounts were closed — and, at least in Mark’s case, deleted from its servers — because both had taken photos of their naked children for diagnostic purposes.
I think Hill’s article thoughtfully explores the difficult and often contradictory questions around CSAM policing — certainly in a more cogent way than I can write about it — and I think this article is worth reading in full. The part I am more able to comment on is Google’s final decision to lock these parents out of their accounts.
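For context on the mechanism involved, systems that police uploaded images typically combine hash matching against databases of known material with machine-learning classifiers for new images. A toy perceptual hash, written only to illustrate the idea and not Google’s actual method, shows why such matching tolerates small edits yet can never be perfectly precise:

```python
def average_hash(pixels):
    """Toy perceptual 'average hash': one bit per pixel, set when that
    pixel is brighter than the image's mean brightness. Real matchers
    (PhotoDNA-style systems) are far more sophisticated; this is purely
    illustrative. `pixels` is an 8x8 grid of grayscale values 0-255."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return sum(1 << i for i, p in enumerate(flat) if p > mean)

def hamming(a, b):
    """Number of differing bits between two hashes; small distances are
    treated as a match."""
    return bin(a ^ b).count("1")
```

Because only large brightness changes flip bits, a lightly edited copy of an image hashes close to the original, which is the property that makes automated matching feasible at scale and also the reason borderline images need human review.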
The sad thing is how unsurprising this is to anyone who has tried to deal with Google in any capacity aside from being an ad buyer. A few years ago, Talking Points Memo’s Josh Marshall wrote about his frustration with Google’s demonetization of stories about right-wing terrorists:
With the events of recent months and years, Google is apparently now trying to weed out publishers that are using its money streams and architecture to publish hate speech. Certainly you’d probably be unhappy to hear that Stormfront was funded by ads run through Google. I’m not saying that’s happening. I’m just giving you a sense of what they are apparently trying to combat. Over the last several months we’ve gotten a few notifications from Google telling us that certain pages of ours were penalized for ‘violations’ of their ban for hate speech. When we looked at the pages they were talking about they were articles about white supremacist incidents. Most were tied to Dylann Roof’s mass murder in Charleston.
Now in practice all this meant was that two or three old stories about Dylann Roof could no longer run ads purchased through Google. I’d say it’s unlikely that loss to TPM amounted to even a cent a month. Totally meaningless. But here’s the catch. The way these warnings work and the way these particular warnings were worded, you get penalized enough times and then you’re blacklisted.
Now, certainly you’re figuring we could contact someone at Google and explain that we’re not publishing hate speech and racist violence. We’re reporting on it. Not really. We tried that. We got back a message from our rep not really understanding the distinction and cheerily telling us to try to operate within the no hate speech rules. And how many warnings until we’re blacklisted? Who knows?
TPM also faced a different issue where its main email account, a G Suite paid Gmail account, was blocked without notification because Google flagged it as a source of spam.
It is unfair to blame TPM for relying on Google’s email services, which are among the best options for a managed email product on a custom domain, or for its use of Google ads, which are ubiquitous. Similarly, heavy use of Google services, as in the cases of Mark and Cassio — the other dad in Hill’s story — is pretty normal and encouraged by the tight integration of these products.
Google’s response in each of these cases points to a lack of humanity. It reflects a complete absence of care about the context in which its products are being used, from a company that has a primarily American perspective and may miss real problems in other countries. It is a recipe for more stories like these, especially since a Google spokesperson told Hill the company stood behind its decision. Not only does Google think it did the right thing, it believes deleting Mark’s entire Google presence was the right outcome.
Someone more sympathetic than I am might point out that Google will always struggle to understand context because it is operating at a prohibitively massive scale. This is a cop-out or, at least, an incomplete thought. Google, like many other big businesses, has prioritized growth at the expense of caution because the incentives outweigh the risks. Some variation of this is true across industries, from banking to natural resources to food. Nestlé is practically synonymous with a jaw-dropping lack of ethics but people keep buying Perrier and Cheerios.
It is overly simplistic to say these problems would not exist if businesses were smaller, but I believe shrinking businesses would minimize these problems. And, when they do appear, they would have a smaller effect. I disagree with Thompson’s assessment that “it seems silly to argue that getting banned from a social media platform isn’t an infringement on individual free speech rights”; far from Thompson’s claim that “you can still say whatever you want on a street corner”, you can write whatever you want on a website untouched by Facebook or Twitter, as he did. We have never had so much freedom to speak our minds and broadcast it to an audience. But we have never placed so much of our identity in the hands of such a small number of private entities, often poorly regulated. Software and services need a warranty and they need something like a bill of rights; and, if those demands are untenable at scale, then vendors should be smaller.
Ray, a cybersecurity researcher who saw a similar item on online retailer AliExpress, knew the offer was too good to be true. He bought the drive, suspecting it was a scam, and took it apart to find out what exactly was happening. Sure enough, he found what amounted to a different item cosplaying as a big SSD. Inside were two small memory cards, and the item had been programmed to report 30TB of storage when plugged into a computer.
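The scam works because a drive’s controller, not its actual flash, reports capacity to the operating system, and the OS takes that number on faith. Tools like f3 and H2testw catch fakes by writing verifiable data across the whole advertised capacity and reading it back. Here is a minimal sketch of that idea against a simulated drive; the wrap-around firmware model is my assumption about one common way these fakes behave, not a teardown of this specific product:

```python
import hashlib

class FakeDrive:
    """Simulates a counterfeit 'SSD': it advertises a huge capacity but
    silently wraps every write onto a much smaller real flash chip.
    (Hypothetical model of common fake-flash firmware behaviour.)"""

    def __init__(self, real_bytes, advertised_bytes):
        self.flash = bytearray(real_bytes)
        self.advertised = advertised_bytes

    def write(self, offset, data):
        for i, b in enumerate(data):
            self.flash[(offset + i) % len(self.flash)] = b

    def read(self, offset, length):
        return bytes(self.flash[(offset + i) % len(self.flash)]
                     for i in range(length))

def verify_capacity(drive, samples=64):
    """f3/H2testw-style check: write a distinct, deterministic 32-byte
    pattern at offsets spread across the advertised capacity, then read
    each back. Returns the first offset whose data was lost, or the full
    advertised size if everything verified."""
    step = drive.advertised // samples
    patterns = {}
    for off in range(0, drive.advertised, step):
        patterns[off] = hashlib.sha256(off.to_bytes(8, "big")).digest()
        drive.write(off, patterns[off])
    for off, expected in patterns.items():
        if drive.read(off, 32) != expected:
            return off
    return drive.advertised
```

On a genuine drive every pattern survives; on a fake, later writes wrap around and clobber earlier ones, so the verification fails well short of the advertised size.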
As Cox writes, it may have appeared that Walmart was selling the drive, but it was actually a marketplace listing. Like Amazon, Walmart lets third-party vendors use its online store to sell their wares. Some vendors are household names, while others take the same Scrabble bag approach to branding as Amazon sellers.
Amazon and Walmart are two of many retailers you probably recognize which offer an online marketplace for third-party sellers, including Best Buy and Canadian retail giant Loblaw. Staples experimented with marketplace sales, too, but I could not find any current information about its program. These products are usually offered alongside those sold by the retailer itself, with few visual clues that they may have different return policies or expectations of quality.
In a complaint filed against Kochava, the FTC alleges that the company’s customized data feeds allow purchasers to identify and track specific mobile device users. For example, the location of a mobile device at night is likely the user’s home address and could be combined with property records to uncover their identity. In fact, the data broker has touted identifying households as one of the possible uses of its data in some marketing materials.
The FTC alleges that Kochava fails to adequately protect its data from public exposure. Until at least June 2022, Kochava allowed anyone with little effort to obtain a large sample of sensitive data and use it without restriction. The data sample the FTC examined included precise, timestamped location data collected from more than 61 million unique mobile devices in the previous week. Using Kochava’s publicly available data sample, the FTC complaint details how it is possible to identify and track people at sensitive locations […]
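The FTC’s point about nighttime locations is trivially easy to operationalize. With nothing more exotic than rounding and counting, a handful of timestamped pings yields a strong home-address candidate. This sketch uses invented synthetic data and a hypothetical record shape, not Kochava’s actual feed format:

```python
from collections import Counter
from datetime import datetime

def likely_home(pings, night_hours=range(0, 6), precision=3):
    """Guess a device's home coordinates: round each overnight ping to
    roughly 100 m precision and return the most frequently seen cell.
    `pings` is an iterable of (iso_timestamp, lat, lon) tuples (a
    hypothetical shape, not any broker's real schema)."""
    overnight = Counter(
        (round(lat, precision), round(lon, precision))
        for ts, lat, lon in pings
        if datetime.fromisoformat(ts).hour in night_hours
    )
    return overnight.most_common(1)[0][0] if overnight else None
```

Joining that coordinate against public property records, as the complaint describes, is then a simple lookup.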
“This lawsuit shows the unfortunate reality that the FTC has a fundamental misunderstanding of Kochava’s data marketplace business and other data businesses,” Kochava Collective General Manager Brian Cox said in a statement. “Kochava operates consistently and proactively in compliance with all rules and laws, including those specific to privacy.”
Cox said the company announced a new ability to block location data from sensitive locations prior to the FTC’s lawsuit. He said the company engaged with the FTC for weeks explaining the data collection process and hoped to come up with “effective solutions” with the agency.
Marketing and data companies are eager to put on a privacy-respecting guise when it suits them while promising services completely antithetical to that. For example, Kochava says it offers in its data marketplace the ability to match mobile devices — perhaps the billion unique mobile devices it also brags about — to email addresses and precise locations. Its marketing materials say it can tie those devices to households and their respective behaviour and purchasing data. Of course, on the same page, it says it is “privacy-first by design” — one wonders how that is possible when the sample data set viewed by the FTC apparently pinpoints specific users by time and location.
Want to opt out? Thanks to regulation in Europe, some U.S. states, and elsewhere, that is possible. But Kochava is uniquely dickish about it:
[…] You may submit a request to delete all your personal information by emailing Kochava at firstname.lastname@example.org or by contacting the legal department via telephone at 855-562-4282. However, please bear in mind that when you contact Kochava with such a request, because of the precautions we have proactively taken to protect your privacy, you are actually volunteering more personally identifying information to Kochava as a result of lodging the request than Kochava would have ever had prior to you initiating contact.
I call bullshit. What identifiers could you possibly give Kochava to opt out of its privacy hostile practices that it does not already know and have enriched with other data sources?
Kochava obviously wants to promote itself as uniquely precise to its audience of marketers who crave that kind of fidelity. Its claims warrant some skepticism. But time and time again this industry has proved itself to be as creepy as the brochures claim, at least in how much it collects. How it interprets that information is, in my experience, more questionable.
The FTC does not come out of this looking particularly good, either. Megan Gray on Twitter:
Methinks the agency knows it’s going to lose. Picked this company b/c thought it would settle. Oopsy. Then when company preemptively filed case, agency was in a corner and doesn’t want to be perceived as backing down from a fight.
The agency had until MID OCTOBER to respond to the DJ (and could’ve gotten an extension for further time). This was clearly rushed to capture the press cycle. I genuinely feel bad for staff.
It looks really bad for regulators to get financial settlements and modest concessions out of these cases without pushing for an admission of wrongdoing. It makes it look as though these cases are primarily for revenue generation instead of exposing heinous behaviour and setting standards for others to follow.
The North American smartphone market reached 35.4 million shipped units in Q2 2022, down 6.4% yearly amid economic challenges, high inflation, and poor seasonal demand. Apple grew 3% and has dominated over half the North American region for three consecutive quarters, thanks to solid iPhone 13 demand combined with a full quarter’s performance of its entry-level model, the iPhone SE (3rd Gen). Samsung’s shipments increased 4% as its S series and low-end A series devices continued to deliver. Lenovo (Motorola), TCL, and Google rounded out the top five, claiming 9%, 5%, and 2% market share, respectively.
It sure seems like the mini will become the next SE.
Tsai is far from the only one to explore this line of thinking. But I do not buy it; a new rumour from AppleTrack on Twitter seems more likely to me:
RUMOR: The iPhone SE 4, likely coming next year, will essentially be a rebranded iPhone XR
Expect a 6.1-inch display with Face ID, 12MP rear camera and IP67 water and dust resistance.
The iPhone SE’s unique selling point is probably its price, not its form factor. Consider that the next most expensive iPhone in Apple’s lineup is the iPhone 11, which has the same 6.1-inch display as the rumoured SE 4. Why would Apple not simply slide this phone — more or less — down the price ladder?
Congress has never been closer to passing a federal data privacy law — and the brokers that profit from information on billions of people are spending big to nudge the legislation in their favor.
The brokers, including U.K.-based data giant RELX and credit reporting agency TransUnion, want changes to the bill — such as an easing of data-sharing restrictions that RELX says would hamper investigations of crimes. Some data brokers also want clearer permission to use third-party data for advertising purposes.
The only surprising part of this is that data brokers are bragging about being treated as an extension of law enforcement. Imagine being thrilled to live in a police state, so long as it is privatized.
A unique consequence of writing about the biggest computer companies, which are all based in the United States, from most any other country is a lurking sense of invasion. I do not mean this in an anti-American sense; it is perhaps inherent to any large organization emanating from the world’s most powerful economy. But there is always a sense that the hardware, software, and services we use are designed by Americans often for Americans. You can see this in a feature set inevitably richer in the U.S. than elsewhere, language offerings that prioritize U.S. English, pricing often pegged to the U.S. dollar, and — perhaps more subtly — in the values by which these products are created and administered.
These are values that I, as someone who resides in a country broadly similar to the U.S., often believe are positive forces. A right to free expression is among those historically espoused by these companies in the use of their products. But over the past fifteen years of their widespread use, platforms like Facebook, Instagram, Twitter, and YouTube have established rules of increasing specificity and caution to restrict what they consider permissible. That, in a nutshell, is the premise of Jillian C. York’s 2021 book, Silicon Values.
Though it was published last year, I only read it recently. I am glad I did, especially with several new stories questioning the impact of a popular tech company an ocean away. TikTok’s rapid rise after decades of industry dominance by American giants is causing a re-evaluation of an America-first perspective. Om Malik put it well:
For as long as I can remember, American technology habits did shape the world. Today, the biggest user base doesn’t live in the US. Billion-plus Indians do things differently. Ditto for China. Russia. Africa. These are giant markets, capable of dooming any technology that attempts a one-size-fits-all approach.
The path taken by York in Silicon Values gets right up to the first line of this quote from Malik. In the closing chapter, York (228) writes:
I used to believe that platforms should not moderate speech; that they should take a hands-off approach, with very few exceptions. That was naïve. I still believe that Silicon Valley shouldn’t be the arbiter of what we can say, but the simple fact is that we have entrusted these corporations to do just that, and as such, they must use wisely the responsibility that they have been given.
I am not sure this is exactly correct. We often do not trust the judgements of moderation teams, as evidenced by frequent complaints about what is permissible and, more often, what gets flagged, demonetized, or removed. As I was writing this article, reporters noted that Twitter took moderation action against doctors and scientists posting factual, non-controversial information about COVID-19. This erroneous flagging was reverted, but it is another in a series of stories about questionable decisions made by big platforms.
In fact, much of Silicon Values is about the tension between the power of these giants to shape the permissible bounds of public conversations and their disquieting influence. At the beginning of the book, York points to a 1946 U.S. Supreme Court decision, Marsh v. Alabama, which held that private entities can become sufficiently large and public to require them to be subject to the same Constitutional constraints as government entities. Though York says this ruling has “not as of this writing been applied to the quasi-public spaces of the internet” (14), I found a case which attempted to use Marsh to push against a moderation decision. In an appellate decision in Prager University v. Google, Judge M. Margaret McKeown wrote (PDF) “PragerU’s reliance on Marsh is not persuasive”. More importantly, McKeown reflected on the tension between influence and expectations:
Both sides say that the sky will fall if we do not adopt their position. PragerU prophesizes living under the tyranny of big-tech, possessing the power to censor any speech it does not like. YouTube and several amicus curiae, on the other hand, foretell the undoing of the Internet if online speech is regulated. While these arguments have interesting and important roles to play in policy discussions concerning the future of the Internet, they do not figure into our straightforward application of the First Amendment.
All of the subjects concerned being American, it makes sense to judge these actions on American legal principles. But even if YouTube were treated as an extension of government due to its size and required to retain every non-criminal video uploaded to its service, it would make as much of a political statement elsewhere, if not more. In France and Germany, it — like any other company — must comply with laws that require the removal of hate speech, laws which in the U.S. would be unconstitutional. York (19) contrasts their eager compliance with Facebook’s memorable inaction to rein in hate speech that contributed to the genocide of Rohingya people in Myanmar. Even if this is a difference of legal policy — that France and Germany have laws but Myanmar does not — it is clearly unethical for Facebook to have inadequately moderated this use of its platform.
The concept of an online world no longer influenced largely by U.S. soft power brings us back to the tension with TikTok and its Chinese ownership. It understandably makes some people nervous that the most popular social media platform for many Americans has the backing of an authoritarian regime. Some worry about the possibility of external government influence on public policy and discourse, though one study I found reflects a clear difference in moderation principles between TikTok and its Chinese-specific counterpart Douyin. Some are concerned about the mass collection of private data. I get it.
But from my Canadian perspective, it feels like most of the world is caught up in an argument between a superpower and a near-superpower, with continued dominance by the U.S. preferable only by comparison and familiarity. Several European countries have banned Google Analytics because it is impossible for their citizens to be protected against surveillance by American intelligence agencies. The U.S. may have legal processes to restrict ad hoc access by its spies, but those are something of a formality. Its processes are conducted in secret and with poor public oversight. What is known is that it rarely rejects warrants for surveillance, and that private companies must quietly comply with document requests with little opportunity for rebuttal or transparency. Sometimes, these processes are circumvented entirely. The data broker business permits surveillance for anyone willing to pay — including U.S. authorities.
The privacy angle holds little more weight. While it is concerning for an authoritarian government to be on the receiving end of surveillance technologies rather than advertising and marketing firms, it is unclear that any specific app disproportionately contributes to this sea of data. Banning TikTok does not make for a meaningful reduction of visibility into individual behaviours.
Even concerns about how much a recommendation algorithm may sway voter intent smell funny. Like Facebook before it, TikTok has downplayed the seriousness of its platform by framing it as an entertainment venue. As with other platforms, disinformation on TikTok spreads and multiplies. These factors may have an effect on how people vote. But the sudden alarm over yet-unproved allegations of algorithmic meddling in TikTok to boost Chinese interests is laughable to those of us who have been at the mercy of American-created algorithms despite living elsewhere. American state actors have also taken advantage of the popularity of social networks in ways not dissimilar from political adversaries.
However, it would be wrong to conclude that both countries are basically the same. They obviously differ in their means of governance and the freedoms afforded to people. The problem is that I should not be able to find so many similarities in the use of technology as a form of soft power, and certainly not for spying, between a democratic nation and an authoritarian one. The mount from which Silicon Values are being shouted looks awfully short from this perspective.
You do not need me to tell you that decades of undermining democracy within our countries has caused a rise in autocratic leanings, even in countries assumed stable. The degradation of faith in democratic institutions is part of a downward spiral caused by internal undermining and a failure to uphold democratic values. Again, there are clear differences and I do not pretend otherwise. You will not be thrown in jail for disagreeing with the President or Prime Minister, and please spare me the cynical and ridiculous “yet!” responses.
I wish there were a clear set of instructions about where to go from here. Silicon Values is, understandably, not a book about solutions; it is an exploration of often conflicting problems. York delivers compelling defences of free expression on the web, maddening cases where newsworthy posts were removed, and the inequity of platform moderation rules. It is not a secret, nor a compelling narrative, that rules are applied inconsistently, and that famous and rich people are treated with more lenience than the rest of us. But what York notes is how aligned platforms are with the biases of upper-class white Americans; not coincidentally, the boards and executive teams of these companies are dominated by people matching that description.
The question of how to apply more local customs and behaviours to a global platform is, I believe, the defining challenge of the next decade in tech. One thing seems clear to me: the world’s democracies need to do better. It should not be so easy to point to similarities in egregious behaviour; corruption of legal processes should not be so common. I worry that regulators in China and the U.S. will spend so much time negotiating which of them gets to treat the internet as their domain while the rest of us get steamrolled by policies that maximize their self-preferencing.
This is especially true as waves of stories have been published recently alleging TikTok and its adjacent companies have suspicious ties to arms of an autocratic state. Lots of TikTok employees apparently used to work for China’s state media outlets and, in another app from ByteDance, TikTok’s owner, pro-China stories were regularly promoted while critical news was minimized. ByteDance sure seems to be working more closely with government officials than operators of other social media platforms. That is probably not great; we all should be able to publish negative opinions about lawmakers and big businesses without fear of reprisal.
There is a laundry list of reasons why we must invest more in our democratic institutions. One of them is, I believe, to ensure a clear set of values projected into the world. One way to achieve that is to prefer protocols over platforms. It is impossible for Facebook or Twitter or YouTube to be moderated to the full expectations of its users, and the growth of platforms like Rumble is a natural offshoot of that. But platforms like Rumble which trumpet their free speech bonafides are missing the point: moderation is good, normal, and reinforces free speech principles. It is right for platform owners to decide the range of permissible posts. What is worrying is the size and scope of them. Facebook moderates the discussions of billions — with a b and an s — of people worldwide. In some places, this can permit greater expression, but it is also an impossible task to monitor well.
The ambition of Silicon Valley’s biggest businesses has not gone unnoticed outside of the U.S. and, from my perspective, feels out of place. Yes, the country’s light touch approach to regulation and generous support of its tech industry has brought the world many of its most popular products and services. But it should not be assumed that we must rely on these companies built in the context of middle- and upper-class America. That is not an anti-American statement; nothing in this piece should be construed as anti-American. Far from it. But I am dismayed after my reading of Silicon Values. What I would like is an internet where platforms are not so giant, common moderation actions are not viewed as weapons, and more power is in more relevant hands.
You may remember when Google announced in June that it was adding Google Meet features to Google Duo, then renaming the app Google Meet, while preserving its original Google Meet app for some time. It turns out that strategy was not as easy to understand as you might think.
At the start of August, an update (172) started rolling out that replaced the blue Duo icon and introduced the four-colored Meet version. After updating and opening the app, Duo disappears from the launcher.
Version 173 today brings back the Google Duo icon for some reason. As such, you have both the Duo and Meet logos in your app drawer, with both working to launch the new unified Meet-Duo experience.
This is as clear as Google’s messaging strategy has ever been. The thing I have learned from this is that Google thinks launching Meet when users type “Duo” in the search field is some kind of insurmountable technical obstacle.
Marques likes the Ioniq 5 a lot. From everything I have seen, I do too. I think this is the first electric car I have truly wanted — if it were about twenty percent smaller and twenty percent less expensive.
Twitter has major security problems that pose a threat to its own users’ personal information, to company shareholders, to national security, and to democracy, according to an explosive whistleblower disclosure obtained exclusively by CNN and The Washington Post.
The whistleblower, who has agreed to be publicly identified, is Peiter “Mudge” Zatko, who was previously the company’s head of security, reporting directly to the CEO. Zatko further alleges that Twitter’s leadership has misled its own board and government regulators about its security vulnerabilities, including some that could allegedly open the door to foreign spying or manipulation, hacking and disinformation campaigns. The whistleblower also alleges Twitter does not reliably delete users’ data after they cancel their accounts, in some cases because the company has lost track of the information, and that it has misled regulators about whether it deletes the data as it is required to do. The whistleblower also says Twitter executives don’t have the resources to fully understand the true number of bots on the platform, and were not motivated to. Bots have recently become central to Elon Musk’s attempts to back out of a $44 billion deal to buy the company (although Twitter denies Musk’s claims).
You can read Mudge’s whistleblower disclosure and infosec report — both PDFs — for yourself, if you would like. Both contain heavily redacted sections, especially around claims of corporate fraud.
Mike Masnick reviewed these reports in two parts at Techdirt. Masnick’s first piece analyzed Mudge’s claims about Twitter’s security infrastructure, its compliance with an FTC consent decree, and whether foreign spies had been embedded deep within the company. The second, published today, responds to the many stories claiming Mudge’s findings will bolster Elon Musk’s justification for backing out of his acquisition of Twitter:
So, let’s dive into those details. The first and most important thing to remember is that, even as Musk insists otherwise, the Twitter lawsuit is not about spam. It just is not. I’m not going to repeat everything in that earlier story explaining why not, so if you haven’t read that yet, please do. But the core of it is that Musk needed an escape hatch from the deal he didn’t want to consummate and the best his lawyers could come up with was to claim that Twitter was being misleading in its SEC reporting regarding spam. (As an aside, there is very strong evidence that Musk didn’t care at all about the SEC filings until he suddenly needed an escape hatch, and certainly didn’t rely on them).
Reading through all of this, anyone who actually understands the details — including what’s at play in the lawsuit — should see that Mudge is actually confirming the only thing that matters for the lawsuit: that the numbers Twitter reported to the SEC for mDAU involves estimating how much spam they mistakenly include in mDAU and not how much spam is on the platform as a whole. If the actual total amount of spam on the platform is higher than that, it doesn’t help Musk, because Musk’s legal argument is predicated on the <5% reported to the SEC.
Other executives — including Sean Edgett, the general counsel, and the privacy and security executives Damien Kieran and Lea Kissner — echoed Mr. Agrawal.
“We have never made a material misrepresentation to a regulator, to our board, to all of you,” Mr. Edgett said. “We are in full compliance with our F.T.C. consent decree.” He added that an external auditor reviews Twitter’s compliance with the decree every two years.
I read both of the PDFs linked above and, if true, they paint a picture of a company where developers have extraordinary latitude with few access controls and virtually no logging of their actions. If Mudge’s claims prove correct, Twitter’s board has been misled and the company constantly puts its users’ activity at risk. But after reading Masnick’s careful analysis, I am less convinced of the more headline-making claims in these documents.
Code seen by 9to5Mac makes it clear that the Wallet app has become “deletable” with iOS 16.1. Unsurprisingly, some features like Apple Pay won’t work without the Wallet app. In this case, users will see a message telling them to “Download the Wallet app from the App Store.”
Since iOS 16.1 is not yet available for iPhone and the iPad lacks the Wallet app, we haven’t been able to see this new option in action.
I have to wonder whether this will be launched as Espósito describes. If the Wallet app is deletable but cannot be replaced, it does not seem like it would assuage the self-preferencing concerns raised by European regulators. It is possible this could be revised to suggest the installation of a different wallet application — the E.U.’s own, for example. But making it removable without the ability for third-party apps to use the NFC system or override the double-click home button shortcut seems like it would appease neither users nor regulators.
Apple has been picked apart for so many lessons that it could start its own business school, but what the case of Apple Pay shows is that patience is a competitive edge for companies that know how to wield it.
The percentage of iPhones with Apple Pay activated was 10% in 2016 and 20% in 2017, according to research from Loup Ventures, as most people seemed perfectly happy with their plastic cards and leather wallets. Adoption nearly doubled again in 2018. It hit 50% by 2020. Now it’s around 75% and inching closer to ubiquity. Of course, not every account that gets activated remains in active use.
The winner here has been, and still is, the contactless card, the tap-to-pay functionality that has garnered a 14% share of in-person payments — and as seen in the chart below, that percentage has been growing all through the pandemic. Throw in the physical, plastic cards themselves, and debit cards have snared 44% of transactions and credit cards have a 27% share, as measured by the second quarter of this year.
I spent the past few weeks leaving cards in my pocket and tapping my phone wherever I could. But there are still plenty of places where I couldn’t. Restaurants have been slow to embrace the technology required for Apple Pay. Gas stations have been reluctant to spend on upgrading their pumps. Walmart, which favors its own mobile payment option, remains the most notable holdout among retailers.
For what it is worth, I cannot think of a single terminal I have used in the past couple of years that has not supported tap-based payments. I have been paying for groceries by tapping my card for even longer than that. Every gas station I have visited permits me to pay by tap. Even though I know Apple Pay offers a higher level of security than a physical card, I cannot remember the last terminal I tapped with my phone; it is always easier to grab the right card from my wallet. I use Apple Pay frequently with online payments, though, so that is something.
Apple this morning is rolling out iPadOS 16.1 beta to enrolled developer devices. It’s a break from the standard release cadence, which has tied together the tablet operating system with its smartphone counterpart, iOS, since its first release in 2019.
In a comment to TechCrunch, the company notes, “This is an especially big year for iPadOS. As its own platform with features specifically designed for iPad, we have the flexibility to deliver iPadOS on its own schedule. This Fall, iPadOS will ship after iOS, as version 16.1 in a free software update.”
Contrary to Heater’s comment, iPadOS has not always been released in sync with iOS. Apple’s very first “iPadOS” release, iPadOS 13.1, did not ship until five days after iOS 13; iPadOS 13.0 was never made publicly available. Before then, the first version of iOS 4 available for iPads was iOS 4.2.
I continue to think that my devices are now too secure. Face ID shouldn’t freak out multiple times a day, requiring a pin. Safari shouldn’t scrap cookies every week, requiring needless extra web sign-ins. Any security beyond unlocking my Mac is usually unnecessary friction.
I agree with Reece’s diagnosis of the problem, but not its cause. If someone is logged into a user account on a Mac, everything in the keychain is probably unlocked and available to them as well. And if they have text message forwarding enabled on their iPhone, an SMS-based two-factor code will appear in Messages. Yet, in what amounts to security theatre, I need to reauthenticate several times weekly on websites and in applications I use all the time. I have to sign into this website — you know, the one I solely write and administer — probably once a week for each device I use.
I get why some of these measures are in place, particularly where tracking cookies are concerned. But I wish there were a way to simply tell my computer that I — me, Nick Heer — am sitting in front of it and have all the doors opened and locks unlocked without further inquiry.
In June, [Erik] Prince publicly revealed the new phone, priced at $850. But before that, beginning in 2021, he was privately hawking the device to investors — using a previously unreported pitch deck that has been obtained by MIT Technology Review. It boldly claims that the phone and its operating system are “impenetrable” to surveillance, interception, and tampering, and its messenger service is marketed as “impossible to intercept or decrypt.”
Boasting falsely that Unplugged has built “the first operating system free of big tech monetization and analytics,” Prince bragged that the device is protected by “government-grade encryption.” Better yet, the pitch added, Unplugged is to be hosted on a global array of server farms so that it “can never be taken offline.” One option is said to be a server farm “on a vessel” located in an “undisclosed location on international waters, connected via satellite to Elon Musk’s StarLink.” An Unplugged spokesperson explained that “they benefit in having servers not be subject to any governmental law.”
Reminds me of the long-running libertarian fantasy of living on a barge on the ocean. This whole venture is completely untethered — unplugged, even — from reality.
The iOS Instagram and Facebook app render all third party links and ads within their app using a custom in-app browser. This causes various risks for the user, with the host app being able to track every single interaction with external websites, from all form inputs like passwords and addresses, to every single tap.
When you open any link on the TikTok iOS app, it’s opened inside their in-app browser. While you are interacting with the website, TikTok subscribes to all keyboard inputs (including passwords, credit card information, etc.) and every tap on the screen, like which buttons and links you click.
Instagram iOS subscribes to every tap on any button, link, image or other component on external websites rendered inside the Instagram app.
Is TikTok a keylogger? Is Instagram monitoring every tap on a loaded webpage? It is impossible to say, but it does not look good that either of these privacy-invasive apps is so reckless with users’ ostensibly external activity.
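Krause’s reports describe the mechanism at a high level: the host app loads the page in its own web view and injects a script that subscribes to input events alongside the site’s own code. Here is a minimal sketch of that pattern; the function name, event payloads, and stand-in event target are all hypothetical, so this runs anywhere, and none of it is Instagram’s or TikTok’s actual code:

```javascript
// Hypothetical sketch of an injected tracking script. An in-app browser
// can run something like this in every page it loads.
function installMonitor(target, report) {
  // Subscribe to every tap on the page, as Krause observed Instagram doing.
  target.addEventListener("click", (e) => {
    report({ kind: "tap", element: e.targetName });
  });
  // Subscribe to every keystroke, which would include characters typed
  // into password and credit card fields on the loaded site.
  target.addEventListener("keydown", (e) => {
    report({ kind: "key", key: e.key });
  });
}

// A stand-in event target so the sketch runs outside a browser; in a real
// web view, `document` would play this role.
class FakePage {
  constructor() {
    this.listeners = {};
  }
  addEventListener(type, fn) {
    (this.listeners[type] = this.listeners[type] || []).push(fn);
  }
  dispatch(type, event) {
    (this.listeners[type] || []).forEach((fn) => fn(event));
  }
}

const captured = [];
const page = new FakePage();
installMonitor(page, (event) => captured.push(event));

// A visitor types one digit of a card number, then taps a button; the
// injected script records both without the website’s knowledge.
page.dispatch("keydown", { key: "4" });
page.dispatch("click", { targetName: "Pay" });
console.log(captured);
```

The point of the sketch is how little is required: two event listeners give the host app a complete record of taps and keystrokes, and the website has no reliable way to detect or refuse them.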
It reminds me of when iOS 14 revealed a bunch of apps, including TikTok, were automatically reading pasteboard data. It cannot be known for certain what happened to all of the credit card numbers, passwords, phone numbers, and private information collected by these apps. Perhaps some strings were discarded because they did not match the format an app was looking for, like a parcel tracking number or a URL. Or perhaps some ended up in analytics logs collected by the developer. We cannot know for sure.
What we do know is how invasive big-name applications are, and how little their developers really care about users’ privacy. There is no effort at minimization. On the contrary, there is plenty of evidence for maximizing the amount of information collected about each user at as granular a level as possible.