As the FBI continues to round up rioters who stormed the U.S. Capitol on Jan. 6 in an attempt to stop President-elect Joe Biden’s inauguration, it is finding that a number of them appear to have openly confessed to crimes on social media, a review of court documents shows.
The subject of another much-circulated photo, of a cheerful and waving bearded man walking through the Capitol with the speaker’s lectern, has been identified by the Bradenton Herald as Florida man Adam Johnson (not “Via Getty”). Johnson was arrested on Friday and hit with the same three charges as Barnett. The complaint against Johnson references photos posted on his own Facebook account that appear to show him inside the Capitol building and were sourced from a newspaper article about the riot. Additionally, someone who has a mutual friend with Johnson called the FBI to report that he was the man in the photo with the lectern.
Johnson’s lawyer admitted to reporters that the photograph of his client is “a problem.”
“I’m not a magician,” Dan Eckhart added. “We’ve got a photograph of our client in what appears to be inside a federal building or inside the Capitol with government property.”
An affidavit from an FBI special agent filed in court Tuesday says Eduardo Florea stockpiled more than 1,000 rounds of ammo and threatened to kill Sen.-elect Raphael Warnock of Georgia.
The affidavit says the FBI received records from Parler to identify the user behind the account “LoneWolfWar,” where the threats originated. Parler provided the phone number associated with the account, the affidavit says, and the FBI used it, and info from T-Mobile, to identify Florea.
Tinder, Bumble and other dating apps are using images captured from inside the Capitol siege and other evidence to identify and ban rioters’ accounts, causing immediate consequences for those who participated as police move toward making hundreds of arrests.
Amanda Spataro, a 25-year-old logistics coordinator in Tampa, called it her “civic duty” to swipe through dating apps for men who’d posted incriminating pictures of themselves. On Bumble, she found one man with a picture that seemed likely to have come from the insurrection; his response to a prompt about his “perfect first date” was: “Storming the Capitol.”
“Most people, you think if you’re going to commit a crime, you’re not going to brag about it,” Spataro said in an interview.
You would think that, wouldn’t you? But only if you, you know, think.
The Capitol riot was a boundary-busting event in almost every way, and its impact on the digital privacy debate was no different. The insurrectionists’ acts were so galling, so frightening, that suddenly, even those who might oppose digital surveillance and forensics techniques in other contexts, like, say, identifying peaceful protesters at a Black Lives Matter rally, feel justified in deploying those tools against the rioters. The shifting goalposts have sparked a tense debate among researchers of online extremism about the right way to stitch together the digital scraps of someone’s life to publicly accuse them of committing a crime — or whether there is a right way at all.
I think a piece by Astead W. Herndon in the New York Times is a good explanation of the false equivalence between Black Lives Matter protests and the criminal surge of U.S. Capitol rioting morons. But Lapowsky’s article raises good arguments about the dangers of false accusations, attempts at mob justice, and the risks faced by those identifying extremists.
A widely adopted, decentralized protocol is an opportunity for social networks to “pass the buck” on moderation responsibilities to a broader network, one person involved with the early stages of bluesky suggests, allowing individual applications on the protocol to decide which accounts and networks its users are blocked from accessing.
Social platforms like Parler or Gab could theoretically rebuild their networks on bluesky, benefitting from its stability and the network effects of an open protocol. Researchers involved are also clear that such a system would provide a meaningful measure against government censorship and protect the speech of marginalized groups across the globe.
The internet itself is composed of a series of decentralized protocols. While I don’t want to minimize the worries of those involved with the oddly-lowercased bluesky effort, a universal protocol for short messages seems more in line with the internet I remember before a handful of big American platforms corralled the worldwide market for communication. One could see it as “passing the buck”, but it is equally valid to see this as reducing singular influence and control.
It is baffling to me that, in 2021, I still do not know the security practices of the devices and cloud services I use more frequently than ever.
This became particularly worrisome last year when I began working my day job from my personal computer. I have several things in my favour: it is an iMac, not a portable computer, so there is dramatically less risk of unauthorized physical access; I keep an encrypted Time Machine backup and an encrypted Backblaze remote backup; I use pretty good passwords. But what about my phone and iCloud, for example? I do not use either for much work stuff, but I inevitably have some communications and two-factor authentication apps on my iPhone, and I use iCloud for backups.
Over the holidays, I immersed myself in an early copy of a new report by Johns Hopkins University students Maximilian Zinkus and Tushar Jois, and associate professor Matthew Green, as I tried to find answers for what should be simple questions. The researchers’ conclusions in the now-published report were eye-opening to me:
Limited benefit of encryption for powered-on devices. We observed that a surprising amount of sensitive data maintained by built-in applications is protected using a weak “available after first unlock” (AFU) protection class, which does not evict decryption keys from memory when the phone is locked. The impact is that the vast majority of sensitive user data from Apple’s built-in applications can be accessed from a phone that is captured and logically exploited while it is in a powered-on (but locked) state.
Limitations of “end-to-end encrypted” cloud services. Several Apple cloud services advertise “end-to-end” encryption in which only the user (with knowledge of a password or passcode) can access cloud-stored data. We find that the end-to-end confidentiality of some encrypted services is undermined when used in tandem with the iCloud backup service. More critically, we observe that Apple’s documentation and user settings blur the distinction between “encrypted” (such that Apple has access) and “end-to-end encrypted” in a manner that makes it difficult to understand which data is available to Apple. Finally, we observe a fundamental weakness in the system: Apple can easily cause user data to be re-provisioned to a new (and possibly compromised) [Hardware Security Module] simply by presenting a single dialog on a user’s phone. We discuss techniques for mitigating this vulnerability.
The muddy distinction between “encryption”, “end-to-end encryption”, and “true end-to-end encryption in a way that Apple cannot reverse” remains a source of consternation, especially as Apple’s own documentation is anything but precise.
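The “available after first unlock” weakness the researchers describe is easier to picture with a toy model. To be clear, this is a simplified sketch of the key-eviction behaviour, not Apple’s actual Data Protection implementation; the class names and mechanics here are illustrative assumptions only:

```python
from enum import Enum

class Protection(Enum):
    COMPLETE = "complete"        # keys evicted from memory whenever the device locks
    AFU = "after_first_unlock"   # keys stay in memory once the device is unlocked once

class ToyDevice:
    """Toy model of per-class decryption key eviction on lock."""

    def __init__(self):
        self.keys = {}  # decryption keys currently held in memory

    def unlock(self):
        # Unlocking derives and caches keys for both protection classes.
        self.keys = {Protection.COMPLETE: "key-C", Protection.AFU: "key-A"}

    def lock(self):
        # Only the Complete-class key is evicted; the AFU key survives the lock.
        self.keys.pop(Protection.COMPLETE, None)

    def can_read(self, protection):
        # Data is readable only while its class key remains in memory.
        return protection in self.keys

device = ToyDevice()
device.unlock()  # user unlocks once after boot
device.lock()    # phone is now locked, but still powered on

print(device.can_read(Protection.COMPLETE))  # False: key was evicted on lock
print(device.can_read(Protection.AFU))       # True: data remains exposed
```

If the “vast majority of sensitive user data” sits in the AFU class, as the report says, then a locked-but-powered-on phone in this model is barely better protected than an unlocked one.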
“It just really shocked me, because I came into this project thinking that these phones are really protecting user data well,” says Johns Hopkins cryptographer Matthew Green, who oversaw the research. “Now I’ve come out of the project thinking almost nothing is protected as much as it could be. So why do we need a backdoor for law enforcement when the protections that these phones actually offer are so bad?”
If there is one conclusion of this report that is damning by its silver lining, it is that it calls bullshit on law enforcement’s insistence that smartphone encryption creates some sort of investigative black hole. Maybe Apple has successfully created encryption that is strong enough for personal and business use with a strictly-controlled opening for legitimate legal use — by sticking itself in the middle of that chain. That is how I am reading between the lines of the statement an unidentified spokesperson provided Wired:
The researchers shared their findings with the Android and iOS teams ahead of publication. An Apple spokesperson told WIRED that the company’s security work is focused on protecting users from hackers, thieves, and criminals looking to steal personal information. The types of attacks the researchers are looking at are very costly to develop, the spokesperson pointed out; they require physical access to the target device and only work until Apple patches the vulnerabilities they exploit. Apple also stressed that its goal with iOS is to balance security and convenience.
There are security problems with iOS devices and iCloud services that Apple can and should fix, but I bet there are many that it will not because it is perhaps unwise to be a company that is explicitly trying to block any subpoena from having an effect. If that is the case, Apple ought to say so. It should be plainly clear to users what their security options are, and Apple ought to be more honest in its marketing and documentation of these features.
If Apple is appointing itself guardian of its users’ data — in iCloud form, including defaulted-to-on iCloud Backups of iPhones, iPads, and Apple Watches — that also means that it can respond to law enforcement requests at any level by any agency. Depending on how much you trust your local police and national intelligence services, perhaps that does not seem like a great idea to you. More worrying is that it leaves Apple open to potentially being a part of corrupt regimes’ human rights abuses if it is responsive to data requests for activists’ accounts, or if it complies with device search requests from border patrol.
Maybe there are only bad options, and this is the one that strikes the least-worst balance between individual security and mass security. But the compromises seem real and profound — and are, officially, undocumented.
Do you want to know what Apple’s 2021 Mac lineup looks like? Well, new reports from Ming-Chi Kuo and Mark Gurman that dropped in rapid succession today — almost like there was a meeting at Apple this week to discuss new products — paint a rosy picture.
Juli Clover, of the very appropriately named MacRumors:
According to Kuo, Apple is developing two models in 14 and 16-inch size options. The new MacBook Pro machines will feature a flat-edged design, which Kuo describes as “similar to the iPhone 12” with no curves like current models. It will be the most significant design update to the MacBook Pro in the last five years.
There will be no OLED Touch Bar included, with Apple instead returning to physical function keys. Kuo says the MagSafe charging connector design will be restored, though it’s not quite clear what that means as Apple has transitioned to USB-C. The refreshed MacBook Pro models will have additional ports, and Kuo says that most people may not need to purchase dongles to supplement the available ports on the new machines. Since 2016, Apple’s MacBook Pro models have been limited to USB-C ports with no other ports available.
All of the new MacBook Pro models will feature Apple silicon chips, and there will be no Intel chip options included.
These leaks were echoed by Mark Gurman, who also added that the displays in the new MacBook Pro models would be brighter and higher-contrast.
If these rumours are accurate, these products seem inspired by the early 2010s golden age of the MacBook Pro: lots of ports, MagSafe, and a great keyboard. All of these things were part of the much-loved models of that time before they were removed in favour of four USB-C and Thunderbolt combo ports which doubled as charging ports, and a poor keyboard. The latter problem was fixed; the former decision still feels like a compromise too much of the time. The excitement for these rumours seems telling. You’ve got to wonder what ports would be added; I can’t see USB-A or Ethernet making a comeback, and even HDMI and Micro SD ports feel like a stretch.
I would still love to read a deeply reported explanation of what happened with the Mac notebook range from 2012 through the present day. I think there must be an interesting story in there about being ready for the short-term backlash of trying new things, only to find that long-term compromises remain.
Apple Inc. is planning the first redesign of its iMac all-in-one desktop computer since 2012, part of a shift away from Intel Corp. processors to its own silicon, according to people familiar with the plans.
The new models will slim down the thick black borders around the screen and do away with the sizable metal chin area in favor of a design similar to Apple’s Pro Display XDR monitor. These iMacs will have a flat back, moving away from the curved rear of the current iMac. Apple is planning to launch two versions — codenamed J456 and J457 — to replace the existing 21.5-inch and 27-inch models later this year, the people said, asking not to be identified because the products are not yet announced.
Gurman also says that Apple is working on two new Mac Pro models — one of which he says may continue to use Intel’s processors, but that does not pass my sniff test — and a less-expensive standalone display.
The rumours that were published today represent nearly every Mac in Apple’s lineup that has yet to receive Apple’s own processors, with the exception of the iMac Pro. But, given the M1’s performance and the smaller Mac Pro model, it is possible the iMac Pro may simply be discontinued.
I’m just spitballing but, maybe if Apple’s feeling in a real retro mood, the new iMac could just be called the “Mac”. Just a thought.
Ben Thompson, on the different responses around the world to tech companies’ restrictions over the past week:
Make no mistake, Europe is far more restrictive on speech than the U.S. is, including strict anti-Nazi laws in Germany, the right to be forgotten, and other prohibitions on broadly defined “harms”; the difference from the German and French perspective, though, is that those restrictions come from the government, not private companies.
This sentiment, as I noted yesterday, is completely foreign to Americans, who whatever their differences on the degree to which online speech should be policed, are united in their belief that the legislature is the wrong place to start; the First Amendment isn’t just a law, but a culture. The implication of American tech companies serving the entire world, though, is that that American culture, so familiar to Americans yet anathema to most Europeans, is the only choice for the latter.
One of the reasons it is interesting to be a Canadian writing about tech is that, generally speaking, we take influence from both Western European and American perspectives on all sorts of matters. Our right of expression is not as wide-ranging as that of the U.S., but it lacks many European limitations as well. Like many Europeans, Canadians feel perfectly able to express their views in public, perhaps more so than Americans do in their own country, and do not find the small number of legal limitations restrictive.
This week’s sweeping restrictions of the social media accounts of the president of the United States and the deplatforming of Parler were a necessarily American response to problems in America. The president was not silenced or censored, but his association with private companies was revoked because they did not want to deal with his particular brand of nightmare fuel. But it is clearly not a solution for worldwide issues — especially when non-U.S. countries struggle to enforce their laws against American companies.
Thompson’s prediction for the future of the internet is intriguing:
Here technology itself will return to the forefront: if the priority for an increasing number of citizens, companies, and countries is to escape centralization, then the answer will not be competing centralized entities, but rather a return to open protocols. This is the only way to match and perhaps surpass the R&D advantages enjoyed by centralized tech companies; open technologies can be worked on collectively, and forked individually, gaining both the benefits of scale and inevitability of sovereignty and self-determination.
Apple last year pledged a hundred million dollars to a new Racial Equity and Justice Initiative, promising big investments in underrepresented individuals and communities, initially in the United States and then around the world.
Apple today announced a set of major new projects as part of its $100 million Racial Equity and Justice Initiative (REJI) to help dismantle systemic barriers to opportunity and combat injustices faced by communities of colour. These forward-looking and comprehensive efforts include the Propel Center, a first-of-its-kind global innovation and learning hub for Historically Black Colleges and Universities (HBCUs); an Apple Developer Academy to support coding and tech education for students in Detroit; and venture capital funding for Black and Brown entrepreneurs. Together, Apple’s REJI commitments aim to expand opportunities for communities of colour across the country and to help build the next generation of diverse leaders.
A former Apple employee who noted that he was “not Black or Hispanic” described his experience on a team that was developing speech recognition for Siri, the virtual assistant program. As they worked on different English dialects — Australian, Singaporean, and Indian English — he asked his boss: “What about African American English?” To this his boss responded: “Well, Apple products are for the premium market.”
Benjamin notes that this interaction took place a year after Apple acquired Dr. Dre’s Beats brand:
The irony, the former employee seemed to imply, was that the company could somehow devalue and value Blackness at the same time.
For what it is worth, in the video introducing Apple’s racial equity initiative in June, Cook acknowledged the need for thorough correction:
In our supply chain and professional service partners, we’re committed to increasing our total spending with Black-owned partners, and increasing representation across companies we do business with. […]
We’re taking significant new steps on diversity and inclusion within Apple, because there is more that we can and must do to hire, develop, and support those from underrepresented groups — especially our Black and Brown colleagues.
Change begins at the top. But the “top” is somewhat relative: it is true not only at the executive level, but at each layer of management. African American English is still not a language option in Siri five years later, apparently at least in part because it doesn’t fit the “premium market” — and that is just one example. These changes take time and the projects announced today are surely a terrific investment in the future, but it must be acknowledged that Apple continues to have internal deficiencies today that it has the power to correct.
The research began with the observation that in the offline world, healthy communities have traditionally been served by thriving public spaces: town squares, libraries, parks, and so on. Like digital social networks, these spaces are open to all. But unlike those networks, they are owned by the community rather than a corporation. As you would expect, that difference results in a very different experience for the user.
Public spaces display a number of features that build healthier communities, according to researchers. “Humans have designed spaces for public life for millennia,” they write, “and there are lessons here that can be helpful for digital life.”
Even if the specifics of this research may need ironing out, the gist of it is inspiring. I looked through Civil Signals’ slide deck; I thought this was an eye-opening observation about the language often used for the ways social networks ought to be improved:
Encourage Civility. What counts as civil is often defined by dominant groups.
Reduce Polarization. Polarization isn’t the problem — dehumanization and lack of cross-connection are.
Increase Diversity. Mere contact with other groups or their ideas does not increase tolerance.
Inform People. Not all information is equally valuable to citizens.
Increase Trust. Not all institutions or individuals deserve trust.
Allow Participatory Governance. We think this is an important idea, but outside the scope of this research.
Last point notwithstanding, these are excellent arguments against which apparent improvements should be tested.
WhatsApp rival Telegram has seen a 500 per cent increase in new users amid widespread dissatisfaction with the way the Facebook-owned app handles people’s data.
Telegram recorded 25 million new users over the last 72 hours, according to founder Pavel Durov, taking the total number of users above 500 million.
This is roughly a quarter of the estimated 2 billion WhatsApp users around the world, though many users of the world’s most popular messaging app took to social media this week to urge others to leave the platform due to privacy concerns.
Right-wing extremists are using channels on the encrypted communication app Telegram to call for violence against government officials on Jan. 20, the day President-elect Joe Biden is inaugurated, with some extremists sharing knowledge of how to make, conceal and use homemade guns and bombs.
The messages are being posted in Telegram chatrooms where white supremacist content has been freely shared for months, but chatter on the channels has increased since extremists have been forced off other platforms in the wake of the siege of the U.S. Capitol last week by pro-Trump rioters.
Ignore the scary use of “encrypted” here — Telegram Channels are not encrypted, and encryption itself is not a specific worry. What interests me are the ways that misinformation is spreading now.
Telegram Channels are public, and messages posted there can be forwarded within Telegram to other Channels, Groups, and individuals. They can also be shared as standard web links, which is how I came across a semi-popular post — archived here — claiming that:
Apple is about to pull Telegram from the App Store
Apple is going to remotely delete all existing copies of Telegram installed on users’ devices
You can prevent your copy from being deleted by disabling the ability to delete apps in parental controls
I have no reason to suspect the first claim is true: Telegram is a widely-used messaging app popular around the world for mostly legitimate uses. The second claim is incongruent with Apple’s treatment of Parler; existing copies of that app theoretically remain functional. The third claim has no relevance to anything here, and that is not how the feature works anyway. Yet this one message has racked up well over two hundred thousand views.
It is not just Telegram, or even solely a problem of these apps. Ben Collins, also at NBC News, explains how these fictions are now circulating through text messages:
One viral false conspiracy theory shared across the U.S. implores users to disable automatic software updates on their cellphones, claiming that the next patch will disable an emergency broadcasting system message from President Donald Trump. The false rumors are usually attached to another urban legend about a blackout coming in the next two weeks, which say people should be “prepared with food and water.”
Another viral text is a link to a deceptively edited video, also known as a “cheapfake,” that first appeared on the Twitter-like social media platform Parler. It features a series of mashed-up speeches by Trump that are realigned to lead the viewer to falsely believe he is calling for an uprising on Jan. 20.
This reminds me of the days of chain social media posts — post this and tag three friends or you won’t wake up tomorrow, that kind of thing — and chain emails before that, and chain posts on BBSes before that, and chain letters with actual postage before that.
But those posts seem quaint compared to what we’re seeing today. The conspiracy theories of the past seemed to be based on historical events. The ones that are circulating now are creating an alternate reality for the here and now. Instead of overanalyzing specks of dust in the Zapruder film decades on, there are now people who make a living by denying an audience the reality of what is happening before their very eyes. Neither chain letters nor invented versions of events are new; but, perhaps it is the combination of those and the speed of technology and lucrative careers in professional grifting that have given these messages unwelcome power.
Gayle King of CBS This Morning interviewed Tim Cook for an initiative Apple is announcing tomorrow. King previewed it today by saying:
It is not a new product. We should say, it is not a new product. It’s something, I think, bigger and better than that.
So, if not a new product, what could it be?
Apple’s January events have often centred around education, so it would not surprise me if the tip that fell into my lap is correct. It seems that Apple is a founding partner of the new to-be-officially-unveiled Propel Center in Atlanta:
Spanning 50,000 square feet, Propel Center will include state-of-the-art spaces to accommodate lecture halls, learning labs and common areas to facilitate group learning. The physical Propel Center will serve as a centralized nexus and symbol for HBCU collaboration across the country.
In addition to the main building, Propel will also be offering on-campus labs equipped by Apple, and online instruction with master classes from Spike Lee, Lisa Jackson, and — awkwardly — Jack Ma. There’s lots more; the promo site is only one page, but it’s worth checking out.
Anyway, I am not saying this is definitely what Cook will be talking about, but I am as confident as I can be that you will hear more about this tomorrow.
Now consider the current Mac product line. It would be instantly recognizable to a visitor from the early 2010s.
Of course, Macs have evolved a lot in the intervening years on the inside. But the exteriors of Apple’s Macs look remarkably like they did in 2012, if not 2007. It’s been a decade or more of quiet iteration without really rethinking the fundamentals of the product — except that one time, which Apple rapidly came to regret.
I have written before about Apple’s deliberate strategy to keep the industrial design of its first M1 Macs identical to their Intel-based predecessors. But Snell is right: the Mac lineup used to be more experimental and hungry for evolution in its hardware. What changed? Or, more accurately, why so little change?
Apple has settled on a nearly all-aluminum line; the wildest configuration is the choice of gold for MacBook Air models. Aside from the lack of a glowing Apple logo, today’s MacBook Pro looks from not-too-far away much like the one from 2008.
I am not one to argue for change for its own sake. But, as Apple plays around with the industrial design of the iPhone, iPad, and Apple Watch every few years, I have to wonder if part of the reason for the largely stagnant Intel era was the Intel processors themselves. Now that Apple is working with its own processors that have different thermal constraints and can be entirely custom-engineered, will we perhaps see a renaissance of experimentation in materials and form? I am not so sure; I do not want to get ahead of myself. The MacBook Air may not be the ideal laptop design, but it is pretty darn close. There is a reason it is the computer copied by every Windows OEM.
Maybe it is just my age, but I miss the era of glossy white finishes. I’m looking at my white and silver iPhone and it looks as crisp and modern and futuristic as you’d expect, without looking chintzy. Just a thought.
Jillian C. York of the Electronic Frontier Foundation on her personal blog:
Since Twitter and Facebook banned Donald Trump and began “purging” QAnon conspiracists, a segment of the chattering class has been making all sorts of wild proclamations about this “precedent-setting” event. As such, I thought I’d set the record straight.
Everything in the social media ecosystem was once tilted in the favor of toxic forces, from the algorithms that push our content feeds toward extremism to the companies’ longstanding reticence to admit it. Imagine a foosball game on a slanted table. Yes, the little soccer players could try to stop each rush of the rolling ball, but all their spinning wouldn’t matter in the end. Over the past few years, however, that table has started to be righted. Driven by outside pressure over election disinformation, mass killings, and COVID-19 striking close to home — and perhaps most significantly, internal employee revolts — the companies’ leaders have put into place a series of measures that make it harder for toxic forces. From banning certain types of ads to de-ranking certain lies, these safeguards built up, piece by piece, culminating in the deplatforming of the Internet’s loudest voice.
While it is impossible for anyone to have complete foresight about this moment, one need only look back at the abuse hurled most often at already-marginalized individuals and groups on these platforms to know that the warning signs were there all along. The trouble is that these platforms did not meaningfully change to address why they were being used in bad faith. I am under no illusion that horrible people would cease to exist on these or any other online platforms. But I do think it is possible that, had those who make decisions at these companies taken more seriously the concerns of those on the receiving end of viral hate, they would have been better equipped to scale their moderation strategies.
My own opinion is that this collision of politics, society, and technology has been a long time coming. As far back as 2010, I have argued that the legislative challenges facing technology will be more acute than the technological changes themselves. My argument has been that these social platforms are essentially nation-states and require a higher level of social and civic etiquette established and enforced through official policies. When evaluating the performance of Twitter, Facebook, and others on this particular score, the phrase I have often used is “dereliction of duty.”
Malik doesn’t directly say this and I do not want to put words in his mouth, so this is my own extension of his piece: I think part of that duty must be in their careful moderation. That means creating limitations around problematic posts and users very quickly; it also means applying the lightest possible touch. For over a decade now, the largest social platforms have often been far too cautious about setting expectations of behaviour.
Concern about firefighting efforts doesn’t get us far enough when there are prolific arsonists.
Perhaps it is somewhat restrictive for Amazon to decide that it does not want to host a social network that is deliberately under-moderated — to the extent that it lacked basic controls against posting child sexual abuse imagery — on which an attempted insurrection was planned, and where many users in the days before and after that attack described their plans for assassinating lawmakers. I am not, however, convinced it says anything more general about the power of big tech companies. I imagine the moneyed backers of Parler can find another host for their Nazi-filled community of “free speech advocates”, or they can put some servers together themselves. They can surely pull themselves up by their gold-tipped bootstraps — unfortunately for society.
Update: Parler has now found a home with a hosting company that specializes in the same sort of websites. It is almost as though this was not censorship as much as it was one company not wishing to do business with another company for completely understandable reasons.
In an email sent this morning and obtained by BuzzFeed News, Apple wrote to Parler’s executives that there had been complaints that the service had been used to plan and coordinate the storming of the US Capitol by President Donald Trump’s supporters on Wednesday. The insurrection left five people dead, including a police officer.
“We have received numerous complaints regarding objectionable content in your Parler service, accusations that the Parler app was used to plan, coordinate, and facilitate the illegal activities in Washington D.C. on January 6, 2021 that led (among other things) to loss of life, numerous injuries, and the destruction of property,” Apple wrote to Parler. “The app also appears to continue to be used to plan and facilitate yet further illegal and dangerous activities.”
Apple gave Parler a day from when it sent its letter to submit a new version of the app alongside a moderation policy. Google did not wait; it pulled the app from the Play Store this afternoon.
From Apple’s letter, as quoted in the article:
Your CEO was quoted recently saying “But I don’t feel responsible for any of this and neither should the platform, considering we’re a neutral town square that just adheres to the law.” We want to be clear that Parler is in fact responsible for all the user generated content present on your service and for ensuring that this content meets App Store requirements for the safety and protection of our users. We won’t distribute apps that present dangerous and harmful content.
For what it is worth, it will still be possible to post to Parler from its website even if these apps are removed. It is not as though Parler will vanish from the iPhone after tomorrow when, inevitably, the ostensibly unmoderated platform fails to produce a tighter moderation strategy.
This clearly relates to questions about whether it is fair that users’ native software choices on the iPhone are limited by Apple’s control over the platform and its only software distribution mechanism. It seems reasonable to me that Apple would choose not to provide a platform for apps that have little to no moderation in place. Both Apple and Google disallowed clients for Gab — Twitter but for explicit Nazis — in their respective stores. Apple rejected the app at submission time, while Google permitted it and then pulled it:
Google explained the removal in an e-mail to Ars. “In order to be on the Play Store, social networking apps need to demonstrate a sufficient level of moderation, including for content that encourages violence and advocates hate against groups of people,” the statement read. “This is a long-standing rule and clearly stated in our developer policies. Developers always have the opportunity to appeal a suspension and may have their apps reinstated if they’ve addressed the policy violations and are compliant with our Developer Program Policies.”
Gab now runs on Mastodon, which is a decentralized standard that allows different communities to moderate posts as they choose. There are many Mastodon clients in the App Store, likely because there is not really a singular Mastodon product as much as there are many posts collected through a standard format.
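That “standard format” is what makes a plurality of clients possible: every Mastodon instance exposes its public timeline through the same documented REST endpoint, `GET /api/v1/timelines/public`, which returns a JSON array of statuses. Here is a minimal sketch of parsing that shape; the sample payload below is a made-up illustration, not fetched from a real server, and the account names are hypothetical.

```python
import json

# Any Mastodon instance returns its public timeline as a JSON array of
# status objects from GET /api/v1/timelines/public. Because the shape is
# standardized, one parser works against any server. This sample payload
# is illustrative only.
SAMPLE_RESPONSE = json.dumps([
    {
        "id": "1",
        "content": "<p>Hello, fediverse</p>",
        "account": {"acct": "alice@example.social"},
    },
    {
        "id": "2",
        "content": "<p>Another post</p>",
        "account": {"acct": "bob@example.social"},
    },
])

def summarize_timeline(raw_json: str) -> list:
    """Return 'user: content' summaries from a public-timeline payload."""
    statuses = json.loads(raw_json)
    return [
        "{}: {}".format(s["account"]["acct"], s["content"])
        for s in statuses
    ]

if __name__ == "__main__":
    for line in summarize_timeline(SAMPLE_RESPONSE):
        print(line)
```

Because any client that speaks this format can talk to any instance, there is no single gatekeeper app for Apple or Google to approve or reject — which is exactly the situation the paragraph above describes.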
Apple® today announced that the Mac® App Store℠ is now open for business with more than 1,000 free and paid apps. The Mac App Store brings the revolutionary App Store experience to the Mac, so you can find great new apps, buy them using your iTunes® account, download and install them in just one step. The Mac App Store is available for Snow Leopard® users through Software Update as part of Mac OS® X v10.6.6.
You know the first interesting thing about this? Apple issued a press release when the iOS App Store turned ten; Apple also posted one the day the Mac App Store turned ten, but it wasn’t about the Mac App Store:
As the world navigated an ever-changing new normal of virtual learning, grocery deliveries, and drive-by birthday celebrations, customers relied on Apple services in new ways, turning to expertly curated apps, news, music, podcasts, TV shows, movies, and more to stay entertained, informed, connected, and fit.
There’s a bit in the release touting the “commerce the App Store facilitates”, and Apple used it to announce $1.8 billion spent on the App Store between Christmas Eve and New Year’s Eve, but that’s it. Also, I want to thank the person who decided that Apple’s press releases do not need to contain intellectual property marks.
Perhaps it is not surprising that the Mac App Store did not get its own anniversary announcement. It could be the case that Apple considers the launch of the iPhone App Store the original, and everything else is simply part of that family. Apple also doesn’t indulge in anniversaries very often — the App Store press release was an exception rather than the rule.
But it also speaks to the Mac App Store’s lack of comparable influence. Joe Rossignol, MacRumors:
Since its inception, the Mac App Store has attracted its fair share of criticism from developers. Apple has addressed some of these complaints over the years by allowing developers to offer free trials via in-app purchase, create app bundles, distribute apps on multiple Apple platforms as a universal purchase, view analytics for Mac apps, respond to customer reviews, and more, but some developers remain unsatisfied with the Mac App Store due to Apple’s review process, the lack of upgrade pricing, the lack of sandboxing exceptions for trusted developers, the absence of TestFlight beta testing for Mac apps, and other reasons.
Thinking back to the early days of the Mac App Store, I remember how its introduction killed a nascent third-party effort to build a similar store. And I recall how, just months after the store opened, Apple changed the rules to require that apps be sandboxed. […]
The Mac App Store has led a bizarre life in its first ten years — remember when system software updates, including operating system updates, came through the Mac App Store? A 2018 redesign made it look more modern, but it continues to feel like it was ported from another platform. Like the iOS App Store, it faces moderation problems, and its vast quantity of apps are mostly terrible.
There are some bright spots. I have found that good little utility apps — ABX testers, light audio processing, and the sort — are easy to find in the Mac App Store. Much easier, I think, than finding them on the web. It is also a place where you can find familiar software from big developers alongside plenty of indies, software remains up-to-date with almost no user interaction, and there are no serial numbers to lose.
Unfortunately, there remain fundamental disagreements between Apple’s policies and developers’ wishes that often manifest in comical ways. Recently, for my day job, I needed to use one of Microsoft’s Office apps that I did not have installed. I was able to download it from the Mac App Store but, upon signing in to my workplace Office 365 account, I was told that the type of license on my account was incompatible with that version of the app. I replaced it with a copy from Microsoft’s website with the same version number and was able to log in. I assume this is because there is a conflict between how enterprise licenses are sold and Apple’s in-app purchase rules. It was caused in part by Microsoft’s desire to sell its products under as many subtly-different similarly-named SKUs as possible, and resulted in an error message that was prohibited by App Store rules from being helpful. Regardless of the reasons, all I experienced as a user was confusion and frustration. Oftentimes, it is simply less nice to use the Mac App Store than getting software from the web.
Happy tenth birthday to the Mac App Store; it cannot be the best that Apple can do.
There’s not a ton of research on this, but the work that has been done so far is promising. A study published by researchers at Georgia Tech last year found that banning [Reddit’s] most toxic subreddits resulted in less hate speech elsewhere on the site, and especially from the people who were active on those subreddits.
Early results from Data and Society sent to an academic listserv in 2017 noted that it’s “unclear what the unintended effects of no platforming will be in the near and distant future. Right now, this can be construed as an incredibly positive step that platforms are making in responding to public complaints that their services are being used to spread hate speech and further radicalize individuals. However, there could be other unintended consequences. There has already been pushback on the right about the capacity and ethics of technology companies making these decisions. We’ve also seen an exodus towards sites like Gab.ai and away from the more mainstream social media networks.”
I linked to this two years ago when Facebook cracked down on extremist public figures using its platforms, but I figured I would re-up it today.
This is a significant test of deplatforming. It seems to work for media personalities and toxic average users, but will it work for someone who — let’s face it — is still the president of the United States? Will it have significant blowback? I have concerns that it will embolden die-hard followers to commit further acts of violence, but I also think that is a problem for law enforcement and American society as a whole.
I do not think national healing is hastened by broadcast media of any type continuing to permit reckless lies about election fraud from influential figures.
Twitter, perhaps knowing the stakes of suspending the personal account of the president, posted a comprehensive explanation of its reasoning. I have trimmed it to two salient paragraphs:
Due to the ongoing tensions in the United States, and an uptick in the global conversation in regards to the people who violently stormed the Capitol on January 6, 2021, these two Tweets must be read in the context of broader events in the country and the ways in which the President’s statements can be mobilized by different audiences, including to incite violence, as well as in the context of the pattern of behavior from this account in recent weeks. After assessing the language in these Tweets against our Glorification of Violence policy, we have determined that these Tweets are in violation of the Glorification of Violence Policy and the user @realDonaldTrump should be immediately permanently suspended from the service.
Plans for future armed protests have already begun proliferating on and off-Twitter, including a proposed secondary attack on the US Capitol and state capitol buildings on January 17, 2021.
I do not understand why Twitter calls this a “permanent suspension” instead of a ban, but that’s what it is.
Even the most powerful people must face consequences. There must be a generally agreed upon line that cannot be crossed. I guess the line for Twitter, Reddit, and Facebook is when their platforms are used to tacitly encourage people to overthrow a fair election in a stable democracy.
Big platforms experimented with taking the laissez-faire moderation style of 4chan mainstream and it backfired. It is long past time that they took a more active role in user moderation.
See Also: Ben Thompson’s piece from yesterday; Mike Masnick today. I often disagree with both on platform moderation issues — see preceding paragraph — but I think they have articulated well why they support a more hands-off approach to moderation more generally, and why they came to believe this ban was warranted.
The siege was no doubt terrifying to watch, and doubly so for the legislators and staff trapped in the building by raging QAnon followers and Trump dead-enders. Rioters wore shirts glorifying the Holocaust; some shouted what sounded like racial epithets and paraded Confederate flags. Guns were drawn. A woman was shot to death by police. It was a tense, perilous, violent assault on democracy.
But it was also quickly apparent that this was a very dumb coup. A coup with no plot, no end to achieve, no plan but to pose. Thousands invaded the highest centers of power, and the first thing they did was take selfies and videos. They were making content as spoils to take back to the digital empires where they dwell, where that content is currency.
Social media did not cause us to give undue influence to public figures with little concern for the weight of their words and actions, but it surely amplifies and exacerbates it.
Every four years, Americans go to the polls to pick a president and vice president; the following January, the House and Senate certify the results and confirm the winner. That January joint session is routine stuff — something so formal and kind of arcane that it can be hard to remember it even happening after any past election. On this occasion, a mob encouraged and defended by the president decided that they should violently interject themselves into proceedings because they did not like the result.
It was a shocking, terrifying, and entirely unsurprising escalation of the anti-democratic rhetoric frequently used by commentators and pundits in a specific media bubble. But it is also the product of a president who has used his status to elevate blatant lies, codswallop, and self-serving fictions. Most platforms have given him generous leeway to do so since he is a world leader by office if not by any other quality.
The insurrection isn’t just being televised. It’s being orchestrated, promoted, and broadcast on the platforms of companies with a collective value in the trillions of dollars.
And the platforms have let Trump persist. At 2:38 p.m. in DC, Trump issued a new message, in which he did not tell his supporters to stand down.
“Please support our Capitol Police and Law Enforcement. They are truly on the side of our Country. Stay peaceful!” he wrote on Twitter and Facebook, as members of his own party barricaded themselves in chambers and rooms and the vice president was forced to evacuate the building. Police were overwhelmed.
That tweet was posted well after rioters were in the Capitol, minutes after they were at the Senate doors, and just a few minutes before they got into the chamber. This attack was planned in the open and incited by the sitting president through, in part, the affordances of his social media presence. Platforms limited the reach of — and ultimately removed — videos and tweets he posted that could be read as encouraging the rioters. And then Facebook decided enough was enough.
President Trump will no longer be able to use his official Facebook and Instagram accounts after the social media giant indefinitely banned him following the violent protests at the U.S. Capitol, Facebook CEO Mark Zuckerberg announced Thursday. Mr. Trump will be banned at least through the end of his presidential term.
“We believe the risks of allowing the President to continue to use our service during this period are simply too great,” Zuckerberg wrote in a Facebook post. “Therefore, we are extending the block we have placed on his Facebook and Instagram accounts indefinitely and for at least the next two weeks until the peaceful transition of power is complete.”
None of this is to say that Facebook is wrong to ban Trump, or that Twitter would be wrong to follow suit. There’s a good case to be made they should have done it well before now. While I’ve made the case for newsworthiness exemptions in the past, particularly on Twitter, it’s perfectly reasonable for media platforms to make judgment calls about the balance between newsworthiness and, say, public health or safety — as long as they admit that is in fact what they’re doing. It’s what true media organizations do every day. The only thing worse than constantly changing the rules would be stubbornly sticking to them when it’s clear they’re inadequate or misguided.
But the dominant platforms have always been loath to own up to their subjectivity, because it highlights the extraordinary, unfettered power they wield over the global public square, and places the responsibility for that power on their own shoulders. That in turn would make it clear that the underlying problem here is not the rules themselves, but the fact that just a few, for-profit entities have such power over global speech and politics in the first place. So they hide behind an ever-changing rulebook, alternately pointing to it when it’s convenient and shoving it under the nearest rug when it isn’t.
These platforms are designed to get advertisements and posts from public figures in front of as many users as possible — similar to the way mass media has worked for a couple of decades now. So what do their leadership teams do when those qualities are abused by someone to threaten public safety and democracy itself? In the case of news media, there are editors who are theoretically able to make factual corrections and put misleading information in context. Unfortunately, the people in charge of those decisions often prefer shouting matches; it’s better television. But social media platforms do not have an equivalent; de-platforming, whether temporarily or permanently, is the closest thing they have short of a soup-to-nuts rearchitecting of how posts are presented.
Rethinking how prominent posts are presented and lies are treated is something platforms should have done a long time ago. Facebook and Twitter are clearly still making all of this up as they go along. It was painfully clear one or two or five years ago that they needed to have new ways of presenting items from world leaders, lawmakers, and their spokespersons that would minimize the use of these platforms for indoctrination and, now, insurrection. They have failed to do so. That is why they have a choice between heavy-handed responses like these and doing next to nothing. In this context, I think the heavy-handed approach is almost certainly the correct one. But none of this should have gone this far — and the failures of these platforms stand out as one reason of many for the escalation of violent rhetoric from authoritative figures and the platforms’ aggressive response.
Once upon a time, we made one of the earliest MP3 players for the Mac, Audion. We’ve come to appreciate that Audion captured a special moment in time, and we’ve been trying to preserve its history. Back in March, we revealed that we were working on converting Audion faces to a more modern format so they could be preserved.
Since then, we’ve succeeded in converting 867 faces, and are currently working on a further 15 faces, representing every Audion face we know of.
Today, we’d like to give you the chance to experience these faces yourself on any Mac running 10.12 or later. We’re releasing a stripped-down version of Audion for modern macOS to view these faces.
I must say that it is both odd and comforting to see a version of Audion with a MiniDisc player skin running natively on MacOS Big Sur alongside lookalike modern apps.
If you have not yet read the story of how Audion almost became iTunes, now is a great time to do so.
Microsoft is building a universal Outlook client for Windows and Mac that will also replace the default Mail & Calendar apps on Windows 10 when ready. This new client is codenamed Monarch and is based on the already available Outlook Web app available in a browser today.
Project Monarch is the end-goal for Microsoft’s “One Outlook” vision, which aims to build a single Outlook client that works across PC, Mac, and the Web. Right now, Microsoft has a number of different Outlook clients for desktop, including Outlook Web, Outlook (Win32) for Windows, Outlook for Mac, and Mail & Calendar on Windows 10.
Microsoft wants to replace the existing desktop clients with one app built with web technologies. The project will deliver Outlook as a single product, with the same user experience and codebase whether that be on Windows or Mac. It’ll also have a much smaller footprint and be accessible to all users whether they’re free Outlook consumers or commercial business customers.
Some reports have interpreted this as though Microsoft will discard the Mac app redesign it previewed in September. I am not sure that is the case. The new version of Outlook for Mac looks an awful lot like an Electron app already.
Like most web apps in a native wrapper, this sounds like a stopgap way of easing cross-platform development at the cost of usability, quality, speed, and platform integration. To be fair, I am not sure that anyone would pitch today’s desktop Outlook apps as shining examples of quality or speed, but I spend a lot of time from Monday through Friday in the Outlook web app and it is poor.
My favourite bug is that, when you are composing an inline reply, it sometimes interprets the delete key not as an instruction to remove the most recently typed character but as one to delete the current message thread. And, of course, you cannot undo that with a keyboard shortcut. If you miss the app’s small notification balloon — which appears nowhere near where you are typing, and contains an “undo” button that doesn’t look like a button — you’ll have to manually find the thread in the trash and move it back to the inbox.
I gave away tons of personal data to get the things I needed. Food came from grocery and restaurant delivery services. Everything else — clothes, kitchen tools, a vanity ring light for Zoom calls, office furniture — came from online shopping platforms. I took an Uber instead of public transportation. Zoom became my primary means of communication with most of my coworkers, friends, and family. I attended virtual birthdays and funerals. Therapy was conducted over FaceTime. I downloaded my state’s digital contact tracing tool as soon as it was offered. I put a camera inside my apartment to keep an eye on things when I fled the city for several weeks.
Millions of Americans have had a similar pandemic experience. School went remote, work was done from home, happy hours went virtual. In just a few short months, people shifted their entire lives online, accelerating a trend that would have otherwise taken years and will endure after the pandemic ends — all while exposing more and more personal information to the barely regulated internet ecosystem. At the same time, attempts to enact federal legislation to protect digital privacy were derailed, first by the pandemic and then by increasing politicization over how the internet should be regulated.
Last year marked an increased dependency for much of the world on one of its most poorly-regulated industries. We were held together by many of the same companies that were shown over the preceding several years to be deeply flawed — especially when it comes to privacy.
[Singaporean] authorities claim that such technologies have greatly strengthened their contact-tracing efforts. In early November, the health minister said that 25,000 close contacts of confirmed Covid-19 cases had been identified through TraceTogether, of which 160 eventually tested positive. The country reported zero cases of community transmission most days in November.
Despite these successes, the imposition of more intrusive data collection technology has unnerved privacy advocates, who worry that the pandemic will be used to justify the surveillance of citizens without consideration of the long-term consequences, and without sufficient checks and balances.
This is wildly invasive and incredibly short-sighted. Device-based contact tracing and exposure notification already faced an uphill battle on privacy. It is now practically impossible in much of the world thanks to early but flawed contact tracing apps and broken promises about proximity data use. But not in Singapore, where its contact tracing app remains mandatory.
Update: “Location” in the last paragraph was changed to “proximity”. Thanks Stuart.
I’ve written a lot about private equity. By ‘private equity,’ I mean financial engineers, financiers who raise large amounts of money and borrow even more to buy firms and loot them. These kinds of private equity barons aren’t specialists who help finance useful products and services, they do cookie cutter deals targeting firms they believe have market power to raise prices, who can lay off workers or sell assets, and/or have some sort of legal loophole advantage. Often they will destroy the underlying business. The giants of the industry, from Blackstone to Apollo, are the children of 1980s junk bond king and fraudster Michael Milken. They are essentially super-sized mobsters who burn down businesses for the insurance money.
In private equity takeovers of software, the gist is the same, with the players a bit different. It’s not Apollo and Blackstone, it’s Vista Equity Partners, Thoma Bravo, and Silver Lake, but it’s the same cookie cutter style deal flow, the same financing arrangements, and the same business model risks. But in this case, the private equity owner of SolarWinds burned down far more than just the firm.
U.S. intelligence agencies said today that these attacks were likely perpetrated by Russians. But this particularly good piece from Stoller makes a satisfying case for the structural reasons behind this breach.
CBS’ 60 Minutes aired a story, reported by Scott Pelley, arguing that cases of harassment and abuse from online sources are enabled by Section 230 of the Communications Decency Act:
A priority of the new president and Congress will be reining in the giants of social media. On this, Democrats and Republicans agree. Their target is a federal law known as Section 230. In a single sentence it set off the ‘big bang’ helping to create the universe of Google, Facebook, Twitter and the rest. Some critics of the law say that it leaves social media free to ignore lies, hoaxes and slander that can wreck the lives of innocent people. One of those critics is Lenny Pozner. After a tragedy in his own life, Pozner has become a champion for victims of online lies, people including Maatje and Matt Benassi, who, overnight, became the target of death threats like these.
Right about now you might be thinking, they should sue. But that’s the problem. They can’t file hundreds of lawsuits against internet trolls hiding behind aliases. And they can’t sue the internet platforms because of that law known as Section 230 of the Communications Decency Act of 1996. Written before Facebook or Google were invented, Section 230 says, in just 26 words, that internet platforms are not liable for what their users post.
These cases are truly terrible — but they are not enabled by Section 230 as much as by the generosities afforded by the First Amendment combined with the scale of these platforms. And, as Mike Masnick of Techdirt points out, major platforms have eventually been responsive to user complaints:
Over and over again, the report blames Section 230 for all of this. Incredibly, at the end of the report, they admit that the video from that nutjob conspiracy theorist was taken down from YouTube after people complained about it. In other words Section 230 did exactly what it was supposed to do in enabling YouTube to pull down videos like that. But, of course, unless you watch the entire 60 Minutes segment, you’ll miss that, and still think that 230 is somehow to blame.
Facebook, Twitter, and YouTube have thankfully stepped up their moderation efforts in the last couple of years. But because of their scale — partially due to network effects, and partially because of a reluctance to use antitrust precedent to slow their roll — this increased moderation has been mistakenly referred to as “censorship”. None of this has anything to do with Section 230, however.
60 Minutes filmed a very good interview with Jeff Kosseff, an expert on Section 230, of which only a part made it into the final report. I am disappointed that they axed Kosseff’s historical context:
To understand why [Section 230] is necessary, you really have to go back to what the law was before Section 230, and that is: what is the liability for distributors of content that others create? Before the internet, that was bookstores and newsstands. And the general rule was that, if you are a distributor of someone else’s content, you’re only liable if you know or have reason to know if it’s illegal.
That compares favourably with Section 230, which requires platforms to remove illegal materials when they are notified and encourages them to moderate proactively. Because of the explosive growth of these platforms, moderation is extremely difficult.
Kosseff also fields a question from Pelley about news publishers:
Scott Pelley: But help me understand, the same is not true for other forms of media. If somebody says something defamatory on 60 Minutes or on Fox or CNN or in The New York Times, those organizations can be sued. So why not Google, YouTube, Facebook?
Jeff Kosseff: So the difference between a social media site and let’s say the Letters to the Editor page of The New York Times is the vast amount of content that they deliver. So I mean you might have five or ten letters to the editor on a page. You could have I think it’s 6,000 tweets per second. […]
One other difference is that the press relies upon human beings making a decision about what should be published and what should not. An interview subject can make a dubious and potentially defamatory claim, but it is up to the system of reporters and editors and fact-checkers to determine whether that claim ought to be shown to the public. Online platforms are more infrastructural. Making them legally liable for what their users publish would be like making it fair game to sue newsstands and grocery stores for selling copies of the Times containing an illegally defamatory story.
Perhaps owing to their unique scale and manipulated reach, I hope that platforms will continue to take a more active role in curbing high-profile bad faith use. I do not think making Twitter liable for my dumb tweets, or websites liable for their users’ comments, is a sensible way of getting there.
I liked Timothy Buck’s explanation of why accessibility matters in everything, and the simple list of tips to improve it in tech products. A key thing to think about is that, when you make things more accessible for more people, you make those things better for every user. Nobody wants things to be harder to use.