Rosenworcel served as an FCC commissioner during both the Obama and Trump administrations. She supported net neutrality rules and opposed mega-mergers that came before the agency, including the one between T-Mobile and Sprint.
Her greatest focus, however, has been on shoring up the FCC’s subsidy programs and the broadband connectivity data they rely on.
Rosenworcel has particularly emphasized the need to close the “homework gap” — the divide between students who have fast, reliable in-home internet and those who don’t.
Lewis Day writes about TV Licensing in Britain, and the detector vans used to discover if a household is watching unlicensed BBC broadcasts:
Alternatively, a search warrant may be granted on the basis of evidence gleaned from a TV detector van. Outfitted with equipment to detect a TV set in use, the vans roam the streets of the United Kingdom, often dispatched to addresses with lapsed or absent TV licences. If the van detects that a set may be operating and receiving broadcast signals, TV Licencing can apply to the court for the requisite warrant to take the investigation further. The vans are almost solely used to support warrant applications; the detection van evidence is rarely if ever used in court to prosecute a licence evader. With a warrant in hand, officers will use direct evidence such as a television found plugged into an aerial to bring an evader to justice through the courts.
Historically, antennas were used to detect specific frequencies. These days, with streaming and flat panel TVs, it appears that vans are still used but the detection methods are somewhat obscured.
An official Adobe history describes the PDF’s goal as being able to “exchange information between machines, between systems, between users in a way that ensured that the file would look the same everywhere it went.” This meant creating “a digital interchange format that preserved author intent,” says David Parmenter, director of engineering for Adobe Document Cloud, “which is, at a really high level, what a PDF tries to do.”
Beneath the highly technical language is something pretty basic: The mission of the PDF is simply to be the digital version of old-fashioned paper.
For reasons unforeseen at the time, that mission has been somewhat frustrated by the invention of the smartphone:
Adobe’s most recent and ongoing efforts around the PDF have centered on adapting the format to the smartphone era. Late last year, the company debuted what it calls a “Liquid Mode” option that rejiggers PDFs for easier phone-screen-sized reading. On the creator and developer side, it has recently made the documents easier to embed in websites and is working on the ability to incorporate 3-dimensional renderings into PDFs.
Is there a market for 3D renderings in PDF form? The PDF format’s simplicity has surely been a key factor in its success.
In fact, if you are reading this on a Mac, an iPhone, or an iPad, you’re looking at PDFs hundreds of times every day. Icons across the system are PDFs — including those in toolbars, the menu bar, and throughout apps — because PDFs can contain infinitely scalable vector graphics. Mac OS X has always contained PDF elements, but their use expanded around the time Apple was working on a programmatic, resolution-independent user interface defined by XML files. Ultimately, higher-resolution interfaces were achieved through pixel doubling along both axes, but I wonder what happened to that project.
Today’s inauguration ceremonies in the United States were notable for many reasons: Vice President Kamala Harris has ascended to the highest office yet held by a woman, let alone a non-white woman, in the U.S.; it was backdropped by evidence of a yearlong pandemic; it marked the return of full sentences and oratory skill above that of a startled goose.
But, aside from the procedural events, it is hard to think of a higher point than Amanda Gorman’s reading of her poem “The Hill We Climb”. A little less than six minutes of entirely captivating verse.
It’s hard to pinpoint exactly when we lost control of what we see, read — and even think — to the biggest social-media companies.
I put it right around 2016. That was the year Twitter and Instagram joined Facebook and YouTube in the algorithmic future. Ruled by robots programmed to keep our attention as long as possible, they promoted stuff we’d most likely tap, share or heart — and buried everything else.
If it were just about us and our friends and family, that would be one thing, but for years social media hasn’t been just about keeping up with Auntie Sue. It’s the funnel through which many now see and form their views of the world.
This is something we should continue to keep in mind as social media companies evolve. I am doubtful of the longevity of audience-specific platform clones — Parler, for example, or the Facebook copycat MeWe shown in Stern’s article — but I am certain that the major platforms will have to keep changing in response to the kinds of problems that have bubbled up in recent years. I hope there is increasing emphasis on quality and user control; as Stern says, platforms have already proved they can readily adjust for these outcomes.
This is no substitute for a better option, which is to avoid using social media platforms as a primary referral source for news. A better-designed algorithm is no substitute for keen human editors across multiple reliable publishers. But, since our collective dependence on social media is unlikely to subside, it is an ethical responsibility of these platforms to better tune how they sort users’ feeds.
Americans, most directly impacted by this presidential administration’s everything, are surely looking forward to welcoming a new administration, imperfect as it may be. But it will also allow the rest of us who are incidentally impacted by the actions of the world’s most powerful nation to breathe a little easier.
This president is so weak and sad that he’s going to get out of town early Wednesday morning. A fitting end to an administration defined by deliberate misery. Anyway, here’s a Tom Tomorrow strip.
As the FBI continues to round up rioters who stormed the U.S. Capitol on Jan. 6 to try to stop President-elect Joe Biden’s inauguration last week, it’s finding that a number of them seem to have openly confessed to crimes on open social media, a review of court documents shows.
The subject of another much-circulated photo, of a cheerful and waving bearded man walking through the Capitol with the speaker’s lectern, has been identified by the Bradenton Herald as Florida man Adam Johnson (not “Via Getty”). Johnson was arrested on Friday and hit with the same three charges as Barnett. The complaint against Johnson references photos posted on his own Facebook account that appear to show him inside the Capitol building and were sourced from a newspaper article about the riot. Additionally, someone who has a mutual friend with Johnson called the FBI to report that he was the man in the photo with the lectern.
Johnson’s lawyer admitted to reporters that the photograph of his client is “a problem.”
“I’m not a magician,” Dan Eckhart added. “We’ve got a photograph of our client in what appears to be inside a federal building or inside the Capitol with government property.”
An affidavit from an FBI special agent filed in court Tuesday says Eduardo Florea stockpiled more than 1,000 rounds of ammo and threatened to kill Sen.-elect Raphael Warnock of Georgia.
The affidavit says the FBI received records from Parler to identify the user behind the account “LoneWolfWar,” where the threats originated. Parler provided the phone number associated with the account, the affidavit says, and the FBI used it, and info from T-Mobile, to identify Florea.
Tinder, Bumble and other dating apps are using images captured from inside the Capitol siege and other evidence to identify and ban rioters’ accounts, causing immediate consequences for those who participated as police move toward making hundreds of arrests.
Amanda Spataro, a 25-year-old logistics coordinator in Tampa, called it her “civic duty” to swipe through dating apps for men who’d posted incriminating pictures of themselves. On Bumble, she found one man with a picture that seemed likely to have come from the insurrection; his response to a prompt about his “perfect first date” was: “Storming the Capitol.”
“Most people, you think if you’re going to commit a crime, you’re not going to brag about it,” Spataro said in an interview.
You would think that, wouldn’t you? But only if you, you know, think.
The Capitol riot was a boundary-busting event in almost every way, and its impact on the digital privacy debate was no different. The insurrectionists’ acts were so galling, so frightening, that suddenly, even those who might oppose digital surveillance and forensics techniques in other contexts, like, say, identifying peaceful protesters at a Black Lives Matter rally, feel justified in deploying those tools against the rioters. The shifting goalposts have sparked a tense debate among researchers of online extremism about the right way to stitch together the digital scraps of someone’s life to publicly accuse them of committing a crime — or whether there is a right way at all.
I think a piece by Astead W. Herndon in the New York Times is a good explanation of the false equivalence between Black Lives Matter protests and the criminal surge of U.S. Capitol rioting morons. But Lapowsky’s article raises good arguments about the dangers of false accusations, attempts at mob justice, and the risks faced by those identifying extremists.
A widely adopted, decentralized protocol is an opportunity for social networks to “pass the buck” on moderation responsibilities to a broader network, one person involved with the early stages of bluesky suggests, allowing individual applications on the protocol to decide which accounts and networks its users are blocked from accessing.
Social platforms like Parler or Gab could theoretically rebuild their networks on bluesky, benefitting from its stability and the network effects of an open protocol. Researchers involved are also clear that such a system would also provide a meaningful measure against government censorship and protect the speech of marginalized groups across the globe.
The internet itself is composed of a series of decentralized protocols. While I don’t want to minimize the worries of those involved with the oddly lowercased bluesky effort, a universal protocol for short messages seems more in line with the internet I remember before a handful of big American platforms corralled the worldwide market for communication. One could see it as “passing the buck”, but it is equally valid to see it as reducing singular influence and control.
It is baffling to me that, in 2021, I still do not know the security practices of the devices and cloud services I use more frequently than ever.
This became particularly worrisome last year when I began working my day job from my personal computer. I have several things in my favour: it is an iMac, not a portable computer, so there is dramatically less risk of unauthorized physical access; I keep an encrypted Time Machine backup and an encrypted Backblaze remote backup; I use pretty good passwords. But what about my phone and iCloud, for example? I do not use either for much work stuff, but I inevitably have some communications and two-factor authentication apps on my iPhone, and I use iCloud for backups.
Over the holidays, I immersed myself in an early copy of a new report by Johns Hopkins University students Maximilian Zinkus and Tushar Jois, and associate professor Matthew Green, as I tried to find answers for what should be simple questions. The researchers’ conclusions in the now-published report were eye-opening to me:
Limited benefit of encryption for powered-on devices. We observed that a surprising amount of sensitive data maintained by built-in applications is protected using a weak “available after first unlock” (AFU) protection class, which does not evict decryption keys from memory when the phone is locked. The impact is that the vast majority of sensitive user data from Apple’s built-in applications can be accessed from a phone that is captured and logically exploited while it is in a powered-on (but locked) state.
Limitations of “end-to-end encrypted” cloud services. Several Apple cloud services advertise “end-to-end” encryption in which only the user (with knowledge of a password or passcode) can access cloud-stored data. We find that the end-to-end confidentiality of some encrypted services is undermined when used in tandem with the iCloud backup service. More critically, we observe that Apple’s documentation and user settings blur the distinction between “encrypted” (such that Apple has access) and “end-to-end encrypted” in a manner that makes it difficult to understand which data is available to Apple. Finally, we observe a fundamental weakness in the system: Apple can easily cause user data to be re-provisioned to a new (and possibly compromised) [Hardware Security Module] simply by presenting a single dialog on a user’s phone. We discuss techniques for mitigating this vulnerability.
The muddy distinction between “encryption”, “end-to-end encryption”, and “true end-to-end encryption in a way that Apple cannot reverse” remains a source of consternation, especially as Apple’s own documentation is anything but precise.
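The report’s “available after first unlock” finding is easier to see with a toy model. The sketch below is emphatically not Apple’s implementation — the keybag here is a simplified assumption of my own, and only the protection class names echo Apple’s documented terms — but it illustrates why a powered-on, locked phone still exposes AFU data:

```swift
// Toy model (not Apple's code) of the two Data Protection classes the
// report contrasts. A real keybag wraps per-file keys with class keys;
// here we only track whether each class key is resident in memory.
enum ProtectionClass: Hashable {
    case complete           // NSFileProtectionComplete
    case afterFirstUnlock   // "AFU": ...CompleteUntilFirstUserAuthentication
}

struct Keybag {
    private(set) var availableKeys: Set<ProtectionClass> = []

    // First unlock after boot decrypts every class key into memory.
    mutating func unlock() {
        availableKeys = [.complete, .afterFirstUnlock]
    }

    // Locking the phone evicts only the Complete class key; the AFU
    // key stays resident — the weakness the researchers highlight.
    mutating func lock() {
        availableKeys.remove(.complete)
    }

    func canRead(_ cls: ProtectionClass) -> Bool {
        availableKeys.contains(cls)
    }
}

var bag = Keybag()
bag.unlock()   // user unlocks once after boot
bag.lock()     // phone is now locked again

// Data in the Complete class is safe while locked; AFU data is still
// exposed to a logical exploit of the powered-on, locked phone.
print(bag.canRead(.complete))          // false
print(bag.canRead(.afterFirstUnlock))  // true
```

The researchers’ point is that most of Apple’s built-in apps store their data in the second class, so the stronger one rarely protects anything in practice.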
“It just really shocked me, because I came into this project thinking that these phones are really protecting user data well,” says Johns Hopkins cryptographer Matthew Green, who oversaw the research. “Now I’ve come out of the project thinking almost nothing is protected as much as it could be. So why do we need a backdoor for law enforcement when the protections that these phones actually offer are so bad?”
If there is one conclusion of this report that is damning despite its silver lining, it is that it calls bullshit on law enforcement’s insistence that smartphone encryption creates some sort of investigative black hole. Maybe Apple has successfully created encryption that is strong enough for personal and business use with a strictly controlled opening for legitimate legal use — by sticking itself in the middle of that chain. That is how I read between the lines of the statement an unidentified spokesperson provided to Wired:
The researchers shared their findings with the Android and iOS teams ahead of publication. An Apple spokesperson told WIRED that the company’s security work is focused on protecting users from hackers, thieves, and criminals looking to steal personal information. The types of attacks the researchers are looking at are very costly to develop, the spokesperson pointed out; they require physical access to the target device and only work until Apple patches the vulnerabilities they exploit. Apple also stressed that its goal with iOS is to balance security and convenience.
There are security problems with iOS devices and iCloud services that Apple can and should fix, but I bet there are many that it will not because it is perhaps unwise to be a company that is explicitly trying to block any subpoena from having an effect. If that is the case, Apple ought to say so. It should be plainly clear to users what their security options are, and Apple ought to be more honest in its marketing and documentation of these features.
If Apple is appointing itself guardian of its users’ data — in iCloud form, including defaulted-to-on iCloud Backups of iPhones, iPads, and Apple Watches — that also means that it can respond to law enforcement requests at any level by any agency. Depending on how much you trust your local police and national intelligence services, perhaps that does not seem like a great idea to you. More worrying is that it leaves Apple open to potentially being a part of corrupt regimes’ human rights abuses if it is responsive to data requests for activists’ accounts, or if it complies with device search requests from border patrol.
Maybe there are only bad options, and this is the one that strikes the least-bad balance between individual security and mass security. But the compromises seem real and profound — and are, officially, undocumented.
Do you want to know what Apple’s 2021 Mac lineup looks like? Well, new reports from Ming-Chi Kuo and Mark Gurman that dropped in rapid succession today — almost like there was a meeting at Apple this week to discuss new products — paint a rosy picture.
Juli Clover, of the very appropriately named MacRumors:
According to Kuo, Apple is developing two models in 14 and 16-inch size options. The new MacBook Pro machines will feature a flat-edged design, which Kuo describes as “similar to the iPhone 12” with no curves like current models. It will be the most significant design update to the MacBook Pro in the last five years.
There will be no OLED Touch Bar included, with Apple instead returning to physical function keys. Kuo says the MagSafe charging connector design will be restored, though it’s not quite clear what that means as Apple has transitioned to USB-C. The refreshed MacBook Pro models will have additional ports, and Kuo says that most people may not need to purchase dongles to supplement the available ports on the new machines. Since 2016, Apple’s MacBook Pro models have been limited to USB-C ports with no other ports available.
All of the new MacBook Pro models will feature Apple silicon chips, and there will be no Intel chip options included.
These leaks were echoed by Mark Gurman, who also added that the displays in the new MacBook Pro models would be brighter and higher-contrast.
If these rumours are accurate, these products seem inspired by the early 2010s golden age of the MacBook Pro: lots of ports, MagSafe, and a great keyboard. All of these things were part of the much-loved models of that time before they were removed in favour of four USB-C and Thunderbolt combo ports which doubled as charging ports, and a poor keyboard. The latter problem was fixed; the former decision still feels like a compromise too much of the time. The excitement for these rumours seems telling. You’ve got to wonder what ports would be added; I can’t see USB-A or Ethernet making a comeback, and even HDMI and Micro SD ports feel like a stretch.
I would still love to read a deeply reported explanation of what happened with the Mac notebook range from 2012 through the present day. I think there must be an interesting story in there about being ready for the short-term backlash of trying new things, only to find that long-term compromises remain.
Apple Inc. is planning the first redesign of its iMac all-in-one desktop computer since 2012, part of a shift away from Intel Corp. processors to its own silicon, according to people familiar with the plans.
The new models will slim down the thick black borders around the screen and do away with the sizable metal chin area in favor of a design similar to Apple’s Pro Display XDR monitor. These iMacs will have a flat back, moving away from the curved rear of the current iMac. Apple is planning to launch two versions — codenamed J456 and J457 — to replace the existing 21.5-inch and 27-inch models later this year, the people said, asking not to be identified because the products are not yet announced.
Gurman also says that Apple is working on two new Mac Pro models — one of which he says may continue to use Intel’s processors, but that does not pass my sniff test — and a less-expensive standalone display.
The rumours that were published today represent nearly every Mac in Apple’s lineup that has yet to receive Apple’s own processors, with the exception of the iMac Pro. But, given the M1’s performance and the smaller Mac Pro model, it is possible the iMac Pro may simply be discontinued.
I’m just spitballing but, maybe if Apple’s feeling in a real retro mood, the new iMac could just be called the “Mac”. Just a thought.
Ben Thompson, on the different responses around the world to tech companies’ restrictions over the past week:
Make no mistake, Europe is far more restrictive on speech than the U.S. is, including strict anti-Nazi laws in Germany, the right to be forgotten, and other prohibitions on broadly defined “harms”; the difference from the German and French perspective, though, is that those restrictions come from the government, not private companies.
This sentiment, as I noted yesterday, is completely foreign to Americans, who whatever their differences on the degree to which online speech should be policed, are united in their belief that the legislature is the wrong place to start; the First Amendment isn’t just a law, but a culture. The implication of American tech companies serving the entire world, though, is that that American culture, so familiar to Americans yet anathema to most Europeans, is the only choice for the latter.
One of the reasons it is interesting to be a Canadian writing about tech is that, generally speaking, we take influence from both Western European and American perspectives on all sorts of matters. Our right of expression is not as wide-ranging as that of the U.S., but it lacks many European limitations as well. Like many in Europe, Canadians feel perfectly able to express their views in public — more than Americans in their country — and do not feel that the small number of legal limitations are restrictive.
This week’s sweeping restrictions of the social media accounts of the president of the United States and the deplatforming of Parler were a necessarily American response to problems in America. The president was not silenced or censored, but his association with private companies was revoked because they did not want to deal with his particular brand of nightmare fuel. But it is clearly not a solution for worldwide issues — especially when non-U.S. countries struggle to enforce their laws against American companies.
Thompson’s prediction for the future of the internet is intriguing:
Here technology itself will return to the forefront: if the priority for an increasing number of citizens, companies, and countries is to escape centralization, then the answer will not be competing centralized entities, but rather a return to open protocols. This is the only way to match and perhaps surpass the R&D advantages enjoyed by centralized tech companies; open technologies can be worked on collectively, and forked individually, gaining both the benefits of scale and inevitability of sovereignty and self-determination.
Apple last year pledged a hundred million dollars to a new Racial Equity and Justice Initiative, promising big investments in underrepresented individuals and communities, initially in the United States and then around the world.
Apple today announced a set of major new projects as part of its $100 million Racial Equity and Justice Initiative (REJI) to help dismantle systemic barriers to opportunity and combat injustices faced by communities of colour. These forward-looking and comprehensive efforts include the Propel Center, a first-of-its-kind global innovation and learning hub for Historically Black Colleges and Universities (HBCUs); an Apple Developer Academy to support coding and tech education for students in Detroit; and venture capital funding for Black and Brown entrepreneurs. Together, Apple’s REJI commitments aim to expand opportunities for communities of colour across the country and to help build the next generation of diverse leaders.
A former Apple employee who noted that he was “not Black or Hispanic” described his experience on a team that was developing speech recognition for Siri, the virtual assistant program. As they worked on different English dialects — Australian, Singaporean, and Indian English — he asked his boss: “What about African American English?” To this his boss responded: “Well, Apple products are for the premium market.”
Benjamin notes that this interaction took place a year after Apple acquired Dr. Dre’s Beats brand:
The irony, the former employee seemed to imply, was that the company could somehow devalue and value Blackness at the same time.
For what it is worth, in the video introducing Apple’s racial equity initiative in June, Cook acknowledged the need for thorough correction:
In our supply chain and professional service partners, we’re committed to increasing our total spending with Black-owned partners, and increasing representation across companies we do business with. […]
We’re taking significant new steps on diversity and inclusion within Apple, because there is more that we can and must do to hire, develop, and support those from underrepresented groups — especially our Black and Brown colleagues.
Change begins at the top. But the “top” is somewhat relative: it is true not only at the executive level, but at each layer of management. African American English is still not a language option in Siri five years later, apparently at least in part because it doesn’t fit the “premium market” — and that is just one example. These changes take time and the projects announced today are surely a terrific investment in the future, but it must be acknowledged that Apple continues to have internal deficiencies today that it has the power to correct.
The research began with the observation that in the offline world, healthy communities have traditionally been served by thriving public spaces: town squares, libraries, parks, and so on. Like digital social networks, these spaces are open to all. But unlike those networks, they are owned by the community rather than a corporation. As you would expect, that difference results in a very different experience for the user.
Public spaces display a number of features that build healthier communities, according to researchers. “Humans have designed spaces for public life for millennia,” they write, “and there are lessons here that can be helpful for digital life.”
Even if the specifics of this research may need ironing out, the gist of it is inspiring. I looked through Civil Signals’ slide deck; I thought this was an eye-opening observation about the language often used for the ways social networks ought to be improved:
Encourage Civility. What counts as civil is often defined by dominant groups.
Reduce Polarization. Polarization isn’t the problem — dehumanization and lack of cross-connection are.
Increase Diversity. Mere contact with other groups or their ideas does not increase tolerance.
Inform People. Not all information is equally valuable to citizens.
Increase Trust. Not all institutions or individuals deserve trust.
Allow Participatory Governance. We think this is an important idea, but outside the scope of this research.
Last point notwithstanding, these are excellent arguments against which apparent improvements should be tested.
WhatsApp rival Telegram has seen a 500 per cent increase in new users amid widespread dissatisfaction with the way the Facebook-owned app handles people’s data.
Telegram recorded 25 million new users over the last 72 hours, according to founder Pavel Durov, taking the total number of users above 500 million.
This is roughly a quarter of the estimated 2 billion WhatsApp users around the world, though many users of the world’s most popular messaging app took to social media this week to urge others to leave the platform due to privacy concerns.
Right-wing extremists are using channels on the encrypted communication app Telegram to call for violence against government officials on Jan. 20, the day President-elect Joe Biden is inaugurated, with some extremists sharing knowledge of how to make, conceal and use homemade guns and bombs.
The messages are being posted in Telegram chatrooms where white supremacist content has been freely shared for months, but chatter on the channels has increased since extremists have been forced off other platforms in the wake of the siege of the U.S. Capitol last week by pro-Trump rioters.
Ignore the scary use of “encrypted” here — Telegram Channels are not encrypted, and encryption itself is not a specific worry. What interests me are the ways that misinformation is spreading now.
Telegram Channels are public, and messages posted there can be forwarded within Telegram to other Channels, Groups, and individuals. They can also be shared as standard web links, which is how I came across a semi-popular post — archived here — claiming that:
Apple is about to pull Telegram from the App Store
Apple is going to remotely delete all existing copies of Telegram installed on users’ devices
You can prevent your copy from being deleted by disabling the ability to delete apps in parental controls
I have no reason to suspect the first claim is true. Telegram is a widely used messaging app popular around the world for mostly legitimate uses. The second claim is incongruent with Apple’s treatment of Parler: existing copies of that app theoretically remain functional. The third claim has absolutely no relevance to anything here, and is not how that feature works. But this one message has racked up well over two hundred thousand views.
It is not just Telegram, or even solely a problem of these apps. Ben Collins, also at NBC News, explains how these fictions are now circulating through text messages:
One viral false conspiracy theory shared across the U.S. implores users to disable automatic software updates on their cellphones, claiming that the next patch will disable an emergency broadcasting system message from President Donald Trump. The false rumors are usually attached to another urban legend about a blackout coming in the next two weeks, which say people should be “prepared with food and water.”
Another viral text is a link to a deceptively edited video, also known as a “cheapfake,” that first appeared on the Twitter-like social media platform Parler. It features a series of mashed-up speeches by Trump that are realigned to lead the viewer to falsely believe he is calling for an uprising on Jan. 20.
This reminds me of the days of chain social media posts — post this and tag three friends or you won’t wake up tomorrow, that kind of thing — and chain emails before that, and chain posts on BBSes before that, and chain letters with actual postage before that.
But those posts seem quaint compared to what we’re seeing today. The conspiracy theories of the past seemed to be based on historical events. The ones that are circulating now are creating an alternate reality for the here and now. Instead of overanalyzing specks of dust in the Zapruder film decades on, there are now people who make a living by denying an audience the reality of what is happening before their very eyes. Neither chain letters nor invented versions of events are new; but, perhaps it is the combination of those and the speed of technology and lucrative careers in professional grifting that have given these messages unwelcome power.
Gayle King of CBS This Morning interviewed Tim Cook for an initiative Apple is announcing tomorrow. King previewed it today by saying:
It is not a new product. We should say, it is not a new product. It’s something, I think, bigger and better than that.
So, if not a new product, what could it be?
Apple’s January events have often centred around education, so it would not surprise me if the tip that fell into my lap is correct. It seems that Apple is a founding partner of the soon-to-be-officially-unveiled Propel Center in Atlanta:
Spanning 50,000 square feet, Propel Center will include state-of-the-art spaces to accommodate lecture halls, learning labs and common areas to facilitate group learning. The physical Propel Center will serve as a centralized nexus and symbol for HBCU collaboration across the country.
In addition to the main building, Propel will also be offering on-campus labs equipped by Apple, and online instruction with master classes from Spike Lee, Lisa Jackson, and — awkwardly — Jack Ma. There’s lots more; the promo site is only one page, but it’s worth checking out.
Anyway, I am not saying this is definitely what Cook will be talking about, but I am as confident as I can be that you will hear more about this tomorrow.
Now consider the current Mac product line. It would be instantly recognizable to a visitor from the early 2010s.
Of course, Macs have evolved a lot in the intervening years on the inside. But the exteriors of Apple’s Macs look remarkably like they did in 2012, if not 2007. It’s been a decade or more of quiet iteration without really rethinking the fundamentals of the product — except that one time, which Apple rapidly came to regret.
I have written before about Apple’s deliberate strategy to keep the industrial design of its first M1 Macs identical to their Intel-based predecessors. But Snell is right: the Mac lineup used to be more experimental and hungry for evolution in its hardware. What changed? Or, more accurately, why so little change?
Apple has settled on a nearly all-aluminum line; the wildest configuration is the choice of gold for MacBook Air models. Aside from the lack of a glowing Apple logo, today’s MacBook Pro looks, from not too far away, much like the one from 2008.
I am not one to argue for change for its own sake. But, as Apple plays around with the industrial design of the iPhone, iPad, and Apple Watch every few years, I have to wonder if part of the reason for the largely stagnant Intel era was the Intel processors themselves. Now that Apple is working with its own processors that have different thermal constraints and can be entirely custom-engineered, will we perhaps see a renaissance of experimentation in materials and form? I am not so sure; I do not want to get ahead of myself. The MacBook Air may not be the ideal laptop design, but it is pretty darn close. There is a reason it is the computer copied by every Windows OEM.
Maybe it is just my age, but I miss the era of glossy white finishes. I’m looking at my white and silver iPhone and it looks as crisp and modern and futuristic as you’d expect, without looking chintzy. Just a thought.
Jillian C. York of the Electronic Frontier Foundation on her personal blog:
Since Twitter and Facebook banned Donald Trump and began “purging” QAnon conspiracists, a segment of the chattering class has been making all sorts of wild proclamations about this “precedent-setting” event. As such, I thought I’d set the record straight.
Everything in the social media ecosystem was once tilted in the favor of toxic forces, from the algorithms that push our content feeds toward extremism to the companies’ longstanding reticence to admit it. Imagine a foosball game on a slanted table. Yes, the little soccer players could try to stop each rush of the rolling ball, but all their spinning wouldn’t matter in the end. Over the past few years, however, that table has started to be righted. Driven by outside pressure over election disinformation, mass killings, and COVID-19 striking close to home — and perhaps most significantly, internal employee revolts — the companies’ leaders have put into place a series of measures that make it harder for toxic forces. From banning certain types of ads to de-ranking certain lies, these safeguards built up, piece by piece, culminating in the deplatforming of the Internet’s loudest voice.
While it is impossible for anyone to have complete foresight about this moment, one need only look back at the abuse hurled most often at already marginalized individuals and groups on these platforms to know that the warning signs were there all along. The trouble is that these platforms did not meaningfully change to address why they were being used in bad faith. I am under no illusion that horrible people would cease to exist on these or any other online platforms. But I do think it is possible that, had those who make decisions at these companies taken more seriously the concerns of those on the receiving end of viral hate, they would have been better equipped to scale their moderation strategies.
My own opinion is that this collision of politics, society, and technology has been a long time coming. As far back as 2010, I have argued that the legislative challenges facing technology will be more acute than technological changes themselves. My argument has been that these social platforms are essentially nation-states and require a higher level of social and civic etiquette established and enforced through official policies. When evaluating the performance of Twitter, Facebook, and others on this particular score, the phrase I have often used is “dereliction of duty.”
Malik doesn’t directly say this and I do not want to put words in his mouth, so this is my own extension of his piece: I think part of that duty must be in their careful moderation. That means creating limitations around problematic posts and users very quickly; it also means applying the lightest possible touch. For over a decade now, the largest social platforms have often been far too cautious about setting expectations of behaviour.
Concern about firefighting efforts doesn’t get us far enough when there are prolific arsonists.
Perhaps it is somewhat restrictive for Amazon to decide that it does not want to host a social network that is deliberately under-moderated — to the extent that it lacked basic controls against posting child sexual abuse imagery — on which an attempted insurrection was planned, and where many users in the days before and after that attack described their plans for assassinating lawmakers. I am not, however, convinced it says anything more general about the power of big tech companies. I imagine the moneyed backers of Parler can find another host for their Nazi-filled community of “free speech advocates”, or they can put some servers together themselves. They can surely pull themselves up by their gold-tipped bootstraps — unfortunately for society.
Update: Parler has now found a home with a hosting company that specializes in the same sort of websites. It is almost as though this was not censorship as much as it was one company not wishing to do business with another company for completely understandable reasons.
In an email sent this morning and obtained by BuzzFeed News, Apple wrote to Parler’s executives that there had been complaints that the service had been used to plan and coordinate the storming of the US Capitol by President Donald Trump’s supporters on Wednesday. The insurrection left five people dead, including a police officer.
“We have received numerous complaints regarding objectionable content in your Parler service, accusations that the Parler app was used to plan, coordinate, and facilitate the illegal activities in Washington D.C. on January 6, 2021 that led (among other things) to loss of life, numerous injuries, and the destruction of property,” Apple wrote to Parler. “The app also appears to continue to be used to plan and facilitate yet further illegal and dangerous activities.”
Apple gave Parler a day from when it sent its letter to submit a new version of the app alongside a moderation policy. Google did not wait; it pulled the app from the Play Store this afternoon.
From Apple’s letter, as quoted in the article:
Your CEO was quoted recently saying “But I don’t feel responsible for any of this and neither should the platform, considering we’re a neutral town square that just adheres to the law.” We want to be clear that Parler is in fact responsible for all the user generated content present on your service and for ensuring that this content meets App Store requirements for the safety and protection of our users. We won’t distribute apps that present dangerous and harmful content.
For what it is worth, it will still be possible to post to Parler from its website even if these apps are removed. It is not as though Parler will cease to exist on the iPhone after tomorrow when, inevitably, the ostensibly unmoderated platform fails to produce a tighter moderation strategy.
This clearly relates to questions about whether it is fair that users’ native software choices on the iPhone are limited by Apple’s control over the platform and its only software distribution mechanism. It seems reasonable to me that Apple would choose not to provide a platform for apps that have little to no moderation in place. Both Apple and Google disallowed clients for Gab — Twitter but for explicit Nazis — in their respective stores. Apple rejected the app at submission time, while Google permitted it and then pulled it:
Google explained the removal in an e-mail to Ars. “In order to be on the Play Store, social networking apps need to demonstrate a sufficient level of moderation, including for content that encourages violence and advocates hate against groups of people,” the statement read. “This is a long-standing rule and clearly stated in our developer policies. Developers always have the opportunity to appeal a suspension and may have their apps reinstated if they’ve addressed the policy violations and are compliant with our Developer Program Policies.”
Gab now runs on Mastodon, which is a decentralized standard that allows different communities to moderate posts as they choose. There are many Mastodon clients in the App Store, likely because there is not really a singular Mastodon product as much as there are many posts collected through a standard format.