A prequel report from RISJ, released a few weeks before the General Data Protection Regulation came into effect on May 25, found that the news sites researchers looked at were worse than popular non-news websites when it came to third-party content. These news sites averaged 40 third-party domains and 81 third-party cookies per page, compared to averages of 10 and 12, respectively, for popular non-news websites. (Researchers collected the data in the first three months of this year.)
This time around, researchers found declines in cookie prevalence on the 200-plus news sites they tracked, across several categories, from cookies related to advertising and marketing to ones related to design optimization (they compared the same sites in April and again in July). On average, total cookies related to design optimization dropped 27 percent; cookies related to advertising and marketing dropped 14 percent.
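For a sense of how a decline like that is computed, here is a quick sketch. The per-site counts below are made up for illustration; they are not figures from the RISJ report.

```python
# Hypothetical cookie counts for three news sites, before (April) and
# after (July) GDPR enforcement; illustrative numbers only.
april = {"site-a": 80, "site-b": 64, "site-c": 99}
july = {"site-a": 70, "site-b": 55, "site-c": 84}

total_april = sum(april.values())   # 243
total_july = sum(july.values())     # 209
decline = (total_april - total_july) / total_april * 100
print(f"Total cookies declined {decline:.0f}%")  # Total cookies declined 14%
```

The real study did this across hundreds of sites and several cookie categories, but the arithmetic is the same.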
I’m not surprised by these findings. With GDPR warnings in place, collectors of lots of data can do one of two things: ask visitors for permission, or reconsider just how much data they need to collect. Without GDPR, it’s unlikely that data collectors would do either.
Over ten years ago, there was this big piece of land that was carefully landscaped and prepared by the landowner for lots of people to use. We could take up any spot on that land that we would like. Forward-thinking as they were, the landowner built in various hookups for utilities and amenities. It was nice.
Very quickly, some enterprising people began building apartments on the land. These apartments often offered new amenities that made use of the existing infrastructure established by the landowner; sometimes, new infrastructure was built to better provide amenities that the landowner had not considered. Eventually, we had a great deal of choice of apartments. There were a couple of boutique buildings that people could live in, a few bigger ones that were a little nicer, or — for those who had the ability — enterprising residents were welcome to build their own block and lease it to anyone who wanted to stay in it.
Then, the landowner decided to buy one of the nicest apartment buildings on the site. And, slowly, residents of that apartment started to notice little changes being made. It began to receive new amenities, some of which were unavailable to anyone else on the land. Many people found that to be irritating but, as they were the owners, understandable.
More changes were made to the very nice apartment building. Over time, it stopped feeling like the original apartment, and the owners decided to tear it down and build a new one. It looked pretty nice, but suffered from some shoddy materials and craft. They put billboards on the side of it, and began pestering everyone to meet their neighbours and their friends’ neighbours. They started giving different amenities to different people, like some sort of science experiment to see which residents would crack first. Even so, most people wanted to live in that apartment because it had all of the amenities, and it had the landowner’s name on it, so it felt more official.
But there were still lots of other apartments for people to live in if you didn’t like some of the strange experiments happening in the big, popular apartment, and could live without a few nice amenities. The landowner mostly left these places alone because residents were still contributing to the community, and all of those apartments were disproportionately contributing to the value of the land.
One day, though, the owners decided to set a limit on the number of people who could live in each apartment building. They also very quietly began telling the management of each building that they didn’t want apartments on their land any more, but didn’t tell management when they would be making the final call on that. They also acknowledged just how important these apartments were to the overall community, and pledged to keep the plumbing and electricity hooked up indefinitely. Those mixed signals made management concerned but, as no decision had been made, each apartment kept being maintained and renovated.
And then, out of the blue, the landowner made the call. They decided to charge apartment companies lots of money per resident to stay on the land, and they said that they would be turning off some of the utilities at a later date. Some of the renters saw the writing on the wall and decided to move into the big apartment run by the landowner, and they were happy. Others tried moving in only to find it gaudy and horrible, and moved right back into their old place. Management at these apartments pleaded with the landowner to help them figure this out for their tenants, but the landowner didn’t budge.
The day came for the landowner to turn off some of the less essential utilities to all of the smaller apartments. Some people stuck around — even with limited amenities, they still preferred living in those apartments to the popular-but-tacky one. A few people decided to find some new land, because the landowner was clearly only interested in putting all of their resources behind the apartment they also owned. There was little disagreement on their right to do so — it’s their land, of course. But by pretending that the land’s value was due to the big apartment rather than the overall community, the landowner made many residents question whether they knew what they were doing with their land. That feeling was deepened when the landowner also let a bunch of actual, literal Nazis stay on their land and call up any of the residents whenever they felt like it. That seemed like a bad idea.
Today, the landowner is spending much of their time attempting to convince the community to move out of their independently-managed apartments and into the big one. As they also keep saying that they want to help with the upkeep of the indie apartments, it’s very difficult to know what residents ought to do if they would like to remain in the community. And, given the poor communication from the landowner, it’s unclear what their next steps are and how they will affect the community in the months and years to come.
Whichever app you use, the biggest changes are to timeline streaming and push notifications. Twitterrific used to allow you to live-stream your timeline over WiFi, which is no longer possible. Instead, your timeline will refresh every two minutes or so over WiFi or a mobile data connection when the app is running. Tweetbot doesn’t support streaming anymore either, but it too will periodically refresh your timeline when the app is open.
Notifications are more limited as well. Tweetbot and Twitterrific used to allow users to turn on notifications for mentions, direct messages, retweets, quote tweets, likes, and follows, but don’t anymore.
How these changes shake out for third-party clients remains to be seen. I’ve used the beta update for Tweetbot over the past week, and the elimination of its Stats and Activity section has left me feeling like there is something missing from the app. I still prefer it to the official app, but the removal of that section is a meaningful loss. A similar hole will be left in Twitterrific when the Today section no longer works. Both apps have also lost their Apple Watch apps and live-streaming. If those are critical features to your use of Twitter, you may want to give the official client another try.
I’ve been using Twitter pretty much constantly for about eleven years,1 and I don’t think I’ve ever spent any time regularly using their first-party client on my phone. At a previous job, I used their Mac client, but that’s the extent of my first-party experience for my entire time using the platform. I started with Twitterrific on the desktop and phone, used a bunch of other third-party apps while there was a sincere market for them, and then settled on Tweetbot several years back.
I wanted to be fair, so I gave the official client another shot this week. It still isn’t my jam. It isn’t the ads that are a problem — they’re distracting, of course, but they’re a known kind of distraction. It’s something about the app that makes Twitter, as a concept, feel heavy and burdensome. It’s not solely the prompts to follow other accounts, or the strange reversal of the reverse-chronological timeline when a self-replying thread appears, or the real-time updates to retweet and like numbers — it’s a combination of all of those things, and many more. When I use the first-party client, I feel like I’m being played around with for business reasons.
Tweetbot makes Twitter feel light and friendly to me. I’m still using it for this reason; you may feel differently, and using the first-party client may be totally fine for you, which is great. But, for a long-time user, it’s a hard adjustment to make, and one that I worry I’ll have to make sooner rather than later, because I don’t see Twitter continuing to support third-party clients for much longer.
For example, in early 2015 the FCC voted to upgrade the standard definition of broadband from a paltry 4 Mbps down and 1 Mbps up to a more respectable 25 Mbps down and 3 Mbps up.
At the time, giant ISP executives, lobbyists, and numerous ISP-loyal Senators whined incessantly about the changes. Commissioner Ajit Pai (who hadn’t yet been promoted to agency head) was quick to vote against the effort, joining alongside cable lobbying organizations who lamented the changes as “unrealistic and arbitrary.”
And once again, Ajit Pai is hoping to keep the broadband definition bar set at ankle height.
In a Notice of Inquiry published last week, Pai’s FCC proposed keeping the current 25/3 definition intact, something that riled his fellow Commissioner Jessica Rosenworcel.
An FCC report in February, based on data collected through the end of 2016, found that barely over half of American census blocks had two or more options for 25/3 broadband, and only 15% had a choice of 100/10 providers. Those numbers are almost certainly better now, but it’s past time the definition of “broadband” was set much higher.
The Competition Bureau of Canada is in the process of conducting a broadband availability study, too. In 2016, the CRTC ruled that a 50/10 broadband connection was a basic service; the CRTC’s own glossary, however, still defines broadband as a connection supporting a miserable 1 Mbps download speed.
What would happen if Apple added a USB C port to the iPad?
It would, of course, have to be alongside the Lightning port in my opinion. But that would open up a whole new bunch of possibilities:
It would boost the USB C world just slightly more. Or at least move in the direction of having a single port that’s available on all Apple devices. For example, you’d get one external drive, and maybe an external display, that you’d be able to connect to your Mac or iPad. It sounds super simple, but that’s what it should be.
I don’t see a circumstance with Lightning and USB-C ports on the same device; Apple has always favoured reducing the number and type of ports, and their functionality would be largely duplicative. I also doubt the handful of rumours that have been floating around claiming that next year’s iPhones will replace the Lightning port with a USB-C port.
However, this is the best argument I’ve heard yet for why Apple could be interested in switching. The USB-C market so far has been lacklustre, not to mention confusing. Remember how fast airport convenience stores and knick-knack shops started selling products with the Lightning connector? Apple is selling about 50% more iPhones than they did when the iPhone 5 was released, so that’s a huge incentive for peripheral makers.
I’m still not convinced that this is what will happen. I believe there are technical reasons why the current crop of iPhones couldn’t fit a USB-C port, too. This is just the best argument I’ve heard yet for why it might be beneficial.1
I think it’s more likely that an updated Lightning port could be adopted as a USB standard. ↩︎
For the most part, Google is upfront about asking permission to use your location information. An app like Google Maps will remind you to allow access to location if you use it for navigating. If you agree to let it record your location over time, Google Maps will display that history for you in a “timeline” that maps out your daily movements.
Storing your minute-by-minute travels carries privacy risks and has been used by police to determine the location of suspects — such as a warrant that police in Raleigh, North Carolina, served on Google last year to find devices near a murder scene. So the company will let you “pause” a setting called Location History.
This may be a minor quibble, but this is some pretty strange framing for an otherwise well-reported story. The privacy risks of giving your real-time location to a targeted advertising company are glossed over; the implication is that the only reason you may wish to disable this feature is that you might be engaged in criminal activity.
Google says that will prevent the company from remembering where you’ve been. Google’s support page on the subject states: “You can turn off Location History at any time. With Location History off, the places you go are no longer stored.”
That isn’t true. Even with Location History paused, some Google apps automatically store time-stamped location data without asking. (It’s possible, although laborious, to delete it.)
To stop Google from saving these location markers, the company says, users can turn off another setting, one that does not specifically reference location information. Called “Web and App Activity” and enabled by default, that setting stores a variety of information from Google apps and websites to your Google account.
These settings appear to only be available in Google’s My Account section; I couldn’t find the same settings in the Google Maps app on my iPhone. I did, however, find a setting, under “About, Terms & Privacy”, called “Location Data Collection”, which was switched on; I disabled it.
My account’s settings were the inverse of what I expected, too: “Web and App Activity” was turned off, but “Location History” was switched on; I turned it off, too.
“People You May Know looks at, among other things, your current friend list and their friends, your education info and your work info,” Facebook explained when it launched the feature.
That wasn’t all. Within a year, AdWeek was reporting that people were “spooked” by the appearance of “people they emailed years ago” showing up as “People They May Know.” When these users had first signed up for Facebook, they were prompted to connect with people already on the site through a “Find People You Email” function; it turned out Facebook had kept all the email addresses from their inboxes. That was disturbing because Facebook hadn’t disclosed that it would store and reuse those contacts. (According to the Canadian Privacy Commissioner, Facebook only started providing that disclosure after the Commission investigated it in 2012.)
Because about one in three people on Earth use a Facebook product, it’s almost a certainty that your contact details have been uploaded by one or more of your contacts, and that the company has the capability to map out at least part of your real-life social network — even if you are not a Facebook member and have never consented to this. There appear to be few laws against this practice despite its obviously devastating privacy impact.
A generation of Chinese is coming of age with an internet that is distinctively different from the rest of the web. Over the past decade, China has blocked Google, Facebook, Twitter and Instagram, as well as thousands of other foreign websites, including The New York Times and Chinese Wikipedia. A plethora of Chinese websites emerged to serve the same functions — though they came with a heavy dose of censorship.
Now the implications of growing up with this different internet system are starting to play out. Many young people in China have little idea what Google, Twitter or Facebook are, creating a gulf with the rest of the world. And, accustomed to the homegrown apps and online services, many appear uninterested in knowing what has been censored online, allowing Beijing to build an alternative value system that competes with Western liberal democracy.
It’s easy to see why China is able to do this where no other country can: the population there is big enough to support a gigantic isolated ecosystem. For context, all of the regions that major American tech companies tend to optimize for — the United States, the European Union, Canada, Australia, and New Zealand — have a combined population that is about the same as China’s alone.
I see no problem with American tech companies finding it difficult to conquer other countries — there should be healthy skepticism about the risk of having much of the world’s information on platforms operated largely by people on the west coast of the United States. China is a very special case, though, as its government is one of the world’s most oppressive, and Yuan’s reporting indicates that a new generation of people has grown up not fully aware of the degree to which all the information they see is being censored and controlled by an authoritarian regime.
I know the headline of this link sounds esoteric and boring, but this is actually a fascinating story from David Zweig in Wired:
Random Farms, and tens of thousands of other theater companies, schools, churches, broadcasters, and myriad other interests across the country, need to buy new wireless microphones. The majority of professional wireless audio gear in America is about to become obsolete, and illegal to operate. The story of how we got to this strange point involves politics, business, science, and, of course, money.
The upheaval around wireless mics can be traced to the National Broadband Plan of 2010, where, on the direction of Congress, the FCC declared broadband “a foundation for economic growth, job creation, global competitiveness and a better way of life.” Two years later, in a bill best known for cutting payroll taxes, Congress authorized the FCC to auction off additional spectrum for broadband communications. In 2014, the FCC determined it would use the 600 MHz band — where most wireless microphones operate — to accomplish that goal.
According to Zweig, this is the second time in ten years that part of the RF spectrum used for wireless audio equipment has been reallocated; so, for many users, this is the second time in recent memory they’re having to spend thousands of dollars on new gear. And there appears to be no indication that the FCC will cordon off a specific band of spectrum for these kinds of devices to operate in, which is foolish.
Apple’s iOS system encrypts location information and doesn’t associate that information with any name or Apple ID. The iOS operating system also permanently deletes data from an iPhone if the phone doesn’t connect to Wi-Fi or power for seven days.
iPhones without SIM cards will send a limited amount of information about cellular towers and Wi-Fi hotspots to Apple if the user has enabled location services. The information will be encrypted and isn’t used for targeting advertising. If location services are turned off, the iPhone won’t send any data to Apple.
However, as Sarah Frier points out at Bloomberg, Apple has no control over data use after a user has agreed to share their data with a third-party developer:
Apple has built in two direct consumer controls: one, when you agree to share your contact information with the developer; and the other, when you toggle the switch in your settings to deny that permission. But neither is as simple as it seems. The first gives developers access to everything you’ve stored about everyone you know, more than just their phone numbers, and without their permission. The second is deceptive. Turning off sharing only blocks the developer from continued access — it doesn’t delete data already collected.
Notwithstanding that users can, of course, also deny permission when first prompted, there is no mechanism for them to pull their data completely using a simple toggle switch or similar. It’s more likely that they will need to ask the company specifically to remove their historical data, and they will only have legal standing to demand it in Europe — thanks to GDPR — and in other jurisdictions with strong privacy laws.
Apple probably can’t — and, arguably, should not — police user data in the hands of third-party developers when permission has been granted for its use. They would end up having to regulate any number of companies that are notoriously bad stewards of user data, like Facebook and Google. Users shouldn’t be required to read the excessive and overly-permissive contracts in every app. That’s something governments ought to regulate instead, and we should be expecting them to do a better job.
Acknowledging the widespread repercussions from the act of corporate censorship, first amendment experts warned Monday that Facebook’s decision to ban InfoWars could set a completely reasonable precedent for free speech. “If we allow giant media platforms to single out individual users for harassing the families of murdered kindergarteners, it could lead to a nightmare scenario of measured and well-thought-out public discourse,” said Georgetown law professor Charles F. Abernathy, cautioning that it was sometimes very easy for private organizations to draw a line between constitutionally protected free speech and the slanderous ravings of a bloated lunatic hawking snake oil supplements. […]
There’s no reason any platform should feel compelled to carry this unique brand of paranoia-based propaganda.
Apple was the first major tech company to make a move against Alex Jones of Infowars on Sunday night by removing his podcast from iTunes.
But the Infowars iPhone app, which hosts some of the same content and themes found on the podcast, still lives on in the company’s App Store. In fact, the app had skyrocketed from below the top 10 to become the fourth most popular app in the news category — beating out the CNN and Fox News apps — by Tuesday morning. The boost was likely caused by increased downloads given the news Monday that Infowars was banned from several tech platforms.
It’s genuinely remarkable and alarming that such a garbage source is the fourth most popular free news app on the App Store right now, even in Canada.
But why is it still there? Apple clearly doesn’t want to index Jones’ hours of supplement sales radio interspersed with paranoia-driven intimidation and outrageous commentary, so why would it host an app that provides the same? The same is true of Google — the app is still available in the Google Play store, despite Jones being banned from YouTube.
For decades, the district south of downtown and alongside San Francisco Bay here was known as either Rincon Hill, South Beach or South of Market. This spring, it was suddenly rebranded on Google Maps to a name few had heard: the East Cut.
The swift rebranding of the roughly 170-year-old district is just one example of how Google Maps has now become the primary arbiter of place names. With decisions made by a few Google cartographers, the identity of a city, town or neighborhood can be reshaped, illustrating the outsize influence that Silicon Valley increasingly has in the real world.
The service has also disseminated place names that are just plain puzzling. In New York, Vinegar Hill Heights, Midtown South Central (now NoMad), BoCoCa (for the area between Boerum Hill, Cobble Hill and Carroll Gardens), and Rambo (Right Around the Manhattan Bridge Overpass) have appeared on and off in Google Maps.
I wanted to know if this was widespread, so I opened Google Maps and found one straight away: apparently, Calgary has a community called Grandview which, so far as I can tell, doesn’t actually exist — the area Google Maps designates as Grandview is entirely in Ramsay. Even the area I grew up in, West Hillhurst, is called Upper Hillhurst in Google Maps, which is just north of Westmount, another neighbourhood that doesn’t exist. It’s easy to verify all of this because the municipal government publishes a list (PDF) of every neighbourhood in the city, and none of these areas are on it.
Laptop battery life is decreasingly relevant to me as more airplanes offer power outlets. But sometimes you lose that lottery, as I did on my latest 8-hour daytime flight.
Apple’s “Up to 10 hours” claim doesn’t apply to my work, which is usually a mix of Xcode, web browsing, and social time-wasting, so I knew I’d have to seriously conserve power.
Sometimes, you just need Low Power Mode: the switch added to iOS a few years ago to conserve battery life when you need it, at the expense of full performance and background tasks.
I’ve long wanted something like this in MacOS, and not just for battery life. All too often, I find myself in a hotel or at a public WiFi hotspot and MacOS will still try to upload photos or download a software update. Many Canadian ISPs also have monthly bandwidth caps, and it would be rude to gobble up their monthly allowance with my giant RAW files. You can disable all of these things individually, but it’s a pain; I’d rather have a single toggle to temporarily reduce my computer’s resource use.
Update: Tully Hansen reminded me about TripMode, a third-party app that allows you to restrict bandwidth on a per-app basis.
What would it be like if we all deleted Facebook? What does the future of online privacy look like? Why can’t the tech industry diversify? Are monkeys allowed to sue over copyrights? And what in the world is #cockygate?
To answer questions like these, the editorial board will soon be turning to Sarah Jeong, who will join us in September as our lead writer on technology. Sarah will also collaborate with Susan Fowler Rigetti, our incoming tech op-ed editor, and Kara Swisher, our latest contributor on tech issues.
Jeong is one of my favourite writers; this is terrific news. Unfortunately, some goblins dug up old — and mostly funny — tweets that she posted, and deliberately took them out of context to imply that she’s racist. There are understandable contextual differences between her tweets and, for example, Roseanne Barr’s.
The Verge, Jeong’s current employer, published an editor’s note admonishing this reprehensible abuse campaign:
Online trolls and harassers want us, the Times, and other newsrooms to waste their time by debating their malicious agenda. They take tweets and other statements out of context because they want to disrupt us and harm individual reporters. The strategy is to divide and conquer by forcing newsrooms to disavow their colleagues one at a time. This is not a good-faith conversation, it’s intimidation.
So we’re not going to fall for these disingenuous tactics. And it’s time other newsrooms learn to spot these hateful campaigns for what they are: attempts to discredit and undo the vital work of journalists who report on the most toxic communities on the internet. We are encouraged that our colleagues at The New York Times are standing by Sarah in the face of feigned outrage.
This is a good statement, but I agree more with Libby Watson of Splinter:
The New York Times really fucked this one up. Instead of ignoring this ridiculous complaint and letting it die — which it would have, because who the fuck cares what The Gateway Pundit is doing — they have validated it. (At least they didn’t fire her, you might say, but even responding to this garbage sets a terrible precedent and legitimizes a completely illegitimate, bad faith campaign to discredit Jeong and the Times itself.)
Now, according to the Times, it is fair to say that being rude about white people serves “to feed the vitriol that we too often see on social media,” and that her tweets represent a “type of rhetoric” at all and not just… jokes, nothingnesses, completely mundane and honestly quite boring observations that have no wider importance or meaning. Do we think Sarah Jeong actually enjoys chasing down and bullying old white men for fun? Do we think she earnestly wants to “cancel” white people? No, because that doesn’t mean anything — “cancel” doesn’t mean “do genocide to.”
Fringe trolls only have this kind of power if it is granted to them.
I look forward to reading Jeong’s columns in the Times starting next month.
This is the email everyone in Apple’s affiliate program received this afternoon:
Thank you for participating in the affiliate program for apps. With the launch of the new App Store on both iOS and macOS and their increased methods of app discovery, we will be removing apps from the affiliate program. Starting on October 1st, 2018, commissions for iOS and Mac apps and in-app content will be removed from the program. All other content types (music, movies, books, and TV) remain in the affiliate program.
Followed by some boilerplate stuff about the affiliate program and — in my copy, at least — a Japanese translation, but only a Japanese translation. Strange.
I can’t help but feel that Apple is waving off the wide array of sites that help consumers find apps as being unnecessary in light of Apple’s new editorial content within the App Store. I simply don’t believe that to be the case. The App Store is massive, and the crop of websites that have come to make a name for themselves comparing and reviewing apps add value to the ecosystem.
Anyone still waiting for Apple to decrease its 30% cut?
One of the things that Apple has done fairly well is to encourage and cultivate a community of users who care deeply about the Apple products they use — not because they’re from Apple specifically, but because it’s a community of people who appreciate the tools that are essential components in their lives. Part of that community manifests as websites and blogs that focus on different aspects of the company: rumours, product reviews, retail stores,1 and new software.
A move like this is a frustrating kick in the teeth to that community. There are great websites that are built, in large part, on this revenue stream. It feels especially like a dick move coming just one day after Apple announced their highest-ever quarterly revenue from services and biggest third financial quarter in the company’s history. Is it for financial reasons? Is it because there are bad actors abusing the program? Nobody outside Apple knows for certain, but it feels like it’s dismissive of the greater Apple community.
Apple on Tuesday reported that it sold 3.72 million Macs in its third quarter, which spanned April 1 through June 30, the fewest in any single quarter since it sold 3.47 million in the third quarter of 2010.
It’s almost like a product’s freshness and degree of activity surrounding it correlates with sales.
Now, iPhone unit sales are still down from the days of the iPhone 6. What’s changed is that the average selling price of an iPhone is up—way up. That’s mostly thanks to the iPhone X, which has a record-breaking price tag that hasn’t seemed to matter one whit in terms of consumer acceptance. (And for those who don’t want to spend $1000 on an iPhone X, apparently the iPhone 8 and 8 Plus hits the spot.)
The math is pretty straightforward: Apple sold 11.6 million iPads, which is slightly more than they sold in the year-ago quarter, but iPad revenue was down 5 percent, which means the average selling price of an iPad dipped. This isn’t surprising, because the more pricey iPad Pro models are long in the tooth and the cheap iPad is relatively new. What it suggests is that the iPad sales price will rise once new iPad Pros arrive (presumably this fall), but in the meantime the release of the low-cost iPad is keeping things afloat.
It’s almost like a product’s freshness and degree of activity surrounding it correlates with sales.
My home computer in 1998 had a 56K modem connected to our telephone line; we were allowed a maximum of thirty minutes of computer usage a day, because my parents — quite reasonably — did not want to have their telephone shut off for an evening at a time. I remember webpages loading slowly: ten to twenty seconds for a basic news article.
At the time, a few of my friends were getting cable internet. It was remarkable seeing the same pages load in just a few seconds, and I remember thinking about the kinds of possibilities that would open up as the web kept getting faster.
And faster it got, of course. When I moved into my own apartment several years ago, I got to pick my plan and chose a massive fifty megabit per second broadband connection, which I have since upgraded.
But first, a short parenthetical: I’ve been writing posts in both long- and short-form about this stuff for a while, but I wanted to bring many threads together into a single document that may pretentiously be described as a theory of or, more practically, a guide to the bullshit web.
A second parenthetical: when I use the word “bullshit” in this article, it isn’t in a profane sense. It is much closer to Harry Frankfurt’s definition in “On Bullshit”:
It is just this lack of connection to a concern with truth — this indifference to how things really are — that I regard as of the essence of bullshit.
In the year 1930, John Maynard Keynes predicted that, by century’s end, technology would have advanced sufficiently that countries like Great Britain or the United States would have achieved a 15-hour work week. There’s every reason to believe he was right. In technological terms, we are quite capable of this. And yet it didn’t happen. Instead, technology has been marshaled, if anything, to figure out ways to make us all work more. In order to achieve this, jobs have had to be created that are, effectively, pointless. Huge swathes of people, in Europe and North America in particular, spend their entire working lives performing tasks they secretly believe do not really need to be performed. The moral and spiritual damage that comes from this situation is profound. It is a scar across our collective soul. Yet virtually no one talks about it.
These are what I propose to call ‘bullshit jobs’.
What is the equivalent on the web, then?
The average internet connection in the United States is about six times as fast as it was just ten years ago, but instead of making it faster to browse the same types of websites, we’re simply occupying that extra bandwidth with more stuff. Some of this stuff is amazing: in 2006, Apple added movies to the iTunes Store that were 640 × 480 pixels, but you can now stream movies in HD resolution and (pretend) 4K. These much higher speeds also allow us to see more detailed photos, and that’s very nice.
But a lot of the stuff we’re seeing is a pile-up of garbage on seemingly every major website that does nothing to make visitors happier — if anything, much of this stuff is deeply irritating and morally indefensible.
Twenty-nine XML HTTP requests, totalling about 500 KB
Approximately one hundred scripts, totalling several megabytes — though it’s hard to pin down the number and actual size because some of the scripts are “beacons” that load after the page is technically finished downloading.
The vast majority of these resources are not directly related to the information on the page, and I’m including advertising. Many of the scripts that were loaded are purely for surveillance purposes: self-hosted analytics, of which there are several examples; various third-party analytics firms like Salesforce, Chartbeat, and Optimizely; and social network sharing widgets. They churn through CPU cycles and cause my six-year-old computer to cry out in pain and fury. I’m not asking much of it; I have opened a text-based document on the web.
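A tally like the one above can be reproduced for any page by exporting a HAR file from the browser’s developer tools and counting the requests whose hosts fall outside the first-party domain. A minimal sketch — the `cnn.com` default and the `page.har` filename are assumptions for illustration, and the `_transferSize` field is a Chrome-specific HAR extension rather than part of the HAR spec proper:

```javascript
// Tally third-party requests and transferred bytes from a HAR export.
// The "cnn.com" first-party domain is an assumed example, not hardcoded anywhere real.
function tallyThirdParty(har, firstParty = "cnn.com") {
  let requests = 0;
  let transferred = 0;
  for (const entry of har.log.entries) {
    const host = new URL(entry.request.url).hostname;
    if (!host.endsWith(firstParty)) {
      requests += 1;
      // _transferSize is Chrome's HAR field for bytes that actually crossed the wire;
      // fall back to 0 when an exporter omits it.
      transferred += entry.response._transferSize || 0;
    }
  }
  return { requests, transferred };
}

// Usage (Node): export a HAR from the Network tab, then:
// const har = JSON.parse(require("fs").readFileSync("page.har", "utf8"));
// console.log(tallyThirdParty(har));
```

Note that a count like this still understates the problem, since beacons that fire after the page “finishes” loading may not appear in the export at all.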
In addition, pretty much any CNN article page includes an autoplaying video, a tactic which has allowed them to brag about having the highest number of video starts in their category. I have no access to ComScore’s Media Metrix statistics, so I don’t know exactly how many of those millions of video starts were stopped instantly by either the visitor frantically pressing every button in the player until it goes away or just closing the tab in desperation, but I suspect it’s approximately every single one of them. People really hate autoplaying video.
Also, have you noticed just how many websites desperately want you to sign up for their newsletter? While this is prevalent on so many news and blog websites, I’ve dragged them enough in this piece so far, so I’ll mix it up a bit: this is also super popular with retailers. From Barnes & Noble to Aritzia, Fluevog to Linus Bicycles, these things are seemingly everywhere. Get a nominal coupon in exchange for being sent an email you won’t read every day until forever — I don’t think so.
Finally, there are a bunch of elements that have become something of a standard with modern website design that, while not offensively intrusive, are often unnecessary. I appreciate great typography, but web fonts still load pretty slowly and cause the text to reflow midway through reading the first paragraph. And then there are those gigantic full-width header images that dominate the top of every page, as though every two-hundred-word article on a news site was the equivalent of a magazine feature.
As Graeber observed in his essay and book, bullshit jobs tend to spawn other bullshit jobs for which the sole function is a dependence on the existence of more senior bullshit jobs:
And these numbers do not even reflect on all those people whose job is to provide administrative, technical, or security support for these industries, or for that matter the whole host of ancillary industries (dog-washers, all-night pizza delivery) that only exist because everyone else is spending so much of their time working in all the other ones.
So, too, is the case with the bullshit web. The combination of huge images that serve little additional purpose than decoration, several scripts that track how far you scroll on a page, and dozens of scripts that are advertising related means that text-based webpages are now obese and torpid and excreting a casual contempt for visitors.
Given the assumption that any additional bandwidth offered to web developers will immediately be consumed, there seems to be just one possible solution: reduce the number of bytes that are transmitted. For some bizarre reason, this hasn’t happened on the main web. Instead, it somehow makes more sense for publishers to create an exact copy of every page on their sites that is expressly designed for speed. Welcome back, WAP — except, for some reason, this mobile-centric copy is entirely dependent on yet more bytes. This is the dumbfoundingly dumb premise of AMP.
That belies the reason AMP has taken off. It isn’t necessarily because AMP pages are better for users, though that’s absolutely a consideration, but because Google wants it to be popular. When you search Google for anything remotely related to current events, you’ll see only AMP pages in the news carousel that sits above typical search results, and you’ll see AMP links crowding the first results page as well. Google has openly admitted that they promote AMP pages in their results and that the carousel is restricted to AMP links on their mobile results page. They insist that this is because AMP pages are faster and, therefore, better for users, but that’s not a complete explanation for three reasons: AMP pages aren’t inherently faster than non-AMP pages, high-performing non-AMP pages are not mixed in with AMP versions, and Google has a conflict of interest in promoting the format.
It seems ridiculous to argue that AMP pages aren’t actually faster than their plain HTML counterparts because it’s so easy to see that these pages are very fast. And there’s a good reason for that. It isn’t that the AMP format has some sort of special sauce or some brilliant piece of programmatic rearchitecting. No, it’s fast simply because AMP restricts the kinds of elements that can be used on a page and severely limits the scripts that can run. Webpages can’t be littered with arbitrary and numerous tracking and advertiser scripts, and that, of course, leads to a dramatically faster page. A series of experiments by Tim Kadlec showed the effect of these limitations:
AMP’s biggest advantage isn’t the library — you can beat that on your own. It isn’t the AMP cache — you can get many of those optimizations through a good build script, and all of them through a decent CDN provider. That’s not to say there aren’t some really smart things happening in the AMP JS library or the cache — there are. It’s just not what makes the biggest difference from a performance perspective.
AMP’s biggest advantage is the restrictions it draws on how much stuff you can cram into a single page.
AMP’s restrictions mean less stuff. It’s a concession publishers are willing to make in exchange for the enhanced distribution Google provides, but that they hesitate to make for their canonical versions.
So: if you have a reasonably fast host and don’t litter your page with scripts, you, too, can have AMP-like results without creating a copy of your site dependent on Google and their slow crawl to gain control over the infrastructure of the web. But you can’t get into Google’s special promoted slots for AMP websites for reasons that are almost certainly driven by self-interest.
It’s key to recognize, though, that this is a choice, a responsibility, and — ultimately — a matter of respect. Let us return to Graeber’s explanation of bullshit jobs, and his observation that many of us effectively do fifteen hours of real work while at the office for forty. Much the same is true on the web: pages are capable of loading in a second or two, but that capacity has instead been used to spy on users’ browsing habits, make them miserable, and inundate them on other websites and in their inbox.
As for Frankfurt’s definition — that the essence of bullshit is an indifference to the way things really are — that’s manifested in the hand-wavey treatment of the actual problems of the web in favour of dishonest pseudo-solutions like AMP.
An actual solution recognizes that this bullshit is inexcusable. It is making the web a cumulatively awful place to be. Behind closed doors, those in the advertising and marketing industry can be pretty lucid about how much they also hate surveillance scripts and how awful they find these methods, even while encouraging their use. Meanwhile, users are increasingly taking matters into their own hands: the use of ad blockers is rising across the board, and many of them also block tracking scripts and other disrespectful behaviours. Users are making that choice.
They shouldn’t have to. Better choices should be made by web developers to not ship this bullshit in the first place. We wouldn’t tolerate such intrusive behaviour more generally; why are we expected to find it acceptable on the web?
An honest web is one in which the overwhelming majority of the code and assets downloaded to a user’s computer are used in a page’s visual presentation, with nearly all the remainder used to define the semantic structure and associated metadata on the page. Bullshit — in the form of CPU-sucking surveillance, unnecessarily interruptive elements, and behaviours that nobody responsible for a website would themselves find appealing as a visitor — is unwelcome and intolerable.
But in September 2015, she was suddenly plunged into an American nightmare. She got a call at 6 a.m. one morning from a colleague at Re/Max telling her something terrible had been posted about her on the Re/Max Facebook page. [Monika Glennon] thought at first she meant that a client had left her a bad review, but it turned out to be much worse than that.
It was a link to a story about Glennon on She’s A Homewrecker, a site that exists for the sole purpose of shaming the alleged “other woman.” The author of the Homewrecker post claimed that she and her husband had used Glennon as their realtor and that everything was going great until one evening when she walked in on Glennon having sex with her husband on the floor of a home the couple had been scheduled to see. The unnamed woman went into graphic detail about the sex act and claimed she’d taken photos that she used to get everything from her husband in a divorce. The only photo she posted though was Glennon’s professional headshot, taken from her bio page on Re/Max’s site.
Glennon was horrified. The story was completely fabricated and she had no idea why someone would have written it. Someone on Facebook named Ryan Baxter had posted it to the Re/Max page; Baxter also went through Glennon’s Facebook friend list and sent it to her husband, family members, and many of her professional contacts.
This story comes in cyclical waves of fury and heartbreak.
It’s sad that we ever got to a point where the keyboard can be shown 50% faster, but I’m thrilled to see these pain points addressed. It translates into meaningful, real-world improvements. The overall reception to iOS 12 is going to be very positive because of it. It speaks volumes that performance is the first section on Apple’s iOS 12 features page.
When something in the UI is slow, even subtly, we notice it; when a lot of things are even a tiny bit slow, it can make using the OS feel tedious. Speed improvements like these go a long way to making iOS feel like a joy to use. I hope this continues to be a priority for every iOS release.
If you follow me on Twitter, you may have caught wind of my frustration a couple of weeks ago with YouTube’s universally slow pages and my inability to find a Safari extension to put me out of my misery. Well, turns out I’m not alone. Chris Peterson of Mozilla:
YouTube page load is 5x slower in Firefox and Edge than in Chrome because YouTube’s Polymer redesign relies on the deprecated Shadow DOM v0 API only implemented in Chrome. […]
YouTube serves a Shadow DOM polyfill to Firefox and Edge that is, unsurprisingly, slower than Chrome’s native implementation. On my laptop, initial page load takes 5 seconds with the polyfill vs 1 without. Subsequent page navigation perf is comparable.
This is hugely frustrating because there really is no alternative to YouTube. Peterson points to a Firefox extension that restores the older YouTube layout, which does not require polyfills to work; for other browsers, the easiest method is to manually add a cookie.
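For reference, the cookie trick circulating at the time involved setting YouTube’s `PREF` cookie from the browser console; the `f6=8` value below is the commonly reported workaround, not anything YouTube documents, so treat it as an assumption that may stop working at any point:

```javascript
// Build the cookie string reported to opt out of YouTube's Polymer redesign.
// PREF and its f6=8 flag are an undocumented, community-discovered workaround.
function classicYouTubeCookie() {
  return "PREF=f6=8; domain=.youtube.com; path=/";
}

// In a browser console on youtube.com, apply it with:
// document.cookie = classicYouTubeCookie();
// then reload the page.
```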
It’s the latest case of Google building and tuning its web services so they work better or only work in the company’s Chrome browser. Google Meet, Allo, YouTube TV, Google Earth, and YouTube Studio Beta have all blocked Microsoft Edge in the past, and Google Meet, Google Earth, and YouTube TV have all also been blocked if you use Firefox. Google even blocked its Google Maps service on Windows Phone years ago in a passive-aggressive move that it eventually reversed. It’s an ongoing problem that means Chrome is slowly turning into the next Internet Explorer 6.
The implication here seems to be that Google has built YouTube to run well specifically in Chrome because they want more people using their own browser, and that it’s somewhat anticompetitive in the vein of their blocking of other products on competing platforms. I get that angle, but I think it’s misapplied here. It seems more likely to me that Google just didn’t adequately test YouTube in non-Chrome browsers, probably because they’re less popular and maybe because they don’t care. It’s not malicious; it’s laziness bordering on incompetence.
Over the last few days we’ve seen outcry about Apple’s new MacBook Pro, which offers an optional top-end i9 processor, and how its performance is throttled to the point of parody as the laptop heats up over time.
I elected to take a wait-and-see approach to this apparent scandal, especially after word spread that Apple was investigating this with Lee. But some, like Williams this morning, decided that it would be easier to conclude that — as always — Apple’s obsession with thin and light products was largely to blame:
Apple’s insatiable thirst for thinner, which we can see across the iPhone and Mac, appears to have finally caught up with the company. Its new hardware is the most powerful yet, but the form factor betrays that on-paper performance, because the laptop’s form factor means it’s thermally constrained.
Outside of making the MacBook thicker — which is unheard of, for Apple — there’s little the company can do to solve this. This isn’t the only thermally constrained machine Apple builds, either. After years of silence, Apple admitted in 2017 that the top-end Mac Pro was stagnant because “[…] we designed ourselves into a bit of a thermal corner, if you will.”
For what it’s worth, each new iPhone has become thicker than its predecessor since the iPhone 6S; so, no, it would not be unheard of for Apple. Especially egregious, though, is Williams’ assertion that there’s little that Apple can do to fix it.
After a week of controversy following the posting of a video that claimed the new 15-inch MacBook Pro could experience massive slowdowns, Apple on Tuesday acknowledged that the slowdowns exist — and that they’re caused by a bug in the thermal management software of all the 2018 MacBook Pro models. That bug has been fixed in a software update that Apple says it’s pushing out to all 2018 MacBook Pro users as of Tuesday morning.
This is the kind of thing you would expect Apple to catch before they shipped an expensive flagship product, but a week from identifying the bug to shipping a software fix seems fairly reasonable.
Williams’ article also — as usual for this kind of piece — blames Apple for the industry’s broader woes:
The pursuit of thinner, lighter laptops, a trend driven by Apple, coinciding with laptops replacing desktops as our primary devices means we have screwed ourselves out of performance — and it’s not going to get better anytime soon.
Apple may prioritize thin and light in their portable products, but that doesn’t make a trend. The industry following their lead does make a trend, but that’s the fault of those companies. If they thought that they would be constrained by the thermal envelope of thinner notebooks or that Apple was making a mistake in their priorities, they could have released different products. You can, of course, buy gaming laptops that are thicker and allow high-performance processors to run at their fullest potential, if that is your objective. But how many of those do you actually see people using in the real world, compared to those using MacBook Pro-like notebooks? In my experience, the latter dominates.
I like this simple but profound observation from Casey Johnston, of the Outline, regarding Instagram’s online status indicator in its messaging section:
When status indicators were originally introduced — the listing of screen names and opening- and slamming-of-door sounds by people signing on and off and posting of away messages on AOL Instant Messenger, may it rest — and continued to proliferate on services like Gchat and Facebook’s chat feature, we were all still using computers. Sometimes we were on those computers; sometimes, we were living our lives and not on computers.
Smartphones do not, and have never, faced this dichotomy of existence. Anyone who has Instagram, by definition, has a smartphone. If you have a smartphone, you are online no matter where you are or what you are doing.
Johnston indicates that the online status indicator is dead, but I think that’s an exaggeration. Perhaps the declarative online status indicator is on the wane, but I think the inferred status indicator is on the rise. I grew up with IRC, AIM, and MSN Messenger, and explicitly declaring your online status — and, often, your mood and chat readiness — was a hallmark of those platforms and protocols. Facebook retained that format even before the site had chat functionality. And declarative status indicators still exist, to an extent — Slack has several defaults to choose from, like “out sick”, “in a meeting”, or “commuting”.1
These kinds of statuses have largely been replaced by a more inferred or suggested status, by way of things like read receipts. This isn’t entirely new; answering machines and voicemail have long played the role of a passive status indicator. Read receipts are a subtle indicator to the sender that the recipient is or has been online. But they aren’t perceived in the same way — users often report feeling ignored if they see that a message they’ve sent has been read, but not responded to, even if it’s likely that the recipient is simply busy.
Statuses like these are how you know Slack is a serious business tool, not some goofy IRC-like chat room. ↩︎
Hannah Kuchler, Financial Times (this article may be behind a paywall):
Facebook has suspended Crimson Hexagon, as it investigates if the analytics firm violated any of the social network’s policies, including whether it harvested user data to build surveillance tools.
The social network said it does not yet have any evidence that the Boston-based company obtained Facebook or Instagram data improperly. Crimson Hexagon could not be reached for comment.
Crimson Hexagon describes itself as an artificial intelligence-powered consumer insights company for brand managers, marketers and executives. The company says it has the world’s largest library of public social data, including over one trillion posts.
Even though these are entirely public posts, it’s disconcerting to think that our offhand remarks and pictures of meals are seen as widgets to be collected by a creepy company to be resold as fodder for advertisers and marketers. Facebook users are already granting permission for Facebook to mine their online life in service of advertisers, of course, but this is a third-party company with whom data is not explicitly being shared for this purpose. I completely understand that public is public, and this information can be used this way legally and ethically. It’s still gross to think that the entire web is seen by companies like these solely as material to target ads.