Filip Struhárik, editor and social media manager at Slovakia’s Denník N newspaper:
Our traffic decreased by three percent in November and by nearly six percent in December 2017 (real users, year-on-year). Traffic to some other (mostly smaller) sites fell by tens of percentage points after the Explore Feed test started.
For a long time, Facebook was a main source of traffic for Denník N — around 40 percent of our readers came from Facebook. But this has changed. In December, less than 30 percent of our traffic came from Facebook. In November and December 2017, we had more visitors from Google than from Facebook for the first time (and it’s happening everywhere).
Although our reach, engagement, interactions and consumption have fallen dramatically, something interesting is happening. When we look at our “Reach Engagement Rate”, we can see that it's growing, especially after the Explore Feed test started.
What this suggests is that Facebook is concentrating visitors into audiences. This may reduce traffic and minimize the spread of biased and misleading news links amongst casually-interested users, but Struhárik’s post indicates that it could reinforce more active users’ news bubbles too.
While a lot of manufacturers stick Apple CarPlay into their vehicles as standard equipment these days, The Verge reports that it’s been a one-time $300 charge for BMW buyers since BMW started offering it on cars with built-in navigation in 2017. But BMW North America’s technology product manager Don Smith told The Verge that’ll change next year, and CarPlay will cost owners $80 a year.
To be clear, there’s nothing remotely subscription-based in CarPlay from a consumer’s perspective. The phone connects to the car’s screen, displays its own UI, and routes its audio through the car’s speakers — that’s pretty much it. There’s no justification for this other than nickel-and-diming iPhone users.
Combining new investments and Apple’s current pace of spending with domestic suppliers and manufacturers — an estimated $55 billion for 2018 — Apple’s direct contribution to the US economy will be more than $350 billion over the next five years, not including Apple’s ongoing tax payments, the tax revenues generated from employees’ wages and the sale of Apple products.
Planned capital expenditures in the US, investments in American manufacturing over five years and a record tax payment upon repatriation of overseas profits will account for approximately $75 billion of Apple’s direct contribution.
As of right now, I am no longer paying my taxes like everyone else. I am contributing to the Canadian economy, and will be issuing a self-congratulatory press release every April.
By the way, Apple estimates that they will pay $38 billion to repatriate $245 billion in income stored internationally, so the actual increase in expenditures and investments in American manufacturing will be $37 billion over five years, or about $7.4 billion per year.
Apple expects to invest over $30 billion in capital expenditures in the US over the next five years and create over 20,000 new jobs through hiring at existing campuses and opening a new one. Apple already employs 84,000 people in all 50 states.
The company plans to establish an Apple campus in a new location, which will initially house technical support for customers. The location of this new facility will be announced later in the year.
This is pretty big news. I’ve seen a handful of reports stating that this will be Apple’s “second” campus. It is not. Apple already has two well-known campuses — Infinite Loop and Apple Park — plus at least one more, in Austin, Texas.
There’s a lot to love about this press release; but, like many of the corporate gestures following last month’s U.S. tax cuts, I don’t see anything here that couldn’t have been done at the previous tax rate if companies like Apple were unable to withhold income internationally. I’m going to get emails for writing that, right?
Update: Tim Bradshaw of the Financial Times breaks down how Apple is calculating their $350 billion economic contribution:
Wednesday’s headline $350bn figure, though, does not include that kind of thing. What it does include is its annual spending with US-based suppliers and manufacturers over five years, capital expenditure plans for its new campus and data centres and a record tax payment related to its repatriation of overseas profits.
Spending with US suppliers was $50bn last year and will be $55bn this year, Apple says. Cynics might argue that this is money Apple would have spent anyway.
Breaking down Apple’s $350bn “direct contribution” to US:
$275bn+ of spending with US suppliers at $55bn+/year
+$38bn tax bill (estimated) for repatriation of overseas profits
+$30bn capex on new campus, data centres etc
+$5bn adv manuf fund
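Bradshaw's line items roughly reconcile with the headline number. A quick sanity check, using the figures from his breakdown above (all in billions of dollars):

```python
# Sanity-checking Bradshaw's breakdown of Apple's $350B "direct contribution".
# All values in billions of USD, taken from the thread above.
supplier_spending = 55 * 5   # $55B+/year with US suppliers over five years
repatriation_tax = 38        # estimated tax bill on repatriated profits
capex = 30                   # new campus, data centres, etc.
advanced_manufacturing = 5   # advanced manufacturing fund

total = supplier_spending + repatriation_tax + capex + advanced_manufacturing
print(total)  # 348 — close to the $350B headline, which rounds up on "$55bn+"
```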
A 10% year-over-year increase in supplier spending certainly doesn’t carry the impact of that eye-popping $350 billion figure, but it’s nothing to sneeze at either.
I’m interested to see how — if — Apple’s competitors respond.
Ever since Amy Wang published these two paragraphs in the Washington Post, my Twitter timeline has been lit up with UI designers wanting to know how what is described here is possible:
Shortly after 8 a.m. local time Saturday, an employee at the Hawaii Emergency Management Agency settled in at the start of his shift. Among his duties that day was to initiate an internal test of the emergency missile warning system: essentially, to practice sending an emergency alert to the public without actually sending it to the public.
Around 8:05 a.m., the Hawaii emergency employee initiated the internal test, according to a timeline released by the state. From a drop-down menu on a computer program, he saw two options: “Test missile alert” and “Missile alert.” He was supposed to choose the former; as much of the world now knows, he chose the latter, an initiation of a real-life missile alert.
Look at this list. I mean really look at it. The link that the operator clicked is marked “PACOM (CDW) – STATE ONLY”; the one that they should have clicked is marked “DRILL – PACOM (CDW) – STATE ONLY”.
This list appears to be in no particular order. The link to initiate an internal test is not beside the one for a live, public alert, nor is it grouped with other internal tests.
The use of uppercase text is inconsistent. In some instances — “PACOM” and “CAE” — it is used for initialisms, but in others — “DRILL” and “TEST” — it is used for emphasis. In the case of the two links here, uppercase is used for both emphasis and an initialism.
On a related note, uppercase text is harder to read than mixed-case text.
Aside from the text itself, there are no visual cues in this list to differentiate a test alert from a live alert.
Without knowing how this system is built, it would be ridiculous to suggest they modernize it or create separate menus for test and live alerts. In fact, I think the simplicity of this menu is a strength, not a weakness. But there are some steps that I think the Emergency Management Agency could take to reduce the likelihood of this happening again:
Reorder the list so that test alerts are grouped together, and clearly separated from live alerts.
Clarify the use of uppercase words. Because government agencies love initialisms, those should, by default, be the only instance in which uppercase is used. All other words should be set in sentence or title case.
Differentiate test and live alerts further. If it is not possible to change their colour, perhaps it is possible to add a symbol in front, even something as simple as three exclamation marks to indicate that the alert will be sent to the public. Test alerts should also be more clear; perhaps prefacing each one with something as simple as “Internal Only:” would make it easier to understand that those alerts won’t be public.
I know I’m making it sound trivial to differentiate each kind of alert, but it isn’t — it needs to be something that’s clear in both a calm test-only environment and in an emergency.
More clearly indicate the false alarm option, as it is neither a test nor an emergency live alert. It undoes a previous live alert, and should more clearly indicate that.
As I wrote above, I don’t know what is possible within the existing emergency system; even something as apparently simple as reordering the list may require many hours of work. But I don’t think the options above are entirely unreasonable and, in conjunction with requiring that more than one person sign off on issuing an alert, would add an extra layer of safety by reducing complexity in this UI.
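To make the grouping-and-sign-off ideas above concrete, here's a minimal sketch of how such a menu model might be structured. The item labels, the "Internal Only:" prefix, and the two-operator rule are my own illustration of the recommendations, not anything from Hawaii's actual system:

```python
# A sketch of a safer alert menu: drills are grouped together and labelled,
# live alerts are visually flagged and separated, and sending a live alert
# requires a second operator's sign-off. All names are illustrative only.
from dataclasses import dataclass

@dataclass
class MenuItem:
    label: str
    live: bool  # True if this alert would actually reach the public

ITEMS = [
    MenuItem("Internal Only: DRILL - PACOM (CDW) - STATE ONLY", live=False),
    MenuItem("Internal Only: DRILL - Tsunami Warning", live=False),
    MenuItem("!!! PACOM (CDW) - STATE ONLY", live=True),
    MenuItem("!!! Tsunami Warning", live=True),
]

def render_menu(items):
    # Drills first, then a hard separator, then live alerts.
    drills = [i.label for i in items if not i.live]
    live = [i.label for i in items if i.live]
    return drills + ["--- LIVE ALERTS (sent to the public) ---"] + live

def send(item, second_operator_confirmed=False):
    # A lone operator can run a drill, but never issue a live alert.
    if item.live and not second_operator_confirmed:
        raise PermissionError("Live alerts require a second operator's sign-off.")
    return f"sent: {item.label}"
```

None of this requires a modern interface; it only changes how the existing list is ordered, labelled, and gated.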
Update: Marcel Honore of the Honolulu Civil Beat reports that the UI shown above is not, in fact, identical to the UI that an operator would see:
However, state officials now say that image was merely an example that showed more options than the employee had on the actual screen.
“We asked (Hawaii Emergency Management Agency) for a screenshot and that’s what they gave us,” Ige spokeswoman Jodi Leong told Civil Beat on Tuesday. “At no time did anybody tell me it wasn’t a screenshot.”
Honore obtained a different screenshot which, while prettier, still has the same problems as the example screenshot above.
Thanks to Kyle Dreger for pointing me to this update.
As we saw with Gawker, DNA Info/Gothamist, and now The Awl sites, independent media needs support more than ever. These voices are too important to just disappear.
All of these publications have been shuttered for different reasons. None were, as far as I know, heavily buoyed by Facebook referral traffic. But with Facebook deemphasizing publisher pages you can expect to see more independent media organizations losing staff or shutting down from a raw decline in traffic. Independent publishers — this aspiring one included — need to find a more sustainable revenue source.
Over the weekend, someone started a thread asking why an artist’s album view in Apple Music has gotten so cluttered.
To see this for yourself, pick a relatively modern artist and check out their list of albums in the Music app. For example, fire up Siri and say:
Show me all the Bruno Mars albums
When the Bruno Mars page appears, scroll down to the Albums section and tap See All. Amongst the actual Bruno Mars albums, you’ll find a lot of singles and EPs. Way more singles and EPs than actual albums, in fact.
It gets worse than that — many artists list both clean and explicit versions of each release, which means that the Album view in Apple Music is often twice the apparent size.
Mark and Kirk McElhearn put the blame on the ID3 audio metadata standard, and that’s fair: ID3 doesn’t have a field to distinguish between LPs, EPs, singles, and other release types.
The iTunes Store worked around ID3’s limitations by sorting releases by popularity — I presume — instead of reverse chronologically. Compare, for example, Kanye West’s albums on the iTunes Store and Apple Music. The iTunes Store fits one more release onscreen than Apple Music but, more importantly, everything shown on the iTunes Store is a full-length album; on Apple Music, all six releases shown at the top of the Albums screen are singles.1 I don’t know if sorting by popularity is translatable to Apple Music and its users’ listening patterns, but it is perhaps worth investigating.2
Of note, Spotify does not have this problem; it correctly separates albums and singles. I don’t know how they do this — manually, perhaps? — but it makes Apple Music look sloppy by comparison.
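Even without a release-type field in the metadata, a client could approximate the classification itself from track counts and runtimes. The thresholds below are my own guess at a plausible rule — roughly in the spirit of the iTunes "Single"/"EP" naming conventions — not Apple's or Spotify's documented criteria:

```python
# Rough release-type classifier, since ID3 has no release-type field.
# Thresholds are assumptions for illustration, not a documented standard.
def classify_release(track_lengths_sec):
    tracks = len(track_lengths_sec)
    total = sum(track_lengths_sec)
    # A handful of short tracks: almost certainly a single.
    if tracks <= 3 and all(t < 600 for t in track_lengths_sec):
        return "single"
    # A few tracks, or a short total runtime: probably an EP.
    if tracks <= 6 or total < 30 * 60:
        return "ep"
    return "album"
```

A heuristic like this would misfire on edge cases — long-form singles, short albums — which may be why Spotify's separation looks like it involves human curation.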
And five of them are collaborations where West is only a featured artist on the track. ↩︎
I’d also like to see separate clean and explicit releases consolidated in Apple Music, with a toggle at the bottom of the album page to show one or the other. Showing both and effectively treating them as separate releases just creates clutter. ↩︎
Facebook on Thursday introduced major changes to its News Feed that will prioritize content it hopes will spark meaningful conversations between friends while deprioritizing content from businesses, brands, and media. The move is widely expected to hurt publishers that rely on traffic from Facebook.
FV: Talk to me about like the evolution of this. What’s changed over the course of the past 18 months to make you feel like this is something worth doing?
AM: The biggest thing has been just the explosion of video. Video is a paradigm shift in a lot of different ways. We’ve done a lot to try and nurture it. We think video is going to continue to be a more and more important part about how people communicate with each other, and how publishers communicate with people.
But as video has grown on Facebook, it has changed the nature of how people interact with the platform in a lot of different ways. Video is, primarily, a passive experience. You tend to just sit back and watch it. And while you’re watching it, you’re not usually liking or commenting or speaking with friends. So this change is, in part, a reaction to how the ecosystem has shifted around us.
It is absolutely critical for publishers to disconnect from their reliance upon major referrers like Facebook and Google. And, yet, I’m not sure that’s realistic for a lot of major media organizations. Referral traffic remains a massive source of visitors — as Casey Newton points out, however, visitors are not the same thing as an audience. And, as I wrote six months ago, I think it’s a mistake to write off changes by referral sources as the fault of the publisher for relying upon that traffic:
As companies like Facebook and Google increasingly dominate actual publishers for how users get their news, even creating proprietary formats like Instant Articles and AMP for preferential treatment, shouldn’t their practices be scrutinized to a greater degree? Is it really fair for the rug to be pulled out from under publishers’ feet when their primary referrer decides it’s convenient for their business model? Does it make sense for the future of the worldwide digital media economy to be decided by a few young men in California? To return to the argument against publishers’ reliance upon traffic sources like Facebook and Google, is it possible to build a successful new publication without them?
Publishers shouldn’t be reliant upon Facebook and Google sending them traffic,1 but that truth also shouldn’t absolve large tech companies of their responsibility.
Joshua Topolsky of the Outline is optimistic that the hit to publishers from a lack of Facebook traffic won’t be as significant as the hit to Facebook from a lack of news posts:
Facebook, despite all its best intentions, is still just a dumb pipe — a thing that delivers, not the thing itself. The pipe must be filled up, yes, with stuff like groups you belong to and photos of new babies, yes with Messenger conversations and events and fundraisers. But information is currency, and what is valuable to most people is to know what the fuck is going on in the world and to try and understand it. That doesn’t go away because Facebook wants to keep its hands clean. It simply goes somewhere else. Even the market had a negative reaction to this news, stripping around $25 billion off the network’s market cap following the announcement. I don’t think that’s a fluke — I think Facebook doesn’t know what its product really is.
Frankly, any publisher relying on Facebook for survival fucked up. But there’s a flip side to this. There’s the opportunity for outlets willing to rely less on social networks to set their fate, publishers who have diversified their traffic sources, who have pushed back on Facebook’s News Feed carrots, who have built (or are building) brands that resonate with audiences beyond what can be bought or given. Value not gifted by Facebook could be a very good thing for publishers. (And yes, I get that I’m also talking about The Outline, which is fighting for its right to survive in a very uncertain landscape every single day.)
I hope this means a new dawn for good publications, and an awakening to build a dedicated audience instead of simply driving traffic.2 I also think that this unfairly excuses Facebook from building their business on publishers and media for years. A consequence Facebook would understand is if their active users dropped — unfortunately, even if you, I, and most of the people we know stopped using Facebook, their Borg-like dominance on the web is unparalleled. But we can make a difference in the fortunes of publishers: support them financially by subscribing.
Terrifying to think how much one rich man’s decision on what direction he wants to take his company will have an impact on people’s epistemological sense of self – how they perceive the world. It’s too much power.
In fact, publications that are entirely dependent on traffic from Facebook or Google are, typically, nothing you’d actually want to read anyway. ↩︎
Covering websites in ads that generate money solely based on the number of views and clicks likely has a significant role in this. ↩︎
Shortly after 8 a.m. local time Saturday morning, an employee at the Hawaii Emergency Management Agency settled in at the start of his shift. Among his duties that day was to initiate an internal test of the emergency missile warning system: essentially, to practice sending an emergency alert to the public without actually sending it to the public.
Around 8:05 a.m., the Hawaii emergency employee initiated the internal test, according to a timeline released by the state. From a drop-down menu on a computer program, he saw two options: “Test missile alert” and “Missile alert.”
This sounds like terrible user interface design to me. Why have the genuine “Jeez Louise! Freak out everybody!” option slap-bang next to the harmless one labelled “Test the brown alert”?
Even though the menu option still required confirmation that the user really wanted to send an alert, that wasn’t enough, on this occasion, to prevent the worker from robotically clicking onwards.
How on Earth were those buttons next to each other? And why can just one person send an alert like this to millions of people? And, finally, why weren’t the local authorities authorized to send out a retraction of this alert for thirty-eight minutes?
Both Twitter and Facebook’s selfish algorithms, optimized solely for increasing the number of hours I spend on their services, are kind of destroying civil society at the same time. Researchers also discovered that the algorithms served to divide up the world into partisan groups. So even though I was following hundreds of people on social networks, I noticed that the political pieces which I saw were nevertheless directionally aligned with my own political beliefs. But to be honest they were much… shriller. Every day the Twitter told me about something that The Other Side did that was Outrageous and Awful (or, at least, this was reported), and everyone was screeching in sync and self-organizing in a lynch mob, and I would have to click LIKE or RETWEET just to feel like I had done something about it, but I hadn’t actually done anything about it. I had just slacktivated.
What is the lesson? The lesson here is that when you design software, you create the future.
The public awakening in the past year to the more toxic and unethical effects of Silicon Valley firms, generally, is long overdue. The tech industry should have done a better job of regulating themselves for years, but they now have an opportunity to make up for their delinquency. I worry that they are incapable of doing so, and could be answering to the current U.S. administration instead.
But ultimately, Facebook is a place you go to. You can decide whether you want to visit the restaurant, or just continue throwing their flyers in the recycling bin alongside the coupon-stuffed weekly circulars and junk mail.
Google is equally needy, but feels a lot more insidious than Facebook. Unlike Facebook, Google isn’t just a place you go. It’s built into the infrastructure of your life. It’s your house. It’s the roads and sidewalks you travel on. Google is a lot more infrastructural than Facebook, which is why breaches of trust feel a lot weirder and scarier.
Turns out that if you buy a smartphone that runs an operating system made by an advertising company that loves to scoop up as much user data as it can, it’s going to endlessly nag you to provide more information to that company. That’s not to say that anyone who buys an Android phone is an idiot for expecting otherwise; on the contrary, users’ expectations should guide Google’s actions.
Also, always remember that someone actually built this stuff. There are, of course, employees in every industry who hang their souls up when they walk into their office, but very few have the kind of power and responsibility of a global tech giant.
The stagnation goes beyond C.E.S.’s scant diversity and casual sexism, extending to the products themselves, which feel like rehashed versions of the same technologies, packaged and presented in only slightly new ways. Year after year, the show produces more of the same from headlining companies: Internet-connected refrigerators (which have been around since 1998 but have failed to take off, despite their persistent presence on showroom floors); self-driving cars; and virtual- and augmented-reality technology. It’s telling that the most interesting thing that has happened so far this year was the show’s complete loss of power on Wednesday, which offered a brief, terrifying preview of the sort of Stone Age hysteria we can expect if the Internet of Things ever takes down the power grid.
Ben Bajarin published a decent piece today about Apple’s fading influence at CES. He has theories on why that may be, like Amazon’s Alexa devices dominating the smart speaker space, and a more mature consumer electronics market. But I have another theory: maybe CES is full of companies trying to carve their own little space with expensive gadgets that don’t work well and, ultimately, are of little relevance to what consumers will actually want or buy. Sure, there were plenty of products shown that work with Apple’s ecosystem — mostly HomeKit — but so much of what is shown at CES is just gadgetry for the sake of gadgetry. Does it matter how much Apple’s influence is felt at a showcase of stuff that’s mostly irrelevant?
When asked about the move to sell a third-party mesh system and the future of the AirPort line, an Apple spokesperson shared this with 9to5Mac:
People love our AirPort products and we continue to sell them. Connectivity is important in the home and we are giving customers yet another option that is well suited for larger homes.
Apple’s choice for that option is the Linksys Velop Whole Home Mesh Wi-Fi System which comes in two flavors: $350 for a 2-pack system or $500 for a 3-pack solution. The Tri-Band Wi-Fi system is rated to provide coverage for 2,000 square feet with each Node which can be configured from the Linksys iPhone and iPad app.
There are non-answers, and then there are Apple-grade non-answers. That statement confirms that WiFi is basically an expectation these days — duh — and that they are presently selling their AirPort lineup. More telling, though, is what they don’t say: there’s no confirmation that they’re even remotely interested in continuing to offer their own base station, which is, remarkably, even less commitment than they made to updating the Mac Mini.
In addition to this move, Mark Gurman reported in 2016 that Apple had disbanded the AirPort team, and I’ve heard thirdhand that no updates are planned.1 I’m convinced that the AirPort lineup is dead and will quietly be removed from Apple’s store and website in the not-too-distant future.
I’ve been trying to work out what’s the future for Time Capsule then? iCloud?
I think something like Time Machine in the Cloud is a reasonable guess. I could also see more third-party routers supporting Time Machine via a USB-connected hard drive — apparently, some Netgear and Asus routers have done so for a while.
Update: A reader email reminded me that Apple took at least two months to patch their base station products to protect against a significant WiFi vulnerability. iOS and MacOS were updated within two weeks. I don’t know if the thirdhand information I have is right, of course, but the general thrust of the reports I’ve seen and moves Apple has made when it comes to their AirPort lineup strongly suggests that they’re not interested in the WiFi router market much longer.
They haven’t even bothered to update the iOS app with support for the iPhone X’s display. ↩︎
It all started with an Instagram ad for a coat, the West Louis (TM) Business-Man Windproof Long Coat to be specific. It looked like a decent camel coat, not fancy but fine. And I’d been looking for one just that color, so when the ad touting the coat popped up and the price was in the double-digits, I figured: hey, a deal!
The brand, West Louis, seemed like another one of the small clothing companies that has me tagged in the vast Facebook-advertising ecosystem as someone who likes buying clothes: Faherty, Birdwell Beach Britches, Life After Denim, some wool underwear brand that claims I only need two pairs per week, sundry bootmakers.
Several weeks later, the coat showed up in a black plastic bag emblazoned with the markings of China Post, that nation’s postal service. I tore it open and pulled out the coat. The material has the softness of a Las Vegas carpet and the rich sheen of a velour jumpsuit. The fabric is so synthetic, it could probably be refined into bunker fuel for a ship. It was, technically, the item I ordered, only shabbier than I expected in every aspect.
Madrigal’s a smart guy, so I’m not sure I buy the idea that he thought he could get anything better than H&M quality for H&M-like prices. But it’s pretty incredible to me that almost anyone with a few hours to spare every day could conceivably run a convincing-looking boutique online with no held inventory, no unique products of its own, and little risk. This sort of thing fascinates me — partly because I find fast fashion brands generally objectionable, but also because of how inventive it is. The scheme Madrigal describes is the product of relatively accessible technologies that simply weren’t available not that long ago.
Younger and more alert shoppers have already cottoned on to this scheme, though, and are bypassing the Shopify storefront to shop directly from AliExpress. In the haul video genre on YouTube, there are over half a million results for AliExpress shopping sprees. For comparison, there are a little over a million results for each “H&M haul” and “Zara haul”, and less than 200,000 results for each “Abercrombie haul”, “Hollister haul”, and “Lululemon haul”.
Earlier this month, Apple launched a new version of its mobile operating system, iOS 11.2, which disables the solution that some companies in the advertising ecosystem, including Criteo, currently use to reach Safari users. As a result, we believe the projected 9%-13% ITP net negative impact on Criteo’s 2018 Revenue ex-TAC relative to our pre-ITP base case projections, communicated on November 1, 2017, is no longer valid.
We are focused on developing an alternative sustainable solution for the long term, built on our best-in-class user privacy standards, aligning the interests of Apple users, publishers and advertisers. This solution is still under development and its effectiveness cannot be assessed at this early stage. Should it not mitigate any ITP impact, we believe the ITP net negative impact on Criteo’s 2018 Revenue ex-TAC, relative to our pre-ITP base case projections, would become approximately 22%.
Internet advertising firms are losing hundreds of millions of dollars following the introduction of a new privacy feature from Apple that prevents users from being tracked around the web.
With [Criteo’s] annual revenue in 2016 topping $730m, the overall cost of the privacy feature on just one company is likely to be in the hundreds of millions of dollars.
We acknowledge the problem of Web pages being slow to load, relative to alternative, proprietary technologies such as Facebook Instant Articles and Apple News. Publishers (especially in news media) have long faced difficult choices and poor incentives, leading to bad decisions and compromises, and ultimately to terrible user experiences.
Search engines are in a powerful position to wield influence to solve this problem. However, Google has chosen to create a premium position at the top of their search results (for articles) and a “lightning” icon (for all types of content), which are only accessible to publishers that use a Google-controlled technology, served by Google from their infrastructure, on a Google URL, and placed within a Google controlled user experience.
After an entire year of speculation about whether Apple or Samsung might integrate the fingerprint sensor under the display of their flagship phones, it is actually China’s Vivo that has gotten there first. At CES 2018, I got to grips with the first smartphone to have this futuristic tech built in, and I was left a little bewildered by the experience.
The mechanics of setting up your fingerprint on the phone and then using it to unlock the device and do things like authenticate payments are the same as with a traditional fingerprint sensor. The only difference I experienced was that the Vivo handset was slower — both to learn the contours of my fingerprint and to unlock once I put my thumb on the on-screen fingerprint prompt — but not so much as to be problematic. Basically, every other fingerprint sensor these days is ridiculously fast and accurate, so with this being newer tech, its slight lag feels more palpable.
The technology here is impressive, but it is an iteration on a security solution that has been eclipsed by accurate facial recognition that isn’t dependent on ambient lighting conditions. For iPhone users who aren’t convinced by facial recognition, I know it’s only fair to compare this familiar technology against today’s version of Face ID, and to want to see both in a future iPhone model.
I don’t think that scenario is likely. There are shortcomings with Face ID today — it’s unreliable at very close range, some sunglasses don’t work with it, and it can’t recognize faces through facial coverings — but the next iPhone is likely to feature improvements to Face ID, not a duplicative authentication mechanism. From my limited perspective, it seems more efficient for Apple to use their engineering talent to make progress on Face ID rather than trying to integrate both.
Is it just me or are those daily upgrade notifications for upgrading to macOS High Sierra annoying the bleep out of you? Every time I turn on my MacBook (2017,) it immediately starts up with that exasperating High Sierra notice to upgrade to High Sierra so I can “enjoy the latest technologies and refinements.” And it’s even popping up on my iMac (2015 with Fusion Drive,) that Apple itself recommends NOT updating to High Sierra. And I really DON’T want to upgrade to macOS High Sierra right now on any of my Macs!
Unfortunately, Apple is only supporting fixes and mitigations for Meltdown and Spectre in High Sierra, contra their original statement. MacOS updates have generally been less impactful since Yosemite,1 but there are still lots of reasons why users may be reluctant to upgrade to a major new OS version. While Apple’s developer site displays a pie chart indicating iOS version market share, I can find no such official chart for MacOS market share. As of December, though, MacOS Sierra was still more widely used than High Sierra, and El Capitan isn’t that far behind, according to StatCounter. For serious security vulnerabilities, Apple should strongly consider issuing patches for previous widely-used system versions.
And, just as importantly, have improved older hardware compatibility. ↩︎
Good move. Perhaps they will also take the opportunity to merge other duplicative products and services, like their mess of messaging apps, or their two music streaming services, YouTube Music and Google Play Music.
Two announcements last week paint sharply contrasting views of the state of the App Store for developers. On Thursday, Apple announced record-shattering App Store revenue over the Christmas holiday week:
App Store customers around the world made apps and games a bigger part of their holiday season in 2017 than ever before, culminating in $300 million in purchases made on New Year’s Day 2018. During the week starting on Christmas Eve, a record number of customers made purchases or downloaded apps from the App Store, spending over $890 million in that seven-day period.
“We are thrilled with the reaction to the new App Store and to see so many customers discovering and enjoying new apps and games,” said Phil Schiller, Apple’s senior vice president of Worldwide Marketing. “We want to thank all of the creative app developers who have made these great apps and helped to change people’s lives. In 2017 alone, iOS developers earned $26.5 billion — more than a 30 percent increase over 2016.”
At the bottom of the press release, Apple says that developers have earned a total of over $86 billion from the App Store. What’s really remarkable is the App Store’s rate of growth: over 30 percent of all the revenue developers have earned since its launch came in the past year alone.
Transmit iOS made about $35k in revenue in the last year, representing a minuscule fraction of our overall 2017 app revenue. That’s not enough to cover even a half-time developer working on the app. And the app needs full-time work — we’d love to be adding all of the new protocols we added in Transmit 5, as well as some dream features, but the low revenue would render that effort a guaranteed money-loser. Also, paid upgrades are still a matter of great debate and discomfort in the iOS universe, so the normally logical idea of a paid “Transmit 2 for iOS” would be unlikely to help. Finally, the new Files app in iOS 11 overlaps a lot of file-management functionality Transmit provides, and feels like a more natural place for that functionality. It all leads to one hecka murky situation.
As Sasser points out, there are lots of reasons why Transmit may not have been successful enough to pay for its development. Perhaps it was too niche, but Sasser also says that Prompt — Panic’s SSH client — is doing fine. Perhaps its niche is better served by Coda for iOS, which supports the same file transfer protocols as Transmit, but also includes a full website editor. Maybe people using iOS devices — even iPads — don’t really want to use a file transfer app in isolation.
Therefore, sad as it is, I don’t think that Panic’s announcement is necessarily an indictment of the economics of the App Store on its own; but, it is a reminder of that nagging feeling I’ve long had that the environment of the App Store is, for whatever reason, not conducive to smaller developers.
It’s not just independent developers of utility apps that are struggling in the App Store, either. Simogo, a two-person game development studio that built its business on iOS games beginning in 2010, announced last month that their next game would be for consoles after a frustrating 2017:
This year, a lot of time we had planned to spend on our current project, ended up being spent on just making sure that our games would not be gone from the app store. Because sadly, the platform holder seems to have no interest in preservation of software on their platform. We can criticize and be angry and mad about it all we want, but we don’t think that any efforts we put in can change that direction. So, instead, we’re thinking a lot about how we can find ways to preserve our games, and our own history, because it is inevitable that our mobile games will be gone sometime in a distant, or not so distant future, as iOS and the app store keeps on changing and evolving. We don’t have a definitive answer, or any final ideas how this would be possible, but we’ll keep on thinking about it, and try to come up with solutions, and we welcome any input and ideas on this from you too!
And, though these criticisms have often originated from smaller developers, there’s evidence that the App Store is also frustrating for even the most recognizable of companies. Lukas Mathis:
Maybe Apple makes too many changes every year, and developers simply can’t keep up with those changes and add new capabilities. Maybe some developers are supporting their apps on more platforms than they are actually capable of. Maybe users too frequently demand that developers build and rapidly update apps for all of Apple’s platforms for free. Of course, it’s probably a combination of these factors and plenty more besides.1
I don’t think it would be fair to point out these criticisms of the App Store without also pointing to areas where Apple has made attempts at improvement. Two years ago, Phil Schiller was put in charge of the App Store; a few months later, the average amount of time an app spent in review dropped from a week to just two days. In iOS 11, Apple debuted a new version of the App Store that separated games from other types of apps, and introduced a news-like Today tab that spotlights all kinds of apps.
But something is clearly still not right in the App Store economy if developers are finding it as difficult as they are — generally speaking — to make a living building apps for one of the world’s biggest platforms. Making progress on this, I think, ought to be one of Apple’s highest priorities this year. 2018 marks the tenth anniversary of the App Store and, while they may generally be averse to marking historical milestones, it would be a shame if independent developers had less hope of a successful career this year than they did in 2008. Based solely on the revenue and growth Apple announced last Thursday, there should be hope for developers. The giant pool of money is clearly there; unfortunately, smaller developers simply aren’t seeing enough of it. Whether that change must start with things Apple controls, or developers, or users, I don’t know, but it would be a shame if the App Store becomes the place for virtually all users to download Facebook Messenger, Google Maps, and a manipulative game — and that’s it.
Would smaller developers make a lot more money if Apple’s cut of App Store revenue worked more like a progressive tax policy instead of a flat rate? ↩︎
There’s been a lot of discussion about political figures and world leaders on Twitter, and we want to share our stance.
No, there has been a lot of discussion about a world leader on Twitter.
Twitter is here to serve and help advance the global, public conversation. Elected world leaders play a critical role in that conversation because of their outsized impact on our society.
Blocking a world leader from Twitter or removing their controversial Tweets would hide important information people should be able to see and debate. It would also not silence that leader, but it would certainly hamper necessary discussion around their words and actions.
The Washington, D.C.-based Internet Association specifically plans to join a lawsuit as an intervening party, aiding the challenge to FCC Chairman Ajit Pai’s vote in December to repeal regulations that required internet providers like AT&T and Comcast to treat all web traffic equally, its leader confirmed to Recode.
Technically, the Internet Association isn’t filing its own lawsuit. That task will fall to companies like Etsy, public advocates like Free Press and state attorneys general, all of which plan to contend they are most directly harmed by Pai’s decision, as Recode first reported this week.
As an intervener, though, the Internet Association still will play a crucial role, filing legal arguments in the coming case. And in formally participating, tech giants will have the right to appeal a judge’s decision later if Silicon Valley comes out on the losing end.
The Internet Association’s members include Amazon, Facebook, Google, Microsoft, Netflix, and Twitter. That’s a lot of weight to be thrown behind someone else’s legal action; but, with powerful companies like those as members, I don’t see any reason why the Association couldn’t file its own suit. They should.
Amid unceasing criticism of Facebook’s immense power and pernicious impact on society, its CEO, Mark Zuckerberg, announced Thursday that his “personal challenge” for 2018 will be “to focus on fixing these important issues”.
Zuckerberg’s new year’s resolution – a tradition for the executive who in previous years has pledged to learn Mandarin, run 365 miles, and read a book each week – is a remarkable acknowledgment of the terrible year Facebook has had.
“Facebook has a lot of work to do – whether it’s protecting our community from abuse and hate, defending against interference by nation states, or making sure that time spent on Facebook is time well spent,” Zuckerberg wrote on his Facebook page. “We won’t prevent all mistakes or abuse, but we currently make too many errors enforcing our policies and preventing misuse of our tools.”
To be fair, that’s more than Jack Dorsey has pledged to do this year at the raging dumpster fire that he’s ostensibly in charge of.
The T2 processor isn’t doing the heavy lifting in the iMac Pro—that’s the Intel Xeon processor with between 8 and 14 processor cores. The T2 is the brain behind that brain, running the subsystems of the iMac Pro from a single piece of Apple-built silicon. The result is a simplified internal design that doesn’t require multiple components from multiple manufacturers.
On most Macs, there are discrete controllers for audio, system management and disk drives. But the T2 handles all these tasks. The T2 is responsible for controlling the iMac Pro’s stereo speakers, internal microphones, and dual cooling fans, all by itself.
This is a great look at what the T2 does in the iMac Pro. It’s notable just how different it is compared to the T1’s functionality in recent MacBook Pro models.
I also collected a few tidbits about it last month after the first press and user previews of the iMac Pro began appearing around the web, and they — combined with Snell’s piece — paint an interesting picture about the future of the Mac. It sounds like there are plenty of additional tasks that could, at some point, be enabled by Apple’s custom silicon in their desktop and notebook products. In an article last year for Bloomberg, Mark Gurman and Ian King suggested that Power Nap could be made more efficient by porting it to Apple’s custom silicon. And, of course, “Hey, Siri” is likely to be coming to the Mac with a future MacOS update.
As such, we engage in this endless tug of war depending on how grossly-beholden the current FCC regulators are to regional telecom duopolies. Regulators not blindly loyal to giant ISPs will usually try to raise the bar to match modern needs, as Tom Wheeler did when he bumped the standard definition of broadband to 25 Mbps down, 4 Mbps up back in 2015. Revolving door regulators in turn do everything in their power to manipulate or ignore real world data so that the industry’s problems magically disappear.
Case in point: the FCC is expected to vote in February on a new proposal that would dramatically weaken the standard definition of broadband. Under the current rules, you’re not technically getting “broadband” if your connection is slower than 25 Mbps down, 4 Mbps up. Under Pai’s new proposal, your address would be considered “served” and competitive if a wireless provider is capable of offering 10 Mbps down, 1 Mbps up to your area. While many people technically can get wireless at these speeds, rural availability and geography make true coverage highly inconsistent.
This move, like most of the others made by Ajit Pai and the rest of the Republicans running the FCC, is indefensible. Who, anywhere, thinks that a lower standard for what constitutes “broadband” is what we need at a time when higher-bandwidth consumer services are growing?
Of course, in the near future, you can bet the FCC will begin touting how rapidly and widely they expanded broadband access under this administration.
The two problems, called Meltdown and Spectre, could allow hackers to steal the entire memory contents of a computer. There is no easy fix for Spectre, which could require redesigning the processors, according to researchers. As for Meltdown, the software patch needed to fix the issue could slow down computers by as much as 30 percent — an ugly situation for people used to fast downloads from their favorite online services.
According to the researchers, including security experts at Google and various academic institutions, the Meltdown flaw affects virtually every microprocessor made by Intel, which makes chips used in more than 90 percent of the computer servers that underpin the internet and private business operations.
The other flaw, Spectre, affects most other processors now in use, though the researchers believe this flaw is more difficult to exploit. There is no known fix [for] it.
“I think 5 percent for a load with a noticeable kernel component (eg, a database) is roughly in the right ballpark,” he said. “But if you do micro-benchmarks that really try to stress it, you might see double-digit performance degradation.”
Leaving aside the brilliance of the people that found this Intel bug, may I submit that perhaps coining threat names and invoking cute icons is a gratuitous and disingenuous way to get people to care about an impossibly arcane flaw that they in all likelihood can’t do much about?
I’ve flitted between whether giving bugs names and logos is helpful or harmful. The KRACK WiFi bug was disclosed on the same day last year as a potentially more harmful flaw in Infineon’s RSA key generation library, but the latter didn’t have a catchy name:
I get why security researchers are dialling up the campaigns behind major vulnerabilities. CVE numbers aren’t interesting or explanatory, and the explanations that are attached are esoteric and precise, but not very helpful for less-technical readers. A catchy name gives a vulnerability — or, in this case, a set of vulnerabilities — an identity, helps educate consumers about the risks of having unpatched software, and gives researchers an opportunity to take public credit for their work. But, I think the histrionics that increasingly come with these vulnerabilities somewhat cheapens their effect, and potentially allows other very serious exploits to escape public attention.
In this case, these are very serious bugs: it’s possible to exploit them in relatively passive ways, the effects can be very damaging, and — as far as Spectre goes — there’s no way to fix it without a complete change in processor design. If these bugs had remained as CVE numbers, it’s unlikely that many people outside of the computer security world would know about them.
But does that matter? As far as I can figure out, there’s no proof that these branding efforts encourage consumers or software vendors to update their software any quicker. And, as noted above, there’s nothing consumers can do about the Spectre vulnerabilities until they buy a new computer or phone — and perhaps not for another generation or two. The branding of vulnerabilities has, absolutely, made the efforts of security researchers more notable, and there is a reasonable argument to be made for the value of that; it also makes everyone more aware that the technology they rely upon is not as secure as we want to believe it is.
Duke University’s Center for the Study of the Public Domain:
Current US law extends copyright for 70 years after the date of the author’s death, and corporate “works-for-hire” are copyrighted for 95 years after publication. But prior to the 1976 Copyright Act (which became effective in 1978), the maximum copyright term was 56 years—an initial term of 28 years, renewable for another 28 years. Under those laws, works published in 1961 would enter the public domain on January 1, 2018, where they would be “free as the air to common use.” Under current copyright law, we’ll have to wait until 2057. And no published works will enter our public domain until 2019. The laws in other countries are different—thousands of works are entering the public domain in Canada and the EU on January 1.
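The term arithmetic in the excerpt works out as a trivial sketch — the only assumption is that the pre-1976 renewal term was actually exercised, as the excerpt itself assumes:

```typescript
// A work enters the public domain on January 1 of the year after its
// copyright term expires.
function publicDomainYear(published: number, termYears: number): number {
  return published + termYears + 1;
}

// Pre-1976 Act: a 28-year initial term, renewable once for another 28.
console.assert(publicDomainYear(1961, 28 + 28) === 2018);
// Current law for corporate works-for-hire: 95 years from publication.
console.assert(publicDomainYear(1961, 95) === 2057);
```

That 39-year gap — 2018 versus 2057 — for the very same 1961 works is the entire effect of the term extensions.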
The good news is that this is, theoretically, the last year since 1998 where no new works will enter the public domain in the United States. The bad news is that you can bet that Disney will — as ever — lobby hard to extend that public domain drought for several more years.
Last week, a post caught fire alleging that there was a major design flaw in Apple’s new Chicago store: a large amount of snow built up on its roof, causing the area around the store to be closed off for safety reasons. Nick Statt of the Verge transformed this observation into an assertion that “Apple’s flagship Chicago retail store wasn’t designed to handle snow”. That would be a major oversight for Apple and Foster + Partners, which designed the store, but both companies have buildings located in snowy regions.
According to an Apple spokesperson, though, the cause was a technical malfunction in the roof heating system, which was installed to prevent snow buildup.
I get that stories about Apple tend to attract a gravitas that is associated with few other companies. Despite being the most valuable publicly-traded company in the world, they are seemingly always teetering on the brink. The story about the Chicago store’s roof came in at the tail end of a river of honestly-earned negative press for Apple: a series of pretty nasty bugs, a delayed HomePod release, and poorly-communicated device throttling when a recent iPhone’s battery has degraded.
In the rush to report problems and apparent controversies, though, it’s worth taking a step back and exercising skepticism. Could there have been another reason for the buildup of snow on the Chicago store’s roof? Adam Selby recognized that the roof could be heated, for example.
Rene Ritchie of iMore wrote about the biggest problems facing Apple in 2018:
Apple gets told it’s wrong all the time. Doesn’t matter if it’s iPhone or AirPods. The minute Apple announces anything new or different, some percentage of coverage and customers race to tell the company how limited, expensive, and just plain stupid it is. Then, more often than not, a few or many months later, that product breaks records in sales and satisfaction, and goes on to lead the industry for years to come.
When you’re told you’re wrong over and over again only to be proven right over and over again, you stop paying attention. You begin to think that if you just weather the initial storm, everyone will inevitably come to see what you saw, and then you can move forward together. You can get on with making faster cars.
But even if that’s true nine out of ten times — even 99 out of 100 times — there are those few times when it’s not true. When it’s just flat out wrong. And you never see it coming.
This is Apple’s risk when experimenting with each update, but it is also a risk for users and writers when controversy is seen where none exists. If everything is a top-priority grade-A indication of Apple’s failings, then nothing is.
Some industry leaders and lawmakers thought September’s revelation of the massive intrusion — which took place months after the credit reporting agency failed to act on a warning from the Homeland Security Department — might be the long-envisioned incident that prompted Congress to finally fix the country’s confusing and ineffectual data security laws.
Instead, the aftermath of the breach played out like a familiar script: white-hot, bipartisan outrage, followed by hearings and a flurry of proposals that went nowhere. As is often the case, Congress gradually shifted to other priorities — this time the most sweeping tax code overhaul in a generation, and another mad scramble to fund the federal government.
If you think those invested in Equifax’s trustworthiness and reputability would have punished the company, there’s bad news there, too: the stock has regained over 50% of the value it lost in the days after Equifax announced that they had been breached, and it’s basically flat compared to the same time last year.
The mountain of lawsuits directed at Equifax is, sadly, the biggest chance consumers have at getting the company to pay for their incompetence — “sad” because these lawsuits are very expensive and, had proactive legislation been in place already, completely unnecessary.
The underlying vulnerability of login managers to credential theft has been known for years. Much of the past discussion has focused on password exfiltration by malicious scripts through cross-site scripting (XSS) attacks. Fortunately, we haven’t found password theft on the 50,000 sites that we analyzed. Instead, we found tracking scripts embedded by the first party abusing the same technique to extract email addresses for building tracking identifiers.
The image above shows the process. First, a user fills out a login form on the page and asks the browser to save the login. The tracking script is not present on the login page. Then, the user visits another page on the same website which includes the third-party tracking script. The tracking script inserts an invisible login form, which is automatically filled in by the browser’s login manager. The third-party script retrieves the user’s email address by reading the populated form and sends the email hashes to third-party servers.
The plugins focus largely on the usernames, but according to the researchers, there’s no technical measure to stop scripts from collecting passwords the same way. The only robust fix would be to change how password managers work, requiring more explicit approval before submitting information.
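The technique the researchers describe can be sketched in a few lines of browser-side code. To be clear, this is my own illustrative reconstruction, not the study’s actual script: the endpoint URL, the one-second delay, and the choice of SHA-256 are all assumptions.

```typescript
// Sketch of the invisible-form technique described above. Hypothetical
// endpoint and digest; real trackers reportedly send MD5/SHA hashes.
function harvestAutofilledEmail(): void {
  // 1. Inject a login form the user never sees.
  const form = document.createElement("form");
  form.style.display = "none";
  const email = document.createElement("input");
  email.type = "email";
  email.name = "email";
  const password = document.createElement("input");
  password.type = "password";
  password.name = "password";
  form.append(email, password);
  document.body.appendChild(form);

  // 2. Give the browser's login manager a moment to autofill the form,
  //    then hash the harvested address and beacon it to a third party.
  setTimeout(async () => {
    if (!email.value) return;
    const bytes = new TextEncoder().encode(email.value.trim().toLowerCase());
    const digest = await crypto.subtle.digest("SHA-256", bytes);
    const hex = Array.from(new Uint8Array(digest))
      .map((b) => b.toString(16).padStart(2, "0"))
      .join("");
    navigator.sendBeacon("https://tracker.example/collect", hex);
  }, 1000);
}
```

Because the hash of an email address is stable across sites and devices, it makes a durable tracking identifier — which is exactly why hashing it offers the user no meaningful privacy protection here.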
I’m not sure if I’ve come across these scripts specifically, but on a few occasions, I have been surprised to see a Face ID indicator appear while visiting a website, without explicitly tapping in a login form. I appreciate automatically-filled forms, but I do wish browsers would ask my permission first before handing over my email address and password.
Also, I think it’s worth pointing out how deliberate this is on the part of the trackers in question. Someone had to write the code to track users in this manner. Moreover, someone who manages them had to approve of this tracking mechanism. I can think of no circumstance under which someone could consider this kind of tracking ethical or morally sound.
To address our customers’ concerns, to recognize their loyalty and to regain the trust of anyone who may have doubted Apple’s intentions, we’ve decided to take the following steps:
Apple is reducing the price of an out-of-warranty iPhone battery replacement by $50 — from $79 to $29 — for anyone with an iPhone 6 or later whose battery needs to be replaced, starting in late January and available worldwide through December 2018. Details will be provided soon on apple.com.
Early in 2018, we will issue an iOS software update with new features that give users more visibility into the health of their iPhone’s battery, so they can see for themselves if its condition is affecting performance.
As always, our team is working on ways to make the user experience even better, including improving how we manage performance and avoid unexpected shutdowns as batteries age.
Apple’s PR strategy tends to err on the side of silence. This time, that bit them in the ass, at least initially; but, this is a fantastic response. To my eyes, it ticks the boxes of every complaint I had, with the exception of its timeliness: I don’t see anything here — with the possible exception of the reduction in battery replacement costs — that Apple could not have said back when they first released the 10.2.1 software update.
But after ten years, it’s fair to say that the smartphone has become commoditized. The feature set of this device is essentially limited, and there aren’t many new bells and whistles that can be added. So smartphone manufacturers focus on two areas: the camera, since many smartphone owners buy a phone in part to have a good or better camera, and details, including things like security features (Touch ID and Face ID), displays, water resistance, and more. None of these latter features are “killer” features, they are all incremental enhancements. Gone is the day when a new device added, say, the ability to play videos, or faster network access. All the essential features are there. (To be fair, the new iPhone adds augmented reality, but this technology is still too young for this to be a killer feature.)
Since the announcement of the iPhone X, as a “second” iPhone line, I have been thinking that Apple would keep the “number” iPhone for another generation – iPhone 9 and 9 plus – and release the iPhone X2, before moving all iPhones to the “X” line. They would be able to refine the new interface used to control the iPhone (see the Daring Fireball article linked above for more on the difference in iOS of the iPhone X), and slowly phase it in. But at $200-$300 more than the “number” iPhone, plus a steeper cost for AppleCare, this is a luxury item.
The pricing of the iPhone X is interesting — and I’ll get back to that — but I’m not sure McElhearn is right about the features common to pretty much any smartphone commodifying the market. There are still basic features implemented poorly, even on expensive devices. The $699 Essential Phone shipped with a terrible camera app and the $799 Pixel 2 XL has an abysmal display, so there’s still plenty of growth that can happen in the market. Yes, it’s a far more mature market with a higher level of base expectations than, say, five years ago, but it isn’t like we’re swimming in a sea of inexpensive smartphones with excellent screens, cameras, battery life, and apps.
There are also network improvements right around the corner that could mean faster speeds and lower latency. These sorts of improvements could unlock as-yet-unforeseen capabilities that may comfortably qualify as “killer” features — we just don’t know yet.
From a pricing standpoint, though, the iPhone X can be compared to the rest of the iPhone lineup in a similar way to the first Retina MacBook Pro and the rest of Apple’s laptop lineup. A standard 15-inch MacBook Pro started at $1,799 in the United States before and after the introduction of the Retina model; the Retina model started at $400 more, but came with an SSD, twice the RAM, and a double-resolution display. A few months after the 15-inch model was released, a 13-inch Retina MacBook Pro was launched at a price premium of $500 over the standard 13-inch model, at $1,699.
Over time, the Retina MacBook Pro eventually became the only model, but still at a price premium over the outgoing models. For example, the 13-inch Retina MacBook Pro model has started at $1,299 for several years, $200 more than the standard 13-inch model when it was most recently available. The Retina model is, of course, a much better computer, but there’s now a higher barrier to entry. The 15-inch model, meanwhile, now starts at $2,399 — $200 more than the previous starting price for the 15-inch Retina model, and a huge $600 more than the starting price for the non-Retina 15-inch MacBook Pro. Again, aside from the foibles of the most recent MacBook Pro models, I can’t imagine anyone would choose a non-Retina model over anything in the current lineup; but, all of the new MacBook Pros cost a lot more money.
I made a similar argument as McElhearn in July, when rumours of the thousand-dollar iPhone price point were brewing, and I wonder where pricing goes from here across Apple’s lineup. Perhaps the introduction of higher pricing tiers with next-generation features gives Apple room to introduce lower-cost products.1 Apple has long been a company that offers accessible premium goods, and I hope that more luxurious products aren’t the new standard.
For example, I think the more-expensive iPad Pro lineup helped make way for the $329 iPad. ↩︎
As usual, Michael Tsai has put together a definitive reference. Lots of great articles in here, including one from Andrei Frumusanu of Anandtech:
The first unique characteristic separating Apple iPhones from other smartphones is that Apple is using a custom CPU architecture that differs a lot from those of other vendors. It’s plausible that the architecture is able to power down and power up in a much more aggressive fashion compared to other designs and as such has stricter power regulation demands. If this is the case, then another question arises: if this is indeed just a transient load issue, why was the power delivery system not designed to be robust enough to cope with such loads at more advanced levels of battery wear? While cold temperature and advanced battery wear are understandable conditions under which a device might not be able to sustain its normal operating conditions, the state of charge of a battery under otherwise normal conditions should be taken into account during the design of a device (battery, SoC, PMIC, decoupling capacitors) and its operating tolerances.
If the assumptions above hold true then logically the issue would also be more prevalent in the smaller iPhone as opposed to the iPhone Plus models as the latter’s larger battery capacity would allow for greater discharge rates at a given stable voltage. This explanation might also be one of many factors as to why flagship Android and other devices don’t seem to exhibit this issue, as they come with much larger battery cells.
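Frumusanu’s transient-load argument comes down to a simple relationship: the voltage a cell can actually deliver is roughly its open-circuit voltage minus the load current times its internal resistance, and internal resistance climbs as a battery wears or gets cold. A toy model makes the shutdown mechanism concrete — every number here is an illustrative assumption, not a measured value:

```typescript
// Terminal voltage under load: V = V_oc − I · R_int.
function loadedVoltage(vOpenCircuit: number, currentA: number, rInternalOhm: number): number {
  return vOpenCircuit - currentA * rInternalOhm;
}

const BROWNOUT_V = 3.4; // hypothetical cutoff below which the PMIC shuts the phone down
const PEAK_A = 3.0;     // hypothetical transient peak draw from the SoC

// A worn cell with doubled internal resistance sags further under the
// same peak draw, dipping below the brownout threshold.
const freshCell = loadedVoltage(3.8, PEAK_A, 0.10); // 3.5 V — fine
const wornCell = loadedVoltage(3.8, PEAK_A, 0.20);  // 3.2 V — shutdown territory
```

The same model explains the Plus-sized advantage: a larger cell supplies the same power at a lower per-cell current (or, equivalently, a lower C-rate), so the I·R sag is smaller for an identical SoC load spike.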
And a fair point from Tsai himself:
Lastly, how long should we expect a phone to last? Especially one like the iPhone X? With higher prices, the move away from carrier contracts, and diminishing returns for the camera and other new features, it seems natural that people will want to keep their phones longer. But that seems totally at odds with the design and battery choices Apple is making.
On that note, there seems to be some confusion about whether fast charging impacts long-term battery life and degradation. Rene Ritchie says that it does; John Gruber asked Apple and they said it doesn’t.
In hindsight, I think I was too nice in my first piece on this. What I wrote yesterday was that “I don’t think they communicated this very well”. What I should have written was that they didn’t communicate this at all on the record, and that’s not acceptable. I still think that reducing CPU performance is a reasonable choice to make, but perhaps it’s a choice they had to make because of other decisions, like the balance of battery capacity to maximum CPU power draw.
For a decade, some security professionals have held out extended validation certificates as an innovation in website authentication because they require the person applying for the credential to undergo legal vetting. That’s a step up from less stringent domain validation that requires applicants to merely demonstrate control over the site’s Internet name. Now, a researcher has shown how EV certificates can be used to trick people into trusting scam sites, particularly when targets are using Apple’s Safari browser.
Researcher Ian Carroll filed the necessary paperwork to incorporate a business called Stripe Inc. He then used the legal entity to apply for an EV certificate to authenticate the Web page https://stripe.ian.sh/. When viewed in the address bar, the page looks eerily similar to https://stripe.com/, the online payments service that also authenticates itself using an EV certificate issued to Stripe Inc.
Let’s look at the user interfaces of browsers. On Safari, the URL is completely hidden! This means the attacker does not even need to register a convincing phishing domain. They can register anything, and Safari will happily cover it with a nice green bar. The below screenshot is from this site. Hard to tell, right?
With Chrome, the story is slightly better, but only if you bother to look at the full URL. Chrome has no native way to view anything other than the company name and country of the certificate. Newer versions of Chrome will open the system certificate viewer with two mouse clicks (older versions completely removed viewing the certificate), but the system certificate viewer is useless for any normal user.
By default, Safari will only show the company name in the address bar when a website is loaded with an extended validation certificate; users can reveal the company name beside the URL by opening Safari preferences and checking the “Show full website address” box under the Advanced tab.
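The core of Carroll’s demonstration is that an EV certificate’s organization name is not unique, while the domain is. A minimal Python sketch makes the ambiguity concrete; the subject data below is shaped like what `ssl.getpeercert()` returns, but the exact field values are illustrative stand-ins, not the real certificates:

```python
# Two EV certificate subjects, structured the way Python's
# ssl.getpeercert() reports them. Illustrative values mirroring
# Carroll's demo: both legal entities are named "Stripe, Inc".
real_stripe = {
    "subject": ((("organizationName", "Stripe, Inc"),),
                (("commonName", "stripe.com"),)),
}
lookalike = {
    "subject": ((("organizationName", "Stripe, Inc"),),
                (("commonName", "stripe.ian.sh"),)),
}

def subject_field(cert, field):
    """Extract one field from the nested tuples ssl uses for subjects."""
    for rdn in cert["subject"]:
        for key, value in rdn:
            if key == field:
                return value
    return None

same_org = (subject_field(real_stripe, "organizationName")
            == subject_field(lookalike, "organizationName"))
same_domain = (subject_field(real_stripe, "commonName")
               == subject_field(lookalike, "commonName"))
```

Both subjects carry an identical organization name, which is the one thing Safari’s green bar shows; they differ only in the domain, the one thing it hides.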
Over the past couple of weeks, you’ve probably noticed a resurgence of the old rumour that Apple deliberately slows down older iPhones with software updates, presumably to encourage users to upgrade. Here’s the post on Reddit from “TeckFire” that, I think, sparked recent rumours to that effect:
[…] Wear level was somewhere around 20% on my old battery. I did a Geekbench score, and found I was getting 1466 Single and 2512 Multi. This did not change whether I had low power mode on or off. After changing my battery, I did another test to check if it was just a placebo. Nope. 2526 Single and 4456 Multi. From what I can tell, Apple slows down phones when their battery gets too low, so you can still have a full day’s charge. […]
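For a sense of scale, the scores in that post work out to better than a seventy percent improvement from nothing more than a battery swap; a quick sketch of the arithmetic:

```python
# TeckFire's reported Geekbench scores, before and after the battery swap.
before = {"single": 1466, "multi": 2512}
after = {"single": 2526, "multi": 4456}

def gain(old, new):
    """Percentage improvement from old to new, rounded to the nearest point."""
    return round((new - old) / old * 100)

single_gain = gain(before["single"], after["single"])  # about 72%
multi_gain = gain(before["multi"], after["multi"])     # about 77%
```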
John Poole of Primate Labs, the company that runs Geekbench, effectively confirmed the post by examining Geekbench users’ scores in aggregate, and concluded:
If the performance drop is due to the “sudden shutdown” fix, users will experience reduced performance without notification. Users expect either full performance, or reduced performance with a notification that their phone is in low-power mode. This fix creates a third, unexpected state. While this state is created to mask a deficiency in battery power, users may believe that the slow down is due to CPU performance, instead of battery performance, which is triggering an Apple-introduced CPU slow-down. This fix will also cause users to think, “my phone is slow so I should replace it” not, “my phone is slow so I should replace its battery”. This will likely feed into the “planned obsolescence” narrative.
Here’s what Apple says about this:
Our goal is to deliver the best experience for customers, which includes overall performance and prolonging the life of their devices. Lithium-ion batteries become less capable of supplying peak current demands when in cold conditions, have a low battery charge or as they age over time, which can result in the device unexpectedly shutting down to protect its electronic components.
Last year we released a feature for iPhone 6, iPhone 6s and iPhone SE to smooth out the instantaneous peaks only when needed to prevent the device from unexpectedly shutting down during these conditions. We’ve now extended that feature to iPhone 7 with iOS 11.2, and plan to add support for other products in the future.
As that battery ages, iOS will check its responsiveness and effectiveness actively. At a point when it becomes unable to give the processor all of the power it needs to hit a peak of power, the requests will be spread out over a few cycles.
Remember, benchmarks, which are artificial tests of a system’s performance levels, will look like peaks and valleys to the system, which will then trigger this effect. In other words, you’re always going to be triggering this when you run a benchmark, but you definitely will not always trigger this effect when you’re using your iPhone like normal.
Apple’s solution is quite clever here: to make a device last longer during the day with a battery in poor condition, the system simply caps peak performance. Since most activities don’t require that level of performance, users shouldn’t notice this cap in typical usage.
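To make the mechanism concrete, here is a toy model of the behaviour Apple describes, not their actual implementation: when a power request exceeds what a degraded battery can safely deliver, the work is spread across additional, slower cycles rather than dropped.

```python
def schedule(requests, peak_cap):
    """Spread each power request over however many cycles the cap allows.

    A toy model of Apple's description: no work is dropped, it just
    takes more cycles when the battery cannot supply the peak draw.
    """
    cycles = []
    for demand in requests:
        while demand > 0:
            draw = min(demand, peak_cap)
            cycles.append(draw)
            demand -= draw
    return cycles

# A healthy battery absorbs the burst in one cycle per request; a
# degraded one (lower cap) finishes the same total work across
# seven slower cycles instead of three.
burst = [10, 10, 2]
healthy = schedule(burst, peak_cap=10)   # 3 cycles
degraded = schedule(burst, peak_cap=4)   # 7 cycles, same total work
```

The sum of work done is identical in both cases; only the time to complete it changes, which is exactly why light usage feels normal and peak workloads feel slow.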
However, for the tasks that do make full use of the CPU, the effect of performance capping can be very noticeable. If search is indexing in the background, or the user is playing a game, or Safari is rendering a complex webpage, the device will feel much slower because it’s hitting the wall of reduced peak performance.1
Even though I think Apple’s solution is clever and, arguably, right, I don’t think they communicated this very well. I don’t know why Apple would even consider keeping something like this hidden — there are hundreds of millions of iPhones in active use around the world, so it’s guaranteed to be discovered. I understand why they would be reluctant to communicate this to users because it shatters the apparent simplicity of the product, but it would also be trivial to present users with a first-run dialog indicating that the battery is in a poor state and the phone will run with reduced performance until it is repaired. By choosing to implement this quietly, it appears more nefarious than it really is. That doesn’t engender trust.
Update: Apple has long been very good about managing expectations. When an item is backordered in their online store, they almost always beat their own shipping time estimates. The web is awash with stories from users who were pleasantly surprised with free or inexpensive repairs when they went to an Apple Store. This is an instance where they blew it — needlessly, I think.
Update: It would be interesting to know how Android handles battery degradation, and whether they employ similar throttling mechanisms or have their phones shut off when CPU power requests exceed the maximum battery output.
The Geekbench chart for iPhone 6S models running iOS 11.2 has peaks in single-core scores of about 1100, 1400, 1700, 2200, and a big spike at about 2500. The 1400 score is roughly comparable to the speed of an iPhone 6. ↩︎
Justin O’Beirne is back with another one of his well-illustrated essays on the state of digital maps. This one is mostly about Google and the power of knowing about buildings:
At some point, Google realized that just as it uses shadings to convey densities of cities, it could also use shadings to convey densities of businesses. And it shipped these copper-colored shadings last year as part of its Summer redesign, calling them “Areas of Interest”:
With “Areas of Interest”, Google has a feature that Apple doesn’t have. But it’s unclear if Apple could add this feature to its map in the near future.
The challenge for Apple is that AOIs aren’t collected — they’re created. And Apple appears to be missing the ingredients to create AOIs at the same quality, coverage, and scale as Google.
With Maps in particular, Google has truly learned the value of what Apple has known for quite some time: it pays to own your systems. The data Google has been able to collect for Maps has created staggering competitive advantages for them, and has enabled them to do things none of their competitors are even close to attempting. It makes you wonder why so much of Apple’s mapping effort remains, as O’Beirne illustrates, so clearly dependent on third-party data. It also makes you wonder if anyone can catch Google at their rate of progress.
Today, Facebook announced that it will start using its facial recognition technology to find photos of you across its site, even if you aren’t tagged in those photos. The idea is to give you more control over your identity online by informing you when your face appears in a photo, even those you don’t know about. According to a Facebook blog post, the new feature is powered by the same AI technology used to suggest friends you may want to tag in your own uploaded images.
The feature, dubbed Photo Review, has one caveat: you’ll only be notified of an untagged photo of yourself if you’re in the intended “audience” of that photo. “We always respect the privacy setting people select when posting a photo on Facebook (whether that’s friends, public, or a custom audience), so you won’t receive a notification if you’re not in the audience,” the blog post says.
To be clear, Facebook is now only making public what they’ve been doing privately for years: building a massive catalogue of recognized faces matched to names, birthdays, locations, and so on. It also means that they likely have an enormous catalogue of faces matched to people who are not members and, therefore, also know those people’s relationships to Facebook users. Kashmir Hill of Gizmodo was told by a Facebook representative that this facial recognition capability is not used for the People You May Know feature, though, so it won’t expose that information publicly right now.
For what it’s worth, Photo Review will not be made available to European or Canadian Facebook users because of local privacy laws. I am completely fine with that.
Federal Communications Commission (FCC) Chairman Ajit Pai said Friday that supporters of net neutrality provisions that were repealed Thursday have been proven wrong, as internet users wake up still able to send emails and use Twitter after the regulations were struck down.
Of course, Pai isn’t stupid, and he knows that this is a completely disingenuous defence. For one thing, the repeal won’t take effect until sixty days after it is published in the Federal Register. Yet, even though this explanation is bullshit, it is enabled by two related phenomena.
The first is that net neutrality is a fairly esoteric policy issue, despite sounding simple on the surface. Indeed, net neutrality policies are pretty simple for ISPs and consumers to understand: all traffic passing over the network is treated equally. They require ISPs neither to privilege nor to hinder any data. But the pragmatic consequences of not having net neutrality policies in place are much harder to grok.
That has led to people trying to explain the negative impacts of yesterday’s vote with all sorts of analogies and situations, to the point of farce. That’s the second phenomenon: a muddying of the waters by well-meaning activists, writers, and public figures. An internet that lacks these policies means that ISPs have far more power over the data transmitted via their networks. That theoretically means that they could make Twitter load just one word at a time, or they could charge subscribers five dollars per month to access Facebook. But, realistically, that won’t happen.
What is far more likely, in my mind, is a quieter and more insidious campaign by ISPs to create private marketplaces under their control. I don’t think it’s unreasonable to imagine tiered internet plans where “premium” video services — the Netflixes and YouTubes of the world — would get guaranteed smooth service at faster speeds, while other media services would be streamed at the same lower speed as the webpages you read and emails you receive.
Of course, the only way this guarantee would be made is if the ISP were to strike a deal with the media service, and it’s unlikely that consumers would know about this until Netflix inevitably bumps up their rates to make up for their increased costs. Consumers also probably wouldn’t be wholly aware of the dynamics of this scheme if, say, Vimeo were to refuse to pay to be included in the hypothetical premium media package. When every website is loading slowly, you blame your computer, WiFi connection, or ISP; when only a single website is slow, you probably blame the website.
ISPs might even charge enticingly lower rates for a service like this, hoping they’ll make up the difference in increased subscribers and contractually-obligated fees from media services. It will look appealing, especially if the majority of your web experience centres around the giants of the web. But it will also mean that competing services will be fighting against established players that have paid to more deeply entrench themselves in consumers’ web habits.
As major ISPs increasingly consolidate into media conglomerates,2 there’s also reason to worry that they will favour their own media in ways that may not legally violate American antitrust law. Even if they do, regulators may be hesitant to prosecute in a generally weakened antitrust climate.
But let’s be optimistic for a moment and, like Ajit Pai and Ben Thompson, let’s assume that ISPs will act in consumers’ best interests and somehow innovate with their utility-like service. In other words, let’s assume that the internet of two or three years from now works pretty much the same as it does today, just faster, cheaper, and more available to everyone. Why would anyone want rules like these in place?
In short, they’re a legally-binding guarantee that ISPs must not engage in the kind of behaviour described above. The rules also prevent ISPs from doing the very things Pai cited as evidence that the internet didn’t collapse the day after his vote: blocking email and Twitter. Given the staggering influence ISPs have over the information and media we consume, the amount of soft power they now have is greatly concerning. They won’t do the things described earlier, but they could; they could do it, but they won’t. They promise not to, because doing so would be absurd in the minds of consumers; it would run counter to our learned expectations of how the internet works.
If you buy the ISPs’ argument that these rules are just needless nannying on the part of the federal government, it’s understandable why they and their lobbyists would want to tell everyone before the FCC’s vote that we should all calm down. That there was nothing to fear, because they profit best when everyone uses their internet connections as they do today. But, as often happens, what ISPs said after the FCC voted to rescind these regulations differs greatly from the picture they painted before.
We reached out to 10 big or notable ISPs to see what their stances are on three core tenets of net neutrality: no blocking, no throttling, and no paid prioritization. Not all of them answered, and the answers we did get are complicated.
In particular, none of the ISPs we contacted will make a commitment — or even a comment — on paid fast lanes and prioritization. And this is really where we expect to see problems: ISPs likely won’t go out and block large swaths of the web, but they may start to give subtle advantages to their own content and the content of their partners, slowly shaping who wins and loses online.
So you’ve heard this story before. You know that analogies that try to explain net neutrality often muddy the waters of an already complex issue. You know that ISPs are lobbying the hell out of Congress to try to get a law passed that would take the FCC out of the equation for good. Ajit Pai and the other FCC Commissioners who voted with him know all of this, too. Why write it all again?
Well, there is a glimmer of hope for you to have your say, now. Being an appointed and independent body, the FCC is not democratic and is not subject to continued public approval, per se.3 Now, though, this issue has been bunted over to elected officials. Divided as the United States may be today, an overwhelming majority of Americans disagree with the Title II net neutrality repeal. So when your representatives are helping write the new net neutrality rulebook, make sure they know just how much you approve of maintaining Title II-like regulations.
It’s a complicated topic, but I’m sure if you explain it to them really, really slowly, they’ll get it.
this whole year has been beyond parody but net neutrality rules being reversed against massive popular support on the same day that Disney effectively becomes the world’s only media company is still a bit of a stretch
When the Apple Watch Series 3 first launched, carriers in the United States and other countries where the LTE version of the device is available offered three free months of service and waived activation fees.
That fee-free grace period is coming to an end, and customers are getting their first bills that include the $10 per month service charge.
If you have an Apple Watch Series 3 with LTE functionality, you’ve probably already learned that $10 is not all it’s going to cost per month. On carriers like AT&T and Verizon, there are additional service charges and fees, which means it’s not $10 per month for an Apple Watch, it’s more like $12-$14.
I still think it’s egregious that carriers are charging anything more than an administrative fee — at most — to use an LTE Apple Watch on their network. You don’t get any additional data allotment by adding an Apple Watch to your plan; if anything, the data a Watch will use will be dramatically less than that used by a smartphone. Yet another subscription is the kind of thing that makes me wary of an LTE Apple Watch — not necessarily because of the price, but because of the ethics. I bet most consumers can tell that this is nothing more than a money grab by carriers.
A core Republican talking point during the net neutrality battle was that, in 2015, President Obama led a government takeover of the internet, and Obama illegally bullied the independent Federal Communications Commission into adopting the rules. In this version of the story, Ajit Pai’s rollback of those rules Thursday is a return to the good old days, before the FCC was forced to adopt rules it never wanted in the first place.
But internal FCC documents obtained by Motherboard using a Freedom of Information Act request show that the independent, nonpartisan FCC Office of Inspector General — acting on orders from Congressional Republicans — investigated the claim that Obama interfered with the FCC’s net neutrality process and found it was nonsense. This Republican narrative of net neutrality as an Obama-led takeover of the internet, then, was wholly refuted by an independent investigation and its findings were not made public prior to Thursday’s vote.
When little to no proof supports the arguments that you are making, and what evidence does exist actually refutes your stance, do you simply hide it? Congratulations — you, too, could be a Republican FCC commissioner.
As expected and in spite of overwhelming public and business support for net neutrality rules, the FCC just voted along party lines to strip themselves of the power to meaningfully regulate internet service providers. But just because appointed FCC Commissioners like Ajit Pai have no respect for the public, that doesn’t mean this is over.
The first course of immediate action will be for net neutrality proponents to pressure Congress to use the Congressional Review Act to pass a resolution of disapproval. This is a mechanism that allows Congress to overrule any regulations enacted by federal agencies. You might remember it’s the tool that the GOP used to eliminate broadband privacy protections earlier this year.
“The CRA is our best option on Capitol Hill for the time being,” said Timothy Karr, a spokesperson for the Free Press Action Fund, an open internet advocacy group. “We’re not interested in efforts to strike a Congressional compromise that are being championed by many in the phone and cable lobby. We don’t have a lot of confidence in the outcome of a legislative fight in a Congress where net neutrality advocates are completely outgunned and outspent by cable and telecom lobbyists.”
A lot more work needs to be done. Title II regulations are an effective and well-rounded way to treat ISPs more like the utility providers they really are, but a bill could be passed that places a Title II-style framework into a modern context for the internet, if there’s enough public pressure to do so. Time for Americans to get to work.
Update: New York Attorney General Eric Schneiderman is suing to block this repeal. He pointed out yesterday that millions of comments on this topic were posted under real people’s names without their knowledge or consent, and that the FCC has refused to allow an investigation into this matter.
The most dramatic cybersecurity story of 2016 came to a quiet conclusion Friday in an Anchorage courtroom, as three young American computer savants pleaded guilty to masterminding an unprecedented botnet — powered by unsecured internet-of-things devices like security cameras and wireless routers — that unleashed sweeping attacks on key internet services around the globe last fall. What drove them wasn’t anarchist politics or shadowy ties to a nation-state. It was Minecraft.
Minecraft may have been the motive and three college students may have been the perpetrators, but the reason this attack was so successful was because so many internet-of-things device manufacturers don’t prioritize security, and nobody really checks to make sure any of these products have been tested for trivial loopholes.
We’re used to extension cords being certified that they won’t burst into flames when you plug them in. Microwaves and cellphones get tested by regulatory bodies to ensure that they won’t fry living organisms. We expect our cars to be built to withstand moderate collisions. These processes don’t prevent all problems, but they do help maintain standards and provide third-party verification that the manufacturer did a good job.
I’m not necessarily arguing that every device and software update ought to go through an extensive pentesting process, but there is a reasonable argument to be made that internet-of-things devices should be subject to a little more scrutiny. The industry is currently not doing a good enough job regulating itself, and their failures can have global effects. Some sort of standards body probably would slow down the introduction of these products, but is the possibility of a global attack on the internet’s infrastructure a reasonable price to pay for bringing a device to the market a little bit faster?
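For a sense of how trivial those loopholes are: Mirai spread largely by trying a short list of factory-default logins over telnet. Here is a hedged sketch of the kind of audit a standards body might mandate; the device names are hypothetical, while the credential pairs are among those published in Mirai’s source code:

```python
# A few of the factory-default logins Mirai tried (pairs taken from its
# published source); the device fleet below is hypothetical.
MIRAI_DEFAULTS = {
    ("root", "xc3511"), ("root", "vizxv"), ("admin", "admin"),
    ("root", "default"), ("admin", "password"),
}

def audit(devices):
    """Flag devices still using a known factory-default credential."""
    return [name for name, user, password in devices
            if (user, password) in MIRAI_DEFAULTS]

fleet = [
    ("camera-01", "root", "xc3511"),      # shipped default, never changed
    ("router-07", "admin", "s3cure-Xy"),  # operator set a unique password
    ("dvr-records", "admin", "admin"),
]
vulnerable = audit(fleet)
```

Checks this simple would have blunted the botnet, which is exactly why the absence of any mandated baseline testing is so conspicuous.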
When Ajit Pai, the Trump-appointed head of the Federal Communications Commission, announced his intention to roll back Obama-era net-neutrality guidelines, gutting rules that prevent Internet service providers from charging companies for faster access or from slowing down or speeding up services like Netflix or YouTube, he was quick to claim that critics of his plan—Internet freedom groups and smaller Internet companies that can’t afford so-called “fast lanes”—were overreacting. “They greatly overstate the fears about what the Internet will look like going forward,” Pai said on Fox & Friends. Pai’s proposal, which would put in place a voluntary system reliant on written promises from I.S.P.s not to stall competitors’ traffic or block Web sites, essentially serves as a road map to radically reshape the Internet. But like Pai, I.S.P.s and others in the telecom industry have curiously insisted that consumers and smaller companies have nothing to fear when it comes to net-neutrality reform.
ISPs also promise to be at your house between noon and 3:00 PM to check out your slow internet connection which, of course, is because they oversold your neighbourhood and overstated likely end-user speeds; but, sure, let’s trust them to play fair when they have few incentives to do so.
Apple Inc. is designing a new chip for future Mac laptops that would take on more of the functionality currently handled by Intel Corp. processors, according to people familiar with the matter.
The chip, which went into development last year, is similar to one already used in the latest MacBook Pro to power the keyboard’s Touch Bar feature, the people said. The updated part, internally codenamed T310, would handle some of the computer’s low-power mode functionality, they said. The people asked not to be identified talking about private product development. It’s built using ARM Holdings Plc. technology and will work alongside an Intel processor.
The current ARM-based chip for Macs is independent from the computer’s other components, focusing on the Touch Bar’s functionality itself. The new version in development would go further by connecting to other parts of a Mac’s system, including storage and wireless components, in order to take on the additional responsibilities. Given that a low-power mode already exists, Apple may choose to not highlight the advancement, much like it has not marketed the significance of its current Mac chip, one of the people said.
It sounds like this is the chip that is included in the iMac Pro, even though Gurman and King cite lower-power tasks as being the focus of its development. Steven Troughton-Smith in November:
This looks like the iMac Pro’s coprocessor (Bridge2,1) will be an A10 Fusion chip with 512MB RAM […] So first Mac with an A-series chip
Rene Ritchie tweeted today that the A10 has been rebranded “T2” — as in, a successor to the T1 chip in Touch Bar MacBook Pro models.
Cabel Sasser of Panic received an iMac Pro review unit from Apple, and tweeted about the T2’s functionality:
It integrates previously discrete components, like the SMC, ISP for the camera, audio control, SSD control… plus a secure enclave, and a hardware encryption engine.
This new chip means storage encryption keys pass from the secure enclave to the hardware encryption engine in-chip — your key never leaves the chip. And, it allows for hardware verification of OS, kernel, boot loader, firmware, etc. (This can be disabled…)
In addition to the enhanced security measures Sasser notes, a couple more things are very exciting about Apple’s gradual rollout of a proprietary coprocessor in their Mac lineup. The T2 sounds like it expands upon some of the input mechanism security measures of the T1, so the keyboard and built-in camera are more secure than in previous implementations. And, as Guilherme Rambo noticed, it can enable “Hey, Siri” functionality on the Mac. Apple hasn’t enabled that functionality yet, though; so, now, it’s a question of “when?”.
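To illustrate the design Sasser describes, here is a toy model of the security boundary; it is nothing like the T2’s actual hardware AES engine (the cipher below is a throwaway XOR stream), but it shows the shape of the API: callers can ask the enclave to encrypt or decrypt, while nothing exposes the key itself.

```python
import os

class ToySecureEnclave:
    """Toy model of a key that never leaves the chip: callers may request
    encryption or decryption, but there is no API to read the key."""

    def __init__(self):
        # Generated internally; never returned to the caller.
        self._key = os.urandom(32)

    def _stream(self, length):
        # Repeat the key to the required length. A throwaway XOR
        # keystream, standing in for the real hardware AES engine.
        reps = length // len(self._key) + 1
        return (self._key * reps)[:length]

    def encrypt(self, plaintext: bytes) -> bytes:
        return bytes(a ^ b
                     for a, b in zip(plaintext, self._stream(len(plaintext))))

    decrypt = encrypt  # XOR is its own inverse

enclave = ToySecureEnclave()
secret = b"disk encryption key material"
wrapped = enclave.encrypt(secret)
```

Only `wrapped` ever crosses the boundary; the design goal is that even a compromised OS has nothing worth stealing from memory.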
Apple updated its website with news that the iMac Pro is shipping beginning on December 14, 2017. The pro-level iMac features a long list of impressive specifications. The desktop computer, which was announced in June at WWDC, comes in 8, 10, and 18-core configurations, though the 18-core model will not ship until 2018. The new iMac can be configured with up to 128GB of RAM and can handle SSD storage of up to 4TB. Graphics are driven with the all-new Radeon Pro Vega, which Apple said offers three times the performance over other iMac GPUs.
Apple provided Marques Brownlee (MKBHD) and another YouTuber, Jonathan Morrison, with review units, and they seem effusively positive, with the exception of some concerns about the machine’s lack of post-purchase upgradability.
Of note, there’s nothing on the iMac Pro webpage nor in either of the review videos about the Secure Enclave that’s apparently in the machine, nor is there anything about an A10 Fusion chip or “Hey, Siri” functionality. These rumours were supported by evidence in MacOS; it isn’t as though the predictions came out of nowhere. It’s possible that these features will be unveiled on Thursday when the iMac Pro becomes available, or perhaps early next year with a software update, but I also haven’t seen any reason for the Secure Enclave — the keyboard doesn’t have a Touch Bar, nor is there Touch ID anywhere on this Mac.
I found a very consistent set of results: a 2X to 3X boost in speed (relative to my current iMac and MacBook Pro 15”), a noticeable leap compared to most generational jumps, which are generally ten times smaller.
Whether you’re editing 8K RED video, H.264 4K Drone footage, 6K 3D VR content or 50 Megapixel RAW stills – you can expect a 200-300% increase in performance in almost every industry leading software with the iMac Pro.
Most of my apps have around 20,000-30,000 lines of code spread out over 80-120 source files (mostly Obj-C and C with a teeny amount of Swift mixed in). There are so many variables that go into compile performance that it’s hard to come up with a benchmark that is universally relevant, so I’ll simply note that I saw reductions in compile time of between 30-60% while working on apps when I compared the iMac Pro to my 2016 MacBook Pro and 2013 iMac. If you’re developing for iOS you’ll still be subject to the bottleneck of installing and launching an app on the simulator or a device, but when developing for the Mac this makes a pretty noticeable improvement in repetitive code-compile-test cycles.
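A note on reading those numbers: a percentage reduction in compile time understates the speedup factor. A 60 percent reduction means builds finish in 40 percent of the time, which is a 2.5× speedup; a quick sketch of the conversion:

```python
def speedup(reduction_pct):
    """A time reduction of r% means the new run takes (100 - r)% as long,
    so the speedup factor is 100 / (100 - r)."""
    return 100 / (100 - reduction_pct)

# The reported 30-60% compile-time reductions, expressed as multipliers.
low = round(speedup(30), 2)   # a 30% reduction is about a 1.43x speedup
high = round(speedup(60), 2)  # a 60% reduction is a 2.5x speedup
```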
These are massive performance gains, even at the 10-core level; imagine what the 18-core iMac Pro is going to be like. And then remember that this isn’t the Mac Pro replacement — it’s just a stopgap while they work on the real Mac Pro replacement.
Update: Rene Ritchie says that the A10 Fusion SoC is, indeed, present in the iMac Pro, albeit rebranded as a T2 coprocessor.
Matt Birchler used a Pixel 2 instead of his usual iPhone for a couple of months, and has started publishing pieces about his experience and impressions. It’s worth your time to start with part one and work your way through his thoughts, but this bit from the “Performance and Stability” section stood out to me:
[…] My time with Android has shown it to be anything but the “stable” alternative to iOS. Just 31 days into my time with the Pixel 2, I had to restore my phone to factory settings to fix the errors I was experiencing. In addition to the issues raised in that post, I have also had issues with apps crashing, notifications staying on silent even though I have them set to vibrate, random reboots, and more. After a few weeks I actually stopped reporting my Android bugs to Twitter because it was getting too depressing.
As Birchler points out, everyone’s bug experiences are different because, even with the relatively limited configuration options available for mobile devices — compared to, say, PCs — there are still billions of possible combinations of languages, WiFi networks, apps, settings, and so on. In light of recent iOS bugs, though, it’s remarkable to recognize that there’s still a lot of work to be done all around. Bugs shake the trust we place in our devices and may even make us consider switching, but user reports like these acknowledge that the alternative isn’t necessarily any more stable. I’m not mocking Android users here or the OS itself; it’s just something worth recognizing.
Hey, remember how Andy Rubin temporarily stepped away from Essential after reporters from the Information started asking questions about what they called an “inappropriate relationship” between him and a woman who worked for him while at Google? Theodore Schleifer of Recode has the latest:
Andy Rubin, the founder of smartphone startup Essential, has already returned to his company less than two weeks after it was announced that he took a leave of absence amid questions about an alleged inappropriate relationship.
Even while on leave from Essential, Rubin was still able to show up to work at the same physical workplace. That’s because he did not take a similar leave from Playground Global, the venture capital firm he founded, which shares the same office space as Essential.
It will come as no surprise to you that Playground Global also has an investment in Essential, so what did Rubin’s leave of absence truly mean?
There have been two major versions of the FCC’s transparency requirements: one created in 2010 with the first net neutrality rules, and an expanded version created in 2015. Both sets of transparency rules survived court challenges from the broadband industry.
The 2010 requirement had ISPs disclose pricing, including “monthly prices, usage-based fees, and fees for early termination or additional network services.”
That somewhat vague requirement will survive Pai’s net neutrality repeal. But Pai is proposing to eliminate the enhanced disclosure requirements that have been in place since 2015.
The 2015 disclosures that Ajit Pai’s proposal would undo include transparency on data caps, and additional monthly fees for things like modem rentals. ISPs also wouldn’t have to necessarily make these disclosures public on their own website; they can tell the FCC about them, and the FCC will publish the disclosures on their byzantine website.
Pai has claimed that his proposed rollback will encourage net neutrality practices without regulation because it will require ISPs to be fully transparent. In a shocking turn of events for statements and policies originating from the top minds of this administration, that claim turns out to be a complete lie: ISPs won’t have to be as open and transparent about their pricing and policies, and they have repeatedly stated that they would use tactics like paid prioritization to manipulate network traffic if given the opportunity.
I don’t have an Amazon Prime subscription, so I don’t really have a reason to download this app; but, by all accounts, it is shockingly bad.
Netflix’s is also pretty awful — it now autoplays a preview of the selected show or movie at the top of the screen, with sound, and I can’t find any way to disable this. It also doesn’t behave like a typical tvOS app: the app navigation is displayed as tiles, shows and movies are also displayed as tiles, and they’re mixed together in an infinitely-scrolling grid.
Hulu isn’t available in Canada, but its tvOS app is apparently poor as well.
Why is it that three of the biggest players in streaming video can’t seem to find the time and resources to build proper tvOS apps? Is it not worth the effort because the Apple TV isn’t popular enough? Is it because these companies simply don’t care?
I don’t think it’s right to stymie experimentation amongst app developers, but tvOS has a very particular set of platform characteristics. If Apple isn’t going to encourage developers to comply with those characteristics, it’s up to users to provide feedback and encourage developers like these to do better.
The larger point here, though, is that while there certainly were a number of reasons to be hesitant about supporting Title II or even explicit rules from the FCC a decade ago, enough things have happened that if you support net neutrality, supporting Title II is the only current way to get it. Ajit Pai’s plan gets rid of net neutrality. The courts have made it clear. The (non) competitive market has made it clear. The statements of the large broadband providers have made it clear. The concerns of the small broadband providers have made it clear. If Ben does support net neutrality, as he claims, then he should not support Pai’s plan. It does not and will not lead to the results he claims he wants. It is deliberately designed to do the opposite.
So, yes. For a long time — like Ben does now — I worried about an FCC presenting rules. But the courts made it clear that this was the only way to actually keep neutrality — short of an enlightened Congress. And the deteriorating market, combined with continued efforts and statements from the big broadband companies, made it clear that it was necessary. You can argue that the whole concept of net neutrality is bad — but, if you support the concept of net neutrality, and actually understand the history, then it’s difficult to see how you can support Pai’s plan. I hope that Ben will reconsider his position — especially since Pai himself has been retweeting Ben’s posts and tweets on this subject.
If I didn’t convince you to disagree with Thompson’s misleading piece, maybe Masnick will. If you live in the United States, it’s vital that the FCC — particularly Ajit Pai, Michael O’Rielly, and Brendan Carr — and your representatives hear your concerns.
There are at least two possible explanations for all of these misunderstandings and technical errors. One is that, as we’ve suggested, the FCC doesn’t understand how the Internet works. The second is that it doesn’t care, because its real goal is simply to cobble together some technical justification for its plan to kill net neutrality. A linchpin of that plan is to reclassify broadband as an “information service” (rather than a “telecommunications service,” or common carrier), and the FCC needs to offer some basis for it. So, we fear, it’s making one up, and hoping no one will notice.
Whether the FCC’s commissioners are being malicious or they truly don’t understand how the internet works, either possibility disqualifies them from running the Commission.
John Herrman, New York Times, on reactions to the power of large tech companies in 2017:
The flip side of these companies’ new dominance is that, not unlike the first industrialists, they turn progress from something that manifests inevitably with the passage of time into something that is being done to us, for reasons that are out of our control but seem unnervingly and suddenly within someone else’s. This is a profound reorientation, which might explain why current anxieties about the internet make for such unlikely bedfellows. Conservative parents with moral complaints about inappropriate videos surfacing in YouTube kids’ channels find themselves inadvertently agreeing with leftist critiques of corporate power. Facebook’s inability to deal in any meaningful way with misinformation on the platform has loosely aligned an elitist critique of democratized news with populist anger at a company led by Silicon Valley elites. There are right-wing anti-monopolists and left-wing anti-monopolists setting their sights on Google and Facebook, claiming dangerous censorship or lack of responsible moderation or, sometimes, both at once — people who want different things, and who have incompatible goals, but who have intuited the same core premise. In these instances, the only people left telling us not to worry — rhyming their responses with the vindicated defenders of the nascent internet — have suspiciously much to lose.
In this future, what publications will have done individually is adapt to survive; what they will have helped do together is take the grand weird promises of writing and reporting and film and art on the internet and consolidated them into a set of business interests that most closely resemble the TV industry. Which sounds extremely lucrative! TV makes a lot of money, and there’s a lot of excellent TV. But TV is also a byzantine nightmare of conflict and compromise and trash and waste and legacy. The prospect of Facebook, for example, as a primary host for news organizations, not just an outsized source of traffic, is depressing even if you like Facebook. A new generation of artists and creative people ceding the still-fresh dream of direct compensation and independence to mediated advertising arrangements with accidentally enormous middlemen apps that have no special interest in publishing beyond value extraction through advertising is the early internet utopian’s worst-case scenario.
I’m going to bring this back around to net neutrality because the FCC’s vote is in about a week and I think it’s worth keeping that in mind. FCC chairman Ajit Pai has said, quite reasonably, that he is concerned about the influence of a handful of tech companies on our greater discourse. Whether that’s because he’s actually concerned about their influence or whether he’s using Silicon Valley as a scapegoat is irrelevant in this discussion. But it is more likely that a company can rise up to compete with, say, Facebook than it is that a startup could compete with a major ISP like Verizon or Comcast1 simply because of the high initial costs associated with building broadband infrastructure.2
Today’s tech giants were born in garages in the shadows of yesterday’s tech giants, so we hear, but major ISPs don’t have a comparable story. Allowing ISPs to treat websites differently or prioritizing traffic for a fee will more deeply entrench the dominance of the largest and wealthiest tech companies, and will make it less likely that an upstart can compete.
Both of these ISPs actually run cable and their own infrastructure unlike, for example, smaller regional ISPs. ↩︎
This is something that I seem remember the FCC acknowledging in their proposal (PDF) but I haven’t been able to find the passage. If you remember where it is, please let me know. ↩︎
An important update to a story I linked to two weeks ago about an Android system service that was collecting location data even when location services were switched off — according to Tony Romm of Recode, Oracle seeded that story to Quartz as part of a PR campaign against Google:
Since 2010, Oracle has accused Google of copying Java and using key portions of it in the making of Android. Google, for its part, has fought those claims vigorously. More recently, though, their standoff has intensified. And as a sign of the worsening rift between them, this summer Oracle tried to sell reporters on a story about the privacy pitfalls of Android, two sources confirmed to Recode.
To be sure, the substance of Quartz’s story — Google’s errant location tracking — checks out. Google itself acknowledged the mishap and said it ceased the practice. Nor does Oracle stand alone in raising red flags about Google at a time when many in the nation’s capital are questioning the power and reach of large web platforms.
Still, Oracle’s campaign is undeniable. In Washington, D.C., for example, it has devoted a slice of its $8.8 million in lobbying spending so far in 2017 to challenging Google in key policy debates. It has sought penalties against Google in Europe, meanwhile, and it even purchased billboard ads in Tennessee just to antagonize its tech peer, sources said.
It is quite reasonable for people and companies to have questions about Google’s dominance of many online services and mobile operating systems, and to find that Oracle’s dirty tricks campaign somewhat sours the reputation of this story.
But I don’t necessarily think this reflects poorly on Oracle; if anything, it shakes my confidence in Quartz’s reporting. I don’t know what Quartz’s sourcing attribution guidelines are, but the New York Times’ style guide indicates that a source’s interest in the story should be communicated to readers as candidly as possible. In their story, Quartz did not indicate how they were tipped off to Android’s behaviour.
[…] Interviews with more than two dozen marketers, journalists, and others familiar with similar pay-for-play offers revealed a dubious corner of online publishing in which publicists, ranging from individuals like Satyam to medium-sized “digital marketing firms” that blur traditional lines between advertising and public relations, quietly pay off journalists to promote their clients in articles that make no mention of the financial arrangement.
People involved with the payoffs are extremely reluctant to discuss them, but four contributing writers to prominent publications including Mashable, Inc, Business Insider, and Entrepreneur told me they have personally accepted payments in exchange for weaving promotional references to brands into their work on those sites. Two of the writers acknowledged they have taken part in the scheme for years, on behalf of many brands. Mario Ruiz, a spokesperson for Business Insider, said in an email that “Business Insider has a strict policy that prohibits any of our writers, whether full-time staffers or contributors, from accepting payment of any kind in exchange for coverage.”
There are a couple of different kinds of writers that, according to Christian, took payments in exchange for mentioning or linking to brands in their articles. Some publish to “contributor networks”, which are blogs hosted by major publications but not edited by them. TechCrunch used to have one of those, but they shut it down earlier this year because they noticed an increase in posts that they “strongly suspected were ghost-written by PR”, which should come as no surprise. These contributor networks tend to be filled with self-promotional garbage. I don’t understand what positive effects a contributor network has on an established publication, but it seems like it’s trading away hard-earned authority for cheap traffic.
The more insidious acts Christian profiles are those from writers ostensibly creating articles where a brand pays for very subtle placement:
Yael Grauer, a freelancer who’s written for Forbes and many other outlets, says she’s gotten as many as 12 offers like Satyam’s in a single month, which she always rejects. Some are surprisingly straightforward, like a marketer who simply asked how much she charged for an article in Slate or Wired. Others are coy, like a representative of a firm called Co-Creative Marketing, who heaped praise on her writing before asking whether she could get content published in Forbes or Wired on behalf of a client. Another marketer offered Erik Sherman, a business journalist, $315 per article to mention her client’s landscaping products in Forbes, the Huffington Post, or the Wall Street Journal — though she cautioned that the mentions would need to “not look blatant.” Sherman declined, telling the marketer that the offer was “completely unethical.”
You’d probably expect this kind of thing to be pervasive in Forbes’ contributor network, but if a similar offer were accepted by a writer for an esteemed imprint like the Wall Street Journal, it would undermine your confidence in that publication overall — especially since it’s a business publication, as opposed to something more general-interest.
For what it’s worth, even I — writing at a fairly tiny site — receive offers like these a few times every week. I have never accepted any of them, of course.
Big news today: MarsEdit 4 is out of beta and available for download from the MarsEdit home page and the Mac App Store. This marks the end of a long development period spanning seven years, so it’s a great personal relief to me to finally release it. I hope you enjoy it.
MarsEdit 4 brings major improvements to the app including a refined new look, enhanced WordPress support, rich and plain text editor improvements, automatic preview template generation, and much more.
I’ve been using MarsEdit 4 betas for several months and I love the improvements in this version — particularly, the new Safari extension. Jalkut has created a very clever trial scheme; I highly recommend you take advantage of it if you have a blog and have never tried MarsEdit before. It’s terrific.
What I like about this postmortem is that it’s the script to what is almost the “Every Frame a Painting” episode of “Every Frame a Painting”, particularly in this detail:
In order to make video essays on the Internet, we had to learn the basics of copyright law. In America, there’s a provision called fair use; if you meet four criteria, you can argue in court that you made reasonable use of copyrighted material.
But as always, there’s a difference between what the law says and how the law is implemented. You could make a video that meets the criteria for fair use, but YouTube could still take it down because of their internal system (Content ID) which analyzes and detects copyrighted material.
So I learned to edit my way around that system.
If YouTube’s automatic flagging system didn’t exist, it’s likely that “Every Frame a Painting” would feel completely different. Whether it would have been better, I’m not sure, but I think the limitations of YouTube helped birth something truly unique and very, very good.
I don’t think stationary smart speakers represent the future of computing. Instead, companies are using smart speakers to take advantage of an awkward phase of technology in which there doesn’t seem to be any clear direction as to where things are headed. Consumers are buying cheap smart speakers powered by digital voice assistants without having any strong convictions regarding how such voice assistants should or can be used. The major takeaway from customer surveys regarding smart speaker usage is that there isn’t any clear trend. If anything, smart speakers are being used for rudimentary tasks that can just as easily be done with digital voice assistants found on smartwatches or smartphones. This environment paints a very different picture of the current health of the smart speaker market. The narrative in the press is simply too rosy and optimistic.
I’m clearly not the target market for the HomePod, primarily because I live in Canada where the HomePod won’t be for sale at launch.1 I also live in an apartment small enough that I can semi-loudly say “hey Siri” and get a response from my phone on the other side of my place. But I also think that the reason I’m not that enamoured with the HomePod or any smart speaker yet is because I’m a daily Apple Watch wearer, so many of its functions are on my wrist instead of in a tube in my kitchen.
I’m guessing that these products would appeal more — not exclusively, but more — to people who live in larger homes, of course, but also people who don’t typically wear a smartwatch — Apple’s or otherwise.2 I also wonder if smart speakers are an intermediate product between a more traditional computer-user relationship and something that’s more environmental or spatial. If it is, I’d rather throw my hat in with a company that has a strict commitment to user privacy, rather than companies that serve up targeted advertising.
And, if the rollout of Apple News is anything to go by, several years after launch. ↩︎
The HomePod is only $20 more expensive in the U.S. than a Series 3 Apple Watch. ↩︎
Sebastiaan de With, designer of the Halide camera app:
When you shoot JPEG, you really need to get the photo perfect at the time you take it. With RAW and its extra data, you can easily fix mistakes, and you get a lot more room to experiment.
What kind of data? RAW files store more information about detail in the highlights (the bright parts) and the shadows (the dark parts) of an image. Since you often want to ‘recover’ a slightly over or under-exposed photo, this is immensely useful.
It also stores information that enables you to change white balance later on. White balance is a constantly measured value that cameras try to get right to ensure the colors look natural in a scene. iPhones are quite good at this, but it starts to get more difficult when light is tinted.
I’ve been shooting RAW on my iPhone almost exclusively since I received a beta version of Obscura in the summer of last year that used iOS 10’s RAW capture API. Making a RAW photo usable takes more time than shooting a JPEG straight out of the camera app, and RAW files take up far more space, but it’s completely worth it. So many of the photos I’ve captured since would have been impossible to make without RAW.
You can try this for yourself: get a manual camera app like Obscura, Halide, or Manual, and download either Lightroom or Darkroom. Capture a scene in RAW, then start playing around with the highlights, shadows,1 and white balance; in Lightroom, you can also adjust individual hues in a scene without degrading the image fidelity. It’s remarkable how much the iPhone’s sensor actually captures, especially in foliage and finer patterns.
If it’s snowy where you live, this is extremely helpful. ↩︎
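The bit-depth point in de With’s explanation is easy to demonstrate with a little arithmetic. The following is my own illustration, not from Halide’s post, and it ignores tone curves and compression for simplicity: when a 12-bit sensor value is quantized down to 8 bits, subtly different shadow tones collapse into the same value, and no amount of later brightening can restore the distinction.

```python
# Sketch (hypothetical numbers): why RAW's extra bit depth preserves
# shadow detail that 8-bit JPEG quantization destroys.

RAW_MAX, JPEG_MAX = 4095, 255  # 12-bit sensor value vs. 8-bit JPEG channel

def to_jpeg(raw_value):
    """Linearly quantize a 12-bit sensor value down to 8 bits
    (a simplification: real pipelines also apply a tone curve)."""
    return round(raw_value / RAW_MAX * JPEG_MAX)

# Two subtly different shadow tones the sensor actually recorded:
a_raw, b_raw = 17, 24

assert a_raw != b_raw                    # the RAW file keeps the distinction...
assert to_jpeg(a_raw) == to_jpeg(b_raw)  # ...the JPEG collapses them into one value
```

Brightening the JPEG in an editor just scales that single merged value, which is why recovered shadows from JPEGs tend to look blotchy while the same adjustment on a RAW file reveals real detail.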
On iOS 10.1 there were only 4 binaries using Swift. The number of apps and frameworks using Swift grew quite a lot in a year: There are now 20 apps and frameworks using Swift in iOS 11.1 […]
Similarly the number of binaries using Swift grew from 10 in macOS 10.12.1 to 23 in macOS 10.13.1.
It looks like most of the system components built in Swift are entirely new apps, or effectively so, as with Music and Podcasts. But it also appears that Apple is thoroughly porting both operating systems over to Swift. I have no idea how deep that will run — I imagine device drivers, for example, may not be rewritten — but perhaps the goal is to have everything the user interacts with be built in Swift, or something like that.
Whatever Apple’s specific goal may be, the apps they have ported to Swift so far are not little things or developer-specific utilities. These are critical apps that people use all the time. If that’s not eating your own dog food, I don’t know what is.
It’s a truism in tech design that it takes a great deal of work to make something easy to use, and no company has proven the principle more spectacularly than Apple. It came straight from Jobs, who pushed his engineers and designers to remember that it wasn’t the device that customers wanted — it was the experience, the information, the services, the apps, the ability to edit spreadsheets and documents, to watch video, send email and texts, play games, take photographs — the countless things we do today (effortlessly, for the most part). You can debate the consequences of this new power at our fingertips, but there’s no denying it’s a revolution in the daily lives of rich and poor alike, and that Apple has set the pace, led by Ive’s answers to Jobs’ questions. Jobs loved the iPad, which he called an “intimate device” because it was immersive, like a good book — a window into whatever worlds you chose to explore. “In so many ways,” Ive says, “we’re trying to get the object out of the way.”
Last night, I watched “App: The Human Story” and I was struck by Matías Duarte’s explanation that apps are generally single-purpose widgets on a very general-purpose device. I think Apple’s latest generation of devices is the purest expression of that idea. Everything they’ve been doing — from near-seamless enclosures and Face ID, down to the coatings on the display becoming increasingly closer to black, so when the display is off, it vanishes into the glass — gets closer to this idea. Even the software of the iPhone X comes closer to that: you can fling your apps around or send them back to the home screen, and it feels like you’re directly manipulating everything the system does. Similar interactions on the iPad help turn that into a totally immersive experience; one of my biggest gripes with previous generations of iOS is the number of times it still felt necessary to use the home button, but that’s almost completely changed with iOS 11. It really is remarkable how much I can do with a device that often feels like it isn’t even there.
More than 5 million people in the UK could be entitled to compensation from Google if a class action against the internet giant for allegedly harvesting personal data is successful.
A group led by the former executive director of consumer body Which?, Richard Lloyd, and advised by City law firm Mishcon de Reya claims Google unlawfully collected personal information by bypassing the default privacy settings on the iPhone between June 2011 and February 2012.
They have launched a legal action with the aim of securing compensation for those affected. The group, called Google You Owe Us, says that approximately 5.4 million people in Britain used the iPhone during this period and could be entitled to compensation.
Google is accused of breaching principles in the UK’s data protection laws in a “violation of trust” against iPhone users.
To get around Safari’s default blocking, Google exploited a loophole in the browser’s privacy settings. While Safari does block most tracking, it makes an exception for websites with which a person interacts in some way—for instance, by filling out a form. So Google added coding to some of its ads that made Safari think that a person was submitting an invisible form to Google. Safari would then let Google install a cookie on the phone or computer.
It is striking to me how malicious this kind of action is. It isn’t Google’s right to determine when it feels like it can circumvent users’ preferences to install cookies or anything on their computers. You may argue that these are not users’ preferences — that Safari’s defaults are Apple’s preferences. But I think that’s a dangerous stance because there’s no way to determine when a preference has been deliberately chosen by the user.
I know I’ve been harping on bugs in Apple’s software for the last little while, but deliberate actions like Google’s bother me far more. The Safari workaround is something that an engineer had to actually build. Someone had to understand that Safari’s default cookie settings were incompatible with tracking, but instead of choosing not to track users, they thought it was their right to override those preferences. Egregious.
John C. Dvorak of PC Magazine wrote a piece tying the introduction of Face ID on the iPhone X to the Australian government’s plans to introduce a facial recognition system to identify suspects of crime. I know very little about that plan — though I’m eager to learn more — but I do know enough about the iPhone X to take issue with this bit of his piece:
We can assume the NSA, which spies on its own citizenry, will store massive amounts of imagery in its huge facility in Utah. From that, an instant dossier of someone’s whereabouts can be produced as needed.
Until then, we have Apple’s iPhone X, which swaps Touch ID for Face ID. The real beneficiaries of this technique will be the police; they can just point it at the person and they are in.
The user must be paying attention to the device and be within a certain range for Face ID to make a successful scan. And, for what it’s worth, pressing and holding the power button and either volume button for two seconds will disable Face ID until a passcode is entered.
Also, implicitly tying Face ID to assumed NSA activities is misleading and irresponsible.
Apple previously relied on fingerprints with Touch ID; now the home button is gone, perhaps saving it money. Facial recognition is just software, after all; the camera is already in the phone.
Dvorak continues to demonstrate why he is one of the most inept technology columnists writing today in a mainstream publication. Apple has helpfully provided an easy-to-read white paper (PDF) explaining how Face ID works. It’s six pages long, but if that’s too much reading for Dvorak, Apple also put a labelled diagram on the iPhone X’s marketing webpage. In short, it doesn’t use the front-facing camera that’s “already in the phone” — it uses an infrared light, infrared dot projector, and an infrared camera to create a depth map of the detected face.
I don’t have a problem with people whose opinion differs from my own. I don’t have a problem with people who write articles that I firmly disagree with. I do have a problem with laziness and making stuff up.
Apple’s statement, via Romain Dillet of TechCrunch:
Security is a top priority for every Apple product, and regrettably we stumbled with this release of macOS.
When our security engineers became aware of the issue Tuesday afternoon, we immediately began working on an update that closes the security hole. This morning, as of 8:00 a.m., the update is available for download, and starting later today it will be automatically installed on all systems running the latest version (10.13.1) of macOS High Sierra.
We greatly regret this error and we apologize to all Mac users, both for releasing with this vulnerability and for the concern it has caused. Our customers deserve better. We are auditing our development processes to help prevent this from happening again.
A fast bug fix, an apology, and a commitment to fixing whatever led to a bug like this shipping. That’s the good news.
Unfortunately, some users on the MacRumors forums are reporting that the security patch also breaks file sharing. It would be foolish to recommend users wait to apply this patch — and impossible, because it gets installed automatically — but you should be aware of this bug if file sharing is something you depend on.
So to recount: one Portugal story is made up, and the other declared that a 10GB family plan with an extra 10GB for a collection of apps of your choosing for €25/month ($30/month) is a future to be feared; given that AT&T charges $65 for a single “Unlimited” plan that downscales video, bans tethering, and slows speeds after 22GB, one wonders if most Americans share that fear.
That, though, is the magic of the term “net neutrality”, the name — coined by the same Tim Wu whose tweet I embedded above — for those FCC rules that justified the original 2015 reclassification of ISPs to utility-like common carriers. Of course ISPs should be neutral — again, who could be against such a thing? What is missing in the ongoing debate, though, is the recognition that, ever since the demise of AOL, they have been. The FCC’s 2015 approach to net neutrality is solving problems as fake as the image in Wu’s tweet; unfortunately the costs are just as real as those in Congressman Khanna’s tweet, but massively more expensive.
Thompson follows this by acknowledging several instances when ISPs were not treating data neutrally, but concludes that contemporary regulatory action or public pressure illustrate a lack of need for Title II classification. I find this reasoning to be ill-considered at best. First, the Madison River incident:
The most famous example of an ISP acting badly was a company called Madison River Communication which, in 2005, blocked ports used for Voice over Internet Protocol (VoIP) services, presumably to prop up their own alternative; it remains the canonical violation of net neutrality. It was also a short-lived one: Vonage quickly complained to the FCC, which quickly obtained a consent decree that included a nominal fine and guarantee from Madison River Communications that they would not block such services again.
It’s worth recognizing that the consent decree references Title II guidelines. Thompson cites two more cases of net neutrality violations — Comcast blocking the BitTorrent protocol under the guise of it being network management policy, and MetroPCS offering zero-rated YouTube, which I’ll get to later — but, strangely, doesn’t mention AT&T’s blocking of FaceTime on certain cellular plans. No other video chatting apps were prohibited, raising the question of why AT&T decided to target FaceTime users.
That makes this claim, in Thompson’s recap, obviously incorrect:
There is no evidence of systemic abuse by ISPs governed under Title I, which means there are no immediate benefits to regulation, only theoretical ones
There is clearly plenty of evidence that ISPs will not treat data the same if offered the opportunity to do otherwise. And, I stress again, we aren’t simply talking about internet providers here — these are vertically-integrated media conglomerates which absolutely have incentive to treat traffic from friendly entities differently through, for example, zero-rating, as AT&T did with DirecTV, Verizon does with their NFL app, and T-Mobile does for certain services.
Again, zero-rating is not explicitly a net-neutrality issue: T-Mobile treats all data the same, some data just doesn’t cost money.
What? No, really, what? “T-Mobile treats all data the same, except the data they treat differently” might be one of the worst arguments in this whole piece, and there are a few more rotten eggs to get to. If consumers are paying for some data and there’s other data they’re not paying for, they’re naturally going to be biased towards using the data that isn’t going to cost them anything. And that makes this argument complete nonsense as well:
What has happened to the U.S. mobile industry has certainly made me reconsider [the effect on competition by zero-rating]: if competition and the positive outcomes it has for customers is the goal, then it is difficult to view T-Mobile’s approach as anything but a positive.
T-Mobile’s introduction of inexpensive so-called “unlimited” data plans — throttled after a certain amount of data has been used, of course — drove competitors to launch similar plans, that much is true. But zero-rating had very little to do with those consumer-friendly moves. And, as if to conveniently illustrate the relative dearth of competition in the US cellular market, Sprint has a chart on their website showing that single-line unlimited plans cost a similar amount per month from AT&T, T-Mobile, and Verizon; Sprint’s plan is cheaper, but they also have worse performance and coverage.
Thompson next tackles the argument that zero-rating is anti-competitive:
Still, what of those companies that can’t afford to pay for zero rating — the future startups for which net neutrality advocates are willing to risk the costs of heavy-handed regulations? In fact, as I noted in that excerpt, zero rating is arguably a bigger threat to would-be startups than fast lanes, […]
This is probably true, and that’s why it’s so important that these rules are in place.
[…] yet T-Mobile-style zero rating isn’t even covered by those regulations! This is part of the problem of regulating future harm: sometimes that harm isn’t what you expect, and you have regulated and borne the associated costs in vain.
In fact, zero-rating is, in general, covered by the 2015 net neutrality rules. That’s why the FCC sent letters to AT&T and Verizon stating that aspects of those companies’ zero-rating practices discriminated against competitors.
But T-Mobile was careful with their zero-rating practices and made sure that there were competing services offered for free. As an example, they exempt Apple Music and Spotify from data limits. But what if you wanted to listen to a mixtape on DatPiff or an indie artist on Bandcamp? That would count against your data cap, which makes those services less enticing to consumers. It clearly benefits the established players, and reduces the likelihood that a startup can compete.
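The competitive distortion is easy to see in a toy billing sketch. Everything here — service names, prices, the exemption list — is hypothetical and purely illustrative, not any carrier’s actual rates: the same amount of listening costs the consumer money only when the service isn’t on the carrier’s exempt list.

```python
# Toy model of zero-rating: a carrier bills metered data per gigabyte,
# but exempts traffic from services on its zero-rated list.
# All names and prices are invented for illustration.

PRICE_PER_GB = 10.00  # hypothetical overage price, in dollars

def billable_gb(usage_gb_by_service, zero_rated):
    """Sum only the traffic that counts against the data cap."""
    return sum(gb for service, gb in usage_gb_by_service.items()
               if service not in zero_rated)

usage = {"BigStreamCo": 4.0, "IndieMusicSite": 4.0}
zero_rated = {"BigStreamCo"}

# Identical 4 GB of streaming from each service, but only the indie
# service's traffic costs the user anything — so the rational consumer
# steers toward the incumbent.
cost = billable_gb(usage, zero_rated) * PRICE_PER_GB
print(cost)  # 40.0
```

The point of the sketch is that the bias doesn’t come from the incumbent offering a better product; it comes entirely from which side of the exemption list a service lands on.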
If anything, I think zero-rating services should actually be banned. It’s worse for consumers in the short term, but from a more expansive viewpoint, it encourages providers to be more honest about what kinds of speeds they can offer with their infrastructure. That might even get them to invest in more robust networks.1
Third, if the furor over net neutrality has demonstrated anything, it is that the media is ready-and-willing to raise a ruckus if ISPs even attempt to do something untoward; relatedly, a common response to the observation that ISPs have not acted badly to-date because they are fearful of regulation is not an argument for regulation — it is an acknowledgment that ISPs can and will self-regulate.
This is completely disproven by countless instances of corporate wrongdoing in modern American history. Banks and hedge funds already have a terrible name for helping cause the 2008 financial crisis, but many of them are still around and more valuable than ever. BP is still one of the world’s biggest oil and gas companies despite causing one of the world’s biggest environmental catastrophes.
Moreover, it isn’t as though ISPs are revered. They regularly rank towards the bottom of consumer happiness surveys. It’s not like their reputation can get much worse. And, with a lack of competition — especially amongst fixed broadband providers — it’s not like Americans have many options to turn to when their ISP suddenly starts behaving badly.
I could nitpick this article all day long, but this is, I think, the part of Thompson’s piece that frustrates me most:
I believe that Ajit Pai is right to return regulation to the same light touch under which the Internet developed and broadband grew for two decades.
This statement here isn’t just wrong — it’s lazily wrong. It is exactly the claim that Ajit Pai makes, and which Rob Pegoraro did a fantastic job debunking in the Washington Post in May:
But Pai’s history is wrong. The government regulated Internet access under Clinton, just as it did in the last two years of Barack Obama’s term, and it did so into George W. Bush’s first term, too. The phone lines and the connections served over them — without which phone subscribers had no Internet connection — did not operate in the supposedly deregulated paradise Pai mourns.
Without government oversight, phone companies could have prevented dial-up Internet service providers from even connecting to customers. In the 1990s, in fact, FCC regulations more intrusive than the Obama administration’s net neutrality rules led to far more competition among early broadband providers than we have today. But Pai’s nostalgia for the ’90s doesn’t extend to reviving rules that mandated competition — instead, he’s moving to scrap regulations the FCC put in place to protect customers from the telecom conglomerates that now dominate the market.
Thompson’s argument is exceptionally flawed, almost beyond belief. But there is one thing he may be right about: it’s fair to argue that Title II may not be the perfect law for ISPs to be governed by. There are reasonable arguments to be made for writing new legislation and passing it through the appropriate channels in Congress.
But I think it’s completely absurd to change their classification without sufficient neutrality-guaranteeing legislation in place. Unfortunately, I wouldn’t trust this Congress to write and pass that law. Therefore, it is reasonable to keep ISPs under Title II until such a bill can be passed. The “wait and see” approach Thompson favours is not fair to consumers who get to play the part of the lab rat against influential lobbyists, large corporations, and a faux-populist legislative body.
Update: Even if you believe that the American broadband market is sufficiently competitive — it isn’t — that ISPs can be trusted to not discriminate against some forms of traffic once given the freedom to — doubtful — and that existing regulatory structures will allow any problems to be fixed on a case-by-case basis, it still seems far more efficient to prevent it in the first place. There’s an opportunity to treat internet service as a fundamental utility; let’s keep it that way, whether that’s through Title II classification or an equivalent replacement.
A VR edition of the epochally awful, intentionally mind-numbing minigame from the unreleased Penn & Teller’s Smoke & Mirrors 21 years ago has a listing on Steam. The game supports the HTC Vive and Oculus Rift headsets, plus motion controllers and gamepads (partial support).
From the four screenshots shown, it looks like the game has been remastered with new graphics. But the gameplay is still the same: drive a bus from Tucson, Ariz. to Las Vegas, in real time (eight hours), fighting its misaligned steering the whole way.
Via Andy Baio, who incorrectly calls Desert Bus the “worst game ever made”. Frankly, I hope a version of this comes to the iPhone.
There appears to be a serious bug in macOS High Sierra that enables the root superuser on a Mac with a blank password and no security check.
The bug, discovered by developer Lemi Ergin, lets anyone log into an admin account using the username “root” with no password. This works when attempting to access an administrator’s account on an unlocked Mac, and it also provides access at the login screen of a locked Mac.
As with any security issue, it would have been preferable for this to be disclosed to the vendor — in this case, Apple — privately before being publicly exposed. And, still, this is a huge problem for anyone whose recently-updated Mac is occasionally in the vicinity of other people. Apparently, pretty much any authentication dialog is susceptible, including worrying things like Keychain Access or changing a drive’s FileVault state. It appears to be a bug introduced in High Sierra; I failed to reproduce it on a machine running MacOS Sierra.
I don’t want to speculate on whether something like this would be caught in code review or a penetration testing scenario. Apple may do both of those things, and this bug may simply have slipped past loads of people. I also don’t know how much buggier Apple’s operating systems are now compared to, say, ten years ago, if they are truly buggier at all. Maybe we were just more tolerant of bugs before, or perhaps apps crashed more instead of subtly failing while performing critical tasks.
But there has been a clear feeling for a while now that Apple’s software simply doesn’t seem to be as robust as it once was. And perhaps these failures are for good reasons, too. Perhaps parts of MacOS and iOS are being radically rewritten to perform better over the long term, and there are frustrating bugs that result. In a sense, this is preferable to the alternative of continuing to add new features to old functionality — I’d be willing to bet that there’s code in iTunes that hasn’t been changed since the Clinton administration.
Even with all that in mind, it still doesn’t excuse the fact that we have to live and work through these bugs every single day. Maybe a security bug like this “root” one doesn’t really affect you, but there are plenty of others that I’m sure do. I’m not deluded enough to think that complex software can ever be entirely bug-free, but I’d love to see more emphasis put on getting Apple’s updates refined next year, rather than necessarily getting them released by mid-September.1 There’s a lot that High Sierra gets right — the transition to APFS went completely smoothly for me, and the new Metal-powered WindowServer process seems to be far more responsive than previous iterations — but there is also a lot that feels half-baked.
Update: It gets worse — based on reports from security researchers on Twitter, this bug is exploitable remotely over VNC and Apple Remote Desktop. So, not only is this bug bad for any Mac left in a room with other people, it’s also bad for any Mac running High Sierra and connected to the internet with screen sharing or other remote services enabled. It’s worth adding a strong password to the root user account if you haven’t already. Thanks to Adam Selby for sending this my way.
When affected users type the word “it” into a text field, the keyboard first shows “I.T” as a QuickType suggestion. After tapping the space key, the word “it” automatically changes to “I.T” without actually tapping the predictive suggestion.
Neither of these bugs — nor, incidentally, the “A [?]” autocorrect from earlier this month — have personally affected me. For what it’s worth, I keep the predictive bar turned off because I find that I get more errors with it switched on.1 I don’t know if that has an effect on whether I see these bugs.
I don’t know how accurate the broken windows theory is,2 nor how appropriate it would necessarily be to compare it to problems with input devices. But it kind of feels as though the occasional usability irritants — interactivity-blocking animations, occasional layout bugs, and the like — have been ignored as a cost of a rapid development cycle. It seems like the tolerance of these kinds of bugs has built up to the point where input device bugs are now shipping.
I wasn’t messing around when I wrote that input devices should never be buggy. Users already don’t fully trust computers; when their only control interface is disobedient, I bet it reinforces this mistrust — or, at least, increases user frustration.3
Bugs like the word “is” turning into “I.S” are not, in and of themselves, all that alarming, but their accumulated effect is deeply irritating. Because parts of iOS are shared with Apple’s three other operating systems — soon to be four, with the release of the HomePod — these bugs can occur in many contexts. And, of course, bugs from Apple are not the only ones users will be confronted with working around daily. I once tried keeping track of all of the bugs I encountered in everything I use, and I stopped after about a hundred individual notations in about three days. Fixing bugs — even little ones — needs to be as high a priority as new features. I understand that developers often don’t get to make that decision, but someone should.
I doubt that there is any effect on autocorrect behaviour no matter whether predictive is turned on. I find it easier to see when autocorrect is about to insert a word, though, when the old-style balloon appears over whatever I’m typing. ↩︎
I understand that the broken windows theory has had very racist connotations, particularly in Giuliani’s New York, but I can’t think of anything else that communicates a sense of fixing small things prevents bigger problems. My understanding is that the theory itself isn’t racist, but its implementation often has been due to socioeconomic circumstances and aggressive policing. If you have any suggestions on what I can replace this with, I’m all ears. ↩︎
I can’t find many studies specifically about users’ trust in computers and how computer-made errors affect that. I did find a study on children’s use of handwriting recognition software where computer-made errors caused concern for users, and I also found plenty of other user interface studies noting the importance of predictability and consistency. However, I think users’ concerns and frustrations are born of a sense that they do not trust the computer when it behaves unpredictably. ↩︎
The point is, the iPhone X-spensive (pronounced “tenspensive”) is very expensive and still does not come with literally everything!
As Gizmodo points out in its great investigation into all of the various charging cables and plugs Apple offers, to actually get the quickest charging, you need to buy a 61-Watt adapter plug and a USB-C to Lightning cable.
And, to protect your phone, you have to buy a case! To listen to music wirelessly, you have to buy Bluetooth headphones! To look at it while eating a ham sandwich, you have to buy the ham sandwich! That’s right, a lot of people don’t know this, but there’s no ham sandwich in the box! Outrageous!
A classic Macalope retort to a typical whiny article, right?
Well, not exactly. In this case, I actually think Murphy has a good point: at $25 per USB-C to Lightning cable, Apple has good reason not to include them in the box, but the customer experience would be way better if they did.
Apple doesn’t have to include headphones in every iPhone box, but they do because they know that it’s far better for someone to have the option to listen to music as soon as they set up their device. They don’t have to ship every iPhone with a 50% battery charge, but they do because it’s a better experience when a customer opens the box. “Batteries not included” is a barely-tolerable buying experience for kids’ toys; Apple understands that its smartphone equivalent is unacceptable.
But if a customer has one of Apple’s recent-generation laptops, they also need to buy a cable with their new iPhone. I wouldn’t be surprised if a handful of customers bought their new iPhone and forgot to buy a USB-C cable with it, and had to drive to their nearest Apple Store or electronics retailer to pick one up. That sucks.
Likewise, the fast charging feature in these new iPhones requires additional hardware, which means that it’s something many users will be unlikely to discover on their own. Maybe that doesn’t matter much overall, but it would be fantastic if customers could experience that right out of the box.
The “iPhone X is expensive!” complaints will continue despite the phone selling extremely well. It occurs to the Macalope that in a market-based society, the real way to complain about something being too expensive is to not buy it. But the passive-aggressive whining about it lets you have your cake and complain about it, too.
I don’t think anyone’s buying decision for a thousand-dollar smartphone is predicated on whether a $25 cable is in the box. But Murphy’s complaint is not invalidated because of that; there are reasonable arguments to be made for the cable’s inclusion on the grounds of value and the unboxing experience. Maybe the 61-watt charging brick is a step too far — it’s heavy and probably expensive to build — but what about the 29-watt brick? And maybe it is absurd to include both USB-A and USB-C Lightning cables in every iPhone’s box, like Apple used to do when they shipped iPods with USB and FireWire cables, but they could include a USB-C to Lightning cable with every MacBook Pro or offer the opportunity for customers to make a trade at the point of purchase.
Murphy’s article isn’t passive-aggressive — it’s customer feedback. You may disagree with it, as does the Macalope, but “that’s capitalism!” isn’t a valid response on its own.
This is wrong on its face. It imagines that, decades ago, the FCC enshrined some plaque on the wall stating principles that subsequent FCC commissioners have diligently followed. The opposite is true. FCC commissioners are a chaotic bunch, with different interests, influenced (i.e. “lobbied” or “bribed”) by different telecommunications/Internet companies. Rather than following a principle, their Internet regulatory actions have been ad hoc and arbitrary — for decades.
This is absolutely a fair take by Graham: the FCC has, indeed, failed to uphold net neutrality provisions in the past and is currently doing so, as he points out in a following paragraph:
There are gross violations going on right now that the FCC is allowing. Most egregiously is the “zero-rating” of video traffic on T-Mobile. This is a clear violation of the principles of net neutrality, yet the FCC is allowing it — despite official “net neutrality” rules in place.
Under the previous FCC administration, AT&T and Verizon were warned to cease similar zero-rating practices, and an investigation was being conducted. That is, until Ajit Pai and this FCC administration shut those investigations down and retracted their warnings. So, yeah, the FCC is currently failing to uphold net neutrality regulations, but that’s because it’s run by a cable-chummy chairman.
This is where Graham goes off the rails:
More concretely, from the beginning of the Internet as we know it (the 1990s), CDNs (content delivery networks) have provided a fast-lane for customers willing to pay for it. These CDNs are so important that the Internet wouldn’t work without them.
I just traced the route of my CNN live stream. It comes from a server 5 miles away, instead of CNN’s headquarters 2500 miles away. That server is located inside Comcast’s network, because CNN pays Comcast a lot of money to get a fast-lane to Comcast’s customers.
The reason these egregious net neutrality violations exist is because it’s in the interests of customers. Moving content closer to customers helps. Re-prioritizing (and charging less for) high-bandwidth video over cell networks helps customers.
There’s so much amiss here that it beggars belief, especially coming from someone as technologically knowledgeable as Graham.
CDNs are absolutely a critical piece of the infrastructure of the modern web. They are what allow us to reliably stream video or access media-rich web applications around the world. But they are not inherently tied to ISPs like Comcast, nor are they a paid-for “fast lane” that violates the spirit of net neutrality.
I just launched one of CNN’s streaming videos, and it was being served from a server owned by Akamai. Akamai is a private company that CNN has its own contract with; my ISP, Shaw, provides the dumb pipe between my computer and — through some switching boxes and big fibre optic cables — Akamai’s servers. But Shaw does not have its own special contract with Akamai to serve that video any faster or slower than it would be if it were served from, say, Cloudflare. They can’t: I live in Canada, and such an arrangement is illegal here.
Moreover, the idea that CDNs somehow infringe upon net neutrality provisions is complete nonsense. Net neutrality is a set of rules that requires internet service providers to treat all traffic passing through their network identically, without prioritizing some data or blocking others. CDNs are a way for website owners to host their media in multiple places around the world. They’re completely different fields.
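The difference is worth making concrete. Here is a toy model — every number and name is invented purely for illustration — separating the two mechanisms: a CDN improves performance by moving content physically closer to the user, a choice made by the website owner, while paid prioritization has the ISP treat one sender’s packets preferentially on the same link. Only the latter conflicts with neutrality.

```python
# Toy model distinguishing two unrelated things:
#  - a CDN shortens the *distance* content travels (the host's choice; neutral)
#  - paid prioritization changes how the ISP *treats* packets in transit
# All figures are invented for illustration.

MS_PER_1000_KM = 10  # rough illustrative one-way propagation figure

def cdn_latency_ms(distance_km):
    """Latency improves because the server is physically closer."""
    return distance_km * MS_PER_1000_KM / 1000

def isp_queue(packets, paid_priority):
    """Paid prioritization: the ISP reorders traffic on the same link,
    serving the paying senders' packets first (stable sort keeps order
    within each tier)."""
    return sorted(packets, key=lambda p: p["sender"] not in paid_priority)

origin = cdn_latency_ms(4000)  # stream served from a distant origin server
edge = cdn_latency_ms(50)      # same stream from a nearby CDN edge node
print(origin, edge)            # 40.0 0.5

queue = [{"sender": "startup"}, {"sender": "incumbent"}]
print(isp_queue(queue, paid_priority={"incumbent"}))
# the incumbent's packet jumps ahead of the startup's — that reordering,
# not the caching, is what neutrality rules prohibit
```

Notice that `cdn_latency_ms` never inspects who sent the traffic; the CDN speedup is available to anyone who pays a CDN, not a toll collected by the ISP.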
And, to his last point in that quote, Graham is right that being able to serve video over mobile networks more reliably and for less money is good for consumers. That’s why ISPs — fixed or mobile — should be competing on reliability, speed, and price, not gimmicks or special offers. That’s what happened after T-Mobile introduced their ostensibly unlimited plan last year.
You might say it’s okay that the FCC bends net neutrality rules when it benefits consumers, but that’s garbage. Net neutrality claims these principles are sacred and should never be violated. Obviously, that’s not true — they should be violated when it benefits consumers.
As explained above, this argument is nonsense. Net neutrality rules do not get in the way of consumer benefits. They simply make it so that ISPs are treated as the dumb pipes that they are, and require them to compete on tangible consumer benefits like pricing and speed.
This means what net neutrality is really saying is that ISPs can’t be trusted to always act to benefit consumers, and therefore need government oversight. Well, if that’s your principle, then what you are really saying is that you are a left-winger, not that you believe in net neutrality.
This, too, is utter garbage. Plenty of conservative-leaning people and politicians favour differing amounts of government oversight across a wide variety of industries. 73% of self-identified Republicans in a poll by Mozilla say they strongly or somewhat strongly support net neutrality rules. I don’t know — maybe 73% of the Republicans surveyed by Mozilla are actually secret “left-wingers” who may also support crazy liberal ideas like a representative democracy or freedom of speech. Or, perhaps, this isn’t a partisan issue: the internet has become a utility, and many people from across the political spectrum think it ought to be treated as such.
The premise of this app by Marco Land was enough for me to buy it immediately:
With 657 billion digital images per year being captured and pushed to the web, it is likely that at some point in your life you’ve taken a photo that already exists. And you will continue to do so with the help of this app.
CCamera is the first camera app that takes images that have already been uploaded to the internet. It brings your photos to the next level — because they’re not yours.
It costs a dollar and will probably give you more satisfaction than a Snickers bar. Totally worth it.
Since the beginning of 2017, Android phones have been collecting the addresses of nearby cellular towers—even when location services are disabled—and sending that data back to Google. The result is that Google, the unit of Alphabet behind Android, has access to data about individuals’ locations and their movements that go far beyond a reasonable consumer expectation of privacy.
Quartz observed the data collection occur and contacted Google, which confirmed the practice.
The cell tower addresses have been included in information sent to the system Google uses to manage push notifications and messages on Android phones for the past 11 months, according to a Google spokesperson. They were never used or stored, the spokesperson said, and the company is now taking steps to end the practice after being contacted by Quartz. By the end of November, the company said, Android phones will no longer send cell-tower location data to Google, at least as part of this particular service, which consumers cannot disable.
As Michael Rockwell pointed out, Google only stopped when contacted by Quartz; how long would this practice have continued if Quartz had not discovered this? And why were they doing this in the first place if they weren’t storing or using the location information? And why will it take until the end of the month to stop collecting this information?
The FCC received a record-breaking 22 million comments chiming in on the net neutrality debate, but from the sound of it, it’s ignoring the vast majority of them. In a call with reporters yesterday discussing its plan to end net neutrality, a senior FCC official said that 7.5 million of those comments were the exact same letter, which was submitted using 45,000 fake email addresses.
But even ignoring the potential spam, the commission said it didn’t really care about the public’s opinion on net neutrality unless it was phrased in unique legal terms. The vast majority of the 22 million comments were form letters, the official said, and unless those letters introduced new facts into the record or made serious legal arguments, they didn’t have much bearing on the decision. The commission didn’t care about comments that were only stating opinion.
There is strong public support for the net neutrality rules in place today that prevent ISPs from prioritizing some kinds of traffic over others, yet the FCC didn’t care. Perhaps public commentary shouldn’t outweigh expert opinion — which, by the way, tends to side with Title II proponents — but perhaps it should also be considered by the FCC as an indication that what they’re proposing is disagreeable to Americans. They can correct course. Remember when previous FCC Chairman Tom Wheeler released his first draft of a proposal to allow “fast lanes”? That was met with public outcry; so, the commission listened and changed course. Pai could do that, but he won’t.
By the way, the FCC isn’t cooperating with the New York Attorney General’s investigation into those spam comments, many of which involved the theft or imitation of Americans’ identities.
The only saving grace is that the better-managed newsletters ask you to confirm that you really really want to receive emails from them. They do this by sending a single email – normally with a clickable confirmation link – to the email address entered on their subscription form.
If you don’t respond to the confirmation email, you don’t get any follow-up emails. That’s how things are supposed to work. And it’s called double opt-in.
Rather, as the majority of companies have moved to single opt-in, recipients have become re-educated on how email marketing confirmation works. Today, most people don’t expect or look for a double opt-in confirmation message when they subscribe to a newsletter.
Indeed, we’ve seen double opt-in rates within MailChimp slip to 39%. This means 61% of people start but do not finish the double opt-in process.
Maybe that’s because some people are given the opportunity to not be spammed, either when they perhaps didn’t intend to subscribe to a company’s emails, or perhaps they had the chance to second-guess their subscription after seeing their already-full inbox. That’s a good thing.
For what it’s worth, nearly all the newsletters I subscribe to still use double opt-in.
Two things have saved my inbox from becoming a complete disaster over the past couple of years: double opt-in, and iOS’ prompt to unsubscribe from newsletter emails.
This announcement from MailChimp coincides with Julia Angwin’s report for ProPublica explaining how easily thousands of malicious subscriptions overwhelmed her email inbox and prevented her from doing her job.
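The flow described above is simple enough to sketch. This is a minimal, hypothetical illustration — the helper names, URL, and secret are all invented, and a real mailing system would use a proper framework and datastore — but it shows the core idea: the confirmation link carries a signed token, and only a valid click flips the subscription on, so a flood of forged sign-ups never produces a single newsletter delivery.

```python
# Minimal double opt-in sketch. All names are hypothetical; a real
# system would persist state in a database and send actual email.
import hmac
import hashlib

SECRET = b"server-side-secret"  # illustrative; never hard-code in practice

def confirm_token(email):
    """Sign the address so the confirmation link can't be forged."""
    return hmac.new(SECRET, email.encode(), hashlib.sha256).hexdigest()

def handle_signup(email, pending):
    """Step 1: record the request and email a link containing the token.
    No newsletter is sent yet — this single message is the only email
    double opt-in permits before confirmation."""
    pending[email] = False
    return f"https://example.com/confirm?email={email}&token={confirm_token(email)}"

def handle_confirm(email, token, pending):
    """Step 2: only a click on the correctly signed link subscribes."""
    if email in pending and hmac.compare_digest(token, confirm_token(email)):
        pending[email] = True
    return pending.get(email, False)

pending = {}
link = handle_signup("reader@example.com", pending)
handle_confirm("reader@example.com", "bogus-token", pending)
print(pending["reader@example.com"])  # False — forged token, not subscribed
handle_confirm("reader@example.com", confirm_token("reader@example.com"), pending)
print(pending["reader@example.com"])  # True — genuine click, subscribed
```

Single opt-in skips step 2 entirely, which is exactly why maliciously submitted addresses turn straight into deliverable spam.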
Earlier today, I picked apart Ajit Pai’s comments made introducing his proposed repeal of net neutrality rules, but there was one thing I missed. There’s this trope that Pai has repeatedly invoked since taking office:
For almost twenty years, the Internet thrived under the light-touch regulatory approach established by President Clinton and a Republican Congress. This bipartisan framework led the private sector to invest $1.5 trillion building communications networks throughout the United States. And it gave us an Internet economy that became the envy of the world.
It’s a nonsense argument, as Rob Pegoraro pointed out in May in the Washington Post:
But Pai’s history is wrong. The government regulated Internet access under Clinton, just as it did in the last two years of Barack Obama’s term, and it did so into George W. Bush’s first term, too. The phone lines and the connections served over them — without which phone subscribers had no Internet connection — did not operate in the supposedly deregulated paradise Pai mourns.
Without government oversight, phone companies could have prevented dial-up Internet service providers from even connecting to customers. In the 1990s, in fact, FCC regulations more intrusive than the Obama administration’s net neutrality rules led to far more competition among early broadband providers than we have today. But Pai’s nostalgia for the ’90s doesn’t extend to reviving rules that mandated competition — instead, he’s moving to scrap regulations the FCC put in place to protect customers from the telecom conglomerates that now dominate the market.
The landscape of ISPs and the role they play in our lives has dramatically shifted since the mid-’90s. They are more like utilities than ever before, and ought to be regulated as such.
If you’re anything like me, when you’re shopping for broadband, you probably compare four things amongst your different options: speed, monthly allotment, availability, and price. That’s it. Internet service providers are dumb pipe provisioners. An electrical company can’t mandate which appliances you use or what you keep in your fridge; an ISP shouldn’t be allowed to limit your access to certain web services or promote others.
In addition to ditching its own net neutrality rules, the Federal Communications Commission also plans to tell state and local governments that they cannot impose local laws regulating broadband service.
It isn’t clear yet exactly how extensive the preemption will be. Preemption would clearly prevent states from imposing net neutrality laws similar to the ones being repealed by the FCC, but it could also prevent state laws related to the privacy of Internet users or other consumer protections. Pai’s staff said that states and other localities do not have jurisdiction over broadband because it is an interstate service and that it would subvert federal policy for states and localities to impose their own rules.
It’s not just an interstate service; the internet is an international service. Recent proposals from the FCC sharply contrast with net neutrality and online privacy legislation passed by the European Union and in Canada.
After the FCC canned internet privacy rules shortly before they were set to go into effect, several states and Seattle proposed legislation to protect the privacy of consumers living in their regions. If it isn’t the FCC’s job to police infringement upon Americans’ privacy by ISPs — as they seem to believe — why would they also think they have the power to usurp that right from states as well? And, aside from the global nature of the web, why would Pai’s FCC be so keen to preempt states from proposing local net neutrality rules?
The Federal Communications Commission announced on Tuesday that it planned to dismantle landmark regulations that ensure equal access to the internet, clearing the way for companies to charge more and block access to some websites.
The proposal, put forward by the F.C.C. chairman, Ajit Pai, is a sweeping repeal of rules put in place by the Obama administration. The rules prohibited high-speed internet service providers from blocking or slowing down the delivery of websites, or charging extra fees for the best quality of streaming and other internet services for their subscribers. Those limits are central to the concept called net neutrality.
Ajit Pai published a statement (PDF) on the FCC website, and it’s offensively misleading:
For almost twenty years, the Internet thrived under the light-touch regulatory approach established by President Clinton and a Republican Congress. This bipartisan framework led the private sector to invest $1.5 trillion building communications networks throughout the United States. And it gave us an Internet economy that became the envy of the world.
But in 2015, the prior FCC bowed to pressure from President Obama. On a party-line vote, it imposed heavy-handed, utility-style regulations upon the Internet. That decision was a mistake. It’s depressed investment in building and expanding broadband networks and deterred innovation.
Pai conflates the regulation of the internet with regulation of internet service providers. If he’s doing this unintentionally, he’s too stupid to run the FCC. But that clearly isn’t the case: he isn’t stupid, and I fully believe he’s conflating the two intentionally. Regulating the internet really does sound like a bad thing, but regulating Verizon and Comcast probably sounds pretty reasonable to most people — most people hate the way their internet service provider treats them. His claim that the internet is being “micromanaged” is an outright lie.
Moreover, his complaint that net neutrality regulations were passed under partisan terms is utterly ridiculous given that his proposal is also expected to pass along partisan lines — only this time, in a way that’s favourable to him.
Finally, his claim that Title II regulations have reduced broadband investment by ISPs is also a lie.
This is why I’m proposing today that my colleagues at the Federal Communications Commission repeal President Obama’s heavy-handed internet regulations. Instead the FCC simply would require internet service providers to be transparent so that consumers can buy the plan that’s best for them. And entrepreneurs and other small businesses would have the technical information they need to innovate. The Federal Trade Commission would police ISPs, protect consumers and promote competition, just as it did before 2015. Instead of being flyspecked by lawyers and bureaucrats, the internet would once again thrive under engineers and entrepreneurs.
The internet is thriving under engineers and entrepreneurs; retaining Title II classification would allow small and independent creators to compete against established players. Repealing that classification, as Pai is proposing, would allow internet service providers to create their own marketplaces with better service going to the richest and best-connected websites.
FTC Commissioner Terrell McSweeny took to Twitter to dispute the notion that the FTC would be able to adequately protect consumers:
So many things wrong here, like even if @FCC does this @FTC still won’t have jurisdiction. But even if we did, most discriminatory conduct by ISPs will be perfectly legal.
This news is dropping today and the text of the proposal will be released tomorrow because it’s the start of the Thanksgiving long weekend in the United States. Pai is counting on your outrage being buried under enough turkey and booze by Monday that you’ll forget about it. You can’t.
I’m Canadian, so it sounds like I shouldn’t care about this, but I do. I have to. The internet economy that is “the envy of the world”, in Pai’s words, is mostly an American one, so regulations that affect those companies affect the world, especially considering how weak American anti-trust regulations tend to be.
Consider that Comcast is working on a Netflix competitor, and that they also own NBCUniversal. It’s not hard to imagine an environment in which Comcast charges Netflix an extremely high rate to carry NBCUniversal TV shows and movies while also requiring Netflix to pay to be in their “fast lane” of internet service.
Comcast could also conceivably offer their streaming service at a reduced rate, or not count it against monthly bandwidth caps. In 2014, Kate Cox of the Consumerist reported that there were plenty of well-populated regions in the United States where Comcast had no broadband competition. As of last year, around 78% of Americans had a choice of zero or one provider for broadband of 25 Mbps or higher. In regions where Comcast is the only option, they could choose to offer NBC and MSNBC at a reduced rate on the web, but charge higher prices to view CNN or Fox News. If you didn’t like this, you could lodge an FTC complaint; but, as long as your ISP were being transparent about these practices, it wouldn’t be deceptive, and might not even be predatory.
As cable companies increasingly become providers of television, home and business internet, home phone, cellular, and streaming services as well as making and distributing movies, music, and TV shows — including the news — this proposal becomes increasingly toxic. Combine this proposal with other moves Pai’s FCC has made and it’s a recipe for preserving the interests of the biggest businesses and media entities, and reducing competition from upstart and lesser-funded businesses.
You can — and should — hammer the FCC with your complaints, calls, and feedback on this. But be prepared for the long haul, because no matter which way Pai’s proposal goes, there’s a bigger story here. Karl Bode, Techdirt:
Supporters of net neutrality also need to understand that the broadband industry’s assault on net neutrality is a two-phase plan. Phase one is having an unelected bureaucrat like Ajit Pai play bad cop with his vote to dismantle the rules. Phase two will be to gather support for a net neutrality law that professes to be a “long-standing solution to this tiresome debate.” In reality, this law will be written by ISP lobbyists themselves as an attempt to codify federal apathy on this subject into law. These weaker protections will be designed to be so loophole-filled as to effectively be useless, preventing the FCC from revisiting the subject down the road. A solution that isn’t — for a problem they themselves created.
It’s understandable that the public and press is tired of this debate after fifteen years. But instead of hand wringing and apathy, we should be placing the blame for this endless hamster wheel at the feet of those responsible for it: Comcast, AT&T, Verizon and Charter, and the army of lawmakers, economists, fauxcademics, and other hired policy tendrils willing to sell out the health of the internet — and genuinely competitive markets — for a little extra holiday cash. Folks that honestly believe they can lie repeatedly with zero repercussion, and hide a giant middle finger behind the gluten-free stuffing and Aunt Martha’s cardboard-esque pumpkin pie.
Pai is carrying water for ISPs and their paid interests in that damn mug of his which, incidentally, is also big enough to hide the middle finger he’s giving Americans. If you live in the United States, it’s up to you to tell him to put his mug down and start working for your interests, instead of for ISPs and against you. You deserve better.
I’m encouraged to see that “net neutrality” is a trending topic on Twitter, too. It’s something that everyone should be concerned about, regardless of age, political affiliation, or interest in technology. It’s important to spread the word on this to everyone you know: explain what net neutrality is, why it’s so important to preserve Title II rules, and talk about what normal people can do about it. Everyone deserves better than what Pai has proposed.
Apple announced on Friday that the HomePod’s release would be delayed until next year and, of course, that made some people fear the worst. Like ZDNet’s Adrian Kingsley-Hughes, who claims that this delay means that Apple has “[slipped] closer to becoming ‘just another tech company’”:
Apple didn’t go into any details as to why it had to delay the release beyond a very vague “we need a little more time before it’s ready for our customers.” For a product that was demoed on stage back in June at WWDC isn’t ready almost five months later, and won’t be until some “early 2018.”
Incidentally, the HomePod was not demoed on stage at WWDC. It was announced on stage and a few press outlets were given private demos, but those publications were allowed limited access and — something I believe is critical — they weren’t allowed to test Siri.
Apple was in a similar situation last year with the AirPods, although the company did manage to get them out of the door just before Christmas.
Delaying products that would be pretty great gifts is regrettable, but I don’t know that having limited quantities of AirPods available last year registers on the same scale as having the HomePod being entirely unavailable until next year.
Here’s a counterargument to Kingsley-Hughes’ narrative: the iPhone X was released in larger quantities than rumours suggested, and Apple has been bumping up shipping estimates across the board. If you tried ordering one just a week or two ago, you would have seen shipping estimates of 5–6 weeks; now, shipping times are at 2–3 weeks, and I’ve always found that Apple beats those estimates in practice.
As much as I don’t want to bring up the tired old “Apple wouldn’t have done this under Steve Jobs’ watch” trope, a lot of what’s happening at Apple lately is different from what we came to expect under Jobs. Not to say that things didn’t go wrong under his watch, but product announcements and launches felt a lot tighter for sure, as did the overall quality of what Apple was releasing.
Remember the white iPhone 4 debacle? That happened under Jobs’ watch, as did MobileMe and buggy x.0 releases of iOS. They were embarrassing for the company, and I’m sure Apple works hard to not repeat the same mistakes.
I’ll let you in on a little secret: I’ve had an article in draft for a while in which I complain about a lot of bugs in iOS 11. I’ve hesitated to publish it because it just sounded whiny, and it wouldn’t age well. And, it turns out, there was an advantage to my delinquency: in the two months since its release, Apple has issued several small patches to iOS 11 which have dramatically improved its stability and fixed lots of the issues I wrote about.
For what it’s worth, I think iOS 11 was released too soon. I think the artificial September deadline bit Apple in the ass as they tried to wrap up overhauls of major system components — especially Springboard. It’s not just the fault of the iPhone X, either; many of the improvements to iPad multitasking this year required big updates to systemwide processes. But I also think that there’s a welcome commitment to releasing smaller updates more frequently. Of course we all want the x.0 release to be as stable and bug-free as possible, but I’m glad Apple has reduced their tendency to leave bugs — at least on iOS — hanging for months until they pop out an x.1 update.
Don’t mistake my attempt at combatting the Jobs trope for complacency in bugs and hardware release dates. The delay of the HomePod is unfortunate and would have been easily prevented by not announcing it months in advance, even if they thought a December ship date was likely. I hope the iMac Pro isn’t similarly delayed. I’d love to see a bit more of that old Apple magic again where desirable products are available to buy or preorder the same day they’re announced. Yet, I simply don’t think this is cause for the kind of concern that Kingsley-Hughes is raising.
Brooks Barnes and Michael J. de la Merced, New York Times:
Comcast, the cable giant and owner of NBCUniversal, is in preliminary talks to buy entertainment assets owned by 21st Century Fox, including a vast overseas television distribution business, the Fox movie studio, the FX cable network and a group of regional sports channels.
Under the deal being discussed, the Murdoch family, which controls 21st Century Fox, would retain the Fox News cable network, certain sports holdings, a chain of local television stations and the Fox broadcast network.
Disney is also rumoured to be interested in these Fox assets, as is Sony. All of these companies are gigantic media conglomerates, Comcast being the largest in the United States, Disney being the second largest, and 21st Century Fox third.
One thing that’s absolutely critical to understand when considering questions about media ownership and net neutrality is that there are few major media companies that are in single lines of business. Increasingly, these conglomerates are becoming vertically integrated with unprecedented reach: they finance movies and television, distribute and market their programming, some provide the cable and internet services that transmit video to viewers’ computers and televisions, and many own or have major stakes in streaming platforms as well. So as the FCC contemplates dismantling net neutrality regulations, they are helping create a situation in which Comcast could conceivably own and prioritize their media assets from their production to your couch, while restricting competition. Imagine if heyday-era General Motors owned everything from steel mills to parts of the Interstate system, but instead of transportation, it’s information and entertainment.
I maintain that Comcast should never have been allowed to buy NBCUniversal. That kind of cross-market dominance is toxic for competition. A similar mistake should be avoided by blocking their purchase of 21st Century Fox’s entertainment businesses as well.
I did get the translation feature to work, by the way, and it’s just as confusing as everything else about the Pixel Buds. You’d think that you could just tap the right earbud and ask Google to translate what you’re hearing, but it’s more complicated than that. You do have to tap the earbud and ask Google to translate, but then you have to open up the Google Translate app and hold your phone in front of your foreign language-speaking friend. And, of course, your phone must be a Google Pixel or Pixel 2.
The dream is to be able to have a relatively normal conversation with someone whose language you don’t speak, right? That’s clearly not what you get here. That’s a shame, because it’s something Google ought to be able to do very well — or, at least, that’s the promise of a company that mines the world’s data, isn’t it?
In June, Apple announced that it was challenging Amazon’s sleeper hit Amazon Echo with its own voice assistant-enabled speaker, called HomePod, and said the product would be released in December 2017. Today, the company released a statement that the speaker will be delayed until 2018: “We can’t wait for people to experience HomePod, Apple’s breakthrough wireless speaker for the home, but we need a little more time before it’s ready for our customers. We’ll start shipping in the US, UK, and Australia in early 2018.”
I’ve been trying to figure out why the HomePod was announced at WWDC in June at all instead of, say, during Apple’s more product-focused September keynote. My best guess is that it was a way to complete the story of SiriKit in a broader context and encourage adoption.
No word on the iMac Pro, by the way, which is still scheduled to begin shipping in December.
The head of the Federal Communications Commission is set to unveil plans next week for a final vote to reverse a landmark 2015 net neutrality order barring the blocking or slowing of web content, two people briefed on the plans said.
In May, the FCC voted 2-1 to advance Republican FCC Chairman Ajit Pai’s plan to withdraw the former Obama administration’s order reclassifying internet service providers as if they were utilities. Pai now plans to hold a final vote on the proposal at the FCC’s Dec. 14 meeting, the people said, and roll out details of the plans next week.
The FCC is currently in Republican hands; today, they voted to lift regulations that prevent broadcasters and newspapers from common ownership in the same market. According to Shepardson, the FCC also plans to vote in December to lift rules preventing any single media company from owning television stations reaching 39% of households. The cumulative effect of this push to lift sensible regulations will likely be catastrophic for independent media and diverse viewpoints. It fundamentally rots the very idea of a free and independent press, and is ruinous for a healthy democracy.
It’s worth pointing out that rescinding net neutrality regulations is not what Americans want. Jon Brodkin, Ars Technica:
The FCC voted in May to take public comment on a preliminary proposal to overturn the 2015 net neutrality order. With the public comment period now over, Pai is free to push through a final vote.
The public comments were dominated by spam and form letters, but a study funded by ISPs found that 98.5 percent of unique comments were written by people who want the FCC to leave the rules in place.
Statistically, if you’re American, you favour preserving these regulations. Ajit Pai and the other Republican commissioners at the FCC are currently planning to vote against the will and want of an overwhelming majority of Americans. That’s outrageous.
iOS 11.2 is currently in beta, and will be released to all iPhone and iPad users in the coming weeks, and one of the key features for iPhone 8/8 Plus/X owners is accelerated wireless charging. Previously, all wireless charging was limited to 5W, but this update will raise that limit to 7.5W. That’s a 50% increase in power on paper, but I had to know what the real world difference was.
The only place I’m considering using one of these inductive charging pads is on my desk at work, because I still use wired headphones because I can’t find a pair of wireless headphones that I like. But I’m having a hard time justifying the expense for what is effectively a glorified trickle charger, especially since battery life with my iPhone X has been fantastic.
Update: I’ve heard that 7.5W charging is only supported on certain charging bases; as far as I can figure out, that’s limited right now to the Mophie and Belkin ones that are sold through Apple’s online store. Both of those charging bases carry a note like this:
High-speed wireless charging
Leverages Qi wireless technology to deliver safe, quick-charging speeds with up to 7.5W of power.
As Federico Viticci writes, the Qi standard supports up to 15W, so I’m not sure why the third beta of iOS 11.2 unlocks only up to 7.5W, nor do I understand why only specific base stations will apparently support this faster charging rate.
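Setting the spec questions aside, the quoted 50% figure is easy to sanity-check. Here’s a rough sketch in Python; the battery capacity is an assumed teardown figure, not an official spec, and real-world charging tapers near full and wastes some energy as heat, so these are idealized floors:

```python
# Back-of-the-envelope: ideal charge times at 5 W vs. 7.5 W.
# The ~10.35 Wh capacity is a commonly-reported teardown figure
# for the iPhone X; real charging is slower than this suggests.
battery_wh = 10.35

hours_at_5w = battery_wh / 5.0    # ≈ 2.07 hours
hours_at_7_5w = battery_wh / 7.5  # ≈ 1.38 hours

# A 50% increase in power cuts the ideal charge time by a third.
savings = 1 - hours_at_7_5w / hours_at_5w
print(f"5 W: {hours_at_5w:.2f} h, 7.5 W: {hours_at_7_5w:.2f} h")
print(f"time saved: {savings:.0%}")  # → time saved: 33%
```

In other words, even in the best case, the bump from 5 W to 7.5 W shaves a third off the charge time — meaningful, but still well short of a wired fast charger.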
This morning, a few publications ran with a holiday-themed data study about how families that voted for opposite parties spent less time together on Thanksgiving, especially in areas that saw heavy political advertising. It’s an interesting finding about how partisan the country is becoming, and admirably, the study’s authors tried to get data that would be more accurate than self-reporting through surveys. To do this, they tapped a company called SafeGraph that provided them with 17 trillion location markers for 10 million smartphones.
The data wasn’t just staggering in sheer quantity. It also appears to be extremely granular. Researchers “used this data to identify individuals’ home locations, which they defined as the places people were most often located between the hours of 1 and 4 a.m.,” wrote The Washington Post.
SafeGraph was also able to use their data to state that attendees at Donald Trump’s inauguration had lower household incomes than those attending the Women’s March the following day which, regardless of whether you believe it, is a deeply creepy claim.
I have no idea which apps share my data with SafeGraph because I grant so many apps approval to share collected information with third parties, with no mention of what those third parties may be. I don’t like that I have seemingly no control over this; blanket approval statements are pretty standard in privacy policies on websites and in apps, and they need to be stopped. I did not explicitly give permission for my data to be shared with a creepy location tracking company, and it’s completely unfair to assume that it’s okay.
For what it’s worth, iOS should also request explicit permission to enable ad tracking. It is presently allowed by default — at least in Canada — and users must opt out in Settings.
Stevan Dojcinovic, in an op-ed for the New York Times, reacting to the fallout from Facebook’s announcement last month that they would move unpaid news stories from pages into a separate News Feed in some countries:
It wasn’t just in Serbia that Facebook decided to try this experiment with keeping pages off the News Feed. Other small countries that seldom appear in Western headlines — Guatemala, Slovakia, Bolivia and Cambodia — were also chosen by Facebook for the trial.
Some tech sites have reported that this feature might eventually be rolled out to Facebook users in the rest of the world, too. But of course no one really has any way of knowing what the social media company is up to. And we don’t have any way to hold it accountable, either, aside from calling it out publicly. Maybe that’s why it has chosen to experiment with this new feature in small countries far removed from the concerns of most Americans.
But for us, changes like this can be disastrous. Attracting viewers to a story relies, above all, on making the process as simple as possible. Even one extra click can make a world of difference. This is an existential threat, not only to my organization and others like it but also to the ability of citizens in all of the countries subject to Facebook’s experimentation to discover the truth about their societies and their leaders.
It’s pretty astonishing that an experiment like this would be announced around the same time that Facebook is being questioned about the possible role that misleading targeted ads may have played in the 2016 U.S. Presidential election. There’s no indication yet just how influential these ads were on specific voters or the election itself, but if they had even a slight sway in a developed democracy like that in the U.S., just imagine how influential highly-targeted ads may be in newer and, usually, weaker democracies. Facebook’s careless U.S.-centric attitude is frightening from this non-American’s perspective.
A small quibble with Dojcinovic’s piece: its headline is “Hey, Mark Zuckerberg: My Democracy Isn’t Your Laboratory”, and he refers to “Mark Zuckerberg’s arbitrary experiments”. I think ascribing the actions of a company to its notable figureheads is unproductive as I feel that it reduces a concerning issue of egregious corporate influence and accountability to a personal spat.
I’ve been using my iPhone X for nearly a week now and, while I have some thoughts about it, by no means am I interested in writing a full review. There seem to be more reviews of the iPhone X on the web than actual iPhone X models sold. Instead, here are some general observations about the features and functionality that I think are noteworthy.
The iPhone X is a product that feels like it shouldn’t really exist — at least, not in consumers’ hands. I know that there are millions of them in existence now, but mine feels like an incredibly well-made, one-off prototype, as I’m sure all of them do individually. It’s not just that the display feels futuristic — I’ll get to that in a bit — nor is it the speed of using it, or Face ID, or anything else that you might expect. It is all of those things, combined with how nice this product is.
I’ve written before that the magic of Apple’s products and their suppliers’ efforts is that they are mass-producing niceness at an unprecedented scale. This is something they’ve become better at with every single product they ship, and nothing demonstrates that progress better than the iPhone X.
It’s such a shame, then, that the out-of-warranty repair costs are astonishingly high, to the point where not buying AppleCare+ and a case seems downright irresponsible. Using the iPhone X without a case is a supreme experience, but I don’t trust myself enough to do so. And that’s a real pity, because it’s one of those rare mass-produced items that feels truly special.
This is the first iPhone to include an OLED display. It’s made by Samsung and uses a diamond subpixel arrangement, but Apple says that it’s entirely custom-designed. Samsung’s display division is being treated here like their chip foundry was for making Apple’s Ax SoCs.
And it’s one hell of a display. It’s running at a true @3x resolution of 458 pixels per inch. During normal use, I can’t tell much of a difference between it and the 326 pixel-per-inch iPhone 6S that I upgraded from. But when I’m looking at smaller or denser text — in the status bar, for example, or in a long document — this iPhone’s display looks nothing less than perfect.
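That 458 pixels-per-inch figure follows directly from the panel’s resolution and diagonal. A quick sketch: 2436×1125 is Apple’s published resolution, while the 5.85-inch diagonal is my assumption, since the marketed 5.8 inches is rounded:

```python
import math

# Pixel density from resolution and physical diagonal size.
# 2436x1125 is the iPhone X's published resolution; 5.85 inches
# is an assumed (unrounded) diagonal, as "5.8-inch" is marketing.
def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    return math.hypot(width_px, height_px) / diagonal_in

print(round(ppi(2436, 1125, 5.85)))  # ≈ 459, within rounding of Apple's 458
```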
One of the reasons this display looks so good is because of Apple’s “True Tone” feature, which matches the white balance of the display to the environment. In a lot of indoor lighting conditions, that’s likely to mean that the display is yellower than you’re probably used to. Unlike Night Shift, though, which I dislike for being too heavy-handed, True Tone is much subtler. Combine all of this — the brightness of the display, its pixel density, its nearly edge-to-edge size, and True Tone — with many of iOS’ near-white interface components and it really is like a live sheet of paper in your hand.
Because it’s an OLED display that has the capability of switching on and off individual pixels, it’s only normal to consider using battery-saving techniques like choosing a black wallpaper or using Smart Invert Colours. I think this is nonsense. You probably will get better battery life by doing both of those things, but I’ve been using my iPhone X exactly the same as I have every previous phone I’ve owned and it gets terrific battery life. Unless you’re absolutely paranoid about your battery, I see no reason in day-to-day use to treat the iPhone X differently than you would any other phone.
I’m a total sucker for smaller devices. I’d love to see what an iPhone SE-sized device with an X-style display would be like.
Face ID is, for my money, one of the best things Apple has done in years. It has worked nearly flawlessly for me, and I say that with no exaggeration or hyperbole. Compared to Touch ID, it almost always requires less effort and is of similar perceptual speed. This is particularly true for login forms on the web: where previously I’d see the Touch ID prompt and have to shuffle my thumb down to the home button, I now just continue staring at the screen and my username and password are just there.
I’m going to great pains to avoid the most obvious and clichéd expression for a feature like this, but it’s apt here: it feels like magic.
The only time Face ID seems to have trouble recognizing me is when I wake up, before I’ve put on my glasses. It could be because my eyes are still squinty at the time and it can’t detect that I’m looking at the screen, or maybe it’s just because I look like a deranged animal first thing in the morning. Note, though, that it has no trouble recognizing me without my glasses at any other time; however, I first set up Face ID while wearing my glasses and that’s almost always how I use it to unlock my phone. That’s how it recognizes me most accurately.
Last week, I wrote that I found that there was virtually no learning curve for me to feel comfortable using the home indicator, and I completely stand by that. If you’ve used an iPad running iOS 11, you’re probably going to feel right at home on an iPhone X. My favourite trick with the home indicator is that you can swipe left and right across it to slide between recently-used apps.
Arguably, the additional space offered by the taller display is not being radically reconsidered, since nearly everything is simply taller than it used to be. But this happens to work well for me because nearly everything I do on my iPhone is made better with a taller screen: reading, scrolling through Twitter or Instagram, or writing something.
The typing experience is, surprisingly, greatly improved through a simple change. The keyboard on an iPhone X is in a very similar place to where it is on a 4.7-inch iPhone, which means that there’s about half an inch of space below it. Apple has chosen to move the keyboard switching button and dictation control into that empty space from beside the spacebar, and this simple change has noticeably improved my typing accuracy.
In a welcome surprise, nearly all of the third-party apps I use on a regular basis were quickly updated to support the iPhone X’s display. The sole holdouts are Weather Line, NY Times, and Spotify.
I have two complaints with how the user interfaces in iOS work on the iPhone X. The first is that the system still seems like it is adapting its conventions to fit bigger displays. Yes, you can usually swipe right from the lefthand edge of the display to go back to a previous screen, but toolbars are still typically placed at the top and bottom of the screen. With a taller display, that means that there can be a little more shuffling of the device in your hand to hit buttons on opposite sides of the screen.
My other complaint is just how out of place Control Centre feels. Notification Centre retains its sheet-like appearance if it’s invoked from the left “ear” of the display, but Control Centre opens as a sort of panelled overlay with the status bar in the middle of the screen when it is invoked from the right “ear”. The lack of consistency between the two Centres doesn’t make sense to me, nor does the awkward splitting of functionality between the two upper corners of the phone. It’s almost as though it was an adjustment made late in the development cycle.
Update: One more weird Control Centre behaviour is that it displays the status bar but in a different layout than the system usually does. The status bar systemwide shows the time and location indicator on the left, and the cellular signal, WiFi indicator, and battery level on the right. The status bar within Control Centre is, left to right: cellular signal, carrier, WiFi indicator, various status icons for alarm and rotation lock, location services indicator, Bluetooth status, battery percentage, and battery icon. The location indicator, cellular strength, and WiFi signal all switch sides; I think they should stay consistent.
I don’t know what the ideal solution is for the iPhone X. Control Centre on the iPad is a part of the multitasking app switcher, and that seems like a reasonable way to display it on the iPhone, too. I’m curious as to why that wasn’t shipped.
Cameras and Animoji
This is the first dual-camera iPhone I’ve owned so, not only do I get to take advantage of technological progress in hardware, I also get to use features like Portrait Mode on a regular basis. Portrait Mode is very fun, and does a pretty alright job in many environments of separating a subject from its background. Portrait Lighting, new in the iPhone 8 and iPhone X, takes this one step further and tries to replicate different lighting conditions on the subject. I found this to be much less reliable, with the two spotlight-style “stage lighting” modes being inconsistent in their subject detection abilities.
The two cameras in this phone are both excellent, and the sensor captures remarkable amounts of data, especially if you’re shooting RAW. Noise is well-controlled for such a small sensor and, in some lighting conditions, even has a somewhat filmic quality.
I really like having the secondary lens. Calling it a “telephoto” lens is, I think, a stretch, but its focal length creates some nice framing options. I used it to take a photo of my new shoes without having to get too close to the mirror in a department store.
Animoji are absurdly fun. The face tracking feels perfect — it’s better than motion capture work in some feature films I’ve seen. I’ve used Animoji more often as stickers than as video messages, and it’s almost like being able to create your own emoji that, more or less, reflects your actual face. I only have two reservations about Animoji: they’re only available as an iMessage app, and I worry that they won’t be updated regularly. The latter is something I think Apple needs to get way better at; imagine how cool it would be if new iMessage bubble effects were pushed to devices remotely every week or two, for example. It’s the same thing for Animoji: the available options are cute and wonderful, but when Snapchat and Instagram are pushing new effects constantly, it isn’t viable to have no updates by, say, this time next year.
I mentioned above that I bought AppleCare+ for this iPhone. It’s the first time I’ve ever purchased AppleCare on a phone, and only the second time I’ve purchased it for any Apple product — the first was my MacBook Air because AppleCare also covered the Thunderbolt Display purchased around the same time. This time, it was not a good buying experience.
I started by opening up the Apple Store app, which quoted $249 for AppleCare+ for the iPhone X. I tapped on the “Buy Now” button in the app but received an error:
Some products in your bag require another product to be purchased. The required product was not found so the other products were removed.
As far as I can figure out, this means that I need to buy an iPhone X at the same time, which doesn’t make any sense as the Store page explicitly says that AppleCare+ can be bought within sixty days.
I somehow wound up on the check coverage page where I would actually be able to buy extended coverage. After entering my serial number and fumbling with the CAPTCHA, I clicked the link to buy AppleCare. At that point, I was quoted $299 — $50 more than the store listing. I couldn’t find any explanation for this discrepancy, so I phoned Apple’s customer service line. The representative told me that the $249 price was just an estimate, and the $299 price was the actual quote for my device, which seems absurd — there’s simply no mention that the advertised price is anything other than the absolute price for AppleCare coverage. I went ahead with my purchase, filling in all my information before arriving at a final confirmation page where the price had returned to $249, and that was what I was ultimately charged.
It’s not the $50 that troubles me in this circumstance, but the fact that there was a difference in pricing at all between pages on Apple’s website. I don’t know why I was ever shown a $299 price, nor do I understand why I’m unable to use the Apple Store app to purchase AppleCare+ for my iPhone X using my iPhone X.
The world has lots of very stupid ideas in it. One of them, one of the most harmful, is the prevailing idea of what it means for one thing to be technologically superior to another. Only a culture sunken to a really frightening and apocalyptic level of libertarian stupidity would regard the Keurig machine — a sophisticated, automated robot designed specifically and only to brew a single serving of coffee, rather than a big efficient pot of it; which presents only illusory ease and convenience only to whoever is using it at the moment of his or her use and to no one else, and only via fragile technologized mediations it wears atop its primary function like an anvil, or a bomb collar; which can be rendered literally unusable by the breakdown of needless components completely ancillary to that primary function — as a technological improvement upon the drip coffeemaker, or the French press, or putting some coffee grounds in a fucking saucepan with some water and holding it over a campfire for a little while until the water smells good. It is not technologically superior to any of those! It is vastly technologically inferior to all of them. It is a wasteful piece of trash. It is not a machine engineered to improve anything or to resolve a problem, but only and entirely the pretext for a sales pitch, a means to separate someone from their money.
Two things that Burneko does not cover in his otherwise comprehensive explanation of a Keurig machine’s failings: dosage and price per pound. Let’s start with dosage.
A K-Cup pod contains somewhere between 9 and 13 grams of coffee grounds. The coffee I make is a bit stronger than most people’s, but it’s nowhere near knock-your-head-off territory; even so, I use about 20–22 grams of beans per cup in my AeroPress and follow a method similar to Kaye Joy Ong’s. But even if you like your coffee a little closer to average, you have to fall a long way to get down to nine measly grams of beans. That and a Keurig’s low brewing temperature go a long way towards explaining why every cup of Keurig coffee I’ve ever had tastes like laundry water.
And then there’s the price of all of this — up to $50 per pound. There is almost nowhere on Earth you can’t get better coffee shipped to your door for less than $50 per pound. The Keurig is an utterly absurd way to brew expensive instant coffee not very well.
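The arithmetic behind that figure is worth spelling out. Here’s a minimal sketch; the pod prices and per-pod grounds weights below are illustrative assumptions, not measured values:

```python
# Back-of-the-envelope cost of K-Cup coffee per pound of grounds.
# Pod prices and weights here are assumptions for illustration.

GRAMS_PER_POUND = 453.6

def price_per_pound(pod_price_usd, grams_per_pod):
    """Effective price of the coffee grounds themselves, per pound."""
    pods_per_pound = GRAMS_PER_POUND / grams_per_pod
    return pod_price_usd * pods_per_pound

# A fairly cheap $0.70 pod holding 10 g of grounds:
print(round(price_per_pound(0.70, 10), 2))  # 31.75

# A $1.00 pod at the low end of the fill range, 9 g:
print(round(price_per_pound(1.00, 9), 1))   # 50.4
```

Even the cheap pod works out to several times the per-pound price of good whole-bean coffee, and the pricier one lands right around that $50-per-pound ceiling.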
Update: It turns out that some fans of Sean Hannity are destroying their Keurig machines in a bizarre protest that they think offends liberals. This post has absolutely nothing to do with that. For extra credit, reflect on how absurd this update truly is.
Pictures and text often pair nicely together. You have an article about a thing, and the picture illustrates that thing, which in many cases helps you understand the thing better. But on the web, this logic no longer holds, because at some point it was decided that all texts demand a picture. It may be of a tangentially related celeb. It may be a stock photo of a person making a face. It may be a Sony logo, which is just the word SONY. I have been thinking about this for a long time and I think it is stupid. I understand that images —> clicks is industry gospel, but it seems like many publishers have forgotten their sense of pride. If a picture is worth a thousand words, it’s hard for me to imagine there’ll be much value in the text of an article illustrated by a generic stock image.
The Outline is, of course, also a contributor to this trend. A photo of Mark Zuckerberg leads this story about Facebook’s dumb-as-bricks idea to combat revenge porn — which, incidentally, is almost exactly one of O’Haver’s examples. A great article about Twitter’s inconsistent character limit for those using accessibility features is illustrated, for some reason, by an old-timey photo of a man using a Monotype keyboard.
At some point in the past several years, the millions of different possibilities of turning individual pixels into a website coalesced around a singularly recognizable and repeatable form: logo and menu, massive image, and page text distractingly split across columns or separated by even more images, subscription forms, or prompts to read more articles. The web has rapidly become a wholly unpleasant place to read. It isn’t the fault of any singular website, but a sort of collective failing to prioritize readers.
I don’t know about you, but I’ve become numb to the web’s noise. I know that I need to wait for every article I read to load fully before I click anywhere, lest anything move around as ads are pulled in through very slow scripts from ten different networks. I know that I need to wait a few seconds to cancel the autoplaying video at the top of the page, and a few more seconds to close the request for me to enter my email and receive spam. And I know that I’ll need to scroll down past that gigantic header image to read anything, especially on my phone, where that image probably cost me more to download than anything else on the page.
These photos add nothing but hundreds of kilobytes to the story. They can easily be replaced with pictures of William Howard Taft with little consequence. It’s just another reason why full-text RSS feeds continue to be one of the best ways to read a website’s articles.
Earlier this week, I noted on Twitter that I thought that one of Apple’s biggest misses when they released the iPhone 4 was not including a version of Photo Booth. Photo Booth was a huge deal for the Mac when it was included with new Macs that had the built-in iSight camera. Imagine if Apple had released a version of it for the iPhone at any point in the past six years and updated its built-in filters weekly. I think it would have been extremely popular.
Well, they’ve kind of done that with the second version of Clips, their quick little video editing app. I wasn’t enthralled with it when it was first released and, as far as I could tell, neither were most people.
But this new version is exciting. Apple has completely redesigned the app so it’s way easier to use, and they’ve added a new Scenes feature to allow you to virtually change your environment. Fans of Photo Booth might remember Backdrops; Scenes is like that, only far more reliable — I bet it uses ARKit — and with way cooler effects. You can place yourself into a futuristic metropolis, outer space, or even into Star Wars locations.
Clips 2.0 is still too complicated to feel as lightweight and fun as Photo Booth; Snapchat and Instagram — and, to an extent, Animojis — have that market cornered. I’d like to see Clips receive more frequent updates, but there’s something good here that’s absolutely worth checking out if you haven’t tried Clips recently.
You’ve read Steven Levy’s tour of Apple Park, and you’ve read Christina Passariello’s for the Wall Street Journal. But Apple is still putting the finishing touches on the building so they invited Nick Compton of Wallpaper to take a look as well. There is, of course, fantastic photography by Mark Mahaney in this article, but I think this bit — about the iPhone X — is profound:
The most advanced iteration of the iPhone, the X, launched with great hoopla at the keynote address, is all screen. Except that’s the wrong way to look at it. The point is that, at least in the way we use it and understand it, it is entirely unfixed and fluid.
I wonder, then, if Ive misses the physical click and scroll of the first iPods, that fixed mono-functionality, the obvious working parts, the elegance of the design solution. But I’ve got him all wrong. ‘I’ve always been fascinated by these products that are more general purpose. What I think is remarkable about the iPhone X is that its functionality is so determined by software. And because of the fluid nature of software, this product is going to change and evolve. In 12 months’ time, this object will be able to do things that it can’t now. I think that is extraordinary. I think we will look back on it and see it as a very significant point in terms of the products we have been developing.
‘So while I’m completely seduced by the coherence and simplicity and how easy it is to comprehend something like the first iPod, I am quite honestly more fascinated and intrigued by an object that changes its function profoundly and evolves. That is rare. That didn’t happen 50 years ago.’
The pitch of the first iPhone was that the fixed plastic keyboards of the BlackBerry, et al., were unchangeable buttons that were there whether you needed them or not. All of that was replaced with an onscreen keyboard, when needed, and a singular “home” button. But, when viewed in the light of only displaying what is necessary, it is striking how — in just ten years — the home button has been reduced to the same level as those plastic keyboards: a fixed button that is there no matter whether it is needed. Nearly the entire user-facing surface of the iPhone X is now as flexible as the bezel-surrounded 3.5-inch display of that original iPhone.
For several years now, the trend among geeks has been to abandon the RSS format.
Has it, though? Sparks doesn’t cite anything to back this up. I’ve seen the occasional tech writer indicate that links surfaced through Twitter now rival, to a certain extent, those found in their RSS subscriptions, and others who see Twitter as increasingly replacing their RSS diet. But to call it a “trend” is, I think, an exaggeration.
I love this argument that Sparks makes, though:
That was never me. The reason I’ve stuck with RSS is the way in which I work. Twitter is the social network that I participate in most and yet sometimes days go by where I don’t load the application. I like to work in focused bursts. If I’m deep into writing a book or a legal client project, I basically ignore everything else. I close my mail application, tell my phone service to take my calls, and I definitely don’t open Twitter. When I finish the job, I can then go back to the Internet. I’ll check in on Twitter, but I won’t be able to get my news from it. That only works if you go into Twitter much more frequently than I do. That’s why RSS is such a great solution for me. If a few days go by, I can open RSS and go through my carefully curated list of websites and get caught back up with the world.
I can’t remember who, but someone once gave me the best tip I’ve ever received for using RSS: subscribe to your must-read websites, and to those websites you like but that aren’t updated frequently. It prevents your reader from quickly becoming overwhelming.
Truly, though, this isn’t a case for RSS so much as it is a case for a simple, easy-to-use way to receive updates from the websites you trust and like most. You could theoretically replace “RSS” with “JSON Feed” or “Twitter lists” — whatever works best for you. For news junkies like me, though, there will always be a case for dedicated feeds, without the interruption of non-news tweets or Facebook posts. RSS just happens to be one of the simplest implementations of that.
Most people, if they know Lamarr at all, remember her as an exotic beauty who starred in such movies as “Algiers” (1938) with Charles Boyer, and “Come Live with Me” (1941) opposite James Stewart. But behind those lips and those eyes was the brain of an untrained scientist who, after a long day on the MGM lot, would come home and invent things for pleasure. As one of many screen beauties who dated the eccentric aviator Howard Hughes, Lamarr devised rounded (rather than squared-off) wings for a super-fast plane Hughes was designing. Hughes was so impressed that he set Lamarr up with a mini-laboratory in her house.
Today would have been Lamarr’s 103rd birthday. A film about her life and legacy — “Bombshell” — is being screened at the Boston Jewish Film Festival running now, and will be released in select theatres November 24.
Equifax has quadrupled spending on security, updated its security tools and changed its corporate structure since the breach, Paulino do Rego Barros Jr., the interim chief, said during a hearing by the Senate Commerce Committee.
But Mr. Barros stumbled when asked by Sen. Cory Gardner (R., Colo) whether Equifax was now encrypting the consumer data it stored on its computers — a basic step in hiding sensitive information from hackers, and one the company previously had admitted it didn’t take before the breach.
“I don’t know at this stage,” Mr. Barros said.
Before this catastrophic breach, your passcode-protected iPhone was more hardened against physical data access than every American’s credit information. Now, who knows? It may still be better-protected.
This is irresponsible to the point of negligence. I sincerely hope criminal charges are brought against Equifax for the results of their indifference towards basic security practices; if no criminal charges apply, it ought to trigger a process to ensure that new laws get written to hold companies accountable for inadequate protection of customer data.
In other news, Equifax reported their quarterly earnings today. Stephen Gandel of Bloomberg:
Equifax’s ability to increase its operating earnings during one of the most disastrous quarters, at least operationally and reputationally, in its history, or the history of most companies, really, attests to how entrenched the business is in the financial system. That will most likely add to the frustration of consumers and their advocates.
All that is probably why Equifax’s stock, which plunged initially after the hack, has rebounded some and been fairly steady. Shares closed at just less than $109 on Thursday before the company announced its results. That’s down from the $143 they were trading at before the hack, but up from the $94 they sank to two days after the hack was disclosed. The stock is amazingly down only 8 percent this year. What’s more, it has a price-to-earnings ratio of 18 times next year’s earnings. That’s not a P/E ratio of a company in jeopardy but one that investors think is highly valued and growing. By comparison, Apple Inc. has a similar P/E of 15.
That’s a hell of an “if” to predicate this entire article on. I did not want to have to deal with two Diaz articles today — one is often enough — but, luckily, the Macalope dismantled that “if”.
So, now that all of the air has been taken out of Diaz’s argument, what is his argument?
You’re looking at a UX disaster, the result of eliminating what is probably the simplest, most intuitive form of navigation ever implemented in consumer electronics: the iPhone’s home button. The iPhone X replaces it with the mess above. This is bad news, because this interaction is a fundamental part of the user experience.
The home button was and is, indeed, a brilliant piece of user interface design. But don’t pretend that it’s completely simple and intuitive; pressing the home button is used to show the multitasking app switcher, access Siri, dismiss Notification Centre and Control Centre, take screenshots, activate accessibility features, invoke Reachability, and more. Oh, and it’s also used to return to the home screen. Lots of functionality has been packed into that little button.
Joanna Stern’s review for the Wall Street Journal – which still concludes that, “Yes, There Are Reasons to Pay Apple $1,000” – documents what this means in detail: “[T]he lack of a home button means your thumb is about to turn into one of those inflatable waving tube-men outside the car dealership […] you must master a list of thumb wiggles, waves and swipes […] the other gestures, however, are buried. Many moves require almost surgical precision.” Heather Kelly, for CNN Money, adds her own experience: “To fill the void left by the Home button, the iPhone X has added new gestures (the different swipes you make with a finger). The process of learning them is a pain, and some of the new options are more work than before.” The Verge declared that “there’s a whole new system of gestures and swipes to learn and master, and many of them will be annoying to remember and difficult to perform with just one hand.”
Diaz doesn’t link to any of these articles, and for good reason: it’s a rubbish argument. Joanna Stern praises the home button swipe in her piece, and the entirety of her criticisms is quoted by Diaz. She doesn’t make a big deal out of it, likely because her review was published just a day after she received her review unit. Heather Kelly was more muted in her first impressions than many reviewers, but she “[doesn’t] doubt anyone’s ability to master a few new finger movements”.
If you want to switch apps, you either swipe along the bottom of the screen or swipe up and hold — you’ll get a little haptic bump and the app switcher will show up. It took a minute to figure out how to do that move consistently. It took me a little longer to figure out how to consistently use Reachability.
I got my iPhone X last night. The idea that there’s some sort of steep learning curve to this thing is, I think, preposterous. Yeah, there are some decade-old habits I have to break, like when I moved an app around on my home screen this morning and tried pressing on a non-existent home button instead of tapping the “done” button in the upper-right. But the home indicator strip feels completely natural. It’s a testament to the speed and responsiveness of the device and its UI that these gestures feel as smooth and predictable as pinch-to-zoom did on the first iPhone.
Do you have to learn some new stuff? Sure. Will it take a little bit to get accustomed to the device? Absolutely. Is it a “nightmare”, as Diaz frames it in this article’s headline? Hardly.
Back to Diaz:
We knew this was coming, but the reviews and the sudden spike in “how to navigate your iPhone X” tutorials puts a new spotlight on the interaction problems that the elimination of the home button created.
No, it puts a spotlight on content farms that really want to cash in on some sweet Google rankings. There’s a brief three-screen guide when you first set up an iPhone X that demonstrates how to use the home indicator. Once you get used to it, it feels completely natural, particularly if you’ve used an iPad running iOS 11.
Diaz spends another few hundred words quoting writers who made their explanations of other iOS gestures overly complicated, quoting Steve Jobs — hey, remember when people who generally liked using Apple products were Steve Jobs “fanboys”? Times sure have changed — and looking through rose-tinted glasses at the history of the iPhone.
I can’t make Diaz change his mind, no matter how ridiculous his arguments. He thinks iOS 11 “sucks” because UI elements in a few apps are misaligned, that the iPhone X is an egregious excess, and that the replacement of the home button with a handful of gestures makes the device a failure. This is the molehill he wants to die on.
My favourite thing about the release of a well-received Apple product is that there’s a great new product on the market — ideally, they’ve set a new benchmark. My second favourite thing is all the piss-poor takes from the usual suspects, like John C. Dvorak writing in PC Magazine:
The first round of iPhone X reviews are out, and a number of them came from a strange place: amateur YouTubers.
As of November 1, when this piece was published, Apple’s new PR strategy had already been picked apart and scrutinized in excellent pieces from Christina Bonnington and Matt Alexander, among many others. It’s already played out. What can Dvorak possibly contribute? Well, after several paragraphs about how YouTube is new and hip with the youth, he arrives at:
Perhaps Cupertino senses that iPhone X may end up like Microsoft Vista: unfairly criticized.
Windows Vista was too long in the making, removed a litany of features, was too slow on most hardware, was a bloated mix of new ideas and legacy code, and didn’t have nearly enough of the innovative features that were announced years before it was launched. There are forged paintings with a greater attention to historical accuracy than Dvorak demonstrates by calling criticisms of Vista “unfair”.
Chief on my list of complaints is the death of what my son calls The Magic Circle.
Get your crystals and divining rods ready.
The Magic Circle has been around since Steve Jobs introduced the original iPod. On the iPhone, it took the form of the home button, but rounded edges and circles are a favorite design element for Apple; from selecting favorite artists and genres inside Apple Music to that massive spaceship campus.
This is just silly. The primary design and user interaction element of the iPhone was its touch screen. Yes, the home button was important, but the screen was clearly more important for the way that the device is actually used. Don’t believe me? Ask yourself whether you’d rather have an iPhone without a home button, or an iPhone without a multitouch display. There’s a good reason why Apple went with the former option.
The iPhone X is full of rounded edges; it just has one fewer circle on its face.
But it does not exist on the iPhone X. Not even a boot-up screen with ever-expanding circles. So if the iPhone X fails, can we blame the missing Magic Circle? Well, maybe not. A more likely culprit will be that $1,000 price tag.
If I wanted to stretch, I’d point out that the Face ID setup screens use circles extensively, as does its animation. But Dvorak changes tack in the second and third sentences here — apparently, circles are no longer all that important to the iPhone’s success or potential failure. It’s the price, dammit. But, while it is certainly higher than many smartphones, Apple doesn’t seem to think that it will be a problem. They’re forecasting an $84–87 billion October–December quarter, compared to $78 billion for the same period in 2016. Financial results aren’t inherently indicative of a product’s quality, but Apple isn’t forecasting a failure. This isn’t Apple’s Vista.
This week, multiple outlets reported on a Facebook pilot scheme that aims to combat revenge porn. In the program, users would send a message to themselves containing their nude images; Facebook would then create a fingerprint of each image and use it to block others from uploading similar or identical pictures.
The approach has many similarities with how Silicon Valley companies tackle child abuse material, but with a key difference—there is no already-established database of non-consensual pornography.
According to a Facebook spokesperson, Facebook workers will have to review full, uncensored versions of nude images first, volunteered by the user, to determine if malicious posts by other users qualify as revenge porn.
Now, you could make a reasonable argument that Facebook should err on the side of assuming that all images that are similar to pornographic images should be hidden from public view when they’re reported as revenge porn. I would make that argument, too. But it seems like Facebook has abdicated the responsibility of monitoring their platform for these abuses for a long time, and they’re having a hard time catching up.
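For what it’s worth, the fingerprinting scheme described in the quoted report is typically built on perceptual hashing. Here’s a toy sketch of the idea — not Facebook’s actual system, which is far more robust — using a simple average hash, where the 8×8 grayscale grids stand in for downscaled thumbnails of images:

```python
# A toy perceptual "fingerprint": reduce an image to a tiny bit
# string, then compare fingerprints by Hamming distance. Small
# distances suggest the same image, possibly re-encoded or resized.

def average_hash(pixels):
    """pixels: flat list of 64 grayscale values (an 8x8 thumbnail)."""
    avg = sum(pixels) / len(pixels)
    # One bit per pixel: brighter than the average, or not.
    return [1 if p > avg else 0 for p in pixels]

def hamming(h1, h2):
    """Number of differing bits between two fingerprints."""
    return sum(a != b for a, b in zip(h1, h2))

original  = [10] * 32 + [200] * 32   # dark top half, bright bottom
rescaled  = [12] * 32 + [190] * 32   # same image, slightly re-encoded
unrelated = [10, 200] * 32           # a different image entirely

# The re-encoded copy fingerprints identically; the unrelated one
# is far away, so only near-duplicates would be blocked.
print(hamming(average_hash(original), average_hash(rescaled)))   # 0
print(hamming(average_hash(original), average_hash(unrelated)))  # 32
```

The key property is that the fingerprint, not the photo itself, is what gets stored and compared — which is also why a reviewer still has to see the original image at least once to classify it.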
Ultimately, it comes down to whether users can trust Facebook, and a recent survey conducted by Reticle Research and the Verge indicates that Americans simply don’t. Oh, and one more thing:
Zuck: They “trust me”
Zuck: Dumb fucks.
That transcript from over ten years ago will never fail to bite Mark Zuckerberg in the ass.
Marissa Mayer, who led Yahoo until she left earlier this year with a $260 million payout after the web giant was bought by Verizon, wasn’t able to tell senators how hackers were able to steal the company’s entire store of three billion user accounts during a breach in 2013.
Richard Smith, meanwhile, who retired earlier this year after the catastrophic data breach at credit agency Equifax, which affected more than 145 million Americans, couldn’t tell senators who was behind the attack.
I understand that these investigations take time, and that the people involved in these kinds of attacks try to cover their steps as best they can. What I don’t understand is how, even with prior knowledge, both Yahoo[1] and Equifax[2] failed to take appropriate and responsible measures. We’re allowed to click the “Install Later” button beside system updates all we want, with very few consequences; a major corporation handling unfathomable amounts of data cannot take that risk. So why did they?
[1] Yahoo experienced several security breaches prior to the 2013 one that affected three billion accounts, and several after that as well.
The glasses company is cleverly using the iPhone’s camera to take maps of people’s faces, and use that data to recommend styles of glasses that will best fit your face. It’s a step beyond the digital try on system the company has previously offered, where it would try to place a virtual pair of glasses on a picture to let you see how it looks.
I’ve always liked the styles Warby Parker has offered and I’ve been very pleased with the glasses I’ve ordered from them. But the purchasing process where I live is nowhere near as great as it is in the United States: their home try-on kit isn’t available here, and the only retail stores in Canada are both in Toronto.
This is an interesting first step, but I can’t wait to see if Warby Parker can really commit to augmented reality and offer a truly fantastic virtual try-on experience.
[…] After reading Alex Ross’s article about John Eliot Gardiner and Monteverdi, I went to Apple Music to listen to one of his recordings. The problem is that his ensembles are called The English Baroque Soloists and The Monteverdi Choir. So the number of results that come up when searching for “Gardiner Monteverdi” is stultifying. (Yes, Sir John has recorded a lot of albums.)
Sure, there are two Monteverdi albums in that list, but there is a lot more Bach. To make things worse, this search only returns 21 albums, whereas clicking on the name of the artist on one of these album pages – English Baroque Soloists, John Eliot Gardiner, & The Monteverdi Choir – returns nearly 100 albums. But none of these searches return all the recordings that he made with this ensemble.
Apple’s search engines in Music and Photos aren’t terrible, but they need some work to feel capable and powerful. As an example, if you begin searching for, say, Queens of the Stone Age and tap the suggestion Queens of the Stone Age in Artists, there’s only one result — Queens of the Stone Age. But you have to tap that result to get to their artist page, and that feels slow and cumbersome. If there’s only one result and it’s an exact match, it should just go to the artist page.
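The shortcut I’m describing is simple to express. A hypothetical sketch — none of these names are Apple’s actual APIs:

```python
# Sketch of the suggested search behaviour: if a query in the
# Artists scope yields exactly one result and it matches the query
# exactly, skip the intermediate results list and go straight to
# the artist page. All names here are made up for illustration.

def resolve_artist_search(query, artists):
    matches = [a for a in artists if query.lower() in a.lower()]
    if len(matches) == 1 and matches[0].lower() == query.lower():
        return ("artist_page", matches[0])   # jump straight there
    return ("results_list", matches)         # otherwise, show a list

artists = ["Queens of the Stone Age", "Queen", "Queen Latifah"]

print(resolve_artist_search("Queens of the Stone Age", artists))
# → ('artist_page', 'Queens of the Stone Age')

print(resolve_artist_search("Queen", artists)[0])
# → results_list  (ambiguous: three artists match)
```

The ambiguous case still shows a list, so nothing is lost; the unambiguous exact match just saves a redundant tap.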
I also find Apple’s search functionality rather limited. In Photos, for instance, you can search by date, location, keyword, person, or even different objects automatically identified in the photos. But you cannot search by camera model or lens. I get that most people probably wouldn’t use this but, as a digital camera’s make and model is part of every file’s metadata, it almost seems like the kind of thing that requires more effort to omit from Photos’ search engine.
Users of the latest iOS 11.2 beta release received a surprise today in their Messages app picker: the long-awaited Apple Pay iMessage app has now arrived.
Only in the United States, at the moment.
Most of the details of this feature were announced at WWDC, but Christoffel shares additional notes, including all the different access points for peer-to-peer Apple Pay:
While opening the iMessage app to initiate all payments and requests may be the idealized workflow, Apple has included several alternative methods for starting a transaction. You can use Siri to send or request money by voice, using simple commands like ‘Send John $10’ or ‘Ask Federico to send me $10.’ Within the Contacts app, there’s now a Pay button alongside other contact options, which takes you into Messages and opens the Apple Pay app. Inside the Messages app, any message you receive that includes a dollar amount will have that amount underlined, indicating it includes a link to quickly open the Apple Pay app and make a payment. Apple is clearly aware that far more often friends and family send standard messages with the requested amounts included. Lastly, the QuickType keyboard can also serve as a shortcut to initiate a payment.
It makes total sense that this is an iMessage app, but the additional access points ought to help users discover the feature — and discoverability, I think, is the biggest hurdle Apple faces against a competitor like Venmo.
Face ID is one of the hallmark features of the iPhone X. Using facial recognition, you can unlock your phone almost as quickly as if you had no device security enabled at all—all you have to do is stare at it. It’s convenient, and potentially more secure than a four- or six-digit passcode. And because your data is stored in the phone’s so-called secure enclave and not in the cloud (as Apple did with Touch ID’s fingerprint data), the impressively detailed digital map Apple makes of your face, and the more than 50 facial expressions it can recognize, are kept safe. For the most part.
At launch, facial recognition data from Face ID will only be used by Apple to unlock your phone—and animate a handful of goofy emoji characters called Animoji. However, Apple plans to allow third-party app developers access to some of the biometric data Face ID collects. And this has some privacy experts concerned, as Reuters reports.
A stunning twist.
Fun fact: that Animoji link goes to another Slate article with the title “Three reasons why Apple’s iPhone X animojis are worrisome.” Those three reasons are: they are so good that users will be encouraged to use them! in public! with audio! and that can be annoying; that they are so good that they will become a selling tool for the iPhone X; and that the author gets confused about the difference between the Face ID feature and iOS’ ARKit APIs — a distinction which, as it turns out, Bonnington buries in her ostensibly panic-inducing article:
Facial recognition is everywhere these days. It’s how Facebook suggests friends you should tag in photos, how Snapchat’s lenses so masterfully morph onto your face, and how Google Photos can so intelligently collect and organize photos of people you photograph often. Apple already uses facial recognition in its Photos app on iOS, too. But until now, these companies have kept their facial recognition data private. Allowing developers to access some of that data — even if it’s only a rough map of your face and facial expressions, not the full dataset it uses for biometric identification — is new, potentially scary territory.
This is a completely confused paragraph. There is a difference between facial feature identification — the kind that’s used by Snapchat for lenses, Facebook for suggesting faces to tag in photos, and variations of which are available in a bunch of GitHub repos — and recognition of specific faces, like Google and Apple use for notating specific people in photo libraries.
Apple uses a very sophisticated version of the latter to make Face ID work, which they’ve detailed in a security white paper. But the version of face tracking that’s available to developers is not to be confused with Face ID; it is more like an enhanced version of facial identification. But even that has Bonnington worried:
To use your facial data, developers must first ask your permission in their apps, and must not sell that information to other parties. Still, while it’s forbidden under Apple developer guidelines, privacy experts worry that developers might sell this data or use it for marketing or advertising purposes. (Imagine, if you will, an ad-supported gaming app that uses your current facial expression on your avatar. How valuable would it be for an advertiser to monitor what facial expressions you make as you watch their commercial in between rounds of gameplay?)
That would, indeed, be pretty valuable and deeply creepy. Privacy experts are right to be worried about the plausibility of a company using any kind of facial identification data for marketing purposes, and that’s why Apple has prohibited it. And, yeah, they’re going to have to be pretty vigilant about that.
But let’s not pretend that this is a brand new hypothetical concern that’s exclusive to the iPhone X. Theoretically, any app the user has granted camera access could also target ads using one of those open source facial identification libraries I mentioned earlier — something which is, of course, also prohibited by Apple.
The thing that confuses me most about this piece is that Bonnington is a damn good writer. On the same day that this poorly-researched article was published, she also wrote a fantastic take on those YouTube hands-on videos of the iPhone X published Monday last week. Can’t win ’em all, I guess.
If you updated your iPhone, iPad, or iPod touch to iOS 11.1 and find that when you type the letter “i” it autocorrects to the letter “A” with a symbol, learn what to do.
Apple suggests creating a text replacement shortcut to swap the letter I for the letter i. Yeah, really. They also say that they’re going to fix this in an update soon.
This is an utterly ridiculous bug to have escaped Apple’s QA checks and beta testing amongst developers and public testers. I understand that this seems like an overreaction to a relatively minor bug, but I wasn’t kidding when I wrote last month that input devices should always work. That goes for virtual input devices, too.