As expected and in spite of overwhelming public and business support for net neutrality rules, the FCC just voted along party lines to strip themselves of the power to meaningfully regulate internet service providers. But just because appointed FCC Commissioners like Ajit Pai have no respect for the public, that doesn’t mean this is over.
The first course of immediate action will be for net neutrality proponents to pressure Congress to use the Congressional Review Act to pass a resolution of disapproval. This is a mechanism that allows Congress to overrule any regulations enacted by federal agencies. You might remember it’s the tool that the GOP used to eliminate broadband privacy protections earlier this year.
“The CRA is our best option on Capitol Hill for the time being,” said Timothy Karr, a spokesperson for the Free Press Action Fund, an open internet advocacy group. “We’re not interested in efforts to strike a Congressional compromise that are being championed by many in the phone and cable lobby. We don’t have a lot of confidence in the outcome of a legislative fight in a Congress where net neutrality advocates are completely outgunned and outspent by cable and telecom lobbyists.”
A lot more work needs to be done. Title II regulations are an effective and well-rounded way to treat ISPs more like the utility providers they really are, but a bill could be passed that places a Title II-style framework into a modern context for the internet, if there’s enough public pressure to do so. Time for Americans to get to work.
Update: New York Attorney General Eric Schneiderman is suing to block this repeal. He pointed out yesterday that millions of comments on this topic were posted under real people’s names without their knowledge or consent, and that the FCC has refused to allow an investigation into this matter.
The most dramatic cybersecurity story of 2016 came to a quiet conclusion Friday in an Anchorage courtroom, as three young American computer savants pleaded guilty to masterminding an unprecedented botnet — powered by unsecured internet-of-things devices like security cameras and wireless routers — that unleashed sweeping attacks on key internet services around the globe last fall. What drove them wasn’t anarchist politics or shadowy ties to a nation-state. It was Minecraft.
Minecraft may have been the motive and three college students may have been the perpetrators, but the attack was so successful because so many internet-of-things device manufacturers don’t prioritize security, and nobody really checks to make sure any of these products have been tested for trivial loopholes.
We’re used to extension cords being certified that they won’t burst into flames when you plug them in. Microwaves and cellphones get tested by regulatory bodies to ensure that they won’t fry living organisms. We expect our cars to be built to withstand moderate collisions. These processes don’t prevent all problems, but they do help maintain standards and provide third-party verification that the manufacturer did a good job.
I’m not necessarily arguing that every device and software update ought to go through an extensive pentesting process, but there is a reasonable argument to be made that internet-of-things devices should be subject to a little more scrutiny. The industry is currently not doing a good enough job regulating itself, and their failures can have global effects. Some sort of standards body probably would slow down the introduction of these products, but is the possibility of a global attack on the internet’s infrastructure a reasonable price to pay for bringing a device to the market a little bit faster?
When Ajit Pai, the Trump-appointed head of the Federal Communications Commission, announced his intention to roll back Obama-era net-neutrality guidelines, gutting rules that prevent Internet service providers from charging companies for faster access or from slowing down or speeding up services like Netflix or YouTube, he was quick to claim that critics of his plan—Internet freedom groups and smaller Internet companies that can’t afford so-called “fast lanes”—were overreacting. “They greatly overstate the fears about what the Internet will look like going forward,” Pai said on Fox & Friends. Pai’s proposal, which would put in place a voluntary system reliant on written promises from I.S.P.s not to stall competitors’ traffic or block Web sites, essentially serves as a road map to radically reshape the Internet. But like Pai, I.S.P.s and others in the telecom industry have curiously insisted that consumers and smaller companies have nothing to fear when it comes to net-neutrality reform.
ISPs also promise to be at your house between noon and 3:00 PM to check out your slow internet connection which, of course, is because they oversold your neighbourhood and overstated likely end-user speeds; but, sure, let’s trust them to play fair when they have few incentives to do so.
Apple Inc. is designing a new chip for future Mac laptops that would take on more of the functionality currently handled by Intel Corp. processors, according to people familiar with the matter.
The chip, which went into development last year, is similar to one already used in the latest MacBook Pro to power the keyboard’s Touch Bar feature, the people said. The updated part, internally codenamed T310, would handle some of the computer’s low-power mode functionality, they said. The people asked not to be identified talking about private product development. It’s built using ARM Holdings Plc. technology and will work alongside an Intel processor.
The current ARM-based chip for Macs is independent from the computer’s other components, focusing on the Touch Bar’s functionality itself. The new version in development would go further by connecting to other parts of a Mac’s system, including storage and wireless components, in order to take on the additional responsibilities. Given that a low-power mode already exists, Apple may choose to not highlight the advancement, much like it has not marketed the significance of its current Mac chip, one of the people said.
It sounds like this is the chip that is included in the iMac Pro, even though Gurman and King cite lower-power tasks as the focus of its development. Steven Troughton-Smith in November:
This looks like the iMac Pro’s coprocessor (Bridge2,1) will be an A10 Fusion chip with 512MB RAM […] So first Mac with an A-series chip
Rene Ritchie tweeted today that the A10 has been rebranded “T2” — as in, a successor to the T1 chip in Touch Bar MacBook Pro models.
Cabel Sasser of Panic received an iMac Pro review unit from Apple, and tweeted about the T2’s functionality:
It integrates previously discrete components, like the SMC, ISP for the camera, audio control, SSD control… plus a secure enclave, and a hardware encryption engine.
This new chip means storage encryption keys pass from the secure enclave to the hardware encryption engine in-chip — your key never leaves the chip. And it allows for hardware verification of OS, kernel, boot loader, firmware, etc. (This can be disabled…)
In addition to the enhanced security measures Sasser notes, a couple more things are very exciting about Apple’s gradual rollout of a proprietary coprocessor in their Mac lineup. The T2 sounds like it expands upon some of the input mechanism security measures of the T1, so the keyboard and built-in camera are more secure than in previous implementations. And, as Guilherme Rambo noticed, it can enable “Hey, Siri” functionality on the Mac. But Apple hasn’t enabled that functionality yet; for now, it’s a question of “when?”.
Apple updated its website with news that the iMac Pro is shipping beginning on December 14, 2017. The pro-level iMac features a long list of impressive specifications. The desktop computer, which was announced in June at WWDC, comes in 8-, 10-, and 18-core configurations, though the 18-core model will not ship until 2018. The new iMac can be configured with up to 128GB of RAM and can handle SSD storage of up to 4TB. Graphics are driven by the all-new Radeon Pro Vega, which Apple said offers three times the performance of other iMac GPUs.
Apple provided Marques Brownlee (MKBHD) and another YouTuber, Jonathan Morrison, with review units, and they seem effusively positive, with the exception of some concerns about the machine’s lack of post-purchase upgradability.
Of note, there’s nothing on the iMac Pro webpage nor in either of the review videos about the Secure Enclave that’s apparently in the machine, nor is there anything about an A10 Fusion chip or “Hey, Siri” functionality. These rumours were supported by evidence in MacOS; it isn’t as though the predictions came out of nowhere. It’s possible that these features will be unveiled on Thursday when the iMac Pro becomes available, or perhaps early next year with a software update, but I also haven’t seen any reason for the Secure Enclave — the keyboard doesn’t have a Touch Bar, nor is there Touch ID anywhere on this Mac.
I found a very consistent set of results: a 2X to 3X boost in speed (relative to my current iMac and MacBook Pro 15”), a noticeable leap from most generational jumps, which are generally ten times smaller.
Whether you’re editing 8K RED video, H.264 4K Drone footage, 6K 3D VR content or 50 Megapixel RAW stills – you can expect a 200-300% increase in performance in almost every industry leading software with the iMac Pro.
Most of my apps have around 20,000-30,000 lines of code spread out over 80-120 source files (mostly Obj-C and C with a teeny amount of Swift mixed in). There are so many variables that go into compile performance that it’s hard to come up with a benchmark that is universally relevant, so I’ll simply note that I saw reductions in compile time of between 30-60% while working on apps when I compared the iMac Pro to my 2016 MacBook Pro and 2013 iMac. If you’re developing for iOS you’ll still be subject to the bottleneck of installing and launching an app on the simulator or a device, but when developing for the Mac this makes a pretty noticeable improvement in repetitive code-compile-test cycles.
These are massive performance gains, even at the 10-core level; imagine what the 18-core iMac Pro is going to be like. And then remember that this isn’t the Mac Pro replacement — it’s just a stopgap while they work on the real Mac Pro replacement.
Update: Rene Ritchie says that the A10 Fusion SoC is, indeed, present in the iMac Pro, albeit rebranded as a T2 coprocessor.
Matt Birchler used a Pixel 2 instead of his usual iPhone for a couple of months, and has started publishing pieces about his experience and impressions. It’s worth your time to start with part one and work your way through his thoughts, but this bit from the “Performance and Stability” section stood out to me:
[…] My time with Android has shown it to be anything but the “stable” alternative to iOS. Just 31 days into my time with the Pixel 2, I had to restore my phone to factory settings to fix the errors I was experiencing. In addition to the issues raised in that post, I have also had issues with apps crashing, notifications staying on silent even though I have them set to vibrate, random reboots, and more. After a few weeks I actually stopped reporting my Android bugs to Twitter because it was getting too depressing.
As Birchler points out, everyone’s bug experiences are different because, even with the relatively limited configuration options available for mobile devices — compared to, say, PCs — there are still billions of possible combinations of languages, WiFi networks, apps, settings, and so on. In light of recent iOS bugs, though, it’s remarkable to recognize that there’s still a lot of work to be done all around. Bugs shake the trust we place in our devices and may even make us consider switching, but user reports like these are a reminder that the other side isn’t any more stable. I’m not mocking Android users here, or the OS itself; it’s just something worth recognizing.
Hey, remember how Andy Rubin temporarily stepped away from Essential after reporters from the Information started asking questions about what they called an “inappropriate relationship” between him and a woman who worked for him while at Google? Theodore Schleifer of Recode has the latest:
Andy Rubin, the founder of smartphone startup Essential, has already returned to his company less than two weeks after it was announced that he took a leave of absence amid questions about an alleged inappropriate relationship.
Even while on leave from Essential, Rubin was still able to show up to work at the same physical workplace. That’s because he did not take a similar leave from Playground Global, the venture capital firm he founded, which shares the same office space as Essential.
It will come as no surprise to you that Playground Global also has an investment in Essential, so what did Rubin’s leave of absence truly mean?
There have been two major versions of the FCC’s transparency requirements: one created in 2010 with the first net neutrality rules, and an expanded version created in 2015. Both sets of transparency rules survived court challenges from the broadband industry.
The 2010 requirement had ISPs disclose pricing, including “monthly prices, usage-based fees, and fees for early termination or additional network services.”
That somewhat vague requirement will survive Pai’s net neutrality repeal. But Pai is proposing to eliminate the enhanced disclosure requirements that have been in place since 2015.
The 2015 disclosures that Ajit Pai’s proposal would undo include transparency on data caps and additional monthly fees for things like modem rentals. ISPs also wouldn’t necessarily have to make these disclosures public on their own websites; they could instead tell the FCC, which would publish the disclosures on its byzantine website.
Pai has claimed that his proposed rollback will encourage net neutrality practices without regulation because it will require ISPs to be fully transparent. In a shocking turn of events for statements and policies originating from the top minds of this administration, that claim turns out to be a complete lie: ISPs won’t have to be as open and transparent about their pricing and policies, and they have repeatedly stated that they would use tactics like paid prioritization to manipulate network traffic if given the opportunity.
I don’t have an Amazon Prime subscription, so I don’t really have a reason to download this app; but, by all accounts, it is shockingly bad.
Netflix’s is also pretty awful — it now autoplays a preview of the selected show or movie at the top of the screen, with sound, and I can’t find any way to disable this. It also doesn’t behave like a typical tvOS app: the app navigation is displayed as tiles, shows and movies are also displayed as tiles, and they’re mixed together in an infinitely-scrolling grid.
Hulu isn’t available in Canada, but its tvOS app is apparently poor as well.
Why is it that three of the biggest players in streaming video can’t seem to find the time and resources to build proper tvOS apps? Is it not worth the effort because the Apple TV isn’t popular enough? Is it because these companies simply don’t care?
I don’t think it’s right to stymie experimentation amongst app developers, but tvOS has a very particular set of platform characteristics. If Apple isn’t going to encourage developers’ compliance with those characteristics, it’s up to users to provide feedback and encourage developers like these to do better.
The larger point here, though is that while there certainly were a number of reasons to be hesitant about supporting Title II or even explicit rules from the FCC a decade ago, enough things have happened that if you support net neutrality, supporting Title II is the only current way to get it. Ajit Pai’s plan gets rid of net neutrality. The courts have made it clear. The (non) competitive market has made it clear. The statements of the large broadband providers have made it clear. The concerns of the small broadband providers have made it clear. If Ben does support net neutrality, as he claims, then he should not support Pai’s plan. It does not and will not lead to the results he claims he wants. It is deliberately designed to do the opposite.
So, yes. For a long time — like Ben does now — I worried about an FCC presenting rules. But the courts made it clear that this was the only way to actually keep neutrality — short of an enlightened Congress. And the deteriorating market, combined with continued efforts and statements from the big broadband companies, made it clear that it was necessary. You can argue that the whole concept of net neutrality is bad — but, if you support the concept of net neutrality, and actually understand the history, then it’s difficult to see how you can support Pai’s plan. I hope that Ben will reconsider his position — especially since Pai himself has been retweeting Ben’s posts and tweets on this subject.
If I didn’t convince you to disagree with Thompson’s misleading piece, maybe Masnick will. If you live in the United States, it’s vital that the FCC — particularly Ajit Pai, Michael O’Rielly, and Brendan Carr — and your representatives hear your concerns.
There are at least two possible explanations for all of these misunderstandings and technical errors. One is that, as we’ve suggested, the FCC doesn’t understand how the Internet works. The second is that it doesn’t care, because its real goal is simply to cobble together some technical justification for its plan to kill net neutrality. A linchpin of that plan is to reclassify broadband as an “information service,” (rather than a “telecommunications service,” or common carrier) and the FCC needs to offer some basis for it. So, we fear, it’s making one up, and hoping no one will notice.
Whether the FCC commissioners are being malicious or truly don’t understand how the internet works, either one disqualifies them from running the Commission.
John Herrman, New York Times, on reactions to the power of large tech companies in 2017:
The flip side of these companies’ new dominance is that, not unlike the first industrialists, they turn progress from something that manifests inevitably with the passage of time into something that is being done to us, for reasons that are out of our control but seem unnervingly and suddenly within someone else’s. This is a profound reorientation, which might explain why current anxieties about the internet make for such unlikely bedfellows. Conservative parents with moral complaints about inappropriate videos surfacing in YouTube kids’ channels find themselves inadvertently agreeing with leftist critiques of corporate power. Facebook’s inability to deal in any meaningful way with misinformation on the platform has loosely aligned an elitist critique of democratized news with populist anger at a company led by Silicon Valley elites. There are right-wing anti-monopolists and left-wing anti-monopolists setting their sights on Google and Facebook, claiming dangerous censorship or lack of responsible moderation or, sometimes, both at once — people who want different things, and who have incompatible goals, but who have intuited the same core premise. In these instances, the only people left telling us not to worry — rhyming their responses with the vindicated defenders of the nascent internet — have suspiciously much to lose.
In this future, what publications will have done individually is adapt to survive; what they will have helped do together is take the grand weird promises of writing and reporting and film and art on the internet and consolidated them into a set of business interests that most closely resemble the TV industry. Which sounds extremely lucrative! TV makes a lot of money, and there’s a lot of excellent TV. But TV is also a byzantine nightmare of conflict and compromise and trash and waste and legacy. The prospect of Facebook, for example, as a primary host for news organizations, not just an outsized source of traffic, is depressing even if you like Facebook. A new generation of artists and creative people ceding the still-fresh dream of direct compensation and independence to mediated advertising arrangements with accidentally enormous middlemen apps that have no special interest in publishing beyond value extraction through advertising is the early internet utopian’s worst-case scenario.
I’m going to bring this back around to net neutrality because the FCC’s vote is in about a week and I think it’s worth keeping that in mind. FCC chairman Ajit Pai has said, quite reasonably, that he is concerned about the influence of a handful of tech companies on our greater discourse. Whether that’s because he’s actually concerned about their influence or whether he’s using Silicon Valley as a scapegoat is irrelevant in this discussion. But it is more likely that a company can rise up to compete with, say, Facebook than it is that a startup could compete with a major ISP like Verizon or Comcast1 simply because of the high initial costs associated with building broadband infrastructure.2
Today’s tech giants were born in garages in the shadows of yesterday’s tech giants, so we hear, but major ISPs don’t have a comparable story. Allowing ISPs to treat websites differently or prioritizing traffic for a fee will more deeply entrench the dominance of the largest and wealthiest tech companies, and will make it less likely that an upstart can compete.
Both of these ISPs actually run cable and their own infrastructure unlike, for example, smaller regional ISPs. ↩︎
This is something that I seem to remember the FCC acknowledging in their proposal (PDF), but I haven’t been able to find the passage. If you remember where it is, please let me know. ↩︎
An important update to a story I linked to two weeks ago about an Android system service that was collecting location data even when location services were switched off — according to Tony Romm of Recode, Oracle seeded that story to Quartz as part of a PR campaign against Google:
Since 2010, Oracle has accused Google of copying Java and using key portions of it in the making of Android. Google, for its part, has fought those claims vigorously. More recently, though, their standoff has intensified. And as a sign of the worsening rift between them, this summer Oracle tried to sell reporters on a story about the privacy pitfalls of Android, two sources confirmed to Recode.
To be sure, the substance of Quartz’s story — Google’s errant location tracking — checks out. Google itself acknowledged the mishap and said it ceased the practice. Nor does Oracle stand alone in raising red flags about Google at a time when many in the nation’s capital are questioning the power and reach of large web platforms.
Still, Oracle’s campaign is undeniable. In Washington, D.C., for example, it has devoted a slice of its $8.8 million in lobbying spending so far in 2017 to challenging Google in key policy debates. It has sought penalties against Google in Europe, meanwhile, and it even purchased billboard ads in Tennessee just to antagonize its tech peer, sources said.
It is quite reasonable for people and companies to have questions about Google’s dominance in many online services and mobile operating systems, and also to find that Oracle’s dirty tricks campaign somewhat sours the reputation of this story.
But I don’t necessarily think this reflects poorly on Oracle; if anything, it shakes my confidence in Quartz’s reporting. I don’t know what Quartz’s sourcing attribution guidelines are, but the New York Times’ style guide indicates that a source’s interest in the story should be communicated to readers as candidly as possible. In their story, Quartz did not indicate how they were tipped off to Android’s behaviour.
[…] Interviews with more than two dozen marketers, journalists, and others familiar with similar pay-for-play offers revealed a dubious corner of online publishing in which publicists, ranging from individuals like Satyam to medium-sized “digital marketing firms” that blur traditional lines between advertising and public relations, quietly pay off journalists to promote their clients in articles that make no mention of the financial arrangement.
People involved with the payoffs are extremely reluctant to discuss them, but four contributing writers to prominent publications including Mashable, Inc, Business Insider, and Entrepreneur told me they have personally accepted payments in exchange for weaving promotional references to brands into their work on those sites. Two of the writers acknowledged they have taken part in the scheme for years, on behalf of many brands. Mario Ruiz, a spokesperson for Business Insider, said in an email that “Business Insider has a strict policy that prohibits any of our writers, whether full-time staffers or contributors, from accepting payment of any kind in exchange for coverage.”
There are a couple of different kinds of writers that, according to Christian, took payments in exchange for mentioning or linking to brands in their articles. Some publish to “contributor networks”, which are blogs hosted by major publications but not edited by them. TechCrunch used to have one of those, but they shut it down earlier this year because they noticed an increase in posts that they “strongly suspected were ghost-written by PR”, which should come as no surprise. These contributor networks tend to be filled with self-promotional garbage. I don’t understand what positive effects a contributor network has on an established publication, but it seems like it’s trading away hard-earned authority for cheap traffic.
The more insidious acts Christian profiles are those from writers ostensibly creating articles where a brand pays for very subtle placement:
Yael Grauer, a freelancer who’s written for Forbes and many other outlets, says she’s gotten as many as 12 offers like Satyam’s in a single month, which she always rejects. Some are surprisingly straightforward, like a marketer who simply asked how much she charged for an article in Slate or Wired. Others are coy, like a representative of a firm called Co-Creative Marketing, who heaped praise on her writing before asking whether she could get content published in Forbes or Wired on behalf of a client. Another marketer offered Erik Sherman, a business journalist, $315 per article to mention her client’s landscaping products in Forbes, the Huffington Post, or the Wall Street Journal — though she cautioned that the mentions would need to “not look blatant.” Sherman declined, telling the marketer that the offer was “completely unethical.”
You’d probably expect this kind of thing to be pervasive in Forbes’ contributor network, but if a similar offer were accepted by a writer for an esteemed imprint like the Wall Street Journal, it would undermine your confidence in that publication overall — especially since it’s a business publication, as opposed to something more general-interest.
For what it’s worth, even I — writing at a fairly tiny site — receive offers like these a few times every week. I have never accepted any of them, of course.
Big news today: MarsEdit 4 is out of beta and available for download from the MarsEdit home page and the Mac App Store. This marks the end of a long development period spanning seven years, so it’s a great personal relief to me to finally release it. I hope you enjoy it.
MarsEdit 4 brings major improvements to the app including a refined new look, enhanced WordPress support, rich and plain text editor improvements, automatic preview template generation, and much more.
I’ve been using MarsEdit 4 betas for several months and I love the improvements in this version — particularly, the new Safari extension. Jalkut has created a very clever trial scheme; I highly recommend you take advantage of it if you have a blog and have never tried MarsEdit before. It’s terrific.
What I like about this postmortem is that it’s the script to what is almost the “Every Frame a Painting” episode of “Every Frame a Painting”, particularly in this detail:
In order to make video essays on the Internet, we had to learn the basics of copyright law. In America, there’s a provision called fair use; if you meet four criteria, you can argue in court that you made reasonable use of copyrighted material.
But as always, there’s a difference between what the law says and how the law is implemented. You could make a video that meets the criteria for fair use, but YouTube could still take it down because of their internal system (Copyright ID) which analyzes and detects copyrighted material.
So I learned to edit my way around that system.
If YouTube’s automatic flagging system didn’t exist, it’s likely that “Every Frame a Painting” would feel completely different. Whether it would have been better, I’m not sure, but I think the limitations of YouTube helped birth something truly unique and very, very good.
I don’t think stationary smart speakers represent the future of computing. Instead, companies are using smart speakers to take advantage of an awkward phase of technology in which there doesn’t seem to be any clear direction as to where things are headed. Consumers are buying cheap smart speakers powered by digital voice assistants without having any strong convictions regarding how such voice assistants should or can be used. The major takeaway from customer surveys regarding smart speakers usage is that there isn’t any clear trend. If anything, smart speakers are being used for rudimentary tasks that can just as easily be done with digital voice assistants found on smartwatches or smartphones. This environment paints a very different picture of the current health of the smart speaker market. The narrative in the press is simply too rosy and optimistic.
I’m clearly not the target market for the HomePod, primarily because I live in Canada where the HomePod won’t be for sale at launch.1 I also live in an apartment small enough that I can semi-loudly say “hey Siri” and get a response from my phone on the other side of my place. But I also think that the reason I’m not that enamoured with the HomePod or any smart speaker yet is because I’m a daily Apple Watch wearer, so many of its functions are on my wrist instead of in a tube in my kitchen.
I’m guessing that these products would appeal more — not exclusively, but more — to people who live in larger homes, of course, but also people who don’t typically wear a smartwatch — Apple’s or otherwise.2 I also wonder if smart speakers are an intermediate product between a more traditional computer-user relationship and something that’s more environmental or spatial. If it is, I’d rather throw my hat in with a company that has a strict commitment to user privacy, rather than companies that serve up targeted advertising.
And, if the rollout of Apple News is anything to go by, several years after launch. ↩︎
The HomePod is only $20 more expensive in the U.S. than a Series 3 Apple Watch. ↩︎
Sebastiaan de With, designer of the Halide camera app:
When you shoot JPEG, you really need to get the photo perfect at the time you take it. With RAW and its extra data, you can easily fix mistakes, and you get a lot more room to experiment.
What kind of data? RAW files store more information about detail in the highlights (the bright parts) and the shadows (the dark parts) of an image. Since you often want to ‘recover’ a slightly over or under-exposed photo, this is immensely useful.
It also stores information that enables you to change white balance later on. White balance is a constantly measured value that cameras try to get right to ensure the colors look natural in a scene. iPhones are quite good at this, but it starts to get more difficult when light is tinted.
I’ve been shooting RAW on my iPhone almost exclusively since I received a beta version of Obscura in the summer of last year that used iOS 10’s RAW capture API. Making a RAW photo usable takes more time than a JPEG straight out of the camera app, and RAW files take up far more space, but it’s completely worth it. So many of the photos I’ve captured since would have been impossible to make without RAW.
You can try this for yourself: get a manual camera app like Obscura, Halide, or Manual, and download either Lightroom or Darkroom. Capture a scene in RAW, then start playing around with the highlights, shadows,1 and white balance; in Lightroom, you can also adjust individual hues in a scene without degrading the image fidelity. It’s remarkable how much the iPhone’s sensor actually captures, especially in foliage and finer patterns.
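The highlight headroom de With describes can be sketched numerically. A minimal toy model in Python — the bit depths are real-world typical, but the values and functions are illustrative, not tied to any actual sensor or to how iOS encodes either format:

```python
# Toy model of why RAW retains highlight detail a JPEG throws away.
# A 12-bit RAW sample keeps 4096 linear levels; an 8-bit JPEG keeps 256,
# with exposure and processing already baked in. Illustrative only.

def to_jpeg_8bit(linear, exposure=1.0):
    """Quantize a 0.0-1.0 linear value to 8 bits after baking in exposure."""
    v = min(linear * exposure, 1.0)          # clipping happens here, permanently
    return round(v * 255)

def to_raw_12bit(linear):
    """Store the same scene value at 12-bit precision, exposure not baked in."""
    return round(min(linear, 1.0) * 4095)

# Two nearby highlight tones in a shot overexposed by 1.5x:
bright_a, bright_b = 0.70, 0.72

# The JPEG bakes the overexposure in, so both tones clip to pure white:
jpeg_a = to_jpeg_8bit(bright_a, exposure=1.5)   # 255
jpeg_b = to_jpeg_8bit(bright_b, exposure=1.5)   # 255

# The RAW file keeps the underlying values, so pulling exposure back
# down in an editor still separates the two tones:
raw_a = to_raw_12bit(bright_a)
raw_b = to_raw_12bit(bright_b)

print(jpeg_a == jpeg_b)   # True  — highlight detail is gone for good
print(raw_a == raw_b)     # False — detail is still recoverable
```

The same asymmetry is why white balance is adjustable after the fact: the RAW file stores the sensor data before the colour correction is applied, rather than the corrected result.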
If it’s snowy where you live, this is extremely helpful. ↩︎
On iOS 10.1 there were only 4 binaries using Swift. The number of apps and frameworks using Swift grew quite a lot in a year: There are now 20 apps and frameworks using Swift in iOS 11.1 […]
Similarly the number of binaries using Swift grew from 10 in macOS 10.12.1 to 23 in macOS 10.13.1.
It looks like most of the system components built in Swift are entirely new apps, or effectively so, as with Music and Podcasts. But it also appears that Apple is thoroughly porting both operating systems over to Swift. I have no idea how deep that will run — I imagine device drivers, for example, may not be rewritten — but perhaps the goal is to have everything the user interacts with be built in Swift, or something like that.
Whatever Apple’s specific goal may be, the apps they have ported to Swift so far are not little things or developer-specific utilities. These are critical apps that people use all the time. If that’s not eating your own dog food, I don’t know what is.
It’s a truism in tech design that it takes a great deal of work to make something easy to use, and no company has proven the principle more spectacularly than Apple. It came straight from Jobs, who pushed his engineers and designers to remember that it wasn’t the device that customers wanted — it was the experience, the information, the services, the apps, the ability to edit spreadsheets and documents, to watch video, send email and texts, play games, take photographs — the countless things we do today (effortlessly, for the most part). You can debate the consequences of this new power at our fingertips, but there’s no denying it’s a revolution in the daily lives of rich and poor alike, and that Apple has set the pace, led by Ive’s answers to Jobs’ questions. Jobs loved the iPad, which he called an “intimate device” because it was immersive, like a good book — a window into whatever worlds you chose to explore. “In so many ways,” Ive says, “we’re trying to get the object out of the way.”
Last night, I watched “App: The Human Story” and I was struck by Matías Duarte’s explanation that apps are generally single-purpose widgets on a very general-purpose device. I think Apple’s latest generation of devices is the purest expression of that idea. Everything they’ve been doing — from near-seamless enclosures and Face ID, down to the coatings on the display getting increasingly close to black, so that when the display is off, it vanishes into the glass — gets closer to this idea. Even the software of the iPhone X comes closer to that: you can fling your apps around or send them back to the home screen, and it feels like you’re directly manipulating everything the system does. Similar interactions on the iPad help turn that into a totally immersive experience; one of my biggest gripes with previous versions of iOS was the number of times it still felt necessary to use the home button, but that’s almost completely changed with iOS 11. It really is remarkable how much I can do with a device that often feels like it isn’t even there.
More than 5 million people in the UK could be entitled to compensation from Google if a class action against the internet giant for allegedly harvesting personal data is successful.
A group led by the former executive director of consumer body Which?, Richard Lloyd, and advised by City law firm Mishcon de Reya claims Google unlawfully collected personal information by bypassing the default privacy settings on the iPhone between June 2011 and February 2012.
They have launched a legal action with the aim of securing compensation for those affected. The group, called Google You Owe Us, says that approximately 5.4 million people in Britain used the iPhone during this period and could be entitled to compensation.
Google is accused of breaching principles in the UK’s data protection laws in a “violation of trust” against iPhone users.
To get around Safari’s default blocking, Google exploited a loophole in the browser’s privacy settings. While Safari does block most tracking, it makes an exception for websites with which a person interacts in some way—for instance, by filling out a form. So Google added coding to some of its ads that made Safari think that a person was submitting an invisible form to Google. Safari would then let Google install a cookie on the phone or computer.
It is striking to me how malicious this kind of action is. It isn’t Google’s right to determine when it feels like it can circumvent users’ preferences to install cookies or anything on their computers. You may argue that these are not users’ preferences — that Safari’s defaults are Apple’s preferences. But I think that’s a dangerous stance because there’s no way to determine when a preference has been deliberately chosen by the user.
I know I’ve been harping on bugs in Apple’s software for the last little while, but deliberate actions like Google’s bother me far more. The Safari workaround is something that an engineer had to actually build. Someone had to understand that Safari’s default cookie settings were incompatible with tracking, but instead of choosing not to track users, they thought it was their right to override those preferences. Egregious.
John C. Dvorak of PC Magazine wrote a piece tying the introduction of Face ID on the iPhone X to the Australian government’s plans to introduce a facial recognition system to identify suspects of crime. I know very little about that plan — though I’m eager to learn more — but I do know enough about the iPhone X to take issue with this bit of his piece:
We can assume the NSA, which spies on its own citizenry, will store massive amounts of imagery in its huge facility in Utah. From that, an instant dossier of someone’s whereabouts can be produced as needed.
Until then, we have Apple’s iPhone X, which swaps Touch ID for Face ID. The real beneficiaries of this technique will be the police; they can just point it at the person and they are in.
The user must be paying attention to the device and within a certain range for Face ID to make a successful scan. And, for what it’s worth, pressing and holding the power button and either volume button for two seconds will disable Face ID until a passcode is entered.
Also, implicitly tying Face ID to assumed NSA activities is misleading and irresponsible.
Apple previously relied on fingerprints with Touch ID; now the home button is gone, perhaps saving it money. Facial recognition is just software, after all; the camera is already in the phone.
Dvorak continues to demonstrate why he is one of the most inept technology columnists writing today in a mainstream publication. Apple has helpfully provided an easy-to-read white paper (PDF) explaining how Face ID works. It’s six pages long, but if that’s too much reading for Dvorak, Apple also put a labelled diagram on the iPhone X’s marketing webpage. In short, it doesn’t use the front-facing camera that’s “already in the phone” — it uses an infrared light, infrared dot projector, and an infrared camera to create a depth map of the detected face.
I don’t have a problem with people whose opinion differs from my own. I don’t have a problem with people who write articles that I firmly disagree with. I do have a problem with laziness and making stuff up.
Apple’s statement, via Romain Dillet of TechCrunch:
Security is a top priority for every Apple product, and regrettably we stumbled with this release of macOS.
When our security engineers became aware of the issue Tuesday afternoon, we immediately began working on an update that closes the security hole. This morning, as of 8:00 a.m., the update is available for download, and starting later today it will be automatically installed on all systems running the latest version (10.13.1) of macOS High Sierra.
We greatly regret this error and we apologize to all Mac users, both for releasing with this vulnerability and for the concern it has caused. Our customers deserve better. We are auditing our development processes to help prevent this from happening again.
A fast bug fix, an apology, and a commitment to fixing whatever led to a bug like this shipping. That’s the good news.
Unfortunately, some users on the MacRumors forums are reporting that the security patch also breaks file sharing. It would be foolish to recommend users wait to apply this patch — and impossible, because it gets installed automatically — but you should be aware of this bug if that’s something you depend on.
So to recount: one Portugal story is made up, and the other declared that a 10GB family plan with an extra 10GB for a collection of apps of your choosing for €25/month ($30/month) is a future to be feared; given that AT&T charges $65 for a single “Unlimited” plan that downscales video, bans tethering, and slows speeds after 22GB, one wonders if most Americans share that fear.
That, though, is the magic of the term “net neutrality”, the name — coined by the same Tim Wu whose tweet I embedded above — for those FCC rules that justified the original 2015 reclassification of ISPs to utility-like common carriers. Of course ISPs should be neutral — again, who could be against such a thing? What is missing in the ongoing debate, though, is the recognition that, ever since the demise of AOL, they have been. The FCC’s 2015 approach to net neutrality is solving problems as fake as the image in Wu’s tweet; unfortunately the costs are just as real as those in Congressman Khanna’s tweet, but massively more expensive.
Thompson follows this by acknowledging several instances when ISPs were not treating data neutrally, but concludes that contemporary regulatory action or public pressure illustrates a lack of need for Title II classification. I find this reasoning to be ill-considered at best. First, the Madison River incident:
The most famous example of an ISP acting badly was a company called Madison River Communication which, in 2005, blocked ports used for Voice over Internet Protocol (VoIP) services, presumably to prop up their own alternative; it remains the canonical violation of net neutrality. It was also a short-lived one: Vonage quickly complained to the FCC, which quickly obtained a consent decree that included a nominal fine and guarantee from Madison River Communications that they would not block such services again.
It’s worth recognizing that the consent decree references Title II guidelines. Thompson cites two more cases of net neutrality violations — Comcast blocking the BitTorrent protocol under the guise of it being network management policy, and MetroPCS offering zero-rated YouTube, which I’ll get to later — but, strangely, doesn’t mention AT&T’s blocking of FaceTime on certain cellular plans. No other video chatting apps were prohibited, raising the question of why AT&T decided to target FaceTime users.
That makes this claim, in Thompson’s recap, obviously incorrect:
There is no evidence of systemic abuse by ISPs governed under Title I, which means there are no immediate benefits to regulation, only theoretical ones
There is clearly plenty of evidence that ISPs will not treat data the same if offered the opportunity to do otherwise. And, I stress again, we aren’t simply talking about internet providers here — these are vertically-integrated media conglomerates which absolutely have incentive to treat traffic from friendly entities differently through, for example, zero-rating, as AT&T did with DirecTV, Verizon does with their NFL app, and T-Mobile does for certain services.
Again, zero-rating is not explicitly a net-neutrality issue: T-Mobile treats all data the same, some data just doesn’t cost money.
What? No, really, what? “T-Mobile treats all data the same, except the data they treat differently” might be one of the worst arguments in this whole piece, and there are a few more rotten eggs to get to. If consumers are paying for some data and there’s other data they’re not paying for, they’re naturally going to gravitate towards the data that doesn’t cost them anything. And that makes this argument complete nonsense as well:
What has happened to the U.S. mobile industry has certainly made me reconsider [the effect on competition by zero-rating]: if competition and the positive outcomes it has for customers is the goal, then it is difficult to view T-Mobile’s approach as anything but a positive.
T-Mobile’s introduction of inexpensive so-called “unlimited” data plans — throttled after a certain amount of data has been used, of course — drove competitors to launch similar plans, that much is true. But zero-rating had very little to do with those consumer-friendly moves. And, as if to conveniently illustrate the relative dearth of competition in the US cellular market, Sprint has a chart on their website showing that single-line unlimited plans cost a similar amount per month from AT&T, T-Mobile, and Verizon; Sprint’s plan is cheaper, but they also have worse performance and coverage.
Thompson next tackles the argument that zero-rating is anti-competitive:
Still, what of those companies that can’t afford to pay for zero rating — the future startups for which net neutrality advocates are willing to risk the costs of heavy-handed regulations? In fact, as I noted in that excerpt, zero rating is arguably a bigger threat to would-be startups than fast lanes, […]
This is probably true, and that’s why it’s so important that these rules are in place.
[…] yet T-Mobile-style zero rating isn’t even covered by those regulations! This is part of the problem of regulating future harm: sometimes that harm isn’t what you expect, and you have regulated and borne the associated costs in vain.
In fact, zero-rating is, in general, covered by the 2015 net neutrality rules. That’s why the FCC sent letters to AT&T and Verizon stating that aspects of those companies’ zero-rating practices discriminated against competitors.
But T-Mobile was careful with their zero-rating practices and made sure that there were competing services offered for free. As an example, they exempt Apple Music and Spotify from data limits. But what if you wanted to listen to a mixtape on DatPiff or an indie artist on Bandcamp? That would count against your data cap, which makes those services less enticing to consumers. It clearly benefits the established players, and reduces the likelihood that a startup can compete.
If anything, I think zero-rating services should actually be banned. It’s worse for consumers in the short term, but from a more expansive viewpoint, it encourages providers to be more honest about what kinds of speeds they can offer with their infrastructure. That might even get them to invest in more robust networks.1
Third, if the furor over net neutrality has demonstrated anything, it is that the media is ready-and-willing to raise a ruckus if ISPs even attempt to do something untoward; relatedly, a common response to the observation that ISPs have not acted badly to-date because they are fearful of regulation is not an argument for regulation — it is an acknowledgment that ISPs can and will self-regulate.
This is completely disproven by countless instances of corporate wrongdoing in modern American history. Banks and hedge funds already have a terrible name for helping cause the 2008 financial crisis, but many of them are still around and more valuable than ever. BP is still one of the world’s biggest oil and gas companies despite causing one of the world’s biggest environmental catastrophes.
Moreover, it isn’t as though ISPs are revered. They regularly rank towards the bottom of consumer happiness surveys. It’s not like their reputation can get much worse. And, with a lack of competition — especially amongst fixed broadband providers — it’s not like Americans have many options to turn to when their ISP suddenly starts behaving badly.
I could nitpick this article all day long, but this is, I think, the part of Thompson’s piece that frustrates me most:
I believe that Ajit Pai is right to return regulation to the same light touch under which the Internet developed and broadband grew for two decades.
This statement isn’t just wrong — it’s lazily wrong. It is exactly the claim that Ajit Pai makes, and one that Rob Pegoraro did a fantastic job debunking in the Washington Post in May:
But Pai’s history is wrong. The government regulated Internet access under Clinton, just as it did in the last two years of Barack Obama’s term, and it did so into George W. Bush’s first term, too. The phone lines and the connections served over them — without which phone subscribers had no Internet connection — did not operate in the supposedly deregulated paradise Pai mourns.
Without government oversight, phone companies could have prevented dial-up Internet service providers from even connecting to customers. In the 1990s, in fact, FCC regulations more intrusive than the Obama administration’s net neutrality rules led to far more competition among early broadband providers than we have today. But Pai’s nostalgia for the ’90s doesn’t extend to reviving rules that mandated competition — instead, he’s moving to scrap regulations the FCC put in place to protect customers from the telecom conglomerates that now dominate the market.
Thompson’s argument is exceptionally flawed, almost beyond belief. But there is one thing he may be right about: it’s fair to argue that Title II may not be the perfect framework for governing ISPs. There are reasonable arguments to be made for writing new legislation and passing it through the appropriate channels in Congress.
But I think it’s completely absurd to change their classification without sufficient neutrality-guaranteeing legislation in place. Unfortunately, I wouldn’t trust this Congress to write and pass that law. Therefore, it is reasonable to keep ISPs under Title II until such a bill can be passed. The “wait and see” approach Thompson favours is not fair to consumers who get to play the part of the lab rat against influential lobbyists, large corporations, and a faux-populist legislative body.
Update: Even if you believe that the American broadband market is sufficiently competitive — it isn’t — that ISPs can be trusted not to discriminate against some forms of traffic once given the freedom to — doubtful — and that existing regulatory structures will allow any problems to be fixed on a case-by-case basis, it still seems far more efficient to prevent those problems in the first place. Title II treats internet service as the fundamental utility it really is; let’s keep treating it that way, whether through Title II classification or an equivalent replacement.
A VR edition of the epochally awful, intentionally mind-numbing minigame from the unreleased Penn & Teller’s Smoke & Mirrors 21 years ago has a listing on Steam. The game supports the HTC Vive and Oculus Rift headsets, plus motion controllers and gamepads (partial support).
From the four screenshots shown, it looks like the game has been remastered with new graphics. But the gameplay is still the same: drive a bus from Tucson, Ariz. to Las Vegas in real time (eight hours), fighting its misaligned steering the whole way.
Via Andy Baio, who incorrectly calls Desert Bus the “worst game ever made”. Frankly, I hope a version of this comes to the iPhone.
There appears to be a serious bug in macOS High Sierra that enables the root superuser on a Mac with a blank password and no security check.
The bug, discovered by developer Lemi Ergin, lets anyone log into an admin account using the username “root” with no password. This works when attempting to access an administrator’s account on an unlocked Mac, and it also provides access at the login screen of a locked Mac.
As with any security issue, it would have been preferable for this to be disclosed privately to the vendor — in this case, Apple — before being publicly exposed. And, still, this is a huge problem for anyone whose recently-updated Mac is occasionally in the vicinity of other people. Apparently, pretty much any authentication dialog is susceptible, including worrying things like Keychain Access or changing a drive’s FileVault state. It appears to be a bug introduced in High Sierra; I failed to reproduce it on a machine running macOS Sierra.
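Apple hasn’t said what the underlying cause is, so purely as an illustration, here’s a toy Python model of how an authentication routine with a careless “upgrade-on-failure” fallback could turn a disabled root account into one with a blank password — hypothetical logic, not Apple’s actual code:

```python
# Hypothetical sketch — NOT Apple's actual code — of how an auth routine
# with a sloppy fallback can enable a disabled account on first attempt.
# The "root" account ships disabled, with no password hash stored.

accounts = {"root": {"enabled": False, "hash": None}}

def fake_hash(password):
    # Stand-in for a real password hash; illustration only.
    return "h:" + password

def buggy_authenticate(user, password):
    record = accounts.get(user)
    if record is None:
        return False
    if not record["enabled"]:
        # BUG: instead of rejecting a disabled account outright, this
        # path "upgrades" it, storing whatever password was just supplied.
        record["hash"] = fake_hash(password)
        record["enabled"] = True
        return False  # the first attempt still reports failure...
    return record["hash"] == fake_hash(password)

# First try with an empty password "fails", but silently sets it:
first = buggy_authenticate("root", "")
# Second try succeeds — root is now enabled with a blank password:
second = buggy_authenticate("root", "")
print(first, second)  # False True
```

A flaw shaped like this would be consistent with reports that the trick sometimes takes two attempts to work, but again, this is speculation for illustration only.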
I don’t want to speculate on whether something like this would be caught in code review or a penetration testing scenario. Apple may do both of those things and this bug may simply have slipped past everyone involved. I also don’t know how much buggier Apple’s operating systems are now compared to, say, ten years ago, if they are truly buggier at all. Maybe we were just more tolerant of bugs before, or perhaps apps crashed more instead of subtly failing while performing critical tasks.
But there has been a clear feeling for a while now that Apple’s software simply doesn’t seem to be as robust as it once was. And perhaps these failures are for good reasons, too. Perhaps parts of macOS and iOS are being radically rewritten to perform better over the long term, and there are frustrating bugs that result. In a sense, this is preferable to the alternative of continuing to add new features to old functionality — I’d be willing to bet that there’s code in iTunes that hasn’t been changed since the Clinton administration.
Even with all that in mind, it still doesn’t excuse the fact that we have to live and work through these bugs every single day. Maybe a security bug like this “root” one doesn’t really affect you, but there are plenty of others that I’m sure do. I’m not deluded enough to think that complex software can ever be entirely bug-free, but I’d love to see more emphasis put on getting Apple’s updates refined next year, rather than necessarily getting them released by mid-September.1 There’s a lot that High Sierra gets right — the transition to APFS went completely smoothly for me, and the new Metal-powered WindowServer process seems to be far more responsive than previous iterations — but there is also a lot that feels half-baked.
Update: It gets worse — based on reports from security researchers on Twitter, this bug is exploitable remotely over VNC and Apple Remote Desktop. So, not only is this bug bad for any Mac left in a room with other people, it’s also bad for any Mac running High Sierra and connected to the internet with screen sharing or other remote services enabled. It’s worth adding a strong password to the root user account if you haven’t already. Thanks to Adam Selby for sending this my way.