As expected and in spite of overwhelming public and business support for net neutrality rules, the FCC just voted along party lines to strip themselves of the power to meaningfully regulate internet service providers. But just because appointed FCC Commissioners like Ajit Pai have no respect for the public, that doesn’t mean this is over.
The first and most immediate course of action will be for net neutrality proponents to pressure Congress to use the Congressional Review Act to pass a resolution of disapproval. This is a mechanism that allows Congress to overrule any regulations enacted by federal agencies. You might remember it’s the tool that the GOP used to eliminate broadband privacy protections earlier this year.
“The CRA is our best option on Capitol Hill for the time being,” said Timothy Karr, a spokesperson for the Free Press Action Fund, an open internet advocacy group. “We’re not interested in efforts to strike a Congressional compromise that are being championed by many in the phone and cable lobby. We don’t have a lot of confidence in the outcome of a legislative fight in a Congress where net neutrality advocates are completely outgunned and outspent by cable and telecom lobbyists.”
A lot more work needs to be done. Title II regulations are an effective and well-rounded way to treat ISPs more like the utility providers they really are, but a bill could be passed that places a Title II-style framework into a modern context for the internet, if there’s enough public pressure to do so. Time for Americans to get to work.
Update: New York Attorney General Eric Schneiderman is suing to block this repeal. He pointed out yesterday that millions of comments on this topic were posted under real people’s names without their knowledge or consent, and that the FCC has refused to allow an investigation into this matter.
The most dramatic cybersecurity story of 2016 came to a quiet conclusion Friday in an Anchorage courtroom, as three young American computer savants pleaded guilty to masterminding an unprecedented botnet — powered by unsecured internet-of-things devices like security cameras and wireless routers — that unleashed sweeping attacks on key internet services around the globe last fall. What drove them wasn’t anarchist politics or shadowy ties to a nation-state. It was Minecraft.
Minecraft may have been the motive and three college students may have been the perpetrators, but the reason this attack was so successful was because so many internet-of-things device manufacturers don’t prioritize security, and nobody really checks to make sure any of these products have been tested for trivial loopholes.
We’re used to extension cords being certified that they won’t burst into flames when you plug them in. Microwaves and cellphones get tested by regulatory bodies to ensure that they won’t fry living organisms. We expect our cars to be built to withstand moderate collisions. These processes don’t prevent all problems, but they do help maintain standards and provide third-party verification that the manufacturer did a good job.
I’m not necessarily arguing that every device and software update ought to go through an extensive pentesting process, but there is a reasonable argument to be made that internet-of-things devices should be subject to a little more scrutiny. The industry is currently not doing a good enough job regulating itself, and their failures can have global effects. Some sort of standards body probably would slow down the introduction of these products, but is the possibility of a global attack on the internet’s infrastructure a reasonable price to pay for bringing a device to the market a little bit faster?
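To make the “trivial loopholes” concrete: the Mirai botnet spread largely by trying factory-default telnet credentials on cameras and routers. Here is a minimal, illustrative sketch of the kind of audit a standards body or manufacturer could run; the credential list and the fake device are made up for the example, not Mirai’s actual dictionary or any real product’s login.

```python
# Illustrative only: flag devices that still accept factory-default
# credentials, the class of loophole Mirai brute-forced at scale.

DEFAULT_CREDENTIALS = [
    ("admin", "admin"),
    ("root", "root"),
    ("admin", "1234"),
    ("root", ""),  # some devices shipped with a blank root password
]

def audit_device(login):
    """Return every factory-default pair the device accepts.

    `login` is a callable (user, password) -> bool standing in for a
    real telnet/SSH login attempt against the device.
    """
    return [pair for pair in DEFAULT_CREDENTIALS if login(*pair)]

# A stand-in for an unpatched camera whose password was never changed:
insecure_camera = lambda user, password: (user, password) == ("admin", "1234")

hits = audit_device(insecure_camera)
print(hits)  # → [('admin', '1234')]
```

A check this simple would have caught a huge share of the devices Mirai conscripted, which is the point: the bar being argued for here is not exhaustive pentesting, just any third-party verification at all.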
When Ajit Pai, the Trump-appointed head of the Federal Communications Commission, announced his intention to roll back Obama-era net-neutrality guidelines, gutting rules that prevent Internet service providers from charging companies for faster access or from slowing down or speeding up services like Netflix or YouTube, he was quick to claim that critics of his plan—Internet freedom groups and smaller Internet companies that can’t afford so-called “fast lanes”—were overreacting. “They greatly overstate the fears about what the Internet will look like going forward,” Pai said on Fox & Friends. Pai’s proposal, which would put in place a voluntary system reliant on written promises from I.S.P.s not to stall competitors’ traffic or block Web sites, essentially serves as a road map to radically reshape the Internet. But like Pai, I.S.P.s and others in the telecom industry have curiously insisted that consumers and smaller companies have nothing to fear when it comes to net-neutrality reform.
ISPs also promise to be at your house between noon and 3:00 PM to check out your slow internet connection which, of course, is because they oversold your neighbourhood and overstated likely end-user speeds; but, sure, let’s trust them to play fair when they have few incentives to do so.
Apple Inc. is designing a new chip for future Mac laptops that would take on more of the functionality currently handled by Intel Corp. processors, according to people familiar with the matter.
The chip, which went into development last year, is similar to one already used in the latest MacBook Pro to power the keyboard’s Touch Bar feature, the people said. The updated part, internally codenamed T310, would handle some of the computer’s low-power mode functionality, they said. The people asked not to be identified talking about private product development. It’s built using ARM Holdings Plc. technology and will work alongside an Intel processor.
The current ARM-based chip for Macs is independent from the computer’s other components, focusing on the Touch Bar’s functionality itself. The new version in development would go further by connecting to other parts of a Mac’s system, including storage and wireless components, in order to take on the additional responsibilities. Given that a low-power mode already exists, Apple may choose to not highlight the advancement, much like it has not marketed the significance of its current Mac chip, one of the people said.
It sounds like this is the chip that is included in the iMac Pro, even though Gurman and King cite lower-power tasks as being the focus of its development. Steven Troughton-Smith in November:
This looks like the iMac Pro’s coprocessor (Bridge2,1) will be an A10 Fusion chip with 512MB RAM […] So first Mac with an A-series chip
Rene Ritchie tweeted today that the A10 has been rebranded “T2” — as in, a successor to the T1 chip in Touch Bar MacBook Pro models.
Cabel Sasser of Panic received an iMac Pro review unit from Apple, and tweeted about the T2’s functionality:
It integrates previously discrete components, like the SMC, ISP for the camera, audio control, SSD control… plus a secure enclave, and a hardware encryption engine.
This new chip means storage encryption keys pass from the secure enclave to the hardware encryption engine in-chip — your key never leaves the chip. And it allows for hardware verification of OS, kernel, boot loader, firmware, etc. (This can be disabled…)
In addition to the enhanced security measures Sasser notes, a couple more things are very exciting about Apple’s gradual rollout of a proprietary coprocessor in their Mac lineup. The T2 sounds like it expands upon some of the input mechanism security measures of the T1, so the keyboard and built-in camera are more secure than in previous implementations. And, as Guilherme Rambo noticed, it can enable “Hey, Siri” functionality on the Mac. But Apple hasn’t enabled that functionality; for now, it’s a question of “when?”.
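The security property Sasser describes — keys that never leave the chip — can be modelled in a few lines. This is a toy sketch, not Apple’s implementation: the class names are invented, and the XOR “cipher” is a stand-in for the T2’s hardware AES engine. The point is the interface shape: callers get encrypt and decrypt operations, never the key material itself.

```python
# Toy model of an enclave: the key is generated inside and only
# ciphertext/plaintext cross the boundary. XOR keystream is purely
# illustrative; real hardware uses AES.
import os

class ToyEnclave:
    def __init__(self):
        self._key = os.urandom(32)  # created inside; never returned

    def _stream(self, data):
        # Repeat the key to the data's length and XOR (toy cipher only).
        key = (self._key * (len(data) // len(self._key) + 1))[:len(data)]
        return bytes(a ^ b for a, b in zip(data, key))

    def encrypt(self, plaintext):
        return self._stream(plaintext)

    def decrypt(self, ciphertext):
        return self._stream(ciphertext)

enclave = ToyEnclave()
secret = b"disk encryption test"
wrapped = enclave.encrypt(secret)
assert wrapped != secret
assert enclave.decrypt(wrapped) == secret  # round-trips; _key never exposed
```

Because software outside the enclave only ever sees `wrapped`, even a compromised OS can’t extract the key — which is why moving SSD encryption into the T2 matters.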
Apple updated its website with news that the iMac Pro is shipping beginning on December 14, 2017. The pro-level iMac features a long list of impressive specifications. The desktop computer, which was announced in June at WWDC, comes in 8-, 10-, and 18-core configurations, though the 18-core model will not ship until 2018. The new iMac can be configured with up to 128GB of RAM and can handle SSD storage of up to 4TB. Graphics are driven by the all-new Radeon Pro Vega, which Apple said offers three times the performance of other iMac GPUs.
Apple provided Marques Brownlee (MKBHD) and another YouTuber, Jonathan Morrison, with review units, and they seem effusively positive, with the exception of some concerns about the machine’s lack of post-purchase upgradability.
Of note, there’s nothing on the iMac Pro webpage nor in either of the review videos about the Secure Enclave that’s apparently in the machine, nor is there anything about an A10 Fusion chip or “Hey, Siri” functionality. These rumours were supported by evidence in MacOS; it isn’t as though the predictions came out of nowhere. It’s possible that these features will be unveiled on Thursday when the iMac Pro becomes available, or perhaps early next year with a software update, but I also haven’t seen any reason for the Secure Enclave — the keyboard doesn’t have a Touch Bar, nor is there Touch ID anywhere on this Mac.
I found a very consistent set of results: a 2X to 3X boost in speed (relative to my current iMac and MacBook Pro 15”), a noticeable leap from most generational jumps, which are generally ten times smaller.
Whether you’re editing 8K RED video, H.264 4K Drone footage, 6K 3D VR content or 50 Megapixel RAW stills – you can expect a 200-300% increase in performance in almost every industry leading software with the iMac Pro.
Most of my apps have around 20,000-30,000 lines of code spread out over 80-120 source files (mostly Obj-C and C with a teeny amount of Swift mixed in). There are so many variables that go into compile performance that it’s hard to come up with a benchmark that is universally relevant, so I’ll simply note that I saw reductions in compile time of between 30-60% while working on apps when I compared the iMac Pro to my 2016 MacBook Pro and 2013 iMac. If you’re developing for iOS you’ll still be subject to the bottleneck of installing and launching an app on the simulator or a device, but when developing for the Mac this makes a pretty noticeable improvement in repetitive code-compile-test cycles.
These are massive performance gains, even at the 10-core level; imagine what the 18-core iMac Pro is going to be like. And then remember that this isn’t the Mac Pro replacement — it’s just a stopgap while they work on the real Mac Pro replacement.
Update: Rene Ritchie says that the A10 Fusion SoC is, indeed, present in the iMac Pro, albeit rebranded as a T2 coprocessor.
Matt Birchler used a Pixel 2 instead of his usual iPhone for a couple of months, and has started publishing pieces about his experience and impressions. It’s worth your time to start with part one and work your way through his thoughts, but this bit from the “Performance and Stability” section stood out to me:
[…] My time with Android has shown it to be anything but the “stable” alternative to iOS. Just 31 days into my time with the Pixel 2, I had to restore my phone to factory settings to fix the errors I was experiencing. In addition to the issues raised in that post, I have also had issues with apps crashing, notifications staying on silent even though I have them set to vibrate, random reboots, and more. After a few weeks I actually stopped reporting my Android bugs to Twitter because it was getting too depressing.
As Birchler points out, everyone’s experience of bugs is different because, even with the relatively limited configuration options available for mobile devices — compared to, say, PCs — there are still billions of possible combinations of languages, WiFi networks, apps, settings, and so on. In light of recent iOS bugs, though, it’s remarkable to recognize that there’s still a lot of work to be done all around. Bugs shake the trust we place in our devices and may even make us consider switching, but user reports like these are a reminder that the alternative isn’t any more stable. I’m not mocking Android users here or the OS itself; it’s just something worth recognizing.
Hey, remember how Andy Rubin temporarily stepped away from Essential after reporters from the Information started asking questions about what they called an “inappropriate relationship” between him and a woman who worked for him while at Google? Theodore Schleifer of Recode has the latest:
Andy Rubin, the founder of smartphone startup Essential, has already returned to his company less than two weeks after it was announced that he took a leave of absence amid questions about an alleged inappropriate relationship.
Even while on leave from Essential, Rubin was still able to show up to work at the same physical workplace. That’s because he did not take a similar leave from Playground Global, the venture capital firm he founded, which shares the same office space as Essential.
It will come as no surprise to you that Playground Global also has an investment in Essential, so what did Rubin’s leave of absence truly mean?
There have been two major versions of the FCC’s transparency requirements: one created in 2010 with the first net neutrality rules, and an expanded version created in 2015. Both sets of transparency rules survived court challenges from the broadband industry.
The 2010 requirement had ISPs disclose pricing, including “monthly prices, usage-based fees, and fees for early termination or additional network services.”
That somewhat vague requirement will survive Pai’s net neutrality repeal. But Pai is proposing to eliminate the enhanced disclosure requirements that have been in place since 2015.
The 2015 disclosures that Ajit Pai’s proposal would undo include transparency on data caps, and additional monthly fees for things like modem rentals. ISPs also wouldn’t have to necessarily make these disclosures public on their own website; they can tell the FCC about them, and the FCC will publish the disclosures on their byzantine website.
Pai has claimed that his proposed rollback will encourage net neutrality practices without regulation because it will require ISPs to be fully transparent. In a shocking turn of events for statements and policies originating from the top minds of this administration, that claim turns out to be a complete lie: ISPs won’t have to be as open and transparent about their pricing and policies, and they have repeatedly stated that they would use tactics like paid prioritization to manipulate network traffic if given the opportunity.
I don’t have an Amazon Prime subscription, so I don’t really have a reason to download this app; but, by all accounts, it is shockingly bad.
Netflix’s app is also pretty awful — it now autoplays a preview of the selected show or movie at the top of the screen, with sound, and I can’t find any way to disable this. It also doesn’t behave like a typical tvOS app: the app navigation is displayed as tiles, shows and movies are also displayed as tiles, and they’re mixed together in an infinitely-scrolling grid.
Hulu isn’t available in Canada, but its tvOS app is apparently poor as well.
Why is it that three of the biggest players in streaming video can’t seem to find the time and resources to build proper tvOS apps? Is it not worth the effort because the Apple TV isn’t popular enough? Is it because these companies simply don’t care?
I don’t think it’s right to stymie experimentation amongst app developers, but tvOS has a very particular set of platform characteristics. If Apple isn’t going to encourage developers’ compliance with those characteristics, it’s up to users to provide feedback and encourage developers like these to do better.
The larger point here, though, is that while there certainly were a number of reasons to be hesitant about supporting Title II or even explicit rules from the FCC a decade ago, enough things have happened that if you support net neutrality, supporting Title II is the only current way to get it. Ajit Pai’s plan gets rid of net neutrality. The courts have made it clear. The (non) competitive market has made it clear. The statements of the large broadband providers have made it clear. The concerns of the small broadband providers have made it clear. If Ben does support net neutrality, as he claims, then he should not support Pai’s plan. It does not and will not lead to the results he claims he wants. It is deliberately designed to do the opposite.
So, yes. For a long time — like Ben does now — I worried about an FCC presenting rules. But the courts made it clear that this was the only way to actually keep neutrality — short of an enlightened Congress. And the deteriorating market, combined with continued efforts and statements from the big broadband companies, made it clear that it was necessary. You can argue that the whole concept of net neutrality is bad — but, if you support the concept of net neutrality, and actually understand the history, then it’s difficult to see how you can support Pai’s plan. I hope that Ben will reconsider his position — especially since Pai himself has been retweeting Ben’s posts and tweets on this subject.
If I didn’t convince you to disagree with Thompson’s misleading piece, maybe Masnick will. If you live in the United States, it’s vital that the FCC — particularly Ajit Pai, Michael O’Rielly, and Brendan Carr — and your representatives hear your concerns.
There are at least two possible explanations for all of these misunderstandings and technical errors. One is that, as we’ve suggested, the FCC doesn’t understand how the Internet works. The second is that it doesn’t care, because its real goal is simply to cobble together some technical justification for its plan to kill net neutrality. A linchpin of that plan is to reclassify broadband as an “information service,” (rather than a “telecommunications service,” or common carrier) and the FCC needs to offer some basis for it. So, we fear, it’s making one up, and hoping no one will notice.
Whether the FCC commissioners are being malicious or truly don’t understand how the internet works, either possibility disqualifies them from running the Commission.
John Herrman, New York Times, on reactions to the power of large tech companies in 2017:
The flip side of these companies’ new dominance is that, not unlike the first industrialists, they turn progress from something that manifests inevitably with the passage of time into something that is being done to us, for reasons that are out of our control but seem unnervingly and suddenly within someone else’s. This is a profound reorientation, which might explain why current anxieties about the internet make for such unlikely bedfellows. Conservative parents with moral complaints about inappropriate videos surfacing in YouTube kids’ channels find themselves inadvertently agreeing with leftist critiques of corporate power. Facebook’s inability to deal in any meaningful way with misinformation on the platform has loosely aligned an elitist critique of democratized news with populist anger at a company led by Silicon Valley elites. There are right-wing anti-monopolists and left-wing anti-monopolists setting their sights on Google and Facebook, claiming dangerous censorship or lack of responsible moderation or, sometimes, both at once — people who want different things, and who have incompatible goals, but who have intuited the same core premise. In these instances, the only people left telling us not to worry — rhyming their responses with the vindicated defenders of the nascent internet — have suspiciously much to lose.
In this future, what publications will have done individually is adapt to survive; what they will have helped do together is take the grand weird promises of writing and reporting and film and art on the internet and consolidated them into a set of business interests that most closely resemble the TV industry. Which sounds extremely lucrative! TV makes a lot of money, and there’s a lot of excellent TV. But TV is also a byzantine nightmare of conflict and compromise and trash and waste and legacy. The prospect of Facebook, for example, as a primary host for news organizations, not just an outsized source of traffic, is depressing even if you like Facebook. A new generation of artists and creative people ceding the still-fresh dream of direct compensation and independence to mediated advertising arrangements with accidentally enormous middlemen apps that have no special interest in publishing beyond value extraction through advertising is the early internet utopian’s worst-case scenario.
I’m going to bring this back around to net neutrality because the FCC’s vote is in about a week and I think it’s worth keeping that in mind. FCC chairman Ajit Pai has said, quite reasonably, that he is concerned about the influence of a handful of tech companies on our greater discourse. Whether that’s because he’s actually concerned about their influence or whether he’s using Silicon Valley as a scapegoat is irrelevant in this discussion. But it is more likely that a company can rise up to compete with, say, Facebook than it is that a startup could compete with a major ISP like Verizon or Comcast1 simply because of the high initial costs associated with building broadband infrastructure.2
Today’s tech giants were born in garages in the shadows of yesterday’s tech giants, so we hear, but major ISPs don’t have a comparable story. Allowing ISPs to treat websites differently or prioritizing traffic for a fee will more deeply entrench the dominance of the largest and wealthiest tech companies, and will make it less likely that an upstart can compete.
Both of these ISPs actually run cable and their own infrastructure unlike, for example, smaller regional ISPs. ↩︎
This is something that I seem to remember the FCC acknowledging in their proposal (PDF), but I haven’t been able to find the passage. If you remember where it is, please let me know. ↩︎
An important update to a story I linked to two weeks ago about an Android system service that was collecting location data even when location services were switched off — according to Tony Romm of Recode, Oracle seeded that story to Quartz as part of a PR campaign against Google:
Since 2010, Oracle has accused Google of copying Java and using key portions of it in the making of Android. Google, for its part, has fought those claims vigorously. More recently, though, their standoff has intensified. And as a sign of the worsening rift between them, this summer Oracle tried to sell reporters on a story about the privacy pitfalls of Android, two sources confirmed to Recode.
To be sure, the substance of Quartz’s story — Google’s errant location tracking — checks out. Google itself acknowledged the mishap and said it ceased the practice. Nor does Oracle stand alone in raising red flags about Google at a time when many in the nation’s capital are questioning the power and reach of large web platforms.
Still, Oracle’s campaign is undeniable. In Washington, D.C., for example, it has devoted a slice of its $8.8 million in lobbying spending so far in 2017 to challenging Google in key policy debates. It has sought penalties against Google in Europe, meanwhile, and it even purchased billboard ads in Tennessee just to antagonize its tech peer, sources said.
It is quite reasonable for people and companies to have questions about Google’s dominance in many online services and mobile operating systems, and to find that Oracle’s dirty tricks campaign somewhat sours this story.
But I don’t necessarily think this reflects poorly on Oracle; if anything, it shakes my confidence in Quartz’s reporting. I don’t know what Quartz’s sourcing attribution guidelines are, but the New York Times’ style guide indicates that a source’s interest in the story should be communicated to readers as candidly as possible. In their story, Quartz did not indicate how they were tipped off to Android’s behaviour.
[…] Interviews with more than two dozen marketers, journalists, and others familiar with similar pay-for-play offers revealed a dubious corner of online publishing in which publicists, ranging from individuals like Satyam to medium-sized “digital marketing firms” that blur traditional lines between advertising and public relations, quietly pay off journalists to promote their clients in articles that make no mention of the financial arrangement.
People involved with the payoffs are extremely reluctant to discuss them, but four contributing writers to prominent publications including Mashable, Inc, Business Insider, and Entrepreneur told me they have personally accepted payments in exchange for weaving promotional references to brands into their work on those sites. Two of the writers acknowledged they have taken part in the scheme for years, on behalf of many brands. Mario Ruiz, a spokesperson for Business Insider, said in an email that “Business Insider has a strict policy that prohibits any of our writers, whether full-time staffers or contributors, from accepting payment of any kind in exchange for coverage.”
There are a couple of different kinds of writers who, according to Christian, took payments in exchange for mentioning or linking to brands in their articles. Some publish to “contributor networks”, which are blogs hosted by major publications but not edited by them. TechCrunch used to have one of those, but they shut it down earlier this year because they noticed an increase in posts that they “strongly suspected were ghost-written by PR”, which should come as no surprise. These contributor networks tend to be filled with self-promotional garbage. I don’t understand what positive effects a contributor network has on an established publication, but it seems like it’s trading away hard-earned authority for cheap traffic.
The more insidious acts Christian profiles are those from writers ostensibly creating articles where a brand pays for very subtle placement:
Yael Grauer, a freelancer who’s written for Forbes and many other outlets, says she’s gotten as many as 12 offers like Satyam’s in a single month, which she always rejects. Some are surprisingly straightforward, like a marketer who simply asked how much she charged for an article in Slate or Wired. Others are coy, like a representative of a firm called Co-Creative Marketing, who heaped praise on her writing before asking whether she could get content published in Forbes or Wired on behalf of a client. Another marketer offered Erik Sherman, a business journalist, $315 per article to mention her client’s landscaping products in Forbes, the Huffington Post, or the Wall Street Journal — though she cautioned that the mentions would need to “not look blatant.” Sherman declined, telling the marketer that the offer was “completely unethical.”
You’d probably expect this kind of thing to be pervasive in Forbes’ contributor network, but if a similar offer were accepted by a writer for an esteemed imprint like the Wall Street Journal, it would undermine your confidence in that publication overall — especially since it’s a business publication, as opposed to something more general-interest.
For what it’s worth, even I — writing at a fairly tiny site — receive offers like these a few times every week. I have never accepted any of them, of course.
Big news today: MarsEdit 4 is out of beta and available for download from the MarsEdit home page and the Mac App Store. This marks the end of a long development period spanning seven years, so it’s a great personal relief to me to finally release it. I hope you enjoy it.
MarsEdit 4 brings major improvements to the app including a refined new look, enhanced WordPress support, rich and plain text editor improvements, automatic preview template generation, and much more.
I’ve been using MarsEdit 4 betas for several months and I love the improvements in this version — particularly, the new Safari extension. Jalkut has created a very clever trial scheme; I highly recommend you take advantage of it if you have a blog and have never tried MarsEdit before. It’s terrific.
What I like about this postmortem is that it’s the script to what is almost the “Every Frame a Painting” episode of “Every Frame a Painting”, particularly in this detail:
In order to make video essays on the Internet, we had to learn the basics of copyright law. In America, there’s a provision called fair use; if you meet four criteria, you can argue in court that you made reasonable use of copyrighted material.
But as always, there’s a difference between what the law says and how the law is implemented. You could make a video that meets the criteria for fair use, but YouTube could still take it down because of their internal system (Copyright ID) which analyzes and detects copyrighted material.
So I learned to edit my way around that system.
If YouTube’s automatic flagging system didn’t exist, it’s likely that “Every Frame a Painting” would feel completely different. Whether it would have been better, I’m not sure, but I think the limitations of YouTube helped birth something truly unique and very, very good.
I don’t think stationary smart speakers represent the future of computing. Instead, companies are using smart speakers to take advantage of an awkward phase of technology in which there doesn’t seem to be any clear direction as to where things are headed. Consumers are buying cheap smart speakers powered by digital voice assistants without having any strong convictions regarding how such voice assistants should or can be used. The major takeaway from customer surveys regarding smart speakers usage is that there isn’t any clear trend. If anything, smart speakers are being used for rudimentary tasks that can just as easily be done with digital voice assistants found on smartwatches or smartphones. This environment paints a very different picture of the current health of the smart speaker market. The narrative in the press is simply too rosy and optimistic.
I’m clearly not the target market for the HomePod, primarily because I live in Canada where the HomePod won’t be for sale at launch.1 I also live in an apartment small enough that I can semi-loudly say “hey Siri” and get a response from my phone on the other side of my place. But I also think that the reason I’m not that enamoured with the HomePod or any smart speaker yet is because I’m a daily Apple Watch wearer, so many of its functions are on my wrist instead of in a tube in my kitchen.
I’m guessing that these products would appeal more — not exclusively, but more — to people who live in larger homes, of course, but also people who don’t typically wear a smartwatch — Apple’s or otherwise.2 I also wonder if smart speakers are an intermediate product between a more traditional computer-user relationship and something that’s more environmental or spatial. If it is, I’d rather throw my hat in with a company that has a strict commitment to user privacy, rather than companies that serve up targeted advertising.
And, if the rollout of Apple News is anything to go by, several years after launch. ↩︎
The HomePod is only $20 more expensive in the U.S. than a Series 3 Apple Watch. ↩︎
Sebastiaan de With, designer of the Halide camera app:
When you shoot JPEG, you really need to get the photo perfect at the time you take it. With RAW and its extra data, you can easily fix mistakes, and you get a lot more room to experiment.
What kind of data? RAW files store more information about detail in the highlights (the bright parts) and the shadows (the dark parts) of an image. Since you often want to ‘recover’ a slightly over or under-exposed photo, this is immensely useful.
It also stores information that enables you to change white balance later on. White balance is a constantly measured value that cameras try to get right to ensure the colors look natural in a scene. iPhones are quite good at this, but it starts to get more difficult when light is tinted.
I’ve been shooting RAW on my iPhone almost exclusively since I received a beta version of Obscura in the summer of last year that used iOS 10’s RAW capture API. More time is needed to make a RAW photo usable than a JPEG out of the camera app, and RAW files take up much more space, but it’s completely worth it. So many of the photos I’ve captured since would have been impossible to make without RAW.
You can try this for yourself: get a manual camera app like Obscura, Halide, or Manual, and download either Lightroom or Darkroom. Capture a scene in RAW, then start playing around with the highlights, shadows,1 and white balance; in Lightroom, you can also adjust individual hues in a scene without degrading the image fidelity. It’s remarkable how much the iPhone’s sensor actually captures, especially in foliage and finer patterns.
If it’s snowy where you live, this is extremely helpful. ↩︎
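The highlight “recovery” de With describes comes down to bit depth and headroom, and a rough sketch makes it concrete. The numbers below are illustrative, not measurements from any iPhone sensor: the idea is that a tone-mapped 8-bit JPEG treats scene value 1.0 as full white, while a 12-bit RAW file keeps roughly a stop of headroom above it, so two over-bright values that both clip in the JPEG stay distinct in the RAW.

```python
# Illustrative sketch of highlight headroom: not a real camera pipeline.
def quantize(value, bits, white_point):
    """Map a scene luminance to a sensor code, clipping at full scale."""
    full_scale = (1 << bits) - 1
    code = round(value / white_point * full_scale)
    return min(code, full_scale)

# Two bright cloud tones, both slightly over JPEG white (1.0):
scene = (1.10, 1.25)

# JPEG: 8 bits, tone-mapped so 1.0 is already full white.
jpeg = [quantize(v, 8, white_point=1.0) for v in scene]
# RAW: 12 bits, with headroom — the sensor saturates around 2.0.
raw = [quantize(v, 12, white_point=2.0) for v in scene]

print(jpeg)  # → [255, 255]: both tones clip; the detail is gone forever
print(raw)   # → [2252, 2559]: still distinct, so pulling exposure down
             #   in post reveals the cloud detail again
```

This is why a slightly over-exposed RAW is recoverable in Lightroom while the same shot as a JPEG just has a white blob where the sky was.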
On iOS 10.1 there were only 4 binaries using Swift. The number of apps and frameworks using Swift grew quite a lot in a year: There are now 20 apps and frameworks using Swift in iOS 11.1 […]
Similarly the number of binaries using Swift grew from 10 in macOS 10.12.1 to 23 in macOS 10.13.1.
It looks like most of the system components built in Swift are entirely new apps, or effectively so, as with Music and Podcasts. But it also appears that Apple is thoroughly porting both operating systems over to Swift. I have no idea how deep that will run — I imagine device drivers, for example, may not be rewritten — but perhaps the goal is to have everything the user interacts with be built in Swift, or something like that.
Whatever Apple’s specific goal may be, the apps they have ported to Swift so far are not little things or developer-specific utilities. These are critical apps that people use all the time. If that’s not eating your own dog food, I don’t know what is.