Month: September 2017

Michael Riley, Jordan Robertson, and Anita Sharpe, in a lengthy feature for Bloomberg:

The impact of the Equifax breach will echo for years. Millions of consumers will live with the worry that the hackers — either criminals or spies — hold the keys to their financial identity, and could use them to do serious harm. The ramifications for Equifax and the larger credit reporting industry could be equally severe. The crisis has already claimed the scalp of Richard Smith, the chief executive officer. Meanwhile, the federal government has launched several probes, and the company has been hit with a flurry of lawsuits. “I think Equifax is going to pay or settle for an amount that has a ‘b’ in it,” says Erik Gordon, a University of Michigan business professor.

If you call a $90 million golden parachute a scalping, you can scalp me any time.

I’m struggling to come to grips with the likely long-term ramifications of the Equifax breach. The entire model of the credit reporting industry rests on the idea that it can secure the financial details of millions of people. But the reputation of this entire industry — and, I would argue, of any company that collects sensitive information en masse — has been deeply undermined by this breach and others like it.

Lawsuits are a predictable response. However, even if this attack puts Equifax out of business — and I wholly doubt that it could — the effects of this breach will be felt for decades to come by American consumers.

I know that regulation is a touchy subject, but the kind of data held by companies in pretty much every major industry is far too valuable to allow for anything other than a perfect security record. If we are going to permit mass data retention, there ought to be standards for how this information is secured: the latest patches must be applied immediately, frequent audits must confirm that data centres are secure, and there ought to be steep penalties for any violation. Self-regulation isn’t working, and failures have massive consequences.

Dan Goodin, Ars Technica:

An analysis by security firm Duo Security of more than 73,000 Macs shows that a surprising number remained vulnerable to such attacks even though they received OS updates that were supposed to patch the EFI firmware. On average, 4.2 percent of the Macs analyzed ran EFI versions that were different from what was prescribed by the hardware model and OS version. Forty-seven Mac models remained vulnerable to the original Thunderstrike, and 31 remained vulnerable to Thunderstrike 2. At least 16 models received no EFI updates at all. EFI updates for other models were inconsistently successful, with the 21.5-inch iMac released in late 2015 topping the list, with 43 percent of those sampled running the wrong version.

EFI vulnerabilities are rarely a problem for typical users; they’re more likely to be used for high-value breaches. Still, any security vulnerability is concerning, and the same Mac models are used by high-value targets and college students alike, so it’s important that these holes get patched.

Apple’s statement:

We appreciate Duo’s work on this industry-wide issue and noting Apple’s leading approach to this challenge. Apple continues to work diligently in the area of firmware security and we’re always exploring ways to make our systems even more secure. In order to provide a safer and more secure experience in this area, macOS High Sierra automatically validates Mac firmware weekly.

More information on the firmware validation built into High Sierra from the Eclectic Light Co:

The new utility eficheck, located in /usr/libexec/firmwarecheckers/eficheck, runs automatically once a week. It checks that Mac’s firmware against Apple’s database of what is known to be good. If it passes, you will see nothing of this, but if there are discrepancies, you will be invited to send a report to Apple, with the following dialog.

If you are running a real Mac, rather than a ‘Hackintosh’, Kovah asks that you agree to send the report. This will allow eficheck to send the binary data from the EFI firmware, preserving your privacy by excluding data which is stored in NVRAM. Apple will then be able to analyse the data to determine whether it has been altered by malware or anything else.

But, per Goodin, this won’t necessarily prevent the kinds of problems described in Duo’s report:

The new macOS version introduces a feature called eficheck, but Duo Security researchers said they have found no evidence it warns users when they’re running out-of-date EFI versions, as long as they’re official ones from Apple. Instead, eficheck appears only to check if EFI firmware was issued by someone other than Apple.

Moreover, eficheck depends on the user running High Sierra, though it appears that it made an appearance in Sierra 10.12.4. As Rich Smith and Pepijn Bruienne of Duo point out, older versions of MacOS are receiving security updates, but not necessarily firmware updates:

  • The security support provided for EFI firmware depends on the hardware model of Mac. Some Macs have received regular EFI updates, some have only been updated after particular vulnerabilities have been discovered, others have never seen an update to their EFI.

  • The security support provided for EFI firmware also depends on the version of the OS a system is running. A Mac model running OS X 10.11 can receive distinctly different updates to its EFI than the same Mac model running macOS 10.12. This creates the confusing situation where a system is fully patched and up to date with respect to its software, but is not fully patched with respect to its EFI firmware — we called this software secure but firmware vulnerable.

Again, it’s unlikely that you are at risk here. You’re probably not interesting enough to the kinds of entities that exploit firmware vulnerabilities. I hope that this research motivates Apple to ensure patches are rolled out more consistently across the board, and it would be awesome if eficheck could validate firmware more thoroughly in a future version of MacOS.
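
In the meantime, if you’re curious about your own machine, you can run the same check manually rather than waiting for the weekly scan. Here’s a rough Swift sketch that simply shells out to the eficheck binary at the path quoted above; the --integrity-check flag is my assumption based on third-party write-ups, so the exact invocation may differ.

```swift
import Foundation

// Shell out to Apple's eficheck tool (path taken from the article quoted above) and print
// whatever it reports. The --integrity-check flag is an assumption from third-party write-ups.
let process = Process()
process.launchPath = "/usr/libexec/firmwarecheckers/eficheck/eficheck"
process.arguments = ["--integrity-check"]

let pipe = Pipe()
process.standardOutput = pipe
process.standardError = pipe

process.launch()
process.waitUntilExit()

let data = pipe.fileHandleForReading.readDataToEndOfFile()
print(String(data: data, encoding: .utf8) ?? "")
print("eficheck exited with status \(process.terminationStatus)")
```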

Charlie Warzel and Emma Loop, Buzzfeed:

But Twitter’s disclosures did not impress some lawmakers. After the meeting, Sen. Mark Warner, the lead Democrat on the committee, told reporters the discussion was “deeply disappointing,” calling Twitter’s presentation “inadequate” in almost every way.

“The presentation that the Twitter team made to the Senate Intel staff today was deeply disappointing,” Warner said. “The notion that their work was basically derivative based upon accounts that Facebook had identified showed an enormous lack of understanding from the Twitter team of how serious this issue is, the threat it poses to democratic institutions, and again begs many more questions than they offered. Their response was frankly inadequate on almost every level.”

As tech companies play an increasing role in democratic processes worldwide, a regular theme has been their reluctance to admit to their own influence in a legal context. They’re perfectly happy to trot out the old Silicon Valley trope of changing the world and brag to candidates about the effectiveness of advertising on their platforms when it suits them. But when it’s time for them to be introspective about their own responsibilities, they suddenly clam up and claim that they can’t possibly have influence. They’re just “platforms”; they’re merely allowing a public forum for “all ideas”.

But their employees — generally young, generally male, and frequently white — write the algorithms that preference some of these ideas over others, recommend other users to follow, or surface different news articles. When you consider that they’re doing this for hundreds of millions — or even billions — of users around the world, that’s an enormous influence.

I’m sure these companies are thrilled to have such a significant role in our lives. But they aren’t taking responsibility for that.

I don’t like the way Ajit Pai has been running the FCC. I may be Canadian, but the FCC’s influence on worldwide telecom policy is such that I feel obligated to encourage my American readers to think critically about how well Pai has been running the Commission. Fortunately, if you are not a fan of the way he’s running the show, you can do something about it.

Ex-FCC counselor Gigi Sohn, in a column for the Verge:

The Senate vote on Pai is imminent. When it happens, it will be a stark referendum on the kind of communications networks and consumer protections we want to see in this country. Senators can choose a toothless FCC that will protect huge companies, allow them to further consolidate, charge higher prices with worsening service, and create a bigger disconnect between broadband haves and have-nots. Or, they can vote for what the FCC is supposed to do: protect consumers, promote competition, and ensure access for all Americans, including the most vulnerable. It shouldn’t be a hard decision, and what we’ve seen over the past eight months makes the stakes clear.

How do you help encourage the Senate to vote against Pai? Say it with me now: “call your Senator”. While you’re at it, just add the Senate’s switchboard to your iPhone’s favourite contacts list.

Arielle Pardes, Wired:

One year and three cities later, the Museum of Ice Cream has graduated to cult status on Instagram. More than 241,000 people follow its page, and countless more have posted their own photos from within the space. (Instagram doesn’t show how many photos have been posted at a particular geotag, but there are over 66,000 images with the #museumoficecream hashtag.) All those grams have made the Museum of Ice Cream a coveted place to be: In New York, the $18 tickets to visit — 300,000 in total — sold within five days of opening. At its San Francisco location, which opened this month, single tickets went up to $38. The entire six-month run sold out in less than 90 minutes.

[Co-founder Maryellis Bunn] denies that Instagram played a significant role in how she shaped the museum. “I don’t think that social is what is driving what the Museum of Ice Cream does,” she says. Yet it’s hard to walk through the space and imagine it as anything but a series of Instagram backdrops. One room in the San Francisco space is filled with giant cherries and marshmallow clouds; in LA, there’s a room with strings of pink and yellow bananas strewn from the ceiling. Visitors are allotted about 90 minutes to explore the museum, but it’s hard to imagine what you’d do during that time if you weren’t taking photos.

My interpretation of the Museum of Ice Cream is that it’s an expression of unadulterated excitement — a fantasy made real. If it were presented in a pre-Instagram — even pre-photography — world, I think visitors would still get a hell of a lot of joy out of the fantastical nature of swimming in sprinkles. Still, it was created in a world where we all have a camera and an internet connection in our pants, and I think it’s a little disingenuous for Bunn to neglect the role of Instagram in its success — the team behind it features a “#MOIC” gallery on the installation’s website.

Nevertheless, I see parallels between this piece and Casey Newton’s from earlier this year about Instagram-friendly interior design. When I linked to that, I wrote that these interiors still feel like they’re embracing social media only at a surface level, and I see the same thing happening with the Museum of Ice Cream and the Color Factory. Both sure seem to embrace the role that Instagram can play in enjoying and promoting the installations, but neither seems to take advantage of its photo-friendliness beyond what it looks like.

I would love to see artists pushing the use of Instagram beyond promotion. What if Instagram were integral to the experience of the artwork? What if an artwork explored how some of us are prone to sharing our experiences like they’re trading cards? What about using an Instagram-friendly installation to demonstrate the disconnect between our curated-for-Instagram selves and our more private reality? I think exploring topics like these would turn photogenic installations from novelties into critical artworks.

Jen Wieczner, Fortune:

Equifax said Tuesday that as a condition of [Richard Smith’s] retirement, he “irrevocably” forfeits any right to a bonus in 2017, an amount that under normal circumstances would have totaled more than $3 million — the bonus he received in 2016 — according to the company’s retirement policy.

But the CEO is still set to collect about $72 million this year alone (including nine months’ worth of his $1,450,000 salary), plus another $17.9 million over the next few years. That’s when the rest of Smith’s stock compensation hits a few important milestones or “vests,” allowing Smith to essentially put it in his bank account. Altogether, it adds up to a total potential paycheck of more than $90.1 million, according to Fortune’s calculations based on Equifax securities filings.

Smith is the third Equifax executive who has been allowed to “retire” instead of being fired for allowing the exposure of personal information of virtually every American who has ever applied for a cellphone contract, a credit card, a mortgage, or any other loan. Wieczner reports that Equifax may still retroactively change the conditions under which Smith’s employment was terminated, but no executive who oversaw a breach of trust as serious as this should be allowed to “retire” and collect their severance. That’s outrageous.

Ross Benes, Digiday:

According to data from comScore, the publishers that pivoted to video this summer have seen at least a 60 percent drop in their traffic in August compared to the same period from a year ago. Mic went from 17.5 million visitors in August 2016 to 6.6 million visitors in August 2017, according to comScore. The decline at Vocativ was even more drastic as it went from 4 million visitors in August 2016 to 175,000 visitors in July 2017. By August 2017, Vocativ’s traffic had shrunk enough that comScore couldn’t detect it. Over the past six months, the Alexa ranks of Vocativ, Fox Sports and Mic have also plummeted.

Heidi N. Moore, Columbia Journalism Review:

Publishers must acknowledge the pivot to video has failed, find out why, and set about to fix the reckless pivots so that publishers focus on good video. It should be original, clever, entertaining, and part of a balanced multimedia approach to digital journalism that includes well-written, well-reported stories, strong data and graphics, and good art.

Moore’s article is killer — a well-considered dressing-down of publishers that rely on lazy video techniques to try to replace high production value journalism.

From an Apple support document:

Even if you don’t enroll in Face ID, the TrueDepth camera intelligently activates to support attention aware features, like dimming the display if you aren’t looking at your iPhone or lowering the volume of alerts if you’re looking at your device. For example, when using Safari, your device will check to determine if you’re looking at your device and turns the screen off if you aren’t. If you don’t want to use these features, you can open Settings > General > Accessibility, and disable Attention Aware Features.

And from a security white paper published today (PDF):

To improve unlock performance and keep pace with the natural changes of your face and look, Face ID augments its stored mathematical representation over time. Upon successful unlock, Face ID may use the newly calculated mathematical representation — if its quality is sufficient — for a finite number of additional unlocks before that data is discarded. Conversely, if Face ID fails to recognize you, but the match quality is higher than a certain threshold and you immediately follow the failure by entering your passcode, Face ID takes another capture and augments its enrolled Face ID data with the newly calculated mathematical representation. This new Face ID data is discarded after a finite number of unlocks and if you stop matching against it. These augmentation processes allow Face ID to keep up with dramatic changes in your facial hair or makeup use, while minimizing false acceptance.
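
To make that behaviour a little more concrete, here is a loose sketch of the augmentation logic as I read it. Everything in it, from the type names to the thresholds to the ten-unlock expiry, is invented for illustration; this is my paraphrase of the white paper, not Apple’s implementation.

```swift
// A loose, invented sketch of the augmentation behaviour described in Apple's white paper.
// None of these names, thresholds, or limits are Apple's; they only exist to illustrate the flow.
struct FaceTemplate {
    var representation: [Float]   // the stored "mathematical representation"
    var unlocksRemaining: Int?    // nil = the permanent enrolment; otherwise discarded after N unlocks
}

final class FaceIDSketch {
    private var templates: [FaceTemplate]
    private let unlockThreshold: Float = 0.9    // score needed to unlock
    private let augmentThreshold: Float = 0.7   // lower bar, used only when the passcode follows a failure

    init(enrolment: [Float]) {
        templates = [FaceTemplate(representation: enrolment, unlocksRemaining: nil)]
    }

    func attemptUnlock(capture: [Float], correctPasscodeFollowedFailure: Bool) -> Bool {
        let score = bestMatchScore(for: capture)

        if score >= unlockThreshold {
            // Successful unlock: keep the new capture for a finite number of further unlocks.
            templates.append(FaceTemplate(representation: capture, unlocksRemaining: 10))
            expireStaleTemplates()
            return true
        }

        if score >= augmentThreshold && correctPasscodeFollowedFailure {
            // A near miss immediately followed by the correct passcode:
            // augment the enrolled data so recognition keeps up with changes in appearance.
            templates.append(FaceTemplate(representation: capture, unlocksRemaining: 10))
        }
        return false
    }

    private func bestMatchScore(for capture: [Float]) -> Float {
        // Stand-in for the real comparison, which happens inside the Secure Enclave.
        return 0
    }

    private func expireStaleTemplates() {
        templates = templates.compactMap { (template) -> FaceTemplate? in
            guard var remaining = template.unlocksRemaining else { return template }
            remaining -= 1
            return remaining > 0
                ? FaceTemplate(representation: template.representation, unlocksRemaining: remaining)
                : nil
        }
    }
}
```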

Apple also provides information in that white paper about a Face ID Diagnostics function that users can opt into, which will record all Face ID unlock attempts as images for seven days and can optionally be sent to Apple for analysis.

I have written here before that I have no idea whether Face ID is going to be good enough in most circumstances to replace Touch ID. Outside of the lucky Apple employees who are using an iPhone X as their regular carry device, nobody truly knows. But from everything I’ve seen in Apple’s documentation and everything I’ve heard from those who have been using Face ID on a daily basis, it sounds like the real deal for secure, reliable, and fast facial recognition.

Earlier today, Craig Hockenberry tweeted that he hadn’t received any stand reminders since upgrading to WatchOS 4. I’m usually good about hitting my stand goal without the reminders, but yesterday, I didn’t hit my stand goal for the first time in seventy-five days, according to Activity++. I realized today that I wasn’t reminded after being stationary for over four hours.

Patrick McConnell saw the same issue with his Apple Watch, but he found a fix:

When I first updated to Watch OS4 on my original series 0 watch I didn’t get any reminders to stand. I checked the stand reminders setting was in fact set correctly and scoured the internet for other possible issues. This problem persisted even after updating to my new series 3 watch.

I don’t recall where I came across this tip, but the answer seems to be go into the health app on your phone and access your profile by clicking the icon in the upper right of the screen. From there set the wheelchair option to No.

By default, the wheelchair option is “Not Set”. This is probably a trivial bug to fix, but until it is, this silly workaround should re-enable standing reminders if you’re also affected by this bug.

Brian Stucki:

Text replacement syncing is completely broken. Sometimes it works, sometimes it doesn’t. Sometimes it will only sync back old snippets that you have deleted. Sometimes the sync will work one direction, but not the other. Every time I ask about this on Twitter, it brings a strong response of similar experiences.

[…]

From my own experience, syncing of all other data via iCloud has really improved. Notes, Calendar, address book, reminders, photos, etc all sync almost instantly across all devices.

What is so special/not special about Text Replacement snippets that makes it so hard?

I know a bunch of people have been passing this link around today, but I thought I’d throw my bit in, too, because a few friends and I were chatting about this in Slack just this weekend. It’s truly astonishing that seemingly the buggiest part of iCloud is syncing plain text strings. As one person quipped in Slack, it’s amazing that I can make dozens of edits to a RAW photo and see that reflected nearly instantaneously on all my devices, but changes to text replacements remain entirely unreliable.

I used the word “astonishing” because I truly mean it. iCloud is a long way from its bug-riddled past, and features like iCloud Photo Library have worked nearly flawlessly for me since they launched. Greg Pierce’s sources say that text replacement still uses the old (and deprecated) iCloud Core Data APIs. I imagine that it’s one of the last things that does — this year’s iOS and MacOS releases migrated Safari bookmark syncing to an updated format. It’s long past the time when text replacement syncing should have been fixed, but there’s no time like the present.

Update: Apparently, if all devices under a single Apple ID have been upgraded to the latest versions of MacOS and iOS, text replacement syncing will use CloudKit instead of iCloud Core Data. Over time, we will see how much of a role the underlying technology played in its unreliability.

Update: A clarification on the above — an Apple spokesperson emailed John Gruber to state that text replacement syncing will switch to CloudKit with an update. No word on whether that’s a back-end update or a software update. On a potentially related note, the first beta of iOS 11.1 was pushed to developers today.
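
For a sense of what the CloudKit side of this might look like, here is a hypothetical sketch of saving a single snippet as a CloudKit record. The record type and field names are my invention, not Apple’s actual schema; the point is only that the data involved is tiny.

```swift
import CloudKit

// Hypothetical sketch only: the "TextReplacement" record type and its fields are my invention,
// not Apple's actual schema. It just shows how little data is involved in syncing one snippet.
let database = CKContainer.default().privateCloudDatabase

let record = CKRecord(recordType: "TextReplacement")
record["shortcut"] = "omw" as NSString
record["phrase"] = "On my way!" as NSString

database.save(record) { savedRecord, error in
    if let error = error {
        print("Sync failed: \(error)")
    } else {
        print("Saved \(savedRecord?.recordID.recordName ?? "record")")
    }
}
```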

David Lazarus, Los Angeles Times:

The ad opens with quick cuts of creepy-looking hackers in sinister surroundings. A serious male voice asks: “Is your personal information already being traded on the dark Web?”

Then the imagery brightens — a sunny kitchen, a family playing with a fluffy white dog. “Find out with Experian,” says a friendly female voice. “Act now to help keep your personal information safe.”

Consumers’ and lawmakers’ attention is rightly focused at the moment on the security breach involving Equifax, which left millions of people facing a very real possibility of fraud and identity theft.

But the recent ad from rival Experian highlights a more troublesome aspect of credit agencies — their use of questionable methods to spook people into buying services they may not need and, in so doing, giving the companies permission to share data with marketers and business partners.

Baiting practices like these are pretty gross in virtually every context, but they’re particularly intolerable in an industry that is supposed to monitor your finances and handle your sensitive information delicately. If Experian — or any other company that uses similar tactics — were proud of the spam that you can expect to receive after giving them your email address, don’t you think they would point that out somewhere more obvious than thousands of words deep inside a terms of service agreement?

Of course, similarly-buried agreements are exactly how Experian and Equifax get their data in the first place. A common misconception I’ve seen and heard is that we never agreed to their collection of information — Jimmy Kimmel was among many who claimed that. But that isn’t exactly true. Your credit card agreement probably contains something similar to the language in mine (PDF):

Credit Reporting Agencies and Other Lenders – For a credit card, line of credit, loan, mortgage or other credit facility, merchant services, or a deposit account with overdraft protection, hold and/or withdrawal or transaction limits, we will exchange Information and reports about you with credit reporting agencies and other lenders at the time of and during the application process, and on an ongoing basis to review and verify your creditworthiness, establish credit and hold limits, help us collect a debt or enforce an obligation owed to us by you, and/or manage and assess our risks.

So if you’ve ever applied for a car loan, or a mortgage, or have a cellphone or internet subscription, you probably agreed to allow the provider of that service to submit your information to Equifax, Experian, and TransUnion — the big three credit reporting firms. I don’t think that’s necessarily a good reason — as Lazarus notes, there’s a lot of text in most terms of service agreements, and it’s pretty unfair for us to be expected to interpret their language as a lawyer would.

Matthew Panzarino, TechCrunch:

Apple is switching the default provider of its web searches from Siri, Search inside iOS (formerly called Spotlight) and Spotlight on the Mac. So, for instance, if Siri falls back to a web search on iOS when you ask it a question, you’re now going to get Google results instead of Bing.

[…]

The search results include regular ‘web links’ as well as video results. Web image results from Siri will still come from Bing, for now. Bing has had more than solid image results for some time now so that makes some sense. If you use Siri to search your own photos, it will, of course, use your own library instead. Interestingly, video results will come directly from YouTube.

I have a lot of questions about this announcement. Most of all, I wonder about Apple’s justification — their statement said, in part, that switching to Google “as the web search provider for Siri, Search within iOS and Spotlight on Mac will allow these services to have a consistent web search experience with the default in Safari”. But if consistency is what they’re aiming for, why does Siri on the Mac use Google for all searches except image searches, which still use Bing? In fact, if consistency is truly what is desired here, why don’t Siri and Spotlight match the search engine the user has selected for Safari?

The most obvious reason why this isn’t the case — and why this change was made today — is that Google’s expanded presence across Apple’s platforms is a condition of their agreement with Apple.

Also, one thing from Microsoft’s statement to TechCrunch:

Bing has grown every year since its launch, now powering over a third of all the PC search volume in the U.S., and continues to grow worldwide.

That’s unbelievable. I mean that literally — I cannot believe that a third of U.S. PC searches are made through Bing. There’s no citation for this, but Statista’s consolidated data from April indicates that Bing’s market share was around 23% in the U.S. at the time. I can’t imagine that Google has ceded ten percentage points of the market to Microsoft in the past four months, and nobody I know willingly uses Bing, so I wonder how this is being measured by both Statista and Microsoft.

Patrick Wardle of penetration testing firm Synack posted a short video of this security hole in action. In short, it appears that the only requirement is for the user to download and execute an unsigned application; after that, the user’s Keychain is dumped in plain text.

Thomas Fox-Brewster of Forbes spoke with Wardle about the vulnerability:

“Most attacks we see today involve social engineering and seem to be successful targeting Mac users,” he added. “I’m not going to say the [keychain] exploit is elegant – but it does the job, doesn’t require root and is 100% successful.”

That’s a hell of a combination.

This is being described in several places as a High Sierra-specific problem. It isn’t; Wardle has clarified on Twitter that other versions of MacOS are also vulnerable.

Update: Wardle has also stated on Twitter that signed apps could potentially be vehicles for distributing this malware, too — it’s not difficult to imagine a circumstance similar to last year’s incident when ransomware was briefly attached to copies of Transmission.

Roman Loyola of Macworld got a statement from Apple on this:

“macOS is designed to be secure by default, and Gatekeeper warns users against installing unsigned apps, like the one shown in this proof of concept, and prevents them from launching the app without explicit approval. We encourage users to download software only from trusted sources like the Mac App Store, and to pay careful attention to security dialogs that macOS presents.”

Users are inundated with dialog boxes and security warnings — surely Apple knows that very few people actually read them.1 And, again, I stress that this malware could be attached to a totally legitimate signed app. Apple could invalidate the developer’s certificate if something like this were to be discovered in the wild, but that doesn’t mean that the security issue doesn’t exist. They have to be working on a fix for this, too, right?


  1. The only effective way I’ve seen of presenting security warnings is the one that Safari displays when you try to visit an address marked as a possible phishing domain. It requires the user to click the “Show Details” button and actually read the text to find the link to visit the site. ↥︎

Ricky Mondello, in a Twitter thread of notable Safari 11 improvements on MacOS and iOS (via Michael Tsai):

Safari on iOS 11 will share the canonical link for a page, which can improve the experience of sharing a “mobile” website.

This is a great feature. Unfortunately, it is restricted to Safari; Apple News still shares apple.news links, which I find problematic.

For whatever reason, Apple has designed their News URLs to be unintelligible strings of random characters — for example, https://apple.news/Aj3TLC1DoQ7ubOyx_-Kwc9A. That means that the publication and article topic are completely obscured. Do you know that the link I pasted here will be safe for work, or from a reliable publication, or cover a topic you’re interested in? In short, can you trust that link? I certainly don’t think that’s possible.

Publications on Apple News have similarly sketchy-looking URLs. Pixel Envy’s is https://apple.news/TAjcS0c5sRV2HYftzmJ6UMQ. Unlike article URLs, those links don’t redirect to the publication’s website on desktop computers; visiting that link on my desktop simply displays a notice that it’s only available on Apple News.

To make matters worse, iOS 11 seems to have a bug where copying an Apple News link from the sharing sheet copies the URL twice, concatenated into a single string, which, of course, breaks the link.

This is a solvable problem. Let’s assume that Apple would prefer to share apple.news links rather than the canonical URL for marketing purposes — it’s not a great reason, and I believe that the canonical URL should always be shared, but let’s just stick with that argument. Let’s also assume that the sharing sheet bug will be fixed. Apple could update apple.news links to include the publication name and article title, plus a unique article ID to prevent collisions, since these URLs would presumably be generated automatically. For example, the link above could be apple.news/national-geographic/[unique-ID]/alligators-attack-and-eat-sharks. Of course, publications would simply be the first-level directory — apple.news/national-geographic/ — and should redirect to the publication’s regular website when the Apple News app isn’t detected.
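
To illustrate how simple this could be, here is a quick sketch of a generator for links in that format. The slugging rules and the article ID are placeholders for whatever Apple actually uses.

```swift
import Foundation

// Hypothetical generator for readable Apple News links of the form suggested above:
// apple.news/<publication-slug>/<unique-id>/<article-title-slug>
func slug(_ text: String) -> String {
    let lowered = text.lowercased().folding(options: .diacriticInsensitive, locale: .current)
    let words = lowered.components(separatedBy: CharacterSet.alphanumerics.inverted)
    return words.filter { !$0.isEmpty }.joined(separator: "-")
}

func appleNewsURL(publication: String, title: String, articleID: String) -> URL? {
    return URL(string: "https://apple.news/\(slug(publication))/\(articleID)/\(slug(title))")
}

// The ID here is invented; in practice it would be whatever unique token Apple already assigns.
let example = appleNewsURL(publication: "National Geographic",
                           title: "Alligators Attack and Eat Sharks",
                           articleID: "Aj3TLC1D")
print(example?.absoluteString ?? "invalid URL")
// https://apple.news/national-geographic/Aj3TLC1D/alligators-attack-and-eat-sharks
```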

I filed this as a bug back when Apple News launched and it was closed rather quickly as a duplicate. That was about two years ago. I’m sure there is a very sound technical reason why Apple News shipped with such a terrible URL design originally, and probably a decent reason — scale, perhaps? — why it hasn’t been fixed since. But it ought to be, and soon.

John Paczkowski of Buzzfeed interviewed Phil Schiller and Johnnie Manzari of Apple:

[…] When I ask Schiller about the evolution of the iPhone’s camera, he acknowledges that the company has been deliberately and incrementally working towards a professional-caliber camera. But he quickly follows up with an addendum that tells you most everything you need to know about Apple and camera design: “It’s never just ‘let’s make a better camera,'” he says. “It’s what camera can we create? What can we contribute to photography?”

I love this sentiment. The physics of light and the pocket-friendliness of smartphones mean that an iPhone will never truly replace a full-frame camera. But some of the things that were previously the exclusive domain of specific hardware — a shallow depth of field and lighting effects, for instance — can increasingly be modelled in software. Apple’s interpretation is not perfect, but it’s damn good.

In related news, DxOMark has given the iPhone 8 Plus the highest rating of any smartphone camera they’ve tested, but I think their scoring is of dubious reliability.

First of all, I think that applying numerical scores to subjective or perceived qualities is terrible, so I’m already not a fan of their tendency to split hairs between a phone that’s an 88 and one that’s a 90. How can anyone possibly decide that one phone is two points better than another?

But, more critically, DxOMark didn’t bother testing the iPhone 7 Plus last year; in fact, they didn’t test it until last week, just one day before this year’s iPhones were announced. They rationalized this by saying that they were updating their testing protocol to cover things like ultra-low-light performance and newer software features like simulated depth-of-field.

Yet, despite their reservations about testing phones that make heavy use of software enhancements to improve image quality, they tested the Google Pixel just a couple of weeks after the iPhone 7 Plus was released. No question about it — the Pixel takes great photos and, much like dual-camera iPhones, that’s due in part to the machine learning work it does to boost image quality. In fact, not only did DxOMark test it, they felt comfortable crowning it the best smartphone camera, a feat which Google touted extensively in their marketing materials for the Pixel.

Now, I don’t think there was any collusion with Google or any nonsense like that. Some people believe that DxOMark’s updated protocol conveniently aligns with Apple’s camera priorities, but I don’t believe there’s any favouritism going on there either — their updated test suite simply reflects the changing reality of these products. But I think that DxOMark somewhat soiled their credibility with such an enormous lag in testing the 7 Plus, without great reason to do so.

One of the iPhone reviews I look forward to most every year is Austin Mann’s; it’s also the review that makes me the most envious. The new Portrait Lighting feature is particularly impressive, especially in Mann’s use. Also of note:

During my briefing with the Apple team, they mentioned I should expect to see improvements in how the iPhone 8 Plus meters for specific scenes like sunsets and concerts, and they also mentioned it should focus more accurately on fast moving objects.

I asked them if they had given these improvements a name and their answer was simple: “It’s a smarter sensor.” I noticed these subtle improvements every time I shot the sky and in the tack sharp images I captured of birds in super low light. It’s hard to describe with words, but it is a smarter sensor, indeed.

Though Mann quotes Apple as saying that it’s the sensor that’s responsible, I bet it’s helped a lot by some of the ISP improvements in the iPhone 8 as well.

Matthew Panzarino, reviewing the iPhone 8 for TechCrunch:

Noise reduction (NR) is the process that every digital camera system uses to remove the multi-colored speckle that’s a typical byproduct of a (relatively) tiny sensor, heat and the analog-to-digital conversion process. Most people just call this “grain.”

In previous iPhones this was done purely by software. Now it’s being done directly by the hardware. I’d always found Apple’s NR to be too “painterly” in its effect. The aggressive way that they chose to reduce noise created an overall “softening,” especially noticeable in photos with fine detail when cropped or zoomed.

One of the reasons I switched to shooting RAW on my iPhone is to have control over noise reduction. It’s especially noticeable in photos of trees and other foliage — here’s the same photo shot as a JPG and as a RAW file with my iPhone 6S.

Now that new iPhones are being delivered, I’m very interested to see how JPG and HEIF files perform compared to RAW, and whether noise reduction can still be disabled by third-party camera apps.
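
For reference, third-party apps request RAW through AVFoundation’s photo capture API. This is a minimal sketch; session setup, authorization, and the delegate callback that actually receives the DNG data are omitted, and the exact types can vary between SDK versions.

```swift
import AVFoundation

// Minimal sketch of how a third-party camera app requests a RAW (DNG) capture.
// Session setup, authorization, and the delegate callback that receives the RAW data are
// omitted, and exact types can vary between SDK versions.
final class RawCaptureSketch: NSObject, AVCapturePhotoCaptureDelegate {
    let photoOutput = AVCapturePhotoOutput()

    func captureRaw() {
        // If the device offers no RAW pixel formats, RAW capture isn't available at all.
        guard let rawFormat = photoOutput.availableRawPhotoPixelFormatTypes.first else {
            print("RAW capture not supported on this device")
            return
        }
        let settings = AVCapturePhotoSettings(rawPixelFormatType: rawFormat)
        photoOutput.capturePhoto(with: settings, delegate: self)
        // The RAW sample buffer arrives via AVCapturePhotoCaptureDelegate, where it can be
        // converted to DNG data and saved; the camera's own noise reduction isn't baked in.
    }
}
```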

Maciej Cegłowski, reacting to the deluge of articles claiming that Amazon’s algorithms are suggesting “bomb-making supplies”:

The real story in this mess is not the threat that algorithms pose to Amazon shoppers, but the threat that algorithms pose to journalism. By forcing reporters to optimize every story for clicks, not giving them time to check or contextualize their reporting, and requiring them to race to publish follow-on articles on every topic, the clickbait economics of online media encourage carelessness and drama. This is particularly true for technical topics outside the reporter’s area of expertise.

And reporters have no choice but to chase clicks. Because Google and Facebook have a duopoly on online advertising, the only measure of success in publishing is whether a story goes viral on social media. Authors are evaluated by how individual stories perform online, and face constant pressure to make them more arresting. Highly technical pieces are farmed out to junior freelancers working under strict time limits. Corrections, if they happen at all, are inserted quietly through ‘ninja edits’ after the fact.

There are plenty of critical pieces that can be written about the dangers of machine learning and algorithmic biases. Thing is, though, those stories aren’t about Amazon.

This part of Benjamin Clymer’s review of the Apple Watch Series 3 stood out to me:

So again, the Swiss were dismissive of the Apple Watch because it’s not even a watch, right? How could someone who appreciates a fine timepiece ever want a disposable digital device on their wrist?

Still, we now have smartwatches from two of the three big luxury watch groups, and likely more to come. And that’s before we actually talk about sales numbers of Apple versus the traditional players or the fact that all of theirs use what is the equivalent of an off-the-shelf caliber in Android OS while Apple’s is, to borrow a term they’ll understand, completely in-house. Ironic, really.

Recall, if you will, Tim Cook’s slide during the Series 3’s unveiling indicating that the Apple Watch is now the bestselling watch in the world by revenue. Recall, too, Ed Colligan’s now-infamous dismissal of the then-rumoured iPhone:

“PC guys are not going to just figure this out. They’re not going to just walk in.”

The Apple Watch has, very quietly, become a hit product. There have been plenty of doubters about its potential — yours truly included, by the way, shortly after it was announced — but, now, I see Apple Watches everywhere. I’m sure you do too.

One more thing that Clymer wrote caught my eye:

And if Apple did want to have some visual cue to let others know you’ve copped the new hotness with that cellular bizness inside, why make it a red dot, a logo well known and loved by a brand with which many consumers of “luxury digital products” are well acquainted – Leica? Hell, Apple designers Jony Ive and Marc Newson even collaborated on a Leica for the Red Charity Auction in 2013. Again, the red dot isn’t a huge deal, but I’d love to get the background on this. Why that and why there?

I’m also confused about the red dot. I don’t find it revolting; I do find it ostentatious. Some configurations of the original gold Edition model also featured a red dot on the Digital Crown, and I didn’t care for it much there, either.

But, more to the point, I have a Leica, and my camera was not the first thing I thought of when I saw the red dot on the Watch. I would also like to understand why it’s red, why it’s on the Digital Crown, and why there’s anything at all to indicate that a particular Watch is an LTE model, especially since only the aluminum model even comes in a non-LTE version.

Update: Matthew Achariam points out that Tim Cook’s personal Apple Watch has always had a red dot on its Digital Crown. Interesting.

Max Rudberg:

iPhone X and its curved screen is the most exciting iOS UI design challenge in many years. However, there’s not a lot of time for developers to adjust their apps to this new form.

These are explorations on how certain design patterns can be adapted to the new screen. I’ll use findings in our own apps as an example.

This is a terrific piece. Rudberg makes heavy use of floating UI elements, as does Apple in many parts of iOS: notifications, Siri panels, and the card-like layout of parts of Music, Podcasts, and Mail. These elements seem very naturally tailored for a near-bezel-free display that can switch off individual pixels for perfect black areas; I’m a little surprised floating elements aren’t used even more extensively or encouraged by the HIG.
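
For what it’s worth, here is a trivial UIKit sketch of the kind of floating card Rudberg describes: nothing more than a rounded rectangle constrained above a true-black background, which is exactly the sort of element an OLED panel flatters. The layout values are arbitrary.

```swift
import UIKit

// Purely illustrative: a "floating" card of the kind Rudberg describes, sitting on a
// pure-black background so an OLED panel can switch those pixels off entirely.
final class FloatingCardViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = .black   // true black: unlit pixels on the iPhone X's display

        let card = UIView()
        card.backgroundColor = .white
        card.layer.cornerRadius = 16    // rounded corners echo the display's own curved edges
        card.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(card)

        NSLayoutConstraint.activate([
            card.leadingAnchor.constraint(equalTo: view.leadingAnchor, constant: 16),
            card.trailingAnchor.constraint(equalTo: view.trailingAnchor, constant: -16),
            card.bottomAnchor.constraint(equalTo: view.safeAreaLayoutGuide.bottomAnchor, constant: -16),
            card.heightAnchor.constraint(equalToConstant: 240)
        ])
    }
}
```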