Federico Viticci has collected some reactions to the new Twitter for Mac client and it’s not brilliant:
I’ve seen dozens of other people lamenting poor performance, odd behaviors on OS X, and random bugs with Twitter accounts. That doesn’t inspire a lot of confidence, especially after you read that the app was apparently outsourced to developers outside of Twitter. Even more baffling: Twitter Moments – one of Twitter’s biggest product releases of 2015 that got its own (confusing) TV commercial – aren’t supported in the new Mac app.
Outsourced where, you ask? The Verge reports — and it was previously hinted to me — that Black Pixel was behind the new client. They have a pretty good reputation, but their updates to beloved apps (NetNewsWire, and now Twitter) leave much to be desired.
“Hold on there, Nick,” you complain, “you’re saying Twitter for Mac was ‘beloved’?”
Indeed, I am. A long time ago, in a version far, far away, there was an excellent Twitter for Mac client made by Loren Brichter. Not only was it functionally complete, there were little transitions and animations everywhere that made it feel joyous to use.
But now, in the fourth version of Twitter for Mac, each view change is accompanied by a fade so short as to be perfunctory. Images simply pop open at full size. Locations are not aligned with dates. Selecting a DM thread and then returning to the thread list scrolls it back to the top. It’s rough around the edges, yes, but also devoid of personality. In a way, it’s the perfect client for today’s Twitter, Inc.
Me, though? I’m sticking with Tweetbot. At least, until Twitter fully pulls the rug out from third-party clients.
As with Trusteer, this is a browser add-on that ostensibly provides greater user security at the expense of actual security. And, of course, the companies responsible for these breaches will not be held accountable.
Selling snake oil to audiophiles is not only a very profitable business, but one could argue that it isn’t even usually a scam — in most cases, both the sellers and the buyers believe in the benefits being sold. Placebo benefits are real to their observers, and placebo-based demand is still demand.
While audiophiles who demand high-resolution formats are a tiny fraction of all Apple customers, they’re probably a much bigger portion of those who buy a lot of music.
Arment is, of course, right: audiophiles will spend more on music that they perceive to be of a much higher quality. The iTunes Store could have $0.99 tracks for us plebeians with normal ears, and $1.99 tracks for those who believe they have superhuman hearing. Apple could gobble that market up.
If Apple thinks this is worthy of their attention, it ought to be more than just a financial opportunity. I think this rumour has some of its roots in Steve Jobs’ vinyl record collection, and I’m sure there are other audiophile-types among Apple’s executive ranks. At least, I hope there are, because Apple’s side projects are rarely as good as offerings from competitors who live and breathe the product.
That’s not to say that I’m suddenly a convert to 192/24 audio woo. I still think it’s a waste of money to build an audio system to support an audio format with no perceptible improvements. But if Apple wants to cater to people who do believe they’re hearing a difference, they need to treat it better than they currently do their music offerings.
Boy, was that ever a disappointing sentence to write.
I really like my Apple Watch. I wouldn’t go so far as to say I love it, but I like it enough that I don’t plan to stop wearing it anytime soon. I’m very curious to see what the next revision brings to the table, er, wrist. I don’t suspect I’ll be itching to upgrade… until I hear how much thinner and faster it is. (In this case, Apple should be trying to make things thinner.)
And, I would argue, faster — a device that’s supposed to be used in bursts of seconds at a time shouldn’t contradict that with slow loading times.
I’ve worn my Watch every day since I received it in June, and I’ve found it both fun and useful, despite it not being essential. I swam with it in Bali, and cycled with it in Calgary; I’ve responded to texts and calls while cooking or washing dishes, and set timers while doing the same; I’m also more aware of my daily physical activity. I’ve spent less time directly on my phone as I know which emails and texts I need to deal with now versus those that can wait.
After spending that kind of time with it, I’ve realized just how much I like it. It’s occasionally frustrating in the way a first-generation product often is,1 but it’s well-considered overall, in the way that Apple seems to excel at.
If I were to rewind to June, would I click the “buy” button again? Absolutely. I might have even sprung for the stainless steel model, though I’m not sure I would have gone too far up the pricing brackets — the Sport Band really is that nice.
I’m not sure whether I’m upgrading to the second generation next year, largely because I’ve no idea what it will bring. But next year’s model doesn’t have to convince me and the other early adopters to drop $400+ on the latest model; its job will be to further convince those who are on the fence that it’s a valuable, if inessential, device.
I’m currently one of what I gather is a small number of people who are affected by a rather irritating bug that prevents some third-party native apps from launching, too. ↩︎
Great post on the Juniper backdoor from Adam Langley:
Again, assuming this hypothesis is correct then, if it wasn’t the NSA who did this, we have a case where a US government backdoor effort (Dual-EC) laid the groundwork for someone else to attack US interests. Certainly this attack would be a lot easier given the presence of a backdoor-friendly RNG already in place. And I’ve not even discussed the SSH backdoor which, as Wired notes, could have been the work of a different group entirely.
It’s probably necessary to read Langley’s post to fully comprehend this, but here it is in a nutshell: the NSA compromised Dual-EC, which allowed them to potentially predict numbers generated by a “random” number generator. And Juniper used Dual-EC as part of its security efforts, but not in the recommended (read: backdoored) way.
Maybe this infiltration would allow the NSA to monitor data sent over Juniper Networks’ hardware, or perhaps it’s unrelated to them. But the very introduction of any backdoor has significantly depleted the security of Juniper’s hardware.
Some may feel that it’s in the U.S. government’s best interests to be allowed to monitor secure connections for possible illegal activity, but it is technically impossible to create a system that permits access only to American intelligence agencies. If the U.S. is allowed access, why not China? Does that make U.S. intelligence agencies squirm?
As the Apple Watch functions largely as a companion product to the iPhone, it’s hard to use a spike in downloads from most apps as an indicator of the Watch’s popularity. But Craig Hockenberry’s Clicker app doesn’t function on anything but the Watch, making its holiday spike much more telling.
Critics had said that the draft version of the law used a recklessly broad definition of terrorism, gave the government new censorship powers and authorized state access to sensitive commercial data.
The government argued that the measures were needed to prevent terrorist attacks. Opponents countered that the new powers could be abused to monitor peaceful citizens and steal technological secrets.
In the end, the approved law published by state media dropped demands in the draft version that would have required Internet companies and other technology suppliers to hand over encryption codes and other sensitive data for official vetting before they went into use.
Buckley’s reporting runs counter to Ben Blanchard’s, for Reuters, who says that the passed bill does require tech companies to hand over encryption keys. Regardless, this bill doesn’t swerve too much from laws already in place in the United States, especially after CISA was snuck into the budget bill recently signed into law.
Tech companies’ reliance upon China’s manufacturing infrastructure isn’t a problem per se, even in light of this legislation — but even if it were, they wouldn’t necessarily gain an advantage by making products in the U.S. If anything, the troubling details of the initial drafts of this bill and the reception it received from the White House paint a contradictory position to views previously espoused by this and previous American administrations. It’s apparently alright if the U.S. wishes to snoop on the world, but if China does it, it’s a national crisis. There’s only one antidote to that viewpoint, and it’s to cut it off at the source: mass surveillance is not — and should never be — considered okay, by anybody.
I’m a little hesitant to post this because it’s in Korean — most of you speak English — and I can’t track down the author’s name. But I think this is one of the most interesting explanations of the iOS 9 multitasking interface. Per Bing’s translator [sic]:
Why, then, do the same in the plane of the app is not listed in the GUI switch to overlapping method applied with perspective? This leads to can be found in the 3D touch itself. Light objects than when moving heavy objects might have more power when you move, Apple apps screenshots page in-app has given a “heavy” feeling than a page remains to be seen. This is the sense of the weight of the user for the hierarchical levels in the structure in-app screenshots of the app itself rather than the page, the page higher stage allows us to know intuitively that there is. In other words, Apple has introduced a touch gesture iOS a new level of intensity over from 7 to more users by easy to understand and to be able to experience the UI changes.
If I’m reading this right, what the author is saying is that the 3D Touch feature of the iPhone 6S makes it feel like you’re pushing older apps back and away from the display, granting them “weight”.
This is worth taking a look at if only for the excellent diagrams.
Anyway, Netflix is talking about the bitrates for their 1080p videos soon being as low as 2,000 Kbps for the simple stuff. That’s down from the 4,300–5,800 Kbps range they’re using now. And I’m sure they can do that on the low end without any perceivable loss of quality while streaming.
But can Apple and Amazon sell 1080p videos — averaging about 5,000 Kbps now — at bitrates as low as 2,000 Kbps, less than half that average, without a perceived loss of value?
I don’t see why they couldn’t. From my experiences with a non-technical crowd, they don’t care about video bitrate, likely because video isn’t marketed on bitrate but size.1 As long as it says “HD” and it looks “HD”, most people probably won’t care whether the file is 2,000 or 5,000 Kbps.
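The size difference those bitrates imply is easy to quantify. A rough sketch — illustrative arithmetic only, since real streams vary with codec and content:

```python
# Rough file sizes for a two-hour movie at Netflix-style average bitrates.

def stream_size_gb(bitrate_kbps: float, seconds: float) -> float:
    """Convert an average bitrate and a duration to a total size in (decimal) gigabytes."""
    bits = bitrate_kbps * 1000 * seconds
    return bits / 8 / 1e9  # bits -> bytes -> GB

two_hours = 2 * 60 * 60
print(round(stream_size_gb(5000, two_hours), 2))  # current ~5,000 Kbps: 4.5 GB
print(round(stream_size_gb(2000, two_hours), 2))  # proposed 2,000 Kbps: 1.8 GB
```

A two-hour movie drops from about 4.5 GB to about 1.8 GB — a big deal on a mobile data plan, and invisible on the receipt.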
I also know of a lot of people who rip audio from YouTube, transcoding an already poor-quality MP3 track into an even poorer-quality file. ↩︎
…[Apple] has been developing Hi-Res Audio streaming up to 96kHz/24bit in 2016.
The Lightning terminal with iOS 9 is compatible up to 192kHz/24Bit, but we do not have information on the sampling frequency of Apple Music download music. […]
Yet another indication that the analog headphone jack might be a goner.
In my commentary on this rumour, I pointed out the lack of perceptible differences between lossy and lossless audio, but I didn’t address the Lightning connector or this rumour’s intersection with the also-rumoured removal of the headphone jack.
These rumours are easily conflated for lots of reasons, but I think the main one is how confusing the world of digital audio is. There are myriad combinations of file types, compression formats, sampling rates, and bit depths, and that’s before considering the various factors between the sound leaving the device and reaching our ears. Untangling this rumour, however, requires some explanation.
Generally speaking, most music you have on your computer is likely to be in 44.1 kHz, 16-bit files, regardless of whether they’re lossy or lossless. The frequency rating, in kHz, is the sample rate. It determines the highest pitch the file can reproduce, which, per the Nyquist theorem, is precisely half the sample rate: a file with a 44.1 kHz sample rate can store frequencies up to 22.05 kHz. This is well beyond human hearing, which ranges between about 20 Hz and 20 kHz.
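The Nyquist relationship makes the arithmetic for common sample rates trivial — a quick sketch:

```python
# The Nyquist limit: the highest frequency a sample rate can capture is half
# that rate. Common audio sample rates, for reference.

def nyquist_khz(sample_rate_hz: int) -> float:
    """Maximum reproducible frequency, in kHz, for a given sample rate."""
    return sample_rate_hz / 2 / 1000

for rate in (44_100, 48_000, 96_000, 192_000):
    print(f"{rate} Hz -> up to {nyquist_khz(rate)} kHz")
```

Even the lowly CD-standard 44.1 kHz rate clears the ~20 kHz ceiling of human hearing; 96 and 192 kHz capture frequencies no ear can register.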
With age and noise exposure, the human ear’s sensitivity to high frequencies begins to deteriorate. You can test this by using Audacity and generating sine waves beginning at 20 kHz and reducing by 500 Hz or so until you can hear the tone. I’m 25; the upper bound of my hearing is about 19 kHz, which is pretty much normal. An older person who has spent a lot of time surrounded by loud noises — a musician whose career began in the ’60s, for example — will have a much lower sensitivity to higher pitches.
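If you’d rather not install Audacity, the same hearing test can be sketched in a few lines — this just generates the raw samples; writing them to a sound device is left out:

```python
import math

# Generate one second of a sine tone at a given frequency, the same thing
# Audacity's tone generator does with a nicer interface.

def sine_tone(freq_hz: float, sample_rate: int = 44_100, seconds: float = 1.0):
    """Return a list of samples in [-1.0, 1.0] for a pure sine tone."""
    n = int(sample_rate * seconds)
    return [math.sin(2 * math.pi * freq_hz * i / sample_rate) for i in range(n)]

# Step down from 20 kHz in 500 Hz increments, as described above.
test_frequencies = [20_000 - 500 * step for step in range(8)]  # 20,000 ... 16,500 Hz
```

Play each tone in turn; the first one you can actually hear approximates the upper bound of your hearing.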
Then there’s bit depth: typically, 16 or 24 bits. This determines the dynamic range of the recording — in simple terms, the difference between silence and the loudest non-distorting sound. When you increase the volume of a recording with a reduced bit depth — say, 8 to 12 bits — the “noise floor” becomes more noticeable. The pervasive hissing sound you heard when playing cassettes? That’s the noise floor creeping in on an analog format, much as it does on a low-bit-depth digital recording.
In recording studios, 96 kHz or even 192 kHz sample rates and a 24 bit depth is not uncommon because recording engineers, mixers, and producers want the most dynamic range to play with, even if they don’t use it all. That gives them freedom to boost the volume of too-quiet recordings, mix loud and soft sounds together without hearing background hiss, and generally muck around as much as they like. It’s similar to how professional photographers use uncompressed RAW files while shooting and editing, so they have a maximum amount of flexibility and freedom.
Human ears can’t tell the difference between standard sample rates and much higher ones. As we’ve discussed, the sample rate determines the maximum frequency; as the standard 44.1 kHz sample rate of most recordings already allows for the reproduction of sounds beyond the upper bounds of human hearing, the effects of significantly higher sample rates aren’t going to be audible.
Bit depth, on the other hand, has more noticeable real-world effects. A 16-bit recording allows for a 96 decibel dynamic range, but the upper (and very painful) limit of the human ear is around 140 dB. As dB is a logarithmic measurement, that’s far louder than 96 dB, and amplifying a 16-bit recording to fill that range would expose a noticeable noise floor. By contrast, a 24-bit recording allows for somewhere between 110 and 120 dB of real-world dynamic range — the theoretical ceiling is about 144 dB, but converter noise intrudes first — leaving more room between the quietest sounds and silence. Most popular music doesn’t use the full dynamic range that 16-bit recording allows, but jazz, classical, and folk recordings tend to benefit from the increased range of 24-bit audio.
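The relationship between bits and decibels follows from the logarithmic definition of dB — roughly 6.02 dB of dynamic range per bit. A quick check of the figures above:

```python
import math

# Theoretical dynamic range of an n-bit recording: 20 * log10(2^n),
# i.e. about 6.02 dB per bit. Real 24-bit converters fall short of the
# theoretical figure, which is why ~110-120 dB is what's quoted in practice.

def dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range, in dB, for a given bit depth."""
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # ~96.3 dB
print(round(dynamic_range_db(24), 1))  # ~144.5 dB (theoretical ceiling)
```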
Of course, once you have a high-res audio recording, you need to play it, and this is where it all comes back to the Lightning port. An analog audio connection — like the headphone jack — imposes no resolution limit of its own, but the signal it carries is limited by whatever produced it. In the iPhone, that’s the digital-to-analog converter, or DAC.
The one in the iPhone is, as best as anyone can guess, a 16-bit DAC and it probably supports a maximum sample rate of 44.1 kHz. Regardless of the audio format on the device, it’s going to pass through a DAC that supports CD quality audio, but no greater. The HTC One, on the other hand, has a 192 kHz, 24-bit DAC that outputs through the headphone jack.
I explained all of that first because I think it’s important to understand what those two measurements mean. As Macotakara points out, 192 kHz, 24-bit audio is already supported via the Lightning port, but it’s probably intended to facilitate better support for iOS devices as recording and mixing platforms. Apple might wish to expand its use and make lossless tracks available, which should appeal to jazz and classical fans unrepresented by other major music services.
But high-res audio formats do not necessarily foretell the demise of the headphone jack, nor are they something most people will be able to perceive. I really care about audio quality, but I keep virtually all of my music in high-quality lossy compressed formats because the difference is imperceptible. A 24-bit standard would be appealing, but only if modern recordings were mixed and mastered to expand their dynamic range and take full advantage of it. Maybe this is another push towards the demise of the loudness war, but I doubt it will make a difference. And, as I said previously, the loudness of modern recordings is what makes them sound like crap, not the format they’re in.
The Lightning terminal with iOS 9 is compatible up to 192kHz/24Bit, but we do not have information on the sampling frequency of Apple Music download music.
As Eric Slivka of MacRumors points out, this isn’t the first time we’ve heard this rumour:
A year and a half ago, music blogger Robert Hutton claimed Apple was working to roll out high-resolution audio for the iTunes Store, and Mac Otakara made similar claims about an HD Audio format and new hardware being planned for release alongside iOS 8 later that year.
If much of the Apple Music catalogue will be offered in a lossless format, that solves the availability problems of high-res audio. However, it’s still pretty much impossible to distinguish between high-quality lossy and lossless formats, at least as far as most people and most kinds of music are concerned. When paired with Apple’s EarPods, there’s probably no way anyone can tell the difference.
Music often sounds like crap today because producers and mastering engineers mix for volume rather than dynamics, not because songs are typically played in lossy compressed formats. That’s the problem that needs solving, and Apple has influential employees within their ranks who could begin solving it by, say, doubling down on the Mastered for iTunes program. One problem: they are victims and perpetrators of the loudness war, too.
Update: Lossless audio, being an archival format, seems ill-suited for streaming. Perhaps lossless audio becomes the purchasing option, while lossy formats will prevail for streaming?
Well, Reitz is back, this time with an alarming exposé of Mast Brothers chocolates. You’ve probably seen this chocolate in your local specialty food store or similar, where they claim to make it “from bean to bar”.
Is this worth reading because the Mast brothers are being publicly shamed? No. It’s worth reading because it’s a nice four-part bit of solid investigation and journalism.
My favourite game is now available as a native app on iOS. It’s not feature-complete — you can’t log into a GeoGuessr account within the app, for instance — but I couldn’t be happier. The world map is free, and individual maps are unlocked by playing the game or in-app purchase. You should totally download it. They didn’t pay me to say this; I just love the game.
The fact that Apple was able to create custom-designed hardware for just a single feature [the front-facing flash] of the iPhone 6S highlights the expertise it has built in chip design, a technology that has traditionally been the preserve of specialist companies such as Qualcomm or Intel.
Apple shone a spotlight on that capability this week when it promoted Johny Srouji, head of its team of silicon engineers, to its executive ranks, as part of a wider management reshuffle.
This article focuses heavily on the iPhone, but Apple’s rising expertise in chip design has also enabled products like the Apple Watch and the iPad Pro. Their chip designs are still fabricated by Samsung and TSMC, though. I think it’s a question of when — not if — Apple begins fabricating its own chips, and I think that’s going to be sooner rather than later.
Kinda surprised you didn’t make the connection between licensing killing a quality OS problem and Old Apple.
There are two really good points here: first, that the degradation of Microsoft’s reputation at the hands of licensees making crappy products is similar to Apple’s situation in the mid-’90s with the Macintosh clones; and, second, that it’s surprising that I didn’t make that connection.
The problem for Microsoft is that they _can’t_ win back control of their consumer product the way Apple did.
Their enterprise and embedded footprint is way too big.
IMO, the only way they could do it would be to turn Windows into a legacy product and ship their own hardware running a new OS.
Hard numbers are not available for most brands of clones, but it’s generally estimated that they accounted for about 15% of all Mac OS computers sold in 1997…
As numbers are hard to come by, it’s conceivable that Macintosh clones could have accounted for a much larger percentage of Mac OS sales at their peak. I doubt, however, that they ever had a market share as great as Windows licensees enjoy today.
Over the past year or so, I’ve noticed a marked increase in the number of websites that have autoplaying videos. I thought I was completely imagining this, but it turns out to be true. Tim Carmody for the Nieman Journalism Lab:
Video autoplay on the web has been around in different forms for years, but its embrace by Facebook, Twitter, Snapchat, Instagram, and other platforms in the past year is changing its economics and users’ expectations. Many people dislike autoplay — quite a few hate it — but it’s a powerful tool to capture and hold viewers’ attention, it’s a perfect medium for advertising, and it generates huge impression numbers for platforms and publishers. It’s not going away. For video, it’s the vector to the future.
Autoplaying video is loathsome, especially if it includes audio. It’s as unexpected and user-hostile as those websites that had background music fifteen years ago,1 it’s inherently interruptive, and it increases costs for mobile users. If publications thought user backlash was strong after shoving intrusive ads in their faces, just imagine the reaction to this trend.
Remember that crappy MIDI version of a Blink 182 song you’d hear on every other GeoCities page? ↩︎
Everything I’ve heard about Jeff Williams paints him as Tim Cook’s Tim Cook, keeping a close eye on Apple’s supply chain. I didn’t realize that the company hadn’t had a COO since Cook was promoted to CEO, but now they do again.
This, though, is very intriguing:
With added responsibility for the App Store, Phil Schiller will focus on strategies to extend the ecosystem Apple customers have come to love when using their iPhone, iPad, Mac, Apple Watch and Apple TV. Phil now leads nearly all developer-related functions at Apple, in addition to his other marketing responsibilities including Worldwide Product Marketing, international marketing, education and business marketing.
App Store duties were previously Eddy Cue’s purview, as they’re part of Apple’s internet services. Placing their management under Schiller signals, to me, a realization that App Store issues should be grouped with the rest of Apple’s developer relations; they are dissimilar to cloud, media, and payment services. Schiller has a very good relationship with the Apple developer community, and I think — or, at least, hope — that this bodes well for the future of the App Stores.
Even though I could now go whole-hog on iCloud Music Library, I’m not touching it yet. It’s still failing to provide correct metadata for songs from Apple Music, not just ripped or third-party tracks. It’s too bad, because a lot of Apple Music’s features — like offline playback and playlist creation — require an iCloud Music Library subscription.
My metadata system isn’t complex, but I worry that any cloud service will attempt to “correct” it. Worse, I’m scared that it’s going to treat different versions of a song as the same. I’m not prepared to risk its changes because I don’t know how much effort it would take to recover my library, even with a local backup.
That hesitance, I think, indicates that iCloud Music Library is a significant blemish on Apple’s record. Their web services are rarely class-leading, but they should not be this far behind.
An anonymous contributor to Vice attended Yahoo’s end-of-year party (via John Gruber):
The rest of the cavernous room was filled with even more chandeliers and urns and a vintage Rolls Royce. Swinging flapper aerialists pouring champagne towers and Gatsby-esque costumed actors walking around like vintage cigarette girls (but peddling only candy) were everywhere. […]
WebKit on iOS has a 350 millisecond delay before single taps activate links or buttons. WebKit has this delay because we also allow users to double tap to zoom, which is a great way to zoom in on content that is well-sized for large desktop displays, but appears too small on mobile devices. However, when a user has tapped once, WebKit cannot tell if the user intends on tapping again to trigger a double tap gesture. Since double tapping is defined as two taps within a short time interval (350ms), WebKit must wait for this time interval to pass before we’re sure that the user intended to tap only once. […]
On web pages optimized for mobile viewports, elements such as links and form controls are scaled to fit well on smaller screens. As such, double tapping on these elements increases the page scale by only a small amount. Since double tapping provides little value on these web pages, we’ve implemented a mechanism for removing the delay for single taps by disabling double tap gestures.
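The disambiguation logic WebKit describes — wait up to 350 ms after a tap for a second one before committing to “single” — can be sketched as a toy model. This is an illustration of the heuristic, not WebKit’s actual code, and it works on recorded timestamps rather than live events:

```python
# Group tap timestamps (in seconds) into "single" and "double" gestures,
# using WebKit's 350 ms double-tap window described above.

DOUBLE_TAP_WINDOW = 0.350

def classify_taps(timestamps):
    """Return a list of gesture labels for an ordered list of tap times."""
    gestures = []
    i = 0
    while i < len(timestamps):
        if i + 1 < len(timestamps) and timestamps[i + 1] - timestamps[i] <= DOUBLE_TAP_WINDOW:
            gestures.append("double")
            i += 2
        else:
            # No second tap arrived within the window: a single tap -- but in a
            # live system, it's only known to be one after the full 350 ms
            # has elapsed, which is exactly the delay users perceive.
            gestures.append("single")
            i += 1
    return gestures

print(classify_taps([0.0, 0.2]))  # ['double']
print(classify_taps([0.0, 0.6]))  # ['single', 'single']
```

Disabling double-tap-to-zoom on mobile-optimized pages removes the need to wait out the window at all, which is where the responsiveness win comes from.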