Search Results for: h.264

Adobe:

Since Adobe will no longer be supporting Flash Player after December 31, 2020 and Adobe will block Flash content from running in Flash Player beginning January 12, 2021, Adobe strongly recommends all users immediately uninstall Flash Player to help protect their systems.

Mike Davidson:

Flash, from the very beginning, was a transitional technology. It was a language that compiled into a binary executable. This made it consistent and performant, but was in conflict with how most of the web works. It was designed for a desktop world which wasn’t compatible with the emerging mobile web. Perhaps most importantly, it was developed by a single company. This allowed it to evolve more quickly for awhile, but goes against the very spirit of the entire internet. Long-term, we never want single companies — no matter who they may be — controlling the very building blocks of the web. The internet is a marketplace of technologies loosely tied together, each living and dying in rhythm with the utility it provides.

Most technology is transitional if your window is long enough. Cassette tapes showed us that taking our music with us was possible. Tapes served their purpose until compact discs and then MP3s came along. Then they took their rightful place in history alongside other evolutionary technologies. Flash showed us where we could go, without ever promising that it would be the long-term solution once we got there.

I am not as rosy-eyed about Flash as Davidson. Most of the Flash-based websites I remember loaded slowly, performed poorly, and were hard to use. I remain conflicted about a more interactive web and the entire notion of websites as applications, and I find it hard to be so kind to a plug-in that was responsible for so many security and stability problems.

But I do appreciate its place in history. Streaming video in the pre-Flash era was a particularly painful mix of codecs, each supported by only one of RealPlayer, Windows Media Player, or QuickTime. Flash video players allowed the web to standardize around H.264, eventually without requiring an SWF-based decoder.

It is impossible to know if we would have ended up with rich typography, streaming video players, full web applications, and online games without Flash — and, in the case of the latter two, Java. Regardless of my ambivalence, the web that we have today is rich, universal, and accessible, and much of that groundwork was catalyzed by Flash.

One of the bigger mysteries associated with the hack of Jeff Bezos’ iPhone X is how, exactly, it was breached. A report yesterday by Sheera Frenkel in the New York Times appeared to shed some light on that:

On the afternoon of May 1, 2018, Jeff Bezos received a message on WhatsApp from an account belonging to Saudi Arabia’s crown prince, Mohammed bin Salman.

The two men had previously communicated using WhatsApp, but Bezos, Amazon’s chief executive, had not expected a message that day — let alone one with a video of Saudi and Swedish flags with Arabic text.

The video, a file of more than 4.4 megabytes, was more than it appeared. Hidden in 14 bytes of that file was a separate bit of code that most likely implanted malware, malicious software, that gave attackers access to Bezos’ entire phone, including his photos and private communications.

The detail attributing the breach to fourteen bytes of malware was entirely new information, and not reported elsewhere. But I’m linking here to the Chicago Tribune’s syndicated copy because the version currently on the Times’ website no longer makes the same specific claim:

The video, a file of more than 4.4 megabytes, was more than it appeared, according to a forensic analysis that Mr. Bezos commissioned and paid for to discover who had hacked his iPhone X. Hidden in that file was a separate bit of code that most likely implanted malware that gave attackers access to Mr. Bezos’ entire phone, including his photos and private communications.

Despite this material change, there is no correction notice at the bottom of the article. The forensic report (PDF) acknowledges that “the file containing the video is slightly larger than the video itself”, but does not cite a specific figure. It does, however, state that the video file is 4.22 MB, not the “more than 4.4” reported by the Times.

I know this seems ridiculously pedantic, but I want to know how this discrepancy can be explained. The UN press release also does not contain any more specific details. Is this just a weird instance of miscommunication that hasn’t been fact-checked? Or is this perhaps news that hasn’t been fully confirmed? For example, is there another forensic report that hasn’t yet been made public?

This matters, I think, because it could indicate whether the vulnerability is in the H.264 MP4 video decoder on iOS, or in something specific to the WhatsApp container. If the former is true, this isn’t just something that WhatsApp users need to watch out for.

It used to be the case that vulnerabilities like these were kept extremely close to the vest and only used on specific high-value targets. But, ever since we found out that China was attacking Uyghur iPhone users broadly, I’m no longer as convinced that not being a prominent individual is enough to avoid being a target.

Update: Ben Somers points out that 4.22 MiB roughly converts to 4.4 MB, which may be the source of that part of the discrepancy. The fourteen bytes are still unaccounted for.
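For what it’s worth, the unit conversion itself checks out; here’s a quick sanity check in Python, using the standard definition of 1 MiB as 1,048,576 bytes:

    mib_in_bytes = 4.22 * 1024 ** 2        # 4.22 MiB expressed in bytes
    decimal_mb = mib_in_bytes / 1_000_000  # the same figure in decimal megabytes
    print(round(decimal_mb, 2))            # prints 4.42, i.e. "more than 4.4 megabytes"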

Also, it’s worth mentioning that one reason that I wanted to draw attention to this story is because the Times often fails to post correction notices for online stories that have been updated after publication. I think this practice is ridiculous.

Update: A paragraph later in the story references the fourteen-byte mystery, now with more context:

The May 2018 message that contained the innocuous-seeming video file, with a tiny 14-byte chunk of malicious code, came out of the blue, according to the report and additional notes obtained by The New York Times. In the 24 hours after it was sent, Mr. Bezos’ iPhone began sending large amounts of data, which increased approximately 29,000 percent over his normal data usage.

This wasn’t in the story last time I checked. There still isn’t a corrections or updates notice appended to the Times article. Thanks to Lawrence Velázquez for bringing it to my attention.

John Voorhees, MacStories:

Apple updated its website with news that the iMac Pro is shipping beginning on December 14, 2017. The pro-level iMac features a long list of impressive specifications. The desktop computer, which was announced in June at WWDC comes in 8, 10, and 18-core configurations, though the 18-core model will not ship until 2018. The new iMac can be configured with up to 128GB of RAM and can handle SSD storage of up to 4TB. Graphics are driven with the all-new Radeon Pro Vega, which Apple said offers three times the performance over other iMac GPUs.

Apple provided Marques Brownlee (MKBHD) and another YouTuber, Jonathan Morrison, with review units, and they seem effusively positive, with the exception of some concerns about the machine’s lack of post-purchase upgradability.

Of note, there’s nothing on the iMac Pro webpage nor in either of the review videos about the Secure Enclave that’s apparently in the machine, nor is there anything about an A10 Fusion chip or “Hey, Siri” functionality. These rumours were supported by evidence in MacOS; it isn’t as though the predictions came out of nowhere. It’s possible that these features will be unveiled on Thursday when the iMac Pro becomes available, or perhaps early next year with a software update, but I also haven’t seen any reason for the Secure Enclave — the keyboard doesn’t have a Touch Bar, nor is there Touch ID anywhere on this Mac.

Update: Filmmaker and photographer Vincent Laforet:

I found a very consistent set of results: a 2X to 3X boost in speed (relative to my current iMac and MacBook Pro 15”), a noticeable leap from most generational jumps that are generally ten times smaller.

Whether you’re editing 8K RED video, H.264 4K Drone footage, 6K 3D VR content or 50 Megapixel RAW stills – you can expect a 200-300% increase in performance in almost every industry leading software with the iMac Pro.

Mechanical and aerospace engineer Craig Hunter:

Most of my apps have around 20,000-30,000 lines of code spread out over 80-120 source files (mostly Obj-C and C with a teeny amount of Swift mixed in). There are so many variables that go into compile performance that it’s hard to come up with a benchmark that is universally relevant, so I’ll simply note that I saw reductions in compile time of between 30-60% while working on apps when I compared the iMac Pro to my 2016 MacBook Pro and 2013 iMac. If you’re developing for iOS you’ll still be subject to the bottleneck of installing and launching an app on the simulator or a device, but when developing for the Mac this makes a pretty noticeable improvement in repetitive code-compile-test cycles.

These are massive performance gains, even at the 10-core level; imagine what the 18-core iMac Pro is going to be like. And then remember that this isn’t the Mac Pro replacement — it’s just a stopgap while they work on the real Mac Pro replacement.

Update: Rene Ritchie says that the A10 Fusion SoC is, indeed, present in the iMac Pro, albeit rebranded as a T2 coprocessor.

Among the many insightful observations in Jean-Louis Gassée’s Monday Note for today, there’s this:

CarPlay replicates your iDevice’s screen as H.264 video spewed through an intelligent Lightning cable connected to your car’s USB port.

Remember all of the bitching and moaning about how changing to the Lightning port was a hassle, and how the proprietary nature of it was so dreadful?

My favourite pet topic is back in the news. Janko Roettgers, GigaOm:

YouTube will be demonstrating 4K video at CES in Las Vegas next week, with a twist: The Google-owned video service will be showing off ultra high-definition streaming based on VP9, a new royalty-free codec that Google has been developing as an alternative to the H.265 video codec that’s at the core of many other 4K implementations.

There are two things to unpack in this: YouTube streaming in 4K, and the use of the new VP9 codec Google is developing.

The first isn’t really anything new — you can already find loads of videos on YouTube which stream in 4K. But most of the existing 4K videos on YouTube — indeed, most videos of any kind on YouTube — are dual-encoded in VP8 (WebM) and H.264. If you’ve seen YouTube’s HD offerings, you know that the video quality isn’t great: everything is extremely compressed, so while these videos are ostensibly “HD” resolution, they’re really murky. It’s the same story with 4K.

The question, then, is not only whether a different codec will make a substantial quality difference, but whether that codec will be playable at all. VP8 has been around since 2008 and has been owned by Google since 2010, yet it is used almost exclusively by Google itself. With VP9, though, Google insists things will go better:

This time around, Google has lined up a whole list of hardware partners to kickstart VP9 deployment. YouTube will show off 4K streaming at the booths of LG, Panasonic and Sony. And on Thursday, YouTube released a list of 19 hardware partners that have pledged to support VP9, including chipset vendors like ARM, Intel, Broadcom and Marvell as well as consumer electronics heavyweights like Samsung, Sharp and Toshiba.

Roettgers makes it sound like this is a different approach from the one Google took with VP8. However, Mashable’s Google I/O liveblog from 2010 suggests otherwise:

Google is back on stage, discussing partners. Opera, Skype, Adobe, Nvidia, Logitech, Qualcomm, Texas Instruments, Theora, Brightcove, and others are part of the [WebM] project.

Despite those large, influential partners, VP8 never really caught on outside of the Google sphere. Skype is the only other major user of the codec, but they also encode in H.264. Based on what I’ve seen of VP9 so far, and the support H.265 has already received, I don’t see this playing out much better. VP9 may have the support of television manufacturers this time around, but there is no existing 4K spec which does not require H.264 and, eventually, H.265 support. Likewise, those two codecs support the Ultra HD colour space. It seems like the codec for 4K has already been decided.

I know you love it when I discuss video codecs. After all, what other subject could be more exhilarating?

Mozilla CTO Brendan Eich:

As I noted last year, one of the biggest challenges to open source software has been the patent status of video codecs. The most popular codec, H.264, is patent-encumbered and licensed by MPEG LA, under terms that prevent distributing it with open source products including Firefox. Cisco has announced today that they are going to release a gratis, high quality, open source H.264 implementation — along with gratis binary modules compiled from that source and hosted by Cisco for download. This move enables any open source project to incorporate Cisco’s H.264 module without paying MPEG LA license fees.

Pretty good solution, right? H.264 is, by far, the most popular video codec on the web, so it’s good that Firefox can finally play it in a way that’s largely in line with Mozilla’s philosophy. They’re going to build it into Firefox, and any open source project can use Cisco’s BSD-licensed H.264 implementation.

Free software proponent Monty Montgomery doesn’t see it as great news, though (ugly-ass LiveJournal warning):

Let’s state the obvious with respect to VP8 vs H.264: We lost, and we’re admitting defeat. Cisco is providing a path for orderly retreat that leaves supporters of an open web in a strong enough position to face the next battle, so we’re taking it. […]

Fully free and open codecs are in a better position today than before Google opened VP8 in 2010. Last year we completed standardization of Opus, our popular state-of-the-art audio codec (which also happens to be the best audio codec in the world at the moment). Now, Xiph.Org and Mozilla are building Daala, a next-generation solution for video.

In simpler terms, the decade-plus-long battle to have an open, free, and patent-less video codec on the web has, once again, failed. Therefore, the proponents of such measures are going to try again with a brand new and different codec.

At what point does someone realize that this is a fruitless endeavour?

H.264 is extremely popular, and compatible products (read: nearly every piece of video-playing gear or software released in the past five years) will transition to HEVC, which significantly improves upon the compression/quality ratio of H.264, making it suitable for much higher-resolution video.

Meanwhile, there’s VP8 (the codec inside WebM), which is used almost solely by Google. This succeeded Theora, of which the only major user is Wikipedia. Theora is old, and has a relatively poor quality-to-size ratio — this much was admitted when VP8 was released. VP8 still doesn’t compete with H.264 in terms of quality. Now, some in the open source community want to put both of these behind them as they develop a brand new codec, with the goal of beating HEVC and VP9.

I simply don’t see this endeavour being meaningfully more successful than past efforts to create an open source, free, and patent-less1 video codec for the web.


  1. The claims of Theora and WebM being unencumbered by patents are also suspicious. ↥︎

Janko Roettgers, GigaOm:

To enable HD, and prepare for this plugin-free future, Google quietly started to transition Hangouts from the H.264 video codec to VP8, an open and royalty-free video codec the company released back in 2010.

VP8 is still being used almost exclusively by Google. Skype is the only other major player using VP8, and they were doing so in 2011; why has it taken this long for Google to switch their own product to their own codec?

Also, why aren’t they using their newer VP9 format?

Stephen Shankland, for CNet:

“If you adopt VP9, as you can very quickly, you’ll have tremendous advantages over anyone else out there using H.264 or VP8, (its predecessor),” said VP9 engineer Ronald Bultje in a talk here at Google’s developer conference. “You can save about 50 percent of bandwidth by encoding your video with VP9 vs. H.264.”

From what I can find, the only widely-used products with VP8 implemented are YouTube and Skype, but the former also supports H.264-encoded video. The latter must also partially support H.264, because its iOS app appears to use the Core Video framework. Why would VP9 be adopted any more widely than VP8 has been (or, for that matter, get any more play than Theora, Lagarith, OpenAVS, or any of the other free video codecs)?
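As an aside, the 50 percent figure quoted above is the sort of claim anyone can roughly spot-check. Here is a minimal sketch in Python, assuming a copy of ffmpeg built with libx264 and libvpx-vp9 and a hypothetical source clip called input.mov; note that CRF values do not map directly between the two encoders, so a proper comparison would also measure quality with a metric like SSIM:

    # Rough sketch, not a rigorous benchmark: encode the same source clip
    # with H.264 and with VP9, then compare the sizes of the two outputs.
    # The "-an" flag drops audio so only the video streams are compared.
    import os
    import subprocess

    src = "input.mov"  # placeholder source file

    subprocess.run(["ffmpeg", "-y", "-i", src, "-c:v", "libx264",
                    "-crf", "23", "-an", "out_h264.mp4"], check=True)
    subprocess.run(["ffmpeg", "-y", "-i", src, "-c:v", "libvpx-vp9",
                    "-crf", "33", "-b:v", "0", "-an", "out_vp9.webm"], check=True)

    h264_size = os.path.getsize("out_h264.mp4")
    vp9_size = os.path.getsize("out_vp9.webm")
    print(f"VP9 output is {100 * (1 - vp9_size / h264_size):.0f}% smaller than H.264")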

Furthermore, why doesn’t Google spearhead the adoption of the codecs they proselytize by encoding their Play Store’s movie library in VP8 or VP9 format?1 Why doesn’t Google recommend VP8 or VP9 to their Android developers?

Standards are great; that’s why we have so many of them.


  1. Google doesn’t publicly acknowledge what video format their Play videos use; however, their requirement for Flash Player strongly suggests H.264 encoding. ↥︎

Oddly, Glass doesn’t support Google’s own WebM video format — only H.263 and H.264 videos are supported. These cards seem extremely easy to create, since they’re basically web pages.

Well, unfortunately there are browsers like Firefox that refuse to implement the defacto-standard in video codecs in their browsers. […] So what’s the next best course of action?  Well, you can either encode your videos in three different codecs to cover all your bases, or just in H.264 and use the JavaScript implementation to play it.

This is too clever.