Without diminishing the effort that’s been put into this new standard, I’m not convinced there’s a plausible rationale for it. It would impose significant costs on type designers, provide no obvious advantage to our customers, and mostly benefit a small set of wealthy corporate sponsors.
Butterick has many well-considered objections to packaging different weights of type into a single font file — OpenType Variations, in a nutshell — but I vehemently disagree with his objections to the OpenType working group’s file size argument. As written by John Hudson, a member of the working group:
A variable font is a single binary with greatly-reduced comparable file size and, hence, smaller disc footprint and webfont bandwidth. This means more efficient packaging of embedded fonts, and faster delivery and loading of webfonts.
As far as I’m concerned, this is one of the best arguments in favour of OpenType Variations, though there are significant problems — see Butterick’s article. But Butterick’s refutation of this argument is incredibly flawed, even in its most basic premise:
“But customers benefit from smaller file sizes too, because that makes web pages faster.” Certainly, that was true in 1996. And some web developers persist with political objections. But with today’s faster connections — even on mobile — optimizing for file size is less useful than ever.
On the contrary, optimizing for file size continues to be paramount, especially on mobile. This is because most mobile connections — particularly in North America — continue to have a monthly data allotment. It would be impolite to serve, say, an entirely textual thousand-word article as a 4 MB document.¹
That is, of course, a problem of today. There is always the possibility that your cellular carrier will suddenly become charitable and allow everyone to use large amounts of data at very low monthly cost, but — if the way ISPs have behaved for the past twenty years is any guide — I foresee an increase in costs to users, not a decrease.
More than that, Butterick refutes his own objections to the working group’s assertion:
For reasons unclear, this claim about network latency has always provoked howls of outrage among the web-dev Twitterati. Folks, let’s work from evidence, not superstition. For example, here’s a quick test I did this week, with home pages ranked in order of load time. As you can see, load time correlates more strongly with number of requests than download size. And Practical Typography beats everyone but the world’s biggest corporation.
I’m seeing the transfer of eleven different font files every time I load a Practical Typography page without caching. Butterick could cut those requests down to just three — one for each typeface used on the page — by using OpenType Variations, and have a faster site as a result.
There are plenty of factors other than raw file size that affect load times, of course: the speed of the host’s connection, what kind of servers they use, the number of requests, the route that the connection takes between host and destination, and more. But optimizing all of these things is absolutely critical if you care about how your site loads for a visitor who happens to have one or two bars or a crappy Midtown connection. Even if they have five bars or are on a gigabit connection, it’s just polite, especially when a site is little more than text.
1. I pay $65 per month for 2 GB of data from my cellular provider. Loading a 4 MB document on my phone would, therefore, cost me about $0.13. Considering that all sorts of background processes and other apps also draw on my cellular data, it is only responsible for web developers to be cognizant of the amount of data required for each page load, and to try to reduce it wherever possible. ↩︎
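The footnote’s arithmetic can be sketched as a quick back-of-the-envelope check. The figures ($65 per month, a 2 GB allotment, a 4 MB page) come from the text above; treating the per-megabyte rate as a simple pro-rata share of the monthly bill is my own assumption:

```python
# Back-of-the-envelope cost of one page load on a capped data plan.
# Figures from the footnote: $65/month for 2 GB; a 4 MB page.
# Pro-rata per-MB pricing is an assumption, not the carrier's actual rate.

monthly_cost_usd = 65.00
monthly_cap_mb = 2 * 1024      # 2 GB expressed in MB
page_size_mb = 4.0

cost_per_mb = monthly_cost_usd / monthly_cap_mb
cost_per_load = page_size_mb * cost_per_mb

print(f"${cost_per_load:.2f} per load")  # roughly $0.13
```

The same arithmetic scales linearly: trimming that 4 MB page to 400 KB cuts the visitor’s cost by an order of magnitude.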