The goals of art and commerce are basically opposite. Art fills our soul; it gives us emotional life. I have rarely heard anyone describe commerce similarly.

At its most ideal, the business of art enables more of it in greater variety, while allowing those who create it a reasonable living. This has rarely been the case. There are hundreds of years of unbelievably wealthy patrons building their cultural cachet by supporting artists of their particular taste and at their behest. More recently, recording contracts are routinely described as “brutal”, “a raw deal”, “predatory”, and “exploitative”. That has been generally true for all artists, but has been particularly pronounced for marginalized — and, to be even more specific, black — artists since the recording industry’s origins.

In “Mood Machine”, released earlier this year, Liz Pelly adds a complicating question: what is the relationship between art and commerce when massive data collection becomes routine?

The origins of streaming music may be found first in piracy and later in Rhapsody, but Spotify is where modern streaming platforms truly began. While Spotify’s founders tend to describe a noble birth, Pelly points to a 2015 interview with co-founder Martin Lorentzon in which he describes the idea to build a targeted advertising platform first. How it would acquire users was an open question — “[s]hould it be product search? Should it be movies, [or ‘Godfather’], or audiobooks? And then we ended up with music”. That is not necessarily a bad thing. What is bad, though, is that Spotify reportedly began with an unlicensed library and made money on the back of it. That combination does not sound to me like the result of a love of music.

Sadly, the interesting storytelling does not reliably continue. Admittedly, part of the reason for this is my personal distaste for Pelly’s style of writing, something which I would not normally mention — surely not everyone is a fan of my writing style, either — but feel compelled to because of how intrusive I found it. Far too many sections and chapters in this book end in a tedious turn of phrase: “Under the gaze of streaming surveillance, the exchange is never truly one-to-one;”; “It makes sense that as the digital world has grown to feel more like a shopping mall, it is also sometimes the very companies making music for shopping malls that are flooding its soundtrack”. Another part of the problem is the way this book is organized. Each chapter reads like an individual essay dedicated to a specific problem with Spotify — algorithmic suggestions, vibe-based playlists, and changing business terms, to name a few. What that looks like in practice is a great deal of repetition. I count at least seven chapters, of eighteen, dedicated to background and unfocused listening.

Part of the problem, however, is that Pelly has been documenting these phenomena for years in articles published at the Baffler. I am familiar with the extraordinary amount of “chill” music, trendy sound palettes, the relationship between mood-based music and targeted advertising, and the comparisons to Uber because these articles were all published a minimum of six years ago. That I remember these articles is sometimes a testament to Pelly’s reporting; at other times, it reminds me of things I previously found questionable but could not quite articulate why.

One thing I remember from one article, for example, is its attempt to define a “Spotify sound”. This was revisited in the book in the “Streambait Pop” chapter (page 82):

By the time of Spotify’s IPO in 2018, it seemed that the peak playlist era had produced an aesthetic of its own. That year, one pop songwriter and producer told me that [Billie] Eilish had become a type of poster child for what was being called a “Spotify sound,” a deluge of platform-optimized pop that was muted, mid-tempo, and melancholy. He told me it had become normal for him to go into a session and hear someone say they wanted to write a “Spotify song” to pitch around to artists: “I’ve definitely been in circumstances where people are saying, ‘Let’s make one of those sad girl Spotify songs.’ You give yourself a target,” he said. It was a formula. “It has this soft, emo-y, cutesy thing to it. These days it’s often really minimal and based around just a few simple elements in verses. Often a snap in the verses. And then the choruses sometimes employ vocal samples. It’s usually kind of emo in lyrical nature.”

Pelly’s argument is built primarily around the works of Charlotte Lawrence, Sasha Sloan, and Nina Nesbitt, none of whom I am familiar with. But their music — “Normal” and “Psychopath” are both named in the article — sounds like a lot of pop music trends of the time: a blend of genres that emerges kind of beach-clubby, pretty breathy, and electronics-heavy but not particularly danceable. Pelly quotes an indie rock label owner calling it “emotional wallpaper”.

In re-reading Pelly’s “streambait” article for this piece, I found this paragraph a good distillation of many arguments made throughout the book:

Musical trends produced in the streaming era are inherently connected to attention, whether it’s hard-and-fast attention-grabbing hooks, pop drops and chorus-loops engineered for the pleasure centers of our brains, or music that strategically requires no attention at all—the background music, the emotional wallpaper, the chill-pop-sad-vibe playlist fodder. These sounds and strategies all have streambait tricks embedded within them, whether they aim to wedge bits of a song into our skulls or just angle toward the inoffensive and mood-specific-enough to prevent users from clicking away. All of this caters to an economy of clicks and completions, where the most precious commodity is polarized human attention — either amped up or zoned out—and where success is determined, almost in advance, by data.

Much like the similar essays in “Mood Machine”, very little of this feels like it is directly traceable to Spotify or streaming generally. There has long been pop music that is earwormy, and pop music that is kind of a silence-filler. When radio was more influential, the former could be found on the contemporary hit radio station and the latter on any number of adult contemporary variants.

Coalescing around a particular sound is also not a Spotify phenomenon. The early 1990s brought an onslaught of Nirvana imitators, and the late 1990s polished the sound so hard it removed any trace of edge and intrigue it once held. The early-2000s dominance of Coldplay made way for the mid-2000s Timbaland production craze, which led to a wave of late-2000s dance and club pop, which was followed in the early-2010s by an Americana revival. This is the power of radio. Or, it was the power of radio, at least. You could describe any of the chart-topping songs in similar terms as Pelly uses for “streambait”: “attention-grabbing hooks”, “chorus-loops engineered for the pleasure centers of our brains”, and “chill-pop-sad-vibe playlist fodder”. Should this be blamed on the precise listener analytics dashboard available to artists on Spotify? I am not sufficiently convinced.

Pelly describes a discussion she had with two teenagers outside an all-ages venue as they struggled to describe the “aesthetic rap” show they were attending, a genre which seems to be a slower and spacier take on rage (page 118):

The kids I spoke to outside Market seemed genuinely enthused. But as I headed home, I was struck by how palpably it seemed that most of those conversations were more concerned with a niche vibe fandom — which no one could even really explain — than the artists themselves.

I cannot imagine this is a new phenomenon. Some people develop a deep fascination with music and seek releases from specific artists. But plenty are only looking for a sound and a vibe. It is why retailers and magazines gave away sampler CDs in the mid-2000s, scratching a generic indie rock itch.

What Pelly keeps describing in these chapters is a kind of commodification of cool, none of which is new or unique to Spotify. It is the story of popular culture writ large: things begin as cool for a small group, are absorbed into broader society, and are sold back to us by industry. This most often happens to marginalized communities who find community in vocabulary, music, visual art, and dance, and then it gets diluted as it becomes mainstreamed.

As noted, Pelly dedicates considerable space in the book to chill playlists — “‘Chilled Dance Hits,’ ‘Chilled R&B,’ and ‘Chilled Reggae’ were all among Spotify’s official playlist offerings, alongside collections like ‘Chillin’ on a Dirt Road,’ ‘lofi chill,’ and ‘Calm.’” (page 45). This is not entirely Spotify’s fault; YouTube expanded the availability of live streams in 2013, and it resulted in plenty of samey chill hop stations. In a 2018 New York Times article about these nonstop live-streams, Jonah E. Bromwich writes:

Channels like College Music, ChilledCow, Chillhop Music and others are unlikely to have a broad impact on the music industry. But they represent an underground alternative to the streaming hegemony of Spotify and Apple Music. The industry commentator Bob Lefsetz said that while the stations were not likely to become a lucrative endeavor, they were a way for members of the public to seize power back from cultural gatekeepers.

Rather than being predominantly inspired or encouraged by Spotify, it is possible the growth of the background music genre is something the company is taking advantage of — and take advantage it did.

Whether on YouTube or Spotify, the beat-makers behind these tracks are loosely inspired by artists like J Dilla, but they have coalesced around a distinctly hazy and slowed-down instrumental palette. That these tracks are deliberately indistinct has led to Spotify commissioning low-royalty generic tracks and, as Pelly writes, this passivity is an area “where the imminent A.I. infiltration of music was most feasible: the replacement of lean-back mood music” (page 132).

All told, it sure sounds like Spotify aligned its recommendations to compel users into filling silence with music featureless enough that it could be replicated by what are, in effect, stock tracks. If this was a deliberate strategy to lower Spotify’s royalty expenses, it has had a mixed effect. Setting aside the ongoing popularity of big pop stars like Taylor Swift and Justin Bieber — I would love to know how much of Spotify’s revenue is sent to those two artists alone — the rise of streaming also coincided with the explosion of in-your-face K-pop groups, renewed interest in rock music, and revivals of funk and disco. These are not the kinds of passive listening genres Pelly seems so concerned with.

That is not to say Spotify has no influence on what is popular. Just as what was made popular on the radio brought a wave of imitators, so too in an era when streaming is how most people listen to music. None of this is new. A streaming listener’s context is often quite similar to a radio listener’s, too. Pelly’s exploration of the chilled-out Spotify playlist and lean-back listener reads, to me, with considerable disdain. But that kind of passive listening was common in the radio days. People put music on in the car and at work all the time. My parents used to put a CD on when they were cooking dinner. I do not think they were captivated in that moment by the sounds of Genesis or the Police. I was listening to music while reading this book and writing this article. Sometimes, an album will get played as a background to other tasks; any musician is surely aware of this.

Perhaps you, too, are now seeing what I began to understand. What Pelly keeps gesturing at throughout this book — but never quite reaches — is that the problems Spotify faces are similar to those of any massive two-sided platform. Pick your case study of any of the large tech companies and you can find parallels in Spotify. It has hundreds of millions of subscribers; everything it does is at vast scale.

It has privacy problems. Pelly dedicates a chapter to “Streaming as Surveillance”, pointing to Spotify’s participation in the data broker and ad tech economy. Spotify suggests its ads can be targeted based on users’ moods correlated with playlist and song data. This, like so many other ad tech sales pitches, is likely an inflated claim that delivers only limited success. Yet it is also a little creepy to consider that this is what Spotify aspires its ad product to be.

Spotify, like many others, faces moderation problems. Spotify does not want to put too many limits on what music is accepted. In the best of circumstances, this makes it possible for a nascent artist to rise from obscurity and start a career. But there are financial incentives to gaming the system. There are people who will follow trends, and even commit outright fraud manually or with A.I.-generated material. This is true for other broadly available revenue machines — Google Ads and YouTube are two that immediately spring to mind. In an attempt to disincentivize these behaviours and reduce Spotify’s costs, the company announced in November 2023 it would stop paying royalties for tracks with fewer than one thousand annual streams. Pelly writes this “was part of a campaign waged by Universal Music Group to revamp streaming in its favor” (page 155). When Spotify rolled out this new royalty structure, UMG CEO Lucian Grainge bade good riddance to “merchants of garbage […] no one actually wants to listen to” (page 157). How much it actually hurts low-effort spammers is a good question, but it impacts legitimate indie artists — what Grainge calls “garbage” — for whom Spotify now presents no advantage over piracy.

I would not be so presumptuous as to say that is what this book ought to have been but, as a longtime reader of Pelly’s articles about the subject, I was frustrated by “Mood Machine”. It is the kind of book I wish would be taken apart and reassembled for better flow and a more coherent structure. Spotify and the streaming model have problems. “Mood Machine” identifies many of them. But the money quote — the one that cut through everything for me — is from an anonymous former Spotify employee (page 167):

“If Spotify is guilty of something, they had the opportunity at one point to change the way value was exchanged in the music industry, and they decided not to,” the former employee told me. “So it’s just upholding the way that things have always been.”

We treat art terribly. It is glamorous and alluring, but it is ultimately a plaything of the rich. The people who make money in art, no matter the discipline, are those responsible for its distribution, management, and sale. Those creating the actual work that enriches our lives too often get the scraps, unless they have enough cachet to dictate the terms in their favour.

Spotify is just another layer in the chain; another place that makes far more money than artists ever will. An artist understands they are signing up for a job with unpredictable pay, but Spotify’s particular structure makes it even more opaque.

The four biggest audio streaming platforms — Spotify, YouTube, Tencent, and Apple Music — are the top-down force pushing culture in a particular direction, a level of influence Clear Channel’s executives could have only hoped for in its heyday. Streaming can help people learn about music they have never heard before. But it is not very effective as an artistic conduit. It is an ad-supported platform operating at global scale, much like a handful of other large tech companies, and it faces similar problems.

Yet none of them are as essentially tied to the distribution of art as is Spotify. It is a shame it did not create something to upend the status quo and make more artists more money. I guess part of the reason for that could be because its co-founders saw music as one of several interchangeable user acquisition strategies to sell advertisements — you know, for the love of art and music.

Liz Reid, Google’s vice president of search:

Overall, total organic click volume from Google Search to websites has been relatively stable year-over-year. Additionally, average click quality has increased and we’re actually sending slightly more quality clicks to websites than a year ago (by quality clicks, we mean those where users don’t quickly click back — typically a signal that a user is interested in the website). This data is in contrast to third-party reports that inaccurately suggest dramatic declines in aggregate traffic — often based on flawed methodologies, isolated examples, or traffic changes that occurred prior to the roll out of AI features in Search.

What “relatively stable” means is not explained and, incredibly, not a single number is used in this press release about quantifiable data. However, even giving Google an unearned benefit of the doubt, the company also says people “are searching more than ever”. If more searches are being done but the number of clicks is “relatively stable”, it effectively confirms a dropping click-through rate. None of this counters or disproves publishers’ findings of declining Google referral traffic. Even if aggregate traffic from Google has not dropped significantly, it is not clear it is going to the same places in similar amounts.
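The arithmetic behind that inference is simple enough to sketch. The figures below are hypothetical, purely illustrative, since Google published no actual numbers:

```python
# If clicks stay "relatively stable" while searches grow, the
# click-through rate must fall. Hypothetical numbers only.
def click_through_rate(clicks: float, searches: float) -> float:
    return clicks / searches

before = click_through_rate(clicks=100, searches=1_000)  # 10% CTR
after = click_through_rate(clicks=100, searches=1_200)   # roughly 8.3% CTR

assert after < before  # the same clicks spread over more searches
```

The exact magnitude does not matter; any growth in searches paired with flat clicks implies a declining rate.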

A big problem Google has is that it closely guards everything related to search, ostensibly to reduce gaming its ranking factors, and it is not a trustworthy narrator. It intuitively makes sense for A.I. Overviews to damage search traffic. We just do not know for which websites and by how much, and Reid’s post provides no clarity.

Reid:

The web has existed for over three decades, and we believe we’re entering its most exciting era yet. […]

As of the day this was published, exactly 34 years since the first website was launched.

Apple:

Apple today announced a new $100 billion commitment to America, a significant acceleration of its U.S. investment that now totals $600 billion over the next four years. Today’s announcement includes the ambitious new American Manufacturing Program (AMP), dedicated to bringing even more of Apple’s supply chain and advanced manufacturing to the U.S. Through AMP, Apple will increase its investment across America and incentivize global companies to manufacture even more critical components in the United States.

Is someone keeping track of the results of these commitments? Please let me know.

“Today, we’re proud to increase our investments across the United States to $600 billion over four years and launch our new American Manufacturing Program,” said Tim Cook, Apple’s CEO. “This includes new and expanded work with 10 companies across America. They produce components that are used in Apple products sold all over the world, and we’re grateful to the President for his support.”

I think Cook is also grateful to the president for exempting the company from new tariffs. Cook showed his appreciation by giving the president a glass and twenty-four-karat gold memento, which is not a bribe.

Call it whatever you want, but Cook now finds himself trying to appease the presidents of both the United States and China. The goals of each are in opposition, yet either one could make a move imperilling Apple. When his time at Apple ends, Cook may wish to be remembered as a diplomat, but that word will always carry the vibe of euphemism.

Juli Clover, MacRumors:

Apple has been updating some classic Mac icons during the macOS Tahoe beta, upsetting some longtime Mac users who prefer the original look. In beta 5, Apple changed the design of the built-in Mac storage icon, which you’ll notice if you have it on your desktop.

I want to put a finer point on the problem with this icon: it is not a mere aesthetic preference or a reaction to change, but a simple acknowledgement that this icon is not good. It has a generic quality, a lack of personality. The perspective does not make sense, either. It is just a sad grey box without any connection to literal data storage on a modern Mac, the “Macintosh HD” label beside it on the Desktop, or any object in the real world. It feels like a first draft. I know we are still in the beta process, but I have little confidence it will progress much beyond what we see now.

Ghost:

The next major version of Ghost has arrived, and our 6.0 release is packed full of more upgrades and improvements than you can shake a stick at.

The headline feature is integration with the modern social web, including ActivityPub and the AT Protocol, but I think the most impressive thing is integrated cookie-less first-party analytics. This will allow some Ghost publishers to dispense with invasive third-party providers.

If Ghost added MarsEdit support, I would be awfully tempted to switch from WordPress. I would rather use a platform that seems to care fundamentally about words on a webpage more than being a do-it-all CMS.

This year, eleven candidates compete for World Wide Web glory. Who will win? Will it be the cheeky Internet Roadtrip? The thoughtful Fifty Thousand Names? The absurd Traffic Cam Photo Booth? You can vote for your favourite until 1 September.

Google in July last year:

In 2018, we announced the deprecation and transition of Google URL Shortener because of the changes we’ve seen in how people find content on the internet, and the number of new popular URL shortening services that emerged in that time. This meant that we no longer accepted new URLs to shorten but that we would continue serving existing URLs.

Over time, these existing URLs saw less and less traffic as the years went on – in fact more than 99% of them had no activity in the last month.

As such, we will be turning off Google URL Shortener. Please read on below to understand more about how this may impact you.

Google last week:

While we previously announced discontinuing support for all goo.gl URLs after August 25, 2025, we’ve adjusted our approach in order to preserve actively used links.

We understand these links are embedded in countless documents, videos, posts and more, and we appreciate the input received.

This sounds like a big change, but it is a very small one — according to Google’s statistics, the ongoing support affects less than 1% of all shortened links. If you used Google’s URL shortener and have not actively been looking for goo.gl links since last year’s announcement, your links are probably going to stop working this month, and you might not know where they redirect.

Also, even though Google says it is “discontinuing support for all goo.gl URLs”, this is not true either. Google continues to use that domain for shared links created from its own apps like Maps and Photos. It says in its post from last year those links will continue to work. I think this is the original sin of this URL shortener and why the company is reluctant to keep supporting it. Google should never have used the same domain for trusted links it created and untrusted URLs from users. This is a problem that can be solved by, say, an interstitial notice of where the short URL is redirecting, but I think Google just wants to wash its hands of the whole thing regardless of the impact it will have.
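As a sketch of what such an interstitial might look like (my own hypothetical illustration, not anything Google has built), a shortener could render a notice page instead of redirecting immediately:

```python
# Hypothetical sketch: render a notice page for an untrusted short
# link instead of redirecting right away, so users can see where
# they are being sent before following it.
from html import escape

def interstitial(short_code: str, destination: str) -> str:
    """Return HTML showing the short link's destination behind a
    manual click-through, rather than an automatic redirect."""
    dest = escape(destination, quote=True)
    return (
        "<!DOCTYPE html><html><head>"
        "<title>Redirect notice</title></head><body>"
        f"<p>The short link <code>goo.gl/{escape(short_code)}</code> "
        "points to:</p>"
        f'<p><a href="{dest}">{dest}</a></p>'
        "<p>Only continue if you trust this destination.</p>"
        "</body></html>"
    )
```

A real implementation would sit inside the shortener’s redirect handler; the point is simply that the destination is disclosed before the browser follows it.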

Riccardo Mori:

Now, with these older iOS devices in particular, battery life is what it is, and I don’t always remember to keep them all charged at all times. It happens with my Mac laptops as well. Whenever I revive one of these devices, if it’s still able to access iCloud and other Apple ID-related services, I get a notification on all my other Apple devices that a certain device has now access to FaceTime and iMessage.

The wording in this notification has changed for the worse in more recent versions of Mac OS and iOS/iPadOS. […]

Michael Tsai:

The alert doesn’t actually mean that the device was added in the user sense. Most of the time the device was already in my account, but a software update or something meant that Apple needed to do some kind of key refresh. It feels like I’m being interrupted for an implementation detail.

I do not see this as frequently as, it seems, Mori or Tsai do — I do not have a stable of old devices I rotate between, nor am I a software developer. When I do see it, it is almost never because I have purchased a new device. It is usually because of, as Tsai writes, a software update, or perhaps adding a travel SIM, so the alert poorly confirms something I already know in an interruptive and ambiguous way. Occasionally, the software update was installed automatically, so I am surprised by the alert on a different device but have no way of understanding what happened. Then I think about what I should actually do with this information, particularly with the revised wording of this alert:

Your Apple ID and phone number are now being used for iMessage on a new Mac.

If you recently signed in to “[Device Name]”, you can ignore this notification.

[OK]

What do I do now? That is rhetorical; I understand I would search it. (I also asked Siri on iOS 26 — you know, the one with the product knowledge — and it, too, searched Google.) But what does a normal person do now? This is scary and unhelpful, yet the user interface says in the same breath it might be irrelevant.

It reminds me a little of the often-wrong map in the dialog box for two-factor authentication. These features are ostensibly meant to promote greater security, but they only erode users’ awareness if they are not designed with precision and care.

Matti Schneider, documenting this for Open Terms Archive:

LinkedIn removed transgender-related protections from its policy on hateful and derogatory content. The platform no longer lists “misgendering or deadnaming of transgender individuals” as examples of prohibited conduct. While “content that attacks, denigrates, intimidates, dehumanizes, incites or threatens hatred, violence, prejudicial or discriminatory action” is still considered hateful, addressing a person by a gender and name they ask not be designated by is not anymore.

Via Mike Masnick, Techdirt:

This follows the now-familiar playbook we’ve seen from Meta, YouTube, and others. Meta rewrote its policies in January to allow content calling LGBTQ+ people “mentally ill” and portraying trans identities as “abnormal.” YouTube quietly scrubbed “gender identity” from its hate speech policies, then had the audacity to call it “regular copy edits.” Now LinkedIn is doing the same cowardly dance.

Any one of these platform changes is dispiriting and upsetting; that it is part of a pattern to, I guess, avoid scrutiny from a government demanding subservience is pretty obvious. But there is something about it being LinkedIn — the lukewarm social network for middle management to broadcast their “work” — that makes it a specific kind of evil. Now professional connections can harass people for who they are. Appalling.

Also, worth a reminder that LinkedIn is owned by Microsoft and profile information can be integrated into Microsoft 365.

Mark Zuckerberg is not much of a visionary. He is ambitious, sure, and he has big ideas. He occasionally pops into the public consciousness to share some new direction in which he is taking his company — a new area of focus that promises to assert his company’s leadership in technology and society. But very little of it seems to bear fruit or be based on a coherent set of principles.

For example, due to Meta’s scale, it is running into limitations on its total addressable market based on global internet connectivity. It has therefore participated in several related projects, like a collaboration with the Economist measuring the availability of internet connectivity worldwide, which has not been updated since 2022. In 2014, it acquired a company building a solar-powered drone to beam service to people in more remote locations; the project was cancelled in 2018. It made a robot to wrap fibre optic cable around existing power lines, which it licensed to Hibot in 2023; Hibot has nothing on its website about the robot.

It is not just Meta’s globe-spanning ambitions that have faltered. In 2019, Zuckerberg outlined a “privacy-focused vision for social networking” for what was then Facebook, the core tenets of which in no way conflict with the company’s targeted advertising business. Aside from the things I hope Facebook was already doing — data should be stored securely, private interactions should remain private, and so on — there were some lofty goals. Zuckerberg said the company should roll out end-to-end encrypted messaging across its product line; that it should add controls to automatically delete or hide posts after some amount of time; that its products should be extremely interoperable with those from third-parties. As of writing, Meta added end-to-end encryption to Facebook Messenger and Instagram, but it is only on by default for Facebook. (WhatsApp was end-to-end encrypted by default already.) It has not added an automatic post deletion feature to Facebook or Instagram. Its apps remain stubbornly walled-off. You cannot even sign into a third-party Mastodon app with a Threads account, even though it is amongst the newest and most interoperable offerings from Meta.

Zuckerberg published that when it was advantageous for the company to be seen as doing its part for user privacy. Similarly, when it was smart to advocate for platform safety, Zuckerberg was contrite:

But it’s clear now that we didn’t do enough. We didn’t focus enough on preventing abuse and thinking through how people could use these tools to do harm as well. That goes for fake news, foreign interference in elections, hate speech, in addition to developers and data privacy. We didn’t take a broad enough view of what our responsibility is, and that was a huge mistake. It was my mistake.

Then, when it became a good move to be brash and arrogant, Zuckerberg put on a gold chain and a million-dollar watch to explain how platform moderation had gone too far.

To be clear, Meta has not entirely failed with these initiatives. As mentioned, Threads is relatively interoperable, and the company defaulted to end-to-end encryption in Facebook Messenger in 2023. It said earlier this year it is spending $10 billion on a massive sub-sea cable, a proven technology that can expand connectivity far more than a solar-powered drone ever could.

But I have so far not mentioned the metaverse. According to Zuckerberg, this is “an embodied internet where you’re in the experience, not just looking at it”, and it was worth pivoting the entire company to be “metaverse-first”. The company renamed itself “Meta”. Zuckerberg forecasted an “Altria moment” a few years prior and the press noticed. In announcing this new direction in 2021, Zuckerberg acknowledged it would be a long-term goal, though predicted it would be “mainstream in the next five to ten years”:

Our hope is that within the next decade, the metaverse will reach a billion people, host hundreds of billions of dollars of digital commerce, and support jobs for millions of creators and developers.

Granted, it has not been even four years since Zuckerberg made these announcements, but are we any closer to his company’s vision becoming mainstream? If you broaden the definition of “metaverse” to include all augmented and virtual reality products then, yes, it appears to be a growing industry. But the vision shown at Connect 2021 is scarcely anywhere to be found. We are not attending virtual concerts or buying virtual merch at virtual after-parties. I am aching to know how the metaverse real estate market is doing as I am unaware of anyone I know living in a virtual house.

As part of this effort, Meta announced in May 2022 it would support NFTs on Instagram. These would be important building blocks for the metaverse, the company said, “critical for how people will buy, use and share virtual objects and experiences” in the virtual environment it was building. Meta quickly expanded availability to Facebook and rolled it out worldwide. Then, in March 2023, it ended support for NFTs altogether, saying “[a]ny collectibles you’ve already shared will remain as posts, but no blockchain info will be displayed”.

Zuckerberg has repeatedly changed direction on what his company is supposed to stand for. He has plenty of ideas, sure, and they are often the kinds of things requiring resources in an amount only possible for a giant corporation like the one he runs. And he has done it again by dedicating Meta’s efforts to what he is calling — in a new manifesto, open letter, mission statement, or whatever this is — “personal superintelligence”.

I do have to take a moment to acknowledge the bizarre quality of this page. It is ostensibly a minimalist and unstyled document of near-black Times New Roman on a white background — very hacker, very serious. It contains about 3,800 characters, which should mean a document barely above four or five kilobytes, accounting for HTML tags and a touch of CSS. Yet it is over 400 kilobytes. Also, I love that keywords are defined:

<meta name="keywords" content="Personal 
Superintelligence, AI systems improvement, 
Superintelligence vision, Mark Zuckerberg 
Meta, Human empowerment AI, Future of 
technology, AI safety and risks, Personal
AI devices, Creativity and culture with 
AI, Meta AI initiatives">

Very retro.
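For what it is worth, the arithmetic behind that size complaint is simple. A quick sketch, where the markup overhead is my own rough assumption:

```python
# Rough size this page *should* be, versus what it reportedly weighs.
text_chars = 3_800        # approximate characters of visible text
markup_overhead = 1_500   # assumed allowance for HTML tags and a touch of CSS
expected_kb = (text_chars + markup_overhead) / 1024
actual_kb = 400           # reported size of the page

print(round(expected_kb, 1))           # → 5.2
print(round(actual_kb / expected_kb))  # → 77
```

Under those assumptions, the page is roughly seventy-seven times heavier than its content requires.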

Anyway, what is “superintelligence”? is a reasonable question you may ask, and one Zuckerberg never answers; he does not define the term. I guess it is supposed to be something more than or different from artificial intelligence, which is yesterday’s news:

As profound as the abundance produced by AI may one day be, an even more meaningful impact on our lives will likely come from everyone having a personal superintelligence that helps you achieve your goals, create what you want to see in the world, experience any adventure, be a better friend to those you care about, and grow to become the person you aspire to be.

He decries competitors’ ambitions:

This is distinct from others in the industry who believe superintelligence should be directed centrally towards automating all valuable work, and then humanity will live on a dole of its output. At Meta, we believe that people pursuing their individual aspirations is how we have always made progress expanding prosperity, science, health, and culture. This will be increasingly important in the future as well.

I am unsure what to make of this. It is sorely tempting to dismiss the whole endeavour as little more than words on a page for a company deriving 98% of its revenue (PDF) from advertising.1 If we consider it more seriously, however, we are left with an ugly impression for what “valuable work” may consist of. Meta is very proud of its technology to “generate photorealistic images”, thereby taking the work of artists and photographers. Examples of its technology also include generating blog posts and building study plans, so it seems writing and tutoring are not entirely “valuable work” either.

I am being a bit cheeky but, with Zuckerberg’s statement entirely devoid of specifics, I am also giving it the gravitas it has earned.

While I was taking way too long to write this, Om Malik examined it from the perspective of someone who has followed Zuckerberg’s career trajectory since it began. It is a really good piece. Though Malik starts by saying “Zuck is one of the best ‘chief executives’ to come out of Silicon Valley”, he concludes by acknowledging he is “skeptical of his ability to invent a new future for his company”:

Zuck has competitive anxiety. By repeatedly talking about being “distinct from others in the industry” he is tipping his hand. He is worried that Meta is being seen as a follower rather than leader. Young people are flocking to ChatGPT. Programmers are flocking to Claude Code.

What does Meta AI do? Bupkiss. And Zuck knows that very well. You don’t do a company makeover if things are working well.

If you are solely looking at Meta’s earnings, things seem to be working just fine for the company. Meta beat revenue expectations in its most recent quarter while saying the current quarter will also be better than analysts thought. Meta might not be meeting already-low analyst expectations for revenue in its Reality Labs metaverse segment, but the stock jumped by 10% anyhow. Even Wall Street is not taking Zuckerberg seriously as an innovator. Meta is great at selling ads. It is not very exciting, but it works.

Back to the superintelligence memo, emphasis mine:

We believe the benefits of superintelligence should be shared with the world as broadly as possible. That said, superintelligence will raise novel safety concerns. We’ll need to be rigorous about mitigating these risks and careful about what we choose to open source. Still, we believe that building a free society requires that we aim to empower people as much as possible.

And here is what Zuckerberg wrote just one year ago:

Meta is committed to open source AI. I’ll outline why I believe open source is the best development stack for you, why open sourcing Llama is good for Meta, and why open source AI is good for the world and therefore a platform that will be around for the long term.

[…]

There is an ongoing debate about the safety of open source AI models, and my view is that open source AI will be safer than the alternatives. I think governments will conclude it’s in their interest to support open source because it will make the world more prosperous and safer.

No mention of being careful, no mention of choosing what to open source. Zuckerberg took an ostensibly strong, principled view supportive of open source A.I. when it benefitted the company, and is now taking an ostensibly strong, principled view that it requires more nuance.

Zuckerberg concludes:

Meta believes strongly in building personal superintelligence that empowers everyone. We have the resources and the expertise to build the massive infrastructure required, and the capability and will to deliver new technology to billions of people across our products. I’m excited to focus Meta’s efforts towards building this future.

On this, I kind of believe him. I believe the company has the resources and reach to make “personal superintelligence” — whatever it is — a central part of Meta’s raison d’être, just as Malik says in his article he has “learned not to underestimate Zuckerberg”. The language in Zuckerberg’s post is flexible, vague, and optimistic enough to provide cover for whatever the company does next. It could be a unique virtual assistant, or it could be animated stickers in chats. Whatever it is, this technology will also assuredly be directed toward the company’s advertising machine, as its current A.I. efforts are providing “greater efficiency and gains across our ad system”. Zuckerberg is telling investors to imagine what the company could do with superintelligence.

In December 2023, Simon Willison wrote about the trust crisis in artificial intelligence, comparing it to the conspiracy theory that advertisers use audio from real-world conversations for targeting:

The key issue here is the same as the OpenAI training issue: people don’t believe these companies when they say that they aren’t doing something.

One interesting difference here is that in the Facebook example people have personal evidence that makes them believe they understand what’s going on.

With AI we have almost the complete opposite: AI models are weird black boxes, built in secret and with no way of understanding what the training data was or how it influences the model.

Meta has pulled off a remarkable feat. It has ground down users’ view of their own privacy into irrelevance, yet its services remain ubiquitous to the point of being essential. Maybe Meta does not need trust for its A.I. or “superintelligence” ambitions, either. It is unfathomably rich, has a huge volume of proprietary user data, and a CEO who keeps pushing forward despite failing at basically every quasi-visionary project. Maybe that is enough.


  1. Do note, two slides later, the company’s effective tax rate dropping from 17% in Q3 and Q4 2023 to just 9% in Q1 2025, and 11% in the most recent quarter. Nine percent on over $18 billion in income. ↥︎

Elizabeth Lopatto, the Verge:

Have you ever wondered what bops powerful figures are listening to on Spotify? You’d be amazed what you can get with a profile search — but just in case you want them all in one place, there’s the Panama Playlists, a newly published collection of data on the musical listening habits of politicians, journalists, and tech figures, as curated by an anonymous figure.

Silly name aside, I am glad to finally have a privacy-related concern that is not actually so bad. Yes, some people confirmed to Lopatto that they were unaware they were sharing their playlists publicly, but at least it is not private data per se. And it gives us a relatively non-creepy peek into the lives of the rich and famous. For example, Marc Benioff has a terrible “High Energy Party” playlist into which he has dumped whole albums with seeming disregard for vibe consistency or quality — it is simply a massive number of songs from the Black Eyed Peas, Metallica, and Beatles tribute bands, plus one lonely song by the Doors. The art of the playlist is dead.

On 16 September 1997, Steve Jobs became interim CEO of Apple. 5,090 days later, he handed the reins to Tim Cook, weeks before he died.

5,090 days after 24 August 2011 is today. The Cook era is now as long as the Jobs renaissance era.

Just as it is baffling to consider how much time Cook has officially led Apple — I have not included the two times when he temporarily took on the role for reasons of Jobs’ health — it is hard for me to believe his tenure now equals the span that solidified so much of today’s Apple. You already know the highlights: the iMac, Mac OS X, the iPod, the iPhone, and the iPad. All that and more happened between September 1997 and August 2011.

Apple was given new life under Jobs’ leadership. That relatively small group of people set the groundwork for it to become, under Cook, the giant it is today. I thought it was worth marking the day this era has overtaken the last.

Sean Heber, on Mastodon:

ChatGPT and other AI services are basically killing @Iconfactory and I’m not exaggerating or being hyperbolical.

Ged Maheux, on the company’s blog:

Our apps deserve more love than we can currently give. We’re looking to find new homes for our side products – many of which have storied histories and loads of happy & loyal customers.

It does not sound like this includes Linea Sketch, Tapestry, Tot, or Wallaroo, but I am not sure it is limited to the smaller free apps like Clicker or Fontcase, either.

This sure is a worrisome sign for the Iconfactory. Unfortunately, the trends of the past many years have not been kind to studios like theirs, and a future of thoughtless generative design and enforced mediocrity is ominous. I wish them only the best.

You all know I love the default Tiger desktop picture. It is a perfect shade of blue, and just the right balance of visual interest and neutrality. Sadly, it was only ever released at a maximum size of 2560 × 1600 pixels — slightly smaller, even, than the resolution of today’s 13-inch MacBook Air. I filed a radar many years ago asking Apple to release a high-resolution version, to no avail. However, there are two great third-party options.

Hector Simpson made an excellent set a few years ago that has received colour scheme updates all the way through MacOS Sequoia.

Keir Ansell also published a set with his own take on a range of mostly blues and grey. Ansell has just added a variant based on the colour scheme of the MacOS Tahoe desktop picture.

I love both sets and, if you are as enthusiastic about this era of Mac OS X wallpapers as I am, I think you will too. Simpson’s set is $4 and Ansell’s is $5, though the standard Aqua variant is free.

I should also mention Stephen Hackett’s excellent high-res gallery of Mac desktop pictures, including upscaled versions of older images. Aside from Tiger, I am partial to the Snow Leopard and Mountain Lion space pictures.

Emanuel Maiberg and Joseph Cox, 404 Media:

Tea, which claims to have more than 1.6 million users, reached the top of the App Store charts this week and has tens of thousands of reviews there. The app aims to provide a space for women to exchange information about men in order to stay safe, and verifies that new users are women by asking them to upload a selfie.

“Yes, if you sent Tea App your face and drivers license, they doxxed you publicly! No authentication, no nothing. It’s a public bucket,” a post on 4chan providing details of the vulnerability reads. “DRIVERS LICENSES AND FACE PICS! GET THE FUCK IN HERE BEFORE THEY SHUT IT DOWN!”

This is ghastly. It seems possible Tea did not do even the most basic step of stripping location metadata from submitted photos.
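For context, the location data in question typically lives in a JPEG’s EXIF metadata, stored in an APP1 segment of the file. The following is only an illustrative sketch of how a service could discard that segment on upload using the Python standard library — the function name and logic are my own, and say nothing about how Tea actually handles images:

```python
# A JPEG is a sequence of segments: a two-byte marker, a two-byte big-endian
# length, then a payload. EXIF data (including GPS coordinates) lives in an
# APP1 (0xFFE1) segment whose payload begins with b"Exif\x00\x00".

def strip_exif(jpeg: bytes) -> bytes:
    """Return a copy of a JPEG with any EXIF APP1 segments removed."""
    if jpeg[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG")
    out = bytearray(b"\xff\xd8")  # keep the Start Of Image marker
    i = 2
    while i < len(jpeg):
        marker = jpeg[i:i + 2]
        if marker == b"\xff\xda":  # Start Of Scan: copy the rest verbatim
            out += jpeg[i:]
            break
        length = int.from_bytes(jpeg[i + 2:i + 4], "big")
        segment = jpeg[i:i + 2 + length]
        # Drop APP1 segments carrying EXIF; keep every other segment.
        if not (marker == b"\xff\xe1" and segment[4:10] == b"Exif\x00\x00"):
            out += segment
        i += 2 + length
    return bytes(out)
```

Real-world pipelines usually re-encode uploaded images entirely, which strips metadata as a side effect; either way, it is a small and well-understood step.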

Maiberg and Cox, 404 Media:

A second, major security issue with women’s dating safety app Tea has exposed much more user data than the first breach we first reported last week, with an independent security researcher now finding it was possible for hackers to access messages between users discussing abortions, cheating partners, and phone numbers they sent to one another. Despite Tea’s initial statement that “the incident involved a legacy data storage system containing information from over two years ago,” the second issue impacting a separate database is much more recent, affecting messages up until last week, according to the researcher’s findings that 404 Media verified. The researcher said they also found the ability to send a push notification to all of Tea’s users.

Lots of apps have insecure or poorly secured cloud data buckets, and their data gets leaked, and that really sucks. Given the function of Tea and the deserved reputation of 4chan, however, this seems to be driven by motivations greater than a typical breach. In my head, it aligns with the politically motivated breaches of university data.

It is entirely possible this is nothing more than hackers getting lucky, and they were not picking Tea specifically. Fine. Tea should have anticipated the possibility it is a greater target because of the function it serves.

From Tea’s response:

Why did you require IDs prior to end of 2023?

During our early stages of development, we required selfies and IDs as an added layer of safety to ensure that only women were signing up for the app. In 2023, we removed the ID requirement.

Shoshana Weissmann, of the R Street Institute:

Security is dependent in no small part on norms. Understanding how to spot a phishing email, not to share one’s two-factor authentication code, or how to recognize a scam call are all examples of norms that bolster security. Yet when people are increasingly encouraged to share their most sensitive information — photo IDs, Social Security numbers, face scans — across websites and apps, they will begin to feel comfortable doing so. Offering up sensitive data could become a reflexive act like agreeing to terms of service documents. However, people cannot be sure how this data will be stored and used. In this case, Tea could not have been adhering to its privacy policy regarding its data storage, which before now might have assuaged fears of people concerned how their information might be stored or used. Some companies may store and use sensitive data in safer ways, but users do not have the ability to vet this. Even companies using better security practices can face hacks.

R Street is a think tank that stands for “free markets and limited, effective government”, so they will not say this, but privacy legislation would help protect users from these kinds of abuses. It was probably a bad idea for Tea to be collecting so much personal information in the first place. Yet this kind of data is routinely used in some industries, and it is unrealistic to expect individuals to figure out and monitor the privacy practices of individual services. Policies that limit data collection and retention, along with public auditing or other compliance-checking methods, can allow us to be more confident and provide remedies for bad practices and misuse.

Taylor Lorenz, User Mag:

Substack sent a push alert encouraging users to subscribe to a Nazi newsletter that claimed Jewish people are a sickness and that we must eradicate minorities to build a “White homeland.”

[…]

Substack said that the alert was issued by mistake. “We discovered an error that caused some people to receive push notifications they should never have received,” a spokesperson told User Mag. “In some cases, these notifications were extremely offensive or disturbing. This was a serious error, and we apologize for the distress it caused. We have taken the relevant system offline, diagnosed the issue, and are making changes to ensure it doesn’t happen again.”

One way for a social media platform administrator to reduce the likelihood people will erroneously receive notifications for Nazi stuff is by disallowing Nazi stuff on their social media platform. Sadly, I do not think that is the change Substack is committing to making.

Thomas Germain, BBC News, covered the Pew report about the relationship between Google’s A.I. Overviews and click-through traffic:

Pew says it’s confident in its research. “Our findings are broadly consistent with independent studies conducted by web analytics firms,” [Pew’s Aaron] Smith says. Dozens of reports show AI Overviews cut search traffic as much as 30% to 70% depending on what people are Googling. [Amisive’s Lily] Ray says she’s personally seen this in data from hundreds of websites.

But Google tells the BBC you should disregard this, because it’s bad research, biased data and meaningless anecdotes. The company says web traffic fluctuates for many reasons, and AI Overviews link to a wider variety of sources and create new ways to discover websites. Google’s spokesperson says the clicks from AI answers are also higher quality because people spend more time on the sites they visit.

I will continue to harp on this report. I think it is going to be foundational, yet I do not think it is robust enough to sustain the number of articles and conclusions that will be derived from it. The same is also true of a lot of third-party research into Google’s search engine. A decline of 30% in click-through rates is significant, but 70% is catastrophic. It is hard to see how these are both valid results.

This research is also more complex than these headline findings. Take the report showing a 70% reduction in click-through rates when an A.I. Overview is present. That is true — it is a reduction from 2.94% to 0.84% — but the next finding is a near-doubling of the click-through rate when a link is a source in the A.I. Overview compared to if it is not. Not only does this appear to contradict Pew’s findings, it is also described as an “incremental” change despite click-through figures being similarly small to those in the previous finding.
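To keep the percentages straight: these reductions are relative, so a click-through rate falling from 2.94% to 0.84% gets reported as a roughly 70% drop, and Pew’s fall from 15% to 8% works out to about 47%. A trivial sketch of the arithmetic:

```python
def relative_drop(before_pct, after_pct):
    """Relative decline between two rates, expressed as a percentage."""
    return (before_pct - after_pct) / before_pct * 100

# The "70% reduction" report: CTR falls from 2.94% to 0.84%
print(round(relative_drop(2.94, 0.84), 1))  # → 71.4

# Pew's headline figures: clicks fall from 15% of searches to 8%
print(round(relative_drop(15, 8), 1))  # → 46.7
```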

The overall trend seems undeniable, however — A.I. Overviews are generally clobbering search referral traffic. Publishers are aware of ebbs and flows in search referral traffic. A.I. Overviews are not having that kind of middling effect. I appreciate Germain’s yadda-yadda framing of Google’s response here; if it wishes to dispute the overall trend, Google should provide evidence.

Germain:

Ironically, Google’s own AI disagrees with its PR department. If you ask Google Gemini, it says AI Overviews hurt websites. […]

I get the joke but I wish it were not included in this article because it plays into myths of how generative A.I. works. Germain is smart enough to know Gemini is just parroting news articles about the subject. And if it did not, it would be deeply suspicious! If questions to Gemini only showed Google P.R.-approved responses, that would be much worse.

Emanuel Maiberg, 404 Media:

Several Reddit communities dedicated to sharing news and media from conflicts around the world now require users in the UK to submit a photo ID or selfie in order to prove they are old enough to view “mature” content. The new age verification system is a result of the recently enacted Online Safety Act in the UK, which aims to protect children from certain types of content and hold platforms like Reddit accountable if they don’t.

One formative memory from my childhood is when I saw nightly news broadcasts about the Bosnian war. I was too young to understand it, though I remember seeing gruesome footage of bloodied bodies. I have considered that maybe this is something I should not have been exposed to, and I have also considered that it is how I grew up with a glimpse of those horrors. However, a mix of broadcast standards and my parents’ decisions is how I saw that footage. None of that changes in the post-verification era. Broadcasters will continue to show this footage, and children and teenagers will continue to see it in their homes. But they will be carded when they try to learn more on the web.

Contrary to the beliefs of one moderator of one of these subreddits, this does not seem to be motivated by burying evidence of the atrocities of war. This is the predictable overreach of Reddit choosing to require age verification to view any “not safe for work” subreddit, because of course Reddit is not going to be sensitive to context. It is not right; it is what is least expensive because it requires little additional moderation or underlying technical changes. Reddit could implement different types of NSFW labelling, but that also increases its risk of legal liability if something is improperly labelled.

Pew Research Centre made headlines this week when it released a report on the effects of Google’s A.I. Overviews on user behaviour. It provided apparent evidence searchers do not explore much beyond the summary when presented with one. This caused understandable alarm among journalists who focused on two stats in particular: the share of searches resulting in a clicked result fell from 15% to just 8% when an A.I. Overview was shown, and just 1% of searches with an Overview resulted in a click on a citation in that summary.

Beatrice Nolan, of Fortune, said this was evidence A.I. was “eating search”. Thomas Claburn, of the Register, said they were “killing the web”, and Emanuel Maiberg, of 404 Media, says Google’s push to boost A.I. “will end the flow of all that traffic almost completely and destroy the business of countless blogs and news sites in the process”. In addition to the aforementioned stats, Ryan Whitwam, of Ars Technica, also noted Pew found “Google users are more likely to end their browsing session after seeing an A.I. Overview” than if they do not. It is, indeed, worrisome.

Pew’s is not the only research finding a negative impact on search traffic to publishers thanks to Google’s A.I. search efforts. Ryan Law and Xibeijia Guan of Ahrefs published, earlier this year, the results of anonymized and aggregated Google Search Console data finding a 34.5% drop in click-through rate when A.I. Overviews were present. This is lower than the 47% drop found by Pew, but still a massive amount.

Ahrefs gives two main explanations for this decline in click-through traffic. First, and most obviously, these Overviews present as though they answer a query without needing to visit any other pages. Second, they push results further down the page. On a phone, an Overview may occupy the whole height of the display, as shown in Google’s many examples. Either one of these could be affecting whether users are clicking through to more stuff.

So we have two different reports showing, rather predictably, that Google’s A.I. Overviews kneecap click rates on search listings. But these findings are complicated by the various other boxes Google might show on a results page, none of which are what Google calls an “A.I.” feature. There are a slew of Rich Result types — event information, business listings, videos, and plenty more. There are Rich Answers for when you ask a general knowledge question. There are Featured Snippets that extract and highlight information from a specific page. These “zero-click” features all look and behave similarly to A.I. Overviews. They all try to answer a user’s question immediately. They all push organic results further down the page. So what is different about results with an A.I. twist?

Part of the problem is with methodology. That déjà vu you are experiencing is because I wrote about this earlier this week, but I wanted to reiterate and expand upon that. The way Pew and Ahrefs collected the data for measuring click-through rates differs considerably. Pew, via Ipsos KnowledgePanel, collected browsing data from 900 U.S. adults. Researchers then used a selection of keywords to identify search result pages with A.I. Overviews. Ahrefs, on the other hand, relied on data directly from Google Search Console automatically provided by users who connected it to the company’s search optimization software. Ahrefs compared data collected in March 2024, pre-A.I. rollout, against that from March 2025 after Google made A.I. Overviews more present in search results.

In both reports, there is no effort made to distinguish between searches with A.I. Overviews present and those with the older search features mentioned above, and that would impact average click-through rates. Since Featured Snippets rolled out, for example, they have been considered the new first position in results and, unlike A.I. Overviews in the findings of Pew and Ahrefs, they can drive a lot of traffic. Search optimization studies are pretty inconsistent, finding Featured Snippets on anywhere between 11% of results pages, according to Stat, and up to 80%, according to Ahrefs.

But the difference is even harder to research than it seems because A.I. Overviews do not necessarily replace Featured Snippets, nor are they independent of each other. There are queries for which Overviews are displayed that had no such additional features before, and there are queries where Featured Snippets are being replaced. Sometimes, the results page will show an A.I. Overview and a Featured Snippet. There does not seem to be a lot of good data to disentangle what effect each of these features has in this era. A study from Amisive earlier this year found the combined display of Overviews and Snippets reduced click-through rates by 37%, but Amisive did not publish a full data set to permit further exploration.

But publishers do seem to be feeling the effects of A.I. on traffic from Google’s search engine. The Wall Street Journal, relying on data from Similarweb, reported a precipitous drop in search traffic to mainstream news sources like Business Insider and the Washington Post from 2022 to 2025. Similarweb said the New York Times’ share of traffic coming from search fell from 44% to 36.5% in that time. Interestingly, Similarweb’s data did not show a similar effect for the Journal itself, reporting a five-point increase in the share of traffic derived from search over the same period.

The quality of Similarweb’s data is, I think, questionable. It would be better if we had access to a large-scale first-party source. Luckily, the United States Government operates proprietary analytics software with open access. Though it is not used on all U.S. federal government websites, its data set is both general-purpose — albeit U.S.-focused — and huge: 1.55 billion sessions in the last thirty days. As of writing, 44.1% of traffic in the current calendar year is from organic Google searches, down from 46.4% in the previous calendar year. That is not the steep decline found by Similarweb, but it is a decline nevertheless — enough to drop organic Google search traffic behind direct traffic. I also imagine Google’s A.I. Overviews impact different types of websites differently; the research from Ahrefs and Amisive seems to back this up.

Google has, naturally, disputed the results of Pew’s research. In an extended comment to Search Engine Journal, the company said Pew “use[d] a flawed methodology and skewed queryset that is not representative of Search traffic”, adding “[we] have not observed significant drops in aggregate web traffic”. What Google sees as flaws in Pew’s methodology is not disclosed, nor does the company provide any numbers to support its side of the story. Sundar Pichai, Google’s CEO, has even claimed A.I. Overviews are better for referral traffic than links outside Overviews — but, again, has never provided evidence.

Intuitively, it makes sense to me that A.I. Overviews are going to have a negative impact on click-through rates, because that is kind of the whole point. The amount of information being provided to users on the results page increases while the source of that information is minimized. It also seems like the popular data sources for A.I. Overviews are of mixed quality; according to a Semrush study, Quora is the most popular citation, while Reddit is the second-most popular.

I find all of these studies frustrating and it is not necessarily the fault of the firms conducting them. Try as hard as the search optimization industry has, we still do not have terrifically reliable ways of measuring the impact each new Google feature has on organic search traffic. The party in the best possible position to demystify this — Google — tends to be extremely secretive on the grounds it does not want people gaming its systems. Also, given the vast disconnect between the limited amount Google is saying and the findings of researchers, I am not sure how much I trust its word.

It is possible we cannot know exactly how much of an effect A.I. Overviews will have on search traffic, let alone that of “answer engines” like Perplexity. The best thing any publisher can do at this point is to assume the mutual benefits are going away — and not just in search. Between Google’s legal problems and its fundamental reshaping of how people discover things in search, one has to wonder how it will evolve its advertising business. Publishers have already been prioritizing direct relationships with readers. What about advertisers, too? Even with the unknown future of A.I. technologies, it seems like it would be advantageous to stop relying so heavily on Google.

Vjosa Isai, New York Times:

Some of the most popular bike lanes were making Toronto’s notorious traffic worse, according to the provincial government. So Doug Ford, Ontario’s premier, passed a law to rip out 14 miles of the lanes from three major streets that serve the core of the city.

Toronto’s mayor, Olivia Chow, arrived for her first day in office two years ago riding a bike. She was not pleased with the law, arguing that the city had sole discretion to decide street rules.

Jeremy Klaszus, the Sprawl:

Is Calgary city hall out of control in building new bike lanes or negligent in building too few?

Opinions abound. But with Alberta Transportation Minister Devin Dreeshen talking about pausing new bike lanes in Calgary and Edmonton (he’s meeting with Mayor Jyoti Gondek about this July 30), it’s worth looking at what city hall has and hasn’t done on the cycling file.

I commute and do a fair slice of my regular errands by bike, and it is clear to me that seemingly few people debating this issue actually ride these lanes. Bike lanes on city streets have always struck me as a compromised version of dedicated cycling infrastructure, albeit made necessary by an insufficient desire to radically alter the structure of our roadway network. Everything — the scale of the lanes, the banking of the road surface, the timing of the lights — is designed for cars, not bikes.

But it is what we have, and it is not as though the provincial governments in Alberta and Ontario are seriously considering investment in better infrastructure. They simply do not treat cycling seriously as a mode of transportation. Even at a municipal level, one councillor — who represents an area nowhere near the city’s centre — is advocating for the removal of a track on a quiet street, half of which is pedestrianized. This is not the behaviour of people who are just trying to balance different modes of transportation.

Klaszus:

Meanwhile independent mayoral candidate Jeromy Farkas, who was critical of expanding the downtown cycle track network when he was a councillor, has proposed tying capital transportation dollars to mode usage.

“Up until now we’ve had the sort of cars versus bikes debate and I think the way to break that logjam is to just acknowledge that every single form of transportation is legitimate,” Farkas said. “When we tie funding to usage, we take the guesswork and the gamesmanship out of it.”

This is a terrible idea. Without disproportionately high investment, cycle tracks will not be adequately built out and maintained and, consequently, people will not use them. This proposal would be a death spiral. Cycling can be a safe, practical, and commonplace means of commuting, if only we want it to be. We can decide to do that as a city, if not for the meddling of our provincial government.