Rule Britannia theintercept.com

When I post a link to an article from the Intercept, I’m sure some of you grumble and think “what is it this time?” It turns out that if you’ve browsed the internet since mid-2007 or so, much of your traffic has probably been scooped up and stored by British intelligence. Surprise!

Ryan Gallagher:

By 2010, according to the documents, GCHQ was logging 30 billion metadata records per day. By 2012, collection had increased to 50 billion per day, and work was underway to double capacity to 100 billion. The agency was developing “unprecedented” techniques to perform what it called “population-scale” data mining, monitoring all communications across entire countries in an effort to detect patterns or behaviors deemed suspicious. It was creating what it said would be, by 2013, “the world’s biggest” surveillance engine “to run cyber operations and to access better, more valued data for customers to make a real world difference.”

Bulk metadata records with no identifying information attached are somewhat inconsequential; at that point it's basically a giant analytics engine. What GCHQ decided it needed was to tie that metadata (domain names, access times, IP addresses, and the like) to identifiable people:

All GCHQ needs is a single identifier — a “selector,” in agency jargon — to follow a digital trail that can reveal a vast amount about a person’s online activities.

A top-secret GCHQ document from March 2009 reveals the agency has targeted a range of popular websites as part of an effort to covertly collect cookies on a massive scale. It shows a sample search in which the agency was extracting data from cookies containing information about people’s visits to the adult website YouPorn, search engines Yahoo and Google, and the Reuters news website.
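To make the mechanics concrete, here's a minimal sketch in Python of the kind of join a selector enables. Everything in it is invented for illustration: the records, field names, and cookie values are not from the documents, just the shape of the idea.

```python
# Hypothetical bulk metadata: no names attached, just what a passive
# network tap would log. All values here are made up.
metadata = [
    {"cookie": "GA1.2.98765", "domain": "youporn.com", "time": "2009-03-02T21:14"},
    {"cookie": "GA1.2.12345", "domain": "google.com",  "time": "2009-03-02T09:05"},
    {"cookie": "GA1.2.98765", "domain": "reuters.com", "time": "2009-03-02T21:40"},
    {"cookie": "GA1.2.98765", "domain": "yahoo.com",   "time": "2009-03-03T08:02"},
]

# The "selector": a single cookie value known to belong to a target,
# say one recovered from a single identified session.
selector = "GA1.2.98765"

# One filter on that identifier turns anonymous bulk records into a
# chronological trail of one person's browsing.
trail = sorted(
    (r for r in metadata if r["cookie"] == selector),
    key=lambda r: r["time"],
)

for record in trail:
    print(record["time"], record["domain"])
```

The uncomfortable part is how little work this is: the "anonymity" of bulk metadata lasts exactly as long as none of the identifiers in it can be tied to a person.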

Want another reason why targeted, profile-driven web ads suck? The U.K. government (and, probably, the U.S. government, too) is matching those ad-tracking profiles against scooped-up web traffic.