Search Results for: vice.com

Roshan Abraham, Vice:

“Algorithmic wage discrimination allows firms to personalize and differentiate wages for workers in ways unknown to them, paying them to behave in ways that the firm desires, perhaps as little as the system determines that they may be willing to accept,” [Veena] Dubal writes. The wages are “calculated with ever-changing formulas using granular data on location, individual behavior, demand, supply, and other factors,” she adds.

In a study combining legal analysis and interviews with gig workers, Dubal concludes that Prop 22 has turned working into gambling. From a driver’s point of view, every time they log in to work they are essentially gambling for wages, as the algorithm provides no reason why those wages are what they are.
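To make that mechanism concrete, here is a toy sketch of what a personalized dynamic wage calculation could look like. The feature names and weights are invented purely for illustration; neither Dubal nor Uber has published an actual formula.

```python
# Toy sketch of a personalized dynamic wage. All feature names and
# weights here are hypothetical, invented purely for illustration.

def dynamic_wage(base_rate: float, features: dict) -> float:
    """Adjust a base per-trip rate using granular, per-worker signals."""
    multiplier = 1.0
    multiplier += 0.3 * features["local_demand"]   # surge-style demand signal
    multiplier -= 0.2 * features["driver_supply"]  # more drivers nearby, lower pay
    # The controversial part: a behavioural estimate of the lowest
    # offer this specific worker has historically been willing to accept.
    multiplier -= 0.25 * features["estimated_acceptance_floor"]
    return round(base_rate * max(multiplier, 0.1), 2)

# Two drivers offered the identical trip can see different numbers.
print(dynamic_wage(12.00, {"local_demand": 0.8, "driver_supply": 0.5,
                           "estimated_acceptance_floor": 0.2}))  # 13.08
print(dynamic_wage(12.00, {"local_demand": 0.8, "driver_supply": 0.5,
                           "estimated_acceptance_floor": 0.9}))  # 10.98
```

The point is the last term: identical work can pay different amounts depending on what the system estimates a particular worker will tolerate, and the worker never sees why.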

In a statement to Vice, an Uber spokesperson vehemently denied the claims in Dubal’s preprint study, emphasizing that its pricing algorithms do not include “factors like a driver’s race [or] ethnicity”. From what I can tell, Dubal never makes any such claim in her study, only stating that automatic fare structures may exacerbate existing pay disparities.

Even if these fee structures were made more transparent, Dubal’s study acknowledges the dangers of normalizing them. She writes of jobs that already have unpredictable wages, a problem that could be worsened by a wider rollout of pay determined dynamically by factors outside workers’ control. “Gig economy” drivers may be the first to experience it, but you just know there are employers salivating at the thought of saving money by allowing computers to make constant adjustments to worker pay.

We are officially one month into Elon Musk’s ownership of Twitter. One month of needlessly cruel layoffs, of cozying up to far-right goons, of uncertainty about the direction my favourite bar is taking. It is under new management which believes almost nobody should be turned away regardless of their behaviour, and which fired most of the bouncers, so there are fewer people keeping an eye out for the things that drive others away. At best, he is spineless. At worst, he is enabling and even welcoming terrible people; that is certainly how they read it.

Is it any wonder advertisers are reportedly spooked?

Now he has decided to take on what used to be his biggest advertiser after it, in the words of Musk, “threatened to withhold Twitter” from the App Store, apparently without explanation. But it does not take a close Apple watcher to speculate on why the company would be newly concerned about the Twitter app: Apple requires all apps which permit user submissions to have functional filtering, blocking, and reporting mechanisms. This is not a mystery. Apple is probably — understandably — worried about Musk’s statements and his laying off of thousands of moderators. In fairness, Twitter does not have a spectacular track record of ridding its platform of even the most heinous material but, also in fairness, eliminating all but one person tasked with removing CSAM in the world’s most populous region will make it harder to solve this problem, despite claims to the contrary.

Musk framed Apple’s reduced advertising spend as an attack on free speech. That is a wild accusation to throw at a company that, as Jason Koebler at Vice pointed out, twice challenged the FBI when the Bureau attempted to compromise encryption. Apple’s control of native app distribution on iOS devices means it is uniquely positioned to influence acceptable limits of speech and, as Musk also complained about today, it extracts fees from digital businesses. Those are also concerning factors — ones which I have repeatedly written about. But Musk has no credibility in framing its ad spending as a free speech issue.

Of note, Twitter has also been a staunch defender of free speech. This bar I love has long been home to anonymous users and a crack legal team pushing back against worldwide interference. It has also established internal boundaries to try to improve the comfort of its guests. Many of the people making those decisions have been pushed out, replaced by people more obedient to the whims of an owner who believes none of that is necessary. He says he will comply with regulators while laying off staff responsible for that. This bar is filling up with assholes who are making many of us uncomfortable and driving some away. Hopefully, the new spot can fill the void. Even so, it still feels like a loss.

Chloe Xiang, Vice:

On Tuesday, the Edmonton Police Service (EPS) shared a computer generated image of a suspect they created with DNA phenotyping, which it used for the first time in hopes of identifying a suspect from a 2019 sexual assault case. Using DNA evidence from the case, a company called Parabon NanoLabs created the image of a young Black man. The composite image did not factor in the suspect’s age, BMI, or environmental factors, such as facial hair, tattoos, and scars. The EPS then released this image to the public, both on its website and on social media platforms including its Twitter, claiming it to be “a last resort after all investigative avenues have been exhausted.”

This is not the first time police in Canada have turned to Parabon to create DNA-based predictive composites of suspects.

Sarah Rieger produced some terrific reporting on the use of this tool and its murky ethics while she was at CBC News. Here is Rieger in 2018 after a Parabon portrait was used to try to find the mother of a baby abandoned in Calgary:

Benedikt Hallgrímsson, a biological anthropologist and evolutionary biologist who studies the significance of phenotypic variation and variability at the University of Calgary, said he wouldn’t recommend phenotyping be used as a regular technique by law enforcement.

[…]

Hallgrímsson said the risk of these composite images is “twofold.” First, the image might lead to someone being falsely accused of a crime. Second, the actual suspect might not look anything like the picture and could be overlooked.

And here is a 2018 followup story from Rieger, answering the question of why they are used at all in Canada instead of family matches from the national databank of DNA from convicted criminals, missing persons, and volunteered samples:

A public affairs spokesperson told CBC that Canada is one of the only western countries not to allow familial DNA typing, even though it has been used to solve dozens of cases in the United States and around the globe.

“Jurisdictions that currently do use familial searching do so either on the basis of explicit legislative permission, or in some cases, more disturbingly, in the absence of any legislation explicitly prohibiting it,” Patricia Kosseim, the senior general counsel at the office of Canada’s privacy commissioner, said during a 2015 speech at the Canadian Institute on the Administration of Justice.

Rieger says consumer DNA databases like those used to crack cold cases in the U.S. would still be permissible for police to search. All of these options make me uncomfortable, but permitting exploratory use of the national database of criminals’ DNA seems like it could incentivize its expansion. When it was launched in 1998, only serious crimes required the collection of a convicted offender’s DNA. In 2008, amidst Stephen Harper’s crime-and-punishment tenure, law enforcement was permitted to collect offenders’ DNA for less violent criminal convictions. It would be worrisome if there were more reasons for more Canadians’ DNA to be in that databank. The whole point of the DNA Identification Act was to ensure the database does not become a means to collect a biological identity marker for everyone.

At the end of the second story, Rieger points out that the mother of the abandoned baby would not be identified using familial DNA matching unless one of her relatives was a convicted criminal. Rieger also notes that, at the time of writing, many of the cases involving Parabon’s predictive portraits remained open.

One of those cases was the 1998 murder of Renee Sweeney in Sudbury. In January 2017, police used Parabon’s software to update a sketch of the suspect. The suspect was arrested in December 2018, but police said they only received a tip that November. Even in favourable coverage, police seem reluctant to draw a straight line between Parabon’s generated portrait and an arrest nearly two years after its release.

Do you remember having the capacity for shock?

To be fair, it may have been muted by years of relentless news stories exploring an entire industry of privacy invasions. Some of these articles might involve subjects familiar to you; perhaps you were an early worrier about how Facebook apps could harvest data on users’ friends, a capability which the company later found was happening at shocking scale. Unfortunately, most of the general-audience press began paying attention to these concerns after the 2016 U.S. election, when that Facebook scandal was disproportionately blamed for a particularly idiotic presidency. But, at last, mainstream newsrooms did cover these problems, and they brought the budget, sources, and access to uncover some truly horrifying news items, with such regularity that my ability to be shocked has been blunted.

This made my jaw drop.

Joseph Cox, Vice:

Multiple branches of the U.S. military have bought access to a powerful internet monitoring tool that claims to cover over 90 percent of the world’s internet traffic, and which in some cases provides access to people’s email data, browsing history, and other information such as their sensitive internet cookies, according to contracting data and other documents reviewed by Motherboard.

[…]

“The network data includes data from over 550 collection points worldwide, to include collection points in Europe, the Middle East, North/South America, Africa and Asia, and is updated with at least 100 billion new records each day,” a description of the Augury platform in a U.S. government procurement record reviewed by Motherboard reads. It adds that Augury provides access to “petabytes” of current and historical data.

The NSA and GCHQ have, for years, intercepted and ingested data as it flows from server farms through fibre optic cables and across the internet. These programs built upon previous general surveillance efforts like the FBI’s Carnivore software.

These wildly intrusive and untargeted capabilities, once the domain of government intelligence gathering efforts, now appear to be offered to anyone who can afford whatever Team Cymru is charging. Regardless of your opinion of the programs operated by the NSA and GCHQ, at least they had the appearance of formal controls and specific goals. As Cox reports, now that the monitoring is done by a private business, pesky roadblocks like warrants are no longer needed.

This is wild, too:

Beyond his day job as CEO of Team Cymru, Rabbi Rob Thomas also sits on the board of the Tor Project, a privacy focused non-profit that maintains the Tor software. That software is what underpins the Tor anonymity network, a collection of thousands of volunteer-run servers that allow anyone to anonymously browse the internet.

I am not sure if the dissidents and drug seekers who rely on Tor should be worried, but I do not know what to make of this conflict. The Tor Project says there is no conflict of interest, though, so I feel silly.

Bennett Cyphers, Electronic Frontier Foundation:

The company, Fog Data Science, has claimed in marketing materials that it has “billions” of data points about “over 250 million” devices and that its data can be used to learn about where its subjects work, live, and associate. Fog sells access to this data via a web application, called Fog Reveal, that lets customers point and click to access detailed histories of regular people’s lives. This panoptic surveillance apparatus is offered to state highway patrols, local police departments, and county sheriffs across the country for less than $10,000 per year.

The records received by EFF indicate that Fog has past or ongoing contractual relationships with at least 18 local, state, and federal law enforcement clients; several other agencies took advantage of free trials of Fog’s service. EFF learned about Fog after filing more than 100 public records requests over several months for documents pertaining to government relationships with location data brokers. EFF also shared these records with The Associated Press.

Cyphers found several connections between Fog Data Science and a data broker called Venntel. While Fog Data focuses on smaller police departments, Venntel works mostly with national agencies and, according to Cyphers’ reporting, also provides data to other law enforcement-connected location companies like Babel Street and X-Mode. Venntel is well-connected in Washington. The Department of Homeland Security is a current user of its software; in the past, it has also held contracts with the FBI, DEA, ICE, and IRS, according to a search of USAspending.gov.

Cyphers:

Together, the “area search” and the “device search” functions allow surveillance that is both broad and specific. An area search can be used to gather device IDs for everyone in an area, and device searches can be used to learn where those people live and work. As a result, using Fog Reveal, police can execute searches that are functionally equivalent to the geofence warrants that are commonly served to Google.

The EFF says Fog Reveal will display a proprietary hash of the advertiser ID for devices within a geofence instead of the actual ID. But that may not be the case for all users.

Will Greenberg, EFF:

Federal users have access to an interface for converting between Fog’s internal device IDs (“FOG IDs”) and the device’s actual Advertiser ID:

This is eyebrow raising for a couple reasons. First, if this feature is operational, it would contradict assurances made in a sample State search warrant Fog sends to customers that FOG IDs can’t be converted back into Advertiser IDs. Second, if users could retrieve the Advertiser IDs of all devices in a query’s results, it would make Reveal far more capable of unmasking the identities of those device’s owners. This is due to the fact that if you have access to a device, you can read its Advertiser ID, and thus law enforcement would be able to verify if a specific person’s device was part of a query’s results.

To be clear, the EFF does not know if this extra level of federal functionality is available to end users. The U.S. Marshals had a two-year contract with Fog Data, which ended in 2020. It is the only national-level contract the EFF could find, and there is no evidence the Marshals or any Fog Data customer has access to unhashed advertiser IDs.
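Greenberg’s point about verification is worth unpacking, because a hash does not need to be reversible to unmask someone. Fog’s hashing scheme is not public, so the salted SHA-256 below is a stand-in assumption; it only demonstrates how anyone holding a known advertiser ID can hash it and test for membership in a query’s results:

```python
import hashlib

# Fog's hashing scheme is not public; salted SHA-256 is a stand-in
# assumption, used only to show the verification step.
def fog_id(advertiser_id: str, salt: str = "example-salt") -> str:
    return hashlib.sha256((salt + advertiser_id).encode()).hexdigest()

# Hashed device IDs returned by a hypothetical geofence query.
query_results = {fog_id("38400000-8cf0-11bd-b23e-10b96e40000d")}

# With physical access to a suspect's phone, police can read its
# advertiser ID directly, hash it the same way, and test membership.
# No reversal of the hash is ever needed.
suspect_ad_id = "38400000-8cf0-11bd-b23e-10b96e40000d"
print(fog_id(suspect_ad_id) in query_results)  # True: device was in the area
```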

Even so, the presence of this functionality is worrisome. Last year, Joseph Cox of Vice explained how “identity resolution” companies like BIGDBM and FullContact brag about their ability to tie advertising identifiers to individual profiles of people: their names, physical addresses, IP addresses, property records, and more. If a law enforcement agency has contracts with a device location aggregator like Fog Data and an identity resolution company, and has access to this feature, officers could create full named profiles of people’s movements without a warrant.

Even if an agency does not have access to an unhashed device identifier, the repeated presence of a device at an address is a strong indicator that its owner lives there. It is hard to overstate how easy it is to link an address back to a name and phone number with free and publicly accessible web tools. That is, even though Fog Data may not collect what it deems personally identifiable information — which, somehow, does not include device advertising identifiers — it is trivial to tie what it does show back to a specific person. And, again, police somehow do not need a warrant for this because the location data is bought from data brokers which harvest it from apps instead of cell towers.
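As a sketch of how trivial that inference is, consider toy broker records in the shape described in these reports: latitude, longitude, timestamp, and device identifier. The values below are invented, but the overnight-frequency heuristic is the whole trick:

```python
from collections import Counter
from datetime import datetime

# Invented broker records: (device_id, latitude, longitude, timestamp).
pings = [
    ("device-a", 51.0447, -114.0719, "2022-09-01T02:14:00"),
    ("device-a", 51.0447, -114.0719, "2022-09-02T03:40:00"),
    ("device-a", 51.0550, -114.0850, "2022-09-02T13:05:00"),  # daytime: workplace?
    ("device-a", 51.0447, -114.0719, "2022-09-03T01:22:00"),
]

def likely_home(records, device_id):
    """Most frequent coarse location seen between midnight and 6 a.m."""
    overnight = Counter()
    for dev, lat, lon, ts in records:
        if dev == device_id and datetime.fromisoformat(ts).hour < 6:
            overnight[(round(lat, 3), round(lon, 3))] += 1
    return overnight.most_common(1)[0][0]

# Feed the result into any free reverse-address lookup and the
# "anonymous" identifier now has a name and a phone number.
print(likely_home(pings, "device-a"))  # (51.045, -114.072)
```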

Joseph Cox, Vice:

Ray, a cybersecurity researcher, who saw a similar item on online retailer AliExpress, knew the offer was too good to be true. He bought the drive, suspecting it was a scam, and took it apart to find out what exactly was happening here. Sure enough, he found what amounted to a different item cosplaying as a big SSD. Inside were two small memory cards and the item had been programmed in such a way so as to appear it had 30TB of storage when plugged into a computer.

[…]

As Ray tweeted out his findings, another user, SM4Tech, found that the drive was available on Walmart. Motherboard then contacted Walmart for comment.
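These scams work because the drive’s controller firmware reports a fake capacity to the operating system, and writes beyond the real storage are silently wrapped around or dropped. Tools like f3 detect this by writing known patterns across the claimed capacity and verifying they read back intact. Here is a bare-bones sketch of that idea, with a hypothetical mount point:

```python
import os

# Bare-bones version of the write-then-verify check performed by tools
# like f3. The mount point is hypothetical. Each probe file contains
# deterministic bytes, so wraparound or dropped writes show up on read.
MOUNT = "/mnt/suspect-drive"  # hypothetical mount point for the drive
CHUNK = 64 * 1024 * 1024      # 64 MiB per probe file

def pattern(i: int) -> bytes:
    return i.to_bytes(4, "big") * (CHUNK // 4)

def probe(n_files: int) -> None:
    # To expose a fake drive, the probes must collectively exceed its
    # real capacity, so a serious test fills the entire claimed size.
    for i in range(n_files):
        with open(os.path.join(MOUNT, f"probe_{i}.bin"), "wb") as f:
            f.write(pattern(i))
    for i in range(n_files):
        with open(os.path.join(MOUNT, f"probe_{i}.bin"), "rb") as f:
            if f.read() != pattern(i):
                print(f"probe_{i} corrupted: claimed capacity is fake")
                return
    print("all probes verified")
```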

As Cox writes, it may have appeared that Walmart was selling the drive, but it was actually a marketplace listing. Like Amazon, Walmart lets third-party vendors use its online store to sell their wares. Some vendors are household names, while others take the same Scrabble bag approach to branding as Amazon sellers.

Amazon and Walmart are two of many retailers you probably recognize which offer an online marketplace for third-party sellers, including Best Buy and Canadian retail giant Loblaw. Staples experimented with marketplace sales, too, but I could not find any current information about its program. These products are usually offered alongside those sold by the retailer itself, with few visual clues that they may have different return policies or expectations of quality.

A unique consequence of writing about the biggest computer companies, which are all based in the United States, from most any other country is a lurking sense of invasion. I do not mean this in an anti-American sense; it is perhaps inherent to any large organization emanating from the world’s most powerful economy. But there is always a sense that the hardware, software, and services we use are designed by Americans often for Americans. You can see this in a feature set inevitably richer in the U.S. than elsewhere, language offerings that prioritize U.S. English, pricing often pegged to the U.S. dollar, and — perhaps more subtly — in the values by which these products are created and administered.

These are values that I, as someone who resides in a country broadly similar to the U.S., often believe are positive forces. A right to free expression is among those historically espoused by these companies in the use of their products. But over the past fifteen years of their widespread use, platforms like Facebook, Instagram, Twitter, and YouTube have established rules of increasing specificity and caution to restrict what they consider permissible. That, in a nutshell, is the premise of Jillian C. York’s 2021 book, Silicon Values.

Though it was published last year, I only read it recently. I am glad I did, especially with several new stories questioning the impact of a popular tech company an ocean away. TikTok’s rapid rise after decades of industry dominance by American giants is causing a re-evaluation of an America-first perspective. Om Malik put it well:

For as long as I can remember, American technology habits did shape the world. Today, the biggest user base doesn’t live in the US. Billion-plus Indians do things differently. Ditto for China. Russia. Africa. These are giant markets, capable of dooming any technology that attempts a one-size-fits-all approach.

The path taken by York in Silicon Values gets right up to the first line of this quote from Malik. In the closing chapter, York (228) writes:

I used to believe that platforms should not moderate speech; that they should take a hands-off approach, with very few exceptions. That was naïve. I still believe that Silicon Valley shouldn’t be the arbiter of what we can say, but the simple fact is that we have entrusted these corporations to do just that, and as such, they must use wisely the responsibility that they have been given.

I am not sure this is exactly correct. We often do not trust the judgements of moderation teams, as evidenced by frequent complaints about what is permissible and, more often, what gets flagged, demonetized, or removed. As I was writing this article, reporters noted that Twitter took moderation action against doctors and scientists posting factual, non-controversial information about COVID-19. This erroneous flagging was reverted, but it is another in a series of stories about questionable decisions made by big platforms.

In fact, much of Silicon Values is about the tension between the power of these giants to shape the permissible bounds of public conversations and their disquieting influence. At the beginning of the book, York points to a 1946 U.S. Supreme Court decision, Marsh v. Alabama, which held that private entities can become sufficiently large and public to require them to be subject to the same Constitutional constraints as government entities. Though York says this ruling has “not as of this writing been applied to the quasi-public spaces of the internet” (14), I found a case which attempted to use Marsh to push against a moderation decision. In an appellate decision in Prager University v. Google, Judge M. Margaret McKeown wrote (PDF) “PragerU’s reliance on Marsh is not persuasive”. More importantly, McKeown reflected on the tension between influence and expectations:

Both sides say that the sky will fall if we do not adopt their position. PragerU prophesizes living under the tyranny of big-tech, possessing the power to censor any speech it does not like. YouTube and several amicus curiae, on the other hand, foretell the undoing of the Internet if online speech is regulated. While these arguments have interesting and important roles to play in policy discussions concerning the future of the Internet, they do not figure into our straightforward application of the First Amendment.

All of the subjects concerned being American, it makes sense to judge these actions on American legal principles. But even if YouTube were treated as an extension of government due to its size and required to retain every non-criminal video uploaded to its service, it would make as much of a political statement elsewhere, if not more. In France and Germany, it — like any other company — must comply with laws that require the removal of hate speech, laws which in the U.S. would be unconstitutional. York (19) contrasts their eager compliance with Facebook’s memorable inaction to rein in hate speech that contributed to the genocide of Rohingya people in Myanmar. Even if this is a difference of legal policy — that France and Germany have laws but Myanmar does not — it is clearly unethical for Facebook to have inadequately moderated this use of its platform.

The concept of an online world no longer influenced largely by U.S. soft power brings us back to the tension with TikTok and its Chinese ownership. It understandably makes some people nervous that the most popular social media platform for many Americans has the backing of an authoritarian regime. Some worry about the possibility of external government influence on public policy and discourse, though one study I found reflects a clear difference in moderation principles between TikTok and its Chinese-specific counterpart Douyin. Some are concerned about the mass collection of private data. I get it.

But from my Canadian perspective, it feels like most of the world is caught up in an argument between a superpower and a near-superpower, with continued dominance by the U.S. preferable only by comparison and familiarity. Several European countries have banned Google Analytics because it is impossible for their citizens to be protected against surveillance by American intelligence agencies. The U.S. may have legal processes to restrict ad hoc access by its spies, but those are something of a formality. Its processes are conducted in secret and with poor public oversight. What is known is that it rarely rejects warrants for surveillance, and that private companies must quietly comply with document requests with little opportunity for rebuttal or transparency. Sometimes, these processes are circumvented entirely. The data broker business permits surveillance for anyone willing to pay — including U.S. authorities.

The privacy angle holds little more weight. While it is more concerning for an authoritarian government, rather than advertising and marketing firms, to be on the receiving end of surveillance technologies, it is unclear that any specific app disproportionately contributes to this sea of data. Banning TikTok does not make for a meaningful reduction of visibility into individual behaviours.

Even concerns about how much a recommendation algorithm may sway voter intent smell funny. Like Facebook before it, TikTok has downplayed the seriousness of its platform by framing it as an entertainment venue. As with other platforms, disinformation on TikTok spreads and multiplies. These factors may have an effect on how people vote. But the sudden alarm over yet-unproved allegations of algorithmic meddling in TikTok to boost Chinese interests is laughable to those of us who have been at the mercy of American-created algorithms despite living elsewhere. American state actors have also taken advantage of the popularity of social networks in ways not dissimilar from political adversaries.

However, it would be wrong to conclude that both countries are basically the same. They obviously differ in their means of governance and the freedoms afforded to people. The problem is that I should not be able to find so many similarities in the use of technology as a form of soft power, and certainly not for spying, between a democratic nation and an authoritarian one. The mount from which Silicon Values are being shouted looks awfully short from this perspective.

You do not need me to tell you that decades of undermining democracy within our countries have caused a rise in autocratic leanings, even in countries assumed stable. The degradation of faith in democratic institutions is part of a downward spiral of internal sabotage and a failure to uphold democratic values. Again, there are clear differences and I do not pretend otherwise. You will not be thrown in jail for disagreeing with the President or Prime Minister, and please spare me the cynical and ridiculous “yet!” responses.

I wish there were a clear set of instructions about where to go from here. Silicon Values is, understandably, not a book about solutions; it is an exploration of often conflicting problems. York delivers compelling defences of free expression on the web, maddening cases where newsworthy posts were removed, and the inequity of platform moderation rules. It is not a secret, nor a compelling narrative, that rules are applied inconsistently, and that famous and rich people are treated with more lenience than the rest of us. But what York notes is how aligned platforms are with the biases of upper-class white Americans; not coincidentally, the boards and executive teams of these companies are dominated by people matching that description.

The question of how to apply more local customs and behaviours to a global platform is, I believe, the defining challenge of the next decade in tech. One thing seems clear to me: the world’s democracies need to do better. It should not be so easy to point to similarities in egregious behaviour; corruption of legal processes should not be so common. I worry that regulators in China and the U.S. will spend so much time negotiating which of them gets to treat the internet as their domain while the rest of us get steamrolled by policies that maximize their self-preferencing.

This is especially true as waves of stories have been published recently alleging TikTok and its adjacent companies have suspicious ties to arms of an autocratic state. Lots of TikTok employees apparently used to work for China’s state media outlets and, in another app from ByteDance, TikTok’s owner, pro-China stories were regularly promoted while critical news was minimized. ByteDance sure seems to be working more closely with government officials than operators of other social media platforms. That is probably not great; we all should be able to publish negative opinions about lawmakers and big businesses without fear of reprisal.

There is a laundry list of reasons why we must invest more in our democratic institutions. One of them is, I believe, to ensure a clear set of values projected into the world. One way to achieve that is to prefer protocols over platforms. It is impossible for Facebook or Twitter or YouTube to be moderated to the full expectations of its users, and the growth of platforms like Rumble is a natural offshoot of that. But platforms like Rumble which trumpet their free speech bonafides are missing the point: moderation is good, normal, and reinforces free speech principles. It is right for platform owners to decide the range of permissible posts. What is worrying is the size and scope of them. Facebook moderates the discussions of billions — with a b and an s — of people worldwide. In some places, this can permit greater expression, but it is also an impossible task to monitor well.

The ambition of Silicon Valley’s biggest businesses has not gone unnoticed outside of the U.S. and, from my perspective, feels out of place. Yes, the country’s light touch approach to regulation and generous support of its tech industry has brought the world many of its most popular products and services. But it should not be assumed that we must rely on these companies built in the context of middle- and upper-class America. That is not an anti-American statement; nothing in this piece should be construed as anti-American. Far from it. But I am dismayed after my reading of Silicon Values. What I would like is an internet where platforms are not so giant, common moderation actions are not viewed as weapons, and more power is in more relevant hands.

Anna Merlan, Vice:

On cross-examination, though, things got far stickier for [Alex] Jones, especially when plaintiffs’ attorney Mark Bankston informed him that 12 days ago, Jones’ attorneys accidentally sent him an entire digital copy of Jones’ cellphone, which they then failed to declare as privileged. That means Bankston has wide latitude to ask Jones about anything he found on the phone that conflicts with things Jones has said in his testimony.

This is personal to me. For lots of very boring reasons, Jones has unfortunately been a lurking figure in the back of my brain for about twenty years. The impact he has had on my life is certainly a tiny fraction of the degree to which his broadcasts have played a role in harming the lives of those connected to the mass murder at Sandy Hook. Still, it was immensely satisfying to watch the moment Bankston told him what he obtained.

Update: Parker Molloy:

I am asking people in media to understand that their editorial decisions, from who gets invited to appear on talk shows to what topics we actually hear about in the news (and how often), are not value-neutral. Want to invite the next Tomi Lahren or Alex Jones to appear on your show? Fine. But just know that you’re not “exposing” their bad ideas or “showing the public who they really are;” you’re giving them an opportunity, which they will be lucky to have (even if they pretend to be upset about it, as Jones did about his Megyn Kelly interview).

In short: make good choices.

Commentators are pointing to this factor as among the biggest problems with a new documentary about Jones.

While we are on the subject of data marketplaces, here is Joseph Cox, of Vice:

Placer.ai, a location data firm that Motherboard previously revealed was providing heatmaps of approximately where abortion clinic visitors live, has admitted that people have obtained data related to these visits in the past.

A different location data company, INRIX, offers census block-level aggregate statistics of Planned Parenthood visitors. But what individual data brokers offer, and the limitations they place on themselves, is almost irrelevant: the value of this stuff is in the aggregate, and users have little individual control. As an example, one data platform, Narrative, boasts connections to seventeen different location providers claiming two billion mobile identifiers. “Always present” in this data set are the latitude and longitude, timestamp, and device identifier. In May, it removed data on its platform collected from some health-related apps, but it relies on platform users following its terms and conditions.

Narrative is just one example of a massive and insidious industry relying on a lack of knowledge among users and failure to regulate.

James Vincent, the Verge:

BMW is now selling subscriptions for heated seats in a number of countries — the latest example of the company’s adoption of microtransactions for high-end car features.

A monthly subscription to heat your BMW’s front seats costs roughly $18, with options to subscribe for a year ($180), three years ($300), or pay for “unlimited” access for $415.

For comparison, BMW UK charges £600, or about $710 USD, to equip a 1-Series with heated seats as part of a larger package of comfort options.

Now that seemingly everything is a connected device, anything can be turned into a subscription. This is one bizarre example. Maybe an owner in a usually warm climate wakes up one frosty winter morning and is happy to subscribe to heated seats for a month or two, thereby saving money compared to the flat rate BMW charges to add the equipment to the car. Except heated seats have to be equipped from the factory: the hardware must be there; it is gated solely by software.
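Mechanically, there is nothing exotic about this kind of gating: the firmware checks an entitlement before switching on hardware that is already installed. A hypothetical sketch of the pattern, not BMW’s actual implementation:

```python
from datetime import date

# Hypothetical sketch of software-gated hardware. The heater circuits
# are installed in every car; only this lookup stands in the way.
subscriptions = {
    "VIN123": {"heated_seats": date(2023, 1, 31)},  # paid through January
    "VIN456": {},                                   # never subscribed
}

def entitled(vin: str, feature: str, today: date) -> bool:
    expiry = subscriptions.get(vin, {}).get(feature)
    return expiry is not None and today <= expiry

def set_seat_heater(vin: str, on: bool, today: date) -> None:
    if on and not entitled(vin, "heated_seats", today):
        print(f"{vin}: heated seats require an active subscription")
        return
    print(f"{vin}: seat heater {'on' if on else 'off'}")

set_seat_heater("VIN123", True, date(2023, 1, 15))  # allowed, for now
set_seat_heater("VIN456", True, date(2023, 1, 15))  # hardware present, refused
```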

Joseph Cox and Aaron Gordon, Vice:

Historically, cars come with various features offered as part of packages, or “trims,” which the buyer decides when they purchase the car. Originally, these were nearly all physical or hardware upgrades like leather seats, more horsepower, or a sunroof. But, increasingly, they are software-enabled features like automatic headlights and wiper activation and driver assist features like adaptive cruise control. The creation of software-locked features means all versions of a car can have the feature, but only if the customer pays to unlock them. Some coders are helping customers do this off-the-books.

If BMW is going to trust its software to be the gatekeeper deciding whether installed hardware can be used, I say “good luck”. Do you think owners who elect not to subscribe to heated seats are not actually paying for them? The seats must be built into the bill of materials. Behaviours like BMW’s are normalizing the ability for companies to skim revenue off the top for no reason other than because they can.

This is what we can expect going forward in contexts we had never previously imagined, enabled partly by laws like the DMCA’s anti-circumvention rules. Businesses love predictable monthly recurring revenue streams. Do you believe we are already being squeezed for every dollar we can give? Of course not; BMW’s strategy proves there is plenty more room for nickel-and-diming, customer experience be damned.

Jia Tolentino, the New Yorker:

If you become pregnant, your phone generally knows before many of your friends do. The entire Internet economy is built on meticulous user tracking — of purchases, search terms — and, as laws modelled on Texas’s S.B. 8 proliferate, encouraging private citizens to file lawsuits against anyone who facilitates an abortion, self-appointed vigilantes will have no shortage of tools to track and identify suspects. (The National Right to Life Committee recently published policy recommendations for anti-abortion states that included criminal penalties for anyone who provides information about self-managed abortion “over the telephone, the internet, or any other medium of communication.”) A reporter for Vice recently spent a mere hundred and sixty dollars to purchase a data set on visits to more than six hundred Planned Parenthood clinics. Brokers sell data that make it possible to track journeys to and from any location — say, an abortion clinic in another state. In Missouri, this year, a lawmaker proposed a measure that would allow private citizens to sue anyone who helps a resident of the state get an abortion elsewhere; as with S.B. 8, the law would reward successful plaintiffs with ten thousand dollars. The closest analogue to this kind of legislation is the Fugitive Slave Act of 1793.

Two data brokers, Safegraph and Placer.ai, said they removed Planned Parenthood visits from their data sets. They could reverse that decision at any time, and there is nothing preventing another company from offering its own package of data about users seeking a form of healthcare that is now illegal in a dozen states. People have little choice about which third-party providers receive data from the apps and services they use. Anyone using a period tracking app is at risk of that data being subpoenaed and, while some vendors say they do not pass health records to brokers, some of those same apps were found to be inadvertently sharing records with Facebook.

If the U.S. had more protective privacy laws, it would not make today’s ruling any less of a failure to uphold individuals’ rights in the face of encroaching authoritarian policies. But it would make it a whole lot harder for governments and those deputized on their behalf to impose their fringe views against medical practitioners, clinics, and people seeking a safe abortion.

Naomi Nix and Cat Zakrzewski, the Washington Post:

The way that generation uses social media more generally could render years of work to spot and identify public signs of upcoming violence obsolete, social media experts warn.

“There is this shift toward more-private spaces, more-ephemeral content,” said Evelyn Douek, a senior research fellow at the Knight First Amendment Institute at Columbia University. “The content moderation tools that platforms have been building and that we’ve been arguing about are kind of dated or talking about the last war.”

Many recent mass murders have had a clear connection to social media platforms. One of the primary sources of information about these criminals’ beliefs has been the evidence left behind in their own posts. That is also true of the hate crime in Buffalo, New York earlier this month. But it is worth being skeptical given that Americans are not particularly active on social media compared to people in other countries.

Reporting the following day from Cristiano Lima in the Washington Post’s “Technology 202” newsletter struck a more cautionary tone:

But former tech staffers, researchers and industry critics alike said rushing to draw a connection between the shooting and social media — particularly a causal one — is misguided and may distract from broader debates about the cause of such attacks.

“We should take care with how much we center the role of platforms unless there’s evidence to suggest that they substantively contributed to the violence,” said Emerson Brooking, a resident senior fellow at the Atlantic Council’s Digital Forensic Research Lab, a think tank that researches online extremism.

Just about every country has social media, violent video games, believers in conspiracy theories, and guns in private hands. But only one nation is an outlier in both guns and mass shootings. Beware of the people attempting, as always, to shift blame from guns to anything else. Three full days of conversations about how many doors are on buildings is unhelpful and distracting. Blaming social media is probably just as ineffectual, though it is one avenue for deeper beliefs to spread. There is a single policy area where changes must be made first: guns.

Joseph Cox, Vice:

The Centers for Disease Control and Prevention (CDC) bought access to location data harvested from tens of millions of phones in the United States to perform analysis of compliance with curfews, track patterns of people visiting K-12 schools, and specifically monitor the effectiveness of policy in the Navajo Nation, according to CDC documents obtained by Motherboard. The documents also show that although the CDC used COVID-19 as a reason to buy access to the data more quickly, it intended to use it for more general CDC purposes.

Location data is information on a device’s location sourced from the phone, which can then show where a person lives, works, and where they went. The sort of data the CDC bought was aggregated — meaning it was designed to follow trends that emerge from the movements of groups of people — but researchers have repeatedly raised concerns with how location data can be deanonymized and used to track specific people.

Remember, during the early days of the pandemic, when the Washington Post published an article chastising Apple and Google for not providing health organizations full access to users’ physical locations? In the time since it was published, the two companies released their jointly-developed exposure notification framework which, depending on where you live, has either been somewhat beneficial or mostly inconsequential. Perhaps unsurprisingly, regions with more consistent messaging and better privacy regulations seemed to find it more useful than places where there were multiple competing crappy apps.

The reason I bring that up is that a new app invading your privacy in the way the Post seemed to want turns out to be unnecessary, because a bunch of other apps on your phone already do that job just fine. And, for the record, that is terrible.

In a context vacuum, it would be better if health agencies were able to collect physical locations in a regulated and safe way for all kinds of diseases. But there have been at least two stories about wild overreach during this pandemic alone: this one, in which the CDC wanted location data for all sorts of uses beyond contact tracing, and Singapore’s acknowledgement that data from its TraceTogether app — not based on the Apple–Google framework — was made available to police. These episodes do not engender confidence.

Also — and I could write these words for any of the number of posts I have published about the data broker economy — it is super weird how this data can be purchased by just about anyone. Any number of apps on our phones report our location to hundreds of these companies we have never heard of, and then a government agency or a media organization or some dude can just buy it in ostensibly anonymized form. This is the totally legal but horrific present.

Reports like these underscore how frustrating it was to see the misplaced privacy panic over stuff like the Apple–Google framework or digital vaccine passports. Those systems were generally designed to require minimal information, report as little externally as possible, and use good encryption for communications. Meanwhile, the CDC can just click “add to cart” on the location of millions of phones.
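For contrast, here is a simplified sketch of the rolling-identifier idea that made the Apple–Google framework comparatively private. The real specification derives AES-encrypted rolling proximity identifiers from daily keys via HKDF, so treat this HMAC-based stand-in as an illustration of the shape of the design, not the spec itself:

```python
import hmac, hashlib, os

# Simplified illustration only: the real Apple-Google specification
# derives AES-encrypted rolling proximity identifiers from daily keys
# using HKDF. This HMAC stand-in shows the shape of the privacy design.
daily_key = os.urandom(16)  # stays on the phone unless the user tests positive

def rolling_id(key: bytes, interval: int) -> bytes:
    """Identifier broadcast over Bluetooth, rotating every ~15 minutes."""
    return hmac.new(key, interval.to_bytes(4, "big"), hashlib.sha256).digest()[:16]

# Nearby phones record opaque beacons; no location is collected at all.
heard_today = {rolling_id(daily_key, 7)}

# If the key's owner later tests positive, the key is published and every
# phone re-derives the identifiers locally to check for an encounter.
exposed = any(rolling_id(daily_key, i) in heard_today for i in range(96))
print(exposed)  # True, computed entirely on the device
```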

Lorenzo Franceschi-Bicchierai, Vice:

In the last few years, regulators all over the world have tried to limit how platforms like Facebook can use their own users’ data. One of the most notable and significant regulations is the European Union’s General Data Protection Regulation (GDPR), which went into effect in May 2018. In its article 5, the law mandates that personal data must be “collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes.” 

What that means is that every piece of data, such as a user’s location, or religious orientation, can only be collected and used for a specific purpose, and not reused for another purpose. For example, in the past Facebook took the phone number that users provided to protect their accounts with two-factor authentication and fed it to its “people you may know” feature, as well as to advertisers. Gizmodo, with the help of academic researchers, caught Facebook doing this, and eventually the company had to stop the practice.

According to legal experts interviewed by Motherboard, GDPR specifically prohibits that kind of repurposing, and the leaked document shows Facebook may not even have the ability to limit how it handles users’ data. The document raises the question of whether Facebook is able to broadly comply with privacy regulations because of the sheer amount of data it collects and where it flows within the company.

Facebook denied it was unable to control user data internally, but it is hard to read this document and conclude it has everything neatly organized and all permissions are correct. At Facebook’s scale, I am not surprised that is the case, but it is damning to see it written in plain text.
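For a sense of what Article 5 compliance demands technically, imagine every stored value tagged with the purposes it was collected for, and every read checked against those tags. This is only a toy sketch of purpose limitation, not anything from the leaked document:

```python
# Toy sketch of purpose limitation: every value carries the purposes it
# was collected for, and reads are checked against them. Not from the
# leaked document; just the discipline GDPR's Article 5 implies.
class PurposeViolation(Exception):
    pass

class TaggedDatum:
    def __init__(self, value, purposes: set):
        self.value = value
        self.purposes = purposes

    def read(self, purpose: str):
        if purpose not in self.purposes:
            raise PurposeViolation(f"{purpose!r} is not a collection purpose")
        return self.value

phone = TaggedDatum("+1-555-0100", {"two_factor_auth"})
print(phone.read("two_factor_auth"))  # the purpose users consented to

try:
    phone.read("ad_targeting")  # the repurposing Gizmodo caught
except PurposeViolation as err:
    print("blocked:", err)
```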

Annie Palmer, in an April 1 CNBC article:

Employees at an Amazon warehouse on New York’s Staten Island voted Friday to join a union, a groundbreaking move for organized labor and a stinging defeat for the e-commerce giant, which has aggressively fought unionization efforts at the company.

[…]

The union is led by Christian Smalls, a former JFK8 manager, who was fired by Amazon in 2020 after the company claimed he violated social distancing rules. Smalls argued he was fired in retaliation for staging a protest in the early weeks of the coronavirus pandemic to call for stronger safety measures.

Smalls was smeared by Amazon’s general counsel in internal memos after his firing. Gerald Bryson was also fired from his job at JFK8 for protesting lacklustre safety measures with Smalls; Amazon was just told to reinstate his job. Amazon says it is appealing. I disagree.

Lauren Kaori Gurley, Vice:

An Apple Store in Atlanta has filed for a union election with the Communications Workers of America, becoming the first of Apple’s 272 brick-and-mortar stores in the country to do so.

[…]

The news coincides with a wave of burgeoning and growing union drives at at least half a dozen Apple store locations, including locations in New York City and Maryland. Apple store employees are unionizing with at least three different national unions, a reflection of the siloed nature of Apple’s retail store locations. The CWA campaign is part of CODE-CWA, an initiative to unionize tech and games workers, and has members from Activision-Blizzard and Google.

Good for all of these workers. These are two of the most valuable companies on the planet, and their non-tech workforce should absolutely be negotiating for better pay and conditions. Both may pay higher-than-average wages for their roles, but there is no reason why that should be a ceiling. Employees at the Genius Bar, in particular, used to be given unique experiences that made them feel like an integral part of Apple. Now? Not so much. These core workforces can expect better.

Microsoft president Brad Smith:

Today we’re announcing a new set of Open App Store Principles that will apply to the Microsoft Store on Windows and to the next-generation marketplaces we will build for games. We have developed these principles in part to address Microsoft’s growing role and responsibility as we start the process of seeking regulatory approval in capitals around the world for our acquisition of Activision Blizzard. This regulatory process begins while many governments are also moving forward with new laws to promote competition in app markets and beyond. We want regulators and the public to know that as a company, Microsoft is committed to adapting to these new laws, and with these principles, we’re moving to do so.

Microsoft is making eleven promises to developers for its Windows software marketplace. Some are straightforward enough, like how it says it will hold developers to standards of security — but not privacy — while others are clear shots at Apple and Google, like its flexible rules around in-app payments. But it says not all of the same rules will apply for Xbox developers, particularly those around in-app purchases and developer communications, because these promises are designed to get ahead of proposed legislation. That is not an oversimplification; Smith:

Second, some may ask why today’s principles do not apply immediately and wholesale to the current Xbox console store. It’s important to recognize that emerging legislation is being written to address app stores on those platforms that matter most to creators and consumers: PCs, mobile phones and other general purpose computing devices. For millions of creators across a multitude of businesses, these platforms operate as gateways every day to hundreds of millions of people. These platforms have become essential to our daily work and personal lives; creators cannot succeed without access to them. Emerging legislation is not being written for specialized computing devices, like gaming consoles, for good reasons. Gaming consoles, specifically, are sold to gamers at a loss to establish a robust and viable ecosystem for game developers. The costs are recovered later through revenue earned in the dedicated console store.

Microsoft says it will eventually make these rules standard on Xbox, too, but one wonders how that is possible when it says those existing payment structures are necessary for a “robust and viable ecosystem”. It may be happy to apply what it calls a “principled approach” in its Windows marketplace, but that is an open platform for developers. It would be as if Apple applied these same policies to its Mac App Store but not iOS — I would have questions.

Emmanuel Maiberg and Edward Ongweso Jr, Vice:

We should recognize Microsoft’s messaging for what it is: an attempt to convince us that self-regulation will turn out good for all of us. Self-regulation is how we got here, though. We need laws, not agreements — antitrust actions backed up by law, not corporate rhetoric that will disappear at Microsoft’s earliest convenience.

Becky Hansmeyer:

As I think about Microsoft cleverly positioning themselves as a developer’s best friend, I can’t help but assume that Apple execs are whining “they’re making us look like the bad guys!” instead of asking themselves, “ARE we the bad guys?”

Well said.

Shoshana Wodinsky, Gizmodo:

This week, European authorities struck a massive blow to the digital data-mining industrial complex with a new ruling stating that, quite simply, most of those annoying cookie alert banners that sites were forced to onboard en masse after GDPR was passed haven’t… actually been compliant with GDPR. Sorry.

[…]

While the ruling showed that GDPR is very much still in effect, it doesn’t do a lot to explain how blatant some of these infringements were, or how loudly critics inside the industry had been raising red flags. Simply put, when the GDPR asked the adtech industry to get consent from users before tracking them, the IAB responded with a set of guidelines with loopholes large enough that data could still get through, anyway, without consent. And now that these practices are out in the public, nobody seems sure how to make them stop.

Bulk cookie consent banners so obviously violate the spirit and letter of the GDPR, it is no wonder authorities are taking action against them. Unfortunately, the offices in charge of investigating problems and administering fines are so under-resourced I am not surprised it took this long.

Aaron Gordon, Vice:

I think of phones in much the same way I think of refrigerators or stoves. It’s an appliance, something I need but feel no attachment to, and as long as it keeps fulfilling that need, I don’t want to spend money replacing it for no real reason. The Pixel 3 fulfills my needs, so I don’t want to spend $600 on the Pixel 6, which seems to be just another phone that does all the phone things.

But I have to get rid of it because Google has stopped supporting all Pixel 3s. Despite being just three years old, no Pixel 3 will ever receive another official security update. Installing security updates is the one basic thing everyone needs to do for their own digital security. If you don’t even get them, then you’re vulnerable to every security flaw discovered since your last patch. In response to an email asking Google why it stopped supporting the Pixel 3, a Google spokesperson said, “We find that three years of security and OS updates still provides users with a great experience for their device.”

Conspiracy theories about planned obsolescence in Apple’s lineup appear like clockwork, but 2015’s iPhone 6S supports all the same security patches in iOS 15 as the newest iPhone models. My iPhone X, released one year before the Pixel 3, is running just fine with the latest software updates.

Meanwhile, Google declares Pixel models unsupported after just three years, which seems to be something of a standard among Android manufacturers.

Joseph Cox, Vice:

“The Banning Surveillance Advertising Act does what its title suggests. The legislation prohibits advertising facilitators (e.g., Facebook, Google DoubleClick, data brokers) from targeting ads with the exception of broad location targeting to a recognized place (e.g., municipality),” a press release announcing the proposed legislation reads. “The bill also prohibits advertisers from targeting ads based on protected class information and any information they purchase. Violations can be enforced by the Federal Trade Commission, state attorneys general, or private lawsuits,” it adds. The legislation would also prohibit targeted advertisements based on protected class attributes such as race, gender, and religion.

Reps. Anna G. Eshoo of California and Jan Schakowsky of Illinois, and Sen. Cory Booker of New Jersey are the Democratic lawmakers behind the proposed legislation.

Can Duruk:

My hope is that we will look back at the current state of the internet, funded solely by adtech, like when we used asbestos for insulation, lead for toys, and land mines for defense.

There is no chance that this bill becomes law in the U.S., thereby causing the world’s ad tech market to adjust to a better model, but a simple Canadian boy can dream.

Apple:

Apple today announced Self Service Repair, which will allow customers who are comfortable with completing their own repairs access to Apple genuine parts and tools. Available first for the iPhone 12 and iPhone 13 lineups, and soon to be followed by Mac computers featuring M1 chips, Self Service Repair will be available early next year in the US and expand to additional countries throughout 2022. Customers join more than 5,000 Apple Authorized Service Providers (AASPs) and 2,800 Independent Repair Providers who have access to these parts, tools, and manuals.

The initial phase of the program will focus on the most commonly serviced modules, such as the iPhone display, battery, and camera. The ability for additional repairs will be available later next year.

Brian Heater, TechCrunch:

Apple hasn’t listed specific prices yet, but customers will get a credit toward the final fee if they mail in the damaged component for recycling. When it launches in the U.S. in early-2022, the store will offer some 200 parts and tools to consumers. Performing these tasks at home won’t void the device’s warranty, though you might if you manage to further damage the product in the process of repairing it — so hew closely to those manuals. After reviewing that, you can purchase parts from the Apple Self Service Repair Online Store.

And you thought Apple could no longer surprise? This makes sense in the context of right-to-repair bills progressing in the U.S. and around the world. Apple has been lobbying against that legislation, often with ludicrous arguments that look especially funny in light of today’s news.

There seem to be a handful of caveats. Most notably, the program is launching only for very recent iPhones in the U.S., and then gradually rolling out to more countries and offering repairs for M1 Macs. This program will not help me replace the battery in my partner’s iPhone X when it is needed. Support for other products currently sold, like Intel Macs and iPads, also has not been announced. I have little hope future Apple Watch and AirPods models will become repairable, but they should be.

While I am cautiously optimistic about this new program, it does not eliminate the rationale for oversight. Apple still controls the parts and repair channels, which means it can stop offering this at any time. As welcome as this new direction seems to be, regulations can and should be used to set expectations. We should not be having this same discussion five or ten or twenty years from now.

Update: Maddie Stone, the Verge:

But Apple didn’t change its policy out of the goodness of its heart. The announcement follows months of growing pressure from repair activists and regulators — and its timing seems deliberate, considering a shareholder resolution environmental advocates filed with the company in September asking Apple to re-evaluate its stance on independent repair. Wednesday is a key deadline in the fight over the resolution, with advocates poised to bring the issue to the Securities and Exchange Commission to resolve.

This at least explains the timing.