The Apple Music Voice Plan will be available later this fall in 17 countries and regions, including Australia, Austria, Canada, China, France, Germany, Hong Kong, India, Ireland, Italy, Japan, Mexico, New Zealand, Spain, Taiwan, the United Kingdom, and the United States.
This is the same list of regions where Apple sells the HomePod Mini. If you have a HomePod and, I guess, only listen to music on that device and no other, perhaps this is a compelling offering? I am not sure I buy that. Along similar lines, I wondered if this was perhaps a low-cost way to encourage Spotify users to try Apple Music on their HomePods but, even though it is possible for Spotify to support the HomePod, it still has not done so.
Whatever the case, I cannot imagine saving $5 per month is worth having to use Siri.
Maybe you also thought about this interview, published last night, during today’s short event, and are currently wondering what Tim Cook should write in the sympathy card he could send to Intel CEO Pat Gelsinger.
The senior vice president of Microsoft Teams announced that Teams will move to Microsoft’s own Edge WebView2 rendering engine, ditching Electron in pursuit of performance gains. Microsoft claims Teams will consume half as much memory as a result of the transition. It will be called Teams 2.0 and might ship with Windows 11 in late 2022.
WebView2 cannot be thought of as a replacement for Electron; it is not a wrapper like Electron for rapidly shipping web apps on desktop platforms. The original WebView (WebView1, so to speak) used Microsoft’s EdgeHTML rendering engine, while WebView2 uses the Chromium rendering engine. WebView2 is already used by Outlook as part of Microsoft’s “One Outlook” project.
I’m not sure this makes much difference for Mac users, since it’s still built on Web technologies with a bundled browser engine.
If this is anything like the browser engine used in OneDrive, it might be worse. OneDrive regularly consumes nearly a gigabyte of RAM on my Mac while idling — several times more than the already bloated Electron-powered Dropbox client. When OneDrive syncs files, it helps itself to an entire Intel i7 CPU core and causes the fans to come on.
These issues are well documented, but Microsoft has no incentive to make improvements because anyone who has to rely on OneDrive or Teams for work has no alternative.
I am sure much of that behaviour is not attributable to the choice of browser engine. But I am worried I will soon have two apps I must keep running in the background that monopolize computer resources for trivial tasks.
Maybe it is reflective of my age, but Jon Stewart’s version of the “Daily Show” has always held a special place for me. Not one of the shows it inspired has resonated in my brain the same way.
So when Apple announced “The Problem with Jon Stewart”, I was excited. Two episodes have now aired and, well, it is different than I was expecting — but I like it.
It does not feel like the “Daily Show”, which is an advantage. That would not be fair to Trevor Noah, current host of the “Daily Show”, nor do I think it makes sense for there to be yet another show with a comedian sat behind an anchor desk. That conceit has been worn out.
Unfortunately, its model of a more in-depth look at a single topic per episode puts it in an arena crowded with “Daily Show” alumni. There’s John Oliver’s “Last Week Tonight”, which uses that format every week; Samantha Bee’s “Full Frontal”, which does something similar from time to time; and Netflix carried “Patriot Act with Hasan Minhaj” for several seasons.
“The Problem” is not like those shows. Instead of trying to jam in a joke every thirty seconds, Stewart is comfortable leaving space and holding a relaxed conversation with guests. Its biweekly release schedule seems to reflect that slower pace, too. I appreciate that, but I feel like Stewart’s monologue at the top of the show could benefit from tighter editing. It is clear that he is as sharp as ever, but there was an almost musical beat to the way the “Daily Show” was edited that is missing here. It still feels like it is finding its footing.
I am just excited to once again hear from the team behind Every Frame a Painting. There are a million and a half YouTube channels making video essays these days, but few as competently as Tony Zhou and Taylor Ramos did with Every Frame.
An investigation by The Markup found that Amazon places products from its house brands and products exclusive to the site ahead of those from competitors — even competitors with higher customer ratings and more sales, judging from the volume of reviews.
By creating more than a hundred trademarked brands, most without an obvious connection to the company, Amazon can preserve its reputation if one of its homegrown products flops. This happened in 2015 when customer reviews for its newly launched Amazon Elements diapers included complaints about leaks and “sagginess.” Amazon pulled the products after just seven weeks to make “design improvements.”
Beyond the confusing language choices, Amazon seems to be doing its damnedest to create a post-brand world: one where the company from which you bought a set of headphones or a refrigerator or a shirt simply did not exist the next day, like a three-card monte dealer skipping town. Some companies still like to stand behind the quality of their products and thrive on that reputation, but that requires more effort.
The Post-Dispatch discovered the vulnerability in a web application that allowed the public to search teacher certifications and credentials. The department removed the affected pages from its website Tuesday after being notified of the problem by the Post-Dispatch.
Based on state pay records and other data, more than 100,000 Social Security numbers were vulnerable.
Though no private information was clearly visible nor searchable on any of the web pages, the newspaper found that teachers’ Social Security numbers were contained in the HTML source code of the pages involved.
The Post-Dispatch did the right thing when its reporters found this boneheaded privacy flaw in the website: it notified the department responsible, and held off disclosing the problem until it had been fixed just a couple days later. Job done, right?
Through a multi-step process, an individual took the records of at least three educators, decoded the HTML source code, and viewed the SSN of those specific educators.
We notified the Cole County prosecutor and the Highway Patrol’s Digital Forensic Unit will investigate.
“Decoding” HTML — what a concept.
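For what it is worth, the “multi-step process” amounts to viewing the page source. A hypothetical sketch (the markup, field names, and SSN here are invented for illustration) shows how trivially data embedded in HTML is exposed to anyone who looks:

```python
import re

# Hypothetical page source resembling the flaw described above: the SSN
# never appears on the rendered page, but it sits in the raw HTML that is
# delivered to every visitor's browser. All markup and values are invented.
SAMPLE_HTML = """
<div class="teacher-record">
  <span class="name">Jane Doe</span>
  <input type="hidden" name="ssn" value="123-45-6789">
</div>
"""

# Matches the conventional NNN-NN-NNNN shape of a Social Security number.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def find_ssns(html: str) -> list[str]:
    """Return every SSN-shaped string present in the raw HTML source."""
    return SSN_PATTERN.findall(html)

print(find_ssns(SAMPLE_HTML))  # ['123-45-6789']
```

No “decoding” is involved: the hidden field never renders on screen, but it arrives in plain text with the page, one right-click away from anyone.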
The state should be sending these reporters a “thank you” card and an Edible Arrangement, not charging them as criminals for viewing the public source code of the website.
Reminds me a little of that incident last month with a vaccination status app, and its fragile CEO. If anyone finds a security vulnerability and responsibly discloses it, they should be thanked publicly and paid. That goes for small businesses, large corporations, and governments alike.
If you pay close attention to this stuff, what she’s talking about is Platforms 101. But most people don’t pay close attention to this stuff. And what Haugen is doing here is articulating a very powerful point that many Facebook users still take for granted: What you see on Facebook is not organic presentation of information. It is the result of decisions made for you by the company’s software, which follows its leaders’ directives.
This is a powerful sentiment because it gives every Facebook user a tangible example of how the platform deprives them of a certain kind of agency. In 2018, when the Cambridge Analytica scandal was in its second week, I wrote that it would have staying power because it reminded regular users how platforms have “stripped us of the agency to dictate what happens with our most personal information.” I think Haugen’s testimony (and the documents that help back it up) will do something similar for people who may have not realized that Facebook is not a pure reflection of what’s happening in the lives of their friends and families — it is a highly curated one. Talking about Facebook from the perspective of user agency has the potential to be effective. The company isn’t all powerful and platforms aren’t mind controllers, but they do exert influence on how information is amplified. And that’s a responsibility to be held accountable for.
Facebook did not do itself any favours when, in 2014, it announced it had manipulated the emotions of hundreds of thousands of users for a week two years prior.
In sworn testimony before the U.S. Congress in 2020, Amazon founder Jeff Bezos explained that the e-commerce giant prohibits its employees from using the data on individual sellers to help its private-label business. And, in 2019, another Amazon executive testified that the company does not use such data to create its own private-label products or alter its search results to favor them.
But the internal documents seen by Reuters show for the first time that, at least in India, manipulating search results to favor Amazon’s own products, as well as copying other sellers’ goods, were part of a formal, clandestine strategy at Amazon – and that high-level executives were told about it. The documents show that two executives reviewed the India strategy – senior vice presidents Diego Piacentini, who has since left the company, and Russell Grandinetti, who currently runs Amazon’s international consumer business.
Earlier this year, Mother Jones cited several journalists who, in the words of one, claimed that Amazon is “the only company [they have] dealt with that has directly lied to me”. Several reporters used that word, “lie”, or said the company was deceitful in its responses to journalists — that it goes far beyond a typical carefully worded corporate message.
It would make sense if that reputation carried through to its dealings with lawmakers. World leaders are mostly deferential to executive wrongdoing. What consequences would Jeff Bezos or any of the managers named in this article face if these allegations were proven true, if only for their false public statements?
The company’s cofounder and CEO, Hoan Ton-That, tells WIRED that Clearview has now collected more than 10 billion images from across the web — more than three times as many as has been previously reported.
Some of Clearview’s new technologies may spark further debate. Ton-That says it is developing new ways for police to find a person, including “deblur” and “mask removal” tools. The first takes a blurred image and sharpens it using machine learning to envision what a clearer picture would look like; the second tries to envision the covered part of a person’s face using machine learning models that fill in missing details of an image using a best guess based on statistical patterns found in other images.
I am stunned Clearview is allowed to remain in business, let alone continue to collect imagery and advance new features, given how invasive, infringing, and dangerous its technology is.
Sometimes, it makes sense to move first and wait for laws and policies to catch up. Facial recognition is not one of those times. And, to make matters worse, policymakers have barely gotten started in many jurisdictions. We are accelerating toward catastrophe and Clearview is leading the way.
One of the reasons I linked to coverage of the Ozy meltdown at the end of last month is because I was apparently one of its email subscribers, but I could not remember registering. But I did notice that my earliest emails from the company were co-branded with Wired, which I was subscribed to at the time. Is that a coincidence?
Ozy Media boasts that it has more than 26 million subscribers for its newsletters, but former employees say this is another example of deceptive tactics at the embattled digital media company, with most of the email addresses on its newsletter lists either purchased, taken from other companies without their permission or added back to the lists after the recipients unsubscribed — a potentially illegal act (representatives from Ozy have not responded to Forbes’ repeated requests for comment).
Among the companies they say Ozy collectively accumulated millions of email addresses from were the McClatchy newspaper chain and the technology magazine Wired, according to two of the former employees (McClatchy and Conde Nast, the parent company of Wired, did not respond to requests for comment from Forbes).
Recent Senate hearings — convened under the banner of “Protecting Kids Online” — focused on a whistleblower’s revelations regarding what Facebook itself knows about how its products harm teen users’ mental health. That’s an important question to ask. But if there’s going to be a reckoning around social media’s role in society, and in particular its effects on teens, shouldn’t lawmakers also talk about, um, the platforms teens actually use? The Wall Street Journal’s “Facebook Files” reports, after all, also showed that Facebook itself is petrified of young people abandoning its platforms. To these users, Facebook just isn’t cool.
So TikTok is not a passing fad or a tiny start-up in the social-media space. It’s a cultural powerhouse, creating superstars out of unknown artists overnight. It’s a career plan for young influencers and a portable shopping mall full of products and brands. It’s where many young people get their news and discuss politics. And sometimes they get rowdy: In June 2020, TikTok teens allegedly pranked then-President Donald Trump’s reelection campaign by overbooking tickets to a rally in Tulsa, Oklahoma, and then never showing.
TikTok is an unmitigated sensation, and the best argument made by those who insist that Facebook’s acquisitions of Instagram and WhatsApp have not meaningfully diminished competition in the social media space.
Its privacy and moderation policies are also worrying. Though similar to policies for platforms created in the U.S. and elsewhere, TikTok moderators have also censored videos, and there is more (emphasis mine):
One thing that is certainly concerning is TikTok’s ability to steer users deeper into niche video categories. Like many other things here, this is not unique to TikTok — YouTube is notorious for a recommendation system that used to push users down some pretty dark paths.
An investigation by the Wall Street Journal this summer found that TikTok primarily uses the time spent watching each video to signal what users are most interested in. That weighting is a clever decision in its simplicity. Interacting with something on any platform by liking it, re-sharing it, or commenting on it requires a deliberate effort, and it is often public. Those actions tell a recommendation algorithm which of our interests we are comfortable showing other people. But the amount of time we spend looking at something is a far more valuable metric of what captivates us most.
Which is kind of creepy when you think about it.
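As a toy illustration of why watch time is such a potent signal, here is a hypothetical scoring function; the weights, names, and structure are all invented, since TikTok’s actual model is not public:

```python
# A hypothetical scoring function: watch-time completion dominates, while
# deliberate, public signals (likes, shares, comments) contribute far less.
# Every weight and field name here is invented for illustration.

def interest_score(watched_seconds: float, video_length: float,
                   liked: bool, shared: bool, commented: bool) -> float:
    # Fraction of the video actually watched, capped at 1.0 for loops.
    completion = min(watched_seconds / video_length, 1.0)
    return (10.0 * completion      # passive signal, weighted heavily
            + 1.0 * liked          # deliberate, public signals,
            + 1.0 * shared         # weighted lightly
            + 0.5 * commented)

# Watched to the end, no interaction at all:
print(interest_score(60, 60, False, False, False))  # 10.0
# Liked and shared, but abandoned a quarter of the way in:
print(interest_score(15, 60, True, True, False))    # 4.5
```

Under a weighting like this, a video you quietly watched to completion outranks one you liked and shared but abandoned early, which is roughly the dynamic the Journal described.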
The fact that our base instincts are revealed by how often we rubberneck at the scene of a car accident will, unsurprisingly, create pathways to mesmerizing but ethically dubious videos. A Journal investigation last month found that demo user accounts that appeared to be aged 13–15 were quickly directed to videos about drinking, drug use, and eating disorders, as well as those from users who indicated their videos were for an adult audience only.
I get why this is alarming, but I have to wonder how different it is from past decades’ moral panics. Remember the vitriol expressed against Marilyn Manson in the 2000s for his music? Parents ought to have saved up that anger for now, when it really matters. Rap and hip hop have been blamed for all kinds of youth wrongdoing, as have MTV, television, and the internet more broadly. Is there something different about hearing and seeing this stuff in video form instead of in song lyrics or on message boards?
After we interacted with anti-trans content, TikTok’s recommendation algorithm populated our FYP [For You Page] feed with more transphobic and homophobic videos, as well as other far-right, hateful, and violent content.
Exclusive interaction with anti-trans content spurred TikTok to recommend misogynistic content, racist and white supremacist content, anti-vaccine videos, antisemitic content, ableist narratives, conspiracy theories, hate symbols, and videos including general calls to violence.
That looks like a pathway to radicalization to me, especially for users in balkanized and politically fragile regions, or places with high levels of anxiety. That seems to describe much of the world right now.
Jeff Verkoeyen runs Google’s design team for products on Apple’s platforms:
So at the beginning of this year, my team began a deep evaluation of what it means to build a hallmark Google experience on Apple platforms by critically evaluating the space of “utility” vs key brand moments, and the components needed to achieve either.
Does a switch really need to be built custom in alignment with a generic design system? Or might it be sufficient to simply use the system solution and move on?
This is good news. It’s good for Google’s developers, who no longer have to build that custom code. And more importantly, it’s good for people who use Google’s apps on iOS, because with any luck they’ll be updated faster, work better, and feel more like proper iOS apps, not invaders from some other platform.
It’s now been almost ten years since we set out on this journey, and many of the gaps MDC [Material Design Components] had filled have since been filled by UIKit — often in ways that result in much tighter integrations with the OS than what we can reasonably achieve via custom solutions.
I would love to know what specifically has changed in UIKit that would only now make it possible to build Google’s apps with native components, compared to many years ago.
The concept is known in tech circles as “interoperability,” “competitive interoperability,” or “adversarial interoperability.”
It doesn’t require the government to regulate speech. It doesn’t require you to delete Facebook, disconnect from your friends, or migrate your data. It doesn’t require there to be one algorithmic solution to all things.
It’s an appropriately decentralized, open-sourced, technologically elegant way of fixing the problem.
You or I could focus on the specific details of why this may not be a slam-dunk solution — money, probably — but I think it has legs. It is worth exploring, at least.
You can get a glimpse of this with Twitter. It remains one of the few big social networks that allows third-party clients. If you like and use the official Twitter app, that is cool, but you can choose from other ones for specific reasons. I use Twitterrific on my Mac and Tweetbot on my iPhone because they feel nicer to me, and always default to a reverse-chronological view.
But there are plenty of clients that do more than reproduce the Twitter experience: I use another app called Macaw to see great tweets from people within my orbit.1 But Nick, you may say, the official Twitter client does that too. The difference is that it is deliberate and separated. I can choose the experience I want — sometimes it is the first-party client, but most of the time I prefer these discrete third-party apps. And there are other Twitter clients for specific purposes: I have found some that only allow you to post and not read tweets, some that only allow the reverse, and one that is a client for direct messages only.
Every so often, I will see some tech commentator say that, actually, algorithmically sorted feeds are good. I get where they are coming from; I do not think they are wrong. But I want to be able to make that choice. I get to make that decision with my Twitter browsing experience and I am happier for it, and I would be less happy if I could only use the first-party client. Here’s another example: Instagram’s website is a better browsing experience than Instagram’s app since it shows photos from everybody I follow, instead of just the ones it thinks I will engage with.
The thing is that we have tried decentralized and interoperable networks before and, aside from email, none have amassed the user base of something like Facebook or YouTube. Historically, that could be due to the nascent days of the web only representing a small audience. There used to be websites that listed every other website on the web — that is how small it was. Now, it could be because networks like Mastodon and Pixelfed are built by technologists for technologists or, at least, a technically savvy niche audience.
But it could also be due to the network effects of massive siloed platforms. One way we can find out is to turn them into protocols.
I just found out that Macaw is being integrated into a product called Clay. Watch it get shut down like Nuzzel and all the other apps for finding great tweets. ↩︎
The second item is a report in Reuters indicating that ExpressVPN CIO Daniel Gericke is among three men fined $1.6 million by the US Department of Justice for hacking and spying on US citizens on behalf of the government of the UAE (United Arab Emirates).
I’ll discuss each of these reports individually, and then share with you some thoughts about how these situations might impact your decision to use (or not use) ExpressVPN.
The operatives — Marc Baier, Ryan Adams and Daniel Gericke — were part of a clandestine unit named Project Raven, first reported by Reuters, that helped the UAE spy on its enemies.
At the behest of the UAE’s monarchy, the Project Raven team hacked into the accounts of human rights activists, journalists and rival governments, Reuters reported.
This is a more comprehensive look at ExpressVPN’s sketchy history and its ownership, and it leaves me with the impression that the world of VPNs is mostly bullshit. The honest take is that these products help users circumvent geographic restrictions, particularly for things like streaming services. I am convinced that, if streaming companies and media rightsholders were less concerned with nit-picking contracts and more focused on providing a great experience, there would be far less demand among everyday users for VPNs. By no means am I blaming streaming services for creating this sleazy market, but they certainly have not helped.
I learned this the hard way. For several years, I subscribed to a popular VPN service called Private Internet Access. In 2019, I saw the news that the service had been acquired by Kape Technologies, a security firm in London. Kape was previously named Crossrider, a company that had been called out by researchers at Google and the University of California for developing malware. I immediately canceled my subscription.
The rest of Chen’s article is worth reading — VPNs are often marketed for their security and privacy promises, but it probably does not make sense for most people to route their web browsing through some third-party company — but these shady review sites caught my eye.
According to a May 2021 Restore Privacy report, Kape bought Webselenese and its vpnMentor and Wizcase review websites. Both websites aggressively push their top three picks which, funny enough, are all owned by Kape. Wizcase also publishes reviews of security software, and picks Intego as the best antivirus software for the Mac; Kape also owns Intego.
But if you were browsing either review website, you would probably miss Kape’s ownership. While a legitimate news organization would typically disclose conflicts of interest in immediate context, the word “Kape” appears nowhere in the on-page text, nor does it appear on the dedicated ExpressVPN review page. Wizcase’s “About” page says that the review site “believe[s] in transparency” and the footer on every page claims that it is an “independent review site”. vpnMentor says that its “reviews are not based on advertising” and its claims of honesty make it a “powerful transparency tool for the internet”.
There is only one place where a reader could find traces of Kape’s ownership on each site. You must find the small text reading “Ownership” at the top of a review page. On Wizcase’s website, it does not look like a link and it is a terribly low-contrast shade of grey — vpnMentor’s text link is blue — but, if you click on it, the site’s parentage is acknowledged.
Shortly after the Wall Street Journal began publishing “The Facebook Files” last month, a series of articles based on leaked internal research documents, the paper confirmed that two U.S. lawmakers were in touch with the whistleblower who leaked the files. Not only was the research in the possession of the Journal’s reporters and the SEC, the lawmakers said that they were hoping the whistleblower would speak publicly.
Zuckerberg’s letter is behind Facebook’s login wall; since I do not have an account, I cannot access it. Thankfully, the Verge has reproduced it in full for those of us who think that the public statements of the CEO of a major company should be, you know, public.
I obviously do not know more about Facebook than its founder and CEO. But I think it would be worthwhile to compare Zuckerberg’s comments against the reporting so far, so we can see what may be omitted, taken out of context, or misrepresented. Skipping over a perfunctory introduction and a brief reflection on Monday’s companywide outage, here is Zuckerberg’s comment on Haugen’s congressional appearance:
Second, now that today’s testimony is over, I wanted to reflect on the public debate we’re in. I’m sure many of you have found the recent coverage hard to read because it just doesn’t reflect the company we know. We care deeply about issues like safety, well-being and mental health. It’s difficult to see coverage that misrepresents our work and our motives. At the most basic level, I think most of us just don’t recognize the false picture of the company that is being painted.
Many of the claims don’t make any sense. If we wanted to ignore research, why would we create an industry-leading research program to understand these important issues in the first place? If we didn’t care about fighting harmful content, then why would we employ so many more people dedicated to this than any other company in our space — even ones larger than us? If we wanted to hide our results, why would we have established an industry-leading standard for transparency and reporting on what we’re doing? And if social media were as responsible for polarizing society as some people claim, then why are we seeing polarization increase in the US while it stays flat or declines in many countries with just as heavy use of social media around the world?
These are quite the paragraphs, with the latter being particularly misleading. Let’s look at each rhetorical question:
If we wanted to ignore research, why would we create an industry-leading research program to understand these important issues in the first place?
This premise is obviously false. Just because there exists a well-funded corporate research team, it does not mean their findings cannot be ignored — or worse. Researchers at oil companies were aware of the environmental harm caused by their products for decades before the general public; instead of doing something about it, they lied and lobbied.
If we didn’t care about fighting harmful content, then why would we employ so many more people dedicated to this than any other company in our space — even ones larger than us?
Of all the rhetorical questions in this paragraph, this one is framed around a wishy-washy straw man argument, so any response is going to be similarly vague. Framed as a binary choice of caring versus not caring, I suppose the presence of any platform moderation could be seen as caring. But perhaps this does not demonstrate an adequate level of care, even with the most contractors — not employees — compared to its competitors.
That premise is not the claim made by Haugen or the reporting on the documents she released, however. On September 16, the Journal published an analysis of moderation-related documents indicating that the company prioritizes growth and user retention, and is reluctant to remove users. The reporting portrays this as a systemic moderation problem that can be similarly attributed to greed and incompetence. As user growth has been driven almost exclusively (PDF) by the “Asia-Pacific” and “Rest of World” categories for years, platform moderation has not kept pace with language and regional requirements.
I bet Facebook’s staff and contractors, at all levels, are horrified to see the company’s platforms used to promote murder, drug cartels, human exploitation, and ethnicity-targeted violence. What the Journal’s reporting indicates is that they struggle to balance those problems against profits, and anyone with a conscience might wonder why there is a need for a touch so cautious that they are reluctant to ban cartel members.
If we wanted to hide our results, why would we have established an industry-leading standard for transparency and reporting on what we’re doing?
And if social media were as responsible for polarizing society as some people claim, then why are we seeing polarization increase in the US while it stays flat or declines in many countries with just as heavy use of social media around the world?
This is, no kidding, an honest-to-goodness question worth asking, though it seems like the answer may be fairly straightforward: platforms like Facebook may not be wholly to blame for polarization, but they seem to exacerbate existing societal fractures, according to the Brookings Institution. Regions that are already polarized or have more fragile democracies are pulled apart further, while reducing time spent on these platforms decreases animosity and hardened views.
At the heart of these accusations is this idea that we prioritize profit over safety and well-being. That’s just not true. For example, one move that has been called into question is when we introduced the Meaningful Social Interactions change to News Feed. This change showed fewer viral videos and more content from friends and family — which we did knowing it would mean people spent less time on Facebook, but that research suggested it was the right thing for people’s well-being. Is that something a company focused on profits over people would do?
I am confused why Zuckerberg would choose to illustrate this by referencing Meaningful Social Interactions, the topic of one of the first pieces of reporting from the Journal based on Haugen’s document disclosures. The summary Zuckerberg paints is almost the opposite of what has been reported based on Facebook’s internal research. It is so easy to fact-check that it seems as though Zuckerberg is counting on readers not to. From the Journal:
Company researchers discovered that publishers and political parties were reorienting their posts toward outrage and sensationalism. That tactic produced high levels of comments and reactions that translated into success on Facebook.
“Our approach has had unhealthy side effects on important slices of public content, such as politics and news,” wrote a team of data scientists, flagging Mr. Peretti’s complaints, in a memo reviewed by the Journal. “This is an increasing liability,” one of them wrote in a later memo.
They concluded that the new algorithm’s heavy weighting of reshared material in its News Feed made the angry voices louder. “Misinformation, toxicity, and violent content are inordinately prevalent among reshares,” researchers noted in internal memos.
This change may have reduced time spent on the site, but internal researchers found it made Facebook a worse place to be, not a better one. The Journal also says Zuckerberg was worried about the change’s effect on engagement after it launched, and asked for adjustments to reduce that effect.
In response to Zuckerberg’s question this week, “is that something a company focused on profits over people would do?”, I say “duh and/or hello”.
The argument that we deliberately push content that makes people angry for profit is deeply illogical. We make money from ads, and advertisers consistently tell us they don’t want their ads next to harmful or angry content. And I don’t know any tech company that sets out to build products that make people angry or depressed. The moral, business and product incentives all point in the opposite direction.
Once again, I think there is a subtle distinction that Zuckerberg is avoiding here to make an easier argument. The internal documents collected by Haugen indicate the company profits more when people are engaged more, that engagement rises with incendiary materials, and that engagement is prioritized in the News Feed. These documents and reporting based on them do not indicate the company is intentionally trying to make people angry, only that it is following a path to greater profit that, incidentally, stokes stronger emotions.
BuzzFeed data scientist Max Woolf, in a thread on Twitter, illustrated another problem with Zuckerberg’s claims. In posts where discriminatory perspectives are framed as defiant or patriotic, the most common responses are “likes” and “loves”, not angry reactions. Would it be fair to say these are positive posts? If you only saw the reactions, stripped of context, that is the impression you might get.
Zuckerberg also reflects on some of the reporting on the effects of Facebook’s products on children and youth, but ends up passing the buck:
Similar to balancing other social issues, I don’t believe private companies should make all of the decisions on their own. That’s why we have advocated for updated internet regulations for several years now. I have testified in Congress multiple times and asked them to update these regulations. I’ve written op-eds outlining the areas of regulation we think are most important related to elections, harmful content, privacy, and competition.
In testimony earlier this year, Zuckerberg said that he would support changing Section 230 of the Communications Decency Act, but in a specific way that benefits Facebook and other large companies. In a vacuum and without existing social media giants, I think his proposal makes sense. But, today, it would be toxic for the open web. Increasing liability for websites that allow public posting of any kind would make it hard for smaller businesses with lower budgets to compete. Contrary even to Haugen’s limited reform scope, it seems likely that changes to Section 230 — without antitrust action — will, like many other laws, be easily absorbed by massive companies like Facebook while disadvantaging upstarts.
That said, I’m worried about the incentives that are being set here. We have an industry-leading research program so that we can identify important issues and work on them. It’s disheartening to see that work taken out of context and used to construct a false narrative that we don’t care. If we attack organizations making an effort to study their impact on the world, we’re effectively sending the message that it’s safer not to look at all, in case you find something that could be held against you. That’s the conclusion other companies seem to have reached, and I think that leads to a place that would be far worse for society. Even though it might be easier for us to follow that path, we’re going to keep doing research because it’s the right thing to do.
It is not every day you get an honest-to-goodness mafioso threat out of a CEO. I almost admire how straightforward it is.
When I reflect on our work, I think about the real impact we have on the world — the people who can now stay in touch with their loved ones, create opportunities to support themselves, and find community. This is why billions of people love our products. I’m proud of everything we do to keep building the best social products in the world and grateful to all of you for the work you do here every day.
Today, the Verge released the results of its latest Tech Trust Survey. Of the 1,200 people in a U.S.-representative sample polled in August, 31% think Facebook has a negative impact on society, 56% do not trust it with their personal information, and 72% think the company has too much power. 48% said they would not miss Facebook if it went away. Respondents were more positive about Instagram, but even more of them — 60% — said they would be okay if it disappeared. That is not a promising sign that people “love” the company’s offerings. All of this is after several years of critical coverage of Facebook, but before Haugen’s disclosures.
I do not think the Journal’s stories this month about Facebook revealed much new information that will swing those numbers in either direction. What these leaks show is the degree to which Facebook is aware of the harmful effects of its products, yet often prioritizes its earnings over positive societal influence. If you read that sentence and thought “like every company, Nick”, I think we have found common ground on broader questions of balancing business desires with the public good.
So if you’ve been brought up to believe with every ounce of your mind and soul that growth is everything, and that the second you take your eye off the ball it will stop, decisions that are “good for Facebook, but bad for the world” become the norm. Going back to my post on the hubris of Facebook, it also feels like Mark thinks that once Facebook passes some imaginary boundary, then they can go back and fix the parts of the world they screwed up. It doesn’t work like that, though.
And that’s a problem.
The incentives are all screwed up here. While Zuckerberg may claim that advertisers will refuse to spend on platforms that regularly spew hate, spread misinformation, and sow division, the last three quarters have been the most financially successful in Facebook’s history. Repeated negative press stories have not correlated with any drop in advertising spending or in Facebook’s value to investors.
It is not surprising this is the case: Facebook runs two of the world’s most successful personalized advertising platforms. Regardless of what advertisers say, few of them will actually go anywhere else, because where else is there to go? If they are being honest, none of the senators before whom Haugen testified will stop spending millions of dollars on Facebook ads either.
The policies that will require Facebook to reform in big, meaningful ways are those that improve privacy and restrict the use of behavioural information. Facebook’s incentives are aligned with exploiting that data, and the company’s paranoia pushes it to stretch acceptable boundaries. It is long past time to change those incentives.
For the past several days at least, Google search results have not included AMP links on iOS 15, but they still include AMP links on iOS 14. I’ve determined that Safari’s User-Agent makes the difference.
I’ve received a statement from Danny Sullivan, Google’s public search liaison: “It’s a bug specific to iOS 15 that we’re working on. We expect it will be resolved soon.”