Ring’s website states that supplying a home address enables Neighbors to “create a radius around your home” in order to share alerts from “within that radius.” (Users aren’t required to provide accurate information.) As such, users presumably expect that their own posts are, likewise, visible only to the neighbors in whose radii they fall. Ring’s website implies as much: “Conversely, if you share an alert on the [Neighbors app] about a crime or safety issue in your radius,” it says, “your neighbors will also get a notification on their phones and tablets.”
A Ring spokesperson said elsewhere that the company characterizes posts to Neighbors as “public” and allows users to link to specific posts on social media. Gizmodo found that Google has indexed almost 2,000 Ring videos so far. However, it’s unclear whether users understand that posts, including those containing accurate location information, can be easily viewed by anyone, from anywhere on the planet.
The density of cameras that this reporting revealed is shocking to me. For a country as wary of interference in personal freedom as the United States, it is remarkable how many residents of American cities are eager to give Amazon eyes on every block.
It reminds me of the steady drumbeat of reporting from the United Kingdom that it is among the world’s most-surveilled nations. It seems that a handful of reports are released every year about Britain’s big brother problem — but few acknowledge that most of these cameras are privately owned, and are subject to far more permissive privacy laws. In many regions of the world, private surveillance can operate with few rules in broadly public spaces, so long as there is notice that cameras are being used.
In his excellent book on surveillance, Bruce Schneier has pointed out that we would never agree to carry tracking devices and report all our most intimate conversations if the government made us do it.
But under such a scheme, we would enjoy more legal protections than we have now. By letting ourselves be tracked voluntarily, we forfeit all protection against how that information is used.
Those who control the data gain enormous power over those who don’t. The power is not overt, but implicit in the algorithms they write, the queries they run, and the kind of world they feel entitled to build.
When it comes to the ethics of monitoring individuals, is there really a difference whether it’s done by the government or by one of the biggest companies on the planet? I don’t believe there is.
Twitter is funding a small independent team of up to five open source architects, engineers, and designers to develop an open and decentralized standard for social media. The goal is for Twitter to ultimately be a client of this standard.
This isn’t going to happen overnight. It will take many years to develop a sound, scalable, and usable decentralized standard for social media that paves the path to solving the challenges listed above. Our commitment is to fund this work to that point and beyond.
We’re calling this team @bluesky. Our CTO [Parag Agrawal] will be running point to find a lead, who will then hire and direct the rest of the team. Please follow or DM @bluesky if you’re interested in learning more or joining!
If you’re worried about the dominance of certain social media platforms, or if you’re concerned about privacy online, or if you’re uncomfortable with leaving the decisions for how content moderation works in the hands of a few internet company bosses — this is big news and something you should be paying attention to. It won’t change the way the web works overnight. Indeed, it might never have that big of an impact. But it certainly has the potential to be one of the most significant directional shifts for the mainstream internet in decades. Keep watching.
After a closer reading of Jack’s tweets, though, I think my first interpretation wasn’t quite right. Twitter isn’t necessarily interested in decentralizing content or even identity on their platform. Why would they be? Their business is based around having all your tweets in one place.
This “burden on people” is the resources it would take for Twitter to actively combat hate and abuse on their platform. Facebook, for example, has hired thousands of moderators. If Twitter is hoping to outsource curation to shared protocols, it should be in addition to — not a replacement for — the type of effort that Facebook is undertaking. I’ve outlined a better approach in my posts on open gardens and 4 parts to fixing social networks, which don’t seem compatible with Twitter’s current business.
As is the Twitter way, Dorsey does not seem to have fully considered this proposal. On the relatively simple question of whether this would be based on existing standards or whether Twitter would invent an entirely new spec, he said that it was to be determined. As Dorsey acknowledges in the thread, Twitter already has an open API; it could be updated. Why not use that? Mastodon is a decentralized social networking platform — why not adopt its features?
This is a spitball at this stage — barely more than a napkin sketch. There might be something to show for it, sometime, in some capacity, but there are a lot of buzzwords in this announcement and no product. That suggests a high likelihood of vapourware to me.
For the third time in our nation’s history, a President is facing an impeachment vote in the House of Representatives. As with the Mueller Report (and the DOJ IG Report), we at The Bulwark think the public needs to read the articles of impeachment thoroughly, carefully, as citizens — not as lawyers.
The House of Representatives is moving toward a momentous decision about whether to impeach a president for only the third time in U.S. history. The charges brought against President Trump by the House Judiciary Committee on Tuesday are clear: that he abused his office in an attempt to induce Ukraine’s new president to launch politicized investigations that would benefit Mr. Trump’s reelection campaign, and that he willfully obstructed the subsequent congressional investigation.
Because of that unprecedented stonewalling, and because House Democrats have chosen to rush the impeachment process, the inquiry has failed to collect important testimony and documentary evidence that might strengthen the case against the president. Nevertheless, it is our view that more than enough proof exists for the House to impeach Mr. Trump for abuse of power and obstruction of Congress, based on his own actions and the testimony of the 17 present and former administration officials who courageously appeared before the House Intelligence Committee.
I was too young to remember Bill Clinton’s impeachment over lying under oath and obstructing justice, and not yet born when Richard Nixon resigned for his corrupt abuse of power — and also obstructing justice. So it is a momentous occasion for me to witness what is, in my view, an instance where the President of the United States abused his power in an attempt to damage a political rival, and obstructed investigations of these actions. Today will be etched into my brain for as long as I live; I wanted to make sure it was recorded in my public diary, too.
Of course, I am Canadian. American politics has no immediate consequence or direct impact on my life, but the uniquely close relationship of my country with the United States — not to mention the latter’s power and influence over nearly all countries — means that I am not far removed from its effects. This process is important, it is right, and it is a just decision to proceed with curtailing a president who does not value the law.
Earlier today, Apple began accepting orders for the all-new Mac Pro, which will start shipping to customers in 1-2 weeks. Reminiscent of what Apple did when it released the iMac Pro, the new Mac Pro was provided to a very limited set of reviewers with video production experience in advance of pre-orders.
Marques Brownlee shares his impressions after using the Mac Pro and two Pro Display XDRs to edit all of his YouTube videos for the past two weeks. His main takeaways? “One, it’s really quiet. Two, it’s really fast.” So fast, in fact, that he was able to render 8K video in less time than it would take to watch it.
Brownlee’s video is the only one I’ve watched so far, but his first impressions blew me away. I will never own one of these things, but I will live vicariously through those who need nothing but the fastest Mac for their specialized workflows.
Apple is on the cusp of shipping a new Mac Pro — a phrase many of us would not have expected to utter three years ago, on the last Mac Pro’s third birthday without an update. The test will be whether that stagnation happens again. We already know of a few Mac Pro updates that Apple is readying, including Radeon Pro W5700X configuration options and up to 8 TB of storage. They also aren’t yet taking orders for the rack-mounted Mac Pro, which is “coming soon”. After that, though, it would be worrying if the machine once again stagnated for several years. I don’t expect to see a Mac Pro update every year, but I would hope to see gradual improvements — both to keep up with new technology, and to demonstrate Apple’s commitment to the niche customers who depend on the Mac.
I have previously shared with you Balk’s Law (“Everything you hate about The Internet is actually everything you hate about people”) and Balk’s Second Law (“The worst thing is knowing what everyone thinks about anything”). Here I will impart to you Balk’s Third Law: “If you think The Internet is terrible now, just wait a while.” The moment you were just in was as good as it got. The stuff you shake your head about now will seem like fucking Shakespeare in 2016. I like to think of myself as an optimist, but I have a hard time seeing a future where anything gets better. Do you know why? Because everything is terrible and only getting worse. We won’t all be dead in twenty years, but we’ll all wish we were. I used to have hopes that once the Internet got completely unbearable some of the smart people would peel off and start something new, but with each passing day it seems ever less likely. (If anyone peels off to start something new it’s going to be teens, and we know what idiots they are.) No, the Internet is going to keep getting worse and there will be no chance for escape. It’s a massive torrent of sewage blasted at you at all hours and you pay handsomely for the privilege of having a hand-held cannon you carry with you at all times to spray more shit-sludge at yourself whenever you’re bored or anxious. Some of you sleep with it right next to your head in case you wake in the middle of the night and need to deliver another turgid shot to your wide-open mouth.
By 2010, personal blogs were thriving, Tumblr was still in its prime, and meme-makers were revolutionizing with form. Snapchat was created in 2011 and Vine, the beloved six-second video app, was born in 2012. People still spent time posting to forums, reading daily entries on sites like FML, and watching Shiba Inus grow up on 24-hour puppy cams. On February 26, 2015 — a day that now feels like an iconic marker of the decade — millions of people on the internet argued about whether a dress was blue or gold, and watched live video of two llamas on the lam in suburban Arizona. Sites like Gawker, the Awl, Rookie, the Hairpin, and Deadspin still existed. Until they didn’t. One by one, they were destroyed by an increasingly unsustainable media ecosystem built for the wealthy.
There may be an element of rosy retrospection to Chang’s piece, but the thesis of the argument fully summarizes a decade-long shift: the internet really has lost joy — lightness and vibrancy have been upended and replaced by gravitas. A veneer of marketing coats surfaces already soaked in cynicism. It becomes increasingly difficult to choose our own web adventure as it is more frequently dictated by opaque recommendations.
I am an optimistic person; this stuff can be changed. I’ve been writing about the misplaced trust of an advertising-driven techno-utopia for most of this decade, and I’m encouraged by the increasing awareness of its faults. Correcting these problems doesn’t require us to entirely give up on social media, or advertising, or smartphone apps, or any of that stuff. The 2020s should be marked by a conscientious effort to bring the fun back. It will be a slow process, but I think it has already begun.
Larry Page and Sergey Brin spent the first fifteen years of their careers building the greatest information network the world has ever known and the last five trying to escape it. Having made everything visible, they made themselves invisible. Larry has even managed to keep the names of his two kids secret, an act of paternal love that is also, given Google’s mission “to organize the world’s information and make it universally accessible and useful,” an act of corporate treason.
Larry Page’s efforts have made it trivial to find virtually anyone’s contact information, date of birth, siblings, and address, but effectively impossible to find his children’s names. Mark Zuckerberg wants to make everyone in the world closer, except his neighbours.
They’re fully aware of the problems they have played a part in creating, but the business models of the companies they started are dependent on everyone else not figuring that out.
Avast, the multibillion-dollar Czech security company, doesn’t just make money from protecting its 400 million users’ information. It also profits in part because of sales of users’ Web browsing habits and has been doing so since at least 2013.
That’s led to some labelling its tools “spyware,” the very thing Avast is supposed to be protecting users from. Both Mozilla and Opera were concerned enough to remove some Avast tools from their add-on stores earlier this month.
But recently appointed chief executive Ondrej Vlcek tells Forbes there’s no privacy scandal here. All that user information that it sells cannot be traced back to individual users, he asserts.
Here’s how it works, according to Vlcek: Avast users have their Web activity harvested by the company’s browser extensions. But before it lands on Avast servers, the data is stripped of anything that might expose an individual’s identity, such as a name in the URL, as when a Facebook user is logged in. All that data is analysed by Jumpshot, a company that’s 65%-owned by Avast, before being sold on as “insights” to customers. Those customers might be investors or brand managers.
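The scrubbing step Vlcek describes, removing identifying fragments from a URL before it leaves the device, might look something like this simplified sketch. The list of identifying path prefixes and the redaction marker are my assumptions for illustration, not Avast’s actual pipeline:

```python
from urllib.parse import urlsplit, urlunsplit

# Path segments that commonly precede a username or profile handle.
# (Hypothetical list; a real pipeline would be far more elaborate.)
IDENTIFYING_PREFIXES = {"user", "users", "profile", "u", "id"}

def anonymize_url(url: str) -> str:
    """Strip query strings and likely-identifying path segments
    from a URL before it is transmitted (illustrative only)."""
    parts = urlsplit(url)
    cleaned = []
    redact_next = False
    for seg in (s for s in parts.path.split("/") if s):
        if redact_next:
            cleaned.append("<redacted>")
            redact_next = False
        else:
            cleaned.append(seg)
            redact_next = seg.lower() in IDENTIFYING_PREFIXES
    # Drop the query and fragment entirely; they often carry session IDs.
    return urlunsplit((parts.scheme, parts.netloc,
                       "/" + "/".join(cleaned), "", ""))

print(anonymize_url("https://www.facebook.com/profile/jane.doe?ref=nav"))
# https://www.facebook.com/profile/<redacted>
```

Even a scheme like this leaves the full browsing history intact, which is exactly why researchers argue such data can often be re-identified.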
On the marketing webpage for their anti-tracking product, Avast says that VPNs don’t secure your privacy enough because “advertisers can still track you and identify you based on your device and browser settings”. They also say that you’re not anonymous to trackers because your “online habits, along with your device and browser settings make up your unique digital fingerprint, allowing advertisers to identify you from a crowd of visitors”.
For four years during college, I bought and scalped tickets on the side. I didn’t use bots and I wasn’t good at it. I ultimately lost a lot of money. But I did learn quite a lot about the ticket scalping industry. And I learned enough to know that the “anti-scalper” strategies Ticketmaster has deployed in recent years benefit scalpers, not fans.
It is the full-time job of thousands of people in the U.S. and around the world to buy tickets during hectic Ticketmaster onsales and sell them at jacked-up prices. When Ticketmaster tweaks how sales work, scalpers have lots of time and incentive to learn how to optimize for its new systems and to circumvent its anti-scalper tech. By making onsales more complicated, Ticketmaster is hurting average fans who buy tickets using the site only a couple times a year and helping the people who buy tickets every single day, in dozens of different onsales.
I was reminded of this article today as I attempted to buy a couple of tickets to a low-demand show that definitely isn’t seeing mass orders by scalpers. Point of clarity: scumbags though they may be, ticket scalpers do not actually collect human scalps.
I started on my phone, because I was in the kitchen making coffee. I have the Ticketmaster app, but it had logged me out at some point. So I had to go through all of its prompts to pick bands and artists to get emailed about — no, thank you — to switch on push notifications, and all the rest of it. I signed in using my complicated saved password, which had apparently expired, so I had to go through their whole password reset process. Expiring passwords are bullshit.
I switched over to my Mac and tried in Safari, Chrome — the browser for people who don’t give a shit about their privacy — and I even brushed the dust off my copy of Firefox. I got the same error in all of them. I tried again on my phone using LTE, and had the same problem. Their website is apparently so secure that I simply cannot use it to buy tickets; last year, however, a Canadian investigation found that Ticketmaster was complicit in scalping. Live Nation Entertainment — the parent company of both Ticketmaster and Live Nation, which were somehow permitted to merge in 2010 — has exclusive contracts with some of the biggest venues in North America, too, so they’re impossible to avoid.
So I guess I’ll try buying tickets in person, at a booth, the way my ancestors once did.
The feature “rollout” is a staple of tech launches. A feature technically goes live, but when it will actually reach all users is left vague. Dashboards tabulating screen time rolled out last year, making their way to users over the course of weeks. Instagram’s anti-bullying tools rolled out a couple of months ago. A year ago, a feature to unsend messages in Messenger went live … in Bolivia, Colombia, Lithuania, and Poland, until eventually making its way to everyone else. This rollout tactic gives major tech platforms a way to create the illusion that they are for everyone. Tech companies get outlets to write up press releases about features going live, even if the features are not, in many cases, actually live.
A cautionary approach to rolling out new features by testing and refining them in smaller markets is not a problem. The problem is that these features are often announced in press releases and news stories as though they are widely available when they aren’t. In the New York Times’ coverage of Facebook’s new tool to control data collection across the web, it isn’t mentioned until the very last paragraph that it is only available in Ireland, South Korea, and Spain, with no timeline for U.S. or worldwide access. There’s no sign that Facebook is restricting the feature to these markets for licensing, translation, or legal reasons; it is a strategic decision to test how it works for users, and how much it impacts the company’s data gathering. Reporters should reserve praise and more accurately describe these soft launches for what they are: tests in specific markets.
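Mechanically, a staged rollout of the kind described above is often little more than a market allowlist combined with deterministic per-user bucketing. Everything in this sketch (the feature name, markets, and percentage) is hypothetical, not any company’s actual implementation:

```python
import hashlib

# Hypothetical rollout configuration: which markets see a feature,
# and what share of users within those markets.
ROLLOUT = {
    "clear_history_tool": {"markets": {"IE", "KR", "ES"}, "percent": 100},
}

def bucket(user_id: str) -> int:
    """Deterministically map a user to a 0-99 bucket, so the same
    user always gets the same answer across sessions."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest, 16) % 100

def feature_enabled(feature: str, user_id: str, market: str) -> bool:
    cfg = ROLLOUT.get(feature)
    if cfg is None or market not in cfg["markets"]:
        return False
    return bucket(user_id) < cfg["percent"]

print(feature_enabled("clear_history_tool", "u123", "IE"))  # True
print(feature_enabled("clear_history_tool", "u123", "US"))  # False
```

The point is that the gate is entirely a business decision: flipping a market into the allowlist is trivial, which is why “rolling out” can mean anything from days to never.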
The policy explains users can disable all location services entirely with one swipe (by navigating to Settings > Privacy > Location Services, then switching “Location Services” to “off”). When one does this, the location services indicator — a small diagonal upward arrow to the left of the battery icon — no longer appears unless Location Services is re-enabled.
The policy continues: “You can also disable location-based system services by tapping on System Services and turning off each location-based system service.” But apparently there are some system services on this model (and possibly other iPhone 11 models) which request location data and cannot be disabled by users without completely turning off location services, as the arrow icon still appears periodically even after individually disabling all system services that use location.
“Ultra wideband technology is an industry standard technology and is subject to international regulatory requirements that require it to be turned off in certain locations,” an Apple spokesperson told TechCrunch. “iOS uses Location Services to help determine if an iPhone is in these prohibited locations in order to disable ultra wideband and comply with regulations.”
“The management of ultra wideband compliance and its use of location data is done entirely on the device and Apple is not collecting user location data,” the spokesperson said.
That seems to back up what experts have discerned so far. Will Strafach, chief executive at Guardian Firewall and iOS security expert, said in a tweet that his analysis showed there was “no evidence” that any location data is sent to a remote server.
Apple said it will provide a new dedicated toggle option for the feature in an upcoming iOS update.
This makes complete sense to me and appears to be nothing more than a mistake in not providing a toggle specifically for UWB. It seems that a risk of marketing a company as uniquely privacy-friendly is that any slip-up is magnified a hundredfold and treated as evidence that every tech company is basically the same.
One of the more noticeable changes in recent iOS releases is just how many of them there are. There were ten versions each of iOS 6 and 7, but there were sixteen versions of iOS 11, and fifteen of iOS 12.
iOS 13 has distinguished itself by racing to an x.2 version number faster than any other iOS release family — on October 28 — and has received two further version increments since. This rapid-fire pace of updates has been noticeable, to say the least, and helps illustrate a shift in the way iOS releases are handled.
Which brings me to a confession: I’ve slightly misled you. Merely counting the number of software updates isn’t necessarily a fair way of assessing how rapidly each version changes. For example, while both iOS 6 and 7 had ten versions each, they were clustered in low version numbers. iOS 6 had three 6.0 releases and, oddly, a whole bunch under 6.1; iOS 7’s were the reverse.1
In fact, it used to be the case that iOS rarely breached the x.2 release cycle at all. The first version to get to an x.3 release was iOS 4, but that was also the year that the company merged iPhone and iPad versions in 4.2. You have to skip all the way to iOS 8 to find another x.3 release; after that, though, every version of iOS has gotten to x.3, and iOS 8, 10, and 11 have each seen a series of x.4 releases as well.
iOS 13 is currently at 13.2.3; the developer beta is at 13.3, and 13.4 is being tested internally. Excluding the beta seeds, there have already been eight versions of iOS 13 released so far, and it has been available to the general public for less than three months.
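Tallying releases by their major.minor tier, the comparison made above, takes only a few lines of Python. The release lists below are hand-assembled for illustration, not an official Apple changelog; 13.2.1 is excluded because it was, as I understand it, a HomePod-only build:

```python
from collections import Counter

# Hand-assembled public release numbers (illustrative).
RELEASES = {
    "iOS 6": ["6.0", "6.0.1", "6.0.2", "6.1", "6.1.1", "6.1.2",
              "6.1.3", "6.1.4", "6.1.5", "6.1.6"],
    "iOS 13": ["13.0", "13.1", "13.1.1", "13.1.2", "13.1.3",
               "13.2", "13.2.2", "13.2.3"],
}

def minor_tier(version: str) -> str:
    """Group a version string by its major.minor prefix: '13.2.3' -> '13.2'."""
    major, minor = version.split(".")[:2]
    return f"{major}.{minor}"

for family, versions in RELEASES.items():
    tiers = Counter(minor_tier(v) for v in versions)
    print(family, len(versions), dict(tiers))
```

Run against these lists, the tally shows what the raw counts hide: iOS 6’s ten releases were clustered under 6.0 and 6.1, while iOS 13’s eight releases are already spread across three tiers.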
And, again, just counting versions understates the impact of their contents. In addition to myriad bug fixes, iOS 13’s updates have introduced or reintroduced features that were announced at WWDC, but which did not appear in the gold master of 13.0. A similar pattern occurred with iOS 11 and 12: Apple announced, demoed, and often even released into beta features that were ultimately pulled from the x.0 version, before reappearing in a later update.
This indicates a shift in Apple’s product release strategy — not just from monumental updates to iterative ones, but also from just-in-time feature announcements to early previews. At WWDC, the iOS announcement was implied to be an indication of everything that would be available in the x.0 release; now, it’s a peek at everything that will be available across the entire release cycle.
I do not think that this is inherently problematic, or even concerning. But, so far, it does not seem to be a deliberate strategy. From the outside, it feels far more like an accidental result of announcing features too early — a predictable consequence of which is that announcements may have to be walked back. There are plenty of examples of this in Apple’s history, as well as in the tech market and other industries as a whole. Recall that Apple’s push notification service was announced for an iPhone OS 2 release, but was pushed back to iPhone OS 3 due to scalability concerns. So this is not a new problem, but it has become a more frequent one lately, as features are increasingly deferred to later software updates.
I would rather features be stable; I do not think there is any reason that Apple should rush to release something before it’s ready. But I do wish this new strategy came across as a deliberate choice rather than what I perceive to be a lack of internal coordination.
I’ve experienced the tedium of plotting the iOS version release history as a spreadsheet so you don’t have to. ↩︎
Apple CarPlay in BMW vehicles is finally going to be free. Hallelujah! Autocar reported that BMW is eliminating the subscription charge for folks in the U.K. earlier today, and we just received confirmation from BMW that the change applies to U.S. BMW owners as well.
A BMW spokesperson told us that they “can confirm that this change does also apply to the U.S. market.” When we asked why the sudden change of heart, the same spokesperson sent us this statement: “BMW is always looking to satisfy our customers’ needs and this policy change is intended to provide BMW owners with a better ownership experience.”
Then it was time for opening statements. Taylor Wilson, a partner at L. Lin Wood and a lawyer for the plaintiff, put up a chart I couldn’t see with a lot of dates on it. (The chart was aimed at the jury and would continue to obscure my view all day.) He then walked through the dates of the basic action around the tweets with the energy of a nervous middle schooler doing a monologue at the school play. Not only did Musk call Unsworth a “pedo guy,” Wilson pointed out, when Kevin Beaumont sarcastically called the tweet “classy,” Musk replied “bet you a signed dollar it’s true.” (The “signed dollar” tweet has also been deleted.)
Musk apologized on July 17, but that wasn’t the end of it. Wilson rather irritably told the court that despite the apology, Musk did not retract his “worldwide accusation on Twitter” that Unsworth was a pedophile. Wilson then told the court that Musk’s family office retained a PI to look into Unsworth and on August 28th, instructed the investigator to leak negative information to the press. (It would later emerge that the PI was, in fact, a con man.)
Musk is not coming across particularly well — which is not surprising for someone who broadcast an insinuation, without any shred of evidence, that a barely-public person was a pedophile. I still cannot understand why he didn’t settle and retract his claims. Arrogance, perhaps.
You will thank your comment-blocking browser extension when reading this and seemingly every article reporting on the trial, as it prevents you from enduring a toxic wasteland of moronic pseudo-legal arguments and Musk worship. Lopatto’s piece, on the other hand, is terrific.
Today, in 2019, if the company was a person, it would be a young adult of 21 and it would be time to leave the roost. While it has been a tremendous privilege to be deeply involved in the day-to-day management of the company for so long, we believe it’s time to assume the role of proud parents — offering advice and love, but not daily nagging!
With Alphabet now well-established, and Google and the Other Bets operating effectively as independent companies, it’s the natural time to simplify our management structure. We’ve never been ones to hold on to management roles when we think there’s a better way to run the company. And Alphabet and Google no longer need two CEOs and a President. Going forward, Sundar will be the CEO of both Google and Alphabet. He will be the executive responsible and accountable for leading Google, and managing Alphabet’s investment in our portfolio of Other Bets. We are deeply committed to Google and Alphabet for the long term, and will remain actively involved as Board members, shareholders and co-founders. In addition, we plan to continue talking with Sundar regularly, especially on topics we’re passionate about!
This seems like huge news — and I suppose it inherently is a big deal for co-founders to step back from their company — but it does not mean that Brin and Page won’t be involved in Alphabet’s direction. This announcement contains nothing about the co-founders’ holding of unique shares that give them extraordinary control over the company. It also doesn’t clarify why the Alphabet holding company was created, what purpose it serves now, and why it needs to be distinct from Google.
The first time I tried to publish new images to Flickr, Lightroom aborted and the OS put up a dialog warning me that the app “magick” isn’t signed and so it might be dangerous, so the OS wouldn’t let it launch. “magick” is part of the ImageMagick graphics tool suite, a commonly used set of image manipulation tools; as of today the developers haven’t signed it with a developer certificate from Apple, so Apple’s Gatekeeper will reject it.
You can tell the OS to let the app run, but it’s not obvious where to do that. Here’s how:
Try to export some images and get the warning dialog. Then open up the System Preferences app and navigate to the “Security and Privacy” section and the “General” tab. At the bottom of that tab, you should see some text similar to the warning you got in the dialog. There’s an “Allow” button there. If you click it, you’re approving that app as something that’s okay to be launched.
When launching an app directly, the workaround is easier: you can Control-click and choose Open from the contextual menu.
In both cases, why doesn’t the alert tell you how to resolve the problem (if you do, in fact, trust the software)? In my view, this is poor design and essentially security through obscurity. Apple decided that they don’t want you to run unsigned software, but they don’t want to (or realistically can’t) completely forbid it, so they provide an escape hatch but keep it hidden. macOS doesn’t trust the user to make the right decision, so it acts as though there’s no choice.
The solution to these errors reminds me a little of the de facto standard for burying rarely-toggled options in hidden preferences set via the command line. It’s a pretty clever trick. But the dialog provides no indication that this is possible; it treats unsigned apps as inherently dangerous, not just a risk for the user to take. I know about the secondary-click-to-open trick, but I always forget it when I launch an unsigned app and get spooked before remembering how to proceed.
Perhaps this is the intention, but it makes security far too visible to the user and makes solutions far too opaque. The dialog is unhelpful for average users, and irksome for more technically-capable users. It’s not striking a good balance.
Descriptive error messages are useful; silent failures, misleading dialogs, and vague errors are not.
Russian President Vladimir Putin on Monday signed legislation requiring all smartphones, computers and smart TV sets sold in the country to come pre-installed with Russian software.
The country’s mobile phone market is dominated by foreign companies including Apple, Samsung and Huawei. The legislation signed by Putin said the government would come up with a list of Russian applications that would need to be installed on the different devices.
According to an official with knowledge of the matter, it was said in informal conversations over the summer that the main target of the bill is Apple, which the law is meant to oblige to install Russian applications on iPhones and iPads. But iOS, the operating system Apple uses, does not allow third-party applications to be preinstalled at all.
At one of the meetings, Apple representatives warned that the introduction of such standards would force the company to “revise its business model in Russia,” Vedomosti wrote over the summer. As of September, the company’s position had not changed, the official said. “The company took this position: we will show you the middle finger; your market is a very small segment of our business, and its loss is insignificant,” he says. Perhaps the authors of the bill were inspired by the example of China, which no company left after it adopted similar rules, The Bell’s interlocutor admits. But Russia is not China, and there are no levers of pressure on Apple, he states.
I’m not sure what Chinese law the writers are referring to. The only laws restricting smartphone apps that I can find being passed by China include one that prohibits preinstalled apps that invade users’ privacy without permission — presumably, this does not cover government-monitored services — and one that requires that preinstalled apps be removable. I cannot find a record of a Chinese law that requires the installation of software on devices sold in the country.
This Russian law really is something else. While I could see a situation in which certain apps aren’t available in Russia, I cannot imagine that Apple would sell iPhones specially customized in accordance with the Russian government’s wishes. That’s an indefensible precedent. Russia’s internet policy goals are increasingly distant from the rest of the world. If isolation is what they wish for, the rest of us should not be dragged along.
Now, when you want to share a photo, you no longer have to create an entire album. You can send a one-off message containing a photo to a friend, so long as they also have Google Photos installed, just as you would on Instagram, Snapchat, SMS, or any other chat app. If you want to turn that thread into a conversation, both of you can chat, react to the photos with likes, and share more. That way, the photos become a starting point for a conversation, much in the way photos have become just another form of communication on social platforms.
Since Google Photos is now, effectively, a standalone messaging app in addition to being a place for your photo library, it brings the total count of apps made by the company which have some sort of chat functionality up to six.
Lesley Stahl of CBS’ 60 Minutes interviewed Susan Wojcicki about the state of YouTube:
And what about medical quackery on the site? Like turmeric can reverse cancer; bleach cures autism; vaccines cause autism.
Once you watch one of these, YouTube’s algorithms might recommend you watch similar content. But no matter how harmful or untruthful, YouTube can’t be held liable for any content, due to a legal protection called Section 230.
Lesley Stahl The law under 230 does not hold you responsible for user-generated content. But in that you recommend things, sometimes 1,000 times, sometimes 5,000 times, shouldn’t you be held responsible for that material, because you recommend it?
Susan Wojcicki Well, our systems wouldn’t work without recommending. And so if—
Lesley Stahl I’m not saying don’t recommend. I’m just saying be responsible for when you recommend so many times.
Susan Wojcicki If we were held liable for every single piece of content that we recommended, we would have to review it. That would mean there’d be a much smaller set of information that people would be finding. Much, much smaller.
I entirely buy the near-impossibility of moderating a platform where hundreds of hours of video are uploaded every minute. It seems plausible that uploads could be held for an initial machine review, with a human-assisted second stage, particularly for new accounts, but that is nitpicking at YouTube’s scale. It would not be preferable to hold YouTube legally accountable for the videos users upload.
However, I do not buy for one second that YouTube should escape moral accountability for the videos it recommends. The process and intent of recommendations are entirely in YouTube’s hands, and the company can adjust them as it chooses. Watching a video from a reputable newspaper should not surface a video from a hate group in the “Up Next” queue. Conspiracy theories should not be the first search result, for example; they should be far harder to find. YouTube clearly agrees, and has been making changes as a result. But it isn’t enough. It’s misleading to paint uploads and recommendations with the same brush, and it is worrying that a lack of legal obligations is used to justify moral inaction.