The feature “rollout” is a staple of tech launches. A feature technically goes live, but when it will actually reach all users is left vague. Dashboards tabulating screen time rolled out last year, making their way to users over the course of weeks. Instagram’s anti-bullying tools rolled out a couple of months ago. A year ago, a feature to unsend messages in Messenger went live … in Bolivia, Colombia, Lithuania, and Poland, until eventually making its way to everyone else. This rollout tactic gives major tech platforms a way to create the illusion that they are for everyone. Tech companies get outlets to write up press releases about features going live, even if the features are not, in many cases, actually live.
A cautionary approach to rolling out new features by testing and refining them in smaller markets is not a problem. The problem is that these features are often announced in press releases and news stories as though they are widely available when they aren’t. In the New York Times’ coverage of Facebook’s new tool to control data collection across the web, it isn’t mentioned until the very last paragraph that it is only available in Ireland, South Korea, and Spain, with no timeline for U.S. or worldwide access. There’s no sign that Facebook is restricting the feature to these markets for licensing, translation, or legal reasons; it is a strategic decision to test how it works for users, and how much it impacts the company’s data gathering. Reporters should reserve praise and more accurately describe these soft launches for what they are: tests in specific markets.
The policy explains users can disable all location services entirely with a single toggle (by navigating to Settings > Privacy > Location Services, then switching “Location Services” to “off”). When one does this, the location services indicator — a small diagonal upward arrow to the left of the battery icon — no longer appears unless Location Services is re-enabled.
The policy continues: “You can also disable location-based system services by tapping on System Services and turning off each location-based system service.” But apparently there are some system services on this model (and possibly other iPhone 11 models) which request location data and cannot be disabled by users without completely turning off location services, as the arrow icon still appears periodically even after individually disabling all system services that use location.
“Ultra wideband technology is an industry standard technology and is subject to international regulatory requirements that require it to be turned off in certain locations,” an Apple spokesperson told TechCrunch. “iOS uses Location Services to help determine if an iPhone is in these prohibited locations in order to disable ultra wideband and comply with regulations.”
“The management of ultra wideband compliance and its use of location data is done entirely on the device and Apple is not collecting user location data,” the spokesperson said.
That seems to back up what experts have discerned so far. Will Strafach, chief executive at Guardian Firewall and iOS security expert, said in a tweet that his analysis showed there was “no evidence” that any location data is sent to a remote server.
Apple said it will provide a new dedicated toggle option for the feature in an upcoming iOS update.
This makes complete sense to me and appears to be nothing more than a mistake in not providing a toggle specifically for UWB. It seems that a risk of marketing a company as uniquely privacy-friendly is that any slip-up is magnified a hundredfold and treated as evidence that every tech company is basically the same.
One of the more noticeable changes in recent iOS releases is just how many of them there are. There were ten versions each of iOS 6 and 7, but there were sixteen versions of iOS 11, and fifteen of iOS 12.
iOS 13 has distinguished itself by racing to an x.2 version number faster than any other iOS release family — on October 28 — and has received two further version increments since. This rapid-fire pace of updates has been hard to miss, and it helps illustrate a shift in the way iOS releases are handled.
Which brings me to a confession: I’ve slightly misled you. Merely counting the number of software updates isn’t necessarily a fair way of assessing how rapidly each version changes. For example, while both iOS 6 and 7 had ten versions each, they were clustered in low version numbers. iOS 6 had three 6.0 releases and, oddly, a whole bunch under 6.1; iOS 7’s were the reverse.1
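The clustering in that comparison is simple enough to reproduce. Here is a minimal sketch in Python — the release lists are my own tabulation from public version histories, so treat them as illustrative rather than canonical:

```python
from collections import Counter

# Point releases as publicly documented, including device-specific ones:
releases = {
    "iOS 6": ["6.0", "6.0.1", "6.0.2", "6.1", "6.1.1", "6.1.2",
              "6.1.3", "6.1.4", "6.1.5", "6.1.6"],
    "iOS 7": ["7.0", "7.0.1", "7.0.2", "7.0.3", "7.0.4", "7.0.5",
              "7.0.6", "7.1", "7.1.1", "7.1.2"],
}

def minor_series(versions):
    # Collapse each version to its x.y prefix ("6.0.1" -> "6.0") and count.
    return Counter(".".join(v.split(".")[:2]) for v in versions)

for name, versions in releases.items():
    print(name, dict(minor_series(versions)))
```

Run against those lists, the mirror-image clustering pops right out: iOS 6 has three releases under 6.0 and seven under 6.1, while iOS 7 has seven under 7.0 and three under 7.1.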
In fact, it used to be the case that iOS rarely breached the x.2 release cycle at all. The first version to get to an x.3 release was iOS 4, but that was also the year that the company merged iPhone and iPad versions in 4.2. You have to skip all the way to iOS 8 to find another x.3 release; after that, though, every version of iOS has gotten to x.3, and iOS 8, 10, and 11 have each seen a series of x.4 releases as well.
iOS 13 is currently at 13.2.3; the developer beta is at 13.3, and 13.4 is being tested internally. Excluding the beta seeds, there have already been eight versions of iOS 13 released so far, and it has been available to the general public for less than three months.
And, again, just looking at the number of versions belies the impact of their contents. In addition to myriad bug fixes, iOS 13’s updates have introduced or reintroduced features that were announced at WWDC, but which did not appear in the gold master of 13.0. A similar pattern occurred with iOS 11 and 12: Apple announced, demoed, and often even released into beta features that were ultimately pulled from the x.0 version, before reappearing in a later update.
This indicates a shift in Apple’s product release strategy — not just from monumental updates to iterative ones, but also from just-in-time feature announcements to early previews. At WWDC, the iOS announcement was implied to be an indication of everything that would be available in the x.0 release; now, it’s a peek at everything that will be available across the entire release cycle.
I do not think that this is inherently problematic, or even concerning. But, so far, it does not seem to be a deliberate strategy. From the outside, it feels far more like an accidental result of announcing features too early — a predictable consequence of which is that announcements may have to be walked back. There are plenty of examples of this in Apple’s history, as well as in the tech market and other industries as a whole. You may recall, for instance, that Apple’s push notification service was announced for an iPhone OS 2 release, but was pushed back to iPhone OS 3 due to scalability concerns. So, this is not a new problem, but it is a more frequent one lately, as features are increasingly deferred to later software updates.
I would rather features be stable; I do not think there is any reason that Apple should rush to release something before it’s ready. But I do wish this new strategy came across as a deliberate choice rather than what I perceive to be a lack of internal coordination.
I’ve experienced the tedium of plotting the iOS version release history as a spreadsheet so you don’t have to. ↩︎
Apple CarPlay in BMW vehicles is finally going to be free. Hallelujah! Earlier today, Autocar reported that BMW is eliminating the subscription charge for folks in the U.K., and we just received confirmation from BMW that the change applies to U.S. BMW owners as well.
A BMW spokesperson told us that they “can confirm that this change does also apply to the U.S. market.” When we asked why the sudden change of heart, the same spokesperson sent us this statement: “BMW is always looking to satisfy our customers’ needs and this policy change is intended to provide BMW owners with a better ownership experience.”
Then it was time for opening statements. Taylor Wilson, a partner at L. Lin Wood and a lawyer for the plaintiff, put up a chart I couldn’t see with a lot of dates on it. (The chart was aimed at the jury and would continue to obscure my view all day.) He then walked through the dates of the basic action around the tweets with the energy of a nervous middle schooler doing a monologue at the school play. Not only did Musk call Unsworth a “pedo guy,” Wilson pointed out, when Kevin Beaumont sarcastically called the tweet “classy,” Musk replied “bet you a signed dollar it’s true.” (The “signed dollar” tweet has also been deleted.)
Musk apologized on July 17, but that wasn’t the end of it. Wilson rather irritably told the court that despite the apology, Musk did not retract his “worldwide accusation on Twitter” that Unsworth was a pedophile. Wilson then told the court that Musk’s family office retained a PI to look into Unsworth and on August 28th, instructed the investigator to leak negative information to the press. (It would later emerge that the PI was, in fact, a con man.)
Musk is not coming across particularly well — which is not surprising for someone who broadcast an insinuation, without any shred of evidence, that a barely-public person was a pedophile. I still cannot understand why he didn’t settle and retract his claims. Arrogance, perhaps.
You will thank your comment-blocking browser extension when reading this and seemingly all articles reporting on the trial, as it prevents you from enduring a toxic wasteland of moronic pseudo-legal arguments and Musk worship. Lopatto’s piece, on the other hand, is terrific.
Today, in 2019, if the company was a person, it would be a young adult of 21 and it would be time to leave the roost. While it has been a tremendous privilege to be deeply involved in the day-to-day management of the company for so long, we believe it’s time to assume the role of proud parents — offering advice and love, but not daily nagging!
With Alphabet now well-established, and Google and the Other Bets operating effectively as independent companies, it’s the natural time to simplify our management structure. We’ve never been ones to hold on to management roles when we think there’s a better way to run the company. And Alphabet and Google no longer need two CEOs and a President. Going forward, Sundar will be the CEO of both Google and Alphabet. He will be the executive responsible and accountable for leading Google, and managing Alphabet’s investment in our portfolio of Other Bets. We are deeply committed to Google and Alphabet for the long term, and will remain actively involved as Board members, shareholders and co-founders. In addition, we plan to continue talking with Sundar regularly, especially on topics we’re passionate about!
This seems like huge news — and I suppose it inherently is a big deal for co-founders to step back from their company — but it does not mean that Brin and Page won’t be involved in Alphabet’s direction. This announcement says nothing about the special class of shares the co-founders hold, which gives them extraordinary control over the company. It also doesn’t clarify why the Alphabet holding company was created, what purpose it serves now, and why it needs to be distinct from Google.
The first time I tried to publish new images to Flickr, Lightroom aborted and the OS put up a dialog warning me that the app “magick” isn’t signed and so it might be dangerous, so the OS wouldn’t let it launch. “magick” is part of the ImageMagick graphics tool suite, a commonly used set of image manipulation tools; as of today the developers haven’t signed it with a developer certificate from Apple, so Apple’s Gatekeeper will reject it.
You can tell the OS to let the app run, but it’s not obvious where to do that. Here’s how:
Try to export some images and get the warning dialog. Then open up the System Preferences app and navigate to the “Security and Privacy” section and the “General” tab. At the bottom of that tab, you should see some text similar to the warning you got in the dialog. There’s an “Allow” button there. If you click it, you’re approving that app as something that’s okay to be launched.
When launching an app directly, the workaround is easier: you can Control-click and choose Open from the contextual menu.
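There is also a Terminal route, for those so inclined — nothing in the dialog hints at it, and the path to magick below is an assumption that may differ on your system. Gatekeeper only inspects files carrying the quarantine extended attribute, so clearing that attribute lets the unsigned binary run:

```shell
# Check whether the binary is signed at all (path is an assumption):
codesign -dv /usr/local/bin/magick

# List extended attributes; look for com.apple.quarantine:
xattr /usr/local/bin/magick

# Remove the quarantine flag so Gatekeeper stops blocking the binary:
xattr -d com.apple.quarantine /usr/local/bin/magick
```

This achieves the same result as the System Preferences route, just without the dialog dance — though it is, fittingly, exactly the sort of hidden command-line escape hatch an average user would never find.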
In both cases, why doesn’t the alert tell you how to resolve the problem (if you do, in fact, trust the software)? In my view, this is poor design and essentially security through obscurity. Apple decided that they don’t want you to run unsigned software, but they don’t want to (or realistically can’t) completely forbid it, so they provide an escape hatch but keep it hidden. macOS doesn’t trust the user to make the right decision, so it acts as though there’s no choice.
The solution to these errors reminds me a little of the de facto standard for burying rarely-toggled options in hidden preferences set via the command line. It’s a pretty clever trick. But the dialog provides no indication that this is possible; it treats unsigned apps as inherently dangerous, not just a risk for the user to take. I know about the secondary-click-to-open trick, but I always forget it when I launch an unsigned app and get spooked before remembering how to proceed.
Perhaps this is the intention, but it makes security far too visible to the user and makes solutions far too opaque. The dialog is unhelpful for average users, and irksome for more technically-capable users. It’s not striking a good balance.
Descriptive error messages are useful; silent failures, misleading dialogs, and vague errors are not.
Russian President Vladimir Putin on Monday signed legislation requiring all smartphones, computers and smart TV sets sold in the country to come pre-installed with Russian software.
The country’s mobile phone market is dominated by foreign companies including Apple, Samsung and Huawei. The legislation signed by Putin said the government would come up with a list of Russian applications that would need to be installed on the different devices.
According to an official in the relevant ministry, informal conversations over the summer suggested that the main target of the bill is Apple, which the law would oblige to install Russian applications on iPhones and iPads. But the iOS operating system that Apple uses does not support preinstalling third-party applications at all.
At one of the meetings, Apple representatives warned that the introduction of such standards would force the company to “revise its business model in Russia,” Vedomosti wrote in the summer. As of September, the company’s position has not changed, the official said. “The company then took this position: we will show you the middle finger, your market is a very small segment of our business, its loss is insignificant,” he says. Perhaps the authors of the bill were inspired by the example of China, which no company left after it adopted similar rules, The Bell’s source admits. But Russia is not China, and it has no levers of pressure against Apple, he states.
I’m not sure what Chinese law the writers are referring to. The only laws restricting smartphone apps that I can find being passed by China include one that prohibits preinstalled apps that invade users’ privacy without permission — presumably, this does not include government-monitored services — and one that requires the ability to remove preinstalled apps. I cannot find a record of a Chinese law that requires the installation of software on devices sold in the country.
This Russian law really is something else. While I could see a situation in which certain apps aren’t available in Russia, I cannot imagine that Apple would sell iPhones specially customized in accordance with the Russian government’s wishes. That’s an indefensible precedent. Russia’s internet policy goals are increasingly distant from the rest of the world. If isolation is what they wish for, the rest of us should not be dragged along.
Now, when you want to share a photo, you no longer have to create an entire album. You can send a one-off message to a friend, so long as they also have Google Photos installed, that contains a photo, just as you would on Instagram, Snapchat, SMS, or any other chat app. If you want to turn that thread into a conversation, you can start chatting, react to the photos with likes, and share more. That way, the photos become a starting point for a conversation, much in the way photos have become just another form of communicating on social platforms.
Since Google Photos is now, effectively, a standalone messaging app in addition to being a place for your photo library, it brings the total count of apps made by the company which have some sort of chat functionality up to six.
Lesley Stahl of CBS’ 60 Minutes interviewed Susan Wojcicki about the state of YouTube:
And what about medical quackery on the site? Like turmeric can reverse cancer; bleach cures autism; vaccines cause autism.
Once you watch one of these, YouTube’s algorithms might recommend you watch similar content. But no matter how harmful or untruthful, YouTube can’t be held liable for any content, due to a legal protection called Section 230.
Lesley Stahl The law under 230 does not hold you responsible for user-generated content. But in that you recommend things, sometimes 1,000 times, sometimes 5,000 times, shouldn’t you be held responsible for that material, because you recommend it?
Susan Wojcicki Well, our systems wouldn’t work without recommending. And so if—
Lesley Stahl I’m not saying don’t recommend. I’m just saying be responsible for when you recommend so many times.
Susan Wojcicki If we were held liable for every single piece of content that we recommended, we would have to review it. That would mean there’d be a much smaller set of information that people would be finding. Much, much smaller.
I entirely buy the near-impossibility of moderating a platform where hundreds of hours of video are uploaded every minute. It seems plausible that uploads could be held for initial machine review, with a human-assisted second stage — particularly for new accounts — but that’s kind of nitpicking at YouTube’s scale. It would not be preferable to hold YouTube legally accountable for the videos users upload.
However, I do not buy for one second that YouTube should not be held morally accountable for the videos it recommends. The process and intention of recommendations is entirely in YouTube’s hands, and they can adjust it as they choose. Watching a video from a reputable newspaper should not suggest a video from a hate group in its “Up Next” feature. Conspiracy theories should not be the first search result, for example; they should be far harder to find. YouTube clearly agrees, and has been making changes as a result. But it isn’t enough. It’s misleading to paint uploads and recommendations with the same brush, and it is worrying that a lack of legal obligations is used to justify moral inaction.
“I did expect some people to be unhappy with the decision, I expected some pushback,” he told The Register, adding: “But the level of pushback has been very strong.”
He was aware, he says, that people would not like two key aspects of the decision: the move from a non-profit model to a for-profit one; and the lack of consultation. […]
Translation: “I, Andrew Sullivan, thought I could get away with exploiting charities and not-for-profits so long as I did so quietly. However, this plan has backfired spectacularly because it turns out that people actually pay attention to this stuff. Who knew?”
[…] He had explanations ready for both: “The registry business is still a business, and this represented a really big opportunity, and one that is good for PIR [Public Interest Registry].”
As for the lack of consultation: “We didn’t go looking for this. If we had done that [consulted publicly about the sale .org], the opportunity would have been lost. If we had done it in public, it would have created a lot of uncertainty without any benefit.”
Translation: “If we had told people about this before the sale, it would have meant answering awkward questions that I very much wish to avoid — then and now.”
Just why is ISOC approving this deal, going back on nearly two decades of non-profit stewardship and infuriating many of its ardent supporters? Is it just money?
Yes and no.
“The lump sum is definitely a benefit,” he admits, before arguing passionately about ISOC’s core missions. “The work ISOC does is focused on policies and connecting the unconnected. There is already a community organisation that covers domain names – and that’s ICANN.”
There are four main issues libraries are having when it comes to accessing ebooks and e-audiobooks, she said. The first is cost: ebooks or e-audiobooks can cost up to five times the price of a print copy for a library, she said.
The second issue is the rise in metered access or expiry dates for ebook licenses. More and more, publishers are making ebook licenses expire after two years, or after a certain number of uses.
Third, some e-audiobooks just aren’t available to libraries at all. That’s because companies like Audible have exclusivity rights on certain titles, blocking libraries from accessing them.
And of course, there’s the recent change by Macmillan, a new type of restriction.
Can you imagine the hysterical reaction if someone had suggested the creation of public libraries today. ‘For free? How are you going to pay for that, STALIN?’
This is not a unique observation in the world of tweet-based observations, but it has remained a nagging thought in the back of my head for years. Libraries have nimbly adapted as they continue to serve community needs, in spite of ridiculous doubts about their continued relevance and twenty-first century roadblocks like those reported above. Libraries deserve ongoing support for the greater good; DRM and other gatekeepers to learning are antithetical to their mission and role.
This Crimea situation is a real shitshow. And so is Apple’s response to it.
Last night, I oversimplified my reaction to Apple’s compliance with Russia’s requirement that maps display Crimea as Russian territory when those maps are viewed in Russia. There’s some subtlety that I neglected to dive into that doesn’t change my objection to Apple’s acquiescence, but helps provide some clarity on why it is objectionable.
The first thing to know is that Apple is not unique in how it recognizes Crimea and disputed territory elsewhere. Google has a similar policy, even saying to Tass, a Russian news agency, that they “fixed a bug” that indicated Crimea was Ukrainian territory. This is similar to the obviously misleading language used by Russia to describe Apple’s change yesterday. Here WeGo — originally developed by Nokia before being spun off as its own company — shows Crimean addresses as Russian when browsing from within Russia, and Ukrainian when browsing elsewhere.
But other mapping software still retains Ukraine’s territorial claim over Crimea, even when browsing using a Russian proxy, including Microsoft’s Bing Maps. OpenStreetMap — used by Facebook, Foursquare, and others — seems to take a middle-ground approach with Crimean addresses shown as being within Ukraine, but with a border around the entire peninsula as though it’s its own country.
This is also a situation that is not entirely unique to Ukraine, Russia, and Crimea. Maps with countries and cities and borders are inherently political — it’s right there in the name — and there are dozens of disputes over borders and sovereignty all around the world. The display of this disputed land is handled differently depending on mapping software and region but, due to the nature of things that are location dependent, this is devilishly difficult to test. I am still not entirely confident in what I found. For example, the region of Kashmir displays in Apple Maps and Google Maps on my iPad as disputed territories; but, if I use Google Maps on the web and switch its region to India, it becomes solidly Indian. A 1961 law prohibits making maps of India that are incongruous with the one made by the Survey of India, so I imagine that Apple’s map would follow suit — but I cannot verify that.
I haven’t mentioned Israel and Palestine which, suffice to say, as Jon Stewart once put it, is a “bottomless cup of sadness”.
So it’s not a situation that is specific to Apple’s maps app, nor is it specific to Russia’s occupation of Ukrainian territory. But it remains one of several recent examples of tyrannical leaders wielding influence over American tech companies to further their propaganda campaigns. Apple and Google have little choice but to comply with the laws of the regions in which they operate, no matter how authoritarian.
But they would also not be forced to be used as vehicles for disinformation if they chose not to operate within countries that require such compliance. This doesn’t have to be a wholesale withdrawal. Apple doesn’t have to include Weather or Maps on iPhones sold in Russia, for example; Google has the ability to prevent its own maps app from being accessed from within the country. I’m not saying that either company should do this, and I’m sure this solution was at least suggested at both and was clearly shot down for reasons not publicly known.
This also becomes vastly more difficult when it comes to Apple’s relationship with Chinese authorities. In August, Google’s Project Zero team announced that iOS vulnerabilities that were patched earlier in the year had been actively exploited. Reporters put together the clues and established that the Chinese government was likely responsible for hacking websites that targeted the oppressed Uyghur population. But Apple’s response mostly nitpicked Google’s description and did not acknowledge the real damage these security bugs caused. Did they worry about whether their Chinese manufacturing facilities would be impacted by a more complete response that acknowledged the damage these vulnerabilities inflicted upon Uyghurs? I don’t know, but it’s awfully concerning that it’s a question that can reasonably be asked. If this was a worry, I maintain that Apple ought to have stayed silent and let press reports do the talking — but that is a last-ditch option, only slightly preferable to the statement they issued. Their purely defensive response was misleading, weak, and capitulating.
Quite simply, any company operating worldwide must set a line that it will not cross. There cannot be limitless ethical bending to appease an audience of countries ranging from liberal democracies to ruthless authoritarian states. Otherwise, products and services will morph from tools for customers into tools for dictators. There is unambiguous precedent.
I’m sure the founders of today’s tech giants did not consider any of this in their nascent days spent in proverbial Silicon Valley garages. Nevertheless, they must respect their responsibility now.
Apple has complied with Moscow’s demands to show Crimea, annexed from Ukraine in 2014, as Russian territory. Crimea & the cities of Sevastopol & Simferopol are now displayed as Rus. territory on Apple’s map & weather apps when used in Russia.
The United Nations continues to recognize Crimea as a Ukrainian territory, describing Russia’s presence on the peninsula as an “occupation”. The Russian state spun Apple’s labelling as an “inaccuracy”, as they are wont to do.
Earlier this year, Foreign Policy reported that Russia had successfully compelled Apple to store Russian users’ data on servers in Russia — adding that if it follows Russian counterterrorism law, it would be forced to decrypt and surrender user data to the government.
In 2017, Apple removed LinkedIn from the App Store in Russia, and there was some speculation that Apple had quietly stopped updating Telegram in the wake of Russia’s call for a ban on the app. (It eventually did make the updates.)
Earlier this year, I linked to the Foreign Policy report on Apple’s migration of Russian users’ iCloud data to local servers, wondering where the company would draw the line. Apple’s limits haven’t been found yet, as it slowly but surely capitulates to strongman leaders and authoritarian states. Give them an inch, they’ll take a peninsula.
Many readers probably believe they can trust links and emails coming from U.S. federal government domain names, or else assume there are at least more stringent verification requirements involved in obtaining a .gov domain versus a commercial one ending in .com or .org. But a recent experience suggests this trust may be severely misplaced, and that it is relatively straightforward for anyone to obtain their very own .gov domain.
Earlier this month, KrebsOnSecurity received an email from a researcher who said he got a .gov domain simply by filling out and emailing an online form, grabbing some letterhead off the homepage of a small U.S. town that only has a “.us” domain name, and impersonating the town’s mayor in the application.
The webpage for the DotGov registry, operated by the General Services Administration, hilariously states that “bona fide government services should be easy to identify on the internet”. They sure should.
By the way, the .gov domain extension is a bizarrely U.S.-only feature of the web that should eventually be abolished. Virtually every other country has its government services associated with a second-level domain with a country-specific domain extension — in Canada, for instance, we use .gc.ca; in the U.K., it’s .gov.uk. American government institutions should be required to use a specific .us address for consistency and equality. Arguably, .mil should follow suit in being decommissioned, and .edu could become available worldwide.
Historically, app makers could ask users for permission to track their location even when they’re not using the app. That was helpful for services that tracked where a user parked their car or where they may have lost a device paired to the phone. But in the new update, app makers can no longer ask for that functionality when an app is first set up — a potentially devastating blow to competitors such as Tile, maker of Bluetooth trackers that help people find lost items.
By contrast, Apple tracks iPhone users’ location at all times — and users can’t opt out unless they go deep into Apple’s labyrinthine menu of settings.
It isn’t exactly true that iPhone users’ locations are always being tracked by the system. Users are asked when setting up their iOS device whether they would like to enable location-based services; they are not automatically opted in. But once a user sets their device up, it’s unlikely they’ll change that setting. There is huge power in being the default, particularly when that’s the default across the entire system for any and all of Apple’s own services that require location access.
There is a fair argument that this makes sense. Buyers, presumably, have an implied trust in the first-party device manufacturer that cannot be extended to third-party developers. Apple’s track record on privacy is generally good; it would be a false equivalence to compare its system-enforced permission requests with the practices of companies like Facebook and Google that inhale user data and spit out creepy advertisements.
“I’m increasingly concerned about the use of privacy as a shield for anti-competitive conduct,” said Rep. David N. Cicilline (R.I.), who serves as chairman of the House Judiciary antitrust subcommittee. “There is a growing risk that without a strong privacy law in the United States, platforms will exploit their role as de facto private regulators by placing a thumb on the scale in their own favor.”
Cicilline is correct: the duty of regulating this stuff should not be passed off to companies motivated less by ethical concerns than revenue.
Twitter will begin deleting accounts that have been inactive for more than six months, unless they log in before an 11 December deadline.
The cull will include users who stopped posting to the site because they died — unless someone with that person’s account details is able to log in.
It is the first time Twitter has removed inactive accounts on such a large scale.
The site said it was because users who do not log in are unable to agree to its updated privacy policies.
My timeline has been humming today with people excited to claim usernames that will likely be freed up, but it seems as though Twitter has — as ever — failed to fully think through their plans. I imagine there are plenty of people out there who occasionally check in on the Twitter accounts of deceased friends and family; Twitter simply has no solution to preserve those memories.
The first is the tax we each pay so that companies can bid against each other to buy traffic from Google. Because their revenue model is (cleverly) built on both direct marketing and an auction, they are able to keep a significant portion of the margin from many industries. They’ve become the internet’s landlord.
The second is harder to see: Because Google has made it ever more difficult for sites to be found, previously successful businesses like Groupon, Travelocity and Hipmunk suffer. As a result, new web companies are significantly harder to fund and build. If you’re dependent on being found in a Google search, it’s probably worth rethinking your plan.
I think there’s a widespread assumption that Google’s search engine is a relatively benevolent and impartial directory of the web at large. The Wall Street Journal’s recent investigation sure makes it sound like that’s the expectation; the authors seemed surprised by how often the ranking parameters are adjusted so that spam, trash, and marketing pablum doesn’t find its way to the top — albeit twisting their findings to imply that Google is pushing a political agenda. There simply isn’t a good way to make search engines truly neutral; that’s fine, but users need to understand that.
Non-Google search engines also need to be more competitive, but it takes time to chip away at a company with complete market share dominance — particularly when they use it as leverage for obtaining an advantage in other markets.
The Washington Post and New York Times have both now struck deals with cellular providers to hype 5G networking for journalism; neither has explained what, exactly, faster cellular networks will do to make journalism any better — where by “better” I mean “more accurate, situated in context, and comprehensive”.
Here’s what the Times said it would be using 5G to do in its partnership with Verizon, followed by the Post’s similar pitch:
The Times has journalists reporting on stories from over 160 countries. Getting their content online often requires high bandwidth and reliable internet connections. At home, too, covering live events means photographers might take thousands of photos without access to a reliable connection to send data back to our media servers. We’re exploring how 5G can help our journalists automatically stream media — HD photos, videos and audio, and even 3D models — back to the Newsroom in real-time, as they are captured.
In addition, as news breaks throughout the country, The Post plans to experiment with reporters using millimeter wave 5G+ technology to transmit their stories, photos and videos faster and more reliably, whether they are covering forest fires on the West Coast or hurricane weather in the southeast.
Most journalism is still text. The Times and Post are absolutely doing wonderful things with video, but most of what they produce is still text, and text doesn’t need speed. I can see how photos and video would get to the newsroom faster, but is the speed of delivery really improving journalism?
I hope that the most time-consuming part of a journalist’s job is and remains in the analysis and research of a story — and having a faster connection does not inherently make someone a better researcher.
[…] It’s pretty telling of the era that nobody at either paper thought such a partnership could potentially represent a possible conflict of interest as they cover one of the most heavily hyped tech shifts in telecom history.
I don’t think either publication would jeopardize its integrity to spike stories about its corporate partners. But as antitrust questions increasingly circle tech companies, it is only a matter of time before questions about the lack of competition amongst ISPs and cellular providers cannot be ignored by lawmakers any longer. These are among the most important stories of our time. Should inherently skeptical publications be cozying up to the subjects of their investigations?
Late last week, people on Twitter started noticing sponsored tweets promoting the island of Eroda, linking to a website advertising its picturesque views, marine life, and seaside cuisine.
The only catch? Eroda doesn’t exist. It’s completely fictional. Musician/photographer Austin Strifler was the first to notice, bringing attention to it in a long thread that unraveled over the last few days.
The creators of the Visit Eroda campaign covered their tracks well. According to Andy Baio, they didn’t leave any identifying information in image metadata, domain records, or in the site’s markup.
I verified a connection between @visiteroda and @Harry_Styles. The Eroda page is using a [Facebook] pixel installed on http://hstyles.co.uk. You can only track websites you have control of. They are related.
I’m not arguing that a promotional campaign for Harry Styles’ new record should be taken as a serious privacy violation; I am, in fact, quite sober. But I think there’s a lesson in how difficult it was for this campaign to keep identifying data completely disassociated. The need for behaviourally-targeted advertising is ultimately what made it easy to reassociate the anonymous website.
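The reassociation technique above can be sketched in a few lines. This is a minimal illustration, not the investigators’ actual tooling: the page sources and the pixel ID below are hypothetical stand-ins, and real pages embed the Facebook pixel via a larger script snippet. The core idea is simply that the `fbq('init', …)` call carries an account-specific numeric ID, and a shared ID across two sites implies shared control.

```python
import re

# Hypothetical page sources; a real Facebook pixel snippet includes a call
# like fbq('init', '<numeric account id>') in the page's markup.
page_a = "<script>fbq('init', '123456789012345');</script>"  # e.g. the mystery site
page_b = "<script>fbq('init', '123456789012345');</script>"  # e.g. the artist's site

def pixel_ids(html):
    """Extract any Facebook pixel IDs embedded in a page's markup."""
    return set(re.findall(r"fbq\('init',\s*'(\d+)'\)", html))

# You can only install a pixel on sites you control, so a shared ID
# links both sites to the same advertiser account.
shared = pixel_ids(page_a) & pixel_ids(page_b)
print(shared)  # {'123456789012345'}
```

The same logic applies to shared Google Analytics property IDs, which is how similar de-anonymizations have been done before.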
See also: A 2011 article by Andy Baio in which he describes how he was able to figure out the author of an ostensibly anonymous blog because of a shared Google Analytics account.
Sir Tim Berners-Lee has launched a global action plan to save the web from political manipulation, fake news, privacy violations and other malign forces that threaten to plunge the world into a “digital dystopia”.
The Contract for the Web requires endorsing governments, companies and individuals to make concrete commitments to protect the web from abuse and ensure it benefits humanity.
The “contract” — a term I use very loosely, as the only punishment for a signatory’s failure to uphold its terms is to be removed from the list of organizations which support it — is endorsed by usual suspects like the Electronic Frontier Foundation and DuckDuckGo. It also counts as supporters Google, Facebook, and Twitter. Two of the nine principles of the Contract for the Web are about respecting users’ privacy in meaningful ways. You do the math.
So it’s flabbergasting to now see Berners-Lee in the New York Times sidestepping any accountability, and instead promoting himself as the restorer of the web’s virtue. Berners-Lee is pushing what he calls the Contract for the Web, which he describes, with no irony, as a “global plan of action … to make sure our online world is safe, empowering and genuinely for everyone.” He assures us that “the tech giants Google, Facebook, [and] Microsoft” are all “committing to action.” What a relief! Berners-Lee still seems to think Big Tech can do no wrong, even at a time when public and political opinion are going the opposite direction.
I’m not sure I share Butterick’s cynical view of this effort, but I do not see it making a lick of difference in the behaviour or business models of behavioural advertising companies with interactive front-ends.
On October 16, 2019 Bob Diachenko and Vinny Troia discovered a wide-open Elasticsearch server containing an unprecedented 4 billion user accounts spanning more than 4 terabytes of data.
A total count of unique people across all data sets reached more than 1.2 billion people, making this one of the largest data leaks from a single source organization in history. The leaked data contained names, email addresses, phone numbers, LinkedIn and Facebook profile information.
What makes this data leak unique is that it contains data sets that appear to originate from 2 different data enrichment companies.
It’s entirely possible that this data came from a PDL subscriber and not PDL themselves. Someone left an Elasticsearch instance wide open and by definition, that’s a breach on their behalf and not PDL’s. Yet it doesn’t change the fact that PDL is indicated as the source in the data itself and it definitely doesn’t change the fact that my data (and probably your data too), is available freely to anyone who wishes to query their API. I signed up for a free API key just to see how much they have on me (they’ll give you 1k free API calls a month) and the result was rather staggering.
And this is the real problem: regardless of how well these data enrichment companies secure their own system, once they pass the data downstream to customers it’s completely out of their control. My data — almost certainly your data too — is replicated, mishandled and exposed and there’s absolutely nothing we can do about it. Well, almost nothing…
I also signed up for an API key and found records associated with my name and one of my email addresses. Everything in it appears to be scraped from public sources — my name matched outdated LinkedIn data from the time that I thought it was an excellent idea to have a LinkedIn profile, while my email address surfaced a mixed data set.
I am, of course, responsible for putting my information out into the world — if someone can see it, they can copy it. But should they be allowed to store it as long as they like? I deleted my LinkedIn profile years ago, but People Data Labs still has my employment history from there. Furthermore, my email address was not public or visible on any of my social media profiles, but PDL still managed to connect all of them because they used each social media company’s API to scrape user details. I have little recourse in getting rid of PDL’s copy of this information short of contacting them and all other “data enrichment” companies individually to request deletion. That seems entirely wrong.
At the end of last week, the Internet Society (ISOC) announced that it has sold the rights to the .org registry for an undisclosed sum to a private equity company called Ethos Capital. The deal is set to complete in the first quarter of next year.
The decision shocked the internet industry, not least because the .org registry has always been operated on a non-profit basis and has actively marketed itself as such. The suffix “org” on an internet address – and there are over 10 million of them – has become synonymous with non-profit organizations.
However, overnight and without warning that situation changed when the registry was sold to a for-profit company. The organization that operates the .org registry, PIR – which stands for Public Interest Registry – has confirmed it will discard the non-profit status it has held since 2003 as a result of the sale.
It’s not just a bleak turn of events for millions of charities and non-profit organizations worldwide that are tied to their domains; McCarthy’s investigation found suspicious undercurrents behind the sale. Truly one of the year’s most upsetting stories about the web.
After the storm, I was determined to find out why the ‘Report An Outage’ page was so painful to download.
There were 4.6MB of unused code being downloaded on a page whose main and only content, apart from a sign in button, is a form to submit your address.
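To put that 4.6 MB figure in perspective, here’s a rough back-of-the-envelope sketch; the connection speeds are my own assumptions for illustration, not measurements from Stimac’s article:

```python
# How long does 4.6 MB of unused code alone take to download
# at various (assumed) real-world throughputs?
PAGE_BYTES = 4.6 * 1024 * 1024

links_bps = {
    "fast LTE": 20e6,         # bits per second
    "typical 3G": 1.5e6,
    "storm-degraded link": 0.25e6,
}

for name, bps in links_bps.items():
    seconds = PAGE_BYTES * 8 / bps
    print(f"{name}: ~{seconds:.0f} s")
# fast LTE: ~2 s
# typical 3G: ~26 s
# storm-degraded link: ~154 s
```

On a connection degraded by exactly the kind of weather event that prompts someone to report an outage, dead code alone costs minutes.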
Paul Boag tweeted an excellent illustration of the benefits of designing for accessibility, dividing impairments into permanent, temporary, and situational categories. Performance could easily be framed the same way: some people have permanently restricted bandwidth because of where they live or the device they use; a storm like the one Stimac faced temporarily impacts connectivity; and simply getting into an elevator or standing in a crowded city is a situational constraint.