“I did expect some people to be unhappy with the decision, I expected some pushback,” he told The Register, adding: “But the level of pushback has been very strong.”
He was aware, he says, that people would not like two key aspects of the decision: the move from a non-profit model to a for-profit one; and the lack of consultation. […]
Translation: “I, Andrew Sullivan, thought I could get away with exploiting charities and not-for-profits so long as I did so quietly. However, this plan has backfired spectacularly because it turns out that people actually pay attention to this stuff. Who knew?”
[…] He had explanations ready for both: “The registry business is still a business, and this represented a really big opportunity, and one that is good for PIR [Public Interest Registry].”
As for the lack of consultation: “We didn’t go looking for this. If we had done that [consulted publicly about the sale of .org], the opportunity would have been lost. If we had done it in public, it would have created a lot of uncertainty without any benefit.”
Translation: “If we had told people about this before the sale, it would have meant answering awkward questions that I very much wish to avoid — then and now.”
Just why is ISOC approving this deal, going back on nearly two decades of non-profit stewardship and infuriating many of its ardent supporters? Is it just money?
Yes and no.
“The lump sum is definitely a benefit,” he admits, before arguing passionately about ISOC’s core missions. “The work ISOC does is focused on policies and connecting the unconnected. There is already a community organisation that covers domain names – and that’s ICANN.”
There are four main issues libraries are having when it comes to accessing ebooks and e-audiobooks, she said. The first is cost: ebooks or e-audiobooks can cost up to five times the price of a print copy for a library, she said.
The second issue is the rise in metered access or expiry dates for ebook licenses. More and more, publishers are making ebook licenses expire after two years, or after a certain number of uses.
Third, some e-audiobooks just aren’t available to libraries at all. That’s because companies like Audible have exclusivity rights on certain titles, blocking libraries from accessing them.
And of course, there’s the recent change by Macmillan, a new type of restriction.
Can you imagine the hysterical reaction if someone had suggested the creation of public libraries today? ‘For free? How are you going to pay for that, STALIN?’
This is not a unique observation in the world of tweet-based observations, but it has remained a nagging thought in the back of my head for years. Libraries have nimbly adapted as they continue to serve community needs, in spite of ridiculous doubts about their continued relevance and twenty-first century roadblocks like those reported above. Libraries deserve ongoing support for the greater good; DRM and other gatekeepers to learning are antithetical to their mission and role.
This Crimea situation is a real shitshow. And so is Apple’s response to it.
Last night, I oversimplified my reaction to Apple’s compliance with Russia’s requirement that maps display Crimea as Russian territory when those maps are viewed in Russia. There’s some subtlety that I neglected to dive into that doesn’t change my objection to Apple’s acquiescence, but helps provide some clarity on why it is objectionable.
The first thing to know is that Apple is not unique in how it recognizes Crimea and disputed territory elsewhere. Google has a similar policy, even saying to Tass, a Russian news agency, that they “fixed a bug” that indicated Crimea was Ukrainian territory. This is similar to the obviously misleading language used by Russia to describe Apple’s change yesterday. Here WeGo — originally developed by Nokia before being spun off as its own company — shows Crimean addresses as Russian when browsing from within Russia, and Ukrainian when browsing elsewhere.
But other mapping software still retains Ukraine’s territorial claim over Crimea, even when browsing using a Russian proxy, including Microsoft’s Bing Maps. OpenStreetMap — used by Facebook, Foursquare, and others — seems to take a middle-ground approach with Crimean addresses shown as being within Ukraine, but with a border around the entire peninsula as though it’s its own country.
This is also a situation that is not entirely unique to Ukraine, Russia, and Crimea. Maps with countries and cities and borders are inherently political — it’s right there in the name — and there are dozens of disputes over borders and sovereignty all around the world. The display of this disputed land is handled differently depending on mapping software and region but, due to the nature of things that are location dependent, this is devilishly difficult to test. I am still not entirely confident in what I found. For example, the region of Kashmir displays in Apple Maps and Google Maps on my iPad as disputed territories; but, if I use Google Maps on the web and switch its region to India, it becomes solidly Indian. A 1961 law prohibits making maps of India that are incongruous with the one made by the Survey of India, so I imagine that Apple’s map would follow suit — but I cannot verify that.
I haven’t mentioned Israel and Palestine which, suffice to say, as Jon Stewart once put it, is a “bottomless cup of sadness”.
So it’s not a situation that is specific to Apple’s maps app, nor is it specific to Russia’s occupation of Ukrainian territory. But it remains one of several recent examples of tyrannical leaders wielding influence over American tech companies to further their propaganda campaigns. Apple and Google have little choice but to comply with the laws of the regions in which they operate, no matter how authoritarian.
But they would also not be forced to be used as vehicles for disinformation if they chose not to operate within countries that require such compliance. This doesn’t have to be a wholesale withdrawal. Apple doesn’t have to include Weather or Maps on iPhones sold in Russia, for example; Google has the ability to prevent its own maps app from being accessed from within the country. I’m not saying that either company should do this, and I’m sure this solution was at least suggested at both and was clearly shot down for reasons not publicly known.
This also becomes vastly more difficult when it comes to Apple’s relationship with Chinese authorities. In August, Google’s Project Zero team announced that iOS vulnerabilities that were patched earlier in the year were actively exploited. Reporters put together the clues and established that the Chinese government was likely responsible for hacking into websites that targeted the oppressed Uyghur population. But Apple’s response mostly nitpicked Google’s description and did not acknowledge the real damage these security bugs caused. Did they worry about whether their Chinese manufacturing facilities would be impacted by a more complete response that acknowledged the damage these vulnerabilities inflicted upon Uyghurs? I don’t know, but it’s awfully concerning that it’s a question that can reasonably be asked. If this was a worry, I maintain that Apple ought to have stayed silent and let press reports do the talking — but that is a last-ditch option, only slightly less preferable than a complete response. Their purely defensive response was misleading, weak, and capitulating.
Quite simply, any company operating worldwide must set a line that it will not cross. There cannot be limitless ethical bending to appease an audience of countries ranging from liberal democracies to ruthless authoritarian states. Otherwise, products and services will morph from tools for customers into tools for dictators. There is unambiguous precedent.
I’m sure the founders of today’s tech giants did not consider any of this in their nascent days spent in proverbial Silicon Valley garages. Nevertheless, they must respect their responsibility now.
Apple has complied with Moscow’s demands to show Crimea, annexed from Ukraine in 2014, as Russian territory. Crimea & the cities of Sevastopol & Simferopol are now displayed as Rus. territory on Apple’s map & weather apps when used in Russia.
The United Nations continues to recognize Crimea as a Ukrainian territory, describing Russia’s presence on the peninsula as an “occupation”. The Russian state spun Apple’s labelling as an “inaccuracy”, as they are wont to do.
Earlier this year, Foreign Policy reported that Russia had successfully compelled Apple to store Russian users’ data on servers in Russia — adding that if it follows Russian counterterrorism law, it would be forced to decrypt and surrender user data to the government.
In 2017, Apple removed LinkedIn from the App Store in Russia, and there was some speculation that Apple had quietly stopped updating Telegram in the wake of Russia’s call for a ban on the app. (It eventually did make the updates.)
Earlier this year, I linked to the Foreign Policy report on Apple’s migration of Russian users’ iCloud data to local servers, wondering where the company would draw the line. Apple’s limits haven’t been found yet, as it slowly but surely capitulates to strongman leaders and authoritarian states. Give them an inch, they’ll take a peninsula.
Many readers probably believe they can trust links and emails coming from U.S. federal government domain names, or else assume there are at least more stringent verification requirements involved in obtaining a .gov domain versus a commercial one ending in .com or .org. But a recent experience suggests this trust may be severely misplaced, and that it is relatively straightforward for anyone to obtain their very own .gov domain.
Earlier this month, KrebsOnSecurity received an email from a researcher who said he got a .gov domain simply by filling out and emailing an online form, grabbing some letterhead off the homepage of a small U.S. town that only has a “.us” domain name, and impersonating the town’s mayor in the application.
The webpage for the DotGov registry, operated by the General Services Administration, hilariously states that “bona fide government services should be easy to identify on the internet”. They sure should.
By the way, the .gov domain extension is a bizarrely U.S.-only feature of the web that should eventually be abolished. Virtually every other country has its government services associated with a second-level domain with a country-specific domain extension — in Canada, for instance, we use .gc.ca; in the U.K., it’s .gov.uk. American government institutions should be required to use a specific .us address for consistency and equality. Arguably, .mil should follow suit in being decommissioned, and .edu could become available worldwide.
Historically, app makers could ask users for permission to track their location even when they’re not using the app. That was helpful for services that tracked where a user parked their car or where they may have lost a device paired to the phone. But in the new update, app makers can no longer ask for that functionality when an app is first set up — a potentially devastating blow to competitors such as Tile, maker of Bluetooth trackers that help people find lost items.
By contrast, Apple tracks iPhone users’ location at all times — and users can’t opt out unless they go deep into Apple’s labyrinthine menu of settings.
It isn’t exactly true that iPhone users’ locations are always being tracked by the system. Users are asked when setting up their iOS device whether they would like to enable location-based services; they are not automatically opted in. But once a user sets their device up, it’s unlikely they’ll change that setting. There is huge power in being the default, particularly when that’s the default across the entire system for any and all of Apple’s own services that require location access.
There is a fair argument for why this makes sense. Buyers, presumably, have an implied trust in the first-party device manufacturer that cannot be extended to third-party developers. Apple’s track record on privacy is generally good; it would be a false equivalence to compare their system-enforced permission requests with companies like Facebook and Google that inhale user data and spit out creepy advertisements.
“I’m increasingly concerned about the use of privacy as a shield for anti-competitive conduct,” said Rep. David N. Cicilline (R.I.), who serves as chairman of the House Judiciary antitrust subcommittee. “There is a growing risk that without a strong privacy law in the United States, platforms will exploit their role as de facto private regulators by placing a thumb on the scale in their own favor.”
Cicilline is correct: the duty of regulating this stuff should not be passed off to companies motivated less by ethical concerns than revenue.
Twitter will begin deleting accounts that have been inactive for more than six months, unless they log in before an 11 December deadline.
The cull will include users who stopped posting to the site because they died — unless someone with that person’s account details is able to log in.
It is the first time Twitter has removed inactive accounts on such a large scale.
The site said it was because users who do not log in are unable to agree to its updated privacy policies.
My timeline has been humming today with people excited to claim usernames that will likely be freed up, but it seems as though Twitter has — as ever — failed to fully think through their plans. I imagine there are plenty of people out there who occasionally check in on the Twitter accounts of deceased friends and family; Twitter simply has no solution to preserve those memories.
The first is the tax we each pay so that companies can bid against each other to buy traffic from Google. Because their revenue model is (cleverly) built on both direct marketing and an auction, they are able to keep a significant portion of the margin from many industries. They’ve become the internet’s landlord.
The second is harder to see: Because Google has made it ever more difficult for sites to be found, previously successful businesses like Groupon, Travelocity and Hipmunk suffer. As a result, new web companies are significantly harder to fund and build. If you’re dependent on being found in a Google search, it’s probably worth rethinking your plan.
I think there’s a widespread assumption that Google’s search engine is a relatively benevolent and impartial directory of the web at large. The Wall Street Journal’s recent investigation sure makes it sound like that’s the expectation; the authors seemed surprised by how often the ranking parameters are adjusted so that spam, trash, and marketing pablum don’t find their way to the top — albeit twisting their findings to imply that Google is pushing a political agenda. There simply isn’t a good way to make search engines truly neutral; that’s fine, but users need to understand that.
Non-Google search engines also need to be more competitive, but it takes time to chip away at a company with complete market share dominance — particularly when they use it as leverage for obtaining an advantage in other markets.
The Washington Post and New York Times have both now struck deals with cellular providers to hype 5G networking for journalism; neither has explained what, exactly, faster cellular networks will do to make journalism any better — where by “better”, in the case of journalism, I mean “more accurate, situated in context, and comprehensive”.
Here’s what the Times said they’d be using 5G to do in their partnership with Verizon:
The Times has journalists reporting on stories from over 160 countries. Getting their content online often requires high bandwidth and reliable internet connections. At home, too, covering live events means photographers might take thousands of photos without access to a reliable connection to send data back to our media servers. We’re exploring how 5G can help our journalists automatically stream media — HD photos, videos and audio, and even 3D models — back to the Newsroom in real-time, as they are captured.
And here’s the Post:
In addition, as news breaks throughout the country, The Post plans to experiment with reporters using millimeter wave 5G+ technology to transmit their stories, photos and videos faster and more reliably, whether they are covering forest fires on the West Coast or hurricane weather in the southeast.
Most journalism is still text. The Times and Post are absolutely doing wonderful things with video, but most of what they produce is still text, and text doesn’t need speed. I can see how photos and video would get to the newsroom faster, but is the speed of delivery really improving journalism?
I hope that the most time-consuming part of a journalist’s job is and remains in the analysis and research of a story — and having a faster connection does not inherently make someone a better researcher.
[…] It’s pretty telling of the era that nobody at either paper thought such a partnership could potentially represent a possible conflict of interest as they cover one of the most heavily hyped tech shifts in telecom history.
I don’t think either publication would jeopardize its integrity to spike stories about its corporate partners. But as antitrust questions increasingly circle tech companies, it is only a matter of time before questions about the lack of competition amongst ISPs and cellular providers cannot be ignored by lawmakers any longer. These are among the most important stories of our time. Should inherently skeptical publications be cozying up to the subjects of their investigations?
Late last week, people on Twitter started noticing sponsored tweets promoting the island of Eroda, linking to a website advertising its picturesque views, marine life, and seaside cuisine.
The only catch? Eroda doesn’t exist. It’s completely fictional. Musician/photographer Austin Strifler was the first to notice, bringing attention to it in a long thread that unraveled over the last few days.
The creators of the Visit Eroda campaign covered their tracks well. According to Baio, they didn’t leave any identifying information in image metadata, domain records, or in the site’s markup.
I verified a connection between @visiteroda and @Harry_Styles. The Eroda page is using a [Facebook] pixel installed on http://hstyles.co.uk. You can only track websites you have control of. They are related.
I’m not arguing that a promotional campaign for Harry Styles’ new record should be taken as a serious privacy violation; I am, in fact, quite sober. But I think there’s a lesson in how hard it is for a campaign to completely disassociate itself from identifying data. The need for behaviourally-targeted advertising is what ultimately made it easy to reassociate the anonymous website.
See also: A 2011 article by Andy Baio in which he describes how he was able to figure out the author of an ostensibly anonymous blog because of a shared Google Analytics account.
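The technique behind both of these stories is simple enough to sketch: if two pages embed the same Google Analytics property ID or Facebook pixel ID, they are almost certainly operated by the same party. This is a rough illustration, not the exact method Baio used — the regexes match the common embed formats and are illustrative rather than exhaustive:

```python
import re

# Common embed formats: "UA-1234567-1" property IDs for Google
# Analytics, and the numeric ID passed to fbq('init', ...) for a
# Facebook pixel. Illustrative patterns only.
GA_ID = re.compile(r"UA-\d{4,10}-\d{1,4}")
FB_PIXEL = re.compile(r"""fbq\(\s*['"]init['"]\s*,\s*['"](\d+)['"]""")

def tracking_ids(html: str) -> set[str]:
    """Collect analytics and pixel identifiers embedded in a page."""
    return set(GA_ID.findall(html)) | set(FB_PIXEL.findall(html))

def shared_ids(html_a: str, html_b: str) -> set[str]:
    """IDs present on both pages — evidence the sites are related."""
    return tracking_ids(html_a) & tracking_ids(html_b)
```

Run against the saved HTML of two ostensibly unrelated sites, a non-empty result from `shared_ids` is exactly the kind of link that unmasked the Eroda campaign.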
Sir Tim Berners-Lee has launched a global action plan to save the web from political manipulation, fake news, privacy violations and other malign forces that threaten to plunge the world into a “digital dystopia”.
The Contract for the Web requires endorsing governments, companies and individuals to make concrete commitments to protect the web from abuse and ensure it benefits humanity.
The “contract” — a term I use very loosely, as the only punishment for a signatory’s failure to uphold its terms is to be removed from the list of organizations which support it — is endorsed by usual suspects like the Electronic Frontier Foundation and DuckDuckGo. It also counts as supporters Google, Facebook, and Twitter. Two of the nine principles of the Contract for the Web are about respecting users’ privacy in meaningful ways. You do the math.
So it’s flabbergasting to now see Berners-Lee in the New York Times sidestepping any accountability, and instead promoting himself as the restorer of the web’s virtue. Berners-Lee is pushing what he calls the Contract for the Web, which he describes, with no irony, as a “global plan of action … to make sure our online world is safe, empowering and genuinely for everyone.” He assures us that “the tech giants Google, Facebook, [and] Microsoft” are all “committing to action.” What a relief! Berners-Lee still seems to think Big Tech can do no wrong, even at a time when public and political opinion are going the opposite direction.
I’m not sure I share Butterick’s cynical view of this effort, but I do not see it making a lick of difference in the behaviour or business models of behavioural advertising companies with interactive front-ends.
On October 16, 2019 Bob Diachenko and Vinny Troia discovered a wide-open Elasticsearch server containing an unprecedented 4 billion user accounts spanning more than 4 terabytes of data.
A total count of unique people across all data sets reached more than 1.2 billion people, making this one of the largest data leaks from a single source organization in history. The leaked data contained names, email addresses, phone numbers, LinkedIn and Facebook profile information.
What makes this data leak unique is that it contains data sets that appear to originate from 2 different data enrichment companies.
It’s entirely possible that this data came from a PDL subscriber and not PDL themselves. Someone left an Elasticsearch instance wide open and by definition, that’s a breach on their behalf and not PDL’s. Yet it doesn’t change the fact that PDL is indicated as the source in the data itself and it definitely doesn’t change the fact that my data (and probably your data too), is available freely to anyone who wishes to query their API. I signed up for a free API key just to see how much they have on me (they’ll give you 1k free API calls a month) and the result was rather staggering.
And this is the real problem: regardless of how well these data enrichment companies secure their own system, once they pass the data downstream to customers it’s completely out of their control. My data — almost certainly your data too — is replicated, mishandled and exposed and there’s absolutely nothing we can do about it. Well, almost nothing…
I also signed up for an API key and found records associated with my name and one of my email addresses. Everything in it appears to be scraped from public sources — my name matched outdated LinkedIn data from the time that I thought it was an excellent idea to have a LinkedIn profile, while my email address surfaced a mixed data set.
I am, of course, responsible for putting my information out into the world — if someone can see it, they can copy it. But should they be allowed to store it as long as they like? I deleted my LinkedIn profile years ago, but People Data Labs still has my employment history from there. Furthermore, my email address was not public or visible on any of my social media profiles, but PDL still managed to connect all of them because they used each social media company’s API to scrape user details. I have little recourse in getting rid of PDL’s copy of this information short of contacting them and all other “data enrichment” companies individually to request deletion. That seems entirely wrong.
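For anyone curious what that lookup looks like in practice, here is a rough sketch of the sort of query I ran. The endpoint path and parameter names are assumptions modelled on PDL’s public enrichment API at the time of writing and may have changed; treat them as illustrative:

```python
import urllib.parse

# Assumed endpoint for PDL's person-enrichment API; not guaranteed
# to match their current documentation.
API_BASE = "https://api.peopledatalabs.com/v5/person/enrich"

def build_lookup_url(api_key: str, email: str) -> str:
    """Build a single-email enrichment query (assumed parameter names)."""
    query = urllib.parse.urlencode({"api_key": api_key, "email": email})
    return f"{API_BASE}?{query}"

def exposed_fields(record: dict) -> list[str]:
    """Which personal fields does a returned record actually contain?"""
    interesting = ("full_name", "emails", "phone_numbers",
                   "linkedin_url", "facebook_url", "job_title")
    return [f for f in interesting if record.get(f)]
```

The unsettling part is not the query itself but the answer: a response populated with a name, employment history, and linked social profiles, assembled from sources you may have deleted years ago.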
At the end of last week, the Internet Society (ISOC) announced that it has sold the rights to the .org registry for an undisclosed sum to a private equity company called Ethos Capital. The deal is set to complete in the first quarter of next year.
The decision shocked the internet industry, not least because the .org registry has always been operated on a non-profit basis and has actively marketed itself as such. The suffix “org” on an internet address – and there are over 10 million of them – has become synonymous with non-profit organizations.
However, overnight and without warning that situation changed when the registry was sold to a for-profit company. The organization that operates the .org registry, PIR – which stands for Public Interest Registry – has confirmed it will discard the non-profit status it has held since 2003 as a result of the sale.
It’s not just a bleak turn of events for millions of charities and non-profit organizations worldwide that are tied to their domains; McCarthy’s investigation found suspicious undercurrents behind the sale. Truly one of the year’s most upsetting stories about the web.
After the storm, I was determined to find out why the ‘Report An Outage’ page was so painful to download.
There were 4.6MB of unused code being downloaded on a page whose main and only content, apart from a sign in button, is a form to submit your address.
Paul Boag tweeted an excellent illustration of the benefits of designing for accessibility, dividing their impact into permanent, temporary, and situational occurrences. Performance could easily be on the same list: some people have permanently restricted bandwidth because of where they live or the device they use; temporarily, something like the storm that Stimac faced would impact connectivity; and simply getting in an elevator or being in a crowded city can be situations that impact performance.
On November 7th, tens of thousands of people across the US woke up to strange text messages from friends and loved ones, occasionally from people who were no longer in their lives, like an ex-boyfriend or a best friend who had recently died. The messages had actually been sent months earlier, on Valentine’s Day, but had been frozen in place by a glitched server and were only shot out when the system was finally fixed nine months later, in the middle of the night.
AT&T, T-Mobile, and Sprint currently use Syniverse to route text messages to people on other networks, according to data available to Tyntec, a smaller messaging services company that spoke with The Verge. T-Mobile confirmed that it uses Syniverse, AT&T declined to comment, and Sprint did not respond to a request for comment. Verizon confirmed that it uses a competitor, SAP.
But for years, industry figures have been sounding the alarm about just such a scenario. The very same Valentine’s Day that the SMS server froze up, a mobile services executive named Thorsten Trapp had flown into Washington to warn lawmakers about Syniverse’s dominance in messaging and other carrier services. He came armed with a series of slide decks laying out Syniverse’s dominance in SMS and MMS messaging, as well as in providing critical services for 2G, 3G, and roaming.
“This thing is monopolized. You have literally only one provider who makes sense in the messaging world,” says Trapp, the chief technology officer of Tyntec. “No innovation, no nothing.” His company is currently suing Syniverse for alleged anticompetitive behavior.
Imagine a parallel universe where antitrust law still had teeth.
Apple Inc. is overhauling how it tests software after a swarm of bugs marred the latest iPhone and iPad operating systems, according to people familiar with the shift.
Software chief Craig Federighi and lieutenants including Stacey Lysik announced the changes at a recent internal “kickoff” meeting with the company’s software developers. The new approach calls for Apple’s development teams to ensure that test versions, known as “daily builds,” of future software updates disable unfinished or buggy features by default. Testers will then have the option to selectively enable those features, via a new internal process and settings menu dubbed Flags, allowing them to isolate the impact of each individual addition on the system.
The news in this story is not that Apple has added a system to hide unfinished changes and new features. Such a process is already in place; that’s how they try to prevent unannounced stuff from showing up in external builds. Nor is it particularly newsworthy that Apple is working on iOS 14. Gurman provides no details about the release, other than writing that it will “rival iOS 13 in the breadth of its new capabilities”, despite the HTML page title implying that the article describes iOS 14 features.
The news seems to be entirely contained in this sentence:
The new approach calls for Apple’s development teams to ensure that test versions, known as “daily builds,” of future software updates disable unfinished or buggy features by default.
From the outside, this feels like something of a rehash of the internal meeting after iOS 11’s similarly buggy release. Federighi announced that the company was pushing features scheduled for iOS 12 into the following year so that there would be a renewed focus on quality. It’s worrying that this is an issue that needs to be emphasized again, and so soon.
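The gating pattern the report describes is a common one outside Apple, too: unfinished features ship disabled by default, and testers opt in selectively to isolate each feature’s impact. A minimal sketch — all names here are hypothetical, not Apple’s internal “Flags” system:

```python
from dataclasses import dataclass, field

@dataclass
class FlagRegistry:
    # Feature name -> default state in a daily build.
    defaults: dict[str, bool] = field(default_factory=dict)
    # Per-tester opt-ins layered on top of the defaults.
    overrides: dict[str, bool] = field(default_factory=dict)

    def register(self, name: str, enabled_by_default: bool = False) -> None:
        """Declare a feature; unfinished work defaults to disabled."""
        self.defaults[name] = enabled_by_default

    def enable(self, name: str) -> None:
        """A tester opts in to one feature to isolate its impact."""
        self.overrides[name] = True

    def is_enabled(self, name: str) -> bool:
        return self.overrides.get(name, self.defaults.get(name, False))
```

The value of the pattern is that a half-finished feature can live on the main branch without destabilizing every daily build — which is presumably the point of emphasizing it after two rough releases.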
Tim Cook isn’t the only tech CEO making friends with the big wet President. But if I were on the same short list as Mark Zuckerberg, I might want to take that as a clue to reconsider my stance.
Also reportedly dining with Zuckerberg and the President was Peter Thiel, a man who once said that he “no longer [believes] that freedom and democracy are compatible”.
Update: For clarification, I understand that working dinners with the President are fairly common for CEOs and other prominent business leaders. They are obviously valuable for in-person lobbying, but I think they create an uncomfortable compromise. The less-formal and cozier setting is unbecoming of CEOs who wish to distance themselves from a discriminatory President.
President Trump just toured a Texas plant that has been making Apple computers since 2013 and took credit for it, suggesting the plant opened today. “Today is a very special day.”
Tim Cook spoke immediately after him and did not correct the record.
The President later made the same claim on Twitter, taking credit for “[bringing] high paying jobs back to America”, which is a lie. It is a manufacturing facility that has been producing the same low-volume product for the past six years. I wish Cook had corrected him, and also defended reporters subjected to the President’s abuse since Apple now runs a news subscription business.
The plant toured on Wednesday, operated by Flex, assembles the Mac Pro, a high-end computer that starts at $6,000. A previous model of the computer was made in the same facility starting in 2013. Apple doesn’t own or operate its own manufacturing and instead contracts with companies like Flex. A Flex spokesperson declined to comment.
This isn’t the first time the big wet President said some bullshit about Apple manufacturing products in the United States. In 2017, he claimed that Apple would open “three big plants, beautiful plants” in the U.S., because he doesn’t know how to match adjectives and nouns. While Apple has invested in American manufacturing, they have not built three factories in the U.S., not even small and ugly ones.
The FCC’s Orwellian-named “Restoring Internet Freedom” order certainly did kill rules preventing internet service providers (ISPs) from abusing their broadband monopolies to harm competitors and consumers. And it did so in a flurry of controversy and fraud, all while ignoring the opinions of a bipartisan majority of Americans who wanted to keep net neutrality in place.
But the industry-backed repeal quietly had a much broader objective: It all-but obliterated the FCC’s authority to hold ISPs accountable for any number of other bad behaviors. Instead, it dumped most telecom oversight on a Federal Trade Commission (FTC) that experts say lacks the resources or authority to police the sector and punish bad behavior.
“The fight over net neutrality has always been about gutting the FCC’s legal authority to protect consumers and promote competition,” said Gigi Sohn, a former FCC lawyer and advisor who helped craft the agency’s original 2015 net neutrality rules.
If there is any consistent theme to this administration and its agencies, it is that they are being plundered for personal gain while being dismantled from the inside, with obviously devastating consequences that will only fully be realized in the years to come.