[Cambridge Analytica] had secured a $15 million investment from Robert Mercer, the wealthy Republican donor, and wooed his political adviser, Stephen K. Bannon, with the promise of tools that could identify the personalities of American voters and influence their behavior. But it did not have the data to make its new products work.
So the firm harvested private information from the Facebook profiles of more than 50 million users without their permission, according to former Cambridge employees, associates and documents, making it one of the largest data leaks in the social network’s history. The breach allowed the company to exploit the private social media activity of a huge swath of the American electorate, developing techniques that underpinned its work on President Trump’s campaign in 2016.
The data was collected through an app called thisisyourdigitallife, built by academic Aleksandr Kogan, separately from his work at Cambridge University. Through his company Global Science Research (GSR), in collaboration with Cambridge Analytica, hundreds of thousands of users were paid to take a personality test and agreed to have their data collected for academic use.
However, the app also collected the information of the test-takers’ Facebook friends, leading to the accumulation of a data pool tens of millions-strong. Facebook’s “platform policy” allowed only collection of friends’ data to improve user experience in the app and barred it being sold on or used for advertising. The discovery of the unprecedented data harvesting, and the use to which it was put, raises urgent new questions about Facebook’s role in targeting voters in the US presidential election. It comes only weeks after indictments of 13 Russians by the special counsel Robert Mueller which stated they had used the platform to perpetrate “information warfare” against the US.
Both the Times and the Guardian describe this as a “data breach”, but I don’t think that’s entirely accurate. When I hear “data breach”, I think of a stolen password or a hacked system. But Facebook VP Andrew Bosworth tweeted that nothing was stolen — users willingly gave their information to an app, which then went behind their backs and used it in sketchy ways they did not expect.
Which, when you think about it, is kind of Facebook’s business model. Maciej Cegłowski:
The data that Facebook leaked to Cambridge Analytica is the same data Facebook retains on everyone and sells targeting services around. The problem is not shady Russian researchers; it’s Facebook’s core business model of collect, store, analyze, exploit.
Facebook preempted the publication of both of these stories with a press release indicating that they’ve suspended Strategic Communications Laboratories — Cambridge Analytica’s parent — from accessing Facebook, including the properties of any of their clients.
However, the reason for that suspension is not what you may think: it isn’t because Kogan, the developer of the thisisyourdigitallife app, passed information to Cambridge Analytica, but rather because he did not delete all of the data after Facebook told him to.
Also, from that press release:
We are constantly working to improve the safety and experience of everyone on Facebook. In the past five years, we have made significant improvements in our ability to detect and prevent violations by app developers. Now all apps requesting detailed user information go through our App Review process, which requires developers to justify the data they’re looking to collect and how they’re going to use it – before they’re allowed to even ask people for it.
Today, Facebook execs are going out of their way to let us know that this is the intended purpose of the platform. This isn’t unexpected. This is why they built it. They just didn’t expect to be held accountable.
Facebook can make all the policy changes it likes, but I don’t see any reason why something like this can’t happen again at some point in the future. Something will slip through the cracks, and the extraordinary access third-party companies have to one of the largest databases of people anywhere will again have unintended consequences.
Facebook is more than happy to collect the world’s information, but it is clear to me that they have no intention of taking full responsibility for what that entails.
Amazon confirmed it’s rolling out an optional “Brief Mode” that lets Alexa users configure their Echo devices to use chimes and sounds for confirmations, instead of having Alexa respond with her voice. For example, if you ask Alexa to turn on your lights today, she will respond “okay” as she does so. But with Brief Mode enabled, Alexa will instead emit a small chime as she performs the task.
The mode would be beneficial to someone who appreciates being able to control their smart home via voice, but doesn’t necessarily need to have Alexa verbally confirming that she took action with each command. This is especially helpful for those who have voice-enabled a range of smart home accessories, and have gotten a little tired of hearing Alexa answer back.
I would love an option like this for Siri on all of my devices. Reducing Alexa’s feedback to a simple audio chime indicates a great deal of trust Amazon has in its own product. They must be convinced that users have enough confidence in Alexa’s abilities for its feedback to be truncated to such an extreme.
HTTP Strict Transport Security (HSTS) is a security standard that provides a mechanism for web sites to declare themselves accessible only via secure connections, and to tell web browsers where to go to get that secure version. Web browsers that honor the HSTS standard also prevent users from ignoring server certificate errors.
What could be wrong with that?
Well, the HSTS standard says that web browsers should remember when a site has declared itself HTTPS-only, and automatically upgrade any insecure connection the user attempts in the future. That requires storing information on the user’s device that can be referenced later, and that stored state can be used to create a “super cookie” that can be read by cross-site trackers.
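To make the trick concrete, here’s a minimal sketch of the encoding side. The domain names and bit width are hypothetical, invented for illustration, not drawn from any real tracker:

```python
# Hypothetical sketch of an HSTS "super cookie". A tracker controls one
# subdomain per bit of the ID it wants to store. "Writing" the ID means
# serving this header from each subdomain whose bit is 1:
#     Strict-Transport-Security: max-age=31536000
# "Reading" it back means having the browser fetch every subdomain over
# plain HTTP: pinned subdomains are silently upgraded to HTTPS, unpinned
# ones are not, and the server observes which requests arrived encrypted.

N_BITS = 16

def id_to_pinned_subdomains(user_id: int) -> set:
    """Subdomains that should send the HSTS header to encode this ID."""
    return {f"bit{i}.tracker.example" for i in range(N_BITS) if user_id >> i & 1}

def pinned_subdomains_to_id(pinned: set) -> int:
    """Reconstruct the ID from the set of subdomains the browser upgraded."""
    return sum(1 << i for i in range(N_BITS) if f"bit{i}.tracker.example" in pinned)
```

The state survives cookie clearing because it lives in the browser’s HSTS store, not its cookie jar, which is what makes it so hard for users to escape.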
I already think that most trackers are installed unethically, as users frequently aren’t aware of the implications of different cookie policies and privacy settings. But this is a special level of intrusive. At what point does a user tracking product go beyond what its customers reasonably expect and become downright abusive of users’ rights? I’d argue that this is pretty close.
Thoughtful article by Ryan Christoffel at MacStories:
HomePod succeeds as a music speaker, but it’s not the device we expected – at least not yet. Due to its arrival date more than three years after the birth of Alexa, we expected a smarter, more capable product. We expected the kind of product the HomePod should be: a smart speaker that’s heavy on the smarts. Apple nailed certain aspects with its 1.0: the design, sound quality, and setup are all excellent. But that’s not enough.
HomePod isn’t a bad product today, but it could become a great one.
By becoming a true hub for all our Apple-centric needs.
I love the idea of the HomePod becoming a sort of “source of truth” in the home. It could know a lot more about each family member’s devices, and perhaps use the voice “fingerprint” created for “Hey Siri” to figure out which family member is using it. Due to Apple’s unique stance on user privacy, I would even feel comfortable with keeping my tailored Siri profile, if you will — my Siri history, things I usually request, knowledge about my particular music library, and so on — in iCloud, and synced between all my devices and a HomePod or two. That’s a big ask, but something like that would make it feel more complete — more of an Only Apple can do this kind of a product.
The web that many connected to years ago is not what new users will find today. What was once a rich selection of blogs and websites has been compressed under the powerful weight of a few dominant platforms. This concentration of power creates a new set of gatekeepers, allowing a handful of platforms to control which ideas and opinions are seen and shared.
These dominant platforms are able to lock in their position by creating barriers for competitors. They acquire startup challengers, buy up new innovations and hire the industry’s top talent. Add to this the competitive advantage that their user data gives them and we can expect the next 20 years to be far less innovative than the last.
It’s worthwhile asking just what is needed to — *sigh* — disrupt the business of companies like Facebook, Google, and Amazon, especially if they’re simply going to buy or copy potential threats. A little part of me worries that it isn’t enough to create a different site or app to reduce the influence of today’s dominant web companies.
Washington became the first state Monday to set up its own net-neutrality requirements after U.S. regulators repealed Obama-era rules that banned internet providers from blocking content or interfering with online traffic.
The new law also requires internet providers to disclose information about their management practices, performance and commercial terms. Violations would be enforceable under the state’s Consumer Protection Act.
Jon Brodkin of Ars Technica, in an article today about California’s tough new net neutrality proposal:
[Stanford law professor Barbara Van Schewick] argues that the FCC’s preemption claims are invalid.
“While the FCC’s 2017 Order explicitly bans states from adopting their own net neutrality laws, that preemption is invalid,” she wrote. “According to case law, an agency that does not have the power to regulate does not have the power to preempt. That means the FCC can only prevent the states from adopting net neutrality protections if the FCC has authority to adopt net neutrality protections itself.”
The California proposal is remarkably strong, by the way. It isn’t just a copy of the FCC’s 2015 rules; it’s much more comprehensive than that, mandating tight restrictions on interconnection and zero-rating. Brodkin again:
Van Schewick said the California bill is notable for prohibiting ISPs from charging “access fees” that online services would have to pay in order to send data to broadband consumers. “None of the other [state] bills have done this and it’s one of the loopholes that ISPs will use (if it’s not closed) to extract payments from edge providers,” van Schewick told Ars.
From the reporting I’ve read in Ars and other publications, this bill ticks a lot of boxes for effective legislation of ISPs as de facto common carriers.
Aaron Tilley and Kevin McLaughlin of the Information (this article is behind a paywall):
To determine how Apple squandered its own head start over rivals Amazon and Google in the digital assistant realm, The Information interviewed a dozen former employees who worked on various teams responsible for creating Siri or integrating it into Apple’s ecosystem. Most of them agreed to speak only on the condition that they not be named, citing non-disclosure agreements they had signed or concerns about retaliation from Apple executives.
Many of the former employees acknowledged for the first time that Apple rushed Siri into the iPhone 4s before the technology was fully baked, setting up an internal debate that has raged since Siri’s inception over whether to continue patching up a flawed build or to rip it up and start from scratch. And that debate was just one of many, as Siri’s various teams morphed into an unwieldy apparatus that engaged in petty turf battles and heated arguments over what an ideal version of Siri should be — a quick and accurate information fetcher or a conversant and intuitive assistant capable of complex tasks.
Even if you view this as a half-true gossip piece — and I don’t think it is, for what it’s worth — it’s still a fascinating look into the struggles Apple has faced with improving Siri’s capabilities.
For example, Tilley and McLaughlin report that separate teams worked on Siri and Spotlight’s suggested answers, which explains why the same query would sometimes return different results in each. On iOS, Apple rebranded some Spotlight features as Siri features: Siri App Suggestions, and Siri Search Suggestions, for example.
And then there’s Apple’s acquisition of VocalIQ two and a half years ago:
The VocalIQ team viewed Siri as a “manually-crafted system” and felt their technology could help improve it, said a former VocalIQ employee. VocalIQ’s technology is designed to continually finetune its accuracy by ingesting and analyzing data from voice interactions, he said. Apple has successfully integrated the VocalIQ technology into Siri’s calendar capabilities, sources familiar with the project said.
It’s interesting that Siri’s capabilities are set up in such a way that something like VocalIQ can be applied to just one feature. I don’t know how much this says, if anything, about why Siri’s capabilities so often feel fragmented, but it struck me as odd.
Siri has been the responsibility of Craig Federighi since last year, transferred from Eddy Cue’s online services oversight. This year’s WWDC seems too soon to see that particular branch of discussion bear fruit; but, then again, the inconsistencies and general untrustworthiness of Siri make it feel like it cannot be soon enough for real changes to be made.
The only thing you need to know about Siri is that the people who used to build it feel the need to absolve themselves of personal responsibility for the state that it is in. That they are doing so in the press is almost an implementation detail.
Eye-opening op-ed by Zeynep Tufekci, in the New York Times:
Human beings have many natural tendencies that need to be vigilantly monitored in the context of modern life. For example, our craving for fat, salt and sugar, which served us well when food was scarce, can lead us astray in an environment in which fat, salt and sugar are all too plentiful and heavily marketed to us. So too our natural curiosity about the unknown can lead us astray on a website that leads us too much in the direction of lies, hoaxes and misinformation.
In effect, YouTube has created a restaurant that serves us increasingly sugary, fatty foods, loading up our plates as soon as we are finished with the last meal. Over time, our tastes adjust, and we seek even more sugary, fatty foods, which the restaurant dutifully provides. When confronted about this by the health department and concerned citizens, the restaurant managers reply that they are merely serving us what we want.
In 2010, Tom Gruber created an impressive demo video of Siri, his company’s new app. It showed how someone could use relatively natural language requests to get things done on an iPhone using little more than their voice, and it effectively kicked off the wave of virtual assistants that followed.
It’s fascinating that the original Siri demo is still better than today’s Siri in a few aspects.
For fun and frustration, I tried all of the original commands featured in that eight-year-old video on my iPhone:
“I’d like a romantic place for Italian food near my office”: Siri today correctly parses everything up until “near my office”, which it interprets as near me. I tried using the name of the organization that I work for instead of my office and it also interpreted that as near me.
Then I tried asking Siri to find me restaurants near the address of my office. It interpreted that as an instruction to find restaurants in Cranbrook, BC — about 400 kilometres or four hours away. I don’t see why I should have to specify that I’m looking for restaurants in Calgary.
“I’d like a table for two at Il Fornaio in San Jose tomorrow night at 7:30”: I tried using this exact phrasing — of course, swapping out Il Fornaio for a restaurant near me — and I was told that Siri “can’t book a table right now”. That felt like a failure until I tried rephrasing, asking “how about next Friday?”, at which point I was prompted to continue making the reservation using OpenTable. I was impressed that it kept the context intact.
However, when I tried again with the request, “I’d like a table for two at Model Milk next Friday at 7:30”, I received the same “can’t book a table right now” error, and I can’t seem to reproduce the apparent success I had earlier. That’s frustrating; I was very impressed with the first apparent success, despite the vague error message.
“Where can I see Avatar in 3D IMAX?”: I swapped “Avatar” for a better film but otherwise kept the request the same. Siri successfully found a theatre showing it in 3D — as far as I know, there isn’t a 3D IMAX showing near me — but I wasn’t able to buy tickets through Siri and it doesn’t check the showtimes against other calendar events, like a dinner reservation. To be fair, Siri has never allowed you to buy movie tickets in Canada because Fandango isn’t available here, but I also have the (terrible) Cineplex app installed — I wish there were some connection between the two.
One thing I noticed when I tested several phrasings of this is that Siri only responds to full theatre names. All of the theatres near me have very long names, but nobody here actually uses the full name. For example, when I tried asking for “showtimes for Black Panther at Eau Claire”, Siri got confused. It also transcribed Eau Claire wrong most times I tried it, but that’s not necessarily relevant here. It wasn’t until I asked for “showtimes for Black Panther at Cineplex Odeon Eau Claire Market” that I got an answer. I wish it responded to fuzzier matches.
“What’s happening this weekend around here?”: Siri interprets this as a request for news headlines, not events as in the original Siri app.
When I tried rephrasing this question to “what events are happening this weekend”, it did a web search in Google, but without my location. It wasn’t until I asked “what events are happening in Calgary this weekend” that I got a web search with links to local event calendars.
In the original Siri demo, they extend this by asking “how about San Francisco?”, so I did the same. It returned the weather forecast for this evening in San Francisco.
“Take me drunk I’m home”: Today’s Siri did well here, responding “I can’t be your designated driver”, and offering to call me a taxi.
All of this may vary depending on where you’re located, what Siri localization you have, and even what device you use Siri on.
What’s clear to me is that the Siri of eight years ago was, in some circumstances, more capable than the Siri of today. That could simply be because the demo video was created in Silicon Valley, and things tend to perform better there than almost anywhere else. But it’s been eight years since that was created, and over seven since Siri was integrated into the iPhone. One would think that it should be at least as capable as it was when Apple bought it.
It’s no secret that Siri often feels like it has languished, and almost nothing demonstrates that more than the original demo. I’m sure there are domains where it performs better than the original — for example, it works, to varying extents, in countries outside of the United States. It works with more languages than just English, too. That’s all very important, but it boggles my mind that even some of the simpler stuff — like asking for restaurants near a different location — fails today, even in English.
I’d like to hear from readers who have time to attempt this same demo where they live. Please let me know if you give it a try; I would love to know the results.
This has been my life for nearly two months. In January, after the breaking-newsiest year in recent memory, I decided to travel back in time. I turned off my digital news notifications, unplugged from Twitter and other social networks, and subscribed to home delivery of three print newspapers — The Times, The Wall Street Journal and my local paper, The San Francisco Chronicle — plus a weekly newsmagazine, The Economist.
But he didn’t really unplug from social media at all. The evidence is right there in his Twitter feed, just below where he tweeted out his column: Manjoo remained a daily, active Twitter user throughout the two months he claims to have gone cold turkey, tweeting many hundreds of times, perhaps more than 1,000. In an email interview on Thursday, he stuck to his story, essentially arguing that the gist of what he wrote remains true, despite the tweets throughout his self-imposed hiatus.
The biggest problem with Manjoo’s piece is that it is framed as “unplugging” from social media, when it’s really just a reduction in using it as a primary source for news. It’s more subtle and makes for a way less interesting headline, but it’s more honest.
By the way, I find the entire genre of tech writers writing about not using technology so trite. Beyond that, it’s 2018 — telling people not to follow news accounts on Twitter is just yelling into the wind. Want a few tips for reading the news? Here are four things I try to do, for whatever it’s worth:
Resist the urge to react immediately.
Resist the urge to refresh feeds and news sources when bored. News will happen regardless.
During a breaking news event, nothing makes sense to anyone, so keep that in mind when reading the first wave of reporting on it.
Twitter threads tend to be tedious and unnecessary.
Maybe those tips will be useful to you; maybe they won’t. Maybe they’re things you do already without thinking about it. But at least you didn’t have to pretend to stop using Twitter for two months to figure it out.
Tim Cushing of Techdirt, responding to FBI Director Chris Wray:
We have a whole bunch of folks at FBI Headquarters devoted to explaining this challenge and working with stakeholders to find a way forward. But we need and want the private sector’s help. We need them to respond to lawfully issued court orders, in a way that is consistent with both the rule of law and strong cybersecurity. We need to have both, and can have both. I recognize this entails varying degrees of innovation by the industry to ensure lawful access is available. But I just don’t buy the claim that it’s impossible.
It really doesn’t matter whether or not Wray “buys” this claim. If you deliberately weaken encryption — either through key escrow or by making it easier to bypass — the encryption no longer offers the protection it did before it was compromised. That’s the thing about facts. They’re not like cult leaders. They don’t need a bunch of true believers hanging around to retain their strength.
The thing that bothers me most about Wray’s insistence that a magical “secure but accessible only by law enforcement” encryption standard is possible is that technical experts at the FBI surely know that it isn’t, yet he keeps making the claim. Does Wray simply not pay attention to his employees?
I don’t think the explosion is over. I want to make it easier and easier for people to run their own web servers. Google is doing what the programming priesthood always does, building the barrier to entry higher, making things more complicated, giving themselves an exclusive. This means only super nerds will be able to put up sites. And we will lose a lot of sites that were quickly posted on a whim, over the 25 years the web has existed, by people that didn’t fully understand what they were doing. That’s also the glory of the web. Fumbling around in the dark actually gets you somewhere. In worlds created by corporate programmers, it’s often impossible to find your way around, by design.
The web is a social agreement not to break things. It’s served us for 25 years. I don’t want to give it up because a bunch of nerds at Google think they know best.
Mozilla has indicated that they are doing the same. But Eric Mill wrote a piece a couple of years ago about this very topic, and he appreciates the deprecation of HTTP:
I understand the fear of raising the barriers to entry. As a child, I too fell in love with an internet made by everyone, and have spent my career, my volunteer work, and my hobbies trying to share what that love has taught me. I want children everywhere in the world to grow up feeling like the internet that permeates their lives is also in their service — a lego set in real life that you can buy with a week’s allowance.
Yet as an adult, I also understand that power for ordinary people is hard to come by and hard to keep. The path of least resistance for human society is for money to buy more money, and might to demand more might. Democracy is designed not so much to expand freedom as it is to give people tools to desperately hold onto the freedom they have.
Put another way: power has a way of flowing away from the varied, strange, beautiful little leaf nodes on the outer edges and into the unaccountable, unimaginative, ever-hungry center.
Mill actually uses the enforcement of HTTPS by browser vendors as a knock against big companies like Verizon and Comcast that inject ads into HTTP-served websites, and spy agencies like the NSA and the GCHQ:
What animates me is knowing that we can actually change this dynamic by making strong encryption ubiquitous. We can force online surveillance to be as narrowly targeted and inconvenient as law enforcement was always meant to be. We can force ISPs to be the neutral commodity pipes they were always meant to be. On the web, that means HTTPS.
As Mill points out in his article, there are great reasons to add an HTTPS certificate to a website that has no interactive elements beyond links. It makes sense to me to generally prefer HTTPS going forward, but I have concerns about two browser vendors working to effectively eliminate the non-HTTPS web; or, at least, to put barriers between it and users.
I like the way Firefox attempts to educate users directly adjacent to insecure password fields; I also don’t mind the way Chrome handles notifications of HTTP-only webpages today. But the changes coming in July will mark all HTTP webpages as “not secure”, making a large — if hardly-trafficked — part of the web feel like it’s diseased. And what will Google do in the future, I wonder? If they’re going to progressively increase their warnings on HTTP webpages, what’s next?
I’m sure the kids will figure it out — they always do. However, I worry that introducing more requirements, even something as simple as HTTPS, can be discouraging. That’s the last thing the HTTP/HTML web should be: discouraging. It is one of the greatest enablers of communication in human history. Let’s not allow its future to be dictated by browser vendors.
Or, in Mill’s language: let’s make sure we encourage building more leaf nodes by making their creation easier and more fun, instead of allowing a much stronger centre to form.
A conspiracy theory has spread among Facebook and Instagram users: The company is tapping our microphones to target ads. It’s not.
I believe them, but for another reason: Facebook is now so good at watching what we do online — and even offline, wandering around the physical world — it doesn’t need to hear us. After digging into the various bits of info Facebook and its advertisers collect and the bits I’ve actually handed over myself, I can now explain why I got each of those eerily relevant ads. (Facebook ads themselves offer limited explanations when you click “Why am I seeing this?”)
Advertising is an important staple of the free internet, but the companies buying and selling ads are turning into stalkers. We need to understand what they’re doing, and what we can — or can’t — do to limit them.
Think about how quickly we’ve accepted this as the new normal, and why. Do we really prefer highly-specific advertising, as Facebook and Google say we do, or is it simply very creepy? Even if you don’t have a Facebook or Google account, use Safari — which limits ad tracking by default — and have all sorts of silly settings enabled to limit your exposure to trackers, there are still an extraordinary number of ways that your information can be acquired for highly-targeted advertising, almost always without your explicit permission.
“The Right to Repair Act will provide consumers with the freedom to have their electronic products and appliances fixed by a repair shop or service provider of their choice, a practice that was taken for granted a generation ago but is now becoming increasingly rare in a world of planned obsolescence,” Susan Talamantes Eggman, a Democrat from Stockton who introduced the bill said in a statement.
The announcement had been rumored for about a week but became official Wednesday. The bill would require electronics manufacturers to make repair guides and repair parts available to the public and independent repair professionals, and would also make diagnostic software and tools that are available to authorized and first-party repair technicians available to independent companies.
I’m intrigued by this wave of “right to repair” legislation — much of which has been pushed by Repair.org, a repair industry trade group — but I’m curious about what parts must be repairable, especially in consumer electronics. The full text of the California bill hasn’t been posted publicly, as far as I can see, but Minnesota’s has and it’s fairly nonspecific. I’m all for batteries being designed to be more replaceable, even if it takes popping a few screws out, but what about trickier components, like chips that are soldered to the board? Would a manufacturer be required to provide full board component repairability, or just the ability to replace the board itself?
Selfishly, I hope this legislation leads to more upgradable MacBooks, especially the Pro. I don’t think a professional notebook designed to last several years should have its internal storage capacity capped at time of purchase.
Recent media coverage of Onavo Protect encouraged me to investigate the code for the iOS version of their app. I wanted to determine what types of data is collected in addition to the alleged per-app-MAU tracking performed server-side.
I found that Onavo Protect uses a Packet Tunnel Provider app extension, which should consistently run for as long as the VPN is connected, in order to periodically send the following data to Facebook (graph.facebook.com) as the user goes about their day:
When user’s mobile device screen is turned on and turned off
Total daily Wi-Fi data usage in bytes (Even when VPN is turned off)
Total daily cellular data usage in bytes (Even when VPN is turned off)
Periodic beacon containing an “uptime” to indicate how long the VPN has been connected
If I’m reading this right, Strafach hasn’t found indications — yet? — that Onavo sends app usage data to graph.facebook.com, but we know Onavo collects that data.
What he has found so far doesn’t appear to be nearly that intrusive, but it’s also bizarre. For example, why does Facebook need to know when your phone’s display is on?
Tangentially, Onavo’s behaviour is the kind of thing I wish App Review were stricter about. There’s perhaps a thin line between the analytics packages developers sometimes use and what Onavo does; similarly, there’s a thin line between Onavo’s data collection and Facebook’s entire business model. But this app is just skeevy — it buries its Facebook affiliation1 and data gathering behind a different brand and the promise of protecting you from phishing.
The only mention of Facebook on their website is on the about page, and in the App Store, the Facebook affiliation is in a large paragraph of text in the initially hidden area of the app description. ↩︎
I’m not quite sure whether iTunes LP was a bad idea or simply one that neither Apple (aside from Steve Jobs?) nor the music producers actually had much interest in. How else to explain that Apple never brought it to iPad?
I think iTunes LP was a fine enough idea; ultimately, though, I can’t imagine that many people went out of their way to buy iTunes LPs instead of the usually-cheaper non-LP version of the album.
They were built using an extraordinarily flexible and easy-to-use SDK by way of TuneKit, which was basically just a website. Theoretically, that simplicity should mean that they would have worked perfectly well on the iPad that shipped just six months after iTunes LP was introduced, and that the number of iTunes LPs created should have been more than could easily be catalogued on Wikipedia. If lots of people truly cared about them, there would be an easy way to find them in a user’s iTunes library and in the iTunes Store.
Over the past few days, users with Alexa-enabled devices have reported hearing strange, unprompted laughter. Amazon responded to the creepiness in a statement to The Verge, saying, “We’re aware of this and working to fix it.”
As noted in media reports and a trending Twitter moment, Alexa laughs without being prompted to wake. People on Twitter and Reddit reported that they thought it was an actual person laughing near them, which can be scary when you’re home alone. Many responded to the cackling sounds by unplugging their Alexa-enabled devices.
Just one more thing Amazon’s virtual assistants can do that the HomePod cannot.
But why is this possible at all? Is there some sort of hidden maniacal laughter mode? Is that something people would ever want to trigger intentionally, let alone have the device invoke accidentally? Is this a prank? And could you trust Amazon’s virtual assistant to not do anything like this again?
iTunes LP is the next evolution of the music album delivering a rich, immersive experience for select albums on the iTunes Store by combining beautiful design with expanded visual features like live performance videos, lyrics, artwork, liner notes, interviews, photos, album credits and more.
At the time, Steve Jobs described it as a way to replicate an album-like experience digitally.
As of the end of this month, though, Apple will no longer accept new iTunes LP releases. Dani Deahl, the Verge:
Earlier today, UK-based website Metro claimed to have a leaked internal email from Apple sent to music producers titled “The End of iTunes LPs.” The email supposedly stated that “Apple will no longer accept new submissions of iTunes LPs after March 2018,” and that “existing LPs will be deprecated from the store during the remainder of 2018. Customers who have previously purchased an album containing an iTunes LP will still be able to download the additional content using iTunes Match.”
While iTunes LP submissions will end this month, existing iTunes LPs will not be deprecated. Not only will these iTunes LPs continue to be available, but users will still be able to download any previous or new purchases of iTunes LPs at any time via iTunes.
I have a few iTunes LPs, but I also have a ton of actual LPs. One thing that network-accessed music will always lack, whether it is streamed or purchased, is the physicality of an album. Apple’s attempt at replicating it was a good effort and allowed them to do things that you simply can’t do with album art and liner notes, like including music videos, or behind-the-scenes films of the recording process.
But, these days, those extras don’t require a specific packaged format. Videos are streamed for the one or two times most people watch them, and lyrics are just a scroll away for many Apple Music tracks. The world moved beyond iTunes LP. And the remaining things it offered — like exquisite artwork on gorgeous paper, and that sense of a packaged product — simply can’t be replicated effectively on a screen. The weight of an LP still means something, and bytes simply don’t weigh anything.
By the way, I see a lot of stories right now forecasting the end of the iTunes Store based, in part, on this announcement. The original Metro story, for example, mis-quotes the email in its headline, and Cult of Mac jumped right on that bandwagon. I wouldn’t read too much into those. If Apple were killing music sales, they would just come out and say that.
Google, Amazon, Apple and Facebook have all faced different issues when it comes to tax optimizations. They’ve been routing their revenue through Ireland, Luxembourg, the Netherlands and other countries with a low corporate tax. Sometimes the money ends up in Bermuda or the tiny island of Jersey.
That’s why Europe’s economy ministers wanted to find a way to tax them properly that is easy to implement. And Le Maire confirmed that Europe will look at the overall revenue of tech giants in each country and tax them based on that figure.
This makes complete sense to me. As Tim Cook once wrote:
Taxes for multinational companies are complex, yet a fundamental principle is recognized around the world: A company’s profits should be taxed in the country where the value is created.
This is a tax that will be assessed in each country based on companies’ earnings in each country — that seems fair enough. What’s strange, though, is that the original article on which TechCrunch’s report is based indicates that this is a tax specifically on tech companies. Perhaps it’s just a lack of context created by a poor automatic translation, but that seems silly to me. As virtually all multinational companies practice various forms of tax avoidance, why not apply this strategy to all companies operating across the E.U.?
Just a week after Forbes reported on the claim of Israeli U.S. government manufacturer Cellebrite that it could unlock the latest Apple iPhone models, another service has emerged promising much the same. Except this time it comes from an unknown entity, an obscure American startup named Grayshift, which appears to be run by long-time U.S. intelligence agency contractors and an ex-Apple security engineer.
In recent weeks, its marketing materials have been disseminated around private online police and forensics groups, offering a $15,000 iPhone unlock tool named GrayKey, which permits 300 uses. That’s for the online mode that requires constant connectivity at the customer end, whilst an offline version costs $30,000. The latter comes with unlimited uses.
I don’t imagine Apple’s legal department is particularly thrilled that one of their ex-employees is helping crack device security measures.
At any rate, that’s now two firms that have similar intrusion capabilities using methods that they won’t report to Apple because their business models depend on their not doing so. That means that all iPhone owners are walking around with serious — albeit perhaps hard-to-exploit — vulnerabilities in their device’s security architecture. At least Apple may be able to surreptitiously acquire a copy of GrayKey and patch the vulnerabilities it uses.
Alex Hern, with one hell of a lede in the Guardian:
Facebook has admitted it was a “mistake” to ask users whether paedophiles requesting sexual pictures from children should be allowed on its website.
You don’t say.
On Sunday, the social network ran a survey for some users asking how they thought the company should handle grooming behaviour. “There are a wide range of topics and behaviours that appear on Facebook,” one question began. “In thinking about an ideal world where you could set Facebook’s policies, how would you handle the following: a private message in which an adult man asks a 14-year-old girl for sexual pictures.”
The options available to respondents ranged from “this content should not be allowed on Facebook, and no one should be able to see it” to “this content should be allowed on Facebook, and I would not mind seeing it”.
I don’t know how something like this could happen, unless Facebook ran this survey in an entirely automated way, including the writing of the questions. Maybe they did, but I think someone — a human being — must have written this question, and someone else must have seen it before it was published. Either there was an over-reliance on automated tools, or nobody working on this survey caught such a blatantly stupid question, or someone genuinely believed this was something worth asking.
The phone was shipped “on time.” It was shipped when it was announced to ship and when Apple was able to meet enough demand. Your imaginary ship dates do not enter into this equation.
Eassa thinks there are people who looked at the later release date for the iPhone X and were “discouraged at having to wait until November to buy an iPhone that would ultimately be replaced by a newer, better model in about 10 months” and therefore didn’t buy an iPhone this year at all.
That seems like a very small set of people. And it’s quite likely that the 2018 release schedule will be exactly the same as the 2017 release schedule, with a base phone coming first and a higher end model coming second. So it’s a very small set of people who are very bad at evaluating choices.
Interestingly, one year ago — nearly to the day — Eassa argued that releasing the then-rumoured OLED iPhone in November was preferable:
Of course, Apple is better off delaying a product a smidgen to make sure it’s ready to go and if the redesigned fingerprint scanner meaningfully enhances the user experience, then the delay is probably worth it.
A couple of things about last year’s article:
This was published when some rumours still claimed that the OLED iPhone would ship with a fingerprint scanner, hence that reference.
Its headline frames this as “bad news”, so it sounds like Eassa is just sticking with that narrative rather than revising it in the face of facts.
We love instant, public, global messaging and conversation. It’s what Twitter is and it’s why we’re here. But we didn’t fully predict or understand the real-world negative consequences. We acknowledge that now, and are determined to find holistic and fair solutions.
We have witnessed abuse, harassment, troll armies, manipulation through bots and human-coordination, misinformation campaigns, and increasingly divisive echo chambers. We aren’t proud of how people have taken advantage of our service, or our inability to address it fast enough.
That’s an extraordinarily frank admission. I admire that. So what will Twitter do about it?
Recently we were asked a simple question: could we measure the “health” of conversation on Twitter? This felt immediately tangible as it spoke to understanding a holistic system rather than just the problematic parts.
Dorsey points to an article from Cortico,¹ a nonprofit firm that “aims to strengthen an American public sphere weakened by political, cultural and socioeconomic isolation”:
This experience led us to the idea that perhaps we could measure aspects of the health of the public sphere—in terms of communication exchanges between groups or tribes—grounded in data from public social media and other public media sources. As a starting point, we are developing a set of health indicators for the U.S. (with the potential to expand to other nations) aligned with four principles of a healthy public sphere:
Shared Attention: Is there overlap in what we are talking about?
Shared Reality: Are we using the same facts?
Variety: Are we exposed to different opinions grounded in shared reality?
Receptivity: Are we open, civil, and listening to different opinions?
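Each of those principles implies something measurable. As a toy illustration of the first one — assuming “shared attention” is scored as the overlap between the topic sets two groups are discussing, which is my reading of it, not Cortico’s published method — the computation could be as simple as a Jaccard similarity:

```python
def shared_attention(topics_a, topics_b):
    """Jaccard overlap between two groups' topic sets: 1.0 means they are
    talking about exactly the same things, 0.0 means no overlap at all.
    This is an illustrative stand-in for Cortico's actual indicator."""
    a, b = set(topics_a), set(topics_b)
    if not (a or b):
        return 0.0  # two silent groups share nothing measurable
    return len(a & b) / len(a | b)

# Two groups sharing two of four distinct topics score 0.5:
score = shared_attention({"taxes", "privacy", "iphone"},
                         {"privacy", "iphone", "alexa"})
```

The hard part, of course, isn’t the formula — it’s deciding what counts as a “topic” and a “group” in a firehose of tweets, which is where I suspect Twitter’s difficulty actually lies.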
This sounds a lot like Twitter will reference Cortico’s techniques to try to automate the hate away from conversations, but a post on Twitter’s blog indicates that they have no idea how to do this. I’m skeptical of its success. My concern is that Dorsey has long seen this as a problem but waited too long to do anything about it, and now wants to invent a way to fix it automatically, like a university student who waited to start writing their ten-thousand word essay until the night before it’s due. It seems earnest, but also a bit desperate.
I think that a better start would be to ban Nazis. I mean that literally. Flag any account where its name, handle, location, bio, or recent tweets contain allusions to Hitler normally used by white supremacist groups: “1488”, “HH”, “14 words”, and other hate symbols in context. That gives human operators the ability to sift through heaps of these accounts and ban the ones that are clearly and obviously Nazis, of which there are frighteningly many. This isn’t a perfect solution; it’s barely scratching the surface. But it would be a material change in how Twitter operates and a clear line as to what they do not tolerate. “No Nazis” should not be a controversial point of view.
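A crude sketch of the flagging step I’m describing — with just a few of the markers mentioned above as the watch list, and naive substring matching that a real system would replace with context-aware rules to limit false positives — might look like this. Everything here is illustrative; it only surfaces candidates for human review, it doesn’t ban anyone:

```python
# Partial, illustrative watch list; a real one would be curated and
# matched in context, since bare substrings produce false positives.
HATE_MARKERS = ["1488", "14 words"]

def flag_for_review(profile):
    """Return True if any watch-list marker appears in the account's
    name, handle, location, bio, or recent tweets. Flagged accounts
    would go to human operators, not be banned automatically."""
    haystack = " ".join([
        profile.get("name", ""),
        profile.get("handle", ""),
        profile.get("location", ""),
        profile.get("bio", ""),
        " ".join(profile.get("recent_tweets", [])),
    ]).lower()
    return any(marker in haystack for marker in HATE_MARKERS)
```

The automated pass does the cheap, scalable part — narrowing millions of accounts down to a reviewable queue — and humans make the actual judgment calls, which is the division of labour I’m arguing for.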
I had never heard of Cortico before Dorsey posted this, so I went to Wikipedia. There’s no entry for the company; there is, however, an entry for cortiço, a term used in Portugal and Brazil to describe ultra high density housing with poor sanitary conditions. I don’t know where the American firm got their name, but that’s a hell of an association. ↩︎
Today, we’re introducing Bookmarks, an easy way to save Tweets for quick access later. But wait, there’s more! Today’s update makes sharing better, too. With our new “share” icon on every Tweet, you’ll be able to bookmark a Tweet, share via Direct Message, or share off of Twitter any number of ways. Because we put all sharing actions together in one place, it’s easier to save and share privately or publicly — in the moment, or later.
This looks great. Bookmarking is easily one-third to one-half of how I use the “like” button. A key difference between the two is that bookmarks are private; likes are public and, for a few years now, followed users’ likes have been inserted at the top of the algorithmic timeline. If Twitter were driven less by juicing “engagement” metrics, this feature might not be necessary.
Unfortunately, there’s nothing in this announcement nor anything in Twitter’s documentation that suggests they’re making this available to third-party developers; I hope they do.
Cellebrite, a Petah Tikva, Israel-based vendor that’s become the U.S. government’s company of choice when it comes to unlocking mobile devices, is this month telling customers its engineers currently have the ability to get around the security of devices running iOS 11. That includes the iPhone X, a model that Forbes has learned was successfully raided for data by the Department for Homeland Security back in November 2017, most likely with Cellebrite technology.
The Israeli firm, a subsidiary of Japan’s Sun Corporation, hasn’t made any major public announcement about its new iOS capabilities. But Forbes was told by sources (who asked to remain anonymous as they weren’t authorized to talk on the matter) that in the last few months the company has developed undisclosed techniques to get into iOS 11 and is advertising them to law enforcement and private forensics folk across the globe. […]
On some level, this is extremely impressive. The iPhone is the gold standard in consumer smartphone security — possibly in smartphone security, period — and its security keeps improving with every generation. A flaw that allows someone to bypass an iPhone’s hardware-enforced encryption is very rare indeed; that’s why some security firms will pay up to a million dollars for that kind of an exploit.
But it is deeply troubling as well. While we don’t know anything about Cellebrite’s technique for breaching an iPhone’s security — including whether their method has been patched in an iOS 11 update — it is notable that a security firm has found an exploit but is unlikely to tell Apple about it. It’s concerning that three-letter agencies are hoarding zero-days, but at least those agencies are ostensibly publicly accountable. That doesn’t make it right, but it does make it slightly easier to stomach. A for-profit company charging $1,500 a pop to law enforcement agencies worldwide — some of which are less reputable than others, mind you — while not disclosing vulnerabilities to software vendors is callous. It puts users worldwide at risk for its financial gain.
Update: If you are worried about the possibility of Cellebrite — or anyone else who figures out their PIN cracking methodology — breaking into your phone, Ray “Redacted” has a good tip:
If you are concerned by this then one thing you can do to mitigate it is to change your iPhone PIN from a six digit number to an alphanumeric passphrase. The Cellebrite exploit involves a brute force PIN trick that allows unlimited attempts without wiping.
Like any passphrase, it should contain a mix of lowercase and uppercase letters, numbers, and symbols. It can even be of a similar length, but a greater combination of character options means a longer cracking process.
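The arithmetic behind that advice is straightforward: a brute-force attack has to walk the whole search space, and the space grows exponentially with the size of the character set. A back-of-the-envelope comparison — assuming the full 95-character printable ASCII set for the passphrase, and holding length constant at six to isolate the effect of the alphabet — looks like this:

```python
# Search-space sizes for brute-force guessing, same length, different alphabets.
pin_space = 10 ** 6         # six digits: 1,000,000 combinations
passphrase_space = 95 ** 6  # six printable-ASCII characters

# At the same guessing rate, the passphrase takes this many times longer:
ratio = passphrase_space // pin_space
```

Even without adding length, switching alphabets multiplies the attacker’s work by a factor in the hundreds of thousands; adding a few more characters on top of that pushes the cracking time from hours into effectively never.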
Update: Fox-Brewster has confirmed with Cellebrite that their method can unlock iPhones running up to iOS 11.2.6, the latest public release.