Joe Veix, in one of my favourite pieces in recent memory for the Outline:
In 2014, Mark Zuckerberg bought a new home in San Francisco’s Mission District, about a mile from where I lived at the time. Shortly after the purchase, the man who once printed business cards boasting, “I’m CEO, Bitch” began refurbishing the $10 million “fixer upper.”
I immediately biked over to the area to scope the place out. I figured that having the address of one of the richest and most powerful people in the world could be vaguely useful. Maybe if a Class War ever started, I could point an angry mob in his general direction. Or maybe I could steal his valuable trash.
After four years of stalling, I finally decided to go ahead with the latter idea. My quarter-baked plan was this: I’d drive to his Mission District pied-à-terre on trash collection day, snatch a few bags of whatever, and dig through it. I could learn more about Mark Zuckerberg’s habits and interests, creating my own ad profile of him. Then I could sell this information to brands looking to target that coveted “male, 18-34, billionaire” demographic. Think of it as a physical version of Facebook’s business model.
This update is probably going to be thought of as the “we’re sorry we didn’t tell everyone about performance throttling on iPhones with reduced battery life” version of iOS, but there are plenty of new features in it as well. I like the bigger and better Animoji selection, enhanced privacy features — likely partially encouraged by GDPR compliance — and Safari improvements designed to impede surreptitious efforts to track users via form autofill.
Notably absent from this release, however, are AirPlay 2 and Messages in the Cloud, both of which appeared in early betas before being removed from the public release. I don’t know about AirPlay 2, but Messages in the Cloud has remained buggy in every iteration I’ve tested: messages frequently take a while to sync, and occasionally appear wildly out of order. That’s the kind of thing that needs to be fixed before it’s released publicly.
As I rewatched the 2012 keynote and pondered the 2018 keynote, I realized that Apple is yet again trying to craft a future for education that I am not sure fits with reality.
Individual schools certainly have and will continue to take advantage of both Swift Playgrounds and Everyone Can Code. Some schools will undoubtedly take advantage of Everyone Can Create content that Apple announced yesterday.
Some teachers will look at some of the new apps that Apple has created for educators, but will 50% of teachers in the US explore new solutions? I highly doubt it. Teaching is a hard job. Apple even had a video where students talked about how hard their teacher’s job was. Being a teacher can be a thankless job. Teachers put in a lot of hours outside the classroom for a salary that is less than they deserve. I’m not sure the average teacher is getting excited about another new app to learn (and then explain to students).
This much I completely understand as a concern. I worry that Apple’s strategy simply requires too many (expensive) pieces and too many things to learn for schools to even consider adopting it.
Here’s what puzzles me about Chambers’ take:
This doctrine should apply to education as well. If Apple believes they can make a significant contribution to schools, then they should go all in to change everything about school technology. They should buy a major textbook publisher and change the purchasing model for books when you deploy iPads. They should buy (or buy back) a student information system platform and integrate it with all of their new apps.
They should build a viable alternative to G-Suite that makes it easy for schools to manage communications. They should do all of this at a price where the least affluent districts can deploy it as easily as the most affluent ones.
That seems great, but it also sounds like another world of complexity that schools simply don’t have the time or finances to implement, regardless of how inexpensive Apple makes their solution.
Also, not that textbook publishers are saints — far from it — but I’m not sure I’d like to see tech companies owning such a fundamental part of the school curriculum.
Regardless, I’d love to see Apple making a bigger impact in the space. Schools, in particular, shouldn’t be relying upon technologies built by companies with a business model dependent on mass data collection.
Apple introduced a good round of minor updates to its 9.7-inch base model iPad, iWork suite, and education-focused software today. There’s nothing groundbreaking here — you’ve probably seen either the keynote or the highlight reel — but today’s event was interesting to me for two reasons:
1. it was Apple’s first education-focused event in six years; and,
2. it was Apple’s first-ever product event to be held in Chicago — at least, as far as I can figure out.
Both of these factors signified to me that Apple was likely framing this event as meaningful updates with a cohesive story, but not brand new products. If they had major products to introduce — like, say, an Apple Pencil with support for wireless charging, or an iPad with Face ID — I feel like they would choose to have this event at the Steve Jobs Theater instead.
Incidentally, minor spec bump-like updates like these are some of my favourites. They show incremental progress that may not look impressive, but indicates ongoing attention and effort.
The updated base model iPad introduced today, for example, combines the processor from an iPhone 7, the LTE capabilities from an iPhone 6S, the first-generation Touch ID sensor from the iPhone 5S, and the Apple Pencil support from iPad Pro models, all inside a body that’s basically unchanged from the first iPad Air. That’s not a complaint; the base model iPad is an exceptional value, especially now with support for the Apple Pencil. I only wish that its display were laminated, and that every iPad came with LTE as standard.
Apple’s iWork updates are also pretty solid, with the addition of more advanced ePub creation features, though Apple insists that it is not a replacement for iBooks Author — for now. There are also some sweet new drawing features in the iWork apps that make use of the Apple Pencil.
New for teachers is an app called Schoolwork. Coming in June, it appears to be Apple’s take on an LMS specifically built for iPads managed via Classroom. They also introduced a companion framework for developers called ClassKit that allows apps to offer assignments and activities for use with Schoolwork.
The combined story here is that Apple has a more compelling narrative for how they’re building their vision for the future of education. Whether they’ll be able to claw back significant influence in the space is a good question, though — budget-restricted school districts may simply be swayed by the much cheaper price of Google’s Chromebooks, regardless of the iPad’s features. But there’s a lot here to love even if you aren’t a student or teacher: Apple Pencil support on the base model iPad and updates to the iWork suite are great news regardless.
This past week, a New Zealand man was looking through the data Facebook had collected from him in an archive he had pulled down from the social networking site. While scanning the information Facebook had stored about his contacts, Dylan McKay discovered something distressing: Facebook also had about two years’ worth of phone call metadata from his Android phone, including names, phone numbers, and the length of each call made or received.
This experience has been shared by a number of other Facebook users who spoke with Ars, as well as independently by us — my own Facebook data archive, I found, contained call-log data for a certain Android device I used in 2015 and 2016, along with SMS and MMS message metadata.
Facebook responded by claiming that this creepy spyware they call a “feature” is only available through Messenger and Facebook Lite with explicit user opt-in, but Gallagher is reporting that neither app was installed on the specific device he found call history for, nor does he recall consenting to Facebook tracking his messaging history. Facebook also says that they don’t record the contents of phone calls or messages, which is awfully similar to the defence repeated by the NSA after it was revealed that they were collecting the same kind of metadata. That’s probably not the kind of comparison Facebook would like to draw, but it isn’t inappropriate.
Also keep in mind that several people had to write the code that makes this possible: someone had to write the Android API that allowed these logs to be monitored, while someone else had to write Facebook’s end that made this whole thing possible. Then there were managers and quality assurance staffers who could have objected to this capability. It took years for this functionality to be stopped for third-party apps on Android.
For what it’s worth, this story applies only to Android users, because of course it does; iOS has never allowed a third-party app to silently monitor call or messaging history.
"I’m not sure we shouldn’t be regulated," Zuckerberg said in an interview with CNN’s Laurie Segall, after being asked why his company shouldn’t be regulated.
Asked how the government should regulate Facebook, Zuckerberg said "ads transparency regulation — that I would love to see." He referenced legislation that’s currently in the Senate that would require internet companies to disclose who paid for ads, a clear reference to the Honest Ads Act. The bill hasn’t gone anywhere since its introduction last fall. Zuckerberg said he didn’t believe internet companies should be less transparent than other mediums, like radio or TV.
Facebook could do this today, right now, without waiting for regulations that require them to do so. But Zuckerberg is indicating here that they won’t implement the policies of the Honest Ads Act without being obligated to legally. In addition, the Internet Association lobbying group — of which Facebook is a member — has so far campaigned against the Act. The difference between what Zuckerberg says in interviews and the actions of the company he runs is a chasm that splits universes.
Mark, Sheryl and their teams are working around the clock to get all the facts and take the appropriate action moving forward, because they understand the seriousness of this issue. The entire company is outraged we were deceived. […]
In 2015, we learned from journalists at The Guardian that Kogan had shared data from his app with Cambridge Analytica. It is against our policies for developers to share data without people’s consent, so we immediately banned Kogan’s app from our platform, and demanded that Kogan and Cambridge Analytica formally certify that they had deleted all improperly acquired data. They provided these certifications.
They did not disclose this at the time, nor did they notify the fifty million users whose information was accessed by Cambridge Analytica. So their claim in their press statement that they felt deceived is bunk: they knew, and did nothing when it mattered most.
Last week, we learned from The Guardian, The New York Times and Channel 4 that Cambridge Analytica may not have deleted the data as they had certified. We immediately banned them from using any of our services. Cambridge Analytica claims they have already deleted the data and has agreed to a forensic audit by a firm we hired to confirm this. We’re also working with regulators as they investigate what happened.
One other thing that Facebook immediately did after being notified of the forthcoming media reports is that they — and Cambridge Analytica — threatened to sue. Mike Masnick, Techdirt:
But, it’s raising a bigger question, as well, and it’s one that caused Facebook to do something that I’ll definitively call “incredibly stupid,” which is that it threatened to sue the Guardian over its story, mainly because the Guardian story refers to this whole mess as a “data breach” for Facebook’s data.
Facebook instructed external lawyers and warned us we were making ‘false and defamatory’ allegations. Today they said it was not correct to call this a data breach. We are calling it a data breach. https://t.co/Q8wrw0FDyr
And, of course, Facebook wasn’t the only one who threatened to sue. Cambridge Analytica did too:
The Observer also received the first of three letters from Cambridge Analytica threatening to sue Guardian News and Media for defamation.
Facebook’s attitude so far is that this story has been a massive inconvenience to them, and they’d rather not think about it if that’s okay with everyone. But it isn’t okay. It’s an outrageous exploitation of data that Facebook’s business model has enabled, and they’re scared that users will figure that out.
You might be familiar with Uses This, a collection of interviews by Daniel Bogan about the hardware and software tools people use to get things done. Well, Bogan asked me to tell everyone about what I use to do whatever it is that I do. It’s a collection of things that are horribly inefficient and woefully outdated, but these things work for me.
If Facebook failed to understand that this data could be used in dangerous ways, that it shouldn’t have let anyone harvest data in this manner and that a third-party ticking a box on a form wouldn’t free the company from responsibility, it had no business collecting anyone’s data in the first place. But the vast infrastructure Facebook has built to obtain data, and its consequent half-a-trillion-dollar market capitalization, suggest that the company knows all too well the value of this kind of vast data surveillance.
Should we all just leave Facebook? That may sound attractive but it is not a viable solution. In many countries, Facebook and its products simply are the internet. Some employers and landlords demand to see Facebook profiles, and there are increasingly vast swaths of public and civic life — from volunteer groups to political campaigns to marches and protests — that are accessible or organized only via Facebook.
One uniquely terrible attribute that these companies share is their willingness to exploit developing nations as test beds for techniques they hope to use elsewhere. From the Times story that broke the news of the way Cambridge Analytica acquired Facebook user data in the United States:
Mr. Nix, a brash salesman, led the small elections division at SCL Group, a political and defense contractor. He had spent much of the year trying to break into the lucrative new world of political data, recruiting Mr. Wylie, then a 24-year-old political operative with ties to veterans of President Obama’s campaigns. Mr. Wylie was interested in using inherent psychological traits to affect voters’ behavior and had assembled a team of psychologists and data scientists, some of them affiliated with Cambridge University.
The group experimented abroad, including in the Caribbean and Africa, where privacy rules were lax or nonexistent and politicians employing SCL were happy to provide government-held data, former employees said.
There isn’t any evidence that Cambridge Analytica used Facebook user data in these experiments. But the way that Facebook has made itself a de facto component of the communications infrastructure of developing nations is troubling as well. Massive amounts of user data from Facebook initiatives like Internet.org are being scooped up and held by a giant company in California, largely because many in the developing world have few options for getting online. It’s exploitative and shameful.
It’s also worth pointing out that lax American privacy laws and a weak regulatory environment also enabled Facebook’s mass data collection. If Facebook were instead a European company, they would have faced much stricter limitations on what kind of data they could collect and how they could use it. That probably means they wouldn’t have been as successful, but it also means that there likely wouldn’t be a gigantic database of attributes about one-third of the world’s population in the hands of a single company. Something to think about.
I live in a house with both the Echo and the Home. And I’m always testing out Siri to see what she can and cannot do in relation to the competition. It’s just so much nicer to invoke Alexa than the others. And I’m certain a part of it is not having to add that extra wake word.
It also happens to be an awful word. Hey. Every time I hear it, I think back to growing up when my parents would make the dreadful parenting joke — which was really more of a reprimand. “‘Hey’ is for horses.” These days, we’re not only letting our children say “hey”, we’re basically forcing them to.
Not only that, but with the anthropomorphization of assistant software, I think the “Hey” can be a little demeaning as well.
There’s something about all of this software that feels like it’s still a prototype. A proof of concept, and little more. It’s not just Siri — it’s everything. And, while today’s virtual assistants are better at parsing natural language commands, they’re still more verbose and far more particular than how we actually speak to other people. Alexa’s new brief mode is a step in the right direction, I think, as is its lack of a “Hey”. But there’s still so far to go.
[Cambridge Analytica] had secured a $15 million investment from Robert Mercer, the wealthy Republican donor, and wooed his political adviser, Stephen K. Bannon, with the promise of tools that could identify the personalities of American voters and influence their behavior. But it did not have the data to make its new products work.
So the firm harvested private information from the Facebook profiles of more than 50 million users without their permission, according to former Cambridge employees, associates and documents, making it one of the largest data leaks in the social network’s history. The breach allowed the company to exploit the private social media activity of a huge swath of the American electorate, developing techniques that underpinned its work on President Trump’s campaign in 2016.
The data was collected through an app called thisisyourdigitallife, built by academic Aleksandr Kogan, separately from his work at Cambridge University. Through his company Global Science Research (GSR), in collaboration with Cambridge Analytica, hundreds of thousands of users were paid to take a personality test and agreed to have their data collected for academic use.
However, the app also collected the information of the test-takers’ Facebook friends, leading to the accumulation of a data pool tens of millions-strong. Facebook’s “platform policy” allowed only collection of friends’ data to improve user experience in the app and barred it being sold on or used for advertising. The discovery of the unprecedented data harvesting, and the use to which it was put, raises urgent new questions about Facebook’s role in targeting voters in the US presidential election. It comes only weeks after indictments of 13 Russians by the special counsel Robert Mueller which stated they had used the platform to perpetrate “information warfare” against the US.
Both the Times and the Guardian describe this as a “data breach”, but I don’t think that’s entirely descriptive of what went on here. When I hear “data breach”, I think that a password got stolen or a system was hacked into. But Facebook VP Andrew Bosworth tweeted that there was nothing that was stolen — users willingly gave their information to an app, which went behind their backs to use the information in a somewhat sketchy way that users did not expect.
Which, when you think about it, is kind of Facebook’s business model. Maciej Cegłowski:
The data that Facebook leaked to Cambridge Analytica is the same data Facebook retains on everyone and sells targeting services around. The problem is not shady Russian researchers; it’s Facebook’s core business model of collect, store, analyze, exploit.
Facebook preempted the publication of both of these stories with a press release indicating that they’ve suspended Strategic Communications Laboratories — Cambridge Analytica’s parent — from accessing Facebook, including the properties of any of their clients.
However, the reason for that suspension is not what you may think: it isn’t because Kogan, the developer of the thisisyourdigitallife app, passed information to Cambridge Analytica, but rather because he did not delete all of the data after Facebook told him to.
Also, from that press release:
We are constantly working to improve the safety and experience of everyone on Facebook. In the past five years, we have made significant improvements in our ability to detect and prevent violations by app developers. Now all apps requesting detailed user information go through our App Review process, which requires developers to justify the data they’re looking to collect and how they’re going to use it – before they’re allowed to even ask people for it.
Today, Facebook execs are going out of their way to let us know that this is the intended purpose of the platform. This isn’t unexpected. This is why they built it. They just didn’t expect to be held accountable.
Facebook can make all the policy changes it likes, but I don’t see any reason why something like this can’t happen again at some point in the future. Something will slip through the cracks, with the unintended consequence of third-party companies once again having extraordinary access to one of the largest databases of people anywhere.
Facebook is more than happy to collect the world’s information, but it is clear to me that they have no intention of taking full responsibility for what that entails.
Amazon confirmed it’s rolling out an optional “Brief Mode” that lets Alexa users configure their Echo devices to use chimes and sounds for confirmations, instead of having Alexa respond with her voice. For example, if you ask Alexa to turn on your lights today, she will respond “okay” as she does so. But with Brief Mode enabled, Alexa will instead emit a small chime as she performs the task.
The mode would be beneficial to someone who appreciates being able to control their smart home via voice, but doesn’t necessarily need to have Alexa verbally confirming that she took action with each command. This is especially helpful for those who have voice-enabled a range of smart home accessories, and have gotten a little tired of hearing Alexa answer back.
I would love an option like this for Siri on all of my devices. It indicates a great deal of trust in its own product for Amazon to reduce Alexa’s feedback to a simple audio chime. They must be convinced that users will have enough confidence in Alexa’s abilities for its feedback to be truncated to such an extreme.
HTTP Strict Transport Security (HSTS) is a security standard that provides a mechanism for web sites to declare themselves accessible only via secure connections, and to tell web browsers where to go to get that secure version. Web browsers that honor the HSTS standard also prevent users from ignoring server certificate errors.
What could be wrong with that?
Well, the HSTS standard says that web browsers should remember when they are redirected to a secure location, and automatically make that conversion on behalf of the user if they attempt an insecure connection in the future. This creates information that can be stored on the user’s device and referenced later. And this can be used to create a “super cookie” that can be read by cross-site trackers.
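To make the mechanism concrete, here is a minimal sketch of how such a “super cookie” can work. It assumes a hypothetical tracker controlling a handful of subdomains (the `b0.tracker.example` names are made up for illustration): each subdomain’s remembered HSTS state stores one bit, so a later page can probe the subdomains and reassemble a persistent user ID without any actual cookie.

```python
# Sketch of an HSTS "super cookie": each tracker-controlled subdomain's
# remembered HSTS entry encodes one bit of a user ID. The subdomain names
# and bit width here are hypothetical, purely for illustration.

N_BITS = 8  # one tracker-controlled subdomain per bit

def subdomains_to_pin(user_id: int) -> list:
    """Return the subdomains the tracker serves an HSTS header from,
    so the browser remembers them (these are the 1-bits of the ID)."""
    return ["b%d.tracker.example" % i for i in range(N_BITS)
            if (user_id >> i) & 1]

def recover_user_id(pinned: set) -> int:
    """On a later visit, probe each subdomain over plain HTTP: if the
    browser silently upgrades to HTTPS, that bit was set."""
    uid = 0
    for i in range(N_BITS):
        if "b%d.tracker.example" % i in pinned:
            uid |= 1 << i
    return uid

# Round trip: the ID survives with no cookie ever being stored.
pins = subdomains_to_pin(0b10110010)
assert recover_user_id(set(pins)) == 0b10110010
```

The defence browser vendors settled on is to limit when HSTS state can be set or queried by third-party subresources, which is exactly the kind of mitigation this abuse forced.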
I already think that most trackers are installed unethically, as users frequently aren’t aware of the implications of different cookie policies and privacy settings. But this is a special level of intrusive. At what point does a company offering a user-tracking product go beyond what customers reasonably expect from software like that and create something downright abusive of users’ rights? I’d argue that this is pretty close.
Thoughtful article by Ryan Christoffel at MacStories:
HomePod succeeds as a music speaker, but it’s not the device we expected – at least not yet. Due to its arrival date more than three years after the birth of Alexa, we expected a smarter, more capable product. We expected the kind of product the HomePod should be: a smart speaker that’s heavy on the smarts. Apple nailed certain aspects with its 1.0: the design, sound quality, and setup are all excellent. But that’s not enough.
HomePod isn’t a bad product today, but it could become a great one.
By becoming a true hub for all our Apple-centric needs.
I love the idea of the HomePod becoming a sort of “source of truth” in the home. It could know a lot more about each family member’s devices, and perhaps use the voice “fingerprint” created for “Hey Siri” to figure out which family member is using it. Due to Apple’s unique stance on user privacy, I would even feel comfortable with keeping my tailored Siri profile, if you will — my Siri history, things I usually request, knowledge about my particular music library, and so on — in iCloud, and synced between all my devices and a HomePod or two. That’s a big ask, but something like that would make it feel more complete — more of an Only Apple can do this kind of a product.
The web that many connected to years ago is not what new users will find today. What was once a rich selection of blogs and websites has been compressed under the powerful weight of a few dominant platforms. This concentration of power creates a new set of gatekeepers, allowing a handful of platforms to control which ideas and opinions are seen and shared.
These dominant platforms are able to lock in their position by creating barriers for competitors. They acquire startup challengers, buy up new innovations and hire the industry’s top talent. Add to this the competitive advantage that their user data gives them and we can expect the next 20 years to be far less innovative than the last.
It’s worthwhile asking just what is needed to — *sigh* — disrupt the business of companies like Facebook, Google, and Amazon, especially if they’re simply going to buy or copy potential threats. A little part of me worries that it isn’t enough to create a different site or app to reduce the influence of today’s dominant web companies.
Washington became the first state Monday to set up its own net-neutrality requirements after U.S. regulators repealed Obama-era rules that banned internet providers from blocking content or interfering with online traffic.
The new law also requires internet providers to disclose information about their management practices, performance and commercial terms. Violations would be enforceable under the state’s Consumer Protection Act.
Jon Brodkin of Ars Technica, in an article today about California’s tough new net neutrality proposal:
[Stanford law professor Barbara Van Schewick] argues that the FCC’s preemption claims are invalid.
“While the FCC’s 2017 Order explicitly bans states from adopting their own net neutrality laws, that preemption is invalid,” she wrote. “According to case law, an agency that does not have the power to regulate does not have the power to preempt. That means the FCC can only prevent the states from adopting net neutrality protections if the FCC has authority to adopt net neutrality protections itself.”
The California proposal is remarkably strong, by the way. It isn’t just a copy of the FCC’s 2015 rules; it’s much more comprehensive than that, mandating tight restrictions on interconnection and zero-rating. Brodkin again:
Van Schewick said the California bill is notable for prohibiting ISPs from charging “access fees” that online services would have to pay in order to send data to broadband consumers. “None of the other [state] bills have done this and it’s one of the loopholes that ISPs will use (if it’s not closed) to extract payments from edge providers,” van Schewick told Ars.
From the reporting I’ve read in Ars and other publications, this bill ticks a lot of boxes for effective legislation of ISPs as de facto common carriers.
Aaron Tilley and Kevin McLaughlin of the Information (this article is behind a paywall):
To determine how Apple squandered its own head start over rivals Amazon and Google in the digital assistant realm, The Information interviewed a dozen former employees who worked on various teams responsible for creating Siri or integrating it into Apple’s ecosystem. Most of them agreed to speak only on the condition that they not be named, citing non-disclosure agreements they had signed or concerns about retaliation from Apple executives.
Many of the former employees acknowledged for the first time that Apple rushed Siri into the iPhone 4s before the technology was fully baked, setting up an internal debate that has raged since Siri’s inception over whether to continue patching up a flawed build or to rip it up and start from scratch. And that debate was just one of many, as Siri’s various teams morphed into an unwieldy apparatus that engaged in petty turf battles and heated arguments over what an ideal version of Siri should be — a quick and accurate information fetcher or a conversant and intuitive assistant capable of complex tasks.
Even if you view this as a half-true gossip piece — and I don’t think it is, for what it’s worth — it’s still a fascinating look into the struggles Apple has faced with improving Siri’s capabilities.
For example, Tilley and McLaughlin report that separate teams worked on Siri and Spotlight’s suggested answers, which explains why the same query would sometimes return different results in each. On iOS, Apple rebranded some Spotlight features as Siri features: Siri App Suggestions, and Siri Search Suggestions, for example.
And then there’s Apple’s acquisition of VocalIQ two and a half years ago:
The VocalIQ team viewed Siri as a “manually-crafted system” and felt their technology could help improve it, said a former VocalIQ employee. VocalIQ’s technology is designed to continually finetune its accuracy by ingesting and analyzing data from voice interactions, he said. Apple has successfully integrated the VocalIQ technology into Siri’s calendar capabilities, sources familiar with the project said.
It’s interesting that Siri’s capabilities are set up in such a way that something like VocalIQ can be applied to just one feature. I don’t know how much this says, if anything, about why Siri often feels like its capabilities are so fragmented, but it struck me as odd.
Siri has been the responsibility of Craig Federighi since last year, transferred from Eddy Cue’s online services oversight. This year’s WWDC seems too soon to see that particular branch of discussion bear fruit; but, then again, the inconsistencies and general untrustworthiness of Siri make it feel like it cannot be soon enough for real changes to be made.
The only thing you need to know about Siri is that the people who used to build it feel the need to absolve themselves of personal responsibility for the state that it is in. That they are doing so in the press is almost an implementation detail.
Eye-opening op-ed by Zeynep Tufekci, in the New York Times:
Human beings have many natural tendencies that need to be vigilantly monitored in the context of modern life. For example, our craving for fat, salt and sugar, which served us well when food was scarce, can lead us astray in an environment in which fat, salt and sugar are all too plentiful and heavily marketed to us. So too our natural curiosity about the unknown can lead us astray on a website that leads us too much in the direction of lies, hoaxes and misinformation.
In effect, YouTube has created a restaurant that serves us increasingly sugary, fatty foods, loading up our plates as soon as we are finished with the last meal. Over time, our tastes adjust, and we seek even more sugary, fatty foods, which the restaurant dutifully provides. When confronted about this by the health department and concerned citizens, the restaurant managers reply that they are merely serving us what we want.
In 2010, Tom Gruber created an impressive demo video of Siri, his company's new app. It showed how someone could use relatively natural language requests to get things done on an iPhone using little more than their voice, and it effectively kicked off the wave of virtual assistants that followed.
It’s fascinating that the original Siri demo is still better than today’s Siri in a few aspects.
For fun and frustration, I tried all of the original commands featured in that eight year old video on my iPhone:
“I’d like a romantic place for Italian food near my office”: Siri today correctly parses everything up until “near my office”, which it interprets as near me. I tried using the name of the organization I work for instead of “my office”, and it also interpreted that as near me.
Then I tried asking Siri to find me restaurants near the address of my office. It interpreted that as an instruction to find restaurants in Cranbrook, BC — about 400 kilometres or four hours away. I don’t see why I should have to specify that I’m looking for restaurants in Calgary.
“I’d like a table for two at Il Fornaio in San Jose tomorrow night at 7:30”: I tried using this exact phrasing — of course, swapping out Il Fornaio for a restaurant near me — and I was told that Siri “can’t book a table right now”. That felt like a failure until I tried rephrasing, asking it “how about next Friday?”, at which point I was prompted to continue making the reservation using OpenTable. I was impressed that it kept the context intact.
However, when I tried again with the request, “I’d like a table for two at Model Milk next Friday at 7:30”, I received the same “can’t book a table right now” error, and I haven’t been able to reproduce the apparent success I had earlier. That’s frustrating; I was very impressed the first time it seemed to work, despite the vague error message.
“Where can I see Avatar in 3D IMAX?”: I swapped “Avatar” for a better film but otherwise kept the request the same. Siri successfully found a theatre showing it in 3D — as far as I know, there isn’t a 3D IMAX showing near me — but I wasn’t able to buy tickets through Siri and it doesn’t check the showtimes against other calendar events, like a dinner reservation. To be fair, Siri has never allowed you to buy movie tickets in Canada because Fandango isn’t available here, but I also have the (terrible) Cineplex app installed — I wish there were some connection between the two.
One thing I noticed when I tested several phrasings of this is that Siri only responds to full theatre names. All of the theatres near me have very long names, but nobody here actually uses the full name. For example, when I tried asking for “showtimes for Black Panther at Eau Claire”, Siri got confused. It also transcribed Eau Claire wrong most times I tried it, but that’s not necessarily relevant here. It wasn’t until I asked for “showtimes for Black Panther at Cineplex Odeon Eau Claire Market” that I got an answer. I wish it responded to fuzzier matches.
“What’s happening this weekend around here?”: Siri interprets this as a request for news headlines, not events as in the original Siri app.
When I tried rephrasing this question to “what events are happening this weekend”, it did a web search in Google, but without my location. It wasn’t until I asked “what events are happening in Calgary this weekend” that I got a web search with links to local event calendars.
In the original Siri demo, they extend this by asking “how about San Francisco?”, so I did the same. It returned the weather forecast for this evening in San Francisco.
“Take me drunk I’m home”: Today’s Siri did well here, responding “I can’t be your designated driver”, and offering to call me a taxi.
All of this may vary depending on where you’re located, what Siri localization you have, and even what device you use Siri on.
What’s clear to me is that the Siri of eight years ago was, in some circumstances, more capable than the Siri of today. That could simply be because the demo video was created in Silicon Valley, and things tend to perform better there than almost anywhere else. But it’s been eight years since that was created, and over seven since Siri was integrated into the iPhone. One would think that it should be at least as capable as it was when Apple bought it.
It’s no secret that Siri often feels like it has languished, and almost nothing demonstrates that more than the original demo. I’m sure there are domains where it performs better than the original — for example, it works, to varying extents, in countries outside of the United States. It works with more languages than just English, too. That’s all very important, but it boggles my mind that even some of the simpler stuff — like asking for restaurants near a different location — fails today, even in English.
I’d like to hear from readers who have time to attempt this same demo where they live. Please let me know if you give it a try; I would love to know the results.
This has been my life for nearly two months. In January, after the breaking-newsiest year in recent memory, I decided to travel back in time. I turned off my digital news notifications, unplugged from Twitter and other social networks, and subscribed to home delivery of three print newspapers — The Times, The Wall Street Journal and my local paper, The San Francisco Chronicle — plus a weekly newsmagazine, The Economist.
But he didn’t really unplug from social media at all. The evidence is right there in his Twitter feed, just below where he tweeted out his column: Manjoo remained a daily, active Twitter user throughout the two months he claims to have gone cold turkey, tweeting many hundreds of times, perhaps more than 1,000. In an email interview on Thursday, he stuck to his story, essentially arguing that the gist of what he wrote remains true, despite the tweets throughout his self-imposed hiatus.
The biggest problem with Manjoo’s piece is that it is framed as “unplugging” from social media, when it’s really just a reduction in using it as a primary source for news. It’s more subtle and makes for a way less interesting headline, but it’s more honest.
By the way, I find the entire genre of tech writers writing about not using technology so trite. Beyond that, it’s 2018 — telling people not to follow news accounts on Twitter is just yelling into the wind. Want a few tips for reading the news? Here are four things I try to do, for whatever it’s worth:
Resist the urge to react immediately.
Resist the urge to refresh feeds and news sources when bored. News will happen regardless.
During a breaking news event, nothing makes sense to anyone, so keep that in mind when reading the first wave of reporting on it.
Twitter threads tend to be tedious and unnecessary.
Maybe those tips will be useful to you; maybe they won’t. Maybe they’re things you do already without thinking about it. But at least you didn’t have to pretend to stop using Twitter for two months to figure it out.