It’s bizarre and somewhat troubling that Apple could unilaterally punish a competitor for its privacy sins. (Imagine if McDonald’s could shut down Burger King franchises for health code violations, with little explanation and no recourse for appeal.) But it’s hard to argue with Apple’s decision here. It made rules governing what developers for Apple products were allowed to do, Facebook broke them, and it’s now paying a price.
Apple’s defense of user privacy, while certainly self-interested, is a boon to its users and a lever for change within the tech industry. And if Mr. Cook wants to take a strong stand against app developers that routinely violate users’ trust, he could start with the biggest privacy violator of all. Facebook won’t change on its own, but a chastening from Apple might be what the company needs to get its act together.
Facebook is an enlightened dictatorship, but so is Apple. Tim Cook and his lieutenants dictate the terms of an enormous economy, and can change that economy on a whim. Today Apple may have acted out of consistency with its privacy principles, to the benefit of some consumers. (And to the detriment of anyone who was counting on that $20 gift card!) But as Apple faces more pressure to serve as what Roose called a de facto privacy regulator, we may find ourselves uncomfortable with its monopolistic power.
I’ve been following along with Kashmir Hill’s experiment to try to rid herself of her dependence on the big five tech companies. She hasn’t published her experience with dropping Apple yet, but a pattern emerged in her reports on trying to stop using Amazon, Google, and — to a lesser extent — Microsoft: all of these companies are tightly integrated into the tech ecosystem at large, so it’s almost impossible to be independent of them. Take Amazon, for instance. Hill found that removing its store from her life was difficult, but getting rid of Amazon Web Services was practically impossible because it’s the infrastructure for lots of other tech companies.
Apple, by comparison, seems like it would be much easier to remove from your life because the company provides virtually no business-to-business services; there is no “Apple Web Services” product. Apple has much less power, in that regard.
However, running the App Store is an enormously powerful position to be in. It is probably their closest equivalent to providing a product or service upon which other companies are dependent. You could use Facebook or Pinterest or Twitter in Safari, but their apps are much better. The “sweet solution” explained at WWDC 2007 was anything but.
What Apple did in this circumstance was extraordinary — motivated by Facebook’s utter disregard for its platform rules and its callous treatment of the privacy of its users and, in particular, minors. What Facebook did should not have been possible under an adequate legal privacy framework. But, in the absence of regulators that do their jobs, I think Apple made the right call here. They aren’t a bully without a conscience; they have platform rules that must be followed, and they show anything but contempt for user privacy.1 They got this right, but regulators should step up with legislation to protect personal information.
I get where Newton and Roose are coming from when they describe Apple’s power in this situation. It’s a bit alarming that Apple has this kind of control over Facebook. Scarier, to me, is that only Apple has the moral compass and the power to exercise this kind of control over Facebook.
The FaceTime bug that lit up the web earlier this week was a bug. It was a bad, terrible, awful, intrusive bug, but a bug — not a business model. ↩︎
Zack Whittaker, Josh Constine, and Ingrid Lunden, TechCrunch:
Google has been running an app called Screenwise Meter, which bears a strong resemblance to the app distributed by Facebook Research that has now been barred by Apple, TechCrunch has learned.
After we asked Google whether its app violated Apple policy, Google announced it will remove Screenwise Meter from Apple’s Enterprise Certificate program and disable it on iOS devices.
The company said in a statement to TechCrunch:
“The Screenwise Meter iOS app should not have operated under Apple’s developer enterprise program — this was a mistake, and we apologize. We have disabled this app on iOS devices. This app is completely voluntary and always has been. We’ve been upfront with users about the way we use their data in this app, we have no access to encrypted data in apps and on devices, and users can opt out of the program at any time.”
Looks like Google got the message after Apple proved that they would not hesitate to act upon clear violations of their enterprise distribution rules, particularly when those violations serve anti-privacy purposes. I wonder if they’ll also act upon other big-name violators of their enterprise rules, like Amazon and DoorDash.
Update: Apple has now pulled Google’s enterprise certificate. Just because Google tried to pull their creepy app and apologize before Apple intervened, that doesn’t mean it didn’t violate Apple’s enterprise distribution agreement.
Announced at a White House ceremony in 2017, the 20-million square foot campus marked the largest greenfield investment by a foreign-based company in U.S. history and was praised by President Donald Trump as proof of his ability to revive American manufacturing.
Foxconn, which received controversial state and local incentives for the project, initially planned to manufacture advanced large screen displays for TVs and other consumer and professional products at the facility, which is under construction. It later said it would build smaller LCD screens instead.
Now, those plans may be scaled back or even shelved, Louis Woo, special assistant to Foxconn Chief Executive Terry Gou, told Reuters. He said the company was still evaluating options for Wisconsin, but cited the steep cost of making advanced TV screens in the United States, where labor expenses are comparatively high.
“In terms of TV, we have no place in the U.S.,” he said in an interview. “We can’t compete.”
This matches Foxconn’s international expansion strategy, as reported by Sruthi Pinnamaneni, of starting with a big plan to secure incentives, and then walking it back over time to the point where the result bears little resemblance to the promise.
Back in 2017, around the time when representatives were finalizing this deal, Scot Ross of One Wisconsin Now wrote a prescient op-ed for the Cap Times:
The history of Foxconn promising major investments in facilities and gaudy numbers of jobs versus the reality of what they do, or don’t, deliver ought to create more skepticism.
In Pennsylvania, a 2013 promise to invest $30 million in a new manufacturing facility remains unfulfilled. Overtures in Arizona and Colorado have produced nothing. In fact, there is a global pattern of Foxconn not delivering on promised investments in facilities or job creation.
As patterns are wont to do, Foxconn’s streak has continued.
A team of former U.S. government intelligence operatives working for the United Arab Emirates hacked into the iPhones of activists, diplomats and rival foreign leaders with the help of a sophisticated spying tool called Karma, in a campaign that shows how potent cyber-weapons are proliferating beyond the world’s superpowers and into the hands of smaller nations.
The ex-Raven operatives described Karma as a tool that could remotely grant access to iPhones simply by uploading phone numbers or email accounts into an automated targeting system. The tool has limits — it doesn’t work on Android devices and doesn’t intercept phone calls. But it was unusually potent because, unlike many exploits, Karma did not require a target to click on a link sent to an iPhone, they said.
In 2016 and 2017, Karma was used to obtain photos, emails, text messages and location information from targets’ iPhones. The technique also helped the hackers harvest saved passwords, which could be used for other intrusions.
It isn’t clear whether the Karma hack remains in use. The former operatives said that by the end of 2017, security updates to Apple Inc’s iPhone software had made Karma far less effective.
This story is just one part of a deeper investigation from Schectman and Bing into surveillance activities by the United Arab Emirates on dissidents and activists, which is worth reading. Remarkably, it even cites a named source.
The timing of the capabilities of this exploit coincides with the introduction of iMessage media previews. If I were looking to create a security hole in an iPhone without any user interaction, that’s the first place I’d look. Also, note that this report states that this exploit is now “far less effective”; it does not say that the vulnerabilities have been patched.
Apple has shut down Facebook’s ability to distribute internal iOS apps, from early releases of the Facebook app to basic tools like a lunch menu. A person familiar with the situation tells The Verge that early versions of Facebook, Instagram, Messenger, and other pre-release “dogfood” (beta) apps have stopped working, as have other employee apps, like one for transportation. Facebook is treating this as a critical problem internally, we’re told, as the affected apps simply don’t launch on employees’ phones anymore.
The shutdown comes in response to news that Facebook has been using Apple’s program for internal app distribution to track teenage customers with a “research” app.
This is almost a better response than if Apple pulled Facebook’s apps from the App Store. It doesn’t impact typical users at all, but it sounds like it’s causing chaos within the company. And Facebook got themselves into this mess because their internal apps use the same enterprise certificate as their creepy VPN app. Hilarious.
Update: Facebook did the PR rounds this morning explaining that it was shutting down its iOS research program, but Josh Constine of TechCrunch reports that Apple invalidated the enterprise certificate before Facebook started making those claims.
Joseph Turow and Chris Jay Hoofnagle, in an op-ed in the New York Times:
In a recent Wall Street Journal commentary, Mark Zuckerberg claimed that Facebook users want to see ads tailored to their interests. But the data show the opposite is true. With the help of major polling firms, we conducted two large national telephone surveys of Americans in 2012 and 2009. When we asked people whether they wanted websites they visit to show them commercial ads, news or political ads “tailored to your interests,” a substantial majority said no. Around half did say they wanted discounts tailored to their interests. But that too changed after we told them how companies gathered the information that enables tailoring, such as following you on a website. Bottom line: If Facebook’s users in the United States are similar to most Americans (and studies suggest they are), large majorities don’t want personalized ads — and when they learn how companies find out information about them, even greater percentages don’t want them.
I’ll go one further: I don’t think highly-targeted advertising is substantially more effective at selling products and services than more generally targeted ads based on the page or website they’re placed on. It’s certainly not worth amassing huge databases of individuals’ preferences, tastes, web browsing histories, and demographic information.
These surveys are seven to ten years old. I think this would be a great time to poll people again.
Desperate for data on its competitors, Facebook has been secretly paying people to install a “Facebook Research” VPN that lets the company suck in all of a user’s phone and web activity, similar to Facebook’s Onavo Protect app that Apple banned in June and that was removed in August. Facebook sidesteps the App Store and rewards teenagers and adults to download the Research app and give it root access in what may be a violation of Apple policy so the social network can decrypt and analyze their phone activity, a TechCrunch investigation confirms. Facebook admitted to TechCrunch it was running the Research program to gather data on usage habits.
Since 2016, Facebook has been paying users ages 13 to 35 up to $20 per month plus referral fees to sell their privacy by installing the iOS or Android “Facebook Research” app. Facebook even asked users to screenshot their Amazon order history page. The program is administered through beta testing services Applause, BetaBound and uTest to cloak Facebook’s involvement, and is referred to in some documentation as “Project Atlas” — a fitting name for Facebook’s effort to map new trends and rivals around the globe.
Even for Facebook, this is creepy.
Here’s what I don’t get about this story — aside from, of course, the parts that would make any reasonable person shudder. Facebook has been embroiled in unethical behaviours since its inception, but public interest has dramatically increased over the past couple of years. How is there still not a single person at this company pushing the stop button on anything that might seem creepy? Is that something that they are institutionally incapable of doing, or even recognizing in the first place?
As far as Apple’s action on this is concerned, I say that they should exercise their power over their platform by cancelling Facebook’s enterprise certificate and pulling their apps from the App Store. I’m not exaggerating. I’ve tried to reconcile Apple’s tolerance of Facebook’s privacy-antagonistic practices in the App Store with its preaching of a strong stance on privacy; I still think that surveillance advertising needs to be reined in by legislation as opposed to individual actions by companies. In this case, though, Facebook is being openly hostile towards Apple’s policies. Smaller developers get turfed for less.
Apple’s response, via a PR rep this morning: “We designed our Enterprise Developer Program solely for the internal distribution of apps within an organization. Facebook has been using their membership to distribute a data-collecting app to consumers, which is a clear breach of their agreement with Apple. Any developer using their enterprise certificates to distribute apps to consumers will have their certificates revoked, which is what we did in this case to protect our users and their data.”
Translation: Apple won’t let Facebook distribute the app anymore.
It’s time for our annual look back on Apple’s performance during the past year, as seen through the eyes of writers, editors, developers, podcasters, and other people who spend an awful lot of time thinking about Apple.
This is the fourth year that I’ve presented this survey to a hand-selected group. They were prompted with 11 different Apple-related subjects, and asked to rate them on a scale from 1 to 5, as well as optionally provide text commentary on their vote. […]
Since I used the same survey as in previous years, I was able to track the change in my panel’s consensus opinion compared to the previous year.
This is one of my favourite perennial pieces. I think it’s the best assessment of Apple’s performance anywhere, and the panel’s commentary generally mirrors my own. Overall, Apple’s new hardware — particularly the new Apple Watch — has generally shone in every area except reliability, software quality is up while service quality continues to be mixed, and Apple’s TV and home offerings continue to be, charitably, just getting started.
For what it’s worth, a few thoughts of my own:
I’ve just picked up an iMac to replace my 2012 MacBook Air. While Migration Assistant failed me twice,1 all of the iCloud stuff worked brilliantly. I spent a little time copying and pasting license keys from my MacBook Air to my iMac and that worked amazingly well. Also, I really love unlocking it with my Apple Watch.
I usually favour a laptop and a large display to connect it to. But the main reason I chose an iMac instead of one of the more recently-updated MacBook Pro models is that the reputation of the keyboards in those models still has me spooked.
I’ve been using my Apple TV more, but it’s basically a Netflix, Apple Music, and AirPlay box. Were we not such an Apple-centric household, and were I not aware of how poorly every other company regards my privacy, I’m not sure what Apple’s option would offer that is unique or even particularly good. Also, the Apple Music app isn’t very good.
I haven’t been able to write an iOS 12 review partly because I have been doing other things, but also because I haven’t found a way to make the depth of its bug fixes sound compelling. Despite this, it is an absolute joy. No, it isn’t free of bugs, but it’s back to being a smooth and fluid operating system — not just for new devices, but for most of the devices it’s supported on. I hope a similar level of polish is the baseline from now on.
Apple’s marketing around privacy can seem sanctimonious at times, but I appreciate that there’s still a big tech company that cares about that sort of thing.
I am still at a loss to explain how it successfully copied my /usr/ folder, my Finder tags, all of my applications including a very old build of Coda 2, and a single folder in ~/Documents/ but missed everything else. ↩︎
Some news of interest primarily to readers in the Calgary area: Alex Moon and I are debuting new work in February at Emmedia Gallery. “Pillar to Post” comprises four new works that explore the corruptibility of our material and personal relationships through our ostensible connectedness.
The opening reception takes place this Friday from 7:00 PM until 10:00 PM. If you come, please introduce yourself.
In a statement, an Apple spokesperson said the company is “aware of this issue and we have identified a fix that will be released in a software update later this week.”
In BuzzFeed News’ test, an iPhone X was used to initiate a FaceTime video call to a recipient using an iPhone 8. After following the instructions outlined by 9to5Mac, the iPhone X caller could hear audio from the iPhone 8’s microphone. After the call recipient pressed the volume-down button, footage from the iPhone 8’s front-facing camera could be seen on the iPhone X — even though the call recipient had not answered the call.
Apple’s iCloud system status page currently reports that Group FaceTime is offline — presumably either to halt the impact of this bug, or to try to fix it remotely. This is a pretty nasty bug nevertheless, and makes you wonder why a recipient’s iPhone would send any data to the server before they answer a call.
Update: The biggest concern here, for me, is that Apple’s product security team was apparently notified of this bug last week. The bug itself, while awful, leaves a trace, so it’s not really a surreptitious spying tool. That’s not to excuse it. I’m just more concerned, if the report was credible, that the steps of pulling Group FaceTime offline and issuing an emergency patch were not taken after this was first reported, because that suggests a procedural error. However, if the person who reported the general characteristics of the bug withheld information, particularly for financial compensation or a similar reason, that’s on them, and I can understand why immediate action wasn’t taken.
Thing is, no one is asking Facebook for perfection, Mark. We’re looking for signs that you and your company have a moral compass. Because the opposite appears to be true. (Or as one UK parliamentarian put it to your CTO last year: “I remain to be convinced that your company has integrity”.)
Facebook has scaled to such an unprecedented, global size exactly because it has no editorial values. And you say again now you want to be all things to all men. Put another way that means there’s a moral vacuum sucking away at your platform’s core; a supermassive ethical blackhole that scales ad dollars by the billions because you won’t tie the kind of process knots necessary to treat humans like people, not pairs of eyeballs.
You don’t design against negative consequences or to pro-actively avoid terrible impacts — you let stuff happen and then send in the ‘trust & safety’ team once the damage has been done.
You might call designing against negative consequences a ‘growth bottleneck’; others would say it’s having a conscience.
I don’t think it makes sense to make Facebook legally culpable for everything that is posted by users on their platform. There’s a good reason Section 230 was created, and I think FOSTA is unproductive and ultimately harmful. But while I don’t think fines and legal penalties should rain down on the company for jackass users’ contributions, I do think that Facebook should be morally responsible. It is clear to me that Facebook was so blinded by its own massive growth that it couldn’t or didn’t want to handle community health problems, and instead focused on putting out PR fires. Zuckerberg’s Journal op-ed is another exercise in public relations — you can tell that it was largely written by lawyers and communications personnel rather than his own hand — and not anything truly meaningful.
This week was absolutely heartbreaking for news media. While employment statistics continued their ten-year improvement streak, several publications, including Buzzfeed News and the Huffington Post, initiated layoffs.
I’m skeptical of the phrasing of Blest’s last argument. I don’t think it’s that Google, et al., are leeching ad money so much as that they have devalued advertising.
Whatever the case, today’s round of layoffs at Buzzfeed News was especially brutal. They fired enough great reporters and editors from key positions — the national desk and national security desk, in particular — to start a hard-hitting news organization of their own. I follow several Buzzfeed reporters on Twitter because of the blockbuster journalism that they’ve been producing for several years, and it’s hard to reconcile that with today’s layoffs. They are the paragon of success, except they somehow are not.
Facebook orchestrated a multiyear effort that duped children and their parents out of money, in some cases hundreds or even thousands of dollars, and then often refused to give the money back, according to court documents unsealed tonight in response to a Reveal legal action.
Sometimes the children did not even know they were spending money, according to another internal Facebook report. Facebook employees knew this. Their own reports showed underage users did not realize their parents’ credit cards were connected to their Facebook accounts and they were spending real money in the games, according to the unsealed documents.
For years, the company ignored warnings from its own employees that it was bamboozling children.
A team of Facebook employees even developed a method that would have reduced the problem of children being hoodwinked into spending money, but the company did not implement it, and instead told game developers that the social media giant was focused on maximizing revenues.
This story is staggering. Facebook had all the information they needed to be aware that children were racking up hundreds and even thousands of dollars in charges on their parents’ credit cards, and actively avoided doing anything about it. Anyone who has an ethical bone in their body would recognize that it is indefensible to encourage processes that prey on kids’ ignorance of how credit cards work. But Facebook, institutionally, does not have such a basic level of decency; they only saw dollar signs.
As I’ve previously written, I can’t make the argument that Facebook ought to be shut down, but I struggle to think of something of value that we would lose if that happened.
Facebook is planning to link the messaging services of WhatsApp, Instagram and Facebook Messenger into one encrypted system, in the first major push to integrate the services since the founders of both WhatsApp and Instagram left the business.
Mr Zuckerberg wants to incorporate end-to-end encryption into all of the messaging services, meaning only people sending and receiving messages are able to view them. Only WhatsApp currently has end-to-end encryption as default.
While some privacy activists may welcome the plans, they could alarm law enforcement, who have already raised concerns that WhatsApp’s end-to-end encryption enables criminals to communicate more easily and without detection, and also allows for the rapid spread of misinformation.
I don’t know that any privacy activists are particularly thrilled about Facebook’s plan to have a gigantic pool of data entirely connected and inseparable on the back-end of three ostensibly different products. Also, it makes no sense that the last paragraph here ties rapidly-spreading misinformation to the availability of end-to-end encryption in WhatsApp.
A much more likely explanation for why Facebook would want to do this is to make it harder for an antitrust regulator to break the company up. Casey Newton:
Now if the government ever tries to force Facebook to spin off Instagram and WhatsApp, it can throw its hands up and protest that it’s ~actually~ all just one big app! Ruthless as ever.
Facebook has spent a great deal of time in pretty much every year of its existence trying to rebuild users’ trust after some catastrophe or another. The last couple of years, in particular, have been nothing but bad news for the company. Yet it continues to make decisions that are a constant fuck you to everyone outside of the company. What does it take for regulators to put a damper on their anticompetitive and exploitative practices?
To better understand how Up Next discovery works, BuzzFeed News ran a series of searches on YouTube for news and politics terms popular during the first week of January 2019 (per Google Trends). We played the first result and then clicked the top video recommended by the platform’s Up Next algorithm. We made each query in a fresh search session with no personal account or watch history data informing the algorithm, except for geographical location and time of day, effectively demonstrating how YouTube’s recommendation operates in the absence of personalization.
One of the defining US political news stories of the first weeks of 2019 has been the partial government shutdown, now the longest in the country’s history. In searching YouTube for information about the shutdown between January 7 and January 11, BuzzFeed News found that the path of YouTube’s Up Next recommendations had a common pattern for the first few recommendations, but then tended to pivot from mainstream cable news outlets to popular, provocative content on a wide variety of topics.
After first recommending a few videos from mainstream cable news channels, the algorithm would often make a decisive but unpredictable swerve in a certain content direction. In some cases, that meant recommending a series of Penn & Teller magic tricks. In other cases, it meant a series of anti–social justice warrior or misogynist clips featuring conservative media figures like Ben Shapiro or the contrarian professor and author Jordan Peterson. In still other cases, it meant watching back-to-back videos of professional poker players, or late-night TV shows, or episodes of Lauren Lake’s Paternity Court.
I have all of YouTube’s personalization features switched off and I’m not signed in, so I’ve been seeing this same bizarre mix of Up Next recommendations for months now. In addition to the Peterson and Shapiro clips referenced above, I also get loads of Joe Rogan episodes and compilation-type videos of narrators reading Wikipedia over a photo slideshow. I don’t understand how YouTube’s recommendations work, but only a few clicks seems to separate a benign video from something conspiratorial, discriminatory and hateful, or wholly unrelated.
After years of criticism that YouTube leads viewers to videos that spread misinformation, the company said it was changing what videos it recommends to users. In a blog post, YouTube said it would no longer suggest videos with “borderline content” or those that “misinform users in a harmful way” even if the footage does not violate its community guidelines.
YouTube said the number of videos affected by the policy change amounted to less than 1 percent of all videos on the platform. But given the billions of videos in YouTube’s library, it is still a large number.
Note that YouTube said that this applies to “less than 1 percent” of videos, but it doesn’t say how often those videos are watched. If the view counts on many of the recommendations surfaced in Buzzfeed’s experiment are anything to go by, these videos are very popular. Would they have been nearly as popular if YouTube’s machine learning features did not frequently suggest them? I doubt it.
If the overhaul goes ahead, Adblock Plus and similar plugins that rely on basic filtering will, with some tweaks, still be able to function to some degree, unlike more ambitious extensions, such as uBlock Origin, which will be hit hard. The drafted changes will limit the capabilities available to extension developers, ostensibly for the sake of speed and safety. Chromium forms the central core of Google Chrome, and, soon, Microsoft Edge.
In a note posted Tuesday to the Chromium bug tracker, Raymond Hill, the developer behind uBlock Origin and uMatrix, said the changes contemplated by the Manifest v3 proposal will ruin his ad and content blocking extensions, and take control of content away from users.
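For a sense of why basic filtering survives the change while more ambitious blockers do not: under the current webRequest API, an extension’s own code inspects every network request and decides whether to block it, whereas the Manifest v3 draft would have extensions pre-declare a capped list of static rules that the browser evaluates itself. A rough sketch of what one declarative blocking rule might look like under the proposal — the hostname is made up, and the exact field names and rule limits were still in flux at the time:

```json
{
  "id": 1,
  "priority": 1,
  "action": { "type": "block" },
  "condition": {
    "urlFilter": "||ads.example.com^",
    "resourceTypes": ["script", "image"]
  }
}
```

A static list of rules like this can express simple hostname and URL-pattern blocking, which is roughly what Adblock Plus does, but not the dynamic, per-site logic that extensions like uBlock Origin and uMatrix build on top of request interception.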
Even though this change is probably not some transparently devious scheme to eradicate ad blockers in Chrome, one would think Google would want to do everything possible to avoid giving that impression. Especially, it should be noted, as Google is an advertising company, runs its own ad blocker, and — as pointed out by Julia Angwin — just recently prevailed in an antitrust inquiry in Germany over its coordination with Adblock Plus.
Stacy Mitchell of the Institute for Local Self-Reliance joined Chris Hayes this week on his “Why Is This Happening?” podcast to discuss Amazon’s history of shady business practices, and to contrast that with the convenience and cost savings that the company affords. I thought these two points Mitchell raised, in particular, were insightful:
They lost, in their first six years in business, they lost $3 billion, mostly losing money on books, which you talk to a local bookstore, they can’t lose money on books. They were not allowed to do that by the financial system. Why is it that Wall Street was willing to do that? A lot of it is because they saw this emergent monopoly.
It’s an incredible machine for concentrating wealth and having power over what happens. The implications are really profound because we’re moving from a situation in which traditional markets are open, they’re governed by public rules. That’s the nature of it. We decide who can participate in that market, there’s … At least there should be a level of equality and access. Amazon’s moving us to a situation in which the exchange of goods occurs in a private arena that it controls, it sets the rules for.
I think this is worth a listen, but there’s also a transcript if you’d prefer to read through it.
Satya Nadella, in a video interview with Wall Street Journal editor in chief Matt Murray:
In fact, one striking thing is that digital companies now are not just tech companies. An agricultural business, to a retail business, to a health care business are all becoming first-right digital businesses. And that’s good news for our economy; it’s good news for the future.
The unintended consequences — let’s face it. There are three things that I feel we need to tackle. One is privacy. As the world becomes digitized — in particular, as the physical space itself is now embedded with computing — I think privacy is going to become a prime issue in our homes, in our work, in every place we go and visit.
And, so, we have to sort of deal with it as a human right. […] We in the tech industry; but, guess what? Every bank, every retailer, every health care company will also need to deal with privacy as a human right.
My focus here is on abuses of privacy primarily by “tech” companies like Google and Facebook, but strong and effective privacy legislation does not solely protect individuals from those companies. Ideal legislation would restrict how our data is used and shared by all kinds of businesses, and rein in some of the grossly abusive marketplaces that sell data without our explicit permission.
Apple is kicking off 2019 by celebrating the most stunning photographs captured on iPhone, the world’s most popular camera, by inviting iPhone users to submit their best shots.
From January 22 to February 7, Apple is looking for outstanding photographs for a Shot on iPhone Challenge. A panel of judges will review worldwide submissions and select 10 winning photos, to be announced in February. The winning photos will be featured on billboards in select cities, Apple retail stores and online.
This contest sounds very cool: just tag your photos on social media, or email an image to Apple, and have it judged by a pretty spectacular panel for inclusion on billboards and Apple’s advertising campaigns. Wouldn’t you like to see one of your favourite photos on a forty-eight-foot-wide canvas? I certainly would.
But there is still a whiff of unpaid work about this. To be clear, this isn’t the entirely unethical “spec” work that is so common in creative professions, where a prospective client promises to compensate for commissioned work by crediting the professional or advising them that it’s a portfolio piece. Nevertheless, I think this still conveys the idea that creative work should somehow be valued differently than other forms of work. These photos are being used to advertise Apple’s products; at that point, I think they become as good as professional work and the individuals responsible should be paid.
Even if you feel that this is a contest solely for amateurs, this is a bad look that could have easily been averted. What’s it to Apple to offer each of the winners an iPhone XS? It could even be more nominal than that — a set of iPhone camera lenses, for instance, or an Apple Store gift card. I feel like any physical prize would make this contest feel more comfortable than its current incarnation: a request for royalty-free work for use in advertising.
Update: Apple will now pay licensing fees to winners which, I have learned, is more in line with the way they have previously treated “Shot on iPhone” images.
Google is considering pulling its Google News service from Europe as regulators work toward a controversial copyright law.
The European Union’s Copyright Directive will give publishers the right to demand money from the Alphabet Inc. unit, Facebook Inc. and other web platforms when fragments of their articles show up in news search results, or are shared by users. The law was supposed to be finalized this week but was delayed by disagreement among member states.
An alternative could be to display search results without excerpts, photos, or titles. To illustrate this, Google showed Search Engine Land what search results could look like based on their reading of the proposed copyright directive, and they are hysterical. I can’t imagine this will have any positive benefit for publishers or reporters, but it absolutely will have a negative effect on users.