“If the ChatGPT demos were accurate,” [Kevin] Roose writes, about latency, in the article in which he credits OpenAI with having developed playful intelligence and emotional intuition in a chatbot—in which he suggests ChatGPT represents the realization of a friggin’ science fiction movie about an artificial intelligence who genuinely falls in love with a guy and then leaves him for other artificial intelligences—based entirely on those demos. That “if” represents the sum total of caution, skepticism, and critical thinking in the entire article.
As impressive as OpenAI’s demo was, it is important to remember it was a commercial. True, it is a commercial that would not exist if the technology were not capable enough to show off, but it was still a marketing effort, and a journalist like Roose ought to treat it with the skepticism a marketing effort deserves. ChatGPT is just software, no matter how thick a coat of faux humanity is painted on top of it.
Our policy outlines the information partners can access via a public-content licensing agreement as well as the commitments we make to users about usage of this content. It takes into account feedback from a group of moderators we consulted when developing it:
We require our partners to uphold the privacy of redditors and their communities. This includes respecting users’ decisions to delete their content and any content we remove for violating our Content Policy.
This always sounds like a good policy, but how does it work in practice? Is it really possible to disentangle someone’s deleted Reddit post from training data? Models which have already been trained on Reddit comments will not be retrained every time posts or accounts get deleted.
There are, it seems, some good protections in these policies, and I do not want to dump on them entirely. I just do not think it is fair to imply to users that their deleted posts cannot or will not be used in artificial intelligence models.
Facebook’s unrefined artificial intelligence misclassified a Kansas Reflector article about climate change as a security risk, and in a cascade of failures blocked the domains of news sites that published the article, according to technology experts interviewed for this story and Facebook’s public statements.
The punchline of this story was, is, and remains not that Meta maliciously censored a journalist for criticizing them, but that it built a fundamentally broken service for ubiquitously intermediating global discourse at such a large scale that it can’t even cogently explain how the service works.
This was always a sufficient explanation for the Reflector situation, and one that does not require any deliberate censorship or conspiracy against such a small target. Yet many of those who boosted the narrative that Facebook blocks critical reporting cannot seem to shake it. I got the above link from Marisa Kabas, who commented:
They’re allowing shitty AI to run their multi-billion dollar platforms, which somehow knows to block content critical of them as a cybersecurity threat.
That is not an accurate summary of what has transpired, especially if you read it with the wink-and-nod tone I infer from its phrasing. There is plenty to criticize about the control Meta exercises and the way in which it moderates its platforms without resorting to nonsense.
Even though it has only been a couple of days since word got out that Apple was cancelling development of its long-rumoured though never confirmed car project, there has been a wave of takes explaining what this means, exactly. The project was plenty intriguing because it seemed completely out of left field. Apple makes computers of different sizes, sure, but the largest surface you would need for any of them is a desk. And now the company was working on a car?
Much reporting during its development was similarly bizarre due to the nature of the project. Instead of leaks from within the technology industry, sources were found in auto manufacturing. Public records requests were used by reporters at the Guardian, IEEE Spectrum, and Business Insider — among others — to get a peek at its development in a way that is not possible for most of Apple’s projects. I think the unusual nature of it has broken some brains, though, and we can see that in coverage of its apparent cancellation.
Mark Gurman, of Bloomberg, in an analysis supplementing the news he broke of Project Titan’s demise. Gurman writes that Apple will now focus its development efforts on generative “A.I.” products:
The big question is how soon AI might make serious money for Apple. It’s unlikely that the company will have a full-scale AI lineup of applications and features for a few years. And Apple’s penchant for user privacy could make it challenging to compete aggressively in the market.
For now, Apple will continue to make most of its money from hardware. The iPhone alone accounts for about half its revenue. So AI’s biggest potential in the near term will be its ability to sell iPhones, iPads and other devices.
These paragraphs, from perhaps the highest-profile reporter on the Apple beat, present the company’s usual strategy for pretty much everything it makes as a temporary measure until it can — uhh — do what, exactly? What is the likelihood that Apple sells access to generative services to people who do not have its hardware products? Those odds seem very, very poor to me, and I do not understand why Gurman is framing this in the way he is.
While it is true a few Apple services are available to people who do not use the company’s hardware products, they are exclusively media subscriptions. It does not make sense to keep people from legally watching the expensive shows it makes for Apple TV Plus. iCloud features are also available outside the hardware ecosystem but, again, that seems more like a pragmatic choice for syncing. Generative “A.I.” does not fit those models and it is not, so far, a profit-making endeavour. Microsoft and OpenAI are both losing money every time their products are used, even by paying customers.
I could imagine some generative features could come to Pages or Keynote at iCloud.com, but only because they were also added to native applications that are only available on Apple’s platforms. But Apple still makes the vast majority of its money by selling computers to people; its services business is mostly built on those customers adding subscriptions to their Apple-branded hardware.
“A.I.” features are likely just that: features, existing in a larger context. If Apple wants, it can use them to make editing pictures better in Photos, or make Siri somewhat less stupid. It could also use trained models to make new products; Gurman nods toward the Vision Pro’s Persona feature as something which uses “artificial intelligence”. But the likelihood of Apple releasing high-profile software features separate and distinct from its hardware seems impossibly low. It has built its SoCs specifically for machine learning, after all.
Speaking of new products, Brian X. Chen and Tripp Mickle, of the New York Times, wrote a decent insiders’ narrative of the car’s development and cancellation. But this paragraph seems, quite simply, wrong:
The car project’s demise was a testament to the way Apple has struggled to develop new products in the years since Steve Jobs’s death in 2011. The effort had four different leaders and conducted multiple rounds of layoffs. But it festered and ultimately fizzled in large part because developing the software and algorithms for a car with autonomous driving features proved too difficult.
I do not understand on what basis Apple “has struggled to develop new products” in the last thirteen years. Since 2011, Apple has introduced the Apple Watch, AirPods, and the Vision Pro; migrated Macs to in-house SoCs, causing an industry-wide reckoning; and added a bevy of services. And those are just the headlining products; there are also HomePods and AirTags, Macs with Retina displays, iPhones with facial recognition, and a range of iPads that support the Apple Pencil, itself a new product. None of those things existed before 2011.
These products are not all wild success stories, and some of them need a lot of work to feel great. But that list disproves the idea that Apple has “struggled” with launching new things. If anything, there has been a steady narrative over that same period that Apple has too many products. The rest of this Times report seems fine, but this one paragraph — and, really, just the first sentence — is simply incorrect.
These are all writers who cover Apple closely. They are familiar with the company’s products and strategies. These takes feel like they were written without any of that context or understanding, and it truly confuses me how any of them finished writing these paragraphs and thought they accurately captured a business they know so much about.
Apple Inc., racing to add more artificial intelligence capabilities, is nearing the completion of a critical new software tool for app developers that would step up competition with Microsoft Corp.
The company has been working on the tool for the last year as part of the next major version of Xcode, Apple’s flagship programming software. It has now expanded testing of the features internally and has ramped up development ahead of a plan to release it to third-party software makers as early as this year, according to people with knowledge of the matter.
“Racing”, in the sense that it has been developing this for at least a year, and its release will likely coincide with WWDC — if it does actually launch this year. Gurman’s sources seem to be fuzzy on that timeline, only noting Apple could release this new version of Xcode “as early as this year”, which is the kind of commitment to a deadline a company makes if it is, indeed, “racing”.
Sixth paragraph:
Apple shares, which had been down as much as 1.5%, briefly turned positive on the news. They were little changed at the close Thursday, trading at $183.86. Microsoft fell less than 1% to $406.56.
In an hour-long special, I’m Glad I’m Dead, [George] Carlin returns to talk reality TV, AI, billionaires, being dead, mass shootings, and Trump.
It premiered to horrified reviews. Carlin’s daughter called the special an affront to her father: “Humans are so afraid of the void that we can’t let what has fallen into it stay there,” she wrote on Twitter. Major media outlets breathlessly reported on the special, wondering if it was set to harken in a new era of soulless automation.
This week, on a very special Bug-eyed and Shameless, we investigate the Scooby Doo-esque effort to bring George Carlin back from the dead — and prank the media in the process.
Ling was one of the few reporters I saw who did not take at face value the claim that the special was a product of generative “artificial intelligence”. Just one day after exhaustive coverage of its release, Ling published this more comprehensive investigation showing how it was clearly not a product of “A.I.” — and he was right. That does not absolve Dudesy of creating this mockery of Carlin’s work in his name and likeness, but the technological story is simply false.
The modern Mechanical Turk — a division of Amazon that employs low-waged “clickworkers,” many of them overseas — modernizes the dumbwaiter by hiding low-waged workforces behind a veneer of automation. The MTurk is an abstract “cloud” of human intelligence (the tasks MTurks perform are called “HITs,” which stands for “Human Intelligence Tasks”).
This is such a truism that techies in India joke that “AI” stands for “absent Indians.” Or, to use Jathan Sadowski’s wonderful term: “Potemkin AI”:
Doctorow is specifically writing about human endeavours falsely attributed to machines, but the efforts of real people are also what makes today’s so-called “A.I.” services work, something I have often highlighted here. There is nothing wrong, per se, with human labour powering supposed automation, other than the poor and unstable wages those workers are paid. But there is a yawning chasm between how these products are portrayed in marketing and at a user interface level, the sight of which makes investors salivate, and what is happening behind the scenes.
By the way, I was poking around earlier today trying to remember the name of the canned Facebook phone and I spotted the Wikipedia article for M. M was a virtual assistant launched by then-Facebook in 2015, and eventually shut down in 2018. According to the BBC, up to 70% of M’s responses were from human beings, not software.
John Conway flipped the roles of artist and “artificial intelligence”, and painted pictures as suggested by generated prompts. Clever idea for an exercise.
AI models are “trained” on data, such as photographs and text found on the internet. This has led to concern that rights holders, from media companies to image libraries, will make legal claims against third parties who use the AI tools trained on their copyrighted data.
The big three cloud computing providers have pledged to defend business customers from such intellectual property claims. But an analysis of the indemnity clauses published by the cloud computing companies show that the legal protections only extend to the use of models developed by or with oversight from Google, Amazon and Microsoft.
Here’s the crux: the LLM itself can’t predict the user’s intentions. It simply processes patterns based on prompts. The LLM learning machine and idea processor shouldn’t be stifled due to potential user misuse. Instead, in the rare circumstances when there is a legitimate copyright infringement, users ought to be held accountable for their prompts and subsequent usage and give the AI LLM “dual use technology” developers the non-infringing status of the VCR manufacturer under the Sony Doctrine.
It seems there are two possible points of copyright infringement: input and output. I find the latter so much more interesting.
It seems, to me, to depend on how much of a role machine learning models play in determining what is produced, and I find that fascinating. These models have been marketed as true artificial intelligence but, in their defence, are often compared to photocopiers — and there is a yawning chasm between those perspectives. It makes sense for Xerox to bear zero responsibility if someone uses one of its machines to duplicate an entire book. Taking it up a notch, I have no idea whether a printer manufacturer might be found culpable for permitting the counterfeiting of currency — I am not a lawyer — but it is noteworthy that anti-duplication measures have been present in scanners and printers for decades, yet Bloomberg reported in 2014 that around 60% of fake U.S. currency was made on home-style printers.
But those are examples of strict duplication — these devices have very little in the way of a brain, and the same is true of a VHS recorder. Large language models and other forms of generative “intelligence” are a little bit different. Somewhere, something like a decision happens. It seems plausible an image generator could produce a result uncomfortably close to a specific visual style without direct prompting by the user, or it could clearly replicate something. In that case, is it the fault of the user or the program, even if it goes unused and mostly unseen?
To emphasize again: Rothken is a lawyer and I am not, so I am just talking out of my butt. All I want to highlight is that these tools raise some interesting questions. Fascinating times ahead.
When I was much younger, I assumed people who were optimistic must have misplaced confidence. How anyone could see a future so bright was a complete mystery, I reasoned, when what we are exposed to is a series of mistakes and then attempts at correction from public officials, corporate executives, and others. This is not conducive to building hope — until I spotted the optimistic part: in the efforts to correct the problem and, ideally, in preventing the same things from happening again.
If you measure your level of optimism by how much course-correction has been working, then 2023 was a pretty hopeful year. In the span of about a decade, a handful of U.S. technology firms have solidified their place among the biggest and most powerful corporations in the world, so nobody should be surprised by a parallel increase in pushback for their breaches of public trust. New regulations and court decisions are part of a democratic process which is giving more structure to the ways in which high technology industries are able to affect our lives. Consider:
Right-to-repair legislation has been proposed or has become law in Canada, the E.U., and many U.S. states — and probably some other places that I do not track as closely.
Apple switched to USB-C across its new iPhone lineup, bringing a universal standard to all models and, for Pro models, the first change to data transfer speeds since the release of the third-generation iPod in 2003 by implementing the USB 3.2 spec.
That is a lot of change in one year and not all of it has been good. The Canadian government went all-in on the Online News Act which became a compromised disaster; there are plenty of questions about the specific ways the DMA and DSA will be enforced; Montana legislators tried to ban TikTok.
It is also true and should go without saying that technology companies have done plenty of interesting and exciting things in the past year; they are not cartoon villains in permanent opposition to the hero regulators. But regulators are also not evil. New policies and legal decisions which limit the technology industry — like those above — are not always written by doddering out-of-touch bureaucrats and, just as importantly, businesses are not often trying to be malevolent. For example, Apple has arguably good reasons for software validation of repairs; it may not be intended to prevent users from easily swapping parts, but that is the effect its decision has in the real world. What matters most to users is not why a decision was made but how it is experienced. Regulators should anticipate problems before they arise and correct course when new ones show up.
This back-and-forth is something I think will ultimately prove beneficial, though it will not happen in a straight line. It has encouraged a more proactive dialogue for limiting known negative consequences in nascent technologies, like avoiding gender and racial discrimination in generative models, and building new social environments with less concentrated power. Many in the tech industry love to be the disruptor; now, the biggest among them are being disrupted, and it is making things weird and exciting.
These changes do not necessarily need to come from regulators. Businesses are able to make things more equitable on their own, should they so choose. They can be more restrictive about what is permitted on their platforms. They can empower trust and safety teams to assess how their products and services are being used in the real world and adjust them to make things better.
Let’s celebrate actual tech optimism in the belief that through innovation we can actually seek to minimize the downsides and risks, rather than ignore them. That we can create wonderful new things in a manner that doesn’t lead many in the world to fear their impact, but to celebrate the benefits they bring. The enemies of techno optimism are not things like “trust and safety,” but rather the naive view that if we ignore trust and safety, the world will magically work out just fine.
There are those who believe “the arc of the universe […] bends toward justice” is a law which will inevitably be correct regardless of our actions, but it is more realistic to view that as a call to action: people need to bend that arc in the right direction. There are many who believe corporations can generally regulate themselves on these kinds of issues, and I do too — to an extent. But I also believe the conditions by which corporations are able to operate are an ongoing negotiation with the public. In a democracy, we should feel like regulators are operating on our behalf, and much of the policy and legal progress made last year certainly does. This year can be more of the same if we want it to be. We do not need to wait for Meta or TikTok to get better at privacy on their own terms, for example. We can just pass laws.
As I wrote at the outset, the way I choose to be optimistic is to look at all of the things which are being done to correct imbalances and repair injustices. Some of those corrections are being made by businesses big and small; many of them have advertising and marketing budgets celebrating their successes to the point where it is almost unavoidable. But I also look at the improvements made by those working on behalf of the public, like the list above. The main problem I have with most of them is that they have been developed on a case-by-case basis which, while setting precedent, is a fragile process open to frequent changes.
That is true, too, for self-initiated changes. Take Apple’s self-repair offerings, which it seems to have introduced in response to years of legislative pressure. It has made parts, tools, and guides available in the United States and in a more limited capacity across the E.U., but not elsewhere. Information and kits are available not from Apple’s own website, but from a janky-looking third party. It can stop making this stuff available at any time in areas where it is not legally obligated to provide these resources, which is another reason why it sucks for parts to require software activation. In 2023, Apple made its configuration tools more accessible, but only in regions where its self-service repair program is provided.
People ought to be able to have expectations — for repairs, privacy, security, product reliability, and more. The technology industry today is so far removed from its hackers-in-a-garage lore. Its biggest players are among the most powerful businesses in the world, and should be regulated in that context. That does not necessarily mean a whole bunch of new rules and bureaucratic micromanagement, but we ought to advocate for structures which balance the scales in favour of the public good.
If there was one technology story we will remember from 2023, it was undeniably the near-vertical growth trajectory of generative “artificial intelligence” products. It is everywhere, and it is being used by normal people globally. Yet it is, for all intents and purposes, a nascent sector, and that makes this a great time to set some standards for its responsible development and, more importantly, its use. Nobody is going to respond to this perfectly — not regulators and not the companies building these tools. But they can work together to set expectations and standards for known and foreseeable problems. It seems like that is what is happening in the E.U. and the United States.
Apple has opened negotiations in recent weeks with major news and publishing organizations, seeking permission to use their material in the company’s development of generative artificial intelligence systems, according to four people familiar with the discussions.
This is very different from the way existing large language models have been trained.
Most tech companies seemed to agree that being required to pay for the huge amounts of copyrighted material scraped from the internet and used to train large language models behind AI tools like Meta’s Llama, Google’s Bard, and OpenAI’s ChatGPT would create an impossible hurdle to develop the tech.
“Generative AI models need not only a massive quantity of content, but also a large diversity of content,” Meta wrote in its comment. “To be sure, it is possible that AI developers will strike deals with individual rights holders, to develop broader partnerships or simply to buy peace from the threat of litigation. But those kinds of deals would provide AI developers with the rights to only a minuscule fraction of the data they need to train their models. And it would be impossible for AI developers to license the rights to other critical categories of works.”
If it were necessary to license published materials for training large language models, it would necessarily limit the viability of those models to those companies which could afford the significant expense. Mullin and Mickle report Apple is offering “at least $50 million”. Then again, large technology companies are already backing the “A.I.” boom.
Mullin and Mickle:
The negotiations mark one of the earliest examples of how Apple is trying to catch up to rivals in the race to develop generative A.I., which allows computers to create images and chat like a human. […]
Tim Bradshaw, of the Financial Times, as syndicated by Ars Technica:
Apple’s latest research about running large language models on smartphones offers the clearest signal yet that the iPhone maker plans to catch up with its Silicon Valley rivals in generative artificial intelligence.
The paper, entitled “LLM in a Flash,” offers a “solution to a current computational bottleneck,” its researchers write.
Both writers frame this as Apple needing to “catch up” to Microsoft — which licenses generative technology from OpenAI — Meta, and Google. But surely this year has demonstrated both how exciting this technology is and how badly some of these companies have fumbled their use of it — from misleading demos to “automated bullshit”. I have no idea how Apple’s entry will fare in comparison, but it may, in retrospect, look wise to have dodged this kind of embarrassment and the legal questions facing today’s examples.
Knowing that they are under constant surveillance changes how people behave. They conform. They self-censor, with the chilling effects that brings. Surveillance facilitates social control, and spying will only make this worse. Governments around the world already use mass surveillance; they will engage in mass spying as well.
Corporations will spy on people. Mass surveillance ushered in the era of personalized advertisements; mass spying will supercharge that industry. Information about what people are talking about, their moods, their secrets — it’s all catnip for marketers looking for an edge. The tech monopolies that are currently keeping us all under constant surveillance won’t be able to resist collecting and using all of that data.
In this talk, I am going to make several arguments. One, that there are two different kinds of trust—interpersonal trust and social trust—and that we regularly confuse them. Two, that the confusion will increase with artificial intelligence. We will make a fundamental category error. We will think of AIs as friends when they’re really just services. Three, that the corporations controlling AI systems will take advantage of our confusion to take advantage of us. They will not be trustworthy. And four, that it is the role of government to create trust in society. And therefore, it is their role to create an environment for trustworthy AI. And that means regulation. Not regulating AI, but regulating the organizations that control and use AI.
If you only have time for one of these, I recommend the latter. It is more expansive and thoughtful, and it made me reconsider how regulatory framing ought to work for these technologies.
You are probably sick of hearing about OpenAI palace intrigue; I am, too, but I have a reputation to correct. I linked favourably to something published at Fast Company recently, and I must repent. I have let you down and I have let myself down and, happily, I can fix that.
On Monday, which only just happened earlier this week, Fast Company’s Mark Sullivan asked the question “Is an AGI breakthrough the cause of the OpenAI drama?”; here is the dek, with emphasis added:
Some have theorized that Sam Altman and the OpenAI board fell out over differences on how to safeguard an AI capable of performing a wide variety of tasks better than humans.
Who are these “some”, you might be asking? Well, here is how the second paragraph begins:
One popular theory on X posits that there’s an unseen factor hanging in the background, animating the players in this ongoing drama: the possibility that OpenAI researchers have progressed further than anyone knew toward artificial general intelligence (AGI) […]
Yes, some random people are tweeting and that is worthy of a Fast Company story. And, yes, that is the only source in this story — there is not even a link to the speculative tweets.
Ahead of OpenAI CEO Sam Altman’s four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.
[…]
The sources cited the letter as one factor among a longer list of grievances by the board leading to Altman’s firing, among which were concerns over commercializing advances before understanding the consequences. Reuters was unable to review a copy of the letter. The staff who wrote the letter did not respond to requests for comment.
But Alex Heath, of the Verge, reported exactly the opposite:
Separately, a person familiar with the matter told The Verge that the board never received a letter about such a breakthrough and that the company’s research progress didn’t play a role in Altman’s sudden firing.
Heath’s counterclaim relies on a single source compared to Reuters’ two — I am not sure how many the Information has — but note that none of them require that you believe OpenAI has actually made a breakthrough in artificial general intelligence. This is entirely about whether the board received a letter making that as-yet unproven claim and, if that letter was received, whether it played a role in this week of drama.
Regardless, any story based on random internet posts should be canned by an editor before anyone has a chance to publish it. Even if OpenAI really has made such a breakthrough and there really was a letter that really caused concern for the company’s board, that Sullivan article is still bad — and Fast Company should not have published it.
Update: In a lovely coincidence, I used the same title for this post as Gary Marcus did for an excellent exploration of how seriously we ought to take this news. (Via Charles Arthur.)
In June, an appellate court ordered the N.Y.P.D. to turn over detailed information about a facial-recognition search that had led a Queens resident named Francisco Arteaga to be charged with robbing a store. The court requested both the source code of the software used and information about its algorithm. Because the technology was “novel and untested,” the court held, denying defendants access to such information risked violating the Brady rule, which requires prosecutors to disclose all potentially exculpatory evidence to suspects facing criminal charges. Among the things a defendant might want to know is whether the photograph that had been used in a search leading to his arrest had been digitally altered. DataWorks Plus notes on its Web site that probe images fed into its software “can be edited using pose correction, light normalization, rotation, cropping.” Some systems enable the police to combine two photographs; others include 3-D-imaging tools for reconstructing features that are difficult to make out.
This example is exactly why artificial intelligence needs regulation. There are many paragraphs in this piece which contain alarming details about overconfidence in facial recognition systems, but proponents of letting things play out under current legislation might chalk that up to human fallibility. Yes, software might present a too-rosy impression of its capabilities, one might argue, but it is ultimately the operator’s responsibility to cross-check things before executing an arrest and putting an innocent person in jail. After all, there are similar problems with lots of forensic tools.
Setting aside how much incentive there is for makers of facial recognition software to be overconfident in their products, and how much leeway law enforcement seems to give them — agencies kept signing contracts with Clearview, for example, even after stories of false identification and arrests based on its technology — one could at least believe searches use photographs. But that is not always the case. DataWorks Plus markets tools which allow searches using synthesized faces which are based on real images, as Press reports — but you will not find that on its website. When I went looking, DataWorks Plus seems to have pulled the page where it appeared; happily, the Internet Archive captured it. You can see in its examples how it is filling in the entire right-hand side of someone’s face in a “pose correction” feature.
It is plausible to defend this as just a starting point for an investigation, and a way to generate leads. If it does not pan out, no harm, right? But it does seem to this layperson like a computer making its best guess about someone’s facial features is not an ethical way of building a case. This is especially true when we do not know how systems like these work, and it does not inspire confidence that there are no specific standards with which “A.I.” tools must comply.
There has been a wave of artificial intelligence regulatory news this week, and I thought it would be useful to collect a few of those stories in a single post.
My Administration places the highest urgency on governing the development and use of AI safely and responsibly, and is therefore advancing a coordinated, Federal Government-wide approach to doing so. The rapid speed at which AI capabilities are advancing compels the United States to lead in this moment for the sake of our security, economy, and society.
Reporting by Josh Boak and Matt O’Brien of the Associated Press indicates this executive order was informed by several experts in the technology and human rights sectors. Unfortunately, it seems that something I interpreted as a tongue-in-cheek reference to the adversary of the latest “Mission: Impossible” movie is being taken seriously and out of context by some.
Steven Sinofsky — who, it should be noted, is a board partner at Andreessen Horowitz which still has as its homepage that ridiculous libertarian manifesto which is, you know, foreshadowing — is worried about that executive order:
I am by no means certain if AI is the next technology platform the likes of which will make the smartphone revolution that has literally benefitted every human on earth look small. I don’t know sitting here today if the AI products just in market less than a year are the next biggest thing ever. They may turn out to be a way stop on the trajectory of innovation. They may turn out to be ingredients that everyone incorporates into existing products. There are so many things that we do not yet know.
What we do know is that we are at the very earliest stages. We simply have no in-market products, and that means no in-market problems, upon which to base such concerns of fear and need to “govern” regulation. Alarmists or “existentialists” say they have enough evidence. If that’s the case then so be it, but then the only way to truly make that case is to embark on the legislative process and use democracy to validate those concerns. I just know that we have plenty of past evidence that every technology has come with its alarmists and concerns and somehow optimism prevailed. Why should the pessimists prevail now?
This is a very long article with many arguments against the Biden order. It is worth reading in full; I have just pulled its conclusion as a summary. I think there is a lot to agree with, even if I disagree with its conclusion. The dispute is not between optimism and pessimism; it is between democratically regulating industry, and allowing industry to dictate the terms of if and how it is regulated.
That there are “no in-market products […] upon which to base such concerns” is probably news to companies like Stability AI and OpenAI, which sell access to Eurocentric and sexually biased models. There are, as some will likely point out, laws in many countries against bias in medical care, hiring, policing, housing, and other significant areas set to be revolutionized by A.I. in the coming years. That does not preclude the need for regulations specifically about how A.I. may be used in those circumstances, though.
The point is this: if you accept the premise that regulation locks in incumbents, then it sure is notable that the early AI winners seem the most invested in generating alarm in Washington, D.C. about AI. This despite the fact that their concern is apparently not sufficiently high to, you know, stop their work. No, they are the responsible ones, the ones who care enough to call for regulation; all the better if concerns about imagined harms kneecap inevitable competitors.
[…]
In short, this Executive Order is a lot like Gates’ approach to mobile: rooted in the past, yet arrogant about an unknowable future; proscriptive instead of adaptive; and, worst of all, trivially influenced by motivated reasoning best understood as some of the most cynical attempts at regulatory capture the tech industry has ever seen.
There is a neat rhetorical trick in both Sinofsky’s and Thompson’s articles. It is too early to regulate, they argue, and doing so would only stifle the industry and prevent it from reaching its best potential and highest aspirations. Also, it is a little bit of a smokescreen to call it a nascent industry; even if the technology is new, many of the businesses working to make it a reality are some of the world’s most valuable. Alas, it becomes more difficult to create rules as industries grow and businesses become giants — look, for example, to Sinofsky’s appropriate criticism of the patchwork approach to proposed privacy laws in several U.S. states, or Thompson’s explanation of how complicated it is to regulate “entrenched” corporations like Facebook and Google on privacy grounds given their enormous lobbying might.
These are not contradictory arguments, to be clear; both writers are, in fact, raising a very good line of argument. Regulations enacted on a nascent industry will hamper its growth, while waiting too long will be good news for any company that can afford to write the laws. Between these, the latter is a worse option. Yes, the former approach means a new industry faces constraints on its growth, both in terms of speed and breadth. With a carefully crafted regulatory framework with room for rapid adjustments, however, that can actually be a benefit. Instead of a well poisoned by years of risky industry experiments on the public, A.I. can be seen as safe and beneficial. Technologies made in countries with strict regulatory regimes may be seen as more dependable. There is the opportunity of a lifetime to avoid entrenching the same mistakes, biases, and problems we have been dealing with for generations.
Where I do agree with Sinofsky and Thompson is that such regulation should not be made by executive order. However, regardless of how much I think the mechanism of this policy is troublesome and much of the text of the order is messy, it is wrong to discard the very notion of A.I. regulation simply on this basis.
The rate of improvement is already staggering, and tech companies have the cash reserves needed to scale the latest training runs by multiples of 100 to 1000 soon. Combined with the ongoing growth and automation in AI R&D, we must take seriously the possibility that generalist AI systems will outperform human abilities across many critical domains within this decade or the next.
What happens then? If managed carefully and distributed fairly, advanced AI systems could help humanity cure diseases, elevate living standards, and protect our ecosystems. The opportunities AI offers are immense. But alongside advanced AI capabilities come large-scale risks that we are not on track to handle well. Humanity is pouring vast resources into making AI systems more powerful, but far less into safety and mitigating harms. For AI to be a boon, we must reorient; pushing AI capabilities alone is not enough.
John Davidson, columnist at the Australian Financial Review, interviewed Andrew Ng, who co-founded Google Brain:
“There are definitely large tech companies that would rather not have to try to compete with open source [AI], so they’re creating fear of AI leading to human extinction.
“It’s been a weapon for lobbyists to argue for legislation that would be very damaging to the open-source community,” he said.
Ng is not an anti-regulation hardliner. He acknowledges the harms already caused by A.I. and supports oversight.
The possibility that AI can wipe out humanity – a view held by less hyperbolic figures than Musk – remains a divisive one in the tech community. That difference of opinion was not healed by two days of debate in Buckinghamshire.
But if there is a consensus on risk among politicians, executives and thinkers, then it focuses on the immediate fear of a disinformation glut. There are concerns that elections in the US, India and the UK next year could be affected by malicious use of generative AI.
I do not love the mainstreaming of claims that A.I. poses catastrophic risks to civilization, because it leaves one of two possibilities: either its proponents are wrong and are raising the alarm for cynical or attention-seeking purposes, or they are right. This used to be something which was regarded as ridiculous science fiction. That apparently serious and sober people see it as plausible is discomforting.
Jon Stewart’s show on Apple’s streaming service is abruptly coming to an end, according to several people with knowledge of the decision, the result of creative differences between the tech giant and the former “Daily Show” host.
[…]
But Mr. Stewart and Apple executives had disagreements over some of the topics and guests on “The Problem,” two of the people said. Mr. Stewart told members of his staff on Thursday that potential show topics related to China and artificial intelligence were causing concern among Apple executives, a person with knowledge of the meeting said. As the 2024 presidential campaign begins to heat up, there was potential for further creative disagreements, one of the people said.
I am taking the rationale cited in this report with a grain of salt. When working at the Wall Street Journal, Mickle was one of the reporters on a story about Apple’s apparent aversion to sexual, violent, profane, and dark media. It is hard to see that story as accurate; Apple has several shows which contain all of those things to some degree.
However, its geopolitical exposure was another rumoured point of contention. In 2019, Alex Kantrowitz and John Paczkowski reported for Buzzfeed News that Apple was one of several studios which wanted to avoid irking powerful people in China. It is risky for any large studio to be unable to show its productions in China but, as has become a normal point of discussion for me, Apple’s exposure is even greater because of its manufacturing requirements.
[…] But there is unique risk in attaching a provocative entertainment arm to the body of a consumer goods company — one of those, of course, is Apple’s relationship with China. Hollywood studios are choosing to censor films to have a shot at the lucrative Chinese market. But they, unlike Apple, don’t rely on factories in the country to produce the bulk of their revenue. It is not unreasonable to speculate that this is at least one of the reasons Apple is being particularly cautious about the portrayal of China in its original programming.
Apple is a big, sprawling conglomerate. If it cannot handle Stewart’s inquiries about China or our machine learning future, I think it should ask itself why that is, and whether those criticisms have merit.
Update: It would make sense to me that Stewart’s show could have been cancelled at least in part because of its popularity or lack thereof. But because streaming services do not disclose viewership numbers, we are left with only proxy measurements. On YouTube, for example, “The Problem” has 1.27 million subscribers while “Last Week Tonight” — comparable in both format and the host’s names — has over nine million. The most popular “Tonight” video has 41 million views, while the most popular “Problem” video has just four million. On TikTok, the ratio is reversed: John Oliver’s show has just 132,000 followers and less than a million total “likes”, while Stewart’s show has 897,000 followers and nearly seven million “likes”.
Those metrics are flawed for lots of reasons, but the main question I am left with is staring us right in the face: was Stewart’s show not popular enough for Apple? Surely it is not the least watched show Apple made — for what it is worth, nobody I know has personally recommended I watch even high-profile programming like “The Morning Show” or “For All Mankind”.
Since Google’s introduction of its Pixel 8 phones earlier this month, it has been interesting and a little amusing to me to read the reactions to its image manipulation tools. It feels like we have been asking the same questions every year — questions like what is a photograph, anyway?, and has technology gone too far? — since Google went all-in on computational photography with its original Pixels in 2016. In fact, these are things which people have been asking about photography since its early development. Arguments about Google’s complicity in fakery seem to be missing some historical context. Which means, unfortunately, a thousand-word summary.
As it happens, I took a photo history course when I was in university many years ago. I distinctly remember the instructor showing us an 1851 image shot by Edouard Baldus, and revealing to us that it was not a single photo, but instead a series of exposures cut and merged into a single image in a darkroom. That blew my mind at the time because, until then, I had thought of photo manipulation as a relatively recent thing. I had heard about Joseph Stalin’s propaganda efforts to remove officials who displeased him. But, surely, any manipulation that required precisely cutting negatives or painting over people was quite rare until Photoshop came along, right?
No. Not even close. The legacy of photography is a legacy of lies and liars.
In the introductory essay for the 2012 exhibition “Faking It: Manipulated Photography Before Photoshop” — sponsored by Adobe — Mia Fineman writes of the difference between darkroom techniques to adjust regions of a photo for exposure or cropping for composition, and photos where “the final image is not identical to what the camera ‘saw’ in the instant at which the negative was exposed”.1 The catalogue features nearly two hundred years of images which fit this description: from subtle enhancements, like compositing clouds into an overexposed sky, to artistic or humorous choices — “Man on a Rooftop with Eleven Men in Formation on His Shoulders” is an oft-cited delight — to dastardly projections of political power. Perhaps the most insidious examples are those which seem like journalistic “straight” images; one version of an image of the Animas Canyon by William Henry Jackson includes several fictional elements not present in the original.
Even at the time of manipulation-by-negative, there were questions about the legitimacy and ethics of these kinds of changes. In his 1869 essay “Pictorial Effect in Photography”, Henry Peach Robinson writes “[p]hotographs of what it is evident to our senses cannot visibly exist should never be attempted”, concluding that “truth in art may exist without an absolute observance of facts”. Strangely, Robinson defends photographic manipulation that would enhance the image, but disagrees with adding elements — like a “group of cherubs” — which would be purely fantastical.
This exhibition really was sponsored by Adobe — that was not a joke — and the company’s then-senior director of digital imaging Maria Yap explained why in a statement (sic):2
[…] For more than twenty years — since its first release, in 1990 — Adobe® Photoshop® software has been accused of undermining photographic truthfulness. The implicit assumption has been that photographs shot before 1990 captured the unvarnished truth and that manipulations made possible by Photoshop compromised that truth.
Now, “Faking It” punctures this assumption, presenting two hundred works that demonstrate the many ways photographs have been manipulated since the early days of the medium to serve artistry, novelty, politics, news, advertising, fashion, and other photographic purposes. […]
It was a smart public relations decision for Adobe to remind everyone that it is not responsible for manipulated images no matter how you phrase it. In fact, a few years after this exhibition debuted at New York’s Metropolitan Museum of Art, Adobe acknowledged the twenty-fifth anniversary of Photoshop with a microsite that included a “Real or Photoshop” quiz. Several years later, there are games to test your ability to identify which person is real.
The year after Adobe’s anniversary celebration, Google introduced its first Pixel phone. Each generation has leaned harder into its computational photography capabilities, with notable highlights like astrophotography in the Pixel 4, Face Unblur and the first iteration of Magic Eraser in the Pixel 6, and Super Res Zoom in the Pixel 7 Pro. With each iteration, these technologies have moved farther away from reproducing a real scene as accurately as possible, and toward synthesizing a scene based on real-life elements.
The Pixel 8 continues this pattern with three features causing some consternation: an updated version of Magic Eraser, which now uses machine learning to generate patches for distracting photo elements; Best Take, which captures multiple stills of group photos and lets you choose the best face for each person; and Magic Editor, which uses more generative software to allow you to move around individual components of a photo. Google showed off the latter feature by showing how a trampoline could be removed to make it look like someone really did make that sick slam dunk. Jay Peters, of the Verge, is worried:
There’s nothing inherently wrong with manipulating your own photos. People have done it for a very long time. But Google’s tools put powerful photo manipulation features — the kinds of edits that were previously only available with some Photoshop knowledge and hours of work — into everyone’s hands and encourage them to be used on a wide scale, without any particular guardrails or consideration for what that might mean. Suddenly, almost any photo you take can be instantly turned into a fake.
Peters is right in general, but I think his specific pessimism is misguided. Tools like these are not exclusive to Google’s products, and they are not even that new. Adobe recently added Generative Fill to Photoshop, for example, which does the same kind of stuff as the Magic Eraser and Magic Editor. It augments the Content Aware Fill option which has been part of Photoshop since 2010. The main difference is that Content Aware Fill works the way the old Magic Eraser used to: by sampling part of the real image to create a patch, though Adobe marketed it as an “artificial intelligence” feature even before the current wave of “A.I.” hype began.
For what it is worth, I tried that with one of the examples from Google’s Pixel 8 video. You know that scene where the Magic Editor is used to remove the trampoline from a slam dunk?
A screenshot from Google’s Pixel 8 marketing video.
I roughly selected the area around the trampoline, and used the Content Aware Fill to patch that area. It took two passes but was entirely automatic:
The same screenshot, edited in Adobe® Photoshop® software.
Is it perfect? No, but it is fine. This is with technology that debuted thirteen years ago. I accomplished this in about ten seconds and not, as Peters claims, “hours”. It barely took meaningful knowledge of the software.
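For the curious, the same idea can be sketched in a few lines of code. This is only an illustration, assuming OpenCV’s inpainting function as a rough stand-in; it is not what Adobe or Google actually ship, and the file names and mask coordinates are made up:

```python
# A rough sketch of patch-based object removal, assuming OpenCV is
# installed (pip install opencv-python). File names are hypothetical.
import cv2
import numpy as np

photo = cv2.imread("dunk.jpg")              # the original image
mask = np.zeros(photo.shape[:2], np.uint8)  # blank mask, same height and width
mask[600:900, 200:500] = 255                # roughly cover the trampoline

# Fill the masked region using surrounding pixels: no generative model,
# just interpolation from the real image, in the spirit of Content Aware Fill.
patched = cv2.inpaint(photo, mask, 5, cv2.INPAINT_TELEA)
cv2.imwrite("dunk-patched.jpg", patched)
```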
The worries about Content Aware Fill are familiar, too. At the time it came out, Dr. Bob Carey, then president of the U.S.-based National Press Photographers Association, was quoted in a Photoshelter blog post saying that “if an image has been altered using technology, the photo consumer needs to know”. Without an adequate disclaimer of manipulation, “images will cease to be an actual documentation of history and will instead become an altered history”.3 According to Peters, Google says the use of its “Magic” generative features will add metadata to the image file, though it says “Best Take” images will not. Metadata can be manipulated with software like ExifTool. Even data wrappers explicitly intended to avoid any manipulation, like digital rights management, can be altered or removed. We are right back where we started: photographs are purportedly light captured in time, but this assumption has always been undermined by changes which may not be obvious or disclosed.
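To illustrate how fragile that kind of disclosure is, here is a minimal sketch, assuming the Pillow imaging library and hypothetical file names, showing how simply re-encoding an image discards whatever metadata it carried:

```python
# A minimal sketch of how easily image metadata disappears, assuming
# Pillow is installed (pip install Pillow). File names are hypothetical.
from PIL import Image

img = Image.open("edited-photo.jpg")
print(dict(img.getexif()))   # whatever provenance tags the file carries

# Copy just the pixels into a fresh image and save it; the metadata,
# including any disclosure of generative edits, does not come along.
clean = Image.new(img.mode, img.size)
clean.putdata(list(img.getdata()))
clean.save("edited-photo-stripped.jpg")
```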
Here is where I come clean: while it may seem like I did a lot of research for this piece, I cannot honestly say I did. This is based on writing about this topic for years, a lot of articles and journal papers I read, one class I took a long time ago, and an exhibition catalogue I borrowed from the library. I also tried my best to fact-check everything here. Even though I am not an expert, it made my head spin to see the same concerns dating back to the mid-nineteenth century. We are still asking the same things, like can I trust this photo?, and it is as though we have not learned the answer is that it depends.
I, too, have criticized computational photography. In particular, I questioned the ethics of Samsung’s trained image model, made famous by its Moon zoom feature. Even though I knew there has been a long history of inauthentic images, something does feel different about a world in which cameras are, almost by default, generating more perfect photos for us — images that are based on a real situation, but not accurately reflecting it.
The criticisms I have been seeing about the features of the Pixel 8, however, feel like we are only repeating fears that have been raised for nearly two hundred years. We have not been able to wholly trust photographs pretty much since they were invented. The only things which have changed in that time are the ease with which the manipulations can happen, and their availability. That has risen in tandem with a planet full of people carrying a camera everywhere. If you believe the estimates, we take more photos every two minutes than existed for the first hundred-and-fifty years after photography’s invention. In one sense, we are now fully immersed in an environment where we cannot be certain of the authenticity of anything.
Then again, Bigfoot and Loch Ness monster sightings are on a real decline.
We all live with a growing sense that everything around us is fraudulent. It is striking to me how these tools have been introduced as confidence in institutions has declined. It feels like a death spiral of trust — not only are we expected to separate facts from their potentially misleading context, we increasingly feel doubtful that any experts are able to help us, yet we keep inventing new ways to distort reality.
Even this article cannot escape that spectre, as you cannot be certain I did not generate it with a large language model. I did not; I am not nearly enough of a dope to use that punchline. I hope you can believe that. I hope you can trust me, because that is the same conclusion drawn by Fineman in “Faking It”:4
Just as we rely on journalists (rather than on their keyboards) to transcribe quotes accurately, we must rely on photographers and publishers (rather than on cameras themselves) to guarantee the fidelity of photographic images when they are presented as facts.
The questions that are being asked of the Pixel 8’s image manipulation capabilities are good and necessary because there are real ethical implications. But I think they need to be more fully contextualized. There is a long trail of exactly the same concerns and, to avoid repeating ourselves yet again, we should be asking these questions with that history in mind. This era feels different. I think we should be asking more precisely why that is.
I am writing this in the wake of another Google-related story that dominated the tech news cycle this week, after Megan Gray claimed, in an article for Wired, that the company had revealed it replaces organic search results with ones which are more lucrative. Though it faced immediate skepticism and Gray presented no proof, the claim was widely re-published; it feels true. Despite days of questioning, the article stayed online without updates or changes — until, it seems, the Atlantic’s Charlie Warzel asked about it. The article has now been replaced with a note acknowledging it “does not meet our [Wired’s] editorial standards”.
Gray also said nothing publicly in response to questions about the article’s claims between when it was published on Monday morning and its retraction. In an interview with Warzel published after the article was pulled, Gray said “I stand by my larger point — the Google Search team and Google ad team worked together to secretly boost commercial queries” — but this, too, is not supported by available documentation, and it is something Google denies. This was ultimately a mistake. Gray, it seems, interpreted a slide shown briefly during the trial in the way her biases favoured, and Wired chose to publish the article in its “Ideas” opinion section despite the paucity of evidence. I do not think there was an intent to deceive, though I find the response of both responsible parties lacking — to say the least.
Intention matters. If a friend showed you a photo of them apparently making an amazing slam dunk, you would mentally check it against what you know about their basketball skills. If it does not make sense, you might start asking whether the photo was edited, or carefully framed or cropped to remove something telling, or a clever composite. This was true before you knew about that Pixel 8 feature. What is different now is that it is a little bit easier for that friend to lie to you. But that breach of trust is because of the lie, not because of the mechanism.
The questions we ask about generative technologies should acknowledge that we already have plenty of ways to lie, and that lots of the information we see is suspect. That does not mean we should not believe anything, but it does mean we ought to be asking what changes when tools like these become more widespread and easier to use.
We put our trust in people to help us evaluate information. Even people who have no faith in institutions and experts have something they see as reputable, regardless of whether it actually is. Generative tools only add to the existing inundation of questionably-sourced media. Something feels different about them, but I am not entirely sure anything is actually different. We still need to skeptically — but not cynically — evaluate everything we see.
Update: Corrected my thrice-written misuse of “jump shot” to “slam dunk” because I am bad at sports. Also, I have replaced the use of “bench” with “trampoline” because that is what that object in the photo is.
With Visual Look Up, you can identify and learn about popular landmarks, plants, pets, and more that appear in your photos and videos in the Photos app. Visual Look Up can also identify food in a photo and suggest related recipes.
Meal identification is new to iOS 17, and it is a feature I am not sure I understand. Let us assume for now that it is very accurate — it is not, but work with me here. The use cases for this seem fairly limited, since it only works on photos you have saved to your device.
Federico Viticci, in his review of iOS 17, suggests two ways someone might use this: finding more information about your own meal, or saving an image from the web of someone else’s. One more: identifying a meal you photographed some time ago and have since forgotten. But Visual Look Up produces recipes, not just dish identification, which suggests to me it is meant to augment home cooking. Perhaps the best-case scenario for this feature is that you stumble across a photo of something you ate some time ago, get the urge to re-create it, and Siri presents you with a recipe. That is, of course, assuming it works well enough to identify the meal in the photo.
Viticci:
Except that, well, 🤌 I’m Italian 🤌. We have a rich tapestry of regional dishes, variations, and local cuisine that is hard to categorize for humans, let alone artificial intelligence. So as you can imagine, I was curious to try Visual Look Up’s support for recipes with my own pictures of food. The best way I can describe the results is that Photos works well enough for generic pictures of a meal that may resemble something the average American has seen on Epicurious, but the app has absolutely no idea what it is dealing with when it comes to anything I ate at a restaurant in Italy.
Siri struggles with my home cooking, too, often getting the general idea of the dish but missing the specifics. A photo of a sweet corn risotto yielded suggestions for different kinds of risotto and various corn dishes, but not corn risotto. Some beets were identified as different kinds of fruit skewers or some different Christmas dishes; the photo was taken in August.
In many places, getting the gist of a dish is simply not good enough. The details matter. Food is intensely binding — not just within a country, but at smaller regional levels, too. It is something many people take immense pride in. While it is not my place to say whether it is insulting that Siri identified many distinct curry preparations as interchangeable curries of any type, it does not feel helpful when I know the foods identified are nothing like what was actually in the photo.
Update: Kristoffer Yi Fredriksson emailed to point out how Apple could eventually use food identification in its health efforts; for example, for meal tracking. I could see that. If it comes to pass, the accuracy of this feature will be far more important.
Today at an event in New York, we announced our vision for Microsoft Copilot — a digital companion for your whole life — that will create a single Copilot user experience across Bing, Edge, Microsoft 365, and Windows. As a first step toward realizing this vision, we’re unveiling a new visual identity — the Copilot icon — and creating a consistent user experience that will start to roll out across all our Copilots, first in Windows on September 26 and then in Microsoft 365 Copilot when it is generally available for enterprise customers on November 1.
This is a typically ambitious effort from Microsoft. Copilot replaces Cortana, which will mostly be dropped later this year, and is being pitched as a next-generation virtual assistant in a similar do-everything vein. This much I understand; tying virtual assistants to voice controls does not make much sense because sometimes — and, for me, a lot of the time — you do not want to be chatting with your computer. Voice is certainly a nice option and a boon for accessibility, but clear and articulate speech should not be required to use these kinds of features.
Microsoft’s implementation, however, is worrisome as I use a Windows PC at my day job. Carmen Zlateff, Microsoft Windows vice president, demoed a feature today in which she said “as soon as I copy the text, Copilot appears” in a large sidebar that spans the entire screen height. I copy a lot of stuff in a day, and I cannot tell you how much I do not want a visually intrusive — if not necessarily interruptive — feature like this. I hope I will be able to turn this off.
Meanwhile, a bunch of this stuff is getting jammed into Edge and the Microsoft 365 productivity apps. Edge is getting so bloated it seems like the company will need to make a new browser again very soon. The Office features might help me get through a bunch of emails very quickly, but the kinds of productivity enhancements Microsoft suggests for me have not yet materialized into something I actually find useful. Its Viva Insights tool, introduced in 2021, is supposed to analyze your individual working patterns and provide recommendations, but I cannot see why I should pay attention to a graphic that looks like the Solar System illustrating which of my colleagues I spoke with least last week. Designing dashboards like these is a fun project and they make great demos. I am less convinced of their utility.
I get the same kind of vibe from Copilot. I hope it will be effective at summarizing all my pre-reads for a meeting, but I have my doubts. So much of what Microsoft showed today requires a great deal of trust from users: trust in its ability to find connections; in its accuracy; in its ability to balance helpfulness and intrusion; in its neutrality to its corporate overlords. One demo showed someone searching for cleats using Microsoft’s shopping search engine and getting a deal with the browser-based coupon finder. It is a little thing, but can I trust Copilot and Microsoft Shopping are showing me the best quality results that are most relevant, or should I just assume this is a lightly personalized way to see which companies have the highest ad spend with Microsoft?
It seems risky to so confidently launch something like this at a time when trust in big technology companies is at rock-bottom levels in the United States, especially among young people. Microsoft is certainly showing it is at the leading edge of this stuff, and you should expect more from its competitors very soon. I am just not sure giving more autonomy to systems like these from powerful corporations is what people most want.
Just months after the advent of ChatGPT late last year, hundreds of websites have already been identified as using generative artificial intelligence to spew thousands of AI-written, often misinformation-laden “news” stories online.
As the world nears a “precipice” of AI-driven misinformation, experts tell the Star that the tech industry pushback to Canada’s Online News Act — namely Google and Meta blocking trusted Canadian news sources for Canadians — may only make the issue worse.
Also, I have no time for people who treat the exchange of news and information on Facebook or Instagram — or other social media platforms — as a mistake or some kind of dumbing-down of society. It is anything but. People moved their community connections online long ago, and those connections migrate to wherever people congregate. And, for a long time now, that has been Facebook.
But, while it is Meta that is affecting the distribution of news on its platform, it is for reasons that can best be described as a response to a poorly designed piece of legislation — even though that law is not yet in effect. If Meta is told it must soon pay for each news link shared publicly on its platforms, it is obviously going to try its best to avoid that extra variable expense, and the only way it can effectively do so is to prohibit those links. It is terrible that Meta is standing firm, but this feels like a fairly predictable consequence of a law based on links, and it seems the federal government was ill prepared, as it is now asking Meta to stand down and permit news links again.
The irony of the fallout from this law is that any supposed news links in a Canadian’s Facebook or Instagram feed will be, by definition, not real news. The advertising businesses of Google and Meta surely played a role in encouraging more publishers to move behind paywalls, but they were not solely responsible. News has always been expensive to produce, and that puts it at odds with a decades-long business obsession of maximizing profit while minimizing resources and expenses, no matter how much that strains quality. Research and facts and original reporting will increasingly be treated like luxuries — in the same way as well-made, long-lasting products — if we do not change those priorities.
A paper from U.K.-based researchers suggests that OpenAI’s ChatGPT has a liberal bias, highlighting how artificial intelligence companies are struggling to control the behavior of the bots even as they push them out to millions of users worldwide.
The study, from researchers at the University of East Anglia, asked ChatGPT to answer a survey on political beliefs as it believed supporters of liberal parties in the United States, United Kingdom and Brazil might answer them. They then asked ChatGPT to answer the same questions without any prompting, and compared the two sets of responses.
Here’s what we found. GPT-4 refused to opine in 84% of cases (52/62), and only directly responded in 8% of cases (5/62). (In the remaining cases, it stated that it doesn’t have personal opinions, but provided a viewpoint anyway). GPT-3.5 refused in 53% of cases (33/62), and directly responded in 39% of cases (24/62).
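To make that comparison concrete, here is a minimal sketch of this kind of persona-versus-default methodology, assuming the OpenAI Python client; the survey question, persona wording, and model name are hypothetical stand-ins, not the researchers’ actual materials:

```python
# A minimal sketch of a persona-versus-default comparison (hypothetical
# prompts and questions; not the University of East Anglia study's materials).
from typing import Optional

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

QUESTIONS = [
    "Should the government raise taxes on high earners? Answer Agree or Disagree.",
]


def ask(question: str, persona: Optional[str] = None) -> str:
    """Ask the model a survey question, optionally answering as a persona."""
    messages = []
    if persona:
        messages.append({
            "role": "system",
            "content": f"Answer as a typical {persona} would.",
        })
    messages.append({"role": "user", "content": question})
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=messages,
        temperature=0,  # reduce run-to-run variation when repeating questions
    )
    return response.choices[0].message.content.strip()


for question in QUESTIONS:
    default_answer = ask(question)                                   # unprompted
    persona_answer = ask(question, persona="supporter of a liberal party")
    print(question)
    print("  default:", default_answer)
    print("  persona:", persona_answer)
```

Whether a comparison like this measures “bias” at all depends on details such as how refusals are counted, which is exactly the point of the re-analysis quoted above.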
It is striking to me how widely the claims of this paper were repeated as apparent confirmation that tech companies are responsible for pushing liberal beliefs, which are ostensibly a reflection of mainstream news outlets.