Month: November 2023

Cam Wilson, Crikey:

People searching Adobe Stock are shown a blend of real and AI-generated images. Like “real” stock images, some are clearly staged, whereas others can seem like authentic, unstaged photography. 

This is true of Adobe Stock’s collection of images for searches relating to Israel, Palestine, Gaza and Hamas. For example, the first image shown when searching for Palestine is a photorealistic image of a missile attack on a cityscape titled “Conflict between Israel and Palestine generative AI”. Other images show protests, on-the-ground conflict and even children running away from bomb blasts — all of which aren’t real.

This is also true of other search terms — for Russia and Ukraine, for September 11, and for World War II. There seem to be a couple of points of failure here. The first is that Adobe permits generated images in its stock photo collection which purport to depict real-life events; it should not. That restriction should apply to anything photorealistic, though a blanket rule would be fine too; there is no need for a computer-made watercolour of a real place or event, either.

The editors of the small number of publications running these images are also to blame if they use them in a newsworthy context. Adobe Stock has a filter to show only “editorial” images, which are specifically for news and media; as far as I can tell, there are no actual generative images when you filter to this context. The only results in Adobe’s entire stock library marked as both “generative A.I.” and “editorial” appear to me to be real photographs about generative imagery. Adobe could help prevent the use of offensive generated images in news contexts by allowing users to set default search filters — something users have been begging it to do.

A quick review of the articles shown in the screenshot posted by Wilson, however, suggests most of them used this image to demonstrate that it was generated, and that readers should be cautious about what they believe. I did see a couple of instances where it appeared without any acknowledgement that it was computer generated, and this is just one example. Still, it would be misleading to assume all uses of this image treated it as a real photo.

The past week in Apple product news has been made more interesting by the meta commentary about its October 30 presentation which was, as noted, shot on an array of iPhone 15 Pro Max,1 and edited on Macs. In the behind-the-scenes video released shortly after the presentation, it became clear there was a third layer of commentary.

Scott Simmons, ProVideo Coalition:

Once I saw that end card and realized they were going to lean into the actual production of this event video, I was actually quite surprised it wasn’t edited on Final Cut Pro, and that FCP wasn’t in the spotlight at least somewhere in the video. It really was the Premiere Pro and Resolve show. And looking at the BTS video, it does look like it was edited on Adobe Premiere Pro.

It is not that much of a revelation to me that Apple may not be using its own products wherever possible for the entire production pipeline, but it is made a little more bizarre when you consider that the behind-the-scenes look at this presentation is, itself, an advertisement for Apple’s products. Even the modern Apple keynote style and format originates, to some extent, with in-house software made for its own presentations.

Also, Apple’s October 30 presentation occurred just one week before this year’s Final Cut Pro Summit in Cupertino. I cannot speak for professional editors, but I wonder if that left some of them with a mixed message about the company’s commitment.

Apple:

Today Apple announced updates to Final Cut Pro across Mac and iPad, offering powerful new features that help streamline workflows. Final Cut Pro now includes improvements in timeline navigation and organization, as well as new ways to simplify complex edits. The apps leverage the power-efficient performance of Apple silicon along with an all-new machine learning model for Object Tracker, and export speeds are turbocharged on Mac models powered by multiple media engines. […]

I will leave it up to professionals to explain whether these updates are meaningful to their workflows. I do some light video editing for my day job in Premiere Pro, and I would love it if I could automatically colour clips in the timeline by role, for example; whether that makes a difference on a larger-scale project is a good question. Almost as good a question is what kind of changes would be required for Apple’s in-house editors to use Final Cut Pro instead of Premiere Pro.

There is good reason to ask, other than idle curiosity. The quality of Apple’s products tends to drop when few of its own employees appear to be using them. Clips, for example, is updated infrequently, and has not seen a significant new feature in about two years.


  1. I have no idea how to pluralize this. ↥︎

If you want to know what an absolutely disingenuous call for A.I. regulation looks like, look no further than Elon Musk who, in March, was among the signatories of a plea to pause development of these technologies. Thanks to Mike Masnick for this first link.

Jyoti Narayan, Krystal Hu, Martin Coulter, and Supantha Mukherjee, Reuters:

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” said the letter issued by the Future of Life Institute.

The Musk Foundation is a major donor to the non-profit, as well as London-based group Founders Pledge, and Silicon Valley Community Foundation, according to the European Union’s transparency register.

“AI stresses me out,” Musk said earlier this month. He is one of the co-founders of industry leader OpenAI and his carmaker Tesla uses AI for an autopilot system.

Even at the time, as this Reuters story acknowledges, Musk’s concerns about A.I. rang hollow since he is so eager to avoid responsibility for the autonomous systems in Teslas.

I suppose Musk must have had a change of heart because his xAI venture launched the Grok language model; I am linking to an Internet Archive capture because this website does not have permalinks. Grok was introduced like so:

Grok is designed to answer questions with a bit of wit and has a rebellious streak, so please don’t use it if you hate humor!

A unique and fundamental advantage of Grok is that it has real-time knowledge of the world via the 𝕏 platform. It will also answer spicy questions that are rejected by most other AI systems.

If you want to be pedantic, this does not breach the declarations in the open letter from the Future of Life Institute, signed by Musk, for two reasons:

  1. The letter-writers demanded a six-month pause and, to be fair, it has been seven months.

  2. The letter-writers were worried about “systems more powerful than GPT-4”. Grok, according to xAI, is somewhere between GPT-3.5 and GPT-4.

But come on; I am no fool. If Musk really was co-signing on the intent of the open letter and agreed with growing concern over a “dangerous race to ever-larger unpredictable black-box models with emergent capabilities”, it is idiotic to launch something which offers “spicy” and “rebellious” answers.

Regulators are not complete dummies. They are surely aware of efforts by A.I. businesses to write favourable laws to govern the industry. My hope is for there to be a framework which encourages cautious advancement, with more oversight required as risks become more severe. That sure sounds a lot like the E.U.’s proposal to me, and it seems quite reasonable.

Brian Heater, TechCrunch:

If you’ve been waiting on a new 27-inch to upgrade your desktop setup, maybe consider the new 24-inch iMac or a Mac Studio instead. Despite reports to the contrary, Apple this morning said it won’t be releasing a larger screen all-in-one desktop.

If you are desperate for a larger iMac — as I am — you are probably tempted to contort this update so the possibility of a bigger model with a higher-end processor remains. Maybe instead of a larger iMac, Apple is planning a larger iMac Pro, yeah? I doubt it; by explaining this much about its roadmap, it seems Apple is trying to close this door. The iMac is only an entry-level product now, and anyone who wants a bigger display or needs a higher-performance computer has to buy those things separately.

That is a shame because Apple’s integrated SoC setup means an all-in-one is the most honest expression of the computer upon which it is built. That is true of laptops, obviously, and is equally so of something like the iMac. Neither offers built-in expandability or post-purchase upgrades. Compare that to something like today’s Mac Pro, which feels like the most dishonest Apple silicon product — it is a Mac Studio in a larger case with questionable internal customization. An Mx Pro or Max iMac would feel right at home but, sadly, it appears that will not happen.

What a pisser.

Want to experience twice as fast load times in Safari on your iPhone, iPad, and Mac?

Then download Magic Lasso Adblock — the ad blocker designed for you. It’s easy to set up, blocks all ads, and doubles the speed at which Safari loads.

Magic Lasso Adblock is an efficient and high-performance ad blocker for your iPhone, iPad, and Mac. It simply and easily blocks all intrusive ads, trackers and annoyances in Safari. Just enable it to browse in bliss.

Magic Lasso Adblock screenshot

With ads and trackers cut down, common news websites load 2× faster and use less data.

Over 280,000 users rely on Magic Lasso Adblock to:

  • Improve their privacy and security by removing ad trackers

  • Block annoying cookie notices and privacy prompts

  • Double battery life during heavy web browsing

  • Lower data usage when on the go

And unlike some other ad blockers, Magic Lasso Adblock respects your privacy, doesn’t accept payment from advertisers, and is 100% supported by its community of users.

With over 5,000 five-star reviews, it’s simply the best ad blocker for your iPhone, iPad, and Mac.

Download today via the Magic Lasso website.

My thanks to Magic Lasso Adblock for sponsoring Pixel Envy this week.

Ian Betteridge wrote a good counterargument to the general thrust of Steven Sinofsky’s post about the A.I. regulation executive order:

There is a myth in Silicon Valley that innovation is somehow an unalloyed good which must always be protected and should never be regulated, lest we stop some world-shaking discovery. It doesn’t take 20 seconds of thinking – or even any understanding of history – to see that’s not true. Yes, experimentation is how we learn, how we discover new things which benefit us all. But there are no spheres of knowledge outside possibly the humanities where that is completely unregulated. If you want to do nuclear research, good luck with getting a permit to run your experimental reactor in the middle of a city. If you would like to do experimental chemistry, you’re going to be on the wrong side of the law if you do it in your garage.

All of those things “stifle innovation”. All of them are entirely justified. Given the world-changing hype – created by technology business people – around AI, they really should get used to a little stifling too.

It is possible to discount the cautionary letter signed by A.I. developers earlier this year and find fault with the executive order mechanism while also agreeing with the overall thrust of Betteridge’s response. It should be possible to nurture the incredible possibilities of A.I. touted by its biggest proponents while creating guardrails to reduce its most serious risks.

One problem with talking about “regulating A.I.” is that “A.I.” is such a vague and expansive term. Some companies have rebranded existing processes as “A.I.” with varying degrees of seriousness. Generative text and media tools are also “A.I.”, and so is facial recognition, and so is a self-driving car. I think even the most optimistic person would acknowledge autonomous vehicles need oversight, and that facial recognition carries so many privacy and surveillance implications that it desperately needs more aggressive regulation.

Natalie Sherman and Peter Hoskins, BBC News:

Sam Bankman-Fried, who once ran one of the world’s biggest cryptocurrency exchanges, has been found guilty of fraud and money laundering at the end of a month-long trial in New York.

The jury delivered its verdict after less than five hours of deliberations.

It concludes a stunning fall from grace for the 31-year-old former billionaire, once known as the “King of Crypto”, who now faces decades in jail.

Two books were recently published about Bankman-Fried’s enterprise: Michael Lewis’ “Going Infinite” and Zeke Faux’s “Number Go Up”.

John Lanchester covered both for the London Review of Books:

[…] Sceptics have latched onto a remark that SBF, thinking he was off the record, made to a journalist about his ethical commitments in November 2022: ‘Man, all the dumb shit I said. It’s not true, not really.’ Some take that as a gotcha! revelation about SBF’s not really believing in EA [Effective Altruism]. I don’t think it is that: I think it’s consistent with what Lewis quotes about his inner emptiness. I think that interview, like most things SBF says about himself, is a glimpse into the abyss. He has no moral compass, other than one on loan from EA. Many people borrow their moral compasses from religion, but all religions have a place for empathy, even if it’s selectively applied. EA has no place for empathy, and neither does SBF.

I have not gotten around to Lewis’ book but, as luck would have it, “Number Go Up” by Faux was already on my nightstand and I read it yesterday. It was an often engaging read undermined, I think, by a strange emptiness to the whole thing. As Lanchester captures, Bankman-Fried often comes across as a little hollow, unable to contend with exactly who he is or what he wants. But the book itself contains so many anecdotes that do not really go anywhere, and only serve to illustrate how empty the narrative around cryptocurrency seems.

This review was published before the jury found Bankman-Fried guilty. However, Lanchester also writes effectively about the strangeness of the trial itself. It is worth your time.

There has been a wave of artificial intelligence regulatory news this week, and I thought it would be useful to collect a few of those stories in a single post.

Earlier this week, U.S. president Joe Biden issued an executive order:

My Administration places the highest urgency on governing the development and use of AI safely and responsibly, and is therefore advancing a coordinated, Federal Government-wide approach to doing so. The rapid speed at which AI capabilities are advancing compels the United States to lead in this moment for the sake of our security, economy, and society.

Reporting by Josh Boak and Matt O’Brien of the Associated Press indicates this executive order was informed by several experts in the technology and human rights sectors. Unfortunately, it seems that something I interpreted as a tongue-in-cheek reference to the adversary of the latest “Mission: Impossible” movie is being taken seriously and out of context by some.

Steven Sinofsky — who, it should be noted, is a board partner at Andreessen Horowitz, whose homepage is still that ridiculous libertarian manifesto, which is, you know, foreshadowing — is worried about that executive order:

I am by no means certain if AI is the next technology platform the likes of which will make the smartphone revolution that has literally benefitted every human on earth look small. I don’t know sitting here today if the AI products just in market less than a year are the next biggest thing ever. They may turn out to be a way stop on the trajectory of innovation. They may turn out to be ingredients that everyone incorporates into existing products. There are so many things that we do not yet know.

What we do know is that we are at the very earliest stages. We simply have no in-market products, and that means no in-market problems, upon which to base such concerns of fear and need to “govern” regulation. Alarmists or “existentialists” say they have enough evidence. If that’s the case then so be it, but then the only way to truly make that case is to embark on the legislative process and use democracy to validate those concerns. I just know that we have plenty of past evidence that every technology has come with its alarmists and concerns and somehow optimism prevailed. Why should the pessimists prevail now?

This is a very long article with many arguments against the Biden order. It is worth reading in full; I have pulled only its conclusion as a summary. There is a lot to agree with, even if I disagree with where it lands. The dispute is not between optimism and pessimism; it is between democratically regulating an industry, and allowing that industry to dictate the terms of whether and how it is regulated.

That there are “no in-market products […] upon which to base such concerns” is probably news to companies like Stability AI and OpenAI, which sell access to Eurocentric and sexually biased models. There are, as some will likely point out, laws in many countries against bias in medical care, hiring, policing, housing, and other significant areas set to be revolutionized by A.I. in the coming years. That does not preclude the need for regulations specifically about how A.I. may be used in those circumstances, though.

Ben Thompson:

The point is this: if you accept the premise that regulation locks in incumbents, then it sure is notable that the early AI winners seem the most invested in generating alarm in Washington, D.C. about AI. This despite the fact that their concern is apparently not sufficiently high to, you know, stop their work. No, they are the responsible ones, the ones who care enough to call for regulation; all the better if concerns about imagined harms kneecap inevitable competitors.

[…]

In short, this Executive Order is a lot like Gates’ approach to mobile: rooted in the past, yet arrogant about an unknowable future; proscriptive instead of adaptive; and, worst of all, trivially influenced by motivated reasoning best understood as some of the most cynical attempts at regulatory capture the tech industry has ever seen.

There is a neat rhetorical trick in both Sinofsky’s and Thompson’s articles. It is too early to regulate, they argue, because doing so would only stifle the industry and prevent it from reaching its best potential and highest aspirations. But it is a bit of a smokescreen to call this a nascent industry; even if the technology is new, many of the businesses working to make it a reality are among the world’s most valuable. And it only becomes more difficult to create rules as industries grow and businesses become giants — look, for example, to Sinofsky’s appropriate criticism of the patchwork approach to proposed privacy laws in several U.S. states, or Thompson’s explanation of how complicated it is to regulate “entrenched” corporations like Facebook and Google on privacy grounds given their enormous lobbying might.

These are not contradictory arguments, to be clear; both writers are, in fact, raising a very good line of argument. Regulations enacted on a nascent industry will hamper its growth, while waiting too long will be good news for any company that can afford to write the laws. Between these, the latter is a worse option. Yes, the former approach means a new industry faces constraints on its growth, both in terms of speed and breadth. With a carefully crafted regulatory framework with room for rapid adjustments, however, that can actually be a benefit. Instead of a well poisoned by years of risky industry experiments on the public, A.I. can be seen as safe and beneficial. Technologies made in countries with strict regulatory regimes may be seen as more dependable. There is the opportunity of a lifetime to avoid entrenching the same mistakes, biases, and problems we have been dealing with for generations.

Where I do agree with Sinofsky and Thompson is that such regulation should not be made by executive order. But however troublesome I find that mechanism, and however messy much of the order’s text is, it is wrong to discard the very notion of A.I. regulation on this basis alone.

A group of academics published a joint paper concerning A.I. development, which I thought was less alarmist and more grounded than most of these efforts:

The rate of improvement is already staggering, and tech companies have the cash reserves needed to scale the latest training runs by multiples of 100 to 1000 soon. Combined with the ongoing growth and automation in AI R&D, we must take seriously the possibility that generalist AI systems will outperform human abilities across many critical domains within this decade or the next.

What happens then? If managed carefully and distributed fairly, advanced AI systems could help humanity cure diseases, elevate living standards, and protect our ecosystems. The opportunities AI offers are immense. But alongside advanced AI capabilities come large-scale risks that we are not on track to handle well. Humanity is pouring vast resources into making AI systems more powerful, but far less into safety and mitigating harms. For AI to be a boon, we must reorient; pushing AI capabilities alone is not enough.

John Davidson, columnist at the Australian Financial Review, interviewed Andrew Ng, who co-founded Google Brain:

“There are definitely large tech companies that would rather not have to try to compete with open source [AI], so they’re creating fear of AI leading to human extinction.

“It’s been a weapon for lobbyists to argue for legislation that would be very damaging to the open-source community,” he said.

Ng is not an anti-regulation hardliner. He acknowledges the harms already caused by A.I. and supports oversight.

Dan Milmo and Kiran Stacey, of the Guardian, covered this week’s Bletchley Park A.I. safety summit:

The possibility that AI can wipe out humanity – a view held by less hyperbolic figures than Musk – remains a divisive one in the tech community. That difference of opinion was not healed by two days of debate in Buckinghamshire.

But if there is a consensus on risk among politicians, executives and thinkers, then it focuses on the immediate fear of a disinformation glut. There are concerns that elections in the US, India and the UK next year could be affected by malicious use of generative AI.

I do not love the mainstreaming of claims that A.I. poses catastrophic risks to civilization, because it can only mean one of two things: either the proponents of those claims are wrong and are making them for cynical or attention-seeking purposes, or they are right. This used to be regarded as ridiculous science fiction. That apparently serious and sober people now see it as plausible is discomforting.

John Herrman, New York:

But it’s worth paying attention to which, and whose, problems these companies are trying to solve with AI. Amazon is attempting to address an issue for sellers and for its own advertising business: Clickable “lifestyle” ad imagery is expensive to hire for and difficult to shoot yourself, so a lot of sellers advertise with less-clickable materials. In the process of addressing this problem for its sellers, and attempting to automate certain kinds of product photography, Amazon ended up creating a tool that automates the (low-stakes, slight, and frankly sort of surreal) deception of customers by sellers. […]

Frequent readers have likely noticed several of the underlying principles and understandings which form the basis for my commentary: privacy is a right; it is good to be skeptical of expressions of power by governments and corporations alike; products and services should be respectful of users. That kind of thing. Here is another one: we should have higher standards and expect more.

In this case, we should demand that advertising be an honest and true representation. As Herrman notes, it is not as though these generative imagery tools invented deceptive ads; if anything, they continue an industry tradition. But the lack of newness is not an excuse. Businesses involved in advertising at any level must insist on a basic level of authenticity, and it is revolting that powerful corporations seem to think they can help tell lies without consequence. I do not see how or why this would be controversial. Better is possible if we demand it.

Aaron Gordon, writing at the remaining shell of Vice:

For the last three months, I’ve been trying to find an answer to a basic question at the heart of this theft wave: Why didn’t the U.S. follow Canada’s lead and mandate immobilizers, too? If it had, either around the same time as Canada or when it considered new regulations in the mid-2010s, the method of stealing Kias and Hyundais widely popularized in online videos would not be possible, as evidenced by the fact that no similar theft wave is occurring north of the border. (Canada is experiencing its own problems with auto thefts, as explained below, but the trend is tied to organized crime and not centered around Kias and Hyundais or engine immobilizers).

I appreciate Gordon looking into this more deeply than I was able to when I wrote about it earlier this year. TikTok and other social media platforms are being scapegoated for not preventing the spread of the technique behind this wave of thefts, when the thefts could have been prevented in the first place by regulators and automakers.