A.I. Corporate Bullshit adactio.com

Nathan J. Robinson, of Current Affairs, reviewing “Corporate Bullshit” by Nick Hanauer, Joan Walsh, and Donald Cohen last year:

Over the last several decades, we have been told that “smoking doesn’t cause cancer, cars don’t cause pollution, greedy pharmaceutical companies aren’t responsible for the crisis of opioid addiction.” Recognizing the pattern is key to spotting “corporate bullshit” in the wild, and learning how to spot it is important, because, as the authors write, the stories told in corporate propaganda are often superficially plausible: “At least on the surface, they offer a civic-minded, reasonable-sounding justification for positions that in fact are motivated entirely by self-interest.” When restaurant owners say that raising the minimum wage will drive their labor costs too high and they’ll be forced to cut back on employees or close entirely, or tobacco companies declare their product harmless, those things could be true. They just happen not to be.

Via Cory Doctorow.

Jeremy Keith:

I’ve noticed a really strange justification from people when I ask them about their use of generative tools that use large language models (colloquially and inaccurately labelled as artificial intelligence).

I’ll point out that the training data requires the wholesale harvesting of creative works without compensation. I’ll also point out the ludicrously profligate energy use required not just for the training, but for the subsequent queries.

And here’s the thing: people will acknowledge those harms but they will justify their actions by saying “these things will get better!”

This piece is making me think more about my own minimal use of generative features. Sure, it is neat that I can get a more accurate summary of an email newsletter than a marketer will typically write, or that I can repair something in a photo without so much manual effort. But this ease is only possible thanks to the questionable ethics of A.I. training.

Jake Evans, ABC News:

Facebook has admitted that it scrapes the public photos, posts and other data of Australian adult users to train its AI models and provides no opt-out option, even though it allows people in the European Union to refuse consent.

[…]

Ms Claybaugh [Meta’s global privacy policy director] added that accounts of people under 18 were not scraped, but when asked by Senator Sheldon whether public photos of his own children on his account would be scraped, Ms Claybaugh acknowledged they would.

This is not ethical. Meta has the ability to more judiciously train its systems, but it will not do that until it is pressured. Shareholders will not take on that role. They have been enthusiastically boosting any corporation with an A.I. announcement. Neither will the corporations themselves, which have been jamming these features everywhere — there are floating toolbars, floating panels, balloons, callouts, and glowing buttons that are hard to ignore even if you want to.