Report from Google Researchers Finds Impersonation Is the Most Likely Way Generative A.I. Is Misused

Cristina Criddle, Financial Times:

Artificial intelligence-generated “deepfakes” that impersonate politicians and celebrities are far more prevalent than efforts to use AI to assist cyber attacks, according to the first research by Google’s DeepMind division into the most common malicious uses of the cutting-edge technology.

The study said the creation of realistic but fake images, video and audio of people was almost twice as common as the next highest misuse of generative AI tools: the falsifying of information using text-based tools, such as chatbots, to generate misinformation to post online.

Emanuel Maiberg, 404 Media:

Generative AI could “distort collective understanding of socio-political reality or scientific consensus,” and in many cases is already doing that, according to a new research paper from Google, one of the biggest companies in the world building, deploying, and promoting generative AI.

It is probably worth emphasizing that this is a preprint published to arXiv, so I am not sure how much faith should be placed in its scholarly rigour. Nevertheless, when in-house researchers are pointing out the ways in which generative A.I. is misused, you might think that would be motivation for their employer to act with caution. But you, reader, are probably not an executive at Google.

This paper was submitted on 19 June. A few days later, reporters at the Information said Google was working on A.I. chatbots with real-person likenesses, according to Pranav Dixit of Engadget:

Google is reportedly building new AI-powered chatbots based on celebrities and YouTube influencers. The idea isn’t groundbreaking — startups and companies like Meta have already launched products like this — but neither is Google’s AI strategy so far.

Maybe nothing will come of this. Maybe the reporting is outdated; Google’s executives may have looked at the research produced by its DeepMind division and concluded the risks are too great. But you would not get that impression from a spate of stories suggesting the company is sprinting into the future, powered by the trust of users it spent twenty years building and a whole lot of fossil fuels.