Prompt Injection Attacks Against GPT-3 ⇥ simonwillison.net
A fascinating series of posts from Simon Willison about attacks with malicious prompts for automated responses based on machine learning — the second and third parts are linked in the sidebar.
Fascinating and troubling to consider this as a parallel to social engineering attacks on real, living people. It is not a stretch to imagine more call centre tasks being offloaded to automated systems — regrettably.1 Agents are trained to avoid divulging information like the customer’s address or partial credit card number, but too heavy reliance on prompt-based tasks might result in an uptick of these kinds of attacks.
-
The loss of employment for millions is an obvious concern. On the other side of the phone line, there is a satisfaction difference. I have spent the past couple of weeks on the phone with various call centres, and there is a vast gulf in my level of happiness between speaking with a real person and speaking with a robot for even part of it. ↥︎