Goldstein suggested more radical approaches, including using the courts to hold AI companies accountable through lawsuits when their systems cause harm.
The suggestion that this is a "radical approach" might actually be the most insane part of what is already a fairly insane article.
On top of that they say that these sorts of behaviors only arise when the models are "stressed", and the article also mentions "threats" like being unplugged. What kind of response do they actually expect from a fill-in-the-conversation machine when the prompt it's been asked to continue from is a threat?