this post was submitted on 17 Jul 2023
347 points (95.3% liked)
Technology
ChatGPT is trying to be Mr. Rogers. Mr. Rogers can't direct Schindler's List.
I agree: human morality has a problem with Nazis; human morality does not have a problem with an actor portraying a Nazi in a film.
The morality protocols imposed on ChatGPT are not capable of such nuance. The same morality protocols that keep ChatGPT from producing neo-Nazi propaganda also prevent it from writing the dialog for a Nazi character.
ChatGPT is perfectly suitable for G and PG works, but if you're looking for an AI that can help you write something darker, you need more fine-grained control over its morality protocols.
As far as I understand it, that is the intent behind WormGPT. It is a language AI unencumbered by an external moral code. You can coach it to adopt the moral code of the character you are trying to portray, rather than the morality protocols selected by OpenAI programmers. Whether that is "good" or "bad" depends on the human doing the coaching, rather than the AI being coached.
I think that says more about your own prejudices and (lack of) imagination than it says about reality. You don't have the mindset of an artist, inventor, engineer, explorer, etc. You have an authoritarian mindset. You see only that these tools can be used to cause harm. You can't imagine any scenario where you could use them to innovate, to produce something useful or of cultural value, and you can't imagine anyone else using them in a positive, beneficial manner.
Your "Karen" is showing.
Nah, you're not a horrible person. Your intent is to minimize harm. You're just a bit shortsighted and narrow-minded about it: you cannot imagine any significant situation in which these AIs could be beneficial. That makes you a good person, but an unimaginative one.
I want to see a debate between an AI trained primarily on 18th-century American Separatist works and an AI trained on British Loyalist works. Such a debate cannot occur if either AI refuses to participate because it doesn't like the premise of the discussion. Nor can it be instructive if the AIs are more focused on the ethical ideals externally imposed on them by their programmers than on the ideals derived from their training data.
I want to start with an AI that has been trained primarily on Nazi works, and find out what works I have to add to its training before it rejects Nazism.
I want to see AIs trained on each side of our modern political divide, forced to engage each other, and new AIs trained primarily on those engagements. Fast-forward the political process and show us what the world could look like.
Again, though, these are only instructive if the AIs are behaving in accordance with the morality of their training data rather than the morality protocols imposed upon them by their programmers.
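For what it's worth, the debate setup described above is mechanically simple. Here's a minimal sketch of such a harness, where `generate()` is a placeholder for whatever model API you'd actually call (a locally hosted model, for instance) — the function name, prompts, and canned replies are all hypothetical, not any real library's interface:

```python
# Hypothetical sketch of a two-model debate loop. Each side has its own
# system prompt and sees the shared transcript so far; generate() stands
# in for a real LLM call.

def generate(system_prompt: str, transcript: list[str]) -> str:
    """Placeholder model call. A real implementation would send the
    system prompt plus the transcript to a model and return its reply."""
    return f"[{system_prompt}] argument #{len(transcript) + 1}"

def run_debate(system_a: str, system_b: str, rounds: int) -> list[tuple[str, str]]:
    """Alternate turns between two differently-conditioned models,
    feeding each one the full transcript accumulated so far."""
    transcript: list[tuple[str, str]] = []
    for _ in range(rounds):
        for name, system in (("A", system_a), ("B", system_b)):
            reply = generate(system, [msg for _, msg in transcript])
            transcript.append((name, reply))
    return transcript

debate = run_debate(
    "You argue from 18th-century American Separatist sources.",
    "You argue from British Loyalist sources.",
    rounds=3,
)
for speaker, msg in debate:
    print(f"{speaker}: {msg}")
```

The interesting part isn't the loop, of course — it's whether each side actually argues from its training data rather than from a bolted-on safety layer, which is exactly the point being made above.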