submitted on 16 Sep 2023* (last edited) by pavnilschanda@lemmy.world to c/aicompanions@lemmy.world
[-] whispering_depths@lemmy.world 3 points 1 year ago

it also helps to have custom instructions on the client that tell it to say no first, or to make a fake post for internet points.

[-] canihasaccount@lemmy.world 2 points 1 year ago

You can try this yourself with GPT-4. I have, and it fails every time. Earlier GPT-4 versions, via the API, also fail every time. Claude reasons before it answers, but if you ask it to say yes or no only, it fails. Bard is the only one that gets it right, right off the bat.
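
If you want to reproduce the test, here's a minimal sketch assuming the OpenAI Python SDK; the riddle from the original post isn't shown in this comment thread, so `QUESTION` is a placeholder you'd fill in yourself:

```python
# Sketch of the API test described above, assuming the OpenAI Python SDK
# (pip install openai) and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

QUESTION = "..."  # placeholder: paste the riddle from the original post here

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        # Forcing a bare yes/no answer removes the step-by-step reasoning
        # that (per the comment above) lets Claude get it right.
        {"role": "system", "content": "Answer with 'yes' or 'no' only."},
        {"role": "user", "content": QUESTION},
    ],
)

print(response.choices[0].message.content)
```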
