[-] threelonmusketeers@sh.itjust.works 1 points 3 months ago

LLMs were designed to generate coherent statements, not necessarily correct ones, and they cannot consistently spot logical fallacies in their own output. Humans can do this (some better than others), so computers should be capable of it too. The technology is not there yet, but I'm glad people are working on it.

this post was submitted on 25 Jun 2024

Futurology
