this post was submitted on 27 Feb 2026
59 points (98.4% liked)
Technology
I've got an idea. If 90% of AI's output is accurate, just have humans review the 10% that will be inaccurate.
(Yes I am an AI expert, how did you know)
Which outputs are accurate, and which ones are inaccurate? How could you tell? What steps did you take to verify accuracy? Was verifying it a manual process?
That's easy. You just get a second AI to ask the first AI whether its responses were accurate or not.
(/s)
This is unironically what I've seen people try to do, except they assume the second AI is correct.
Tangentially related: this is roughly how GANs work. The difference is that in a GAN, both models train during the back-and-forth, while a pair of LLMs judging each other do not.
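To make that distinction concrete, here's a toy 1-D "GAN" in plain Python (the target distribution, learning rates, and step count are all invented for illustration, a sketch rather than a real GAN implementation): a generator tries to mimic samples from a Gaussian, a discriminator tries to tell real from fake, and crucially *both* update their parameters in the same loop, unlike asking a second frozen model to grade the first.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Generator g(z) = a*z + b tries to mimic samples from N(4, 1).
# Discriminator D(x) = sigmoid(w*x + c) tries to tell real from fake.
a, b = 1.0, 0.0   # generator parameters
w, c = 0.0, 0.0   # discriminator parameters
lr = 0.05

for step in range(2000):
    real = random.gauss(4.0, 1.0)        # one real sample
    z = random.gauss(0.0, 1.0)
    fake = a * z + b                     # one generated sample

    # Discriminator update: push D(real) -> 1 and D(fake) -> 0.
    # These are the binary cross-entropy gradients w.r.t. w and c.
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w -= lr * ((d_real - 1.0) * real + d_fake * fake)
    c -= lr * ((d_real - 1.0) + d_fake)

    # Generator update: push D(fake) -> 1, i.e. fool the discriminator.
    # The gradient flows back through D into the generator's a and b --
    # this mutual update is what a frozen LLM-judging-an-LLM setup lacks.
    d_fake = sigmoid(w * fake + c)
    grad_fake = (d_fake - 1.0) * w
    a -= lr * grad_fake * z
    b -= lr * grad_fake

# The generator started centered at 0; after training its output mean
# should have drifted toward the real data's mean of 4.
gen_mean = sum(a * random.gauss(0.0, 1.0) + b for _ in range(1000)) / 1000
print(f"generator output mean after training: {gen_mean:.2f}")
```

The point of the sketch is the loop body: every iteration updates *both* sets of parameters, so the "checker" improves alongside the thing it's checking, whereas two frozen LLMs bouncing answers back and forth never get better at either generating or verifying.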