the_dunk_tank
It's the dunk tank.
This is where you come to post big-brained hot takes by chuds, libs, or even fellow leftists, and tear them to itty-bitty pieces with precision dunkstrikes.
Rule 1: All posts must include links to the subject matter, and no identifying information should be redacted.
Rule 2: If your source is a reactionary website, please use archive.is instead of linking directly.
Rule 3: No sectarianism.
Rule 4: TERFs/SWERFs are not welcome.
Rule 5: No ableism of any kind (that includes stuff like libt*rd)
Rule 6: Do not post fellow hexbears.
Rule 7: Do not individually target other instances' admins or moderators.
Rule 8: The subject of a post cannot be low-hanging fruit, that is, comments/posts made by a private person that have a low number of upvotes/likes/views. Comments/posts made on other instances that are accessible from hexbear are an exception to this. Posts that do not meet this requirement can be posted to !shitreactionariessay@lemmygrad.ml
Rule 9: If you post ironic rage bait, I'm going to make a personal visit to your house to make sure you never make this mistake again.

I am terrified of this weird reliance on "AI" chatbots for information. The chatbots are made by a bunch of Silicon Valley techbro nerds who have no fucking clue about anything and no way to check whether the chatbot ever gives correct information. Chatbots are already using their own past response history to "hone" their answers. What happens once these AIs just start referring to each other over and over again? And people, too ignorant to figure out that the chatbot is just spouting nonsense, take it as actual information? I may be a little overly dramatic, but what if someone asks a chatbot for medical advice or something and gets someone killed? Has that already happened? This is the dumbest possible dystopia.
Chatbots already have a body count.
https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says
And are great at being Hitler. Here’s the latest:
https://www.gptrolley.com/
Works best when Elon Musk is one of the prompts
That guy who tried to kill the Queen with a crossbow was convinced to do so by his AI "girlfriend", which is a really sad concept, and it only gets sadder the more you know about how AI actually works.
I actually think it should be illegal to market AI as a companion service, because it preys so tragically on the lonely and vulnerable.