28 points · submitted 1 year ago by zer0@thelemmy.club to c/lemmy@lemmy.ml

I'm sure this is a common topic, but the timeline is pretty fast these days.

With bots looking more human than ever, I'm wondering what's going to happen once everyone starts using them to spam the platform. Lemmy, with its simple username/text layout, seems to offer the perfect ground for bots: to verify that someone is real, you'd have to scroll through all their comments and read them carefully one by one.

[-] muddybulldog@mylemmy.win 11 points 1 year ago

Somewhat of a loaded question, but if we need to scroll through their comment history meticulously to separate real from bot, does it really matter at that point?

SPAM is SPAM, and we're all in agreement that we don't want bots junking up the communities with low-effort content. However, if they reach the point where it takes real effort to ferret them out, they must be successfully driving some sort of engagement.

I’m not positive that’s a bad thing.

[-] HelloHotel@lemm.ee 2 points 1 year ago

Things like ChatGPT are not designed to think using object relations like a human. It's designed to respond the way a human would (a speech cortex with no brain); it is made to figure out what a human would respond with rather than give a well-thought-out answer.
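
A rough way to picture it: the model just picks whichever continuation is statistically most likely given the words so far. A toy sketch in Python (made-up numbers, obviously nothing like the real thing):

```python
# Toy sketch of "next word prediction" -- hypothetical probabilities, just the
# basic idea: pick the statistically likely continuation, not a reasoned answer.
next_word_probs = {
    ("the", "sky", "is"): {"blue": 0.80, "falling": 0.15, "green": 0.05},
}

def predict(context):
    # Return whichever word most often followed this context in the training data.
    candidates = next_word_probs.get(tuple(context), {})
    return max(candidates, key=candidates.get) if candidates else None

print(predict(["the", "sky", "is"]))  # -> "blue"
```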

Robert Miles can explain it better than I ever could.

[-] PipedLinkBot@feddit.rocks 2 points 1 year ago

Here is an alternative Piped link(s): https://piped.video/watch?v=w65p_IIp6JY

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I'm open-source, check me out at GitHub.
