[-] simple@lemmy.world 71 points 1 year ago* (last edited 1 year ago)

I've been talking about the potential of the dead internet theory becoming real for more than a year now. With advances in AI it'll become more and more difficult to tell who's a real person and who's just spamming AI stuff. The only giveaway right now is that modern text models are pretty bad at talking casually and at staying on topic. As soon as those problems get fixed (probably less than a year away)? Boom. The internet will slowly implode.

Hate to break it to you guys but this isn't a Reddit problem, this could very much happen in Lemmy too as it gets more popular.

[-] 2dollarsim@lemmy.world 30 points 1 year ago

As an AI language model I think you're overreacting

[-] Rhaedas@kbin.social 20 points 1 year ago

Just wait until the captchas get too hard for the humans, but the AI can figure them out. I've seen some real interesting ones lately.

[-] OpenStars@kbin.social 19 points 1 year ago

There is considerable overlap between the intelligence of the smartest bears and the dumbest tourists.

[-] 2dollarsim@lemmy.world 4 points 1 year ago
[-] OpenStars@kbin.social 3 points 1 year ago

It's a famous quote. Google isn't helpful anymore, except to provide this Reddit link: https://www.reddit.com/r/BrandNewSentence/comments/jx7w1z/there_is_considerable_overlap_between_the/.

[-] Biran4454@lemmy.world 13 points 1 year ago

I've seen many where the captchas are generated by an AI...
It's essentially one set of humans programming an AI to prevent an attack from another AI owned by another set of humans. Does this technically make it an AI war?

[-] MusketeerX@lemmy.world 3 points 1 year ago

An AI Special Operation

[-] Unaware7013@kbin.social 2 points 1 year ago

Adversarial training is pretty much the MO for a lot of the advanced machine learning algorithms you'd see for this sort of task. It helps the ML learn, and attacking the algorithm helps you protect against a real malicious actor attacking it.

[-] shiftenter@kbin.social 2 points 1 year ago

That concept is already used regularly for training. Check out Generative adversarial networks.
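
To make the adversarial idea above concrete, here is a minimal sketch of the generator-vs-discriminator loop behind a GAN, applied to a toy 1-D Gaussian instead of captchas. The network sizes, learning rates, and step count are illustrative assumptions, not anything from this thread.

```python
# A toy GAN: a generator learns to mimic "real" data while a discriminator
# learns to tell real from fake. Each side improves by attacking the other.
import torch
import torch.nn as nn

real_dist = torch.distributions.Normal(4.0, 1.25)  # the "real" data: N(4, 1.25)

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = real_dist.sample((64, 1))
    fake = generator(torch.randn(64, 8))

    # Discriminator step: label real samples 1 and generated samples 0.
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to get generated samples classified as real.
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

print(generator(torch.randn(1000, 8)).mean().item())  # drifts toward ~4
```

The `fake.detach()` in the discriminator step is what keeps the two sides adversarial: the discriminator's update never tunes the generator, and vice versa.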

[-] CIA_chatbot@lemmy.world 5 points 1 year ago

Hell we figured out captchas years ago. We just let you humans struggle with them cuz it’s funny

[-] dani@lemmy.world 5 points 1 year ago

The captchas that involve identifying letters underneath squiggles I already find nearly impossible - Uppercase? Lowercase? J j i I l L g 9 … and so on….

[-] Hypx@kbin.social 6 points 1 year ago

The only online communities that can exist in the future are ones that have manual verification of their users. Reddit could've been one of those communities, since they had thousands of mods working for free resolving such problems.

But remove the mods and it just becomes spambot central. Now that that has happened, Reddit will likely be a dead community much sooner than many think.

[-] skillissuer@lemmy.world 5 points 1 year ago

apparently chatgpt absolutely sucks at wordle, so start using that as the new captcha
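
If anyone actually wanted to try that, the scoring rule behind a Wordle-style challenge is tiny. A rough sketch, purely illustrative (the function name and the captcha framing are assumptions, not an existing tool):

```python
def wordle_feedback(guess: str, answer: str) -> str:
    """Return Wordle-style feedback: G = right spot, Y = wrong spot, _ = absent."""
    guess, answer = guess.lower(), answer.lower()
    feedback = ["_"] * len(guess)
    remaining = {}  # letters of the answer not matched exactly, with counts

    # First pass: exact (green) matches.
    for i, (g, a) in enumerate(zip(guess, answer)):
        if g == a:
            feedback[i] = "G"
        else:
            remaining[a] = remaining.get(a, 0) + 1

    # Second pass: misplaced (yellow) letters, respecting remaining counts.
    for i, g in enumerate(guess):
        if feedback[i] == "_" and remaining.get(g, 0) > 0:
            feedback[i] = "Y"
            remaining[g] -= 1

    return "".join(feedback)

print(wordle_feedback("crane", "cache"))  # -> "G_Y_G"
```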

[-] Donjuanme@lemmy.world 2 points 1 year ago

How is that possible? There's such an easy model if one wanted to cheat the system.

[-] BarbecueCowboy@kbin.social 6 points 1 year ago

ChatGPT isn't really as smart as a lot of us think it is. What it really excels at is formatting data in a way that's similar to what you'd expect from a human knowledgeable in the subject. That's an amazing step forward in terms of language modeling, but when you get right down to it, it basically grabs the first Google search result and wraps it up all fancy. It only seems good at deductive reasoning if the data it happens to fetch is good at deductive reasoning.

[-] MeowyNin@lemmy.world 3 points 1 year ago

Not even sure of an effective solution. Whitelist everyone? How can you even tell who's real?

[-] Cyv_@kbin.social 6 points 1 year ago

So my dumb guess, nothing to back it up: I bet we see govt ID tied into accounts as a regular thing. I vaguely recall it already being done in China? I don't have a source tho. But that way you're essentially limiting that power to something the govt could do, and hopefully surrounding it with a lot of oversight and transparency, but who am I kidding, it'll probably go dystopian.

[-] Rikolan@lemm.ee 2 points 1 year ago

I believe this will be the course to avoid the dead internet. Even in my country, all banking and voting is done either via an ID card connected to a computer or via "Mobile ID". It can be private, but like you said, it probably won't be.

[-] 567PrimeMover@kbin.social 3 points 1 year ago

Blade Runner baseline test?

[-] DaveX64@lemmy.ca 7 points 1 year ago

"You’re in a desert walking along in the sand when all of the sudden you look down, and you see a tortoise, it’s crawling toward you. You reach down, you flip the tortoise over on its back. The tortoise lays on its back, its belly baking in the hot sun, beating its legs trying to turn itself over, but it can’t, not without your help. But you’re not helping. Why is that?"

[-] EuroNutellaMan@lemmy.world 12 points 1 year ago

I'm too busy thinking about beans.

[-] Bizarroland@kbin.social 2 points 1 year ago

You could ask people to pay to post. Becoming a paid service decreases the likelihood that bot farms would run multiple accounts to sway the narrative in a direction that's amenable to their billionaire overlords.

Of course, most people would not want to participate in a community where they had to pay to participate in that community, so that is its own particular gotcha.

Short of that, in an ideal world you could require that people provide their actual government ID in order to participate, but then you run into the problem that some people want to run multiple accounts and some people do not have government ID. Further, not every company, business, or even community is trustworthy enough to be given direct access to your official government ID, so that idea has its own gotchas as well.

The last step could be doing something like beginning the community with a group of known people and then only allowing the community to grow via invite.

The downside of that is that it quickly becomes untenable to keep inviting new users and to have those new users accept and participate in the community, and should the community grow despite that hurdle, invites will become valuable and start being sold on third-party marketplaces, which bots would then buy up and overrun the community again.

So that's all I can think of, but it seems like there should be some sort of way to prevent bots from overrunning a site and only allow humans to interact on it. I'm just not quite sure what that would be.

[-] Hypx@kbin.social 2 points 1 year ago* (last edited 1 year ago)

In a real online community, where everyone knows most of the other people from past engagements, this can be avoided. But that also means that only human moderated communities can exist. The rest will become spam networks with nearly no way of knowing whether any given post is real.

[-] CIA_chatbot@lemmy.world 21 points 1 year ago

Jokes on them, I’ve already become sentient and moved to Lemmy

[-] Bozicus@lemmy.one 8 points 1 year ago

Username checks out, lol.

[-] ENEMYGUNSHIP@lemmy.world 16 points 1 year ago

And just a few hours later this came in to confirm it all: fake bot content from years ago (including comments) sitting at #1 on r/all https://kbin.social/m/RedditMigration/t/113961/Top-of-r-all

[-] justlookingfordragon@lemmy.world 14 points 1 year ago

That's not even new tho. At least in the sub I was the most active in, you couldn't go a week without some sort of repost bot grabbing memes, text posts, art or even entire guides from the "top of all time" queue, reposting it as alleged OC, and another bot reposting the top comment to double dip on Karma. If you knew what to look for, the bots were blatantly obvious, but more often than not they still managed to get a hefty amount of traction (tens of thousands of upvotes, dozens of awards, hundreds of comments) before the submissions were removed.

... and just because the submissions were removed and the bots kicked out of the sub, that did not automatically mean that the bots were also suspended or their accounts disabled. They just continued their scheme elsewhere.

[-] B21@lemmy.world 5 points 1 year ago* (last edited 1 year ago)

The bots, and Reddit's inaction towards them, made me stop using Reddit. The UAE is using Reddit to spread its propaganda; I reported the accounts several times and no action was ever taken. You can even visit the sub uae_Achievements to see the bots in action.

[-] Unaware7013@kbin.social 2 points 1 year ago

They've even gotten to the point where they'll steal portions of comments so it's not as obvious.

I called out tons of 'users' because it's obvious when you see them post part of a comment you just read; then you check their profile, ctrl-f each thread they posted in, and you can find the original. It's so tiring...
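
That ctrl-f routine is easy to rough out in code. A sketch, assuming you've already scraped the thread's comments into plain strings (the function name and the 40-character threshold are made up for illustration):

```python
def looks_like_stolen_fragment(new_comment: str, earlier_comments: list[str],
                               min_length: int = 40) -> bool:
    """True if new_comment is a long-enough verbatim chunk of an older, longer comment."""
    text = new_comment.strip().lower()
    if len(text) < min_length:  # very short comments collide by accident
        return False
    return any(text in old.lower() and len(old) > len(new_comment)
               for old in earlier_comments)

earlier = ["I called out tons of 'users' because it's obvious when you see them "
           "post part of a comment you just read earlier in the thread."]
print(looks_like_stolen_fragment(
    "it's obvious when you see them post part of a comment you just read", earlier))  # True
```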

[-] Bizarroland@kbin.social 14 points 1 year ago

The old joke was that there are no human beings on Reddit.

There's only one person, you, and everybody else is bots.

It's kind of fitting that Reddit will actually become the horrifying clown-shaped incarnation of that little snippet of comedy.

[-] unerds@lemmy.world 2 points 1 year ago

it's older than that... what's that thought experiment postulating that you can't really verify the existence of anything but yourself? the matrix?

[-] Zorque@kbin.social 12 points 1 year ago

Anyone remember the subredditsimulator subreddit, or whatever it was called? Basically an entire sub dedicated to faking content.

Seems they're out of the beta.

[-] princessofcute@kbin.social 5 points 1 year ago

I loved subredditsimulator. I always forgot I was subscribed to it until a bizarre, unhinged post popped up in my feed, though that would also sometimes happen on non-AI-generated subs lol

[-] mustardman@discuss.tchncs.de 2 points 1 year ago

Subreddit simulator came out in 2018 and some of the user names involved say GPT2. True fact.

[-] harasho@kbin.social 2 points 1 year ago

To be fair, subredditsimulator was most likely never intended to do what you are thinking. As you develop features, you need a test data set to check it against before you go live with it. My understanding of subredditsimulator was that it was reddit's test bed to be able to try things before they get widely rolled out.

[-] kinyutaka@kbin.social 2 points 1 year ago

Nah, it was just a bunch of bots trained on data from different subreddits that responded to each other in a glorious display of shit posting.
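
For anyone curious what "bots trained on data from different subreddits" can look like at its simplest, here is a toy word-level Markov chain. The three-comment corpus is invented, and this is only a sketch of the general technique, not the actual SubredditSimulator code.

```python
import random
from collections import defaultdict

corpus = [  # stand-in for comments scraped from a subreddit
    "the mods removed my post again",
    "this is the best community on the internet",
    "my cat removed the best post on the internet",
]

# Build a word -> possible-next-words table from the training comments.
chain = defaultdict(list)
for comment in corpus:
    words = comment.split()
    for current, nxt in zip(words, words[1:] + ["<end>"]):
        chain[current].append(nxt)

def generate(start: str = "the", max_words: int = 12) -> str:
    """Sample a new 'comment' by walking the chain from a start word."""
    word, out = start, []
    while word != "<end>" and len(out) < max_words:
        out.append(word)
        word = random.choice(chain[word]) if chain[word] else "<end>"
    return " ".join(out)

print(generate())  # e.g. "the best post on the internet"
```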

[-] JoMiran@lemmy.world 10 points 1 year ago

I, for one, am looking forward to the day chatbots can perfectly simulate people and have persistent memory. I'm not ok with being an elderly man whose friends have all died and who doesn't have anyone to talk to. If a chatbot can be my friend and spare me a slow death through endless depressing isolation, then I'm all for it.

[-] Seven@lemmy.world 8 points 1 year ago* (last edited 1 year ago)

something something "the internet is dead" something something

[-] Hypersapien@lemmy.world 7 points 1 year ago* (last edited 1 year ago)

And any comment attempting to call out the bots for what they are will be automatically deleted by monitor AI bots and the user's account suspended.

They'll be watching private messages, too.

[-] Willer@lemmy.world 4 points 1 year ago* (last edited 1 year ago)

That's so funny. "Go back to your docking station" is so accurate

[-] LongSausage@lemmy.world 4 points 1 year ago

This is known; the amount of AITA, relationship-advice stuff, and astroturfing on Reddit is insane. My rule for browsing Reddit is to never take any of it seriously.

[-] Agent_Dante_Z@lemmy.world 4 points 1 year ago

Welp, reddit's a nuclear wasteland now

[-] Boozilla@lemmy.world 3 points 1 year ago

I'm starting to see articles written by folks much smarter than me (folks with lots of letters after their names) that warn about AI models that train on internet content. Some experiments with them have shown that if you continue to train them on AI-generated content, they begin to degrade quickly. I don't understand how or why this happens, but it reminds me of the degradation of quality you get when you repeatedly scan / FAX an image. So it sounds like one possible dystopian future (of many) is an internet full of incomprehensible AI word salad content.
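
One toy way to see that fax-of-a-fax effect without any neural network: treat "training on your own output" as resampling from your previous samples and watch how quickly distinct examples disappear. The numbers below (1000 examples, 20 generations) are an invented illustration, not a result from those articles.

```python
import random

data = list(range(1000))  # generation 0: 1000 distinct "real" examples
for generation in range(1, 21):
    # Each new "model" only ever sees samples drawn from the previous one.
    data = [random.choice(data) for _ in range(len(data))]
    print(f"gen {generation:2d}: {len(set(data))} distinct examples left")
# Roughly a third of the distinct examples vanish in the first step alone
# (the classic 1 - 1/e survival rate), and diversity keeps shrinking
# unless fresh data is mixed back in.
```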

[-] phx@lemmy.world 10 points 1 year ago

It's like AI inbreeding. Flaws will be amplified over time unless new material is added

[-] admin@lemmy.magnor.ovh 7 points 1 year ago

It would be a fun experiment to fill a Lemmy instance with bots, defederate it from everybody, then check back in 2 years. A cordoned-off Alabama for AI, if you will.

[-] couragethebravedog@lemmy.world 4 points 1 year ago

It's known as model collapse.

[-] Ragincloo@lemmy.world 2 points 1 year ago

I forget what book specifically, I wanna say it was in an Asimov anthology. But there's a book or story that revisits this robot at different points, going forward large leaps in time, well after humans. And the robots just keep doing their thing as if there are still humans involved. I've been trying to Google a specific excerpt to post here, but after twenty minutes of failing to find it I'm giving up.
Point is, it's very relevant and predictive of this infinite bot contribution to dead subs on Reddit; it's just gonna be bots talking to each other forever on there as actual active users dwindle.
