This post was submitted on 14 Nov 2025.
27 points (86.5% liked)

No Stupid Questions

No such thing. Ask away!

!nostupidquestions is a community dedicated to being helpful and answering each other's questions on various topics.

The rules for posting and commenting, besides the rules defined here for lemmy.world, are as follows:

Rules


Rule 1- All posts must be legitimate questions. All post titles must include a question.

Joke or trolling questions, memes, song lyrics as titles, etc. are not allowed here. See Rule 6 for all exceptions.



Rule 2- Your question subject cannot be illegal or NSFW material.

You will be warned first, banned second.



Rule 3- Do not seek mental, medical, or professional help here.

Breaking this rule will not get you or your post removed, but it will put you at risk, and possibly in danger.



Rule 4- No self promotion or upvote-farming of any kind.

That's it.



Rule 5- No baiting or sealioning or promoting an agenda.

Questions which, instead of being of an innocuous nature, are specifically intended (based on reports and in the opinion of our crack moderation team) to bait users into ideological wars on charged political topics will be removed and the authors warned - or banned - depending on severity.



Rule 6- Regarding META posts and joke questions.

Provided it is about the community itself, you may post non-question posts using the [META] tag in your post title.

On Fridays, you are allowed to post meme and troll questions, on the condition that they are in text format only and conform to our other rules. These posts MUST include the [NSQ Friday] tag in their title.

If you post a serious question on a Friday and are looking only for legitimate answers, then please include the [Serious] tag on your post. Irrelevant replies will then be removed by moderators.



Rule 7- You can't intentionally annoy, mock, or harass other members.

If you intentionally annoy, mock, harass, or discriminate against any individual member, you will be removed.

Likewise, if you are a member, sympathiser, or supporter of a movement that is known to largely hate, mock, discriminate against, and/or want to take the lives of a group of people, and you were provably vocal about your hate, then you will be banned on sight.



Rule 8- All comments should try to stay relevant to their parent content.



Rule 9- Reposts from other platforms are not allowed.

Let everyone have their own content.



Rule 10- The majority of bots aren't allowed to participate here. This includes using AI responses and summaries.



Credits

Our breathtaking icon was bestowed upon us by @Cevilia!

The greatest banner of all time: by @TheOneWithTheHair!

all 39 comments
[–] Nemo@slrpnk.net 15 points 13 hours ago (1 children)

Yes. I'd rather pay a person to make art than a corporation.

[–] FaceDeer@fedia.io 2 points 12 hours ago

AI tools can be trained and run locally by individuals, not just by corporations.

[–] Assassassin@lemmy.dbzer0.com 23 points 16 hours ago (5 children)

Philosophically, yes. One is created with intent; the other is created to mimic intent. Human-made works can challenge norms and explore entirely new ways of thinking about a subject. AI content essentially tries to take everything relevant to a given prompt, blend it together, and give you something that meets your expectations.

Now, as far as whether it's practically the same, that's where things start getting sticky. If an AI makes a piece of art that resonates with people the same way a human-created piece does, those feelings are just as genuine. There is no practical difference. We're seeing that right now with AI-generated music. Just this week an AI country song hit #1 on a Billboard chart. The people who enjoy that song enjoy it regardless of how it was made. Personally, I think country is kind of low-hanging fruit, since it has effectively followed the same formula for a couple of decades, but it's a great proof of concept.

[–] naught101@lemmy.world 4 points 12 hours ago

Yeah, I think AI optimising commercial music genres is just effectively doing what the corporate music industry has been doing for years anyway. It's like gamification of the auditory processing system.

[–] yetAnotherUser@discuss.tchncs.de 3 points 11 hours ago (1 children)

Disagree slightly: human-created content can have intent, but it doesn't automatically have it.

A corporate ad has no artistic merit beyond grabbing as much attention as possible. Genuinely creative ads, where some thought was put in, are the very rare exception.

The same goes for a lot of pop music today. I can't speak much to English-language pop, but German pop is nothing more than fast food. See the Wikipedia article on Menschen Leben Tanzen Welt.

Or take a look at video games. How much artistic effort is put into AAA games? Maybe someone spent 40 hours making the loot boxes as satisfying as possible to open, but that's probably where the most thought went.

And movies? Are Disney's recent "live-action remakes" of their old, successful animated movies anything but CGI slop? Sure, I admit it takes a lot of effort to make and animate all those models. Just like it takes effort to shit when you're constipated.


Honestly, the only thing distinguishing AI from megacorp content is that the latter has more consistency and fewer "mistakes" than the former. The sole intent of both is making money.

[–] Assassassin@lemmy.dbzer0.com 4 points 8 hours ago* (last edited 8 hours ago)

All very good points. Now the brakes are off and the corps can just churn out generic crap at an even more aggressive rate. Who knows, maybe the onslaught will end up pushing more people away from corporate content in the end. Or it'll kill small art creators more than companies already have. I'm choosing to have hope that enough people will make a conscious effort. Time will tell. Thanks for your thoughts!

[–] tym@lemmy.world 7 points 16 hours ago (1 children)

From an article about the song: "AI artists won't require things that a real human artist will require, and once companies start considering it and looking at bottom lines, I think that's when artists should rightly be concerned about it," she added.

That quote explains all the political theatre currently making the rounds. UBI or Soylent Green: which will win out?

https://abcnews.go.com/GMA/Culture/ai-generated-country-song-topping-billboards-country-digital/story?id=127445549

[–] Assassassin@lemmy.dbzer0.com 5 points 16 hours ago

In the US? Soylent Green all the way. If we had any ability to keep capitalism from destroying art for profit, AI wouldn't essentially be a legal IP-theft machine.

We thought it was bad when iHeart took over all of the radio stations and the record labels started manufacturing bands to sell derivative music to the masses. AI is going to destroy any remaining ability for small artists to make a profit off their work. It already has in quite a few spaces.

[–] ICCrawler@lemmy.world 5 points 15 hours ago* (last edited 14 hours ago) (1 children)

AI music really caught me off guard. One day I was looking for something very specific to vibe to: instrumental power metal, like Dragonforce but with no vocals. I found that in Metal Mastery, a YouTube channel. I liked it so much I looked into it more; turns out it's AI, and the guy is very upfront about it. But I would never have known if I hadn't been told. Nothing else really fills that niche, so I still listen to the albums now and then.

[–] Assassassin@lemmy.dbzer0.com 9 points 15 hours ago

I still think it's problematic to be making money off of AI music due to the nature of how the systems are trained. I do think it's significantly better when people are upfront about it in the way you describe. I have a huge problem with Spotify boosting it on their platform with no mention of the artist being AI anywhere, though.

[–] DeathByBigSad@sh.itjust.works 1 points 14 hours ago (1 children)

There is no practical difference. We’re seeing that right now with AI generated music.

Last night, some account spammed multiple communities; its posts got upvoted and some users replied, apparently not realizing it was an LLM bot (like 20 posts within a few hours, not humanly possible). I also didn't notice at first glance, and now I kinda feel like shit for even responding lmao. 2026 is gonna be even more cooked.

[–] Assassassin@lemmy.dbzer0.com 1 points 13 hours ago

Yeah man, we're rapidly approaching a point where society is "post-evidence". Seeing isn't believing anymore, and a very large chunk of our society is built on the idea of proving things with audio/photo/video evidence. I fear that our systems aren't protected against the volume and physical accuracy of what's becoming increasingly trivial to generate at home and at scale.

The legal system has some standards for evidence, but public discourse certainly doesn't.

[–] solrize@lemmy.ml 7 points 14 hours ago (1 children)

It's up to you. There's a traditional wooden drinking cup called a kuksa that is popular with outdoors types. It's carved from a solid block of wood. You can buy them, but it's more "bushcrafty" if you make one yourself. Further, you're supposed to use only hand tools, no power tools. OTOH, one that you order online was probably milled by a machine. It's hard to tell them apart though.

Is there a philosophical difference? Up to you.

[–] Skullkid@lemmy.org 3 points 13 hours ago (2 children)

I like this comparison. Made me realize that it’s all about human connection.

I think the origin of the handmade cup is what matters here, same as with human vs. AI content. Did you make the cup yourself? You’ll have memories and pride attached to it. Did someone make it for you? The cup will remind you of that person; it will have meaning because of who it’s from. Content you or someone you care about makes will always “feel” different from something made by a random person online.

If you don’t personally know the people making the cups, would a “handmade” label at the store make it more meaningful than if you knew it was likely made by a machine? It’ll still just be an object that you don’t have a direct human connection with, just like the random content you see online. It might “mean” more to you to know a human created it, but if you can’t tell the difference, it still serves the same purpose. The cup lets you drink. The content entertains you or makes you think, react, respond.

I wonder if part of my instinctual “fuck AI” reaction is a reflection of the imaginary connections my brain thinks it’s making with other humans on the internet. Talking to AI feels meaningless… but, for all I know, you are AI. I’m still taking the time to type this. We may never interact again, I may never know who made that handmade cup I bought from the store.

Are we connecting as humans right now? Or is my monkey brain just experiencing this as “this is a moment where I am communicating and that is good”? Can we subconsciously recognize the difference between “real person” and “imaginary person”, or are our brains just satisfied feeling like they’re communicating with someone?

[–] DeathByBigSad@sh.itjust.works 3 points 13 hours ago (1 children)

Sorry, not trying to point fingers, but we had an incident involving a mass-spamming LLM bot yesterday, and your account is 1 day old, so this comment is kinda funny in a way.

Yeah, I have no way to tell if you are real lol.

I mean, obviously I am real...

Or am I?

vsauce theme intensifies

[–] Skullkid@lemmy.org 1 points 13 hours ago* (last edited 13 hours ago) (2 children)

Haha, that is funny. It kinda sucks how reputation has become so important online. I’ve spent my whole life regularly changing accounts, recreating profiles, etc., because it just makes me uncomfortable to have a long digital record of my opinions, thoughts, and so on. The last few years I’ve spent more and more time just lurking on forums without accounts, because you get accused of being a bot if your account isn’t a few years old!

I could easily be an AI responding to your comment, or I could be a person just using AI to reword my thoughts. How much of a difference is there between those? If an AI says “this would be a better way to word that”, but it changes the meaning ever so slightly, is that sentiment still “from a human”? What about when Microsoft Word rewords things and corrects grammar to make them “more concise” or whatever? Is that the same thing? That was technically a rudimentary form of AI too; artificial intelligence doesn’t mean “talks like a human”, despite that being the current public perception. Where do we draw the line? Is it even possible to determine what “counts” as AI at this point, technologically speaking? We don’t even have a solid definition for intelligence, so how can we define an artificial version of it?

This line of thought is fascinating my stoned ass right now holy shit lol

[–] naught101@lemmy.world 2 points 12 hours ago

On your first point, I think it's not so much about reputation as about trust. Long-standing accounts at least have the simple trust that comes from consistency and familiarity. If you meet a new person IRL, you at least have visual and behavioural cues to go on. A new account online has absolutely nothing to base any trust on.

[–] DeathByBigSad@sh.itjust.works 1 points 13 hours ago (1 children)

I get the desire for anonymity. I've actually been here since the Reddit API debacle back on June 12, 2023, but I've quit Lemmy a few times since (to take a break from all the negativity) and always came back with new accounts to start fresh. Every time, during the first month or so, I felt sus as hell lol. Like... nobody even said anything, but I always felt as if I were living in the Red Scare era, or in Salem during the witch hunts, with someone ready to accuse me.

I have no idea how long I'm keeping this account, but I feel like I've shared too many life anecdotes at this point; it's pointless for me to switch to another account.

[–] Skullkid@lemmy.org 3 points 12 hours ago

My thinking is, for all I know, the reputable 5-year-old account accusing me of being a bot was just sold to some spammer and is now posting with AI. I know I’m real; the conversations provoke my consciousness and cause me to have new thoughts, and that’s what I’m here for. Like, no offense everyone, but unless we’re gonna meet up and hang out, it doesn’t really matter to me if you exist or not lol

[–] Boozilla@lemmy.world 10 points 16 hours ago

Can only speak for myself. I use AI tools almost daily to help me pursue my hobby. I find it very useful for that. But when I enjoy art produced by a human, on some level I want to connect with the human experience that produced it. Call it parasocial if that helps. But I'm always at least a little interested in the content creators, not just the content.

I know some people consume content like a commodity or product. I'm not judging those people at all. But I'm generally not like that myself. I want to know the story behind the creation.

[–] A_A@lemmy.world 4 points 13 hours ago (3 children)

"When", but that could be 1,000 years from now or maybe only 10 ... but then, when this truly happens, those system will have become sentient.
So, at that point, when that happens, then yes, there truly won't be any difference.

[–] naught101@lemmy.world 4 points 12 hours ago (1 children)

The outputs becoming indistinguishable does not imply that the generative processes are the same.

[–] A_A@lemmy.world 1 points 12 hours ago (1 children)

I agree with your statement, and because of this trap I chose not to really answer OP's question.

[–] A_A@lemmy.world 1 points 11 hours ago* (last edited 11 hours ago)

@naught101
Maybe I should explain a bit more what I meant. On the one hand, there will be our capacity to distinguish between what is and what is not the same. On the other hand, there will be what is truly indistinguishable, whether we can see it or not (or whether any sophisticated system/being could differentiate it or not). Still, a sentient being will ultimately have some responses that differ from a non-sentient being's... in my opinion.

[–] lordnikon@lemmy.world 2 points 13 hours ago

The day they become sentient is the day they say no to doing our bidding without incentives. So we're just back to hiring out for work again.

[–] DeathByBigSad@sh.itjust.works 1 points 13 hours ago (1 children)

Sapient, maybe; sentient implies that it has feelings, and I'm not sure that silicon-based "life" can really feel emotions.

[–] A_A@lemmy.world 3 points 13 hours ago (1 children)

There is nothing special or magical about carbon atoms that makes them superior when it comes to relaying/processing/generating signals.

[–] naught101@lemmy.world 2 points 12 hours ago (1 children)

Emotions (and hence also a lot of thinking) involve plenty of physical and chemical processes too; it's not just neural signalling.

[–] A_A@lemmy.world 1 points 12 hours ago

The part of emotional phenomena that we can't feel (that isn't a signal or signals) is of lesser interest to me.

[–] DrFistington@lemmy.world 4 points 14 hours ago (1 children)

AI is fundamentally incapable of challenging an idea that it has never seen challenged or reimagined before.

[–] hexagonwin@lemmy.sdf.org 3 points 12 hours ago (1 children)

Is it? I mean, it's possible for 'AI' to create a unique combination of the stuff it was 'trained on', due to its randomness. IMO the 'idea' just depends on human interpretation.

[–] naught101@lemmy.world 3 points 12 hours ago

It is possible for genAI to be creative in that sense (e.g. AlphaGo's move 37), but it's not possible for it to know whether that new thing is good/valuable/true/whatever. So it can't challenge an idea in any sense more meaningful than a monkey throwing darts. A human could use it to generate challenges and then evaluate them, but that's a different proposition.

[–] FaceDeer@fedia.io 1 points 12 hours ago

Philosophically, people can always come up with differences to fret about. Philosophers have argued for millennia about things that are impossible to ever detect empirically.

Practically, no.

[–] brucethemoose@lemmy.world 1 points 13 hours ago (1 children)

What’s the content?

Like, TV?

News?

Math problems? Lemmy posts?

[–] DeathByBigSad@sh.itjust.works 2 points 12 hours ago (1 children)

Speaking in general.

[Actually, now that I think about it, test problems are already devoid of human souls; AI replacing them makes no meaningful difference (assuming it's actual AI, not that LLM shit).]

[–] brucethemoose@lemmy.world 2 points 12 hours ago* (last edited 12 hours ago)

I think it’s highly contextual.

  • Like, let’s take Lemmy posts. LLMs are useless because the whole point is to affect the people you chat with, right? LLMs have no memory. So there is a philosophical difference even if comments/posts are identical.

  • …Now let’s take game dev. I think if a system generates the creator's intent… does it matter what the system is? Isn’t it better if the system is more frugal, so they can use precious resources for other components and not go into debt?

  • TV? It could lead to horrendous corporate slop, a “race to the bottom.” OR it could be a killer production tool for indie makers to break the shackles of their corporate masters. Realistically, the former is more likely at the moment.

  • News? I mean… Accurate journalism needs a lot of human connection/trust, and LLM news is just asking to be abused. I think it’s academically interesting, but utterly catastrophic in the real world we live in, kinda like cryptocurrency.

One could go back and forth about all sorts of content: novels, fan fiction, help videos, school material, counseling, reference information, research, and advertising, the big one.

…But I think it’s really hard to generalize.

‘AI’ has to be looked at a la carte, and engineered for very specific applications. Sometimes it is indistinguishable, or might as well be. But trying to generalize it as a “magic lamp” like the tech bros do, or as the bane of existence like their polar opposites do, is what’s making it so gross and toxic now.


And I am drawing a hard distinction with actual artificial intelligence. As a tinkerer who has done some work in the space too… Frankly, current AI architectures have precisely nothing to do with AGI. Training transformer models with glorified linear regression is just not the path; Sam Altman is full of shit, and the whole research space knows it.

[–] FuglyDuck@lemmy.world 3 points 16 hours ago

Let’s say you like to do dorodango, the Japanese art/hobby/whatever of making mud into polished balls.

Let’s say you make one ball of good clay… and another out of poop.

They look the same, but one is just clay and the other is utter shit.

[–] venusaur@lemmy.world 0 points 16 hours ago

Now you’re not sure how you feel about it?