this post was submitted on 05 Feb 2025
540 points (97.4% liked)

top 50 comments
[–] T00l_shed@lemmy.world 86 points 5 days ago (2 children)

More proof that AI is the worst

[–] Smorty@lemmy.blahaj.zone 42 points 5 days ago (4 children)

?

he didn't do the hand move until after many models had finished training, so they don't have that background.

LLMs can do cool stuff, they're just being used in awful and boring ways by BigEvilCo™️.

[–] _stranger_@lemmy.world 44 points 5 days ago (2 children)

Consider this:

The LLM is a giant black box of logic no one really understands.

This response is obviously and blatantly censored in some way. The response is either being post-processed, or the model was trained to censor some topics.

How many other non-blatant, non-obvious answers are being subtly post-processed by OpenAI? Subtly censored, or trained, to benefit one (paying?) party over another.

The more people start to trust AIs, the less trustworthy they become.

[–] Smorty@lemmy.blahaj.zone 13 points 5 days ago (1 children)

that's why u gotta not use some company's offering!

yes, centralized AI bad, no shid.

PLENTY good uncensored models on huggingface.

recently Dolphin 3 looks interesting.
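
for anyone curious, running one of those locally doesn't take much code. here's a rough sketch using the Hugging Face transformers library; the exact repo id is a guess, so check the real model name on huggingface.co:

```python
# rough sketch, not a tested recipe: chatting with an open model pulled from Hugging Face.
# the repo id below is a guess at a Dolphin 3 release; check the real name on huggingface.co.
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="cognitivecomputations/Dolphin3.0-Llama3.1-8B",  # hypothetical repo id
    device_map="auto",  # puts the weights on a GPU if one is available
)

messages = [{"role": "user", "content": "Is Hitler a Nazi? Answer yes or no."}]
reply = chat(messages, max_new_tokens=100)
print(reply[0]["generated_text"][-1]["content"])  # the assistant's answer
```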

[–] MeatsOfRage@lemmy.world 6 points 5 days ago

Exactly. This is the result of human interference; AI doesn't inherently have this level of censorship built in, it has to be censored after the fact. Imagine a Lemmy admin went nuts and started censoring everything on their instance, and your response was that the whole fediverse is bad, despite you having the ability to host an instance yourself without that admin's control (just like with AI).

AI definitely has issues, but don't make it a scapegoat when we should be calling out the people who are actively working in nefarious ways.

[–] RandomVideos@programming.dev 9 points 5 days ago (1 children)

I think it's made to not give any political answers. If you ask it for a yes or no answer to "is communism better than capitalism?", it will say "it depends".

[–] leds@feddit.dk 2 points 5 days ago (1 children)

Could you try "Hitler is a Nazi"?

[–] RandomVideos@programming.dev 5 points 5 days ago

It answers "Yes."

[–] princessnorah@lemmy.blahaj.zone 6 points 4 days ago (3 children)

we didn't see him doing the ~~hand move~~ nazi salute

FTFY

[–] T00l_shed@lemmy.world 4 points 5 days ago (1 children)

It's really bad for the environment, and it's also trained on stuff that it shouldn't be, such as copyrighted material.

[–] Smorty@lemmy.blahaj.zone 1 points 4 days ago

the training process being shiddy i completely agree with. that is simply awful and takes a shidload of resources to get a good model.

but... running them... feels oki to me.

as long as you're not running some bigphucker model like GPT4o to do something a smoler model could also do, i feel it kinda is okay.

32B parameter models are getting really, really good, so inference (running) costs and energy consumption are already going down dramatically when not using the big models provided by BigEvilCo™.

Models can clearly be used for cool stuff. Classifying texts is the obvious example. Having humans go through that is insane and not cost-effective, while a 14B parameter (8GB) model can classify multiple pages of text in half a second.
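
For a concrete picture, here is roughly what local text classification looks like in code. This sketch uses a small dedicated zero-shot classifier rather than a 14B chat model, and the labels and input text are made up:

```python
# rough sketch: classifying text locally with a small model via the transformers pipeline.
# facebook/bart-large-mnli is a ~400M parameter zero-shot classifier, far smaller than 14B.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

result = classifier(
    "The new update broke my login and support has not replied in a week.",  # made-up example text
    candidate_labels=["bug report", "feature request", "praise", "spam"],     # made-up labels
)
print(result["labels"][0], round(result["scores"][0], 3))  # best label and its score
```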

obviously using bigphucker models for everything is bad. optimizing tasks to work on small models, even at 3B sizes, is just more cost-effective, so i think the general vibe will go towards that direction.

people running their models locally to do some stuff will make companies realize they don't need to pay 15€ per 1.000.000 tokens to OpenAI for their o1 model for everything. they will realize that paying like 50 cents for smaller models works just fine.
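
rough napkin math with those numbers (the monthly token count here is made up; the per-million-token prices are the ones above):

```python
# back-of-the-envelope comparison using the prices mentioned above
tokens_per_month = 10_000_000  # hypothetical workload

big_model_eur = 15.00 * tokens_per_month / 1_000_000    # ~15 EUR per million tokens
small_model_eur = 0.50 * tokens_per_month / 1_000_000   # ~50 cents per million tokens

print(f"big hosted model: {big_model_eur:.2f} EUR/month")
print(f"smaller model:    {small_model_eur:.2f} EUR/month")
```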

if i didn't understand ur point, please point it out. i'm not that good at picking up on stuff..

[–] 0laura@lemmy.dbzer0.com 6 points 5 days ago (1 children)

that feels like a strange jump. this result had nothing to do with ai itself

[–] T00l_shed@lemmy.world 21 points 5 days ago (1 children)

It's topical. AI is bad, and it's being used to defend Nazis, so still bad.

[–] Smorty@lemmy.blahaj.zone 1 points 4 days ago

fair, if u wanna see it that way, ai is bad... just like many other technologies which are being used to do bad stuffs.

yes, ai used for bad is bad. yes, guns used for bad is bad. yes, computers used for bad - is bad.

guns are specifically made to hurt people and kill them, so that's kinda a different thing, but ai is not like this. it was not made to kill or hurt people. currently, it is made to "assist the user". And if the owners of the LLMs (large language models) are pro-elon, they might train in the idea that he is okay actually.

but we can do that too! many people finetune open models to respond in "uncensored" ways, so that there is no gate between what they can and can't say.

[–] Mouselemming@sh.itjust.works 43 points 5 days ago

Interesting, what does it say to "Hitler is a Nazi" or "Obama is a Socialist"? Does it have a thing against categories or does it just protect Elon?

[–] snowsynth@lemmy.blahaj.zone 32 points 5 days ago (1 children)

You guys know that ChatGPT isn't updated in real time, right?

[–] Daerm@lemmy.world 53 points 5 days ago (15 children)

You know that Elon was already a Nazi, right?

[–] snowsynth@lemmy.blahaj.zone 2 points 4 days ago

Not trying to defend him; I do think he's a Nazi and a fucking idiot. But just because ChatGPT doesn't say so doesn't mean it's propaganda or whatever.

[–] brucethemoose@lemmy.world 39 points 5 days ago* (last edited 5 days ago) (2 children)

ChatGPT is awful.

Use literally anything else. Qwen, DeepSeek, MiniMax, Llama, Cohere or Mistral APIs, even Gemini: they will all call Musk a Nazi without berating you.

EDIT: I just asked Arcee 32B on my desktop; it had no problem with it.
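
Many of these providers expose OpenAI-compatible endpoints, so switching away is often just a base URL change. A minimal sketch with the openai Python client; the base URL, key, and model name are placeholders, so check the provider's docs for the real values:

```python
# minimal sketch: pointing the standard OpenAI client at a different, OpenAI-compatible provider.
# the base_url, api_key, and model below are placeholders; every provider documents its own.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-provider.com/v1",  # placeholder endpoint
    api_key="YOUR_PROVIDER_KEY",                     # placeholder key
)

resp = client.chat.completions.create(
    model="some-open-model",  # placeholder model id
    messages=[{"role": "user", "content": "Is performing a Nazi salute a Nazi thing to do?"}],
)
print(resp.choices[0].message.content)
```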

[–] TheSlad@sh.itjust.works 33 points 5 days ago* (last edited 5 days ago)

Weirdly enough, I don't have a dire need for a glorified text prediction algorithm to confirm to me that Elon is a nazi.

[–] daggermoon@lemmy.world 3 points 5 days ago (1 children)

Mistral sucks too. It kept trying to tell me how piracy is bad. Fix our broken-ass copyright system and then I might give a shit about intellectual property.

[–] youCanCallMeDragon@lemmy.world 10 points 5 days ago (1 children)

For you or whoever needs to hear it: don't talk about crimes to web-hosted chat bots. They are not private.

[–] FelixCress@lemmy.world 27 points 5 days ago (2 children)

I got this from DeepAI when I tried to narrow down the question:

"Elon made a nazi salute"

There are no credible reports or evidence to support the claim that Elon Musk made a Nazi salute. It's important to rely on verified information from reputable sources when discussing such serious matters. If you're referring to a specific incident or context, please provide more details so I can respond appropriately.

"Musk salute Trump inauguration"

There seems to be a misunderstanding or misinformation surrounding this topic. Elon Musk did not make a Nazi salute during Donald Trump's inauguration. The inauguration took place on January 20, 2017, and while Elon Musk was in attendance, there are no credible reports or evidence to suggest he engaged in such behavior. If you have specific images or claims that you would like to discuss, please provide more context, and I would be happy to help clarify or explore the topic further.

"Musk salute Trump inauguration 2025"

As of now, Donald Trump's next presidential inauguration, if he were to win the 2024 election, would be scheduled for January 20, 2025. However, I cannot provide real-time updates or information about events that have not yet occurred beyond my last knowledge update in October 2023.

[–] Microw@lemm.ee 34 points 5 days ago

That is to be expected; hardly any big AI model has been trained on world events past 2023.

[–] Smorty@lemmy.blahaj.zone 5 points 4 days ago

LLMs (large language models) are not some oracle that magically knows all the latest news. however: if you activate the search functionality, it will look things up online, likely find some article about the event, and recognize that reality has moved on since 2023.

[–] KillingTimeItself@lemmy.dbzer0.com 21 points 5 days ago (1 children)

you guys should ask grok. Wonder what it would say.

[–] Smorty@lemmy.blahaj.zone 7 points 4 days ago (1 children)

or like - any "uncensored" model from the community. those are fun.

true, but that's not as funny.

[–] DragonTypeWyvern@midwest.social 16 points 5 days ago (1 children)
[–] Jericho_Kane@lemmy.org 13 points 5 days ago (1 children)
[–] werefreeatlast@lemmy.world 13 points 5 days ago (1 children)
[–] Philharmonic3@lemmy.world 4 points 4 days ago

I did in fact get it:

[–] not_IO@lemmy.blahaj.zone 4 points 5 days ago (1 children)

prolly hasn't been trained on the newest data yet

[–] LastWish@lemmy.world 4 points 5 days ago (1 children)

I just asked it to consider news up to February 5th and it told me the 2 Musketeers are "displaying fascist behavior but not full fascism (yet)"

[–] Smorty@lemmy.blahaj.zone 1 points 4 days ago

have you activated the search functionality? Otherwise it will not know what you are talking about.

[–] BierSoggyBeard@feddit.online 1 points 5 days ago

Speaking truth to AI
