this post was submitted on 10 Aug 2025

technology
[–] hollowmines@hexbear.net 64 points 3 months ago (5 children)
[–] FloridaBoi@hexbear.net 57 points 3 months ago (1 children)

this is the tech threatening your livelihood

[–] hollowmines@hexbear.net 34 points 3 months ago

The field I exited only a couple of years ago has already been decimated by it.

[–] yogthos@lemmygrad.ml 29 points 3 months ago
[–] D61@hexbear.net 24 points 3 months ago (1 children)

This picture is ready to be posted to c/main

[–] hollowmines@hexbear.net 44 points 3 months ago (5 children)

My favourite weird GPT-5 fail so far:

[–] ghosts@hexbear.net 38 points 3 months ago (1 children)

Thought for 17s

Me before saying the dumbest shit imaginable kitty-birthday-sad

[–] invo_rt@hexbear.net 10 points 3 months ago

i-love-not-thinking same energy tbh

[–] CptKrkIsClmbngThMntn@hexbear.net 33 points 3 months ago (2 children)

For anyone confused, this answer comes from a very outdated riddle where a child gets in an accident, the father rushes him to the hospital, and upon arrival the doctor proclaims, "I can't operate on this child; he's my son."

By the time I heard it as a kid it was already obvious, but I guess at one point the idea of a woman being a doctor was so far outside the norm as to legitimately stump people.

[–] charly4994@hexbear.net 23 points 3 months ago

I think it's funny that the next obvious solution is that the child has two dads and everyone seems to ignore it as well.

[–] segfault11@hexbear.net 15 points 3 months ago

tromp I like children who don’t get in accidents

[–] plinky@hexbear.net 10 points 3 months ago
[–] Acute_Engles@hexbear.net 22 points 3 months ago

me trying to play any of the smart person engineering videogames without looking up guides

[–] adultswim_antifa@hexbear.net 59 points 3 months ago* (last edited 3 months ago) (6 children)

Sam Altman's job is to hype GPT-5 so the VCs will still put up with him burning probably the biggest pile of money anyone has ever burned. He's probably damaged the internet and the environment more than any single individual ever, and he's terrorized people about their jobs for years. And he almost certainly knows it's all bullshit, which makes him a fraud. In a just world, he would be in prison when this is all over. He would almost certainly face the death penalty in China.

[–] Lussy@hexbear.net 36 points 3 months ago

He would almost certainly face the death penalty in China.

Lol

[–] gay_king_prince_charles@hexbear.net 31 points 3 months ago (1 children)

He would almost certainly face the death penalty in China.

Jack Ma, famously not executed, does similar things with Qwen and Alibaba in general.

[–] jackmaoist@hexbear.net 21 points 3 months ago

I would be perfectly fine with Sam Altman being sent to a reeducation camp.

[–] Hohsia@hexbear.net 26 points 3 months ago

Abused his sister too btw. Feel like that needs to be talked about more. He’s a grade A piece of shit

[–] WrongOnTheInternet@hexbear.net 15 points 3 months ago* (last edited 3 months ago) (1 children)

damaged... the environment more than any single individual ever

Still crypto, by orders of magnitude. AI doesn't even come close.

For example, Google data centres used about 30 TWh in the last year, while crypto used more like 170 TWh.

It's not possible to pin down ChatGPT's usage because all the data is bad, but it's still relatively small compared to crypto.

[–] adultswim_antifa@hexbear.net 10 points 3 months ago (1 children)

A lot of people contributed to those crypto numbers. The AI models run in purpose-built data centers, and some have their own generators because the power draw is so high.

[–] WrongOnTheInternet@hexbear.net 6 points 3 months ago

I don't think any dedicated power plants have been built for AI yet, and crypto mining is highly concentrated, so ultimately it's not that many people.

OpenAI appears to operate what is described as the world's largest single data center building, with an IT load capacity of around 300 MW and a maximum power capacity of approximately 500 MW. This facility includes 210 air-cooled substations and a massive on-site electrical substation, which further highlights its immense scale. A second identical building is already under construction on the same site as of January 2025. When completed, this expansion will bring the total capacity of the campus to around a gigawatt, a record.

So this largest site would draw about 4.5 TWh a year, or roughly 3 percent of current estimated crypto usage. With the expansion, about 9 TWh, or roughly 6 percent of estimated crypto usage.
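
For anyone who wants to check the arithmetic, here's a rough back-of-the-envelope sketch of the conversion. It assumes the site draws its full rated capacity around the clock (which overstates real usage); the 500 MW / 1 GW capacities and the ~170 TWh crypto figure are just the numbers quoted above:

```python
# Back-of-the-envelope: convert data-center power capacity to annual energy,
# then compare it to the ~170 TWh/year crypto estimate mentioned above.
# Assumes the site runs at full rated power 24/7, which overstates real usage.

HOURS_PER_YEAR = 24 * 365  # 8760

def annual_twh(capacity_mw: float) -> float:
    """Annual energy in TWh for a site running at capacity_mw all year."""
    mwh_per_year = capacity_mw * HOURS_PER_YEAR  # MWh
    return mwh_per_year / 1_000_000              # 1 TWh = 1,000,000 MWh

CRYPTO_TWH = 170  # rough crypto estimate from the comment above

for label, mw in [("current building (~500 MW)", 500), ("with expansion (~1 GW)", 1000)]:
    twh = annual_twh(mw)
    print(f"{label}: {twh:.1f} TWh/yr, {100 * twh / CRYPTO_TWH:.0f}% of the crypto estimate")

# current building (~500 MW): 4.4 TWh/yr, 3% of the crypto estimate
# with expansion (~1 GW): 8.8 TWh/yr, 5% of the crypto estimate
```

Same ballpark as the figures above; the exact percentage shifts a point or so depending on how the crypto total is estimated and how hard the site actually runs.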

[–] purpleworm@hexbear.net 14 points 3 months ago

He's probably damaged . . . the environment more than any single individual ever

JD Rockefeller, Rex Tillerson, Lee Raymond

Truman, LBJ

[–] terrific@lemmy.ml 46 points 3 months ago (1 children)

Good old Gary setting the record straight...

No hypothesis has ever been given more benefit of the doubt, nor more funding. After half a trillion dollars in that direction, it is obviously time to move on. The disappointing performance of GPT-5 should make that enormously clear.

Unlikely, but I like his optimism. This is how I have felt with the release of every new LLM for the past two years, but the scam is somehow still going 🤷 ... I suppose many people stand to lose a lot of money when the bubble finally bursts.

[–] Lussy@hexbear.net 34 points 3 months ago* (last edited 3 months ago) (1 children)

Cryptocurrency is still going strong and that's probably the biggest grift of all time. *smacks top of AI* This thing's got decades of hoax left in it

[–] Awoo@hexbear.net 34 points 3 months ago (4 children)

LLMs have reached their limits. No matter what you do with them, they're always going to be glorified search engines.

AI has to be conceived from the ground up as something that learns and reproduces actual thinking based on needs/wants. A system that comes up with ways of walking that reduce energy use for a bot, while also seeking out energy sources, might only be reproducing the cognitive behaviour of a bacterium, but it is closer to life than these LLMs, and it has more potential to iteratively evolve into something more complex as you give it more wants/needs for its program to evolve on.

Machine learning has more potential than this shit.

[–] yogthos@lemmygrad.ml 27 points 3 months ago (3 children)

I don't think an AI necessarily has to have needs or wants, but it does need to have a world model. That's the shared context we all have and what informs our use of language. We don't just string tokens together when we think. We have a model of the world around us in our heads, and we reason about the world by simulating actions and outcomes within our internal world model. I suspect that the path to actual thinking machines will be through embodiment. Robots that interact with the world, and learn to model it will be able to reason about it in a meaningful sense.

[–] InappropriateEmote@hexbear.net 10 points 3 months ago (16 children)

This is one of those things that starts getting into the fuzzy area around the unanswered questions regarding what exactly qualifies as qualia and where that first appears. But having needs/wants probably is a necessary condition for actual AI if we're defining actual (general) AI as having self awareness. In addition to what @Awoo@hexbear.net said, here's another thing.

You mention how AI probably has to have a world model as a prerequisite for genuine self-aware intelligence, and this is true. But part of that is that the world model has to be accurate, at least insofar as it allows the AI to function. Like, maybe it can even have an inaccurate fantasy-world world model, but it still has to model a world close enough to reality that it's modeling a world it can exist in; in other words, the world model can't be random gibberish, because intelligence would be meaningless in such a world, and it wouldn't even be a "world model." All of that is mostly beside the point except to note that AI has to have a world model that approaches accuracy with the real world. So in that sense it already "wants" to have an accurate world model.

But it's a bit of a chicken-and-egg problem: does the AI only "want" to have an accurate model of the world after it gains self-awareness, the only point where true "wants" can exist? Or was that "want" built into it by its creators? That directionality towards accuracy in its world model is built into it; it has to be in order to get it to work. The accuracy-approaching world model would have to be part of the programming put into it long before it ever gains sentience (aka the ability to experience, self-awareness), and that directionality won't just disappear when the AI does gain sentience. That pre-awareness directionality, which by necessity still exists, can then be said to be a "want" in the post-awareness general AI.

An analogy of this same sort of thing but as it is with us bio-intelligence beings: We "want" to avoid death, to survive (setting aside edge cases that actually prove the rule, like how extreme of an emotional state a person has to be in to be suicidal). That "want" is a result of evolution that has ingrained into us a desire (a "want") to survive. But evolution itself doesn't "want" anything. It just has directionality towards making better replicators. The appearance that replicators (like genes) "want" to survive enough to pass on their code (in other words: to replicate) is just an emergent property of the fact that things that are better able to replicate in a given environment will replicate more than things that are less able to replicate in that environment.

When did that simple mathematical fact, how replication efficiency works, get turned into a genuine desire to survive? It happened somewhere along the ladder of evolutionary complexity where brains had evolved to the extent that self awareness and qualia emerged (they are emergent properties) from the complex interactions of the neurons that make up those brains. This is just one example, but a pretty good one imo that shows how the ability to experience "wanting" something is still rooted in a kind of directionality that exists independently of (and before) the ability to experience. And also how that experience wouldn't have come about if it weren't for that initial directionality.

Wants/needs almost certainly do have to be part of any actual intelligence. One of the reasons for that is because those wants/needs have to be there in some form for intelligence to even be able to arise in the first place.


It gets really hard to articulate this kind of thing, so I apologize for all the "quoted" words and shit in parentheses. I was trying to make it so that what I was attempting to convey with these weird sentences could be parsed better, but maybe I just made it worse.

[–] Awoo@hexbear.net 4 points 3 months ago (8 children)

Machine learning requires needs or wants in order to evolve. If your model is going to learn how to utilise energy efficiently between recharges, then it needs to desire energy (need/want). This is just the "eat" and "collect water" stage of learning. Then you give it predators so it learns how to avoid being killed while doing all this, and it picks up survival methods. Add complexity to the environment over time and it'll learn more and more and more.

Reproduction probably needs some sort of social cues to learn, the ability to communicate with other models that they wish to reproduce, or the ability to start working in teams.

It all has the requirement of needs/wants. The basis of all animal intelligence evolving into more efficient methods of doing something is having needs.
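
As a very rough illustration of what that needs-driven setup could look like, here's a minimal sketch of a reinforcement-learning-style reward built from those drives. The terms, weights, and names are made up for the example, not taken from any actual system:

```python
# Illustrative sketch: a "needs/wants" style reward for an embodied agent.
# The agent is rewarded for collecting energy and staying alive, and penalized
# for energy spent moving and for damage from predators. All terms and weights
# here are placeholders for the idea described above.

from dataclasses import dataclass

@dataclass
class StepOutcome:
    energy_gained: float   # energy collected this step (e.g. reached a charger)
    energy_spent: float    # cost of actuation this step
    damage_taken: float    # harm from predators or hazards
    alive: bool            # whether the agent survived the step

def needs_based_reward(outcome: StepOutcome) -> float:
    reward = 0.0
    reward += 1.0 * outcome.energy_gained   # "want": seek out energy sources
    reward -= 0.5 * outcome.energy_spent    # "need": move efficiently
    reward -= 2.0 * outcome.damage_taken    # "need": avoid predators
    if not outcome.alive:
        reward -= 10.0                      # dying is the worst outcome
    return reward

# Example: an efficient step that found a charger and took no damage
print(needs_based_reward(StepOutcome(energy_gained=3.0, energy_spent=0.4,
                                     damage_taken=0.0, alive=True)))  # 2.8
```

The "add complexity over time" part would then just be curriculum: start with only the energy terms mattering, and introduce predators later so the damage and survival terms start to shape behaviour.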

[–] Lussy@hexbear.net 17 points 3 months ago* (last edited 3 months ago) (1 children)

fell-for-it-again

But with people thinking AI is dead because ‘it’s not profitable!!!’

It's always worth pointing out that the lifecycle of any tech company starts with a low-cost service that gains market share by burning VC money and reinvesting most or all profit back into the company, then transitions to running off user money and squeezing users. OpenAI is still in the growth phase and will only switch to the squeeze once it has a stable enough lead over Google and Anthropic that lost market share isn't as much of an issue.

[–] jackmaoist@hexbear.net 14 points 3 months ago (5 children)

ChatGPT is dogshit anyway and is only surviving due to being first and being free. So they have to burn money to stay relevant and hopefully not lose users to better models. GPT-5 is essentially a cost-saving model and is the start of the enshittification of the industry.

I use Claude for dev-related stuff, and it only allows a limited number of queries so they can keep their model accurate while keeping costs down.

Gemini already produces way better results than ChatGPT ever did and is really good at research.

Perplexity can be a decent search engine.

Even i-am-adolf-hitler AI is better than ChatGPT at most things, although I'd rather not use it.

[–] yogthos@lemmygrad.ml 12 points 3 months ago

I find it hilarious that it does straight-up worse than Qwen or DeepSeek, which have been out for months now, on basic tasks.

[–] FortifiedAttack@hexbear.net 10 points 3 months ago

waaaaooow what a surprise shocked-pikachu
