[–] GrouchyGrouse@hexbear.net 39 points 1 day ago* (last edited 1 day ago) (2 children)

I got this mental image of a bunch of guys trying to invent flight before the Wright Brothers. They’ve got this wingless prototype that shoots off some giant ramp. No matter how big the ramp, it never achieves flight. It goes up and comes back down. And these scientists are just chain smoking, pounding black coffee by the pot, pulling all-nighters, trying to come up with a bigger ramp. They bulldoze the whole fucking planet to make the ramp. Now we’re Planet Ramp. The fucking prototype still won’t fly.

[–] TheBroodian@hexbear.net 13 points 1 day ago

Great analogy

[–] umbrella@lemmy.ml 3 points 1 day ago* (last edited 1 day ago)

a bunch of guys trying to invent flight before the Wright Brothers

you mean like Santos Dumont?

[–] Soot@hexbear.net 17 points 1 day ago

Cutting-edge research proves that AI isn't as smart as humans... well, yeah, my 3-year-old knows that

[–] vovchik_ilich@hexbear.net 24 points 1 day ago

My dude Qui-Gon Jinn redeemed yet again

[–] AOCapitulator@hexbear.net 28 points 1 day ago

wait was that not obvious?!

Math isn't intelligence either. Idk what Harvard grad I need to send a letter to about this...

[–] SwitchyandWitchy@hexbear.net 24 points 1 day ago

I understand that here, and in many of the irl circles I roll in, this statement is taken for granted, but it's still really nice to have research that backs it up too.

[–] LeninWeave@hexbear.net 20 points 1 day ago

Cutting-edge research demonstrating what everyone outside of Silicon Valley understood instinctively.

[–] Coca_Cola_but_Commie@hexbear.net 13 points 1 day ago (1 children)

Litbros and performative males in shambles.

[–] LeZero@hexbear.net 5 points 1 day ago

It's the wordcels vs. shape rotators all over again, isn't it?

[–] miz@hexbear.net 12 points 1 day ago (2 children)

[–] yogthos@lemmygrad.ml 11 points 1 day ago

Blindsight is a great read.

[–] barrbaric@hexbear.net 5 points 1 day ago

I need to finally read Echopraxia one of these days...

[–] Monk3brain3@hexbear.net 12 points 1 day ago (1 children)

How do you replicate something you have no understanding of, like intelligence? The "AI" scam was doomed from the start.

[–] yogthos@lemmygrad.ml 23 points 1 day ago (1 children)

You don't need to fully understand a mechanism to replicate its function. We frequently treat systems as black boxes and focus entirely on the output. Also, consider that nature has zero 'understanding' of intelligence, yet it managed to produce the human brain through blind mutation shaped strictly by selection pressures. Clearly, comprehension is not a prerequisite for creation. We can mimic the process using genetic algorithms and biologically inspired neural networks.
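
As a toy illustration (my own sketch; the quadratic "black box" and the fitness function are made up for the example), here's a genetic algorithm in Python that replicates a function's behavior using nothing but blind mutation and selection pressure:

```python
import random

# A black box we can only query for outputs; we have no "understanding"
# of its internals, only its behavior.
def black_box(x):
    return x * x - 3 * x + 2

def fitness(candidate, samples):
    # Score a candidate (a, b, c) for f(x) = a*x^2 + b*x + c by how
    # closely its outputs match the black box's. Higher is better.
    a, b, c = candidate
    return -sum((a * x * x + b * x + c - black_box(x)) ** 2 for x in samples)

def evolve(pop_size=50, generations=300):
    samples = [i / 10 for i in range(-50, 50)]
    pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection pressure: keep the best half...
        pop.sort(key=lambda c: fitness(c, samples), reverse=True)
        survivors = pop[: pop_size // 2]
        # ...and refill the population with blindly mutated copies.
        children = [[g + random.gauss(0, 0.1) for g in p] for p in survivors]
        pop = survivors + children
    return pop[0]

print(evolve())  # drifts toward (1, -3, 2) with zero comprehension involved
```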

In fact, we often gain understanding through the attempt to replicate. For instance, reverse engineering these structures is exactly how we learned that language isn't the basis of intelligence in the first place. We don't need a perfect theory of mind to build a system that works. All this shows is that the LLM approach has limits and isn't going to lead to any sort of general intelligence on its own.

[–] Monk3brain3@hexbear.net 8 points 1 day ago (2 children)

Yeah, I agree with you. A better way to make my point would be that trying to replicate something as insanely complex as intelligence will require a much more thorough understanding of how it works. Like, nature took billions of years to pull it off, and only one species reached a high level of intelligence (from our perspective, at least).

[–] GrouchyGrouse@hexbear.net 13 points 1 day ago (1 children)

The whole thing reeks of “cart before the horse” and always has. It bleeds into every facet of the project, right down to demanding energy outputs we don’t have yet.

[–] Monk3brain3@hexbear.net 9 points 1 day ago (1 children)

demanding energy outputs we don’t have yet.

The energy and data center stuff is just getting stupider by the day. "Orbital data centers"

i-cant

[–] Horse@lemmygrad.ml 7 points 1 day ago

literally one of the worst possible places to put a data center lol

[–] yogthos@lemmygrad.ml 2 points 1 day ago (1 children)

I think we have to be careful with assumptions here. The human brain is incredibly complex, but it evolved organically to do what it does under selection pressures that weren't strictly selecting for intelligence. We shouldn't assume that the complexity of our brain is a prerequisite. The underlying algorithm may be fairly simple, and the complexity we see may just be an emergent phenomenon from scaling it up to the size of our brain.
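
A classic toy example of that (my own sketch, not anything from the article): Rule 110, a one-dimensional cellular automaton whose update rule depends on just three cells, yet is known to be Turing-complete when scaled up:

```python
# Rule 110: each cell's next state depends only on itself and its two
# neighbors, yet the global behavior is rich enough to be Turing-complete.
RULE = 110  # the 8-bit lookup table, one bit per 3-cell neighborhood

def step(cells):
    n = len(cells)
    return [
        (RULE >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 40 + [1] + [0] * 40  # start from a single live cell
for _ in range(30):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```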

We also know that animals with much smaller brains, like corvids, can exhibit impressive feats of reasoning. That strongly suggests that their brains are wired more efficiently than primate brains. I imagine part of the reason is that they need to fly, which creates additional selection pressure for more efficient wiring that facilitates smaller brains. Even insects like bees can perform fairly complex cognitive tasks, like mapping out their environment and communicating in sophisticated ways. And perhaps that's where we should really be focusing our studies: a bee brain has around a million neurons, and that's a far more tractable problem to tackle than the human brain.

Another interesting thing to note is that human brains have massive amounts of redundancy. There's a case of a guy who effectively had 90% of his brain missing and was living a normal life. So, even when it comes to human style intelligence, it looks like the scope of the problem is significantly smaller than it might first appear.

I'd argue that embodiment is the key feature in establishing a reinforcement loop, and that robotics will be the path toward creating genuine AI. An organism’s brain maintains homeostasis by constantly balancing internal body signals against those from the external environment, making decisions to regulate its internal state. It’s a continuous feedback loop that allows the brain to evaluate the usefulness of its actions, which facilitates reinforcement learning. An embodied AI could use this same mechanism to learn about and interact with the world effectively. Robots build an internal world model based on their interactions with the environment, and that model acts as the basis for their decision making. Such a system develops underlying representations of the world that are fundamentally similar to our own, and that would provide a basis for meaningful communication.
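
Here's a minimal sketch of that homeostatic loop (all the numbers and the two actions are invented for illustration): a Q-learning agent whose only reward is how close an internal "energy" variable stays to its setpoint:

```python
import random

# Toy embodied agent: internal state is "energy"; the only reward signal
# is homeostatic error, i.e. distance from an internal setpoint.
TARGET = 5.0
ACTIONS = [0, 1]  # 0 = rest (burns energy), 1 = forage (gains energy)

def environment(energy, action):
    energy += 1.5 if action == 1 else -1.0
    energy = max(0.0, min(10.0, energy))
    reward = -abs(energy - TARGET)  # closer to the setpoint = better
    return energy, reward

q = {}  # (discretized energy, action) -> value estimate

def policy(energy, eps=0.1):
    s = round(energy)
    if random.random() < eps:
        return random.choice(ACTIONS)  # occasional exploration
    return max(ACTIONS, key=lambda a: q.get((s, a), 0.0))

energy = TARGET
for _ in range(5000):
    s = round(energy)
    a = policy(energy)
    energy, r = environment(energy, a)
    s2 = round(energy)
    best_next = max(q.get((s2, a2), 0.0) for a2 in ACTIONS)
    old = q.get((s, a), 0.0)
    q[(s, a)] = old + 0.1 * (r + 0.9 * best_next - old)  # Q-learning update

# The learned policy forages when energy is low and rests when it's high,
# having never been given anything beyond its own internal error signal.
```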

[–] Monk3brain3@hexbear.net 1 points 1 hour ago (1 children)

You make a lot of good points that I think are all valid. Embodied AI is an interesting idea. The one thing I'm a bit of a sceptic on is that the robots and other hardware on which the AI is being developed lack the biological plasticity we have in living creatures. That might lead to the incorporation of biological systems in AI development (and all the ethical issues that go with that).

[–] yogthos@lemmygrad.ml 1 points 3 minutes ago

That's something we'll have to see to know for sure, but personally I don't think the biological substrate is fundamental to the patterns of our thoughts. Neural networks within a computer have a similar kind of plasticity, because the connections within the network are balanced through training. They're less efficient than biological networks, but there are already analog chips being made that express neuron potentials in hardware. It's worth noting that we won't necessarily create intelligence like our own, either. This might be the closest we'll get to meeting aliens. :)

I suspect that the next decade will be very interesting to watch.

[–] Bob_Odenkirk@hexbear.net 9 points 1 day ago* (last edited 1 day ago) (1 children)

Why does it matter if it’s actually intelligent? What does the fact that the name is technically a lie meaningfully change if people still find uses for “AI”? Just kinda feels like a gotcha.

[–] BodyBySisyphus@hexbear.net 21 points 1 day ago (1 children)

Because LLMs aren't useful enough to be profitable, and the investments companies are making in infrastructure only make sense if they represent a viable stepping stone toward AGI. If LLMs are a dead end, a lot of money may be about to go up in smoke.

The other problem is that they are mainly good at creating the illusion that they work well, and the main barrier to implementation, the tendency to hallucinate, may not be fixable.

[–] Kefla@hexbear.net 22 points 1 day ago

Of course it isn't fixable, and I've been saying this since like 2021. Hallucination isn't a bug that mars their otherwise stellar performance; hallucination is the only thing they do. Since nothing they generate is founded on any sort of internal logic, everything they generate is hallucination, even the parts that coincidentally line up with reality.
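
You can see this in the shape of the generation loop itself. A simplified sketch (assuming a generic autoregressive model, not any particular vendor's API; `model` here is a hypothetical callable): accurate and hallucinated continuations go through the identical code path, because nothing ever checks a sampled token against reality:

```python
import math, random

def softmax(logits):
    # Turn raw scores into a probability distribution over tokens.
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def generate(model, prompt_tokens, n_tokens):
    # The entire loop of an autoregressive LM: score every candidate next
    # token, sample one, append, repeat. There is no step where a sampled
    # token is verified against the world; "true" and "false" continuations
    # take exactly the same path.
    tokens = list(prompt_tokens)
    for _ in range(n_tokens):
        logits = model(tokens)  # model: token sequence -> per-token scores
        probs = softmax(logits)
        tokens.append(random.choices(range(len(probs)), weights=probs)[0])
    return tokens
```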