[–] yogthos@lemmygrad.ml 23 points 2 days ago (1 children)

You don't need to fully understand a mechanism to replicate its function. We frequently treat systems as black boxes and focus entirely on the output. Also, consider that nature has zero 'understanding' of intelligence, yet it managed to produce the human brain through blind mutation shaped strictly by selection pressures. Clearly, comprehension is not a prerequisite for creation. We can mimic the process using genetic algorithms and biologically inspired neural networks.
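
To make that concrete, here's a toy sketch of evolution-without-understanding: a tiny genetic algorithm that evolves the weights of a small neural network to approximate XOR, purely through mutation and selection, with no gradients and no theory of how the network "should" work. The network size, population size, and mutation scale are all arbitrary choices for the illustration:

```python
# Toy genetic algorithm: evolve the weights of a tiny neural network
# to approximate XOR using nothing but mutation and selection.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

def forward(w, x):
    # 2-4-1 network; w is a flat vector of 17 parameters
    W1, b1 = w[:8].reshape(2, 4), w[8:12]
    W2, b2 = w[12:16], w[16]
    h = np.tanh(x @ W1 + b1)
    return np.tanh(h @ W2 + b2)

def fitness(w):
    # negative mean squared error: higher is better
    return -np.mean((forward(w, X) - y) ** 2)

pop = rng.normal(size=(50, 17))            # random initial population
for gen in range(500):
    scores = np.array([fitness(w) for w in pop])
    elite = pop[np.argsort(scores)[-10:]]  # keep the 10 fittest
    # children are mutated copies of the elite; no "understanding" anywhere
    children = elite[rng.integers(0, 10, size=40)]
    children = children + rng.normal(scale=0.1, size=(40, 17))
    pop = np.vstack([elite, children])

best = pop[np.argmax([fitness(w) for w in pop])]
print(np.round(forward(best, X), 2))  # should approach [0, 1, 1, 0]
```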

In fact, we often gain understanding through the attempt to replicate. For instance, reverse engineering these structures is exactly how we learned that language isn't the basis of intelligence in the first place. We don't need a perfect theory of mind to build a system that works. All this shows is that the LLM approach has limits, and that it's not going to lead to any sort of general intelligence on its own.

[–] Monk3brain3@hexbear.net 8 points 2 days ago (2 children)

Yeah, I agree with you. A better way to make my point: I just think that replicating something as insanely complex as intelligence will require a much more thorough understanding of how it works. Nature took billions of years to pull it off, and only one species reached a high level of intelligence (from our perspective, at least).

[–] GrouchyGrouse@hexbear.net 13 points 2 days ago (1 children)

The whole thing reeks of "cart before the horse" and always has. That attitude bleeds into every facet of the project, right down to demanding energy outputs we don't have yet.

[–] Monk3brain3@hexbear.net 9 points 1 day ago (1 children)

> demanding energy outputs we don't have yet.

The energy and data center stuff is just getting stupider by the day. "Orbital data centers"

i-cant

[–] Horse@lemmygrad.ml 7 points 1 day ago

literally one of the worst possible places to put a data center lol

[–] yogthos@lemmygrad.ml 2 points 1 day ago (1 children)

I think we have to be careful with assumptions here. The human brain is incredibly complex, but it evolved organically under selection pressures that weren't strictly selecting for intelligence. We shouldn't assume that the complexity of our brain is a prerequisite. The underlying algorithm may be fairly simple, and the complexity we see may just be an emergent phenomenon from scaling it up to the size of our brain.

We also know that animals with much smaller brains, like corvids, can exhibit impressive feats of reasoning. That strongly suggests their brains are wired more efficiently than primate brains. I imagine part of the reason is that they need to fly, which creates additional selection pressure for efficient wiring and smaller brains. Even insects like bees can perform fairly complex cognitive tasks, like mapping out their environment and communicating in sophisticated ways. Perhaps that's where we should really be focusing our studies: a bee brain has around a million neurons, which is a far more tractable problem than the human brain's roughly 86 billion.

Another interesting thing to note is that human brains have massive amounts of redundancy. There's a documented case of a man who was missing roughly 90% of his brain tissue yet lived a normal life. So even when it comes to human-style intelligence, the scope of the problem looks significantly smaller than it might first appear.

I'd argue that embodiment is the key to establishing a reinforcement loop, and that robotics will be the path toward creating genuine AI. An organism's brain maintains homeostasis by constantly balancing internal body signals against signals from the external environment, making decisions to regulate its internal state. That continuous feedback loop lets the brain evaluate the usefulness of its actions, which is exactly what reinforcement learning needs. An embodied AI could use the same mechanism to learn about and interact with the world. Robots would build an internal world model from their interactions with the environment, and that model would act as the basis for their decision making. A system like that develops underlying representations of the world that are fundamentally similar to our own, which would provide a basis for meaningful communication.
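
As a rough illustration of that homeostatic loop, here's a toy agent whose only reward signal is how close its internal variables stay to their set points. The variables, actions, dynamics, and numbers are all invented for the sketch; it's the shape of the feedback loop that matters:

```python
# Sketch of a homeostatic reinforcement loop: the reward is simply how
# close the agent's internal state stays to its set points.
import random

SETPOINTS = {"energy": 1.0, "temperature": 0.5}

def internal_reward(state):
    # Reward = negative total deviation from the homeostatic set points.
    return -sum(abs(state[k] - v) for k, v in SETPOINTS.items())

def step(state, action):
    # Toy body/environment dynamics: acting burns energy and heats the
    # body; "eat" restores energy, "rest" cools the body down.
    s = dict(state)
    s["energy"] -= 0.05
    s["temperature"] += 0.02
    if action == "eat":
        s["energy"] = min(s["energy"] + 0.2, 1.5)
    elif action == "rest":
        s["temperature"] = max(s["temperature"] - 0.1, 0.0)
    return s

# Minimal "policy": a running estimate of each action's value, updated
# from the internal reward signal -- the feedback loop described above.
values = {"eat": 0.0, "rest": 0.0, "explore": 0.0}
state = {"energy": 1.0, "temperature": 0.5}

for t in range(1000):
    if random.random() < 0.1:                 # occasionally explore
        action = random.choice(list(values))
    else:                                     # otherwise exploit
        action = max(values, key=values.get)
    state = step(state, action)
    r = internal_reward(state)
    values[action] += 0.1 * (r - values[action])  # running average

print(values)  # actions that keep the body in balance score highest
```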

[–] Monk3brain3@hexbear.net 1 points 8 hours ago (1 children)

You make a lot of good points that I think are all valid. Embodied AI is an interesting idea. The one thing I'm a bit of a sceptic on is that robots, and the other hardware AI is being developed on, lack the biological plasticity we have in living creatures. That might lead to the incorporation of biological systems into AI development (and all the ethical issues that go with that).

[–] yogthos@lemmygrad.ml 1 points 7 hours ago

That's something we'll have to see to know for sure, but personally I don't see why a biological substrate would be fundamental to the patterns of our thoughts. Neural networks in a computer have a similar kind of plasticity, because the connection weights are rebalanced through training. They're less efficient than biological networks, but there are already analog chips being made that express neuron potentials in hardware. It's worth noting that we won't necessarily create intelligence like our own, either. This might be the closest we'll get to meeting aliens. :)
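
Just to illustrate what I mean by plasticity in software, here's a toy Hebbian-style update where connection weights reorganize themselves from activity alone. It's a made-up sketch, not how any real analog chip or training pipeline works:

```python
# Toy Hebbian plasticity: "neurons that fire together wire together."
# The weight matrix rebalances itself purely from activity.
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(scale=0.1, size=(4, 4))   # random initial connections

for _ in range(100):
    x = rng.normal(size=4)               # pre-synaptic activity
    y = np.tanh(W @ x)                   # post-synaptic response
    W += 0.01 * np.outer(y, x)           # Hebbian strengthening
    W *= 0.99                            # decay keeps weights bounded

print(np.round(W, 2))  # connection weights reshaped by activity alone
```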

I suspect that the next decade will be very interesting to watch.