In its current state, I'd call it ML (Machine Learning). A human defines the desired outcome, and the technology "teaches itself" to reach that outcome in a brute-force fashion (through millions of failed attempts, improving slightly with each epoch/iteration) until the human-defined goal has been met.
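As a rough illustration of that loop, here's a toy sketch in plain Python (the line-fitting task, the loss, and all the numbers are made up for the example; real systems apply the same idea at vastly larger scale):

```python
import random

# Toy "desired outcome": fit the line y = 2x + 1 from examples.
# The human defines success (low prediction error); the loop nudges
# the parameters slightly each epoch until that outcome is met.
data = [(x, 2 * x + 1) for x in range(-10, 11)]
w, b = random.random(), random.random()  # start from a random guess
lr = 0.001                               # learning rate: size of each nudge

for epoch in range(10_000):
    # Mean squared error: the human-defined measure of failure
    loss = sum((w * x + b - y) ** 2 for x, y in data) / len(data)
    if loss < 1e-6:                      # desired outcome reached
        break
    # Gradients of the loss w.r.t. w and b (hand-derived for this model)
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    # "Improve slightly": step the parameters against the gradient
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned w={w:.3f}, b={b:.3f} after {epoch + 1} epochs")
```

Each epoch is just a small numerical correction toward the human-defined goal.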
That definition would also apply to teaching a baby to walk.
A baby isn't just learning to walk. It also makes its own decisions constantly and has emotions. An LLM is not an intelligence, no matter how hard you try to argue that it is. Just because the term has been used for a long time doesn't mean it's ever been used correctly.
It's actually stunning to me that people are so hyped on LLM bullshit that they're trying to argue it comes anywhere close to a sentient being.
You completely missed my point, obviously. I'm trying to get you to consider what "intelligence" actually means. Is intelligence the ability to learn? To make decisions? To have feelings? Outside of humans, what else meets your definition of intelligence? Parrots? Mice? Spiders?
I'm not comparing LLMs to human complexity, nor do I particularly give a shit about them in my daily life. I'm just trying to get you to actually examine your definition of intelligence, since you seem to be using one more specific than what most of society uses.
To be fair, I think we underestimate just how brute-force the development of our own intelligence was. We as a species have been evolving since single-celled organisms, mutation by mutation over billions of years, and then as individuals our nervous systems have been collecting data from dozens of senses (including hormone receptors) 24/7 since we were embryos. So before we were even born, we had some surface-level intuition for the laws of physics and the control of our bodies. The robot is essentially starting from square one. It didn't get to practice kicking Mom in the liver for nine months; we take that for granted, but it's a transferable skill.
Granted, this is not exactly analogous to how a neural network is trained, but I don't think it's wise to assume there's something "magic" in us, like a "soul", when the difference between biological and digital neural networks could be explained by our "richer" ways of interacting with the environment (a body with senses and mobility, rather than a token/image parser) and by a few more years/decades of incremental improvements to models and hardware.
So what do you call it when a newborn deer learns to walk? Is that “deer learning?”
I’d like to hear more about your idea of a “desired outcome” and how it applies to a single-celled organism or a goldfish.