this post was submitted on 27 Dec 2024
356 points (94.9% liked)
you are viewing a single comment's thread
Lol. We're as far from getting to AGI as we were before the whole LLM craze. It's just glorified statistical text prediction: no matter how much data you throw at it, it will still just guess the next most likely letter/token based on what came before it, and it can't even get its facts straight without bullshitting.
If we ever get it, it won't be through LLMs.
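To make "statistical text prediction" concrete, here's a toy bigram predictor in Python. The corpus and everything about it are made up for illustration; real LLMs use transformers over huge corpora, but the basic move (emit the most likely next token given what came before) is the same shape:

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny made-up corpus, then
# always emit the most likely successor: prediction, not understanding.
corpus = "the cat sat on the mat and the cat sat on the rug".split()
following = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    following[cur][nxt] += 1

def predict(word):
    # Most frequent word observed after `word` in the corpus.
    return following[word].most_common(1)[0][0]

word = "the"
generated = [word]
for _ in range(4):
    word = predict(word)
    generated.append(word)

print(" ".join(generated))  # the cat sat on the
```

The "model" happily regurgitates plausible-looking word sequences with zero notion of what a cat or a mat is, which is the commenter's point.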
I hope someone will finally mathematically prove that it's impossible with current algorithms, so we can finally be done with this bullshitting.
There are already a few papers about diminishing returns in LLMs.
They did! Here's a paper that proves basically that:
van Rooij, I., Guest, O., Adolfi, F. et al. Reclaiming AI as a Theoretical Tool for Cognitive Science. Comput Brain Behav 7, 616–636 (2024). https://doi.org/10.1007/s42113-024-00217-5
Basically it formalizes the proof that any black-box algorithm that is trained on a finite universe of human outputs to prompts, and capable of taking in any finite input and putting out an output that seems plausibly human-like, is an NP-hard problem. And NP-hard problems of that scale are intractable: they can't be solved using the resources available in the universe, even with perfect/idealized algorithms that haven't yet been invented.
This isn't a proof that AI is impossible, just that the method to develop an AI will need more than just inferential learning from training data.
Thank you, it was an interesting read.
Unfortunately, as I was looking more into it, I've stumbled upon a paper that points out some key problems with the proof. I haven't looked into it more and tbh my expertise in formal math ends at vague memories from CS degree almost 10 years ago, but the points do seem to make sense.
https://arxiv.org/html/2411.06498v1
Doesn't that just say that AI will never be cheap? You can still brute force it, which is more or less how back propagation works.
I don't think "intelligence" needs to have a perfect "solution", it just needs to do things well enough to be useful. Which is how human intelligence developed, evolutionarily - it's absolutely not optimal.
Intractable problems of that scale can't be brute forced, because the brute-force solution can't be run within the time scale of the universe, using the resources of the universe. If we're talking about marshaling all the computing power of humanity toward a solution and hoping to finish before the sun expands to engulf the earth in about 7.5 billion years, then it's not a real solution.
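A quick back-of-envelope in Python shows why "just brute force it" doesn't survive contact with exponential search spaces. All the numbers here are loose assumptions (an exascale machine at 10^18 ops/s, a 50k-token vocabulary), just to give a feel for the scales involved:

```python
# Rough compute budget: an exascale machine (1e18 ops/s) running
# until the sun engulfs the earth (~7.5 billion years).
SECONDS_PER_YEAR = 3.15e7
budget_ops = 1e18 * 7.5e9 * SECONDS_PER_YEAR

# Exhaustively checking every token sequence of length n over a
# 50k-token vocabulary costs roughly vocab**n operations.
vocab, n = 50_000, 8
needed_ops = float(vocab) ** n

print(f"budget: {budget_ops:.1e} ops, needed: {needed_ops:.1e} ops")
print("feasible" if needed_ops <= budget_ops else "hopeless")
```

Even enumerating all eight-token outputs already blows past the budget; realistic outputs are hundreds of tokens long, so exhaustive search isn't merely slow, it's physically off the table.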
Yeah, maybe you're right. I don't know where the threshold is.
I wonder if the current computational feasibility will cap out improvement of current-generation LLMs soon?
The only text predictor I want in my life is T9
I still have fun memories of typing "going" in T9. Idk why, but 46464 was fun to hit.
I remember that the keys for "good," "gone," and "home" were all the same, but I had the muscle memory to cycle through to the right one without even looking at the screen. Could type a text one-handed while driving without looking at the screen. Not possible on a smartphone!
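For anyone who never used T9, the keypad mapping is easy to sketch (this is the standard phone keypad layout, with each digit covering a run of letters):

```python
# Standard phone keypad: digit -> letters it covers.
KEYPAD = {'2': 'abc', '3': 'def', '4': 'ghi', '5': 'jkl',
          '6': 'mno', '7': 'pqrs', '8': 'tuv', '9': 'wxyz'}
LETTER_TO_DIGIT = {ch: d for d, letters in KEYPAD.items() for ch in letters}

def t9(word):
    # Each letter collapses to its digit, so many words share a key sequence.
    return "".join(LETTER_TO_DIGIT[ch] for ch in word.lower())

print(t9("going"))  # 46464
print(t9("good"), t9("gone"), t9("home"))  # 4663 4663 4663
```

Which confirms the memory above: "good," "gone," and "home" all collapse to 4663, hence the cycle-through-candidates muscle memory.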
I just tried Google Gemini and it would not stop making shit up, it was really disappointing.
Gemini is really far behind. For me it's Chatgpt > Llama >> Gemini. I haven't tried Claude since they require a mobile number to use it.
It's pretty good but I prefer gpt. Looking forward to trying deepseek soon.
Roger Penrose wrote a whole book on the topic in 1989. https://www.goodreads.com/book/show/179744.The_Emperor_s_New_Mind
His points are well thought out and argued, but my essential takeaway is that a series of switches is not ever going to create a sentient being. The idea is absurd to me, but for the people that disagree? They have no proof, just a religious fervor, a fanaticism. Simply stated, they want to believe.
All this AI of today is the AI of the 1980s, just with more transistors than we could fathom back then, but the ideas are the same. After the massive surge from our technology finally catching up with 40-60 year old concepts and algorithms, most everything has been just adding much more data, generalizing models, and other tweaks.
What is a problem is the complete lack of scalability and the massive energy consumption. Are we supposed to dry our clothes at a specific hour of the night, join smart grids to reduce peak air conditioning, and scorn bitcoin because it uses too much electricity, but for an AI that generates images of people with six fingers and other mangled appendages, that bullshits anything it doesn't know, for that we need to build nuclear power plants everywhere? It's sickening, really.
So no AGI anytime soon, but I am sure Altman has defined it as anything that can make his net worth 1 billion or more, no matter what he has to say or do.
Is the goal to create a sentient being, or to create something that seems sentient? How would you even tell the difference (assuming it could pass any test a normal human could)?
Powering off a pile of switches is turning it off. Powering off a sentient being is killing it. Not to mention a million other issues it raises.
Until you can see the human soul under a microscope, we can't make rocks into people.
What do you think Sam Altman's net worth is currently?
I mean, human intelligence is ultimately "just" something too.
And 10 years ago people would often refer to the "Turing test" and imitation games when discussing what is artificial intelligence and what is not.
My complaint to what's now called AI is that it's as similar to intelligence as skin cells grown in the form of a d*ck are similar to a real d*ck with its complexity. Or as a real-size toy building is similar to a real building.
But I disagree that this technology will not be present in a real AGI if it's achieved. I think that it will be.
I'm not sure that not bullshitting should be a strict criterion of AGI, if whether or not it's been achieved is gauged by its capacity to mimic human thought.
The LLMs aren't bullshitting. They can't lie, because they have no concepts at all. To the machine, the words are all just numerical values with no meaning at all.
Just for the sake of playing a stoner-epiphany style of devil's advocate: how does that differ from how actual logical arguments are proven? Hell, why stop there? I mean, there isn't a single thing in the universe that can't be broken down to a mathematical equation for physics or chemistry. I'm curious how different the process is between a more advanced LLM or AGI model processing data and a severe savant memorizing libraries of books using their homemade mathematical algorithms. I know it's a leap and I could be wrong, but I thought I've heard that some of the Rain Man tier of savants actually process every experience in a mathematical language.
Like I said in the beginning this is straight up bong rips philosophy and haven't looked up any of the shit I brought up.
I will say tho, I genuinely think the whole LLM thing is without a doubt one of the most amazing advances in technology since the internet. With that being said, I also agree that it has a niche where it will be useful. The problem is that everyone and their slutty mother investing in LLMs is using them for everything they are not useful for, and we won't see any effective use of AI services until all the current idiots realize they poured hundreds of millions of dollars into something that can't perform any more independently than a 3-year-old.
This is a fun read
Hicks, M.T., Humphries, J. & Slater, J. ChatGPT is bullshit. Ethics Inf Technol 26, 38 (2024). https://doi.org/10.1007/s10676-024-09775-5
It's impossible to disprove statements that are inherently unscientific.
This is correct, and I don't think many serious people disagree with it.
Well... depends. LLMs alone, no, but the researchers who are working on solving the ARC-AGI challenge are using LLMs as a basis. The one which won this year is open source (all entries must be, to be eligible for the prize, and they need to run on the private data set), and was based on Mixtral. The "trick" is that they do more than that. All the attempts do extra compute at test time, so they can try to go beyond what their training data alone allows. The key to generality is trying to learn after you've been trained, to try to solve something that you've not been prepared for.
Even OpenAI's o1 and o3 do that, and so does the one that Google has released recently. They still lean heavily on an LLM, but they do more.
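A toy flavor of that "extra compute at test time" idea, in Python. Instead of relying only on what a frozen model memorized, you search a hypothesis space that explains the few examples given in the prompt, then apply the winner to the query. The hypothesis space here (small integer linear rules) is completely made up for illustration; real ARC entries search far richer program spaces:

```python
# Tiny program synthesis at test time: find a rule x -> a*x + b
# that reproduces every given example, then reuse it on new input.
def synthesize(examples):
    for a in range(-5, 6):
        for b in range(-20, 21):
            if all(a * x + b == y for x, y in examples):
                return lambda x, a=a, b=b: a * x + b
    return None  # no rule in this hypothesis space fits

# Examples seen only at test time; the rule here is x -> 2x + 3.
rule = synthesize([(1, 5), (2, 7), (3, 9)])
print(rule(10))  # 23
```

The point is that the "learning" happens after training, per task: nothing about `2x + 3` was baked in ahead of time.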
I'm not sure if it's already proven or provable, but I think this is generally agreed: deep learning alone will fit a very complex curve/manifold/etc., but nothing more. It can't go beyond what it was trained on. But the approaches for generalizing all seem to do more than that, doing search, or program synthesis, or whatever.
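The "fits the curve but can't go beyond it" point is easy to demonstrate with ordinary curve fitting. This sketch fits a polynomial to sine on a training interval and then evaluates it outside that interval (degree and interval are arbitrary choices for the demo):

```python
import numpy as np

# Fit a degree-9 polynomial to sin(x) on [0, 2*pi]: near-perfect
# inside the training range, nonsense the moment you leave it.
x_train = np.linspace(0, 2 * np.pi, 50)
coeffs = np.polyfit(x_train, np.sin(x_train), deg=9)

inside = abs(np.polyval(coeffs, np.pi) - np.sin(np.pi))
outside = abs(np.polyval(coeffs, 4 * np.pi) - np.sin(4 * np.pi))
print(f"error at pi (inside): {inside:.2e}")
print(f"error at 4*pi (outside): {outside:.2e}")
```

Interpolation error is tiny; extrapolation error explodes by many orders of magnitude. That's the gap between fitting the training distribution and actually generalizing.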
I’m pretty sure the simplest way to look at it is that an LLM can only respond; it can't generate anything on its own without prompting. I wish humans were like that sometimes, especially a few in particular. I would think an AGI would be capable of independent thought, not requiring the prompt.