this post was submitted on 19 Nov 2025
189 points (97.0% liked)
Technology
77043 readers
1781 users here now
you are viewing a single comment's thread
Lol 🤣 I'm SO EMBARRASSED. You're totally right and understand these things better than me after reading a GOOGLE BLOG ABOUT THEIR PRODUCT.
I'll never speak to this topic again since I've clearly been bested with your knowledge from a Google Blog.
yes, google reported that their ai discovered a novel cancer treatment, of course they did?
now tell me about how it isn't true. Do you have anything of substance to discredit this?
this reeks of confirmation bias, did you even try to invalidate your preconceived notions?
I sure do. Knowledge, and being in the space for a decade.
Here's a fun one: go ask your LLM why it can't create novel ideas, it'll tell you right away 🤣🤣🤣🤣
LLMs have ZERO intentional logic that would allow them to even comprehend an idea, let alone craft a new one and create relationships between ideas.
I can already tell from your tone you're mostly driven by bullshit PR hype from people like Sam Altman, and are an "AI" fanboy, so I won't waste my time arguing with you. You're in love with human-made logic loops and datasets, bruh. There is not now, nor was there ever, a way for any of it to become some supreme being of ideas and knowledge as you've been pitched. It's super fast sorting from static data. That's it.
You're drunk on Kool-Aid, kiddo.
A decade in the space is impressive. It shows dedication and time invested. That alone deserves recognition.
Still, the points you are repeating are familiar. They are recycled claims from years ago. If the goal is to critique novelty, repeating the same arguments does not advance it.
You say LLMs have zero intentional logic. That is true if by intentional logic you mean human consciousness or goals. It is false if you mean emergent behaviors and the ability to combine information in ways no single source explicitly wrote. Eliminating nuance with absolute terms makes it easy to dismiss valid evidence.
Calling someone an AI fanboy signals preference for labels over analysis. That approach does not strengthen an argument. Specific examples do. Concrete failures, reproducible tests, or papers are what advance discussion.
It is also not accurate to suggest that anyone pitches LLMs as supreme beings. Most people treat them as complex tools that produce surprising results. Their speed, scale, and capacity to identify patterns exceed human ability, but they remain tools. Critiquing them as if they were gods is a strawman.
If you want this discussion to matter, show a single reproducible example where an LLM fails in a way your logic cannot explain. Otherwise, repeating slogans and metaphors only illustrates a resistance to evidence.
I am not here to argue for ideology. I am here to examine claims. That is a choice. It is also a choice to resist slogans and demand specificity. Fun, fun. Another fun day.
You sound drunk on kool-aid, this is a validated scientific report from yale, tell me a problem with the methodology or anything of substance.
so what if that's how it works? It clearly is capable of novel things.
🤦🤦🤦 No...it really isn't:
Not only is there no validation, they have only begun even looking at it.
Again: LLMs can't make novel ideas. This is PR, and because you're unfamiliar with how any of it works, you assume MAGIC.
Like every other bullshit PR release of its kind, this is simply a model being fed a ton of data and running through millions of iterative segments, testing outcomes of various combinations of things that would take humans years to do. It's not that it is intelligent or making "discoveries", it's just moving really fast.
You feed it 10^2 combinations of amino acids, and it's eventually going to find new chains needed for protein folding. The thing you're missing there is:
It's a tool for moving fast though data, a.k.a. A REALLY FAST SORTING MECHANISM
Nothing, at any stage of development, is novel output or validated by the models, because...they can't do that.
I was almost with you on the whole expert act until the part where you said we feed the model "10^2 combinations of amino acids." You realize 10^2 is literally just 100, right? You are writing paragraphs acting like the smartest guy in the room, but you think protein folding gets solved by checking a list shorter than a grocery receipt. That is honestly hilarious. It kind of explains your whole point though. No wonder you think it is just a "simple sorting mechanism" if you think the dataset is that small. You might want to check the math before the next lecture because being off by about 300 zeros makes the arrogance look a bit silly.
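For the record, the scale gap is easy to check with integer arithmetic: a protein of length L over the 20 standard amino acids has 20**L possible sequences. A minimal sketch (the residue lengths below are hypothetical, just to show the order of magnitude):

```python
# Sequence space for a protein of length L over the 20 standard amino acids.
def num_sequences(length):
    return 20 ** length

# A short ~100-residue protein: 20**100 already has 131 digits (> 10^130).
print(len(str(num_sequences(100))))

# A modest ~230-residue protein: 300 digits' worth of sequences,
# i.e. roughly the "300 zeros" gap versus 10^2 = 100.
print(len(str(num_sequences(230))))
```

Exact counts aside, the point stands: 10^2 is nowhere near the size of any real sequence-search space.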
Wow, if you really do know something about this subject, you’re being a real asshole about it 🙄
He knows the basics, it's just that they don't lead to any of the conclusions he's claiming they do. He also boldly assumes that everyone who disagrees with him doesn't know anything. He's a beast of confirmation bias.
Nah, I'm just not going to write a novel on Lemmy, ma dude.
I'm not even spouting anything that's not readily available information anyway. This is all well known, hence everybody calling out the bubble.
You have not said one thing I did not already know; none of it has to do with anything.
an ai did something novel; this is an easily verified fact. The only alternative is that somebody else wrote the hypothesis.
It most certainly did not...because it can't.
You find me a model that can take multiple disparate pieces of information and combine them into a new idea not fed with a pre-selected pattern, and I'll eat my hat. The very basis of how these models operate is in complete opposition to the idea that they can spontaneously have a new and novel idea. New...that's what novel means.
I can pointlessly link you to papers or blogs from researchers explaining this, or you could just ask one of these things yourself, but you're not going to listen, which is on you for intentionally deciding to remain ignorant of how they function.
Here's Terrence Kim describing how they set it up using GRPO: https://www.terrencekim.net/2025/10/scaling-llms-for-next-generation-single.html
And then another researcher describing what actually took place: https://joshuaberkowitz.us/blog/news-1/googles-cell2sentence-c2s-scale-27b-ai-is-accelerating-cancer-therapy-discovery-1498
So you can obviously see...not novel ideation. They fed it a bunch of trained data, and it correctly used the different pattern alignment to say "If it works this way otherwise, it should work this way with this example."
Sure, it's not something humans had gotten to yet, but that's the entire point of the tool. Good for the progress, certainly, but that's its job. It didn't come up with some new idea about anything because it works from the data it's given, and the logic boundaries of the tasks it's set to run. It's not doing anything super special here, just very efficiently.
Start chewing. You literally admitted it in your own comment: "Sure, it's not something humans had gotten to yet." That is the definition of a novel discovery. You are arguing that because the AI used logic and existing data to reach the conclusion, it doesn't count. By that definition, no human scientist has ever had a novel idea either since we all build on existing data and patterns. The AI looked at the same data humans had, saw a pattern humans missed, and created a solution humans didn't have. That is novelty. But honestly it is hard to take your analysis of these papers seriously when you just argued in the comment above that protein folding involves "10^2 combinations." You realize 10^2 is just 100 right? You think complex biology is a list shorter than a grocery receipt. If your math is off by about 300 zeros I am not sure you are the best judge of what these models are actually capable of.
No, that's not what novel ideation is whatsoever 🤦
Again...these models work from a list of boundaries, logic, and rules made by humans. They don't make it up themselves because...they.fucking.cant.
If they could make their own rules and conclusions without human intervention, then you have novel ideas. But...they.100%.FUCKING.CANT.DO.THAT.
Pearls to pigs my friend, pearls to pigs.
If there's one bad thing about modern medicine and living in an outsized society, it's that intelligence is no longer evolutionarily beneficial. We are artificially selecting morons, and the latest PISA results are the canary in the coal mine for the idiocracy we're heading to.
Thank you for your efforts in demystifying these fucking ads in the form of breakthroughs that have these insufferable morons thinking "AI" can now do research.
So many people have successfully argued against claims I did not make.
https://www.emergentmind.com/papers/2409.06185
https://huggingface.co/papers/2409.04109
ai does new things all the time and this is easily validated and explained with the concept of temperature.
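For what it's worth, "temperature" here is just a knob on the sampling step: it rescales the model's logits before the softmax, so higher values flatten the distribution and make low-probability continuations more likely. A minimal sketch in plain Python, with hypothetical logit values:

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=random.Random(0)):
    """Sample a token index from logits rescaled by temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # inverse-CDF sampling over the softmax distribution
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

# Hypothetical logits for 4 candidate tokens.
logits = [4.0, 2.0, 1.0, 0.5]
# Near-zero temperature is effectively greedy: it almost always picks the
# top token. Higher temperature flattens the distribution, so tokens the
# model considers unlikely get sampled too.
```

This is why the same model can produce output that appears in no training document: sampling explores combinations rather than replaying a single stored answer.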
You addressed that they haven't tested the hypothesis completely while completely overlooking the fact that an ai suggested a novel hypothesis... even if it turns out to be wrong, it is still undeniably a novel hypothesis. This is what was validated by yale...
you have still failed to answer the question. You're also neglecting to include an explanation of temperature in your argument, which may be relevant here.
Wow, you stayed way cooler than I would have. Lemmy is extremely anti-LLM or AI in general.
Oof. Tell me you don't understand science without telling me you don't understand science.
It is validated: yale confirmed that a novel hypothesis was generated. It was not REPRODUCED, but that's irrelevant to my claim that they created a novel hypothesis.
i see how that wording is confusing.