News
Welcome to the News community!
Rules:
1. Be civil
Attack the argument, not the person. No racism/sexism/bigotry. Good faith argumentation only. This includes accusing another user of being a bot or paid actor. Trolling is uncivil and is grounds for removal and/or a community ban. Do not respond to rule-breaking content; report it and move on.
2. All posts should contain a source (url) that is as reliable and unbiased as possible and must only contain one link.
Obvious biased sources will be removed at the mods’ discretion. Supporting links can be added in comments or posted separately but not to the post body. Sources may be checked for reliability using Wikipedia, MBFC, AdFontes, GroundNews, etc.
3. No bots, spam or self-promotion.
Only approved bots, which follow the guidelines for bots set by the instance, are allowed.
4. Post titles should be the same as the article used as source. Clickbait titles may be removed.
Posts whose titles don’t match the source may be removed. If the site changed its headline, we may ask you to update the post title. Clickbait titles use hyperbolic language and do not accurately describe the article’s content. When necessary, post titles may be edited and clearly marked with [brackets], but may never be used to editorialize or comment on the content.
5. Only recent news is allowed.
Posts must be news from the most recent 30 days.
6. All posts must be news articles.
No opinion pieces, listicles, editorials, videos, blogs, press releases, or celebrity gossip. All posts will be judged on a case-by-case basis. Mods may use discretion to pre-approve videos or press releases from highly credible sources that provide unique, newsworthy content not available or possible in another format.
7. No duplicate posts.
If an article has already been posted, it will be removed. Different articles reporting on the same subject are permitted. If the post that matches your post is very old, we refer you to rule 5.
8. Misinformation is prohibited.
Misinformation / propaganda is strictly prohibited. Any comment or post containing or linking to misinformation will be removed. If you feel that your post has been removed in error, credible sources must be provided.
9. No link shorteners or news aggregators.
All posts must link to original article sources. You may include archival links in the post description. News aggregators such as Yahoo, Google, Hacker News, etc. should be avoided in favor of the original source link. Newswire services such as AP, Reuters, or AFP are frequently republished and may be shared from other credible sources.
10. Don't copy an entire article into your post body
For copyright reasons, you are not allowed to copy an entire article into your post body. This is an instance-wide rule that is strictly enforced in this community.
Thank you for raising these points. Progress has certainly been made, and in specific applications AI tools have resulted in breakthroughs.
The question is whether it was transformative, or just an incremental improvement, i.e. a faster horse.
I would also argue that there is a significant distinction between predictive AI systems in the application of analysis and the use of LLMs. The former has been responsible for the majority of the breakthroughs in the application of AI, yet the latter is getting all the recent attention and investment.
It's part of the reason why I think the current AI bubble is holding back AI development. So much investment is being made for the sake of extracting wealth from individuals and investment vehicles, rather than in something that will be beneficial in the long term.
Predictive AI (old AI) overall is certainly going to be a transformative technology as it has already proven over the last 40 years.
I would argue that what most people call AI today, LLMs, is not going to be transformative. They do a very good imitation of human language, but they completely lack the ability to reason beyond the information they are trained on. There has been some progress with building specific modules for completing certain analytical tasks, like mathematics and statistical analysis, but not in the ability to reason.
It might be possible to get there through brute force in a sufficiently large LLM, but I strongly suspect we are a few orders of magnitude short of the global computing power needed to match a mammalian brain and the number of connections it can make.
But even if we could, we would also need to improve power generation and efficiency by a few orders of magnitude.
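For a rough sense of that gap, here is a back-of-envelope sketch. The figures are loose public estimates, not measurements, and the parameter-to-synapse comparison itself is a contested analogy rather than an established equivalence:

```python
import math

# Rough public estimates (assumptions, not measurements):
# a human brain is often cited at ~1e14 synapses, while the largest
# disclosed LLMs sit on the order of ~1e12 parameters.
BRAIN_SYNAPSES = 1e14
LARGEST_LLM_PARAMS = 1e12

gap = BRAIN_SYNAPSES / LARGEST_LLM_PARAMS
orders_of_magnitude = math.log10(gap)

print(f"Gap: ~{gap:.0f}x, i.e. about {orders_of_magnitude:.0f} orders of magnitude")
# prints: Gap: ~100x, i.e. about 2 orders of magnitude
```

Even this crude arithmetic only covers parameter count; it says nothing about the energy cost per connection, which is where biology is vastly more efficient.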
I would love to see the AI bubble pop, so that the truly transformative work can progress, rather than the current "how do we extract wealth" focus of AI. So much of what is happening now is the same as the dot-com bubble, but at a much larger scale.
You’re assuming that transformation only counts when it yields visible scientific breakthroughs. That overlooks how many technologies reshape economies by compressing time, labor, and coordination across everyday work. When a tool removes friction from millions of small interactions, its cumulative effect can be structural even if each individual use feels modest, much like spreadsheets, search engines, or email once did.
The distinction between predictive systems and LLMs is broadly right, but in practice the boundary is porous. Most high-impact AI systems still rely on classical predictive models, optimization methods, and domain-specific algorithms, while LLMs increasingly act as a control and translation layer. They map ambiguous human intent into structured actions, route tasks across tools, and integrate heterogeneous systems that previously required expert interfaces. This does not make LLMs the source of breakthroughs, but it does make them central to how breakthroughs scale, combine, and reach non-experts.
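The "control and translation layer" pattern above can be sketched in a few lines. Everything here is illustrative: `call_llm` is a placeholder for any chat-completion API, and the tool registry is a made-up example, not a specific product's design:

```python
import json

def call_llm(prompt: str) -> str:
    """Placeholder for an LLM call that translates ambiguous intent
    into a structured, machine-readable action (hard-coded here)."""
    return json.dumps({"tool": "forecast", "args": {"horizon": 7}})

# The deterministic, domain-specific components do the actual work;
# the LLM only decides which one to invoke and with what arguments.
TOOLS = {
    "forecast": lambda horizon: f"ran predictive model for {horizon} days",
    "lookup":   lambda key: f"fetched record {key}",
}

def route(user_request: str) -> str:
    """Map a natural-language request onto a structured tool call."""
    decision = json.loads(call_llm(user_request))
    return TOOLS[decision["tool"]](**decision["args"])

print(route("How will demand look next week?"))
# prints: ran predictive model for 7 days
```

The point of the sketch is that the breakthrough-producing component is the classical predictive model; the LLM's contribution is making it reachable without an expert interface.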
The reasoning critique strengthens when framed around control and guarantees rather than capability. LLMs do generalize to new problems, so their limitation is not simple memorization. Their reasoning emerges from next-token prediction, not from an explicit objective tied to truth, proof, or logical consistency. This architecture optimizes for plausibility and coherence, sometimes producing fluent but unfounded claims. The problem is not that LLMs reason poorly, but that they reason without dependable constraints.
The hallucination problem can be substantially reduced, but within a single LLM it cannot be eliminated. That limit, however, applies to models, not necessarily to systems. Multi-model and hybrid architectures already point toward ways of approaching near-perfect reliability. Retrieval and grounding modules can verify claims against live data, tool use can offload factual and computational tasks to systems with hard guarantees, and ensembles of models can cross-check, critique, and converge on shared answers. In such configurations, the LLM serves as a reasoning interface while external components enforce truth and precision. The remaining difficulty lies in coordination, ensuring that every step, claim, and interpretation remains tied to verifiable evidence. Even then, edge cases, underspecified prompts, or novel domains can reintroduce small error rates. But in principle, hallucination can be driven to vanishingly low levels when language models are treated as parts of truth-preserving systems rather than isolated generators.
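The ensemble cross-check idea can be made concrete with a toy sketch: accept an answer only when independent generators agree and a grounding check passes, and abstain otherwise. The `generators` and `grounded` callables are stand-ins for real models and a real retrieval layer; this is an illustration of the pattern, not a production recipe:

```python
from collections import Counter

def cross_check(question, generators, grounded, min_agreement=2):
    """Return an answer only if enough generators agree AND the
    grounding check confirms it; otherwise abstain (return None)."""
    answers = [gen(question) for gen in generators]
    answer, votes = Counter(answers).most_common(1)[0]
    if votes >= min_agreement and grounded(question, answer):
        return answer
    return None  # abstaining is preferred over risking a hallucination

# Stand-ins: two "models" agree, one dissents; "retrieval" confirms.
gens = [lambda q: "Paris", lambda q: "Paris", lambda q: "Lyon"]
grounded = lambda q, a: a == "Paris"

print(cross_check("Capital of France?", gens, grounded))
# prints: Paris
```

The residual failure modes the paragraph mentions show up exactly here: if the grounding check is wrong, or all generators share the same blind spot, agreement alone proves nothing, which is why the error rate can be driven low but not to zero.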
The compute and energy debate is directionally sensible but unsettled. It assumes progress through brute-force scaling toward brain-like complexity, yet history shows that architectural shifts, hybridization, and efficiency gains often reset apparent limits. Real constraints are likely, but their location and severity remain uncertain.
Where your argument is strongest is on incentives. The current investment cycle undoubtedly rewards short-term monetisation and narrative dominance over long-term scientific and infrastructural progress. This dynamic can crowd out foundational research in safety, evaluation, and interpretability. Yet, as in past bubbles, the aftermath tends to leave behind useful assets (tools, datasets, compute capacity, and talent) that more serious work can build upon once the hype cools.