this post was submitted on 15 Mar 2026
8 points (70.0% liked)

Ask Lemmy


A Fediverse community for open-ended, thought-provoking questions



Well, assuming no AGI breakthrough happens (which, in my opinion, would require a vastly different approach than LLMs), we will see more of this AI-swarm type of stuff. Essentially you end up with a bunch of specialized AIs, plus some AI coordinators. The AI that we talk to will just farm the work out to other AIs, including ones specialized in verifying the work that the others produce.

Most people, pre-AI, did work that was, say, 60% implementation, 30% figuring out what needs to be done, and 10% verifying what was done. That will shift to something like 15% implementation, 50% requirements gathering, and 35% verification.
Obviously those numbers are just to illustrate the shift, not intended as an accurate representation of how our work is currently divided.

Overall, if you give an AI a way to verify what it is doing, and let it iterate, it is far more useful than just telling it to do a thing or asking it a question.
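The coordinator/verifier loop described above can be sketched roughly like this. This is a minimal illustration, not a real agent framework: `generate` and `verify` are hypothetical stand-ins for calls to an LLM and to a verifier (which in practice might run tests, a linter, or a second model).

```python
from typing import Optional, Tuple

def generate(task: str, feedback: Optional[str]) -> str:
    """Stand-in for an LLM call: propose a solution, revising if feedback exists."""
    if feedback is None:
        return "draft solution for: " + task
    return "revised solution for: " + task

def verify(solution: str) -> Tuple[bool, str]:
    """Stand-in verifier: accept only revised work, otherwise return feedback."""
    if solution.startswith("revised"):
        return True, "ok"
    return False, "draft rejected, please revise"

def solve_with_verification(task: str, max_iterations: int = 3) -> str:
    """Coordinator: generate, verify, and iterate until accepted or budget runs out."""
    feedback: Optional[str] = None
    for _ in range(max_iterations):
        solution = generate(task, feedback)
        accepted, feedback = verify(solution)
        if accepted:
            return solution
    raise RuntimeError("no verified solution within the iteration budget")
```

The point of the loop is the feedback edge: the verifier's rejection message feeds back into the next generation attempt, which is what makes iteration more useful than a single one-shot prompt.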

[–] ozymandias117@lemmy.world 4 points 5 hours ago (1 children)

LLMs are a dead end, and the massive amounts of money being wasted on them will make people too scared to invest in other forms of AI.

So we are currently at a local maximum that we won't overcome in 10 years. It will take much longer before we try a different approach to creating "AGI," and the money wasted on LLMs will slow other forms of AI research, leaving us stagnating for more than 10 years.

I'm not convinced that investors would know the difference between a company trying to improve LLMs and one taking a new approach, so I don't think it will stifle investment in other forms of AI research.

I also don't think they are a dead end overall. They sure aren't likely to get to AGI, but you don't need AGI to be useful.

[–] yessikg@fedia.io 2 points 5 hours ago

LLMs will go the way of NFTs. No AGI will exist yet

[–] Canopyflyer@lemmy.world 2 points 6 hours ago

The only AI companies that will exist in 10 years will be those started by a large company that has other unrelated profit streams. Such as Microsoft, Google, Amazon etc. All others will fail. Some will be bought by the big players if they develop a unique technology. Otherwise they all go broke.

If I had to guess, there will be only two major AI/LLM companies in existence. The nature of LLMs means small companies and organizations can't scale one to be profitable.

Micron will come back to the consumer market, but will have to rebrand due to consumers' ire at them for being assholes. Same with Western Digital, although they have not "technically" left the consumer market.

The next 5 years will be spent by people trying to find SOMETHING for AI to do. Some very high-end uses in research or academics will be found. However, those will cost massive amounts of money and only be available to governments, large corporations, and academic institutions. Consumers will be left with creating images, music, and a few other parlor tricks, but there will be nothing of any true value offered. In the meantime, AI images and videos will be used to exacerbate societal/cultural issues across the globe, until the population becomes so jaded and cynical that this media loses efficacy. By that time, enormous damage will have been done.

Consumers will also be left paying for the electricity, water, and other resources that the remaining data centers will consume.

I'm currently looking heavily into installing solar on my home, with a battery backup, just because of these stupid data centers. It's only a matter of time before these things start causing issues on the grid.

[–] fizzle@quokk.au 4 points 8 hours ago

Improvement stagnates.

Venture capital availability reduces.

Mag 7 try to monetise to continue development.

Business adoption is tepid as long term heavy use reduces skills and productivity.

Some financial VC fund learns from a credible whistleblower that generative AI is not a pathway to AGI, revalues their portfolio, and enters administration.

The ensuing fallout triggers a global depression.

[–] HetareKing@piefed.social 3 points 8 hours ago

Bubble will burst, many AI companies will go under, the ones that remain will have to price themselves out of reach of most people. Lack of investor confidence will trigger a third AI winter, which will affect even actual valuable uses of machine learning models and the further development of locally-run models. People who graduated college between 2023 and 202X will have a harder time getting a job. AGI will still be a far-off dream.

[–] cronenthal@discuss.tchncs.de 7 points 13 hours ago

LLMs will be a standard part of software tooling like IDEs, and people won't talk about them much anymore.

LLMs and image/video generation will be a standard part of adult entertainment.

[–] Mr_Fish@lemmy.world 14 points 15 hours ago (1 children)

Narrow AI will get better, even faster than normal because of the research that big AI companies are doing now, but attempts at more general AI will stop being profitable.

[–] 30p87@feddit.org 2 points 12 hours ago* (last edited 12 hours ago)

General "AI" is not profitable at all, even right now. Raising money is not the same as making a profit.

[–] sturmblast@lemmy.world 2 points 10 hours ago

Bubble go burst

[–] hitmyspot@aussie.zone 1 points 10 hours ago (1 children)

Most interaction between people and computers will move from keyboard and mouse to spoken word.

[–] makingStuffForFun@lemmy.ml 2 points 10 hours ago (1 children)

I have RSI and already do this. Apart from my condition, it's a game changer.

I'm so much faster using assistive tech than I ever was with a keyboard and shortcuts (and I'm a fast mover).

I agree. Once people taste the speed of talking to the pc, the keyboard will start to fall away.

For many at least.

[–] hitmyspot@aussie.zone 1 points 1 hour ago

Yes, and right now they are limited in what control they have over the OS or other apps. Input is set up for keyboard and mouse. Just as with the shift to touch screens, the other software will be redesigned over time to accommodate the new preference, leading to more ease of use and deeper integration.

[–] raicon@lemmy.world 4 points 15 hours ago

People hate LLMs because of their unreliability, and they are right. But AI is a much more vast field.

As soon as we have more reliable, causal and general intelligence, the opinions will change.

I personally believe that humans have no clue how limited our brainpower is, so much so that there will be no AGIs, only ASIs. The same thing happened with chess bots.

[–] quediuspayu@lemmy.dbzer0.com 2 points 13 hours ago

People will eventually learn where it is useful and where it is not.

[–] jeena@piefed.jeena.net 4 points 15 hours ago (2 children)

I predict software engineers won't go away, but coders will go away.

[–] ViatorOmnium@piefed.social 3 points 7 hours ago

LLMs are shit at doing large code changes, and will always be shit at it, because they fundamentally can't reason. LLMs are good text completers, and that's their place in the IDE.

My prediction is that most well-run organizations are going to push back against coding agents soon. Look at the reports that even Amazon is now demanding that senior engineers review AI code changes and take responsibility for them, which doesn't scale now and will scale even less in the future if we train fewer coders.

[–] Mr_Fish@lemmy.world 1 points 14 hours ago

That's already a thing in some areas of programming. Block programming, where you just drag and connect blocks, is widely used, especially in game development.

[–] Iconoclast@feddit.uk 2 points 14 hours ago (1 children)

It's impossible to make predictions that far out. We could have AI models that are barely better than our current ones, or we might be extinct, with our AGI system already spreading out into the universe.

[–] Thedogdrinkscoffee@lemmy.ca 3 points 11 hours ago

"It’s Difficult to Make Predictions, Especially About the Future"

~ apocryphal