TechTakes
[–] luciole@beehaw.org 19 points 1 day ago (4 children)

I’m making a generous assumption by suggesting that “ready” is even possible

To be honest, it feels more and more like this is simply not possible, especially with the chatbots. Underneath those are LLMs, which are built by training neural networks, and for the pudding to set there absolutely has to be some emergent magic where sense spontaneously arises. Since anything that can line up words into sentences charms unsuspecting folks horribly efficiently, it's easy to be fooled into believing that's happened. But whenever, in a moment of despair, I try to get Copilot to do any sort of task, it becomes abundantly clear that it's unable to reliably respect any form of requirement or directive. It just regurgitates some word soup loosely connected to whatever I'm rambling about. LLMs have been shoehorned into an ill-fitting use case. Their sole proven usefulness so far is fraud.
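To make the "lining up words" point concrete, here's a toy bigram babbler. It's nothing like a real LLM, just an illustrative sketch, but it shows how locally plausible word sequences can fall out of raw co-occurrence statistics with zero comprehension and zero ability to honor a directive:

```python
import random
from collections import defaultdict

# Purely illustrative: a bigram Markov chain, not a real LLM.
# It only knows which word tends to follow which.
corpus = (
    "the model will respect the requirement and the model will "
    "generate the report and respect the directive"
).split()

# Count which word follows which in the corpus.
nexts = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    nexts[a].append(b)

def babble(start: str, length: int = 12) -> str:
    """Sample a plausible next word over and over, nothing more."""
    word, out = start, [start]
    for _ in range(length):
        word = random.choice(nexts.get(word) or corpus)
        out.append(word)
    return " ".join(out)

print(babble("the"))  # grammatical-ish word soup, zero understanding
```

Anything that samples "a plausible next word" over and over produces fluent-looking output, and fluency alone is no evidence of the emergent sense-making described above.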

[–] Soyweiser@awful.systems 18 points 1 day ago (3 children)

There was research showing that every linear jump in capabilities required exponentially more data fed into the models, so it seems likely they aren't going to be able to get where they want to go.
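A back-of-the-envelope sketch of the shape of that claim, assuming loss follows a power law in training data as the scaling-law literature reports. The curve and constants below are invented for illustration, not taken from any particular paper:

```python
# Toy power-law scaling: loss falls as tokens**-beta, so each fixed
# step down in loss costs a growing multiple of training data.
# Constants are made up for illustration only.

def loss(tokens: float, a: float = 10.0, beta: float = 0.3) -> float:
    """Hypothetical power-law loss curve: loss = a * tokens**-beta."""
    return a * tokens ** -beta

def tokens_needed(target_loss: float, a: float = 10.0, beta: float = 0.3) -> float:
    """Invert the power law: tokens = (a / target_loss)**(1/beta)."""
    return (a / target_loss) ** (1.0 / beta)

if __name__ == "__main__":
    # Step the target loss down in equal increments and watch
    # the data requirement balloon.
    for target in (2.0, 1.8, 1.6, 1.4, 1.2):
        print(f"loss {target:.1f} -> {tokens_needed(target):.3e} tokens")
```

Each equal step down in loss costs a bigger multiple of tokens than the last, and the requirement blows up as the curve flattens: the "linear gains need outsized data" problem in miniature.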

[–] Sidyctism2@discuss.tchncs.de -1 points 23 hours ago (1 children)

Do you have any articles on this? I've heard this claim quite a few times, but I'm wondering how they put numbers on the capabilities of those models.

[–] Soyweiser@awful.systems 1 points 8 hours ago

Sorry, nope, didn't keep a link.
