Ask Lemmy
A Fediverse community for open-ended, thought-provoking questions
Rules:
1) Be nice and have fun
Doxxing, trolling, sealioning, racism, toxicity, and dog-whistling are not welcome in AskLemmy. Remember what your mother said: if you can't say something nice, don't say anything at all. In addition, the site-wide Lemmy.world terms of service also apply here. Please familiarize yourself with them.
2) All posts must end with a '?'
This is sort of like Jeopardy: please phrase all post titles in the form of a proper question ending with a '?'.
3) No spam
Please do not flood the community with nonsense. Suspected spammers will be banned on sight. No astroturfing.
4) NSFW is okay, within reason
Just remember to tag posts with either a content warning or a [NSFW] tag. Overtly sexual posts are not allowed; please direct them to either !asklemmyafterdark@lemmy.world or !asklemmynsfw@lemmynsfw.com.
NSFW comments should be restricted to posts tagged [NSFW].
5) This is not a support community
It is not a place for 'how do I?'-type questions.
If you have any questions about the site itself or would like to report a community, please direct them to Lemmy.world Support or email info@lemmy.world. For other questions, check our partnered communities list or use the search function.
6) No US politics
Please don't post about current US politics. If you need to, try !politicaldiscussion@lemmy.world or !askusa@discuss.online.
Reminder: The terms of service apply here too.
Logo design credit goes to: tubbadu
I think everyone has taken your question and run with it on the assumption that you're talking about the AGI part, and maybe you were. But in the background of that story were functional robots that didn't (initially) have AGI; they were fairly basic, just following directions and rules. They were still far beyond what we have now, but robots don't need true AGI to do some jobs, as we've slowly been seeing them work towards. The danger is giving them more than they can actually handle and assuming that a broader capability for interaction is enough to make them work well (LLMs in everything).
So my answer is: still far away, but not as far away as AGI, unless there's some breakthrough, which none of us can predict either way. Anyone who claims to be sure about that is just talking; a breakthrough, by definition, comes unexpectedly.
I hope we don't get AGI at this point. We've shown through LLMs how careless we can be with such things, and AGI is to LLMs as nuclear weapons are to bottle rockets.