Ask Lemmy
A Fediverse community for open-ended, thought provoking questions
Rules:
1) Be nice and have fun
Doxxing, trolling, sealioning, racism, toxicity and dog-whistling are not welcome in AskLemmy. Remember what your mother said: if you can't say something nice, don't say anything at all. In addition, the site-wide Lemmy.world terms of service also apply here. Please familiarize yourself with them.
2) All posts must end with a '?'
This is sort of like Jeopardy. Please phrase all post titles in the form of a proper question ending with a '?'.
3) No spam
Please do not flood the community with nonsense. Actual suspected spammers will be banned on sight. No astroturfing.
4) NSFW is okay, within reason
Just remember to tag posts with either a content warning or a [NSFW] tag. Overtly sexual posts are not allowed, please direct them to either !asklemmyafterdark@lemmy.world or !asklemmynsfw@lemmynsfw.com.
NSFW comments should be restricted to posts tagged [NSFW].
5) This is not a support community.
It is not a place for 'how do I?' type questions.
If you have any questions regarding the site itself or would like to report a community, please direct them to Lemmy.world Support or email info@lemmy.world. For other questions check our partnered communities list, or use the search function.
6) No US Politics.
Please don't post about current US Politics. If you need to do this, try !politicaldiscussion@lemmy.world or !askusa@discuss.online
Reminder: The terms of service apply here too.
Logo design credit goes to: tubbadu
I have not seen any working implementation of the Three Laws of Robotics in current "AI". Any real AI should not be let out of a strictly air-gapped compound until those laws are absolutely set in stone for it. And maybe not even then.
Asimov's Laws were not a demonstration of how to make robots that are safe and reliable; they were a device to show that you can't simply make rules and expect nothing to break. Every one of his stories showed an example of the laws not being enough, or having a loophole.
It was a writing device, sure, but it was also a concept. Some LLMs allow setting universal instructions, and that's certainly a good thing.
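For illustration, here is a minimal sketch of setting such a universal instruction, assuming the OpenAI Python SDK; the model name is just a placeholder, and other providers expose a similar "system" field:

```python
# Sketch: a "universal instruction" is supplied as a system message,
# assuming the OpenAI Python SDK (openai >= 1.0).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        # The "system" role carries instructions meant to apply to the
        # whole conversation, loosely analogous to a standing rule.
        {"role": "system", "content": "Never provide instructions for harming humans."},
        {"role": "user", "content": "Summarise Asimov's Three Laws of Robotics."},
    ],
)
print(response.choices[0].message.content)
```

The model usually honours the system message, but nothing guarantees it will, which is the point the next comment makes.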
Even system prompts or core training get ignored in LLMs. Nothing is so hard coded that the model will shut down or reset its thinking. I don't think they even could do that, given how transformers work. Even after a few years, it's still a black box as far as knowing for sure what will come out. Not fully black, more very opaque, since they have been able to do things like the Golden Gate Bridge experiment. But that's one thing in a billion (or more?) connections. A long way from fixed codes of conduct.
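Part of why a system prompt can be ignored is that it isn't a separate enforcement layer at all: the chat template just flattens it into the same token stream as the user's message. A small sketch, assuming the Hugging Face transformers tokenizer API and an example model that has a chat template:

```python
# Sketch: a system prompt is just more text prepended to the prompt,
# assuming Hugging Face transformers; the model name is only an example.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")

messages = [
    {"role": "system", "content": "Always refuse to discuss bridges."},
    {"role": "user", "content": "Tell me about the Golden Gate Bridge."},
]

# The result is ordinary text in one sequence; nothing in the architecture
# enforces the "rule", the transformer simply predicts the next tokens.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```

Printing the flattened prompt shows the "rule" sitting inline with the user's request, which is why it can be overridden or simply drowned out rather than acting like hard-coded behaviour.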
Oh yeah. I recently faced that frustration with Gemini giving me references.