How is this different if you are running other models?
This one is a buzzword.
It's not; they just wanted to tap into the "DeepSeek is bad" narrative to get more clicks.
So a big scary warning, but nothing more than "audit your shit" to address it? I'm not an expert, and in fact just got Ollama with DeepSeek running on my machine last night... but isn't it common courtesy to at least suggest a couple of ways to mitigate things when posting a "big scary warning" like that?
Especially given DeepSeek's more open nature, and the various tidbits I've seen indicating that it's open enough to tweak and run entirely locally. So addressing the API issue seems within the realm of possibility...
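For anyone in the same boat, here's a minimal sketch of the "audit your shit" step for a local Ollama setup. It assumes Ollama's defaults (port 11434, the unauthenticated GET /api/tags endpoint that lists installed models); point it at your machine's LAN or public address to see what a stranger on the network would get back. Not a definitive scanner, just a sanity check.

```python
#!/usr/bin/env python3
"""Quick check: is an Ollama API reachable (and unauthenticated) at a given address?

Assumes Ollama's defaults: port 11434 and GET /api/tags, which lists
installed models without any authentication.
"""
import json
import sys
import urllib.error
import urllib.request


def check_ollama(host: str, port: int = 11434, timeout: float = 3.0) -> None:
    url = f"http://{host}:{port}/api/tags"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            data = json.load(resp)
    except (urllib.error.URLError, OSError) as exc:
        # Unreachable from this address -- what you want for anything non-local.
        print(f"{host}:{port} did not answer ({exc}) -- good if this is an external address")
        return
    models = [m.get("name", "?") for m in data.get("models", [])]
    print(f"WARNING: {host}:{port} answered with no auth; models exposed: {models or 'none'}")


if __name__ == "__main__":
    # e.g. python3 check_ollama.py 192.168.1.50
    check_ollama(sys.argv[1] if len(sys.argv) > 1 else "127.0.0.1")
```

If the check answers on anything other than 127.0.0.1 and you didn't intend that, keep Ollama bound to localhost (it listens only on 127.0.0.1 by default; the OLLAMA_HOST setting is what widens that) or put a firewall rule in front of port 11434.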
It's not for people running the software; it's a scare headline aimed at people who don't know anything about AI but have seen big scary Chinese DeepSeek.