this post was submitted on 02 Mar 2026
266 points (95.5% liked)

Technology


cross-posted from: https://lemmy.world/post/43768262

Some may have believed Anthropic was against AI being used for war. In fact, they just don't want it to make the final kill decision.

The argument given by Anthropic's supporters is that AI in the military was inevitable, so the company's position is a reasonable one.

all 31 comments
[–] pineapplelover@lemmy.dbzer0.com 8 points 4 hours ago (1 children)

Anthropic had only 2 rules

  1. No fully automated killings

  2. No mass surveillance

You could automate killing as long as a human approved it, but Trump still wanted to lift these restrictions

[–] Vlyn@lemmy.zip 1 points 14 minutes ago

No domestic mass surveillance. So fuck everyone not from the US.

[–] humanspiral@lemmy.ca 3 points 5 hours ago

Still, Skynet downgraded to ad-interrupted AI girlfriends may be a less scary Skynet.

[–] crunchy@lemmy.dbzer0.com 60 points 12 hours ago (1 children)

Well yeah, Anthropic's statement said as much. And they already had a contract with the DoD. This isn't a gotcha.

[–] XLE@piefed.social 35 points 12 hours ago (1 children)

When mainstream media and various celebrities have cast Anthropic as the "good guy", it's important to point this out.

[–] Serinus@lemmy.world 1 points 9 hours ago (1 children)

I mean, yeah. They're the only ones trying to hold out some reasonable constraints.

[–] XLE@piefed.social 7 points 9 hours ago* (last edited 9 hours ago) (1 children)

No. They aren't the "good guy," and they don't need to be lauded as such.

Anthropic has been willing to throw away plenty of reasonable constraints, and they always have been. The companies are disturbingly similar, and Anthropic is worse in some ways.

[–] backalleycoyote@lemmy.today 2 points 3 hours ago (1 children)

There’s enough distrust, and enough of a conspiracy theorist, in me to question whether the whole thing isn’t a good cop/bad cop publicity stunt to attract “conscientious” consumers.

“Hey all you anti-authoritarian, anti-AI, anti-surveillance types, we’re your friend; we’re the un-AI. Now be good idiots and plot your dissent on our service.”

[–] XLE@piefed.social 2 points 3 hours ago

Mark Zuckerberg, Elon Musk, Sam Altman, Sam Bankman-Fried, etc. all positioned themselves as the Cool/Good Guy who was better than all the others. Dario Amodei is just using the same playbook.

[–] Iconoclast@feddit.uk 15 points 12 hours ago (3 children)

My understanding is that it's not military use broadly that they object to but the use of their systems for the development of fully autonomous drones.

[–] XLE@piefed.social 4 points 8 hours ago

This is incorrect.

Anthropic contradicts this, showing an aggressive willingness to work with the Trump department of "War."

Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense... We have offered to work directly with the Department of War on R&D to improve the reliability of these systems

[–] JoMiran@lemmy.ml 15 points 13 hours ago

The CEO was VERY clear about this in his interview with Larry Ellison's CBS News.

https://youtu.be/MPTNHrq_4LU

[–] criticon@lemmy.ca 1 points 10 hours ago (7 children)

Is there a "good-ish" AI chatbot? I uninstalled ChatGPT and started using Claude over the weekend 🤡

I don't use it that much, maybe a couple of questions a week and some help understanding Japanese grammar

[–] breadguy@kbin.earth 4 points 6 hours ago

Check Hugging Face, it's got all the open models. Inference is pay-as-you-go, though most models are insanely cheap

[–] partial_accumen@lemmy.world 6 points 9 hours ago (1 children)

Depends on your definition of "good-ish". Do you mean:

  • performance/accuracy?
  • ethical origin?
  • ethical ongoing operation?
  • privacy/future data harvesting concerns?

Running one locally on your own hardware would likely reach "good-ish" with some sacrifices against performance/accuracy (unless you've got a lot of expensive hardware to run very large models). As far as ethical origins, there are a few small models trained on public domain/non-stolen content, but their functions are far more limited.

[–] criticon@lemmy.ca 0 points 8 hours ago (1 children)

I mean good-ish in the lesser-evil type of thing. I don't expect any of those to be 100% ethical but there are some that are a lot worse than others

I don't really have a computer capable of running a local AI. I have an i3 laptop from around 6 years ago with 12 GB of RAM and integrated graphics

[–] partial_accumen@lemmy.world 3 points 8 hours ago

I mean good-ish in the lesser-evil type of thing. I don’t expect any of those to be 100% ethical but there are some that are a lot worse than others

Ethics are subjective. "Good-ish" to you may mean you're fine if it's trained on copyrighted works as long as it wasn't done with electricity from diesel generators belching exhaust into the local Memphis atmosphere (I'm looking at you, Grok). Llama doesn't do the diesel generator thing, but it's a product of the Facebook corporation. So is that "good-ish" to you or not? I don't know. That's up to you.

It may not be fast, but your i3 laptop with 12 GB of system RAM can absolutely run a local LLM. This is where that "performance/accuracy" question I raised comes in: you won't be able to run the most common large models like GPT-5 etc., but if your needs are light, light models exist. Give this a read
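As a back-of-the-envelope sketch of why model size (not CPU speed) is the hard limit on a 12 GB machine: a quantized model's weights need roughly (parameters × bits-per-weight ÷ 8) bytes, plus runtime overhead. The 20% overhead factor below is a rough assumption of mine, not a fixed rule; real usage varies with context length and quantization format.

```python
def model_ram_gb(params_billion, bits_per_weight, overhead=1.2):
    """Rough RAM estimate for a quantized LLM: weights plus ~20%
    headroom for KV cache and runtime (assumed, not exact)."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# A 3B-parameter model at 4-bit quantization fits easily in 12 GB:
print(round(model_ram_gb(3, 4), 1))   # 1.8
# A 70B-parameter model at 4-bit does not:
print(round(model_ram_gb(70, 4), 1))  # 42.0
```

So on a 12 GB laptop, models in the roughly 1B–7B range (quantized) are the realistic target.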

[–] floquant@lemmy.dbzer0.com 6 points 9 hours ago

Depends on your definition. Anthropic has been the somewhat-less-evil player on the scene, doing a lot of research to actually understand what they're building instead of just making bold claims whenever they launch a new model, but that's still relative to the other huge AI companies. The more ethical choice would be local models, again depending on what you see as the ethical issues of AI

[–] onnekas@sopuli.xyz 2 points 9 hours ago* (last edited 9 hours ago)

I'm quite happy with Mistral's Le Chat. I haven't done much research on Mistral, but from the headlines I've read they don't seem like bad guys.

The general quality of the answers is slightly worse than ChatGPT's (IMO), but I like some features in the free tier, like agents and document libraries

[–] parzival@lemmy.org 1 points 8 hours ago

If you have a decent GPU, Ollama lets you run them on your own hardware.

[–] frongt@lemmy.zip 1 points 8 hours ago

If you can't validate the answers it gives, I would recommend not using it. It could be giving you complete nonsense in Japanese, and you'll have no way to know until, years later, someone looks at you funny when you say something and you have to explain, "I learned Japanese from ChatGPT."

[–] Iconoclast@feddit.uk 1 points 9 hours ago (1 children)

If by "good" you mean one that more reliably answers your questions correctly, then no. That's not really what these systems are good at. They're fully capable of giving a solid, accurate answer, but you can simply never trust it to be correct. They're great for chit-chat and bouncing around ideas if you're into that, but they're not an oracle.

When it comes to translating languages, that's one of the few things LLMs are actually somewhat decent at, and I don't think there's much difference between them in that regard.

[–] criticon@lemmy.ca 1 points 9 hours ago

No, I mean good in the sense of less evil. I don't ask any question to a bot where I need complete accuracy

[–] thoralf@discuss.familie-will.at 1 points 10 hours ago

They had a long-standing contract with the Pentagon. There was a weeks-long fight between said Pentagon and Anthropic over how the software is licensed.

And you're realizing just now that Claude is used by the Pentagon?