this post was submitted on 09 Mar 2026
66 points (97.1% liked)

Technology

[–] core@leminal.space 32 points 14 hours ago (2 children)

No it didn't. If there was oversight, they saw the target and didn't care. AI is not the problem; it's the scapegoat. They want to be able to shrug and point at AI, claiming there was a misclassification and that it wasn't their fault. Meanwhile, they ignore the fact that a human could have stopped the attack or double-checked the target at any point. They chose not to because they don't care. Collateral damage, wanton destruction, and civilian casualties are the goal.

[–] Kichae@lemmy.ca 17 points 14 hours ago

This is the WHOLE point of why these generative models have been pushed so hard over the past couple of years. They tested the waters to see if people would accept "it's the computer's fault" as an excuse, and then slammed on the gas.

Accountability sinks, as Dan Davies has named them, are the whole point. It's everything a slimy corporate CEO or government official has ever wanted.

[–] No_Money_Just_Change@feddit.org 2 points 10 hours ago

It can be a hundred percent their fault for not caring and still be a target selected by AI. Bombing innocents is never justified and never just something to write off as a technological error, but the fact that they are combining faulty, non-transparent software with unsupervised weapon attacks is dangerous and newsworthy.