submitted 3 weeks ago by sverit@lemmy.ml to c/technology@lemmy.world

A team of researchers from prominent universities – including SUNY Buffalo, Iowa State, UNC Charlotte, and Purdue – was able to turn an autonomous vehicle (AV) running the open-source Apollo driving platform from Chinese web giant Baidu into a deadly weapon by tricking its multi-sensor fusion system, and suggests the attack could be applied to other self-driving cars.
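The attack described above targets multi-sensor fusion (MSF), which combines estimates from LiDAR, camera, and radar so that no single faulty sensor should dominate. A minimal sketch of why a sufficiently aggressive spoof of one trusted sensor can still drag the fused estimate (this uses a hypothetical inverse-variance-weighted average for illustration, not Apollo's actual MSF algorithm):

```python
# Toy 1-D fusion of obstacle-distance estimates in metres.
# Hypothetical illustration only; not Apollo's real MSF pipeline.

def fuse(readings):
    """Inverse-variance-weighted fusion of (estimate, variance) pairs.

    Lower variance = more trusted sensor = more weight.
    """
    num = sum(x / var for x, var in readings)
    den = sum(1.0 / var for _, var in readings)
    return num / den

# All sensors roughly agree: obstacle ~10 m ahead.
honest = [(10.0, 0.5), (10.2, 1.0), (9.9, 0.8)]   # lidar, camera, radar

# Attacker spoofs the most-trusted sensor (lidar) to report 25 m,
# making the fused system badly misjudge the obstacle's position.
spoofed = [(25.0, 0.5), (10.2, 1.0), (9.9, 0.8)]

print(round(fuse(honest), 2))   # ~10.02
print(round(fuse(spoofed), 2))  # ~17.08
```

The point of the sketch: fusion averages away small disagreements, but an adversarial input that is both extreme and fed through a high-trust channel shifts the result by metres, which in the real attack translates into the vehicle misjudging obstacles.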

[-] EvilBit@lemmy.world 82 points 3 weeks ago

https://xkcd.com/1958/

TL;DR: faking out a self-driving system is always going to be possible, and so is faking out humans. But doing so is basically attempted murder, which is why the existence of an exploit like this is not interesting or new. You could also cut the brake lines or rig a bomb to it.

[-] ArbitraryValue@sh.itjust.works 19 points 3 weeks ago

People seem to hold computers to a higher standard than they hold other people performing the same task.

[-] phdepressed@sh.itjust.works 20 points 3 weeks ago

Because humans have more accountability. Also it has implications for military/police use of self-guided stuff.

[-] lolcatnip@reddthat.com 0 points 3 weeks ago

What is the purpose of accountability other than to force people to do better? If the lack of accountability doesn't stop a computer from outperforming a human, why worry about it?

[-] medgremlin@midwest.social 11 points 3 weeks ago* (last edited 3 weeks ago)

The lack of accountability means that there is nothing and no one to take responsibility when the robot/computer inevitably kills someone. A human can face legal ramifications for their actions; the companies that make these computers have shown thus far that they are exempt from such consequences.

[-] Turun@feddit.de 3 points 3 weeks ago

That is true for most current "self-driving" systems, because they are all just glorified assist features. Tesla is misleading its customers massively with its advertising, but on paper it's very clear that the car will only assist in safe conditions; the driver needs to be able to react immediately at all times and is therefore also liable.

However, Mercedes (I think it was them) has started to roll out a feature where they will actually take responsibility for any accidents that happen due to this system. For now it's restricted to nice weather and a few select roads, but the progress is there!

[-] medgremlin@midwest.social 3 points 3 weeks ago

The driverless robo-taxis are also a concern. When one of them killed someone in San Francisco there was not a clear responsible entity to charge with the crime.

[-] lolcatnip@reddthat.com 0 points 3 weeks ago* (last edited 3 weeks ago)

That is simply not true. The law has, since basically forever, held that manufacturers are liable if their product malfunctions and hurts someone while it's being operated in accordance with their instructions.

Edit: I hope all y'all who think the rule of law doesn't exist are gonna vote against the felony party.

[-] Kanzar@sh.itjust.works 5 points 3 weeks ago

Excuse us for being sceptical that businesses will actually be held accountable. We know legally they are, but will forced arbitration or delayed court proceedings mean people too poor to afford a good lawyer for long will have to fuck off?

[-] medgremlin@midwest.social 4 points 3 weeks ago

The current court cases show that the manufacturers are trying to fob off responsibility onto the owners of the vehicles by way of TOS agreements with lots of fine print. Tesla in particular is getting slammed for false advertising about the capabilities of its self-driving features while it simultaneously tries to force all legal liability onto the drivers who believed that advertising.

[-] Fedizen@lemmy.world 4 points 3 weeks ago

I think human responses vary too much: could you follow a strategy that reliably makes 50% of human drivers crash? Probably. Could you follow a strategy that reliably makes 100% of autonomous vehicles crash? Almost certainly.

[-] Imgonnatrythis@sh.itjust.works 7 points 3 weeks ago

Or if it's a Tesla you could hack someone's weather app and thus force them to drive in the rain.

[-] admin@lemmy.my-box.dev 5 points 3 weeks ago

Awwww. Why did you have to break the circlejerk? People were enjoying it!

[-] Eggyhead@kbin.run 4 points 3 weeks ago

I was so close to finishing, too. Time to look for another doomsday thread, I guess.

[-] uriel238@lemmy.blahaj.zone 4 points 3 weeks ago

More exciting would be an exploit that renders an unmoving car useless. But exploits like this absolutely will be used in cases where tire-slashing might be used, such as harassing genocidal VIPs or disrupting police services, especially if it's difficult to trace the drone to its controller.

[-] Beryl@lemmy.world 4 points 3 weeks ago* (last edited 3 weeks ago)

You don't even have to rig a bomb; a better analogy to the sensor spoofing would be to just shine a sufficiently bright light into the driver's eyes from the opposite side of the road. Things will go sideways real quick.

[-] EvilBit@lemmy.world 2 points 3 weeks ago

It’s not meant to be a perfect example. It’s a comparable principle. Subverting the self-driving like that is more or less equivalent to any other means of attempting to kill someone with their car.

[-] Beryl@lemmy.world 4 points 3 weeks ago

I don't disagree, I'm simply trying to present a somewhat less extreme (and therefore, I think, more appealing) version of your argument.

this post was submitted on 07 Jun 2024
185 points (96.5% liked)
