this post was submitted on 19 Mar 2025
1492 points (98.3% liked)


In the piece — titled "Can You Fool a Self Driving Car?" — Rober found that a Tesla car on Autopilot was fooled by a Wile E. Coyote-style wall painted to look like the road ahead of it, with the electric vehicle plowing right through it instead of stopping.

The footage was damning enough, with slow-motion clips showing the car not only crashing through the styrofoam wall but also hitting a mannequin of a child. The Tesla was also fooled by simulated rain and fog.

[–] Simulation6@sopuli.xyz 14 points 3 days ago (3 children)

If the "disengage to avoid legal consequences" feature does exist, then you would think there would be some false positives where it turns off for no apparent reason. I found some with a search, which are attributed to bad software. Owners are discussing new patches fixing some problems and introducing new ones. None of those incidents caused an accident, so maybe the owners never hit the malicious code.

[–] AA5B@lemmy.world 2 points 2 days ago* (last edited 2 days ago)

The given reason is simply that it will return control to the driver if it can't figure out what to do, and all the evidence is consistent with that. All self-driving cars have some variation of this. However, yes, it's suspicious when it disengages right when you need it most. I also don't know of any data to show whether this is a pattern or just a feature of certain well-publicized cases.

Even those false positives are entirely consistent with the AI being confused, especially since many of these scenarios get addressed by software updates. I'm not trying to deny it, just saying the evidence is not as clear as people here are claiming.
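
To make that concrete, here is a rough, purely illustrative sketch of the generic "hand control back when uncertain" idea. Every name, threshold, and structure below is an assumption made up for the example; it is not Tesla's code or any real driver-assist stack.

```python
from dataclasses import dataclass

# Hypothetical illustration only: a generic "hand back control when uncertain"
# policy. All names and thresholds are invented for this sketch.

@dataclass
class PerceptionEstimate:
    confidence: float         # 0.0 (no idea what's ahead) .. 1.0 (fully confident)
    seconds_to_hazard: float  # estimated time until the nearest detected hazard

MIN_CONFIDENCE = 0.6            # assumed threshold below which the system gives up
HANDOFF_WARNING_SECONDS = 2.0   # assumed lead time for alerting the driver

def should_disengage(estimate: PerceptionEstimate) -> bool:
    """Disengage when perception can't make sense of the scene."""
    return estimate.confidence < MIN_CONFIDENCE

def control_step(estimate: PerceptionEstimate) -> str:
    if should_disengage(estimate):
        if estimate.seconds_to_hazard < HANDOFF_WARNING_SECONDS:
            # The suspicious case from this thread: control comes back
            # right when the driver has the least time to react.
            return "alert driver and disengage immediately"
        return "alert driver, slow down, stay in lane, then disengage"
    return "continue driving autonomously"

if __name__ == "__main__":
    # A painted wall might look like open road until very late, so confidence
    # could stay high and then collapse with little time left to hand off.
    print(control_step(PerceptionEstimate(confidence=0.3, seconds_to_hazard=1.0)))
    print(control_step(PerceptionEstimate(confidence=0.9, seconds_to_hazard=10.0)))
```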

[–] FuglyDuck@lemmy.world 4 points 2 days ago (1 children)

If it randomly turns off for no apparent reason, people are going to be like "oh, that's weird" and leave it at that. Tesla certainly isn't going to admit that their code is malicious like that. At least not until the FBI is digging through their memos to show it was. And maybe not even then.

[–] AA5B@lemmy.world 1 points 2 days ago

When I tried it, the only unexpected disengagement was on the highway, but it just slowed and stayed in its lane, giving me lots of time to take over.

Thinking about it afterwards, possible reasons include (roughly sketched in code below):

  • I had cars on both sides, blocking me in. Perhaps it decided that was risky or that they occluded its vision, or perhaps one moved toward me and there was no room to avoid it
  • It was a little over a mile from my exit. Perhaps it decided it had no way to switch lanes while being blocked in
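
Written out as the kind of explicit checks a lane-keeping system might run, those guesses could look something like the sketch below. All names and conditions are assumptions invented for illustration, not any vendor's actual logic.

```python
from dataclasses import dataclass

# Hypothetical illustration only: the speculated reasons above, turned into
# explicit handoff checks. Nothing here reflects a real implementation.

@dataclass
class HighwayContext:
    boxed_in_left: bool         # a vehicle alongside in the left lane
    boxed_in_right: bool        # a vehicle alongside in the right lane
    side_vision_occluded: bool  # adjacent vehicles blocking the side view
    miles_to_exit: float
    lane_change_needed: bool    # must change lanes to make the exit

def reasons_to_hand_off(ctx: HighwayContext) -> list[str]:
    reasons = []
    if ctx.boxed_in_left and ctx.boxed_in_right:
        reasons.append("boxed in on both sides; no escape path if a neighbor drifts over")
    if ctx.side_vision_occluded:
        reasons.append("adjacent traffic occludes side vision")
    if ctx.lane_change_needed and ctx.miles_to_exit < 1.5 and (ctx.boxed_in_left or ctx.boxed_in_right):
        reasons.append("exit coming up but no room to change lanes")
    return reasons

if __name__ == "__main__":
    # Roughly the situation described above: boxed in, about a mile from the exit.
    ctx = HighwayContext(boxed_in_left=True, boxed_in_right=True,
                         side_vision_occluded=True, miles_to_exit=1.1,
                         lane_change_needed=True)
    for r in reasons_to_hand_off(ctx):
        print("hand off to driver:", r)
```
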
[–] Dultas@lemmy.world 5 points 3 days ago (1 children)

I think Mark (who made the OG video) speculated it might be the ultrasonic parking sensors detecting something and disengaging.

[–] Simulation6@sopuli.xyz 1 points 3 days ago

That does sound more reasonable.