Remote agents intervene in unusual situations
**The takeaway: As robotaxis and other AI-based technologies proliferate, so does the myth that these systems are fully autonomous. During a recent Senate hearing, industry leader Waymo provided the latest reminder that AI relies on human labor – often low-paid – more than people realize.**
Waymo's chief safety officer, Mauricio Peña, acknowledged at the hearing that when the company's robotaxis encounter unusual situations, they may request real-time input from a remote response agent, who provides guidance while the autonomous system remains in control of the vehicle. While some of these contractors work in the US, many operate from other countries, such as the Philippines.
The admission is another example of human workers, often contractors, supporting supposedly autonomous AI systems from behind the curtain. Tesla's robotaxis still rely on human monitors sitting inside each vehicle.
Contract labor has been at the heart of AI since OpenAI sparked the latest wave of investment in the technology several years ago. ChatGPT relied heavily on workers from across the world to train its underlying large language model, often for as little as $15 an hour with no benefits.
Filipino remote workers also oversaw most of the orders taken through Presto Automation's supposedly autonomous fast-food drive-thru system. Meanwhile, Amazon's ill-fated Just Walk Out technology, which claimed to handle physical purchases automatically without involving cash registers, actually relied upon workers in India to monitor customers.
Tesla's robots, the primary reason the company is discontinuing its most popular vehicles, became arguably the most notorious example of this phenomenon in late 2024. At the company's "We, Robot" event, the robots admitted that they still relied on human intervention, and a video of a unit falling over after mimicking its remote operator's motion of removing a headset went viral.
However, the senators grilling Peña at the hearing were less concerned about the use of remote workers than the fact that many were not American.
Massachusetts Senator Ed Markey called the employment of foreign remote workers "completely unacceptable." While input lag from workers operating halfway across the world presents a safety issue, lawmakers were also concerned about Waymo's connections to China and other foreign countries.
Although Tesla uses its own cars, Waymo employs vehicles from various countries, including China. The decision drew suspicions that the Alphabet-owned company is attempting to circumvent import restrictions on Chinese vehicles. When asked about the use of internet-connected Chinese cars on American roads, Peña emphasized that the autonomous driving systems are installed in the US.
Correction (Feb 10, 2026): The original version of this article described Waymo vehicles as "switching control" to remote drivers in unusual situations. Waymo says its remote fleet response agents do not directly operate vehicle controls, but instead provide real-time contextual information that the autonomous system uses while remaining in control of the vehicle. The article has been updated to clarify this distinction.
Yes, that's exactly it.
That's what the humans here are for in this case as well.
The systems can autonomously navigate a car from one waypoint to another as long as nothing unexpected happens. When something unexpected does happen, some guy in the Philippines is the one who fixes it.
There are no companies operating autonomous vehicles that do not have humans to handle unexpected conditions. By your definition there are no fully autonomous vehicles used commercially anywhere in the world.
That's not new. We don't need AI to navigate a vehicle in ideal conditions; we've had the tech to do that for a long time. Using AI when simpler, more efficient algorithms can do a better job is an irresponsible waste of resources.
Yeah, that's what these autonomous cars are built on. Development on these self-driving cars began in 2009; they are not built on AI.
They use sensors, and the sensors are noisy and can be covered in dirt or debris. All of the higher-level decisions are built on sensor data that may not be reliable, so the system combines the sensor data with the logic used to make the decision to calculate a confidence value (i.e. how likely it is that the model it has created from the sensor data matches the real world).
In situations with a lot of unknowns, like unfamiliar traffic conditions, the confidence score drops.
These tech workers exist so that when the confidence level of the autonomous systems (which, again, are pre-AI) gets too low, a person, who is more intelligent than a bunch of sensors and SAT solvers, looks at the sensor data and either fixes the waypoints generated by the autonomous systems or marks objects/hazards; the autonomous systems then calculate a higher confidence and resume operation.
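To make that flow concrete, here is a minimal sketch in Python of a confidence-gated fallback of the kind described above. Every name, the threshold, and the stub planner are assumptions for illustration only; this is not Waymo's code or API.

```python
# A minimal sketch of a confidence-gated remote-assist loop.
# All names, the threshold, and the stub planner are hypothetical.
from dataclasses import dataclass, field

CONFIDENCE_THRESHOLD = 0.85  # assumed cutoff below which a human is consulted


@dataclass
class Plan:
    waypoints: list = field(default_factory=list)  # proposed path through the scene
    hazards: list = field(default_factory=list)    # marked objects/hazards
    confidence: float = 0.0                        # planner's self-assessed confidence


def plan_route(sensor_data: dict) -> Plan:
    """Stub for the onboard (classically programmed) planner."""
    # Noisy or ambiguous sensor data would lower this score in a real system.
    score = 0.4 if sensor_data.get("unfamiliar_scene") else 0.95
    return Plan(waypoints=["A", "B"], confidence=score)


def request_remote_assist(sensor_data: dict, plan: Plan) -> Plan:
    """Stub for a human agent who corrects waypoints or marks hazards."""
    plan.hazards.append("construction_cone")  # human annotates the scene
    plan.confidence = 0.9                     # planner re-scores with the added context
    return plan


def drive_step(sensor_data: dict) -> Plan:
    plan = plan_route(sensor_data)
    if plan.confidence < CONFIDENCE_THRESHOLD:
        # The vehicle stays in control; the human only supplies context.
        plan = request_remote_assist(sensor_data, plan)
    return plan


print(drive_step({"unfamiliar_scene": True}).confidence)  # 0.9 after human input
```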
The only area of autonomous cars where neural networks could be better than the human-programmed systems is object detection, but those NN systems are typically run in parallel with the classic computer vision algorithms in order to achieve higher confidence. Routing is done through ML101 pathfinding algorithms, every car manufacturer programs its own ECS, the sensors are largely lidar, which uses time of flight rather than machine-interpreted video (like Tesla's), and acceleration and braking response is human-programmed.
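For the routing piece, the "ML101 pathfinding" mentioned above is the sort of thing covered in an introductory algorithms course. A minimal A* search over a grid looks like this; it is purely illustrative and not any manufacturer's routing stack.

```python
# Minimal A* grid search, the kind of classic pathfinding algorithm referred to above.
import heapq


def a_star(grid, start, goal):
    """grid: 2D list where 0 = free cell, 1 = obstacle. Returns a list of cells or None."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):  # Manhattan-distance heuristic
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), 0, start, [start])]  # (f, g, cell, path)
    seen = set()
    while open_set:
        f, g, cell, path = heapq.heappop(open_set)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(open_set, (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]))
    return None  # no route within the mapped area


grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))  # routes around the blocked cells in row 1
```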
There is nothing 'AI' happening here; this is a safety feature which provides a backstop for fallible, classically programmed autonomous systems.
The car isn't driven by 'AI', and there are no AI models, and certainly no LLMs like ClaudeAI, that can reliably make the kind of high-level decisions made by the humans in question here; nor are the humans driving the cars.
Do you think it's fair to call something "fully autonomous" if its operation occasionally requires human control?
Are you fully autonomous if you ask for advice?
Yes, clearly. But if you park your car in the middle of the road until someone shows up to give you advice you should probably have your license revoked.
You're making a false analogy, though. These supposedly fully autonomous vehicles aren't asking for help. They simply can't function in certain situations. Humans have agency to decide to ask for help, or try to figure it out themselves.
Well, if your entire point of contention is the word 'fully' then I think I'll leave that between you and Waymo.
That's fair. Waymo is the one presenting themselves as a fully autonomous car company while concealing how much is still being managed by humans. I'm honestly not sure what your angle is.
I think you're quibbling over semantics. Fully autonomous does not mean infallible.
Every company that operates autonomous vehicles has human staff who monitor and correct errors in the fleet. This isn't a new and secret behavior; it is an industry-standard practice. No company operates autonomous vehicles without supervision, none.
I'm not sure where you got the idea that fully autonomous vehicles operate completely unsupervised and without error; that has never been the case in any commercial autonomous system. Even the cars in the original DARPA contest that spawned autonomous vehicles were actively monitored by humans. It is autonomous because it can drive without a human operator. The fact that it cannot handle every possible situation doesn't change the fact that they're driving autonomously.
At no point have we been talking about these vehicles being infallible, you're shifting the goal posts.
I'm also not sure what point you're trying to make by saying they used human intervention when they competed in the DARPA competition 20 years ago. Of course they did, it's not like technology has regressed since then.
According to the SAE standard, Waymo is NOT fully autonomous. According to Waymo, they are, and no one knew humans were intervening at the level they still are until Waymo execs had to testify under oath before Congress.
Human intervention doesn't mean that the cars are not autonomous; that is the point. A vehicle able to operate without a driver is autonomous.
Every autonomous system that is intended to be operated commercially has humans which monitor the autonomous fleet in order to resolve edge cases that the autonomous systems cannot handle.
You mean the SAE J3016 standard?
According to that standard, Waymo cars are SAE Level 4 Autonomous Driving Systems.
There is no SAE standard that differentiates between 'fully' autonomous and 'not fully' autonomous. A level 4 car is fully autonomous within its Operational Design Domain. A level 5 car is fully autonomous in all Operational Design Domains.
The reason Waymo cars are level 4 instead of level 5 is that they restrict their cars' Operational Design Domain so that they do not operate in heavy rain, fog, or at interstate speeds.
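To put the ODD distinction in concrete terms, here is a toy check. The structure and the specific limits are invented for illustration; only the broad restrictions (heavy rain, fog, interstate speeds) come from the point above.

```python
# Toy illustration of an Operational Design Domain (ODD) check.
# The numeric limits are invented, not Waymo's published ODD.
from dataclasses import dataclass


@dataclass
class Conditions:
    rain_mm_per_hr: float
    visibility_m: float
    requested_speed_mph: float


def within_odd(c: Conditions) -> bool:
    """A Level 4 system only engages inside its ODD; a Level 5 system would need no such check."""
    if c.rain_mm_per_hr > 10:        # heavy rain
        return False
    if c.visibility_m < 200:         # fog
        return False
    if c.requested_speed_mph > 50:   # interstate speeds
        return False
    return True


print(within_odd(Conditions(rain_mm_per_hr=2, visibility_m=1000, requested_speed_mph=35)))   # True
print(within_odd(Conditions(rain_mm_per_hr=15, visibility_m=1000, requested_speed_mph=35)))  # False: out of ODD
```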
Here is the SAE J3016 chart:
Source?
Here's a paper from Waymo, from two years ago, where they correctly refer to their systems using the language of the SAE standard:
https://waymo.com/research/comparison-of-waymo-rider-only-crash-data-to-human/#%3A%7E%3Atext=SAE+level+4+automated+driving+system
So, in summary:
I'm going to give you the benefit of the doubt and assume you were in a rush, rather than intentionally cherry picking evidence.
Waymo's FAQ on their website states:
Notice there's no mention of remote control by human operators.
SAE refers to level 4 as "high automation" and level 5 as "full automation." I think it's clear that "full automation" is synonymous with "fully autonomous."
The page at the URL on the chart you posted uses that terminology.
https://www.sae.org/standards/j3016_202104-taxonomy-definitions-terms-related-driving-automation-systems-road-motor-vehicles