"Tesla is in the kiddie pool"

A Tesla car on June 11, 2025. (Photo by Jakub Porzycki via Getty Images)

Elon Musk claimed last week that Tesla had completed a successful "launch" of its long-awaited self-driving robotaxi product. Over the last eight days, a small fleet of about 20 Tesla Model Ys — piloted, in part, by the company's so-called "Full Self Driving" software — has been ferrying passengers around a limited area of Austin, Texas. But Bryant Walker Smith, a leading scholar on self-driving technology and regulatory law at the University of South Carolina, disputed the notion that Tesla is operating an autonomous car service.

Instead, he said Tesla's operations in Austin are a heavily supervised demonstration, requiring safety drivers in the vehicle and teleoperators who can control the cars remotely. "Right now, Tesla is in the kiddie pool, splashing around a lot, and everyone is asking, 'So, when are they going to swim in the Olympics?'" said Smith, who is in Beijing researching self-driving technology in China. "That's simply the wrong question at this point."

In an interview with Musk Watch, Smith discussed the misleading branding that Tesla has used to market its driver-assistance technology, how public discourse about Tesla has distracted from more meaningful advances achieved by its industry rival Waymo, and the state of self-driving regulations at the federal level and in key markets, such as Texas and California. He also explained how state and federal oversight often lags behind the rapidly shifting autonomous driving industry, particularly when it comes to Tesla.

This interview has been edited for clarity and length.

Musk Watch: What are your general thoughts on the Tesla robotaxi rollout so far?

Bryant Walker Smith: Yeah, there is no rollout. I think this is a fundamental misnomer and misconception. Tesla is doing a demonstration with safety drivers, and there is a huge difference between an automated vehicle that is deployed without a safety driver and an automated vehicle that is tested or demoed with a safety driver. It's the difference between climbing up a mountain without any rope and harness — free soloing — and climbing up with all kinds of safety gear that will save you if something goes wrong.

So, it's not just the competence of the system that needs to be much higher. The confidence of the developer needs to be much, much higher for a true deployment. And this is not one. The evidence so far shows why it's not one. Tesla is, of course, correct to have some kind of supervision. The form of supervision that it has chosen is one that I think is suited more for optics than for safety. The reason why someone is not in the driver's seat is so Tesla can say that there's no one in the driver's seat. But when you have an employee who is actively supervising and interacting with the vehicle to correct its real-time shortcomings, and when you potentially have some degree of not just remote assistance, but possibly remote driving, that is a heavily supervised test and demo.

Any individual ride can tell us that a system is not safe, but no individual ride can tell us that a system is safe, so the fact that many of the rides have been accomplished — potentially — without intervention by a human does not show that the system is mature. To the contrary, the fact that there have been any rides at this point that have manifested issues from the awkward to the dangerous is evidence that it's very much an immature system. I don't want to suggest that other actual deployments have been without mistake. Certainly, Waymo in its infancy, and to some extent even now, will have issues arise. But so much of the difference lies in what the companies know behind the scenes; the extent of error that can be predicted, which is unknown [to the public], and what the confidence is in the system's ability to deal with those situations. That's really a safety case, and it's not clear to me that Tesla has a safety case or has one that they should be confident in.

Tesla is marketing its system as automated, even though human supervision and intervention are still necessary. Do you see legal risks or regulatory confusion arising from this kind of branding, especially when it comes to a commercial service?

Absolutely. The terms Tesla has used and the way that Tesla has directly and indirectly marketed its so-called FSD are very irresponsible. I can call my umbrella a parachute, but that doesn't make it a parachute. It just makes it a really dangerous umbrella. And I think so much of the hype around every development with respect to Tesla has broadly confused what automated driving actually is and diminished the very real accomplishments of other companies, particularly Waymo, with respect to automated driving, because there's just so much hype and therefore so much disillusionment and so much confusion. There are real robotaxis on real roads carrying real people right now, and none of them is a Tesla. And it continues to baffle me how eight years after the beginning of a series of "next year" promises, everything that Tesla does still gets overwhelming coverage, while the activities of other companies, particularly Waymo, but not limited to Waymo, receive so much less attention. That's why, when Waymo actually did remove its safety drivers many years ago, there was very little attention, because it just felt like, oh, just another announcement, just another step, rather than a really big deal.

If one of Tesla's robotaxis is involved in a crash, what are the most plausible liability scenarios under current U.S. tort law? How do you see that playing out?

It's a fair question, and yet I always wonder, why do we care? You know, maybe talking about liability is a veneer cover for the human desire to talk about blood and guts and gore. But we can't say that. So we have to talk about liability. If anybody is concerned about getting hit by a Tesla, they should be terrified about driving generally. There are going to be 100 people who die on U.S. roads today, and I certainly hope that none of those is because of Tesla's robotaxi. But it's very unlikely that that will be the major danger to anyone out here right now, unless maybe you're in a small part of Austin, unfortunately. And if people are concerned about driving, driver assistance systems, or automated driving systems? Well, they should be just terrified about driving generally. And I think this gets missed. Our roads are really, really dangerous, and they don't have to be dangerous, because other countries have shown us that there are ways to dramatically reduce deaths and injuries on roads, and the American response has largely been a shrug.

That's really the first point from a public perspective, just be scared more generally, and be much more careful as a driver more generally. If someone were to be hit by one of these [Tesla] vehicles and were to survive, they would probably be in a better financial position than if they were hit by any of the other hundreds of millions of vehicles out there on the road right now, because many human drivers and owners, if they're insured at all, could have insurance for $25,000 and perhaps no other real assets to recover from. If Tesla is involved in a crash with this system, and there is any fault on the part of Tesla's vehicle, driver, or operations, Tesla may well settle very quickly with an NDA, and the victim could likely see more than the $25,000 that an insurer might pay in a different case.

From the perspective of the court system or academics, the issues aren't particularly interesting. Tesla is driving these vehicles. It's driving them through a combination of its machine systems and its human systems. It's acting through its hardware and software. It's acting through its employees in the vehicle. It's acting through its employees who might be providing some kind of remote assistance or remote driving outside of the vehicle, and it's acting through all of the communication systems that link those components. So while there are lots of different legal theories that might come into play, and each of those depends on the particular state and the particular facts, generally, it all comes down to, yeah, if the Tesla does something wrong, Tesla is going to be liable, and that includes Tesla's hardware, software, and employees.

From the business perspective, a single fatality clearly has not sunk the company, even when such incidents point to what, in my view, is very irresponsible design. Automakers are sued all the time, and they often settle, and sometimes they have verdicts, and sometimes those are negotiated down. Sometimes there are appeals, right? There's a whole established process. Tesla is a huge company with a huge market value, and it can pay those judgments. I suppose there could be a truly egregious case where you have a jury that is absolutely outraged by Tesla's decision to put a manifestly immature system on the roads with a poorly designed system of supervision. But when we read about massive verdicts, that's just what the jury comes back with, and rarely is that the final figure. If somebody sues, they will put a number in the lawsuit, and they'll say, I'm asking for $100 billion, and the headlines will read, "Person sues for $100 billion." That's meaningless, that has no relationship to reality, and then there might be a verdict, and that verdict will be for $10 million, and that verdict might get reported a little less prominently, but then that will get appealed, and the appeals court will slash it to a million dollars, and that will get even less coverage. And then at some point along that process, the parties will settle for $500,000, and that will never get reported. And so in the public mind, we think $100 billion, and that's just simply not reality.

Are current U.S. laws adequate for safeguarding the public as self-driving technology becomes more widespread?

If we cared about traffic safety in this country, we would have a very different set of rules. But we don't care, and we don't have them, and that is a very unfortunate reality.

If we had a robust system, then we would have vehicles inspected, and we would not have vehicles with bald tires and bad brakes everywhere. And we would have drivers who were trained and retrained to be safe, and we would have enforcement, including automated enforcement, to ensure that people are not speeding, and vehicle systems to ensure that people are not drunk. And we would have roads that are designed in a way that reduces the severity of conflicts, and systems that ensure that pedestrians and bicyclists are treated as humans with rights, rather than as roadkill.
