Driverless cars like Waymo won’t be safer because of AI’s limits | Opinion
Dallas, like Austin before it, is California-ing your Texas. Not with income taxes or Hollyweird, but with Big Tech proliferation set to transform how we move by introducing new fleets of autonomous robotaxis through the expansion of Waymo into the 214.
The Alphabet subsidiary, a corporate sibling of Google, said in July that it intends to have fully driverless cars, powered by artificial intelligence (albeit monitored by humans in case of an emergency), on Dallas streets and highways in coming weeks.
Forgive my bias, but the notion of driverless vehicles becoming common makes me itchy. I am admittedly skeptical about the efficacy and benefit of artificial intelligence in many of its forms, from the large language models mimicking Black identity and hallucinating wrong answers, to the antisocial smart glasses making it easier than ever to mock women who don’t want to give you their number. But the tech has arrived, and the rides are coming soon. I see a few possible paths forward.
The first is that Waymo isn’t appreciably safer than human drivers because the AI operating it, no matter how sophisticated, will always be vulnerable to meltdowns. The deep learning models guide your 3,000-pound chauffeur to its destination by making calculated bets based on sophisticated pattern recognition that its choices from point A to point B won’t kill you or somebody else. But when the model fails to pick up the complexities of human movement or the kinds of objects it may encounter, you get Waymo roadkill.
Karen Hao described this problem in “Empire of AI,” her critical biography of OpenAI CEO Sam Altman and the perils of the rapidly expanding artificial intelligence industry.
“A deep learning model might recognize pedestrians only by the crosswalks underneath them and fail to register a person who is jaywalking,” Hao writes of the logic powering burgeoning autonomous vehicle (AV) technology. “It might learn to associate a stop sign with being on the side of the road and miss the same sign extended on the side of a school bus or being held by a crossing guard.”
Hao’s examples aren’t just thought exercises. In Tempe, Arizona, a self-driving Uber killed Elaine Herzberg while she was pushing a bike with shopping bags outside the designated crosswalk. Where you and I would stop and maybe annoyingly honk our horns, the driverless car simply didn’t register Herzberg as a person.
We might experience firsthand what the National Highway Traffic Safety Administration found when assessing 956 Tesla Autopilot crashes: A “critical safety gap” had led to 13 fatalities. We could see the problems quickly and change course, but only after learning a painful lesson.
But notably, these are different companies that, for all we know, may have inferior safety mechanisms.
Another possibility is that while self-driving cars are less safe in the aggregate, Waymo has, relatively speaking, cracked the code. That wouldn’t exactly be a great outcome, either.
I wouldn’t be surprised if Waymos are safer than the average driver, as suggested by some of the peer-reviewed studies that the company cites. I find it particularly plausible because I know who I am. Still a fairly virginal driver from my upbringing in New York City, I compare driving to learning a second language instead of growing up bilingual. I can do it, but not without rehearsing my grammar before every turn, lane change and parallel park.
When I’m back in Harlem, other than the occasional wedding or trip to Six Flags, I hardly ever need to drive — not when every retail, entertainment and work need is a quick stroll or train ride away. So, I arrived at car ownership reluctantly, a necessary condition for getting back and forth in North Texas. Conservative to a fault behind the wheel, I am almost certainly worse than the average driver. I might be destroying the curve.
For someone like me, Waymo is, in theory, a relief — I’d like road decisions literally out of my hands.
But at what price?
What if subsidizing robotaxi rides (and obscuring their real cost) severs our state’s already precarious investment in healthy, cost-effective and sustainable alternatives like reliable public transit, safe bike routes, or even a walk around the block? Where would neglecting those options leave people who can’t afford the still-expensive personal taxi?
As Waymo rolls out teen accounts, do we want to further entrench the next generation’s dependence on the automobile? We tell kids to get off their phones and touch grass. Our actions betray our words.
Autonomous vehicles literalize the issues with orienting cities around cars instead of people. And even though Waymo’s cars are electric, your ride probably isn’t. Until that changes, driverless vehicles will congest your streets with car traffic, and the overwhelming majority of motorists, further incentivized by car-centered infrastructures, will continue congesting your lungs with exhaust.
I predict Waymo will land somewhere between the two, which is to say, the worst of both worlds. There will be horrific accidents that make it unclear whether driverless driving is a net positive for safety on its own terms, but the initial convenience paired with intense lobbying of our electeds will allow Waymo and its competitors to survive any blowback.
But my grim forecast isn’t a prophecy, nor is it algorithmically predetermined. We’re still in the driver’s seat.
This story was originally published August 13, 2025 at 4:24 AM with the headline "Driverless cars like Waymo won’t be safer because of AI’s limits | Opinion."