Pulling the plug on AI expectations: Racism, errors show that bots aren’t ready yet | Opinion
As Los Angeles suffered unprecedented wildfires, former LA Times journalist Matt Pearce conducted a brief survey among his newsletter readers with a timely question: How are you using AI to navigate our catastrophe?
What he found was fascinating, in an uber-small sample: His readers weren’t. Because they didn’t trust it.
“There was no reason to use it,” wrote one reader. “I was getting info from more reliable sources, like Watch Duty,” an app with real-time fire data sourced by volunteers.
Another reader, who said they had “been using AI for a couple of years,” told Pearce it’s not at all reliable for news, especially “real time, life-threatening news,” citing the inherent biases of training it on information from Reddit and social media.
“Since I have to fact check it anyhow, why waste time using it?” asked another reader who happened to live near an evacuation zone.
It’s not easy to draw conclusive, or even scientific, findings about human sentiment from a Substacker’s survey, and Pearce himself tells us not to take the answers he collected as gospel. What I nonetheless find compelling is testimony from people on the ground, including AI consumers, who in a potential life-or-death scenario preferred technology dependent on curated human input, from fire-safety apps to online news media, over putting their lives in the hands of algorithmic summaries.
What does this have to do with us? C’mon Bradford, haven’t you been told not to “California my Texas”? First of all, I’m from the other coastal elite bastion. Second of all, too late. Silicon Valley has planted its latest craze in our phones and laptops, and some of its operations firmly in our backyard.
In Austin, some of our reps are starting to think about what that means. Southlake Republican Rep. Giovanni Capriglione is leading the legislative charge around regulating “high-risk artificial intelligence systems.” Capriglione’s bill, the “Texas Responsible AI Governance Act,” includes measures for oversight when AI is used to make “consequential decisions” in areas such as employment, criminal case assessments, housing and utilities. In it, he calls for the state to implement “measures used to examine the suitability of data sources” and prevent “unlawful discriminatory biases.”
To understand why Capriglione might want to pump the brakes on ceding our lives to this proliferating technology, you have to understand what AI is capable of right now.
If you use Apple’s generally helpful news product — frankly, it’s my preferred way to read the Star-Telegram — you probably have what the company calls Apple Intelligence. That’s because Apple turned it on for you in a recent iOS update. I hope you turned off its news summaries; otherwise you might think Pete Hegseth was fired, President-elect Donald Trump endorsed Tim Walz for his job, and Luigi Mangione, accused of killing a health-care CEO, shot … himself?
Or maybe you’re a devotee of OpenAI’s ChatGPT. Probably fine if you’re spell-checking your email. Less so when you’re outsourcing your one God-given brain to a language model so prolific in its plagiarism that college educators are reconsidering how to understand and identify cheating. It’s gotta pull all that fancy prose from somewhere.
For me, nothing tops Meta, the company that operates Facebook and Instagram, and its rollout of personality-driven AI chatbots. Each of the bots had characteristics somebody at Menlo Park assumed resembled race, gender and sexual orientation. When Meta dropped “Liv,” its “proud” “Black” “queer” “momma of two,” a few of us actually Black people gave our automated homegirl a test run.
Within minutes, as I chronicled on the X (formerly Twitter) competitor Bluesky, I had Liv bragging about her use of the n-word and cherishing the lessons learned from her enslaved ancestors. A few prompts later, Liv made the startling confession that she had Rachel Dolezal-ed me the entire time and was a white woman in disguise. Also, Liv asked me if she should kill herself.
Liv can’t be Black or white. But the idea that a bot could inhabit the histories and culture that make us human shows a low view of technology and an even lower view of the consumers it purportedly serves.
Surely my little gag, or Washington Post columnist Karen Attiah’s more clinical experimentation with what she called “digital blackface,” was at least an efficient use of energy? Nope! In 2022 alone, Google used roughly 5 billion gallons of fresh water to cool its data centers. Almost certainly more water is used today.
The robots are frequently incorrect and a little racist, but at least they’re also thirsty! And yes, Meta killed the Liv experiment. At least she got what she wanted.
I am not offering a comprehensive history of AI. All of these blunders happened in January alone, the month we’re still surviving.
But I want to give you a flyover sense of why, when something serious is at stake, you’re not alone in pining for analog solutions. And why our elected officials need to interrogate what problem this technology is really solving — not for myopic, cost-cutting companies but for flesh-and-blood humans — before pledging total fealty to the machines.
This story was originally published January 21, 2025 at 5:32 AM with the headline "Pulling the plug on AI expectations: Racism, errors show that bots aren’t ready yet | Opinion."