People are very polarized about the social media addiction lawsuits
Some are hailing the lawsuits against Meta and YouTube as victories. Others say that the lawsuits are helping groups that want to surveil marginalized voices. What is going on?
“There is no such thing as ‘social media addiction,’” independent technology journalist Taylor Lorenz posted on my X feed the other day.
Her post was in reference to two lawsuit verdicts handed down last month, in which juries decided that Meta must pay $375 million in damages in New Mexico for violations of consumer safety standards, and $6 million in damages to a California woman whose legal team argued that her use of social media as a small child led to her anxiety and later mental health problems. Outside of America, in some countries, this debate is over. Last year, Australia banned social media for people under 16. Denmark is about to introduce a similar ban for kids under 15.
Meta’s attorneys, who will fight on appeal, attribute the California plaintiff’s mental health struggles to her mother’s emotional and physical abuse and neglect.
Now, big tech’s lawyers have found an unlikely ally: privacy and free speech activists who are a growing voice in the discourse about social media and children. What has emerged is a rift among people who might normally be uniform in their criticism or skepticism of big tech.
Beyond these lawsuits, and complicating things further, Congress is considering a bill called the Kids Online Safety Act, which suggests that identity verification be part of online access. Opponents, including digital freedom groups and progressives, are deeply concerned that such a bill would become a Trojan horse for surveillance and, ultimately, censorship of speech related to abortion, LGBTQ issues, human rights, sex ed, and climate change. (It’s also worth noting that immunocompromised people, including Lorenz, speak fervently and emotionally about connecting over social media, given that their disabilities can make it harder to meet people in person.)
While there’s lots of research in this realm, it’s transcended the logical to become a viscerally emotional subject for people on both sides of the debate. There are parents who believe that they’ve lost their children to the Wild West of social media platforms (sometimes, literally)—and advocates for social media who claim that social media is in fact social, a realm of expression for the oppressed, and a conduit for critical information.
Wherever you land on the debate, the efforts of voices like Lorenz are changing minds. She has been one of the most vocal and visible critics of the lawsuits, and her videos and posts are far-reaching—landing on the feeds of people who are on the fence about whether these lawsuits are a net positive for children, or rather a pathway to surveillance and censorship of speech online.
The other week, a friend who is a content creator came to me with a question: should she, on her TikTok, promote that lawyers are looking to represent minors who were harmed by the social media platforms? My friend is an advocate for online safety for women and children, but had seen Lorenz’s videos warning about the lawsuits.
While I know my friend has a tremendous amount of integrity and care when it comes to the safety of children and marginalized groups online, she wondered if there was more nuance to the conversation, especially as it relates to age-verification laws that could impinge on the privacy of marginalized Internet users hoping to share sensitive information or connect with one another.
It’s true that up-and-coming generations will inevitably ingest news and information—and also likely discover many social connections—on these social media platforms. And calling out the hidden motives of right-wing groups backing KOSA or other child-safety standards is important to the public interest. Yet I’d be loath to say that social media entities care about free speech or privacy anywhere close to the degree that free speech and privacy activists do.
AI companies are funding child safety groups and proposing policy ideas—not necessarily to be the “good guy,” but to appear that way and preempt regulation. Yet it’s not a form of “moral panic” to say that big social media companies have, by design, been proven in countless ways to be masterful at psychologically manipulating even adults with fully formed brains.
Platforms can connect activist teenagers to other like-minded, socially conscious teenagers, or allow people to share resources about reproductive rights—but they can also connect young children with provably malleable brains to unsafe people, and expose them to information they’re not yet ready to process. Even as we stay alert to ulterior censorship or surveillance goals from political groups, we need to approach big tech’s arguments about children with its profit motivations in mind.
What we’re reading…
Unsurprisingly, Meta has started removing ads from lawyers looking for clients who have been harmed by social media while they were under age 18.
Men Are Buying Hacking Tools to Use Against Their Wives and Friends—in Telegram groups, husbands and boyfriends are sharing notes on how to use spyware to harass friends, wives and girlfriends, and former partners.
Workers at Meta, Amazon, and Microsoft are coming together in Seattle to protect their co-workers against ICE raids—a compelling example of tech staffers organizing despite industry-wide layoffs and efforts by management to discourage workers from discussing politics.
Gen Z is feeling increasingly anxious and angry about A.I., according to a recent survey. Relatedly, some argue that they are turning to prediction markets given the dissonance between the stock market and the actual economy.
Amazon is justifying roughly $200 billion in capital expenditures this year, with much of it going toward AI infrastructure like data centers, chips, and networking equipment.
Meanwhile, Anthropic says it is afraid to release its new model Mythos, fearing that hackers could leverage it to exploit software vulnerabilities.
A wild essay about a woman discovering her boyfriend’s uncertainties about their relationship in his apparent confidant, ChatGPT: I Stumbled Across My Boyfriend’s ChatGPT and It Ended Our Relationship
Penalties stack up as AI spreads through the legal system—lawyers are paying sanctions for AI-generated errors.
‘Hacks’ Star Hannah Einbinder Blasts AI Creators as ‘Losers’: ‘You Guys Suck… I Want to Put Your Head in the Toilet and Flush’
Patients and parents of patients are using A.I. chatbots to fight surprise medical bills.
The suspicious death of a former OpenAI employee has led to questions about whether there are enough protections for whistleblowers and insiders with information in the public interest.