New Court Filings Allege Depraved Internal Communications at Meta and Snapchat
Social media platforms are confronting allegations that they're intentionally addictive and designed to keep teenage users compulsively scrolling.
The company line from social media platforms like Facebook, Instagram, Snapchat, TikTok, and YouTube is that they care deeply about the mental health and safety of their users. These platforms claim to be extra mindful of their younger users; they all boast about parental controls and other safety tools, which they say are intended to limit underage exposure to explicit content and prevent sexual harassment and bullying.
But the effectiveness of the safety tools is an entirely different issue, which is why Meta CEO Mark Zuckerberg and other tech oligarchs are routinely summoned to testify before Congress. You’ve probably seen clips from the hearings: A bipartisan group of senators admonishes a tech company CEO; the CEO furrows their brow and promises they’re working hard and will do better. Some additional tweaks and safety features are eventually, slowly, rolled out. (Historically, the new features are “opt-in,” and only adopted by a tiny fraction of users.)
This song-and-dance has been going on for a while, with little to show for it. Two separate lawsuits that are part of multidistrict litigation against the major social media platforms allege the reason for the lack of safety advancements and protections is simple: engagement is the priority over everything else, an intentional decision that’s most harmful to young people.
Both lawsuits also allege that the major social media platforms are purposely addictive; they’re designed to keep users (especially teenagers) compulsively scrolling, which is why their safety features are often “opt-in” and responsive to public pressure, as opposed to proactive and effective.
I’ll avoid getting into the intricacies of each lawsuit, or a longer explanation of how multidistrict litigation works, because it will get very confusing very quickly. You just need to know that these two lawsuits are premised on similar critiques, and that recent, unsealed filings in both cases include a bounty of internal messages and documents about Meta’s Facebook and Instagram, Google’s YouTube, as well as TikTok and Snapchat.
I sifted through the filings—one is 5,807 pages, and the other is a shorter 235-page brief—and compiled a laundry list of the most newsworthy tidbits. I’m choosing to focus on Meta (Facebook and Instagram) and Snapchat, because the allegations against them are, in my opinion, more egregious than what’s presented about TikTok and YouTube. Before I dive in, a few disclaimers and points of clarity:
Time magazine was the first to report on the 235-page brief, highlighting disturbing allegations against Meta. Politico and Reuters also published reports over the weekend.
A Meta spokesperson seems to have issued the same blanket statement to Time, Reuters, and Politico, which I’m quoting in full here: “We strongly disagree with these allegations, which rely on cherry-picked quotes and misinformed opinions in an attempt to present a deliberately misleading picture. The full record will show that for over a decade, we have listened to parents, researched issues that matter most, and made real changes to protect teens—like introducing Teen Accounts with built-in protections and providing parents with controls to manage their teens’ experiences. We’re proud of the progress we’ve made and we stand by our record.”
I see no reason to doubt the authenticity and accuracy of these filings, but in fairness, I will note they both come from the plaintiffs’ side.
For simplicity’s sake and for maximum accuracy, when I write “Filing 1” I am referring to the unsealed document that is 5,807 pages. “Filing 2” is referencing the shorter, 235-page brief. There’s some overlap, as far as the evidence presented between the two filings—it seems they obtained portions of the same communications via discovery.
I am *not* Meta CMO Alex Schultz.
Okay! That’s it. Diving in.
Meta Allegations
It’s a frighteningly close competition, but the most grotesque allegation out of everything I reviewed comes from Instagram’s former head of safety and well-being, Vaishnavi Jayakumar, in Filing 2. She testified that Meta had a “17x” strike policy for accounts engaged in the “trafficking of humans for sex.” According to Jayakumar, “...You could incur 16 violations for prostitution and sexual solicitation, and upon the 17th violation, your account would be suspended.” How they allegedly arrived at a number that high is a consideration I don’t really want to think about.
Also gross: allegations that Meta, and Zuckerberg, wanted to recruit more preteens and teens to their platforms. In a 2017 chat between two anonymous Meta employees, one sarcastically wrote, “Oh good, we’re going after <13 year olds now?” The other employee responded, “[Zuckerberg] has been talking about that for a while.”
Conversations about acquiring new, active Instagram users were allegedly centered on teenagers. Filing 2 cites a number of examples:
Instagram researcher: “Teens are often the ones that other members of the household learn about not only Instagram, but social media in general.”
Former Instagram CEO Kevin Systrom: “Mark [Zuckerberg] is suggesting that teen time spent be our top goal in 2017.”
In February 2016, Zuckerberg allegedly sent an email to other Facebook higher-ups about Facebook Live, expressing his concern that too many adults would discover the new product and then limit it for their kids: “If we tell teens’ parents and teachers about their live videos, that will probably ruin the product from the start,” he wrote. “My guess is we’ll need to be very good about not notifying parents / teachers.”
The rub, according to both filings, is that while Zuckerberg wanted more minors on Facebook and Instagram, he wasn’t equally invested in protecting those minors. Meta employees repeatedly made requests to Zuckerberg and other higher-ups to at least somewhat address child safety concerns. Quoting directly from Filing 1:
“In April 2017, then-head of Instagram Kevin Systrom asked for 13 additional engineering headcounts to make good on his ‘public commitment to making Instagram a place where people feel safe to be themselves, without criticism or harassment’ and to address ‘critical areas for safety on IG.’ In response, Mr. Zuckerberg noted that he would add Instagram to a ‘mix’ of other teams seeking access to a pool of unallocated engineers—but due to ‘more extreme issues on FB right now’ ‘probably can’t get you 13 engineers in the near term.’”
Two years later, another Meta executive allegedly requested hires for Instagram’s “well-being” program, and again was stonewalled. A year after that, more requests went straight to Zuckerberg, and according to the filings, there was still no action taken.
As this was all happening, Yoav Shapira, formerly Meta’s director of engineering, allegedly lamented to a colleague: “Child Safety is explicitly called out as a non-goal in our H2 plans. So if we can do something here, cool. But if we can do nothing at all, that’s fine too.” It’s not clear if he was being sarcastic in his 2020 email, which was pulled from Filing 1.
It’s not as if Meta was lacking for funds that could’ve been used to protect minors on Facebook and Instagram. The money has always been there. The problem, according to both lawsuits, is that new safety features negatively affect Meta’s bottom line.
For instance, there was a contentious internal debate over whether to make Instagram accounts private by default. According to Filing 2, most of Meta’s internal teams “recommended that private-by-default should ‘Launch Now,’ observing that this ‘will increase teen safety’ and is ‘in-line with teen user expectations,’ ‘parent expectations,’ and ‘regulator expectations.’” Meta’s growth team recommended against launching at the time, noting it would harm growth among teenage users. The growth team seemingly won out: no private-by-default setting was introduced in 2020. A compromise of sorts arrived in 2021; it was not explicitly a private-by-default feature on Instagram, but it was a step in that direction.
But Filing 2 goes on to explain how the new safety restrictions were still easy to evade—teenage users could still make public Instagram accounts with ease. An employee allegedly said of the watered-down feature: “[This] is about looking good to regulators so that they don’t block our under 13 year old IG version we are working on. That’s it. It has a terrible impact on teen engagement and retention and no detectable benefit on integrity metrics.” Private-by-default was eventually added to Meta’s teenage accounts in 2024, well into litigation.
At other points over the last decade, Meta backed away from the decent restrictions it put in place. For instance, Meta temporarily banned appearance filters over well-founded concerns that they can cause body dysmorphia and other mental health issues, especially among girls. But those filters were not banned on Snapchat or TikTok, which put Meta at a competitive disadvantage. According to Filing 2, a former Meta executive testified that the ban on filters was eventually overturned in May 2020 because “filters are incredibly popular.” After the ban was reversed, a Meta executive appealed directly to Zuckerberg, allegedly writing: “I respect your call on this and I’ll support it, but want to just say for the record that I don’t think it’s the right call given the risks. As a parent of two teenage girls—one of whom has been hospitalized twice in part for body dysmorphia—I can tell you the pressure on them and their peers coming through social media is intense with respect to body image.” Another executive allegedly wrote that the ban reversal would lead to Meta “rightly [being] accused of putting growth over responsibility.”
Filing 2 quotes other former Meta higher-ups who seem to be somewhat perturbed by the growth-over-responsibility mindset. Jayakumar, former head of safety and well-being at Instagram, allegedly testified that it was “generally pretty difficult to make safety changes that might impact growth or daily active users by any significant amount.” Dr. Joshua Simons, a former Facebook AI research scientist, allegedly testified, “If we were proposing something that would reduce engagement that went up for the executive team, Mark Zuckerberg, to review, the decision that came back would prioritize the existing system of engagement over other safety considerations.”
Filing 1 contains some interesting depositions, including one from Meta whistleblower Arturo Bejar, who testified before Congress in 2023. From 2009 through 2015, Bejar was Facebook’s director of engineering. He returned to Meta in 2019 as a consultant on Instagram’s well-being team. According to his deposition, he discovered how porous Meta’s safety features were and raised his concerns in an email to Meta’s management—meaning Zuckerberg, Instagram head Adam Mosseri, and others. He got nowhere, he said. “What I witnessed firsthand and had conversations with people about was that there were no substantive efforts to understand and mitigate addiction,” he noted during his deposition, adding, “Instagram is not a safe place, it’s not a supportive place for teenagers.”
I want to flag one more thing, which Reuters reporter Jeff Horwitz concisely summed up from his review of Filing 2. It is perhaps the most important takeaway from all of the documents:
“In a 2020 research project code-named ‘Project Mercury,’ Meta scientists worked with survey firm Nielsen to gauge the effect of ‘deactivating’ Facebook, according to Meta documents obtained via discovery. To the company’s disappointment, ‘people who stopped using Facebook for a week reported lower feelings of depression, anxiety, loneliness and social comparison,’ internal documents said. Rather than publishing those findings or pursuing additional research, the filing states, Meta called off further work and internally declared that the negative study findings were tainted by the ‘existing media narrative’ around the company.”
Snapchat Allegations
David Boyle, Snapchat’s senior director of product, allegedly wrote to his colleagues that data he obtained about teenagers’ Snapchat usage made him “absolutely sick to [his] stomach.” To be clear, he was upset because there was a decline in the trend line for 13 to 17 year olds on Snapchat.
In these filings, Snapchat shows no qualms about whom it’s targeting as its user base. An exhibit from Filing 2 reads: “Continuing to win with new 13 year olds is the most critical aspect to onboarding new users.”
According to Filing 2, Snapchat CEO Evan Spiegel “admitted under oath that he had never commissioned research to determine whether or not, or to what degree, Snapchat was endangering youth, and acknowledged that Snap never employed in-house child psychologists, neuroscientists, or mental health professionals to advise the company on matters related to the mental health of young people.”
And yet, Spiegel allegedly acknowledged that some of Snapchat’s features—especially “Snapstreaks”—are potentially addictive and harmful for minors. A Snapstreak accumulates when two users send Snaps back and forth at least once a day. You can see how this could become compulsive, or even heartbreaking, when a streak ends. Spiegel admitted that streaks cause “toxic behavior,” according to Filing 1. In 2017, a Snapchat employee allegedly wrote that through streaks, they had “tapped into some mass psychosis.” The same alleged email chain included information about how streak users “skew younger,” referring to teenagers between the ages of 13 and 17.
A year or so later, Josh Siegel, a senior Snap product manager, allegedly wrote: “The general product stance on Streaks is that we don’t love them (it was an accidentally addictive, somewhat unhealthy feature that gamifies friendships in a weird way), but they’re too delicate to touch right now. 50M+ users have streaks, a few million probably only use the app for streaks.”
He’s not kidding. One of the more mind-numbing statistics I came across was from Filing 1, which reported that by 2021, Snapchat was receiving 400,000 requests per day to restore a “streak” that had ended either purposely, accidentally, or by error. Snapchat still hasn’t gotten rid of the feature.
Nor has Snapchat abandoned its pursuit of a key demographic: 13 year olds. In a 2023 planning document, Spiegel allegedly asked a series of existential questions: “I think we should take a step back here and discuss how we want to frame community growth and what our priorities are ... i.e. are we trying to age up? grow in new geographies? acquire new 13 yo users? I think we need a more clear point of view here before we build out the rest of the story. We also should really try to provide a clear view of the opportunity for incremental growth.”
I’ll let the alleged responses to Spiegel’s questions, which come from two Snapchat employees whose names are redacted, speak for themselves:
Employee No. 1: “We’ve addressed our growth priorities in the last bullet in the intro paragraph: 1) maintaining deep penetration of younger demographics in our established markets, 2) aging up in established markets, and 3) grow penetration of younger demos in new markets.”
Employee No. 2: “We need to build confidence here that we are continuing to win with new 13yr olds ... this is super key to this section. Most compelling would be to provide what evidence we can that we are winning at the same rate we have in the past with this very young demo ... 13 or 13-15, etc.”
Given the emphasis on attracting teenagers to the platform, you might think Snapchat would have embraced stringent parental controls. Unfortunately, you would be wrong. In 2022, Snapchat implemented “Family Center,” which initially “only permitted parents to view kids’ friends and recent conversations, without the ability to limit use or control account settings,” according to Filing 2. Spiegel allegedly intervened to keep “Family Center” more hands-off; he wrote, “let’s get rid of the ‘manage your privacy controls.’ I think the concept here is visibility but not control.” According to data from Filing 2, only one-third of 1% of teenage Snapchat users have adopted the “Family Center” feature.
I’ll end with a statistic: The Guardian reported that in 2024, Snapchat logged more adult grooming cases than the combined reported figures of “Facebook, Instagram, TikTok, X, Google and Discord.”