Teenagers are experiencing a mental-health crisis. And though the science is messy and the matter isn’t settled, many suspect that social media is, in some substantial way, tangled up in the problem. Following this instinct, legislators and regulators at both the state and federal levels have suggested a slew of interventions aimed at protecting young people from the potential harms of social platforms. Many of these efforts have so far fallen short on legal grounds, and broadly speaking, the status quo remains.

This week, we learned of a new approach intended to protect kids from Big Tech. On Tuesday, a joint lawsuit was filed against Meta by the attorneys general of 33 states, deploying consumer-protection laws to try to hold the company accountable for harming young people. It claims that Meta deliberately got children and teenagers “addicted” to its platforms, that this addiction directly causes physical and mental harm, and that the company lied about it.

“Just like Big Tobacco and vaping companies have done in years past, Meta chose to maximize its profits at the expense of public health,” Colorado Attorney General Phil Weiser said in a press release. The Big Tobacco comparison has been made a number of times since fall 2021, when the whistleblower Frances Haugen leaked to the press internal Meta documents about Instagram and Facebook. Among them were the results of internal studies in which teenagers candidly reported the negative effects that social media was having on their lives. When they felt bad about their bodies, Instagram made them feel worse. They had noticed increased anxiety and depression among their peers, and they considered Instagram to be one of the causes. Haugen’s so-called Facebook Files preceded the attorneys general’s investigation, which was announced several weeks after the documents’ release.

The suit is worth reading closely. As an effort to address incredibly serious social problems, it’s surprisingly slapdash. Our window into the case may be limited—many of its 233 pages are at least partially redacted, some blotted out entirely—but what is visible clearly relies on familiar, flawed tropes. It doesn’t engage seriously with the thorny question of just how social media affects kids and teenagers, and instead reads somewhat like a publicity stunt. Experts told me that the legal arguments made in the suit, even without knowing what is in the redactions, are not particularly convincing.

“I’m sympathetic overall to the dangers that social media pose to kids and how platforms have been poor stewards of their responsibility,” Mark Bartholomew, a professor at the University at Buffalo School of Law specializing in technology and the law, told me. “But when I look at the law … I do think it’s a stretch.” There are a couple of problems, he said. First, although social-media use might be a compulsive behavior, there is no official diagnosis for such a thing as social-media addiction. Second, proving that deception played a role in consumers’ use of Meta products will also be a challenge. That argument hinges on Meta’s public assurances that its products are safe, as well as the notion that consumers have taken that at face value to the point where they have been genuinely misled. “It’s hard to show that people were deceived,” Bartholomew said. “That they thought Instagram was one thing and it turned out to be another.”

In connection with these arguments, the suit puts forward the idea that Meta deliberately presents young users with content that will “provoke intense reactions,” such as “bullying content” and content related to eating disorders or violence. The problem with these arguments isn’t that they are unfair; it’s that the notion that Meta would deliberately hurt the people it wants to keep on its platforms is both extremely hard to prove and easy to deny. (Young people absolutely are bullied through Instagram, and they certainly might see harmful content there—as with any internet platform, it’s impossible to argue otherwise. But does Meta display such material on purpose to lock users into the platform? Not exactly.) “Teens don’t want to be exposed to harmful content or hurtful interactions, and advertisers don’t want their ads showing up alongside content that isn’t appropriate for teens,” Liza Crenshaw, a Meta spokesperson, told me, arguing that the attorneys general had misunderstood Meta’s “long-term commercial interests.”

Experts agreed that another aspect of the case feels considerably more cogent: namely, that Meta has violated the federal Children’s Online Privacy Protection Act. “That part’s more concrete,” Bartholomew said. “At least, it’s a little harder for Meta to wriggle out of.” COPPA prohibits tracking the online activity of children under the age of 13, or collecting their personal information, without explicit parental consent. If Meta has what COPPA terms “actual knowledge” of kids younger than 13 using its services, it’s violating the law. (“Instagram’s Terms of Use prohibit users under the age of 13. When we learn someone potentially under 13 has created an account, we work to remove them if they can’t demonstrate they meet our minimum age requirement,” Crenshaw said in a comment.)

Berin Szóka, a lawyer and the president of the libertarian-leaning think tank TechFreedom, highlighted one place where the suit’s argument could hold water: the complaint that, on Instagram’s sign-up page, where it asks for a new user’s birthday, the date picker previously defaulted to a birth date exactly 13 years in the past. “That’s not a neutral age gate. That encourages the answer of Yes, I’m exactly 13 years old,” he told me. Meta recently changed this age gate, but it could be fined retroactively, and the attorneys general could ask for some kind of continued supervision of the company’s COPPA practices. This would be a significant win, even if other elements of the suit are dismissed.

Most of the details in this part of the suit are redacted, so it’s possible that the states found new evidence of current lawbreaking activity as well. What is visible to the public so far is a bit ridiculous, however. For instance, to prove that Meta knows that kids use its apps, the suit cites the simple fact that various kid-oriented brands and media personalities (Lego, Hot Wheels, SpongeBob SquarePants, JoJo Siwa) have Instagram pages. The evidence in a similar (settled) case against YouTube was far more direct: While publicly denying that children used its platform, YouTube was taking meetings with toy companies such as Mattel and Hasbro and literally pitching itself as a “leader in reaching children age 6–11,” as well as the “#1 website regularly visited by kids.”

Where does this leave us? Mostly, wondering what broader outcome the states are hoping for. The attorneys general say Meta has used “powerful and unprecedented technologies” to “ensnare” youth and teens. That might be a common rhetorical point in popular discourse, but it would require a lot of work to prove. And by far the weakest part of their argument comes when the states try to substantiate the claim that, as New York Attorney General Letitia James said in a press release, Meta is “to blame” for the mental-health crisis among kids and teenagers.

In the clearest statement of their position, the attorneys general write: “Increased use of social media platforms, including those operated by Meta, result in physical and mental health harms particularly for young users, who experience higher rates of major depressive episodes, anxiety, sleep disturbances, suicide, and other mental health concerns.” There is just one citation on this line, to a public Google Document maintained by Jonathan Haidt, a social psychologist at the NYU Stern School of Business (and a contributor to The Atlantic). That document summarizes dozens of studies with different findings, some of which contradict one another. Which ones are the attorneys referring to? They don’t say.

Later, they do cite a specific 2022 study by Amy Orben and Andrew Przybylski, well-known researchers in the field. They found that young people are vulnerable to decreases in life satisfaction (quantified with a questionnaire) as a result of heavy social-media use in particular age windows. (For girls, ages 11 to 13; for boys, ages 14 and 15; and for both, age 19.) In the lawsuit, the attorneys summarize the study as finding that “going through puberty while being a heavy social media user interferes with a sensitive period for social learning.” This is not an accurate representation of that study at all. “We did not show that social media interferes with social learning,” Orben said when I emailed her the page of the lawsuit that cited her paper. In fact, the words social learning don’t appear in the study at all.

Bartholomew offered a theory of the case. “AGs get a certain amount of deference in the courts,” he told me. This isn’t a private class-action suit that can be quickly thrown out. “It’s unlikely to be dismissed anytime soon, and I think the main point here is to make some waves.” Maybe that’s fine. But neither the mental-health crisis nor the expansive power of social-media companies will be seriously dealt with this way. Whatever the intentions of this suit, it’s not striking anywhere close to the crux of our problems.