Health – The Conversation

Within 48 hours, the legal landscape governing social media and children shifted in ways that will take years to fully understand.
On March 24, 2026, a Santa Fe jury ordered Meta to pay US$375 million for violating New Mexico’s consumer protection laws. The next day, a Los Angeles jury found Meta and Google’s YouTube negligent in the design of their platforms, awarding almost $6 million in damages to a single plaintiff.
The dollar figures are drawing headlines, but a $375 million penalty against a company worth $1.5 trillion is a rounding error. The award is less than 2% of Meta’s $22.8 billion net income in 2025. Meta’s stock rose 5% on the day of the New Mexico verdict, indicating how the market assessed the effect of the penalty on the company.
Fines without structural change are more akin to licensing fees than accountability. As a technology policy and law scholar, I believe the question of whether these verdicts will produce real changes to the products that millions of children use every day is more consequential than the jury awards.
The answer is not yet, and not automatically. A financial penalty does not rewrite a single line of code, remove an algorithm or place a safety engineer in a role that was eliminated to protect a quarterly earnings report. Meta and Google have signaled they will appeal, with First Amendment challenges to the product-design theory the likely central battleground.
The companies’ lawyers are likely to argue, with some justification, that the science linking the design of platforms to mental health harm remains contested, and that the companies have already implemented safety measures. In the meantime, Instagram, Facebook and YouTube will continue to operate exactly as they did before the verdicts.
Consumer protection
Most coverage frames the New Mexico verdict as a child safety case. It is that, but it also has a more technically significant dimension: a consumer protection claim grounded in allegations of corporate deception. New Mexico Attorney General Raúl Torrez did not sue Meta for what users posted. Instead, employing a novel legal approach, he sued Meta for its false statements about its own platform safety.
For three decades, Section 230 of the Communications Decency Act has shielded internet platforms from liability for content generated by their users. Courts have interpreted Section 230 immunity broadly, and many earlier attempts to hold platforms accountable for child harm have foundered on it.
The New Mexico complaint, filed in December 2023, was drafted with explicit awareness of this obstacle. It asked a single question: Did Meta knowingly lie to New Mexico consumers about the safety of its products?
The jury’s answer was yes, on all counts, and its verdict rested on three distinct legal theories under New Mexico’s Unfair Practices Act.
The first was straightforward deception: Meta’s public statements, ranging from CEO Mark Zuckerberg’s congressional testimony claiming research about the platform’s addictiveness was inconclusive to parental guidance materials that omitted known risks of grooming and sexual exploitation, qualify as representations made in connection with a commercial transaction.
Users pay for Meta’s platforms not with money but with their data, which Meta then converts into advertising revenue. New Mexico successfully argued that this data-for-services exchange constitutes commerce under the state’s consumer protection statute, and that misrepresentations made within it are actionable regardless of Section 230.
The second theory was unfair practice, or conduct offensive to public policy, even if not technically deceptive. Here, the evidence centered on what Meta’s own engineers and executives knew and then ignored.
Internal documents showed repeated warnings: about child sexual abuse material proliferating on the platforms, about algorithms that amplified harmful content because it generated engagement, and about age verification systems that were essentially cosmetic. The company overrode those warnings for commercial reasons.
The jury was shown a specific sequence: Meta executives requested staffing to address platform harms, Zuckerberg declined, and the company continued to publicly represent its safety efforts as adequate.
The third theory was unconscionability: taking advantage of consumers who lacked the capacity to protect themselves. Children are the clearest possible case. Children cannot evaluate terms of service, cannot negotiate platform architecture, and cannot assess the neurological implications of engagement-maximizing design. Meta had comprehensive internal research documenting these vulnerabilities and chose to ignore rather than mitigate them.
Bellwether on addictiveness
The Los Angeles case, which concluded on March 25, tested a different theory. It was a personal injury trial rather than a government enforcement action.
The plaintiff, identified in court as KGM, is a 20-year-old woman who began using YouTube at age 6 and Instagram at age 9. Her lawyers argued that the platforms’ deliberate design choices, such as infinite scroll, autoplay video and engagement-based recommendation algorithms, were the causes of her addiction, depression and self-harm.
The jury found both Meta and YouTube negligent in the design of their platforms and found that each company’s negligence was a substantial factor in causing harm to KGM. Meta bears 70% of the liability; YouTube 30%. The individual $3 million compensatory award is modest. The punitive damages phase, still to come, will be calculated against each company’s net worth and is likely to produce a very different number.
Beyond the general precedent, this case matters because it is a bellwether. It was selected from a consolidated group of hundreds of similar lawsuits to test whether a product-design theory of liability could survive a jury trial, and it did. That finding has immediate and concrete implications: Each of those plaintiffs now litigates on a stronger footing, and if the damages awarded to KGM are even partially scaled across similar cases, the total financial exposure for Meta and YouTube moves from hundreds of millions to billions of dollars.
More importantly, the bellwether verdict signals to every other plaintiff, attorney and state attorney general that this legal pathway is viable, and to every platform that the courtroom is no longer a safe harbor. The legal strategy established that negligence claims against platform design are viable in California courts.
Public nuisance
Beginning May 4, 2026, Judge Bryan Biedscheid in the New Mexico case is scheduled to hear the public nuisance count without a jury in a bench trial. Public nuisance is a legal doctrine traditionally used to address conditions that harm the general public; it has been applied to contaminated water, lead paint in housing stock and opioid distribution networks.
New Mexico is arguing that Meta’s platform architecture constitutes exactly such a condition. If the judge agrees, the remedy is not a fine. Instead, it is an abatement: a court order requiring Meta to eliminate the harmful condition.
Attorney General Torrez has already been explicit about what he will ask for: real age verification, not a checkbox asking users to confirm they are old enough; algorithm changes; and an independent monitor with authority to oversee compliance. These are structural demands on how the platform operates.
This is where drawing a parallel with Big Tobacco is apt. The tobacco litigation of the 1990s ultimately produced not just financial settlements but the Master Settlement Agreement, which imposed permanent restrictions on marketing practices and funded public health programs for decades. The public nuisance theory in the New Mexico case is designed to produce an analogous structural outcome for social media.
Precedent for a tidal wave of cases
The most significant effects of the two verdicts concern evidence and precedent. For the first time, a jury has examined Meta’s internal documents – emails from engineers warning about self-harm, the rejected safety proposals and Zuckerberg’s personal decisions to prioritize engagement over protection – and returned a verdict that those documents mean precisely what they appear to say.
That finding, and the legal theories that produced it, is now part of the foundation on which 40-plus pending state attorney general cases, thousands of individual lawsuits and a federal trial later this year are likely to be built.
The abatement phase, beginning May 4, may prove more consequential than the dollar amounts. If the judge in the New Mexico case – or any judge in a subsequent case – orders real age verification, algorithm changes and an independent monitor, that would be a true structural change.
I was staff at organizations including the Electronic Frontier Foundation, Public Knowledge, and the Harvard Berkman Klein Center, which were funded by various foundations and companies. Refer to their websites for disclosures. I was a staff member in the connectivity policy team at Facebook (2016-2018). I am an advisory board member of non-profits, including Internet Lab (Brazil) and Derechos Digitales (Chile). I am a senior advisor (without any honorarium) at the Datasphere Initiative and Portulans Institute. More details at https://www.carolinarossini.net/bio
