Meta just admitted they could've saved your kids all along


Meta can now block viral challenges that kill children. After telling us for a decade that content moderation at scale was impossible, that they couldn’t catch everything, that parents needed to do better.

And we’re supposed to celebrate.

While parents buried their children—kids like Nylah Anderson, Ethan Burke Van Lith, Matthew Minor, Griffin McGrath, Jack Servi, Mason Bogard, and Erik Robinson (who died over 15 years ago)—Meta and its peers had the capability to stop it.

“Matthew was 12 years old when he died as a result of accidental asphyxiation after participating in the online “Blackout Challenge.” Matthew was loving, compassionate, and a big hugger with a charismatic personality. Matthew was active in martial arts, football, and basketball. He cherished his time at family gatherings at the family farm in Tappahannock, Virginia.” From ParentsSOS, Photo from Matthew Minor Foundation.

These aren’t stupid kids—they’re neurologically vulnerable. The adolescent brain is literally wired to seek social validation over physical safety. The prefrontal cortex—responsible for risk assessment and impulse control—won’t fully develop until their mid-twenties. Meanwhile, their social reward centers are firing at maximum capacity.

When a 13-year-old sees a choking challenge video with millions of views and thousands of comments calling the person “brave” or “legendary,” their brain doesn’t process “this could kill me.” It processes “this equals belonging.” Add in algorithms specifically designed to exploit that vulnerability—serving more extreme content to maximize engagement—and you have a deadly formula. These platforms weaponize the very neurobiology of adolescence, turning developmental vulnerability into profit.

For years, Meta’s response was a shrug in corporate speak: “We can’t police everything.” “Parents need to monitor their children.” “We remove violating content when we find it.”

Until now. Until the lawsuits poured in. Until Congress demanded more. Until the cost of dead children finally threatened their profitability.

Instagram head Adam Mosseri seems so proud to share that they will identify “certain risky stunts” and block them entirely from teen feeds. Not remove them after they go viral. Not wait for user reports. Block them. Proactively. Automatically.

This might be a win; I hope it is. I hope it saves lives. I hope it really does what they say and keeps this content out of teens’ feeds. But their most recent teen safety products have been proven not to work as designed, to be more PR than protection. And these latest announcements settle right into Silicon Valley’s very predictable playbook.

Griffin was 13 years old when he died as a result of accidental asphyxiation after participating in the online “Blackout Challenge.” “Griffin was an extraordinary and wickedly smart child. He placed third in the National Science Bowl competition just two weeks before he passed. Most of all he was a kind-hearted soul and touched everyone he met with his brilliance, genuineness, and quick wit.” From ParentsSOS, Photo of Griffin’s mom Annie McGrath and Griffin on left via NPR courtesy of Annie McGrath. Photo on right of Annie, left, with Mary Rodee mom of Riley Blasford, myself, and Christine McComas mom of Grace McComas, advocating for KOSA in the United States Senate building earlier this year.

Silicon Valley’s Child Safety Playbook

  • Step 1: Deny the problem exists (“We’ve never seen this type of content trend on our platform”)

  • Step 2: Minimize the scope (“This affects a very small number of users”)

  • Step 3: Blame the users (“Parents should monitor their children’s activity”)

  • Step 4: Claim technical impossibility (“The scale makes it impossible to catch everything”)

  • Step 5: When legal pressure mounts, suddenly discover a solution (“We solved it with 3 months’ work”)

  • Step 6: Launch a PR campaign to celebrate the “innovation”

The pattern is as predictable as it is profitable: Extract maximum value while externalizing maximum harm, then claim innovation when forced to implement basic safety features.

We saw this with Cambridge Analytica—Facebook couldn’t possibly protect user data until they could. We saw it with livestreamed violence—impossible to stop until it wasn’t. We’re seeing it now with AI chatbots grooming children—Reuters exposed Meta’s internal guidelines allowing bots to roleplay romance with minors until public outcry forced a hasty revision.

Every Meta engineer who built these recommendation systems knows the truth: the capability was always there. Every product manager who prioritized growth over safety knows. Every executive who sat in meetings where these trade-offs were discussed knows.

I know because I was one of them–after nearly 15 years at the company:

  • I watched a room full of men running Meta’s Horizon Worlds put profit before the safety of people

  • I experienced the propaganda fed to employees to keep them believing that something, or even everything possible, was being done

  • I witnessed the lengths Meta’s leadership was willing to go to punish and discard anyone willing to challenge this behavior

Here’s what makes my blood boil most violently: Meta admitted that filtering out viral challenges was a matter of “spending several months improving our technology.” Several months. Judy Rogg lost her son Erik over 15 years ago. How many kids have been lost since? The sad answer: not enough to threaten the stock price.

But those months were only worth spending when Meta faced lawsuits, Congressional pressure, and a real threat to its profitability.

We shouldn’t be clapping for Meta. We should be clapping for the advocacy groups, law firms, activists, representatives, whistleblowers, and most of all the heroic parents who relive their life’s biggest trauma every single day in the hopes of keeping the rest of our kids safe from a machine that won’t.

Do not believe these changes are evidence of a reformed Meta–this is nothing but additional proof that Meta cannot be trusted to self-regulate, or even to honestly explain its technological limitations.


“My son Erik died April 21, 2010 from what was then commonly called the Choking Game. He was a normal, healthy 6th grader at Lincoln Middle School in Santa Monica, California - an “A” student, avid athlete and boy scout and fully engaged in life. His dream was to go to West Point, enter the military and then law enforcement. He was the opposite of a youth “at risk”. Credible evidence indicates that Erik’s first exposure to this challenge was during school the day before he tried it at home and died.” -Judy Rogg via Erik’s Cause

What Now

If Meta can flip a switch to protect kids, they can be forced to keep it on, make sure it works, and expand these protections to fully cover the range of potential social media harms, like drug distribution, bullying, sextortion, and more. But only if we act:

  1. Keep your kids off social media. These platforms have shown they will not protect your children until legally forced. The new “protections” are theater until proven otherwise.

  2. Call your representatives. The Kids Online Safety Act (KOSA) would legally require platforms to prevent harms to minors—not as a PR move, but as a duty of care.

  3. Demand real accountability. Not another apology tour. Not another “commitment to safety.” Criminal investigations into why known dangers were allowed to proliferate while the technology to stop them sat unused.

“If the Kids Online Safety Act had been in place, we believe this tragedy would not have happened to our family. We strongly support the passing of KOSA so that the tragic events that forever changed our family’s life do not happen to other children and families.”

- Todd and Mia Minor, Matthew Minor’s parents

“Not a single social media regulation has been passed in 25 years. Despite the innovation and growth of the Internet and social media over that time, protective measures and legislation have remained static. It is inexcusable and unconscionable, period. KOSA is long overdue. These new regulations are guaranteed to save children’s lives and I believe would have saved my son’s life.”

- Annie McGrath, Griffin’s mom

Meta wants us to applaud their “innovation” in teen safety. But you don’t get credit for finally installing smoke detectors after the house burned down—especially when you were selling matches to kids in the living room. Especially when you still blame the fire on their parents.

Listen to Judy on Scrolling2Death

Blaming parents like Judy Rogg, who, like so many survivor parents, has dedicated her life to preventing what happened to Erik from happening to other children.

Let us celebrate that Big Tech is admitting what’s feasible, and then let’s make sure they’re held to account to implement these protections and expand them to other areas of harm to children and society.

The smoke detectors were always there, sitting in a box and never installed, or maybe they were ripped from the ceiling the moment the beeping threatened their peace and profits, batteries removed, and tossed to a corner.

