Regulate the Grift, Not the Kids
Why Australia's Social Media Ban Gets It Backwards—And What Would Actually Work
I expect this article to be controversial with my audience. Maybe even unpopular.
Like many of you, I have a deep distrust of social media platforms. I find their net impact on our social fabric contemptible. The algorithmic amplification of outrage, the erosion of shared reality, the documented mental health crisis among teenagers—I’ve watched it unfold with the same horror you have. When I see headlines about Australia banning social media for children under 16, part of me wants to cheer.
But I can’t. Because I think they got it wrong.
Please stick with me as I work through this argument. I’m not here to defend Big Tech or dismiss the very real harms that social media inflicts on young people. I’m here to make the case that the problem runs deeper than the platforms themselves—and that the solution won’t be found in legislating who can access these platforms. The solution lies in legislating the incentives for how people use them. In making individuals responsible for their own behavior, rather than asking platforms to serve as gatekeepers for an entire generation.
This is about offense versus defense. And I think we’re playing defense when we should be attacking.
What Australia Did
On December 9, 2025, Australia's world-first social media ban for users under 16 came into force. The law—the Online Safety Amendment (Social Media Minimum Age) Act 2024, passed with broad bipartisan support—requires platforms including TikTok, Instagram, Facebook, X, YouTube, Snapchat, Reddit, Kick, Twitch, and Threads to take “reasonable steps” to prevent minors from creating or maintaining accounts.1
The mechanics are still being worked out. Age verification methods remain unspecified, though the government has made clear that the burden falls on platforms, not parents or children. Penalties for non-compliance can reach $50 million AUD. Meta has already begun removing accounts it believes belong to users under 16.
On the surface, this feels like progress. The government identified a harm—social media’s impact on children—and took decisive action. No more hand-wringing. No more waiting for platforms to self-regulate. Just a clean, bright line: under 16, you’re out.
I understand the appeal. I feel it myself.
But feeling satisfying and being effective are not the same thing.
The Harms Are Real
Let me be clear about something before I critique the ban: the harms that motivated it are not imaginary.
The research is substantial and troubling. Rates of depression, anxiety, and self-harm among adolescents—particularly girls—have risen sharply in the smartphone era. Studies have documented correlations between social media use and negative body image, sleep disruption, and feelings of social inadequacy. The algorithmic amplification of extreme content has created radicalization pipelines that pull vulnerable young people toward dangerous ideologies. Predatory design patterns—infinite scroll, variable reward schedules, notification bombardment—are engineered to maximize engagement regardless of psychological cost.
Then there’s the misinformation ecosystem. Anti-vaccine content. Conspiracy theories. Historical revisionism. Eating disorder communities that actively encourage self-destruction. All of it monetized, all of it algorithmically boosted because engagement is engagement, and outrage engages.
Parents aren’t wrong to be terrified. Researchers aren’t wrong to sound alarms. The instinct to protect children from this environment is not merely understandable—it’s morally correct.
The question is not whether the harms are real.
The question is whether banning children from the platforms addresses those harms—or whether it simply relocates our anxiety while leaving the underlying problem untouched.
The Prohibition Problem
We’ve tried this before.
When alcohol destroyed families and fueled violence in early 20th-century America, the solution seemed obvious: ban it. Prohibition was enacted with the best of intentions by people who had witnessed genuine devastation. They weren’t wrong about the harms of alcohol abuse. They were wrong about the solution.
Prohibition didn’t eliminate drinking. It drove it underground. It created speakeasies and bootleggers. It handed organized crime an industry. It made alcohol more dangerous by removing quality controls. And it lasted only 13 years before the country admitted the experiment had failed.
Australia’s social media ban will face the same structural problems.
Technical circumvention is trivial. VPNs are free and easy to use. A 14-year-old who wants access to TikTok will have it within minutes. Fake birthdays have been standard practice since the internet began. Borrowed parent accounts require no technical sophistication at all. The ban creates the appearance of a barrier while anyone motivated to cross it faces almost no real obstacle.
Age verification creates impossible tradeoffs. If platforms rely on self-attestation, enforcement is a joke. If they require government ID verification, we’ve created a surveillance infrastructure that applies to everyone—adults included—and hands platforms even more personal data than they already have. The privacy implications are staggering, and the potential for data breaches adds a new category of harm we haven’t even begun to reckon with.
The burden falls on platforms, not perpetrators. This is perhaps the most fundamental flaw. The law asks Meta and TikTok and Google to solve a problem they have no real incentive to solve—and every incentive to solve badly. Platforms will implement the minimum viable compliance. They’ll check boxes. They’ll point to their age-gating features when challenged. And the content that actually harms children will remain exactly as accessible as it was before, just with a slightly higher barrier to entry that motivated users will easily clear.
Prohibition failed because it targeted the venue rather than the behavior. Australia’s ban makes the same mistake.
Punishing Victims, Not Perpetrators
Here’s what bothers me most about the ban: it punishes the wrong people.
Under this law, a 15-year-old loses access to:
Group chats with friends
Educational content creators who make learning engaging
Communities built around shared interests—art, music, coding, activism
The primary communication infrastructure of their generation
Tools for creative expression and audience-building
Meanwhile, the influencer who monetizes pro-anorexia content keeps their revenue. The conspiracy theorist who radicalizes vulnerable people keeps their platform. The grifter selling miracle cures to desperate families keeps their audience. The engagement-bait factory pumping out rage content keeps their ad checks.
The kid is banned. The predator is untouched.
We’ve constructed a response that removes children from the room where poison is served—while continuing to serve poison to everyone who remains. Adults are still exposed to the same misinformation, the same manipulative design patterns, the same algorithmic amplification of the worst content. We haven’t addressed the harm. We’ve just narrowed the victim pool and called it progress.
And here’s the darker irony: we haven’t even effectively narrowed the victim pool. We’ve just driven it underground.
When a 14-year-old circumvents this ban—and they will, trivially—they don’t disappear from social media. They simply become invisible to the systems that might otherwise track, study, or protect them. Their usage becomes harder to monitor, their exposure to harm harder to measure, the sources of that harm harder to identify and hold accountable. We’ve created a shadow population of underage users operating outside any framework of transparency or protection.
This is worse than doing nothing. It’s doing something performative that makes the actual problem harder to solve. We pat ourselves on the back for passing a law. We declare victory. And meanwhile, the same kids are exposed to the same harms, except now we can’t see it happening—which means we’re less likely to do anything that might actually work.
This is not protection. This is theater. Worse, it’s theater that obscures the ongoing problem we claim to want to solve.
The Inoculation Problem
There’s another dimension to this that troubles me deeply, and it’s one I rarely see discussed: the developmental case against complete insulation.
By the time a person turns 16—or 18, or 21, depending on how these laws evolve—they will face the full force of the digital attention economy with zero preparation. They’ll encounter sophisticated manipulation, predatory marketing, misinformation ecosystems, and algorithmic rabbit holes having never developed any resistance to them.
We don’t prepare soldiers for combat by keeping them away from all conflict until deployment day. We don’t teach people to swim by keeping them out of water until they’re adults. Controlled exposure, with guidance and increasing autonomy, is how humans learn to navigate dangerous environments.
Social media is a dangerous environment. I don’t dispute that. But the answer to a dangerous environment is not a bubble—it’s inoculation.
The things that harm children on social media—manipulative marketing, misinformation, engagement bait, predatory design—are not unique to social media. They exist throughout modern life. They saturate advertising, news media, political campaigns, and commercial culture. A child who has never learned to recognize these patterns online will be helpless against them offline. And when the bubble finally pops, they’ll face the full assault with no immunity whatsoever.
What we should want is not a world where children never encounter manipulation. That world doesn’t exist and never will. What we should want is a world where:
The volume of harmful content is dramatically reduced
Children encounter it in manageable doses that allow for education and development of critical thinking
The worst actors face consequences severe enough to deter the behavior
A ban accomplishes none of these goals. It doesn’t reduce the content. It doesn’t enable graduated exposure. It doesn’t touch the people creating the harm.
There is an approach that does all three. But it requires us to stop playing defense and start playing offense.
Regulate the Grift, Not the Kids
I’ve written previously about a framework I call The Grifter Tax—a legislative approach designed to kill the misinformation economy by making monetized deception financially catastrophic.
The core insight is simple: almost everything harmful on social media is monetized. Someone is getting paid.
The influencer pushing dangerous diet content? Monetized through sponsorships and platform revenue.
The conspiracy theorist radicalizing young men? Monetized through merchandise, donations, and ad revenue.
The engagement-bait factory producing outrage content? Monetized through algorithmic amplification that drives advertising.
The pseudo-expert selling miracle cures? Monetized directly through product sales.
Follow the money. Always follow the money.
The Grifter Tax framework doesn’t ban anyone from social media. It doesn’t ask platforms to verify ages or police content. Instead, it targets the individuals who monetize harm—directly, with consequences that scale to match their profits.
The mechanism is disgorgement: you don’t get to keep money you earned through deception. If you monetize content that provably harms people—through misinformation, manipulation, or exploitation—you forfeit the revenue. Not a fine. Not a penalty. Everything you made.
The framework includes bounty systems that create financial incentives for identifying and prosecuting monetized harm. Suddenly, every grifter has to worry that someone is building a case against them—someone who stands to collect a percentage of everything they’ve ever earned from the grift.
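To make that incentive flip concrete, here is a rough back-of-the-envelope sketch in Python. The numbers are entirely hypothetical and this is my own illustration of the logic, not anything specified in the framework itself; the point is only that a flat fine is a rounding error against revenue, while disgorgement puts every dollar ever earned from the content at risk.

```python
# Purely illustrative sketch with hypothetical numbers; not a model of any actual
# statute or of the framework's specific provisions.

annual_revenue = 500_000      # hypothetical earnings from monetized harmful content
years_running = 3
lifetime_revenue = annual_revenue * years_running

# Status quo: enforcement is rare, and penalties are flat fines small next to revenue.
p_enforcement_now = 0.05
flat_fine = 50_000
expected_cost_now = p_enforcement_now * flat_fine               # $2,500: a rounding error

# Disgorgement plus bounties: bounty hunters raise the odds a case gets built, and a
# successful case forfeits every dollar earned from the content, spent or not.
p_enforcement_bounty = 0.50
expected_cost_bounty = p_enforcement_bounty * lifetime_revenue  # $750,000

print(f"Expected penalty, status quo:   ${expected_cost_now:,.0f}")
print(f"Expected penalty, disgorgement: ${expected_cost_bounty:,.0f}")

# The tail risk is what matters most: a creator who is caught owes the full
# $1,500,000, including money already spent, which is what makes continuing
# the grift economically irrational.
```

Even if you quibble with every number here, the asymmetry is the point: a fine is a cost of doing business, while forfeiture of everything earned makes the business itself the liability.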
I won’t rehash the entire framework here—readers who want the full legislative architecture, constitutional analysis, and enforcement mechanisms should read the original article. But the key point for our purposes is this:
The Grifter Tax targets individuals, not platforms. It regulates behavior, not access. It punishes perpetrators, not victims.
This is what offense looks like.
What This Actually Solves
Apply this lens to the harms that motivated Australia’s ban, and the picture changes dramatically.
Harmful influencer content: Under a Grifter Tax framework, the influencer monetizing pro-anorexia content faces disgorgement of all revenue from that content—plus bounty hunters motivated to build the case. The economic incentive flips completely. Today, harmful content pays. Under this framework, harmful content bankrupts.
Radicalization pipelines: The conspiracy theorists and extremist recruiters who pull vulnerable young people toward dangerous ideologies are almost universally monetizing their reach. Disgorgement makes that monetization a liability rather than an asset. The content becomes economically irrational to produce.
Predatory marketing: Content creators who target minors with manipulative advertising, gambling mechanics, or body-image exploitation face scaled consequences tied to their revenue. The bigger the grift, the bigger the forfeiture.
Misinformation ecosystems: The entire framework was designed to make lying unprofitable. Anti-vaccine influencers, medical grifters, historical revisionists—all of them face the same calculus. You can spread misinformation for free. The moment you monetize it, you’ve created a target on your back.
None of this requires banning children from social media. None of it requires invasive age verification. None of it asks platforms to solve a problem they have no incentive to solve.
It simply makes the harmful behavior—the monetized harmful behavior—financially catastrophic for the individuals who engage in it.
The result? Children stay online. They maintain access to community, education, creative tools, and communication with their peers. But the environment they’re navigating is fundamentally different. The worst content becomes rare because producing it is economic suicide. The remaining harmful content exists at volumes low enough to manage through education and parental guidance rather than total prohibition.
That’s the goal: not a world without risk, but a world where risk is manageable and children can develop the skills to navigate it.
There’s more below, but first: If work like this—challenging popular assumptions with reasoned alternatives, even when it’s uncomfortable—feels worth having in the world, please consider supporting The American Manifesto. Paid subscriptions make it possible to do the careful work of building frameworks that might actually change something, rather than just applauding whatever feels satisfying in the moment.
The Challenge
I understand why Australia’s ban feels right.
It’s tangible. It’s immediate. It puts a clear burden on identifiable actors. It lets us say we did something. And it comes from a place of genuine concern for children that I share completely.
But it’s the wrong solution. It treats the symptom—children’s exposure to harmful content—while ignoring the disease: the economic incentives that make harmful content profitable to produce in the first place.
Bans are defensive. They try to hide potential victims from harm.
The Grifter Tax is offensive. It tries to destroy the business model that creates harm.
I know which approach I’d bet on.
We’ve spent years now watching social media platforms promise to do better, implement half-measures, and continue profiting from the attention economy regardless of who gets hurt. We’ve watched governments hold hearings, express concern, and accomplish nothing. Australia has now tried something more aggressive—and I genuinely respect the impulse behind it, even as I think the execution is misguided.
But there’s a harder path available. A path that doesn’t ask platforms to police themselves or children to be locked out of the digital commons. A path that goes directly at the people causing harm and makes them pay—literally—for the damage they’ve done.
Stop building walls around children.
Start bankrupting the people who would exploit them.
That’s how we actually win this.
Your Move
This article is meant to challenge, not lecture. I’m genuinely curious how it lands.
Does the “inoculation vs. bubble” framing resonate with you, or does it feel like rationalization for inaction?
If you’re a parent, how do you balance protecting your kids from online harms with preparing them to navigate those harms independently?
Do you see the Grifter Tax framework as more or less realistic than platform bans as a legislative path forward?
What am I missing in this analysis?
The comments on these kinds of pieces are often more valuable than the articles themselves. Tell me where I’m wrong.
1. Josh Taylor, “Millions of children and teens lose access to accounts as Australia’s world-first social media ban begins,” The Guardian, December 9, 2025.
Comprehensive reporting on the implementation of Australia’s Online Safety Amendment (Social Media Minimum Age) Act 2024, detailing which platforms are affected (TikTok, Facebook, Instagram, X, YouTube, Snapchat, Reddit, Kick, Twitch, and Threads), the enforcement mechanisms that place the burden on platforms rather than users, and the $50 million AUD penalty structure for non-compliance. The article documents Meta’s early compliance efforts in removing accounts believed to belong to users under 16, providing the factual foundation for this article’s analysis of the ban’s implementation and limitations.