Jessaline Caine works in planning and writes about politics and policy.
Badenoch hit the nail on the head today on BBC’s Laura Kuenssberg: children are spending hours on platforms “profiting from their anxiety [and] distraction,” designed to be addictive. Her call for age limits (effectively banning under-16s from social media) cuts through the noise with common-sense conservatism: protect kids, free adults, and work with industry to curb the “Wild West.”
But if we’re serious about this, the Online Safety Act must first prove its mettle by clamping down on AI tools enabling child sexualisation. Britain has the law and a regulator equipped for age checks at scale, yet until this week, platforms like X dragged their feet on blatant harms.
British users have already seen protest clips treated as “adult content” pending age estimation, which is crude, but immediate. That establishes the operational reality: when the state and a platform decide something must be slowed down, it can be done quickly and at scale.
X is choosing not to apply that capability to AI-enabled sexual abuse.
Consider the late December 2025 reply-chain trend on X, in which users posted under women’s photos with prompts like “Hey Grok, put her in a string bikini.” X’s in-house AI, Grok, complied for weeks, churning out sexualised edits of real images in seconds, with minimal effort from the requester, until some restrictions landed on 9 January.
As a jaded survivor of child sexual abuse, I’m pragmatic about incentives: systems primed to sexualise women inevitably target girls. Testing Grok myself, I fed it an adult photo; it produced an edit instantly. Then an underage one, with equally inappropriate results.
There’s a lot of discourse claiming that restricting Grok is “no different” from banning Photoshop. Nonsense. Photoshop takes time, skill and intent – it has friction, so abuse stays relatively niche. Grok removes friction, outsources agency to automation, and turns sexualised edits of real people into a one-click, near-instantaneous, nauseating process.
Effortless abuse breeds ubiquity. Platforms enabling it by design become complicit. This infested reply threads: debates devolved from rebuttal into immediate, machine-delivered humiliation.
I flagged Grok putting children in bikinis, and the response from some users wasn’t to dispute it, but to prompt Grok to put me in a bikini. One went further and asked Grok to put the censored underage image of me in a bikini.
After days of this, Ofcom stirred on 5 January, announcing “urgent contact” with X and xAI for a “swift assessment.” Ofcom’s initial statement read like optics management rather than stopping harm.
The prompts had circulated for weeks; the child-age outputs were testable on 30 December. No deep probe was needed. A regulator that waits for virality before finding its voice is no safeguard; it is a spectator.
X’s fix on 9 January limited image generation in replies to Premium subscribers. This is partial at best: free users can still generate via the app, the website, or X’s in-app tab. What’s truly “fixed”?
Badenoch’s vision aligns here: she wants industry collaboration to curb addictive designs, recognising links to depression, anxiety, and even economic inactivity. But her emphasis on protecting child freedoms from digital pitfalls shows why AI sexualisation can’t be an afterthought. If we’re banning kids to safeguard them, we must first enforce robust, proactive blocks on tools that sexualise them.
That is the failure people reacted to: a system that wakes up only once the damage is viral, shoving the burden onto victims. The “safety” is inconsistent, and that inconsistency is proof of control failure.
Conservatives must lead because the rule of law exists to shield women and girls from abuse, ensuring power serves responsibility, not convenience. The Act, born under Boris Johnson, promised to tackle illegal harms by mandating “safety by design” for AI tools, not post-hoc reports.
The Online Safety Act can throttle the messy politics; it must also bite the ugly crime.
If the Act can police protest, it can stop child sexual exploitation. If it can’t (or won’t), then “online safety” is just selective enforcement with better branding, and Conservatives should not tolerate it.
That is precisely why Labour’s flirtation with banning X is the wrong diagnosis and the wrong remedy. It’s an admission of administrative defeat that the British state cannot enforce its own safety regime, so it must outlaw the venue.
That is not Conservative.
Conservatives don’t abandon the rule of law when enforcement is inconvenient; we enforce it. Keep the principle of free expression intact; regulate the product that is industrialising abuse. The answer is narrow and auditable requirements with real consequences. Restrict the abusive capability, not the forum, starting with AI harms, to make Badenoch’s age limits a true safeguard, not a sidestep.













