Clive Rates: AI risks turbo-charging the regulatory state

Clive Rates is a Chartered Accountant and Constituency Officer for Dulwich & West Norwood, and a Conservative Candidate in the May 2026 local elections. He writes here.

Artificial Intelligence promises efficiency, clarity, and consistency. It could make government faster, cheaper, and more responsive. But in the regulatory state, AI raises a risk that should concern every Conservative: it removes the last meaningful limit on bureaucratic power.

Whereas ministers are accountable to voters and Parliament operates through statute, regulators have historically been restrained only by practical limits on their capacity. Time, budget, and manpower forced them to prioritise, triage, and exercise judgment. That restraint is now vanishing.

In the age of AI, regulators no longer face bandwidth limits. They can issue automated requests, scan every document, flag every anomaly, and escalate enforcement without hiring a single new staff member. The danger is not that they might overreach, but that they will have no reason not to.

By ‘regulators’, I mean not just statutory bodies such as Ofcom or the FCA, but the wider enforcement ecosystem: Whitehall departments, arms-length bodies, professional self-regulators such as the ICAEW or the Law Society, and increasingly, private institutions such as banks, schools, and universities behaving as quasi-regulators under ESG, DEI, GDPR, or reputational pressures. Their structures may differ, but their incentives, especially when amplified by AI, look strikingly similar.

Regulation is often described as “risk-based,” focused on genuine harm rather than technical breaches. But when the marginal cost of enforcement drops to zero, the incentive to triage disappears. Why prioritise, when it becomes possible to enforce everything, everywhere, all at once?

This is not a hypothetical concern. New digital ID infrastructure may begin as a tool for streamlining services. However, once linked to AI systems, it can quietly evolve into a mechanism of continuous bureaucratic control, not by new law, but by automation and drift.

Consider speeding. Everyone accepts the need for limits and even for enforcement cameras. But no one wants a system that automatically issues a fine every time they briefly touch 31 miles per hour in a 30 zone. A state that enforces the letter of the law with mechanical perfection, regardless of context, is not merely efficient. It is tyrannical.

What protects us today is not perfect compliance, but the reality of human discretion. Enforcement is expensive, judgment-based, and often forgiving. In the AI world, that margin for discretion disappears. Citizens will be expected to trust the system not to act on everything it sees. That is no safeguard at all.

Private organisations already scan behaviour, speech, and beliefs against sprawling rulebooks and codes of conduct. Once AI is added to the mix, it will no longer take a meeting or a memo to trigger consequences. The system can mine your activity and construct a rationale for sanction: plausible, policy-aligned, and automated.

While AI may reduce costs for regulators, it raises them for everyone else. Burdens do not vanish; they shift. Where you might once have been contacted by your regulator every few years, you might now face rolling digital oversight: automated emails, expanding questionnaires, and tailored alerts. These are cheap to send but costly to respond to. The burden falls on small businesses, charities, school governors, and sole traders: precisely the people Conservatives are meant to stand up for. This is regulation by stealth.

Here lies the real danger. In an AI-driven world, complexity itself becomes a threat. Regulations that were once obscure and inconsistently applied can now be enforced in real time. A rule no one quite understood becomes a trap, enforced by machines that never sleep. The result is that you still do not know where you stand, but now you cannot count on being overlooked. Today, a minor breach might go unnoticed or be resolved with a phone call. Tomorrow, it could be flagged and pursued automatically. Even if a human later steps in to drop the case, the process will already have punished you in time, money, and stress.

A Conservative response must begin with changing what success looks like. At present, regulators are incentivised to avoid blame at all costs. This is not public service; it is institutional self-defence. We must allow regulators to use discretion, and we must accept that occasional failure is the cost of a free society.

Success should mean not just catching risk, but reducing burden, sharpening focus, and shrinking footprint. And crucially, we must now recognise that regulatory complexity is no longer tolerable. The more sprawling and ambiguous our codes become, the more dangerous they are in an AI-enforced future. If the law becomes indecipherable to the citizen but perfectly legible to the machine, then it is no longer fit for purpose. Conservatives should lead the effort to simplify.

If we get these incentives right, then we need not fear regulators using AI. Let them use it to become leaner, clearer, and better targeted. Not because they were told to, but because the system rewards the right kind of behaviour. AI is not the problem. Misaligned incentives are. The more power regulators have, the more disciplined their purpose must be. A Conservative state must be smarter, but also smaller, more restrained, and aligned with the citizen’s freedom. That means saying yes to AI, but only once we are confident it is serving the right master. AI is inevitable. But the kind of state it powers is still ours to decide.
