
The ink is barely dry on the UK’s Data Use and Access Act, and yet, for those of us working on the front line of digital infrastructure and data governance, it already feels like a missed opportunity.
On paper, the Act promises much: streamlined Subject Access Requests, legitimate interest pathways for fraud prevention, and frameworks for smart data sharing and digital identity.
These are the kinds of moves that could, if properly executed, modernise how data supports both innovation and public interest. But scratch beneath the surface, and the sheen of progress gives way to a sobering truth: this is an Act that plays it safe at a time when bold, technically informed leadership is urgently needed.
Make AI more accountable
Take artificial intelligence. The Lords rightly tried to insert measures to ensure greater transparency around AI training data and decision-making, recognising that as machine learning systems crawl and scrape their way across the digital landscape, our laws need to be far more specific about how data is harvested, used, and regulated. The Commons, however, rebuffed nearly every attempt to meaningfully include AI governance, citing costs. Costs? Compared to what? The societal cost of unregulated AI scraping personal or proprietary data en masse?
The end result? We’re left with vague allusions to “automated processing” and a few measures around bots and copyright, but little to equip regulators, companies or the public to address the real, escalating risks of AI misuse. There’s more about cookies than there is about model transparency.
And this lack of teeth matters. Right now, insurers, fintechs, and digital identity platforms are having to innovate inside regulatory grey zones, while also preparing for a compliance regime that increasingly looks like a patchwork of overlapping responsibilities. Meanwhile, some of the most powerful actors (AI developers, data aggregators, state-backed surveillance operations) remain effectively unchallenged.
Small wins
This is not to say the entire Act is a failure. Streamlined SARs, reduced red tape on low-risk cookies, and new digital ID frameworks are welcome. But they’re incremental, not transformational. Worse, they give the illusion of progress while kicking the bigger questions down the road.
More transparency and greater regulatory teeth
So, what’s needed?
First, transparency has to be a non-negotiable principle, especially for AI models trained on public or proprietary data. If a bot scrapes your website to feed an LLM, you should know about it. If an algorithm makes a pricing decision that affects your mortgage or insurance policy, you should be told how and why.
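As a rough illustration of the kind of visibility that principle implies, a site operator can at least check their own server logs for the user agents that known LLM crawlers publish. The sketch below assumes a standard Nginx/Apache combined log format and a hand-picked list of crawler tokens that will date quickly; it is a starting point for seeing who is scraping you, not a compliance tool.

```python
# Sketch: flag requests from known AI-crawler user agents in an Nginx/Apache
# "combined" format access log. The token list is illustrative only and
# changes over time; check each operator's published documentation.
import re
from collections import Counter
from pathlib import Path

# Illustrative user-agent substrings associated with LLM-related crawlers.
AI_CRAWLER_TOKENS = ["GPTBot", "CCBot", "ClaudeBot", "Bytespider"]

# Matches the tail of a combined-format log line: request, status, bytes,
# referer, then the quoted user-agent string we want.
LOG_LINE = re.compile(r'"[^"]*" \d{3} \S+ "[^"]*" "(?P<ua>[^"]*)"$')

def scan(log_path: str) -> Counter:
    """Count hits per AI-crawler token found in the log's user-agent field."""
    hits = Counter()
    for line in Path(log_path).read_text(errors="ignore").splitlines():
        match = LOG_LINE.search(line)
        if not match:
            continue
        ua = match.group("ua").lower()
        for token in AI_CRAWLER_TOKENS:
            if token.lower() in ua:
                hits[token] += 1
    return hits

if __name__ == "__main__":
    for token, count in scan("access.log").most_common():
        print(f"{token}: {count} requests")
```

Even something this crude makes the point: the data needed to know you are being scraped often already exists in your own logs. What is missing is any obligation on the scraper to declare itself, or on the model builder to disclose what went into training.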
Second, regulation must be designed for openness, not just as a principle, but in practice. We need open-source platforms underpinning government access to personal data, complete with public audit logs of who accessed what, when, and why. This is the only way to earn public trust.
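What could such an audit trail look like in practice? A minimal sketch, assuming nothing about any existing government system: each access event records who, what, when, and why, and is hash-chained to the previous entry so that quiet retrospective edits break the chain. The field names, JSON Lines storage, and SHA-256 chaining here are illustrative choices, not a real standard.

```python
# Sketch: a tamper-evident, append-only log of data-access events.
# Each entry commits to the hash of the previous one, so deleting or
# altering a historical entry is detectable by recomputing the chain.
import hashlib
import json
from datetime import datetime, timezone

class AuditLog:
    def __init__(self, path: str = "audit.jsonl"):
        self.path = path
        self.prev_hash = "0" * 64  # genesis value for an empty log

    def record(self, actor: str, record_id: str, purpose: str) -> str:
        """Append one access event and return its hash."""
        entry = {
            "who": actor,
            "what": record_id,
            "why": purpose,
            "when": datetime.now(timezone.utc).isoformat(),
            "prev": self.prev_hash,
        }
        serialized = json.dumps(entry, sort_keys=True)
        entry_hash = hashlib.sha256(serialized.encode()).hexdigest()
        with open(self.path, "a") as f:
            f.write(json.dumps({**entry, "hash": entry_hash}) + "\n")
        self.prev_hash = entry_hash
        return entry_hash

# Example use: one entry per access, written as it happens.
log = AuditLog()
log.record(actor="caseworker-142",
           record_id="benefit-claim/99871",
           purpose="fraud-prevention check")
```

An independent auditor can verify the trail simply by recomputing each hash against its predecessor. Publishing the chain is what turns “trust us” into something anyone can check.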
Finally, the Information Commission must show it’s not just a rebadged ICO with a shinier logo. It must act decisively and transparently, particularly around cross-border data sharing, AI model governance, and large-scale scraping.
The Data Use and Access Act could still be the skeleton of a strong framework. But without urgent regulatory muscle and a willingness to tackle AI’s real impact head-on, it risks being remembered not for what it achieved, but for what it failed to do when it mattered most.
At Intersys, we are passionate advocates for the use of AI with the right guardrails. You may also want to read our advice on the use of AI in the workplace, where you’ll find a free AI governance policy template.