HMRC Tribunal Ruling Exposes AI Use in R&D Tax Relief Decision
Introduction
A tribunal decision has compelled HMRC to disclose whether it relies on artificial intelligence (AI) in processing research and development (R&D) tax credit claims, raising fundamental questions about transparency, accountability, and fairness in government decision-making.
The First-tier Tribunal’s ruling in Thomas Elsbury [2025] UKFTT 915 (GRC) follows HMRC’s refusal to confirm or deny its use of AI, citing exemptions under the Freedom of Information Act. The tribunal found this stance untenable, warning that secrecy undermines public trust and risks deterring legitimate claimants.
The judge’s observations pointed to signs of automated involvement, such as correspondence containing American spellings, suggesting machine-generated language may have been used. The ruling emphasised that concealing the use of AI in high-stakes assessments threatens confidence in the tax system and could frustrate the policy aims of the R&D scheme.
Transparency, accountability, and fairness under the microscope
The decision brings three key risks of government AI use into sharp relief:
- Transparency: Failure to disclose AI use leaves taxpayers uncertain about how critical financial decisions are made.
- Accountability: Concerns were raised that HMRC officers may be informally using AI tools without oversight, creating gaps in governance.
- Fairness: Unlike human assessors, AI often operates as a “black box,” making it harder for claimants to understand decisions or mount effective appeals.
Implications for R&D claimants
For businesses seeking R&D tax relief, this ruling is both a safeguard and a warning. On one hand, companies now have stronger legal grounds to demand clarity over whether AI influenced their claims. On the other hand, the judgment reveals that some claimants may already have been subject to automated processing without their knowledge.
Given HMRC’s intensified scrutiny of R&D claims, highlighted by a 23% year-on-year fall in SME applications as of September 2024, the potential use of AI raises further concerns. If automated systems reject claims without grasping the nuances of innovation, genuine projects could be unfairly dismissed, discouraging startups and scale-ups from applying altogether. Such risks would undermine the scheme’s original purpose: to stimulate UK innovation.
AI’s own view of its limits
When asked to reflect on its suitability for tax assessments, one AI system admitted serious shortcomings: a lack of contextual understanding, susceptibility to data bias, and opacity that complicates appeals. While AI can improve consistency and efficiency, it cannot replicate the nuanced judgment required for complex R&D determinations. The system itself recommended “human-in-the-loop” models where technology assists but does not replace human decision-makers.
Beyond HMRC: wider lessons for government
The Elsbury ruling sets a precedent that could extend across public administration, from welfare benefits to planning and licensing. It confirms that government departments cannot rely on secrecy when AI is used in decisions affecting citizens’ rights and livelihoods. Courts are willing to intervene where safeguards are absent.
This points to an urgent need for a robust AI governance framework in government covering disclosure, oversight, and appeal mechanisms. Without these, automated decision-making risks eroding trust in public institutions.
Looking ahead
AI should play a supporting role in government administration, not a decisive one in matters with significant financial or personal impact. For HMRC and beyond, the path forward lies in transparent disclosure, clear accountability, and models where humans remain firmly in control.
AI in the tax industry more broadly
The tax industry is increasingly exploring AI applications beyond R&D relief processing. AI is already assisting with fraud detection, data analytics, and compliance monitoring, offering potential benefits in efficiency and consistency. However, the challenges highlighted in this ruling echo across the sector:
- Complexity of tax law: AI struggles with interpreting nuanced legislation and case-specific contexts, areas where human expertise remains indispensable.
- Risk of bias: Training data may embed systemic biases, leading to unfair outcomes in tax assessments or audits.
- Trust and transparency: Taxpayers must have confidence in the integrity of the system. If decisions appear to be outsourced to opaque algorithms, public trust could erode rapidly.
- Opportunity for augmentation: Properly designed, AI can serve as a powerful tool for human tax professionals by flagging anomalies, analysing large datasets, and streamlining routine checks while leaving judgment calls to trained officers.
In short, AI’s future in tax lies not in replacing human judgment but in enhancing it. Governments and firms alike must prioritise oversight, disclosure, and accountability to ensure AI supports, rather than undermines, the fairness and legitimacy of tax administration.