Why ChatGPT Won't Replace R&D Specialists (And Wasn’t Meant To)!

There’s been a trend lately of R&D advisors testing ChatGPT to see if it can assess a project’s eligibility for R&D tax credits. The experiment typically goes like this: upload DSIT guidelines, ask the AI if a project qualifies, and wait for a “yes” or “no.”

Spoiler alert: the answer is often wrong, especially when the project doesn’t actually qualify. But the real issue here isn’t ChatGPT’s “failures.” It’s the approach.

Imagine giving a new intern the DSIT guidelines to digest, and then asking them to assess a complex R&D project. We all know that no one—not even the sharpest intern—would get it right under those conditions. Being able to accurately assess an R&D project requires not just knowledge of the guidelines, but significant experience in how to apply them in the real world.

The exact same applies to using large language models (LLMs) like ChatGPT. Any model needs training and instruction, just like a person does. R&D tax credit assessment requires specific insights, industry knowledge, and an understanding of project context. In other words, you can’t just feed it some generic information and expect magic.

Here’s why this approach misses the mark:

  1. LLMs Need Context, Training, and Specificity. Language models excel at synthesising known information. But ask them to apply complex HMRC guidance without the nuanced knowledge that a trained specialist or a purpose-built tool has, and they’ll be unreliable. It’s like hiring someone with astonishing general knowledge, like the winner of Mastermind, and asking them to perform brain surgery: they might know some general theory, but a successful outcome requires extensive (and very specific) experience and context.
  2. Confirmation Bias. Advisors who test ChatGPT on R&D tax questions, watch it fail, and then declare that LLMs are no help for R&D tax are demonstrating a clear case of confirmation bias. Used out of the box, general foundation models like GPT are indeed limited and unreliable for specialist use cases, but there is genuine potential in models that are trained effectively or combined with tools that bring project-specific expertise to the table.
  3. There Are Solutions That Do Work. Instead of expecting a generic AI model to intuit complex tax criteria, look for purpose-built solutions, like Claimer’s Project Studio. It’s designed specifically for this task and trained by experts who know what HMRC looks for. With it, you’re not simply asking ChatGPT to play expert; you’re equipping it with the tools it needs to give an informed answer (a rough sketch of that grounding idea follows this list).

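To make the point about context and specificity concrete, here is a minimal, hedged sketch of the difference between asking a model cold and grounding it first. Everything in it (the guidance excerpt, the project notes, and the call_llm placeholder) is a hypothetical stand-in for illustration, not Claimer’s actual implementation or any vendor’s API.

```python
# Illustrative sketch only: a "purpose-built" workflow grounds the model with
# guidance excerpts, project context, and a structured task, rather than
# asking a bare model for a yes/no. All names here are hypothetical.

GUIDANCE_EXCERPT = (
    "A project qualifies only if it seeks an advance in science or technology "
    "by resolving uncertainty that a competent professional could not "
    "readily resolve."  # paraphrased for illustration, not the DSIT text
)

PROJECT_NOTES = (
    "We built a customer dashboard using an off-the-shelf charting library; "
    "no technical problems arose beyond routine configuration."
)

def build_grounded_prompt(guidance: str, project: str) -> str:
    """Assemble a prompt that supplies criteria, context, and the reasoning
    steps required, instead of relying on the model's general knowledge."""
    return (
        "You are assisting a trained R&D tax advisor. Do not give a bare "
        "yes/no answer. For the project below:\n"
        "1. Identify any claimed scientific or technological advance.\n"
        "2. Identify the uncertainties and explain why a competent "
        "professional could not readily resolve them.\n"
        "3. List the follow-up questions an advisor should ask.\n\n"
        f"Relevant guidance:\n{guidance}\n\nProject description:\n{project}"
    )

def call_llm(prompt: str) -> str:
    """Placeholder for whichever model or tool is actually used."""
    return f"[model response to a {len(prompt)}-character prompt]"

if __name__ == "__main__":
    naive = call_llm("Does this project qualify for R&D tax credits? " + PROJECT_NOTES)
    grounded = call_llm(build_grounded_prompt(GUIDANCE_EXCERPT, PROJECT_NOTES))
    print(naive)
    print(grounded)
```

The code itself is beside the point; what matters is that a useful answer comes from structuring the task and supplying the right context, which is exactly what a trained advisor or a purpose-built tool adds.
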
An Example: Using ChatGPT in this way is like buying a sports car, driving it off-road, and then complaining about the lack of traction. Sure, it’s a high-performance machine, but only when used in the right context! For R&D tax claims, if you don’t have a properly trained, dedicated system, you’re likely spinning your wheels.

In short: don’t expect a non-expert AI to do the job of a seasoned R&D advisor! Heck, even the tools developed specifically for this purpose aren’t designed to replace R&D advisors entirely. Their purpose is to augment advisors, helping them work more efficiently and make fewer errors.

I get why some advisors might be keen to flag up the deficiencies of LLMs in the context of R&D tax credits. If a machine could do the job to the same standard as an advisor, then the job of the advisor might become obsolete. Personally, I don’t see that happening anytime soon, but I do think that 2025 is going to be a big year for technology adoption in the R&D advisory space.

AI isn’t going to replace R&D advisors, but R&D advisors utilising AI are going to outperform those who don’t. By integrating AI into their processes, advisors will be able to analyse data more efficiently, identify eligibility faster, and deliver more accurate, tailored claims. It’s not about replacing expertise but amplifying it—combining human insight with cutting-edge technology to provide a smarter, more streamlined approach to navigating complex R&D tax criteria.

If you are considering utilising technology to assist with R&D tax credit claims, use the right tools for the job. Let’s give credit (and credibility) where it’s due—and that’s with solutions specifically designed to make R&D claims easier, faster, and more accurate.

If you want to know more about Claimer's Project Studio, drop me a DM and we can set up a call/demo.

Chris Williams

Managing Director at ThirdRock - unlocking innovation reliefs to empower growth

Derek Granger - Innovation Funding Great insight on the use of AI to evaluate R&D tax claims. The essence of researching and establishing the scientific or technological baseline lies with the “Competent Professional”, i.e. the individual(s) involved with the project(s) who are endeavouring to make the “technological advance”, and with showing that the actions taken to overcome the identified “uncertainties” are not “readily deducible”. While AI can add value in reviewing the baseline technology, does AI not have a tendency to hallucinate, and can it differentiate technology that may be a trade secret and therefore still qualify? I would not trust an AI tool over the opinion of a Competent Professional; that said, AI could also be useful in verifying who should be qualified as a CP for R&D tax purposes…

Adam McCann

Automating R&D tax | Co-founder & CEO at Claimer

Great points, especially capturing the nuance of this whole space!

Phil Chambers PhD

Senior Manager (R&D & Patent Box) | Expert in Tax Relief

Great article Derek!
