RQA Question Time: AI in Quality

We held a special hour-long Question Time event for RQA members on 27 September, all about the role and use of AI in Quality. Before you get to an AI-generated summary of the various questions and takeaways from the event, here's the video of a question about what attendees use AI for.

AI and Quality Assurance: Key Takeaways from Our Latest Q&A Session

In our most recent Q&A, we explored the many ways artificial intelligence (AI) intersects with quality assurance (QA). With a diverse range of professionals joining the conversation, we addressed 16 questions about the practical use of AI in QA, its regulatory implications, and the future of the field. Below, we've captured the questions asked, summaries of the answers given, and the insights that emerged.


1. Has anyone integrated AI into their quality assurance processes?

Most attendees had not yet fully integrated AI into their QA processes. Several are experimenting with AI for tasks such as drafting audit checklists and summarising information, or are working with their IT departments on implementation. However, high costs and the complexity of validation remain barriers.

Key Insight:

While full integration of AI in QA processes is rare, there’s growing interest in specific applications. Many participants emphasised the importance of human oversight.


2. What are the regulatory challenges when implementing AI in GXP-regulated environments?

Several participants noted that their organisations face restrictions in using AI for confidential data, and some are only allowed to use AI for public or non-sensitive information. There is uncertainty around validating AI systems for GXP environments, and human sign-off remains essential.

Key Insight:

Regulatory challenges are significant, especially around data confidentiality and validation. Human review is required to ensure compliance.


3. Has anyone used AI tools, such as ChatGPT, to handle QA and GXP queries?

Many participants said they've used AI to help with SOP writing, summarising complex documents, and even brainstorming ideas. However, the consensus was that while AI can be useful, it often takes refining and clarifying the questions asked to get the most accurate results.

Key Insight:

AI can assist in answering QA and GXP queries, but it’s essential to ask precise questions and verify the answers it provides.


4. Has your organisation developed a policy on the use of AI in GXP-regulated environments?

The majority of participants reported that their organisations have not yet developed specific AI policies, though some companies are discussing it. Those with policies in place typically restrict the use of AI to non-confidential tasks.

Key Insight:

AI policies are still in development across many organisations, especially regarding confidentiality and regulatory compliance.


5. Where do you foresee AI being particularly useful in quality assurance?

Participants highlighted several potential uses, including document translation, audit planning, data summarisation, trend analysis, and risk assessments. Some expressed hope that AI could handle routine tasks, freeing up QA professionals for more complex problem-solving.

Key Insight:

AI could be highly valuable in automating routine QA tasks, enabling teams to focus on higher-level analysis and decision-making.
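
To make the summarisation idea concrete, here is a minimal sketch of what such a helper might look like. It uses the OpenAI Python SDK purely as an example; the model name and prompt are illustrative assumptions, the output would still need human QA review, and, as the confidentiality concerns in questions 2 and 4 make clear, nothing sensitive should be sent to a public AI service.

```python
# A minimal sketch of AI-assisted document summarisation. The SDK, model
# name, and prompt are illustrative assumptions, not a recommendation; the
# output still needs human QA review before any GXP use.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def summarise_for_audit(document_text: str) -> str:
    """Ask the model for a short, audit-oriented summary of a QA document."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You summarise quality-assurance documents for auditors. "
                    "Flag anything you are unsure about rather than guessing."
                ),
            },
            {"role": "user", "content": document_text},
        ],
    )
    return response.choices[0].message.content
```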


6. How should we handle companies claiming to use AI for tasks that could be done with traditional computing?

There was scepticism about companies marketing basic machine learning or conventional computing as AI. Many participants felt that as AI becomes more widespread, it will be important to challenge exaggerated claims and assess whether AI is truly adding value.

Key Insight:

Be cautious of AI claims from vendors and ensure that AI is genuinely providing value, not just repackaged traditional computing.


7. How can CSV and CSQA evolve to cover the new challenges and risks of AI in GXP environments?

The slow development of GXP guidelines was acknowledged, with many noting that regulators like the EMA and FDA are starting to address AI use. Participants highlighted the importance of risk assessments, human oversight, and clear governance.

Key Insight:

While GXP guidelines are still evolving, organisations must focus on risk assessments, governance, and human oversight to implement AI safely.


8. Do we need GXP guidelines for AI, or should we focus on the core principles like patient safety and data integrity?

Several participants emphasised that patient safety, data integrity, and the protection of rights should always remain at the forefront, regardless of whether specific AI guidelines are in place. It was noted that the core principles of GXP still apply.

Key Insight:

Core GXP principles—patient safety and data integrity—should guide AI use, even before formal guidelines are established.


9. Has AI identified any errors in your workflows?

Few participants reported AI uncovering errors, with many noting they hadn’t used AI in that way yet. However, one attendee shared an example of AI finding discrepancies in communication and reporting.

Key Insight:

While AI hasn’t yet been widely used for error detection, its potential to identify issues in workflows remains promising.


10. Has anyone successfully implemented an off-the-shelf AI solution?

Most attendees hadn't used off-the-shelf AI solutions, though one noted that their research team had used one for publication review. Overall, the consensus was that fully implemented AI solutions remain rare.

Key Insight:

Off-the-shelf AI solutions have not yet seen widespread adoption in QA, though some research teams are experimenting with them.


11. Should AI use be officially stated in QA documents?

Opinions were split. Some felt that AI’s role should be clearly stated if used, especially if it involved generating large portions of content. Others believed that as long as human review was involved, official declarations weren’t necessary.

Key Insight:

While some believe AI’s involvement should be noted in documents, the consensus is that human review remains crucial regardless of AI’s role.


12. Will AI improve work efficiency or make us over-reliant, as with calculators?

This question sparked a lively debate. Some felt AI would enhance efficiency by handling repetitive tasks, while others worried it could diminish critical thinking skills if overused. There was consensus that AI should support, not replace, human judgment.

Key Insight:

AI can improve efficiency, but it’s important to use it in a way that complements—rather than replaces—human expertise.


13. Has anyone used AI in risk assessment?

Few attendees had used AI for risk assessments, with most noting that they are still working on risk-assessing AI itself. One participant shared that AI is being tested for identifying risk trends in data.

Key Insight:

AI has the potential to assist in risk assessments, but most organisations are still in the early stages of using it in this area.
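
As a rough illustration of the "risk trend" testing one participant mentioned, the sketch below flags months whose deviation counts sit well above a trailing baseline. The data and threshold are invented for illustration, and, echoing question 6, this is plain statistics rather than AI; a real deployment would need proper validation.

```python
# A rough illustration of flagging "risk trends" in QA data. The monthly
# deviation counts and threshold are invented for illustration; this is
# plain statistics, not AI.
from statistics import mean, stdev

monthly_deviations = [4, 3, 5, 4, 6, 5, 4, 11, 12, 13]  # hypothetical counts

def flag_upward_trend(counts, window=6, z_threshold=2.0):
    """Flag indices whose count sits well above the trailing-window baseline."""
    flags = []
    for i in range(window, len(counts)):
        baseline = counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (counts[i] - mu) / sigma > z_threshold:
            flags.append(i)
    return flags

print(flag_upward_trend(monthly_deviations))  # -> [7, 8]
```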


14. How do you deal with AI hallucinations (when AI confidently gives incorrect answers)?

Many participants noted they had encountered AI hallucinations and emphasised the need for thorough validation of AI outputs. AI’s tendency to provide incorrect but confident answers highlights the importance of cross-checking results.

Key Insight:

AI hallucinations are a known issue, reinforcing the need for human oversight and validation of AI-generated content.
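
One possible shape for that oversight is a review gate that never releases an AI answer on its own. The sketch below is an assumption-laden illustration: `ask_model` is a hypothetical placeholder for whatever approved tool an organisation uses, and the simple word-overlap check merely surfaces source passages for a reviewer to cross-check; it does not detect hallucinations by itself.

```python
# A minimal sketch of a human-review gate for AI answers. `ask_model` is a
# hypothetical placeholder, and the word-overlap check only gathers passages
# for a reviewer; it does not, by itself, detect hallucinations.
def ask_model(question: str) -> str:
    # Placeholder: substitute your organisation's approved AI tool here.
    return "Draft answer produced by the AI tool (illustrative)."

def answer_with_review(question: str, source_documents: list[str]) -> dict:
    """Return the AI draft together with the evidence a reviewer must check."""
    draft = ask_model(question)
    draft_words = set(draft.lower().split())
    supporting = [doc for doc in source_documents
                  if draft_words & set(doc.lower().split())]
    return {
        "question": question,
        "ai_draft": draft,
        "supporting_passages": supporting,  # reviewer cross-checks these
        "status": "pending human sign-off",  # never auto-approved
    }
```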


15. What do you understand by AI in QA, and what systems do you consider AI?

Attendees agreed that while many systems claim to be AI, much of what is being marketed is closer to machine learning or deep learning. There was consensus that true AI in QA is still in its infancy, and it’s important to differentiate between AI and advanced computing.

Key Insight:

Be aware of the distinction between AI and machine learning: many solutions marketed as AI are still quite basic.


16. Is the problem AI solves bigger than the potential problems it could create?

This question divided the group, with some arguing that AI’s potential to solve problems outweighs the risks, while others expressed concern about the new problems AI might introduce. There was agreement that AI must be used responsibly, with careful consideration of its long-term effects.

Key Insight:

While AI holds great promise, its adoption must be balanced with a clear understanding of the risks and potential problems it might create.



Hans de Raad

Owner at OpenNovations

5 months ago

Sounds like a very interesting session! I had the pleasure of giving an AI keynote at the #hsraa24 conference last week, here are the slides, they seem to fit the topic as well, so hope this helps! https://www.dhirubhai.net/posts/hansderaad_2024-09opennovationshsraaaimlarchiving-activity-7244651026346110977-2xD-?utm_source=share&utm_medium=member_android
