Sally's Thoughts: Why humans are needed to conduct quality reviews and why automated systems on their own don’t cut it

We recently had feedback from a client who had used an automated tool to complete 'external validation'. I wanted to highlight some of the issues that arose from this review and our responses to it.

It's important in our industry to get our terminology right (we are in compliance, after all). Time and time again, we see the word 'validation' used incorrectly. As per the Standards, validation is a process that occurs after assessment has been completed and involves determining whether your assessment tools are consistently producing valid assessment judgments. However, in our industry, the terms 'quality review' (or 'quality assurance') and 'validation' are often used interchangeably.

New clients often ask whether we can provide 'validation reports' along with our learning and assessment packs, as some resource providers offer these. We use this as an educative opportunity with our RTO clients to clarify the concepts and the confusion that often surrounds them. A validation report provided by a resource provider is really not worth the paper it is written on: the tools have been quality reviewed in house (or externally), but the process of validation has not occurred. It is simply the resource provider's way of saying, 'We have reviewed our work and believe it to be of a quality standard.' That is a standard process that should be occurring in any industry when a product is supplied to a client, and if it is not, larger issues are at stake.

Now, back to these automated validation or quality review tools. I want to make it clear that these systems look for keywords. They can be useful for picking up missing content, but they may not be accurate if an assessment deliberately avoids unit language (that is, the unit language has been taken, unpacked and rewritten in a way that makes sense to industry and to the future workers in that industry).

My other concern is that these systems cannot consider how well the assessment works from a practical implementation perspective. Has it been contextualised to the student cohort? Are instructions detailed and clear? Do benchmarks give assessors enough guidance to make consistent judgments? Are the benchmarks actually correct? Where simulations are provided, do they allow students to complete the requirements of the unit? Even if every written step includes that detail, does the simulation itself give students the opportunity to demonstrate it all?

If an assessment has been well written and covers the points above but avoids unit language to create a better assessment experience, an automated tool will view it as 'not compliant'. Yet a poorly constructed assessment that hits all of the keywords might pass through as 'compliant'. While I believe in automation, AI and using technology to make processes more efficient, these automated quality review systems need to be used in conjunction with a person who is experienced and qualified to judge whether the system has done what it was intended to do. You wouldn't write a piece of content using ChatGPT and send it out without fact-checking it and rewriting sections to give it a more human tone; likewise, quality review tools should not be relied on alone to make a judgment that an intelligent human should be making.
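To make that failure mode concrete, here is a minimal sketch of the keyword-matching approach described above. This is a hypothetical illustration, not any vendor's actual tool; the unit keywords and task wording are invented for the example.

```python
# Minimal sketch of a keyword-based coverage check (hypothetical, invented example).
# It shows why matching on verbatim unit language can get things backwards:
# a well-written, paraphrased task "fails" while a keyword-stuffed one "passes".

UNIT_KEYWORDS = {"hazard", "risk assessment", "control measure", "consultation"}

def missing_keywords(assessment_text: str, keywords: set[str]) -> set[str]:
    """Return the unit keywords NOT found verbatim in the assessment text."""
    text = assessment_text.lower()
    return {kw for kw in keywords if kw not in text}

# A well-constructed task that unpacks the unit language for industry:
paraphrased = (
    "Walk the warehouse floor, note anything that could hurt someone, "
    "talk it through with the team, and decide how to make it safe."
)

# A poorly constructed task that simply parrots the unit wording:
keyword_stuffed = (
    "Identify a hazard, complete a risk assessment, apply a control "
    "measure and undertake consultation."
)

print(missing_keywords(paraphrased, UNIT_KEYWORDS))      # flags all four as 'missing'
print(missing_keywords(keyword_stuffed, UNIT_KEYWORDS))  # flags nothing
```

The paraphrased task, arguably the better assessment, is flagged as non-compliant, while the keyword-stuffed one sails through. That is exactly the gap an experienced human reviewer has to close.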

Automated tools are the way of the future – as long as an intelligent human is in the driving seat.

Stacey Murray

VET Learning, Quality and Compliance Consultant | Learning and Development Specialist | Assessment Specialist | ITECA | MAICD

1 yr

I'm all for anything that helps to simplify or streamline; however, I feel such tools should always be only part of the picture. Whilst they might map against keywords or other prompts, there has to be consideration of deliverability, practicality and industry context. In short, mapping against a UoC is just one element; a good product needs to take a much broader approach.

Milo Jovanovic

Business Development Maverick (Manager)

1 yr

Shaffy Makkar Shekher Kishore I am wondering if either of you has used automated assessment review tools? If so, how did you find them, if you care to share?

Milo Jovanovic

Business Development Maverick (Manager)

1 yr

Phill Bevan I've just finished reading through your recent posts on insights into ASQA. On the back of Sally's opinion piece above, do you think ASQA would approve the use of automated systems for quality reviews under the new draft Standards Clause 1.3, which discusses testing prior to use?

Cathy Grundy

Director, Publishing at Reubarquin Press

1 yr

Great post Sally! Anything that makes quality review faster, easier and more reliable is, of course, a bonus. But as you point out, the way we apply the tools is the important part. I see automated quality review as a great place to start - certainly useful in quickly highlighting areas for further investigation. Still requires a human to QR the QR though!

Vanessa McCarthy

The creator of Prickly2sweet. The system saving thousands in time and money whilst reviewing assessment.

1 yr

Interesting post Sally Tansley. Our system is an automated assessment review system; however, along with reviewing terms and 'key words' in the content, it also looks at alternatives and provides a report on the dimensions of competency covered, the rules of evidence and sufficiency of methods, over- and under-assessment, plus more. Rectification will always require human judgement. Considering the hours a manual review of assessment quality and compliance takes, and how often previous audit history shows it hasn't worked, innovation is a must in this area. It is working and providing data to make systematic change.
