We need to talk about liveness

There's been a lot of fuss in the identity world about so-called "liveness detection". In fact, there's even confusion over the correct terminology; the National Institute of Standards and Technology (NIST) categorises liveness detection as a subset of Presentation Attack Detection (PAD) and defines it as follows:

"The measurement and analysis of anatomical characteristics or involuntary or voluntary reactions, in order to determine if a biometric sample is being captured from a living subject present at the point of capture."

Applied to the use case of remote onboarding, it addresses a simple question: is the selfie being captured from a live person who is physically present, or is someone using a printed picture, mask, or other "spoofing" technique to try to fool the system?

Early PAD "Active Liveness" technologies focused on instructing users to carry out voluntary reactions to prompts, such as tilting their head a certain way or following a randomly moving object with their eyes (this was the method used in early iterations of Innovatrics Digital Onboarding Toolkit). These methods, while effective to a point, can be outsmarted with a little effort and ingenuity.

More recently, technology vendors have come up with more advanced methods of detecting presentation attacks. These new methods fall under the category of "Passive Liveness", as they do not require the user to carry out any actions in order to allow the algorithm to calculate their liveness score. This is where things start to get interesting.

While Active Liveness technology is easy to understand, Passive Liveness is more of a mystery, because it's difficult to explain what's actually happening in the background. Difficult as it may be, I'll try.

Passive Liveness algorithms are neural networks trained using machine learning techniques on very large datasets containing many variations of spoof vectors (images of presentation attacks), such as printed masks, 3D masks, and images replayed from mobile and PC screens. The training dataset also contains genuine selfie images, and each image is labelled either "genuine" or "fake". The neural network then runs through multiple rounds of training, tweaking, and tuning until it is able to detect presentation attacks with a high degree of accuracy using just one frame as a reference image.
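To make that concrete, here is a toy sketch of the training idea: a tiny neural network learns to separate "genuine" from "fake" samples labelled exactly as described above. This is an illustration only, using synthetic feature vectors in place of real images and a minimal hand-rolled network; it is not Innovatrics' actual algorithm or architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a PAD training set: each "image" is a flattened
# feature vector, labelled 1 (genuine selfie) or 0 (presentation attack).
n, d = 200, 16
X_genuine = rng.normal(loc=0.5, scale=1.0, size=(n, d))
X_spoof = rng.normal(loc=-0.5, scale=1.0, size=(n, d))
X = np.vstack([X_genuine, X_spoof])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Minimal one-hidden-layer network trained with gradient descent
# on binary cross-entropy loss.
W1 = rng.normal(scale=0.1, size=(d, 8))
W2 = rng.normal(scale=0.1, size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for _ in range(500):
    h = np.tanh(X @ W1)                       # hidden layer
    p = sigmoid(h @ W2).ravel()               # liveness score in [0, 1]
    grad_logits = (p - y)[:, None] / len(y)   # gradient of BCE w.r.t. logits
    W2 -= lr * (h.T @ grad_logits)
    W1 -= lr * (X.T @ ((grad_logits @ W2.T) * (1 - h**2)))

# After training, a single frame yields a single score: high = genuine.
scores = sigmoid(np.tanh(X @ W1) @ W2).ravel()
accuracy = ((scores > 0.5) == y).mean()
```

The key point the sketch captures is that the deployed model needs no user interaction at all: one frame in, one liveness score out, with all the "work" having happened offline during training.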

So - Active Liveness = follow the moving object; Passive Liveness = neural networks. What does that mean for real-world applications? Quite a lot, as it turns out.

One of our early-adopter customers initiated their project using Active Liveness, then took the decision to upgrade to Passive Liveness in 2020. The result?

Active Liveness: 63% of customers successfully completed this step, taking an average of 13 seconds

Passive Liveness: 99.9% of customers successfully completed this step, taking an average of 1 second

What's more, when we ran the Passive Liveness algorithm over their existing database (of over 30 million onboarding images), we found that approximately 1% of all previous onboardings had been completed with the help of a presentation attack. Armed with this information, our customer was able to close down these accounts and protect their business from potential fraud.

By introducing Passive Liveness, we simultaneously improved the user onboarding experience and increased the overall security of the application.

Another strong argument supporting Passive Liveness is the fact that there is an ISO standard (30107-3), which sets out principles and methods for performance assessment of presentation attack detection mechanisms. A testing lab named iBeta, based in Denver, Colorado, was the first to carry out testing according to this ISO standard at two levels: iBeta Level 1 PAD and iBeta Level 2 PAD.

In 2020, Innovatrics achieved Level 1 PAD accreditation, and we are currently preparing our submission for Level 2 PAD. We also keep an open mind about new testing bodies that may emerge. Biometric benchmarks are useful to a point; however, I always recommend that organisations test biometric technology on their own data against two main criteria: speed and accuracy. If it's very fast but not accurate, you can't use it. If it's very accurate but not fast, you shouldn't use it either.
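A test of this kind can be as simple as running the candidate algorithm over your own labelled samples and recording both verdicts and timings. The harness below is a generic sketch of that idea; `detect` and `samples` are placeholders for a vendor's liveness call and an organisation's own test data, not any specific SDK.

```python
import time

def benchmark(detect, samples):
    """Measure a liveness check against the two criteria: accuracy and speed.

    `detect` is any callable returning True for "live"; `samples` is a
    sequence of (input, is_live) pairs from your own labelled data.
    Returns (accuracy, mean latency in seconds).
    """
    correct, total_time = 0, 0.0
    for x, is_live in samples:
        t0 = time.perf_counter()
        verdict = detect(x)
        total_time += time.perf_counter() - t0
        correct += (verdict == is_live)
    return correct / len(samples), total_time / len(samples)
```

Running this against a representative mix of genuine users and known spoof attempts gives exactly the two numbers the argument above turns on: if either accuracy or mean latency misses your threshold, the technology fails the test.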

And most importantly, if vendors claim to be "iBeta Certified", "3D Certified", or "Level 4/5 Certified", do a little more research. No such independent certifications exist.
