Biometric identity was once the future. But what happens when these systems aren’t accurate or secure?
Biometrics is a powerful technological advancement in the identification and security space. But with that power comes a deep need for accountability and close ethical scrutiny. After all, journalists have recently found that in places like Detroit, facial recognition systems were used primarily on Black populations, even though those systems were known to be inaccurate a shocking 96% of the time. We need to be cautious and judicious in how we use biometrics and in what we claim about their accuracy and security.
Despite these concerns, the use of our bodies – something you “are,” in the common parlance of the security industry – continues to grow in popularity. For example, the e-passport, introduced in the UK in 2006, has emerged as the most prominent use of biometrics to validate identity. Today, most passports issued in the UK and other EU countries are e-passports, with a biometric chip embedded in the document.
Beyond border control, financial services have become another central focus for biometrics, with payments increasingly relying on biometric authentication.
It’s worth noting that this surge is not driven by technology-company promotion alone. Consumers trust biometrics (perhaps more than they should): 93% of respondents prefer biometrics to passwords, according to a recent study sponsored by MasterCard and Oxford University. The consumer need being met is clear: many consumers have 90+ online accounts, all of which should have separate passwords, and remembering 90+ passwords just doesn’t scale. In fact, moving away from passwords is imperative – up to one-third of online transactions are abandoned because of forgotten passwords.
Today, banking systems are beginning to use voice and face authentication at scale in order to identify customers and facilitate transactional access to accounts. The Covid-19 crisis has accelerated this use of biometrics, as many businesses deploy robust security for large numbers of remote workers.
The most important focus for biometrics is, of course, secure authentication. However, in tandem with this level of (perceived) security, businesses also want to preserve a relatively frictionless user experience. The two needs are often at odds. Today’s multi-factor authentication processes, which include an off-device loop of one-time security codes, add a great deal of friction and slow transactions down in often-debilitating ways. In contrast to this multi-step process, banks are beginning to authenticate via voice itself. For example, HSBC is now using voice ID for telephone banking customers, and claims its system has prevented £400 million of telephone fraud.
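To see why voice authentication removes friction, it helps to sketch the underlying idea: the caller’s voice is reduced to a numeric embedding and compared against the template captured at enrolment, so verification happens in one step rather than via an off-device code loop. The sketch below is illustrative only; the function names, embedding values and threshold are invented, and real systems use far larger embeddings produced by trained models.

```python
import math

# Hypothetical sketch: a voiceprint is reduced to a fixed-length
# embedding vector; verification is a similarity check against the
# enrolled template, replacing the multi-step OTP loop.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify_speaker(enrolled, sample, threshold=0.85):
    """Accept the caller if the live voice embedding is close enough
    to the template captured at enrolment."""
    return cosine_similarity(enrolled, sample) >= threshold

enrolled = [0.12, 0.80, 0.35, 0.41]
live_sample = [0.10, 0.78, 0.37, 0.43]
print(verify_speaker(enrolled, live_sample))  # similar voice -> True
```

The threshold is the security/convenience trade-off in miniature: raise it and fraudsters are rejected more reliably, but so are legitimate callers with a cold or a bad phone line.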
For the past decade, the physical characteristics of our bodies have been the core focus of biometric identification, but they are now being joined by behavioral tracking as an additional layer of authentication.
“We are now starting to see some interesting use cases for biometrics, for example, camera software that can recognize you by your walking gait,” says Keiron Shepherd, principal security systems engineer at F5.
“In the future, imagine a scenario where gait analysis is carried out as you walk up to a building by the security camera, and then your RFID security pass lets you through the door, but only if the ID on the security pass matches the walking gait.”
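Shepherd’s door-entry scenario can be sketched as a simple cross-check: the camera’s gait classifier proposes an identity, and the door opens only when that identity matches the one enrolled against the presented RFID pass. Everything here is invented for illustration, including the pass IDs and profile labels; a real deployment would compare gait embeddings probabilistically rather than exact labels.

```python
# Assumed enrolment database: RFID pass ID -> enrolled gait profile.
ENROLLED = {
    "PASS-1042": "gait-profile-alice",
    "PASS-2077": "gait-profile-bob",
}

def door_unlocks(pass_id, observed_gait_profile):
    """Grant entry only when two independent factors agree: the pass
    presented at the door and the gait observed by the camera."""
    enrolled_profile = ENROLLED.get(pass_id)
    return enrolled_profile is not None and enrolled_profile == observed_gait_profile

print(door_unlocks("PASS-1042", "gait-profile-alice"))  # True
print(door_unlocks("PASS-1042", "gait-profile-bob"))    # stolen pass -> False
```

The point of the design is that a stolen pass alone is useless: the thief would also have to walk like its owner.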
It’s now possible to assess how fast you type on your phone, or how you usually access online services, including your banking and payment activity. Tracking this behavior could deliver a new and more accurate set of biometric data.
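Typing speed is the simplest of these behavioral signals to illustrate. A minimal sketch, with invented timestamps and thresholds: raw key-press times are reduced to inter-key intervals, and the live rhythm is compared against the user’s enrolled baseline.

```python
from statistics import mean

# Illustrative keystroke-dynamics sketch (all names and numbers are
# invented): compare the live typing rhythm to an enrolled baseline.

def inter_key_intervals(timestamps_ms):
    """Gaps between successive key presses, in milliseconds."""
    return [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]

def matches_baseline(live_timestamps, baseline_mean, baseline_sd, tolerance=2.0):
    """Accept if the live mean interval is within `tolerance` standard
    deviations of the user's enrolled typing rhythm."""
    live_mean = mean(inter_key_intervals(live_timestamps))
    return abs(live_mean - baseline_mean) <= tolerance * baseline_sd

# Enrolled user types with roughly 180 ms between keys (sd 25 ms).
print(matches_baseline([0, 175, 360, 545, 730], 180, 25))  # True
```

Real systems use many more features (hold times, digraph latencies, error-correction patterns), but the principle is the same: the behavior itself becomes the credential.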
However, would consumers give their consent for this information to be used to create a detailed profile of their behavior? The Safe Face Pledge attempts to provide guidelines here, as do new privacy guidelines from institutions like the Biometrics Institute.
And guidelines like those from the Safe Face Pledge are all that exist: no universal, comprehensive, interoperable ID system has yet been created, with or without a biometric component. So far, businesses and organizations have developed their own bespoke platforms based on their authentication needs. Clearly, artificial intelligence (AI) and the pattern recognition it enables will shape the future of biometrics. Yet how fast should we move? And what are the consequences of moving too fast?
The Viability of Biometric Security
As the mobile phone is now ubiquitous, it has become a focus for biometric security applications. As we’re increasingly encouraged to make contactless payments where possible in the wake of COVID-19, smartphones look likely to become the center of identification and authentication for many people making everyday transactions. Also, over 20 banks across the world are testing contactless payment cards that add biometric security on top of the established VISA and MasterCard networks. Ensuring these systems can’t be compromised is a prerequisite for wider adoption of biometric security.
Since biometric technologies became viable, the ways in which these systems could be beaten or undermined have filled news headlines. The 3D printing of fingerprints or the duplication of faces, à la the Mission: Impossible films, is often pointed to. While it’s true the issue of false positives has dogged the industry for some time, the biometric systems being deployed today are very difficult to compromise.
It’s not just in-person payments where biometric authentication is becoming more important. With the massive rise in smart speaker ownership – 1-in-5 UK households own one of these devices according to Strategy Analytics – voice has become another focus for biometric security. As more of us shop at home, ordering and paying via voice commands is increasingly popular as it’s frictionless – you can literally order and authenticate yourself in one breath – and is also seeing a big push from banks and retailers.
But there’s a dark cloud on the horizon. The rise of deepfake images and now deepfake synthesised voices is a real and present danger. Last year, an energy company was tricked into making a substantial payment via a phone conversation which was later revealed to be a faked voice of the company’s chief executive. There is still plenty of work to be done to remove this level of potential fraud.
Last year, I worked very closely with identity verification provider Mitek. Joe Bloemendaal, head of strategy at Mitek, says: “The rise of deepfake technologies is concerning for biometric security. Bad actors are taking advantage of more sophisticated AI and Big Data to defraud the public. The good news is that the ability to identify deepfakes will only improve with time, as researchers are experimenting with AI to spot even the deepest of fakes using facial recognition and behavioural biometrics.”
The Biometric Future
The coronavirus pandemic threw into sharp relief the many weaknesses in the security systems that protect the most sensitive aspects of our lives. Proving who we are is a complex problem, especially when there are no card-present transactions at all. How do you consistently prove that a person is who they say they are when everyone is entirely virtual?
The onboarding process, where a new customer’s identity is verified, will continue to accelerate the use of digital systems – many of which will have a biometric component. Early adopters in Africa, for instance, have shown how these systems can work. In the West, the biometric market has become fragmented, with many competing and mutually incompatible systems.
A future saturated in biometric identification, as portrayed in films like Minority Report, where retinas are scanned thousands of times a day, may not be here yet, but mass face recognition is in active development. A major issue with these systems is the racial bias that can be inherent in their applications.
Research from the National Institute of Standards and Technology that analyzed 189 algorithms showed higher inaccuracy rates for African Americans and Asians than for Caucasians. For a pertinent recent example, see the earlier note regarding Detroit’s police-operated facial recognition systems. When you add in the bias evident in many AI systems used to analyze these facial images, the reliability of accurate identification is called into question.
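What an audit like NIST’s measures, at its core, is error rates broken down by demographic group rather than one headline accuracy figure. A toy sketch with entirely invented data shows why the distinction matters: an algorithm can look acceptable overall while failing one group far more often than another.

```python
# Illustrative only: invented match results labeled by demographic group.
# Each tuple is (group, whether the algorithm identified the face correctly).
results = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

def error_rate_by_group(results):
    """Per-group error rate: fraction of incorrect identifications."""
    totals, errors = {}, {}
    for group, correct in results:
        totals[group] = totals.get(group, 0) + 1
        if not correct:
            errors[group] = errors.get(group, 0) + 1
    return {g: errors.get(g, 0) / totals[g] for g in totals}

print({g: round(r, 2) for g, r in error_rate_by_group(results).items()})
# {'group_a': 0.33, 'group_b': 0.67}
```

Here the overall error rate is 50%, but one group is misidentified twice as often as the other, which is exactly the disparity the NIST findings describe.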
“It’s important to note that not all approaches to implementing biometrics are the same,” explains Andrew Shikiar, executive director of the FIDO Alliance. “The critical differentiator is how and where this most sensitive form of data is stored.”
“Breaches such as the one against the Biostar 2 platform last summer have demonstrated the risks associated with mismanagement of user biometrics,” Shikiar says. “While it’s certainly inconvenient and damaging to have one’s password stolen, the impact of a stolen biometric is far worse as they inherently cannot be changed. While every organisation wants to optimise security and convenience, this should never be done at the cost of taking on added liability and risk to one’s brand and reputation.”
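The on-device model Shikiar alludes to can be sketched as follows: the biometric template never leaves the device, where a local match merely unlocks a signing key, and the server verifies a signed challenge without ever holding the biometric. This is a deliberately simplified illustration; real FIDO authenticators use asymmetric key pairs and attestation, while a shared-secret HMAC is used here purely to keep the sketch self-contained.

```python
import hashlib
import hmac
import os

class Device:
    """Simplified authenticator: the biometric template stays local."""

    def __init__(self, enrolled_template, key):
        self._template = enrolled_template  # never transmitted
        self._key = key

    def sign_challenge(self, live_scan, challenge):
        # The local biometric match gates access to the signing key.
        if live_scan != self._template:
            return None
        return hmac.new(self._key, challenge, hashlib.sha256).digest()

key = os.urandom(32)
device = Device(enrolled_template=b"alice-fingerprint", key=key)
challenge = os.urandom(16)  # issued by the server per login attempt

signature = device.sign_challenge(b"alice-fingerprint", challenge)
# The server verifies the signature without ever seeing the biometric.
print(hmac.compare_digest(signature, hmac.new(key, challenge, hashlib.sha256).digest()))  # True
print(device.sign_challenge(b"mallory-fingerprint", challenge))  # wrong finger -> None
```

The design addresses Shikiar’s point directly: a breach of the server yields no biometric data at all, because the server never stored any.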
To further guide the development and implementation of biometric technologies, the Biometrics Institute has developed its Ethical Principles for Biometrics. This kind of guidance is vital to ensure the technologies in development are applied safely and without discrimination.
Using passwords and passcodes is still mainstream, but the rapid expansion of AI is driving the development of advanced biometric systems. Consumers and businesses alike can see the benefits, yet remain concerned about the collection of yet more data points to further personalize their digital profiles. Convenience is likely to win out as we move into a post-COVID-19 environment where contactless payments and access to digital services can be achieved with our phones using fingerprint, voice and face biometrics.