Authentic identity


If we reach back into history, the notion of provable, trusted identity was limited to people who were well known or could be vouched for by another trusted individual. Over time, as our world became more connected, the notion of identity documents like passports and driver’s licenses was developed, and these documents were made more secure through physical features, standard formatting, and other factors.

However, as the world has become both thoroughly global and digital, with goods and services exchanged across borders and without any in-person interaction, traditional means of confirming authentic identity and understanding what is real and what is fake have become impractical.

Today, automated identity checks have become critical, and often rely on the most fundamental approach to identity: recognising someone by their face. In air travel, Automated Border Clearance has become the norm; in banking, eKYC and ID Verification are critical enablers for digital banks and cryptocurrencies; in physical security, face-as-ID is emerging as a compelling alternative to access cards. In other areas, such as social media and video conferencing, strong identity has not been fully embraced, but the need is apparent; users often accept what they see at face value, without any notion of authentication.

Meanwhile, even as the need for remote, automated identity verification and the use of digital video for media and communications skyrocket, the tools to falsify an identity have become easier to access, and the results have become more compelling.

Seeing is no longer believing. Presented identities can no longer be expected to be authentic identities. The technology to create hyper-realistic synthetic face imagery is now widely available, and in many cases, it is impossible for people to distinguish real from fake. This creates risks for democracy, national security, business, human rights, and personal privacy.

In this paper, we will explore the specific challenges to authentic identity in automated identity verification use cases, as well as applications where we conventionally accept faces as real, and perhaps should no longer do so. We will also dive into what can be done to support authenticity and detect attempts to undermine it.

What is Authentic Identity?

With the relative ease of creating physical reproductions or digital manipulations, matching one face to another with highly accurate face recognition is not enough to prove that a presented identity is authentic. Authentic identity is a collection of technologies, systems, policies, and processes to create trust around identity in both physical and digital domains.

A focus on faces

No doubt, identity can be established, authenticated, or undermined with factors that go well beyond our faces. However, here we will focus on authentic identity specifically as it relates to faces. Our foundational human reliance on faces for identity, the emergence of face recognition as the dominant biometric modality in many applications, and the importance of faces in video for establishing trust in small groups or public communications all demand a special focus.


Identity in the modern world

The implicit questions of “Who are you?” and “Can I trust you?” span a number of distinct domains. These include:

1. Identification and authentication. Remote or in person, the goal of identification and authentication is to confirm that someone is who they say they are, whether for entering a building, accessing a bank account, logging into a web service, or travelling into a country. The use cases are very broad by nature and have historically been addressed by some combination of authentication factors (i.e., something you know, something you have, and something you are), as sketched in the example following this list.

2. Traditional and social media. Historically speaking, identity has been implicitly authentic in media: You see a broadcaster on television, and you believe they are real; you believe that what they are showing or saying is real. However, as traditional media has been augmented or displaced by social media, the means of production and distribution have been decentralised, and misinformation or disinformation has been weaponised, identity presented in media can no longer be implicitly accepted.

3. Communications. Again, the notion of identity has historically been implicit in many aspects of communications where identification and authentication were not explicit requirements (as they are, for instance, when calling a bank). The simultaneous rise of hybrid work and video conferencing due to the COVID-19 pandemic, alongside powerful new AI technologies, argues for a new approach to identity in communications.
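
To make the combination of authentication factors concrete, here is a minimal Python sketch. The inputs (knows_secret, has_token, face_match_score), the threshold, and the two-of-three policy are all illustrative assumptions, not a description of any specific Paravision or HID product.

```python
from dataclasses import dataclass

@dataclass
class FactorEvidence:
    knows_secret: bool       # something you know (e.g., a verified password)
    has_token: bool          # something you have (e.g., an OTP from a registered device)
    face_match_score: float  # something you are (e.g., a face recognition similarity score)

def authenticate(evidence: FactorEvidence,
                 face_threshold: float = 0.90,
                 required_factors: int = 2) -> bool:
    """Accept the identity claim only if enough independent factors pass.

    The threshold and two-of-three policy are illustrative, not a deployment recommendation.
    """
    passed = [
        evidence.knows_secret,
        evidence.has_token,
        evidence.face_match_score >= face_threshold,
    ]
    return sum(passed) >= required_factors

# Example: a remote login with a verified password and a strong face match.
print(authenticate(FactorEvidence(knows_secret=True,
                                  has_token=False,
                                  face_match_score=0.97)))  # True
```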

Work, banking, travel, news, and entertainment all rely on identity, and so a strategy for authentic identity should be considered in order to deliver trusted results.

Challenges to trust

In order to properly understand the challenges to establishing trust in presented identities, we must consider threats in both the physical world and the digital world.

Physical world: Presentation Attacks

Broadly speaking, challenges to biometric identity in the physical world are referred to as Presentation Attacks (also known as ‘Spoofs’). These direct attacks can subvert a biometric system by using tools called presentation attack instruments (PAIs). Examples of such instruments include photographs, masks, fake silicone fingerprints, or video replays.

Presentation attacks pose serious challenges across all major real-time biometric modalities (such as face, fingerprint, hand vein, and iris). As noted above, we will focus on face recognition-based presentation attacks.

ISO 30107-3[1] defines PAIs as needing to fulfil three requirements: They must appear genuine to any Presentation Attack Detection mechanisms, appear genuine to any biometric data quality checks, and contain extractable features that match the targeted individual.
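
Viewed from the defender’s side, those three requirements map onto three gates a verification pipeline can apply before accepting a presented face. The following is a minimal Python sketch that assumes the PAD, quality, and match scores are supplied by upstream engines; the thresholds and return messages are hypothetical, not a real SDK.

```python
# Illustrative thresholds; real deployments tune these per use case and risk level.
PAD_THRESHOLD = 0.5      # below this, the presentation is treated as an attack
QUALITY_THRESHOLD = 0.6  # minimum acceptable biometric data quality
MATCH_THRESHOLD = 0.9    # minimum similarity to the enrolled reference

def verify_presentation(pad_score: float,
                        quality_score: float,
                        match_score: float) -> str:
    """Accept a presented face only if it clears PAD, quality, and match checks.

    The scores would come from a PAD engine, an image-quality model, and a face
    recognition matcher respectively; here they are simply passed in.
    """
    if pad_score < PAD_THRESHOLD:          # gate 1: presentation attack detection
        return "reject: suspected presentation attack"
    if quality_score < QUALITY_THRESHOLD:  # gate 2: biometric data quality
        return "reject: insufficient biometric data quality"
    if match_score < MATCH_THRESHOLD:      # gate 3: identity match
        return "reject: face does not match the enrolled identity"
    return "accept"

# A live, high-quality, well-matched presentation passes all three gates.
print(verify_presentation(pad_score=0.92, quality_score=0.88, match_score=0.95))
```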

In practical applications, it is useful to establish a hierarchy in the sophistication and complexity of presentation attacks, which is beyond the scope of ISO 30107-3.

Notably, iBeta (https://www.ibeta.com/biometric-testing/) and the FIDO Alliance (https://fidoalliance.org/) have established a three-level presentation attack sophistication hierarchy.


An image of a non-existent person from https://thispersondoesnotexist.com/

Digital world: Deepfakes and beyond

The term ‘deepfake’ has become a popular way to describe any digital face manipulation, and the exact definition of what constitutes a deepfake may be argued. Broadly speaking, as defined in the Springer Handbook of Digital Face Manipulation and Detection (2022)[2], there are six main categories of digital face manipulation relevant to this discussion:

1. Identity swap.

2. Expression swap.

3. Audio- and text-to-video.

4. Entire face synthesis.

5. Face morphing (merging two faces into a single image).

6. Attribute manipulation (synthetically adding features such as eyeglasses, headwear, hair, or otherwise to source images).

We would also add a 7th category:

7. Adversarial template encoding (invisible integration of template information from one face into the image of another face; this is related to, but separate from, face morphing).

Each of these can undermine trust in a presented identity, and we are already beginning to see them play out in public. Perhaps the most broadly known digital face manipulation, DeepTomCruise, set the standard for identity swaps, adding actor Tom Cruise’s face to videos of another person closely resembling him in a way that is largely indistinguishable from reality[3].

In March 2022, a faked video of Ukrainian president Volodymyr Zelenskyy appearing to tell his soldiers to lay down arms and surrender to Russia was widely distributed. It was quickly debunked, but set the stage for more sophisticated political deepfakes[4].

Social media is not immune. In March 2022, it was reported that thousands of profiles on LinkedIn were fraudulently created using synthetic faces of the type found (for instance) on https://thispersondoesnotexist.com[5]. In August 2022, the chief communications officer of Binance (the world’s largest crypto exchange) reported that hackers had used a deepfake of him to fool investors in live Zoom meetings[6]. His account has not been independently verified, but the case reinforces the insidious nature of misinformation: It becomes increasingly difficult to distinguish reality from fiction.

In addition to these digital face manipulations, the digital world is also prone to cyberattacks. Specifically, the risk of injection or replay attacks is very real. In this case, data collected from an originally authentic user is replayed at the data level (as opposed to in the physical space or digital image space). Here, it is critical to ensure the provenance of data and that the data being communicated is real, live, and non-replicable.
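
A common mitigation for replay at the data level is to bind each capture to a fresh, server-issued challenge so that previously recorded data cannot be resubmitted. The sketch below is a simplified illustration using only the Python standard library; the nonce-plus-HMAC scheme, the freshness window, and the key handling are assumptions, not any vendor’s actual protocol.

```python
import hashlib
import hmac
import secrets
import time

DEVICE_KEY = secrets.token_bytes(32)  # shared secret provisioned to a trusted capture device
MAX_AGE_SECONDS = 30                  # illustrative freshness window

def issue_challenge() -> bytes:
    """Server side: issue a single-use nonce for the next capture."""
    return secrets.token_bytes(16)

def sign_capture(nonce: bytes, frame_bytes: bytes) -> tuple[bytes, float]:
    """Device side: bind the captured frame to the nonce and a timestamp."""
    timestamp = time.time()
    message = nonce + str(timestamp).encode() + frame_bytes
    tag = hmac.new(DEVICE_KEY, message, hashlib.sha256).digest()
    return tag, timestamp

def verify_capture(nonce: bytes, frame_bytes: bytes, tag: bytes, timestamp: float,
                   used_nonces: set) -> bool:
    """Server side: accept only fresh, untampered, non-replayed captures."""
    if nonce in used_nonces or time.time() - timestamp > MAX_AGE_SECONDS:
        return False
    message = nonce + str(timestamp).encode() + frame_bytes
    expected = hmac.new(DEVICE_KEY, message, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        return False
    used_nonces.add(nonce)  # burn the nonce so the same capture cannot be resubmitted
    return True

used = set()
nonce = issue_challenge()
tag, ts = sign_capture(nonce, b"...raw frame bytes...")
print(verify_capture(nonce, b"...raw frame bytes...", tag, ts, used))  # True
print(verify_capture(nonce, b"...raw frame bytes...", tag, ts, used))  # False (replay)
```

In a real deployment, the device key would be protected in hardware and the signed payload would typically also cover capture metadata.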

Ensuring authentic identity

At this point, the challenges posed to authentic identity may seem overwhelming in both the physical and digital space. Let us now look at the opportunities for attack detection and prevention.

Presentation Attack Detection

In the physical world, there is a wide range of available technologies for Presentation Attack Detection (PAD), using a combination of advanced AI detection methods as well as multi-spectral imaging, depth-sensing, and other software- and sensor-level technologies. As noted above, ISO 30107 codifies PAD, and global test labs offer technology certification. NIST FRVT is now planning a new testing track on PAD as well, which will help foster transparency and stimulate continued technology development. For more information on PAD, please also see Paravision’s white paper, An Introduction to Presentation Attack Detection (available at https://www.paravision.ai/whitepaper-an-introduction-to-presentation-attack-detection/, or via the short link: www.securitysa.com/*paravision1).
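
As a simple illustration of how software- and sensor-level signals might be combined, the sketch below fuses hypothetical per-channel liveness scores (RGB texture, depth, and multi-spectral response) into one decision; the channel names, weights, and threshold are placeholders, not values from any certified PAD product.

```python
# Hypothetical per-channel liveness scores in [0, 1]; higher means more likely live.
def fuse_pad_scores(scores: dict[str, float]) -> float:
    """Weighted fusion of independent PAD signals into a single liveness score."""
    weights = {"rgb_texture": 0.4, "depth": 0.35, "multispectral": 0.25}  # illustrative
    return sum(weights[name] * scores.get(name, 0.0) for name in weights)

def is_live(scores: dict[str, float], threshold: float = 0.7) -> bool:
    return fuse_pad_scores(scores) >= threshold

# A printed photo might fool RGB texture analysis but not depth or multi-spectral checks.
print(is_live({"rgb_texture": 0.8, "depth": 0.1, "multispectral": 0.2}))   # False
print(is_live({"rgb_texture": 0.9, "depth": 0.85, "multispectral": 0.8}))  # True
```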

Digital Face Manipulation (‘Deepfake’) Detection

Digital face manipulation is a much newer threat to authentic identity, and while PAD largely concerns identification and authentication applications, digital face manipulations such as deepfakes affect a much wider range of use cases, including traditional and social media, video communications, and any place where people’s faces are presented through digital channels.

With this in mind, we make a few broad assertions about deepfake detection. AI-based detection technologies will play a critical role in helping to assert authentic identity. Deepfakes and synthetic face generators are already more advanced than most people’s ability to discern them from reality.

Automated analysis alone will not be sufficient to protect the public from the harms of fraudulently presented identities. Human-in-the-loop analysis, dissemination of automated results, and public discussion (to stimulate awareness of generic and specific threats) will be critical complements to automated detection technology.
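
One practical way to pair automated detection with human review is confidence-based triage: confident detections are flagged automatically, confident negatives pass, and everything in between is queued for an analyst. The Python sketch below illustrates that routing; the score scale, thresholds, and action labels are hypothetical.

```python
def triage(manipulation_score: float,
           flag_threshold: float = 0.9,
           clear_threshold: float = 0.2) -> str:
    """Route a deepfake-detector score to an action.

    manipulation_score is assumed to be in [0, 1], with higher meaning more likely
    manipulated; the two thresholds are illustrative, not recommended values.
    """
    if manipulation_score >= flag_threshold:
        return "flag: likely manipulated, notify downstream systems"
    if manipulation_score <= clear_threshold:
        return "pass: no manipulation detected"
    return "review: send to a human analyst with the model's evidence"

for score in (0.97, 0.55, 0.05):
    print(score, "->", triage(score))
```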

Cryptographic and related approaches that help ensure the provenance of data will play an important role in supporting authentic sources. Broad industry consortia, such as the C2PA (https://c2pa.org/) and the Content Authenticity Initiative (https://contentauthenticity.org/), have already been formed to begin addressing this issue.
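
At their core, such provenance schemes attach a cryptographic signature (plus metadata about origin and edits) to a piece of media so that a consumer can verify it has not been altered since it was signed. The sketch below shows only the underlying primitive, an Ed25519 signature over the media bytes using the third-party Python cryptography library; it is a conceptual illustration, not the C2PA manifest format itself.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Publisher side: sign the media bytes at capture or publication time.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()
media_bytes = b"...encoded video or image bytes..."
signature = private_key.sign(media_bytes)

# Consumer side: verify the bytes against the publisher's public key.
def is_authentic(data: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, data)
        return True
    except InvalidSignature:
        return False

print(is_authentic(media_bytes, signature))                # True: unchanged since signing
print(is_authentic(media_bytes + b"tampered", signature))  # False: content was altered
```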

Nevertheless, there will be a constant ‘hill-climbing’ issue as is often seen in cybersecurity. New attack vectors can be expected to constantly emerge along with new detection and protection techniques.

Paravision’s approach to Authentic Identity

At Paravision, we look at authentic identity holistically: authentication of real identity and detection of fraudulent identity, in both the physical and digital spaces. We have products available that perform advanced Presentation Attack Detection, and in conjunction with trusted government partners, we are actively developing products to detect the wide range of digital face manipulations, including, but not limited to, deepfakes. There may be nuanced differences between physical and digital presentation attacks, and so our philosophy is to provide tools to detect attacks and ensure provenance across all domains.

Faces have always been the first line of determining identity, and with recent advances in AI, face recognition has emerged as a very capable tool for biometric matching. Combining best-in-class face recognition technology with Presentation Attack Detection, deepfake detection, and related technology can help to ensure authenticity in cases where automated authentication is key. Meanwhile, in applications where automated face recognition may not be necessary, these detection technologies can be used to ensure trusted communications and news sources and the protection of privacy and human rights.

Our goal is to provide a trust layer in the physical and digital worlds, to power authentic identity, and to protect against malicious actors, fundamentally supported by an understanding of truth and reality in presented identity.

Find out more at https://paravision.ai/HID

This paper has been shortened; the full version is available at https://www.hidglobal.com/sites/default/files/documentlibrary/Authentic_Identity_WhitePaper_Paravision_HID.pdf (or use the short link: www.securitysa.com/*hid7).

Resources

[1] https://www.iso.org/standard/67381.html

[2] https://link.springer.com/book/10.1007/978-3-030-87664-7

[3] https://www.youtube.com/watch?v=nwOywe7xLhs

[4] https://www.npr.org/2022/03/16/1087062648/deepfake-video-zelenskyy-experts-war-manipulation-ukraine-russia

[5] https://www.npr.org/2022/03/27/1088140809/fake-linkedin-profiles

[6] https://www.theverge.com/2022/8/23/23318053/binance-comms-crypto-chief-deepfake-scam-claim-patrick-hillmann

