DataBreaches.Net


Deepfakes Expose Cracks in Virtual ID Verification

Posted on January 27, 2021 by Dissent

One of the things I have come to understand from reading research reports from GeminiAdvisory.io is that criminals are quite nimble and creative as conditions change, the market shifts, or new security protocols are adopted. So now that financial institutions, cryptocurrency exchanges, and businesses are deploying more sophisticated techniques to verify identity virtually, how are criminals responding? When it comes to facial identification, Gemini analysts have noted an increasing number of dark web forum posts about face-change technology applied to selfies or videos. The resulting images are known as “deepfakes.”

The verification technology that criminals are trying to defeat is still being developed and refined, but it is reportedly considered fairly convenient and secure. Gemini reports that many companies now

require users to upload an official ID, a selfie, or a specifically constructed selfie based on instructions such as holding up fingers or holding a note. Some companies have gone as far as requiring a live video feed in which the user must perform specific gestures and movements.

With the increasing use of video and selfie images, there has been a corresponding increase in the number of firms offering technology that claims to accurately match or verify identities, and some firms and cryptocurrency exchanges have their own verification systems in place. But while those companies work to strengthen verification, criminals are working to defeat it. Gemini reports that threat actors have shifted to using software such as DeepFaceLab and Avatarify.
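To make concrete what an automated identity check is trying to do, here is a minimal, illustrative sketch of a selfie-to-ID face match using the open-source face_recognition library. The file names and the 0.6 distance threshold are assumptions for illustration, not any specific vendor's implementation; production systems layer liveness checks (gestures, live video) on top of this kind of static comparison, which is exactly what deepfake selfies and videos aim to fool.

    # Illustrative sketch only: a basic selfie-vs-ID face match of the kind
    # that verification vendors automate. File names and the threshold are
    # assumptions, not any particular vendor's settings.
    import face_recognition

    id_image = face_recognition.load_image_file("government_id.jpg")   # hypothetical ID photo
    selfie_image = face_recognition.load_image_file("selfie.jpg")      # hypothetical selfie upload

    id_encodings = face_recognition.face_encodings(id_image)
    selfie_encodings = face_recognition.face_encodings(selfie_image)

    if not id_encodings or not selfie_encodings:
        raise ValueError("No face detected in one of the images")

    # Lower distance means a closer match; 0.6 is the library's conventional default cutoff.
    distance = face_recognition.face_distance([id_encodings[0]], selfie_encodings[0])[0]
    print(f"Face distance: {distance:.3f} -> {'match' if distance < 0.6 else 'no match'}")
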

These tools leverage advancements in machine learning, neural networks, and artificial intelligence (AI) to create “deepfake” counterfeits. Deepfakes are images or videos in which the content has been manipulated so that an individual’s appearance or voice looks or sounds like that of someone else. At the moment, widely available deepfake detection technology lags behind deepfake creation technology; counterfeits can only be detected after careful analysis using specialized AI, which reportedly has a 65% detection rate.
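For readers curious what “careful analysis using specialized AI” can look like in practice, below is a hedged sketch of frame-level scoring: sample frames from a submitted video, run each through a binary real-vs-fake image classifier, and average the scores. The fine-tuned weights file, the sampling rate, and the ResNet-18 backbone are illustrative assumptions; this is not the detector Gemini describes, and real detection systems are considerably more sophisticated.

    # Illustrative sketch of frame-level deepfake scoring. The weights file,
    # sampling rate, and backbone are assumptions for illustration only.
    import cv2
    import torch
    from torchvision import models, transforms

    preprocess = transforms.Compose([
        transforms.ToPILImage(),
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])

    model = models.resnet18(weights=None)
    model.fc = torch.nn.Linear(model.fc.in_features, 2)          # classes: [real, fake]
    model.load_state_dict(torch.load("deepfake_detector.pt"))    # hypothetical fine-tuned weights
    model.eval()

    def fake_probability(video_path: str, every_nth: int = 30) -> float:
        """Average the per-frame probability that sampled frames are synthetic."""
        capture = cv2.VideoCapture(video_path)
        scores, index = [], 0
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            if index % every_nth == 0:
                rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)     # OpenCV decodes frames as BGR
                batch = preprocess(rgb).unsqueeze(0)
                with torch.no_grad():
                    probs = torch.softmax(model(batch), dim=1)
                scores.append(probs[0, 1].item())                # probability of the "fake" class
            index += 1
        capture.release()
        return sum(scores) / len(scores) if scores else 0.0

    print("Estimated fake probability:", fake_probability("kyc_video.mp4"))
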

Video: Demonstration of deepfake technology and its implications for malicious use (via NOVA PBS Official; https://www.youtube.com/watch?v=T76bK2t2r8g).

Read more on Gemini Advisory for details on some of the verification services and software that are currently available.


Category: Commentaries and Analyses, Financial Sector
