Deepfakes Expose Cracks in Virtual ID Verification

Posted on January 27, 2021 by Dissent

One of the things I have come to understand from reading research reports from GeminiAdvisory.io is that criminals are quite nimble and creative as conditions change, markets shift, or new security protocols are adopted. So now that financial institutions, cryptocurrency exchanges, and other businesses deploy more sophisticated techniques to verify identity virtually, how are criminals responding? When it comes to facial identification, Gemini analysts have noted a growing number of posts on dark web forums about face-swapping technology that works from selfies or videos. The resulting images are known as “deepfakes.”

The verification technology they are trying to defeat is still being developed and refined, but it is reportedly considered fairly convenient and secure. Gemini reports that many companies now

require users to upload an official ID, a selfie, or a specifically constructed selfie based on instructions such as holding up fingers or holding a note. Some companies have gone as far as requiring a live video feed in which the user must perform specific gestures and movements.
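
As a rough illustration of the kind of layered check described above, here is a minimal Python sketch. Every function name, data field, and threshold in it is a hypothetical placeholder chosen for illustration; it is not drawn from the Gemini report or from any particular vendor's API.

```python
# Hypothetical sketch of a layered virtual ID check: document review,
# ID-photo/selfie face match, then a gesture-based liveness step.
# All names, fields, and thresholds are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class Submission:
    id_document: bytes      # photo of the official ID
    selfie: bytes           # plain or "constructed" selfie (e.g., holding a note)
    liveness_video: bytes   # clip of the user performing requested gestures

def document_looks_genuine(doc: bytes) -> bool:
    return True   # placeholder: template, hologram, and checksum checks would go here

def face_match_score(doc: bytes, selfie: bytes) -> float:
    return 0.0    # placeholder: compare face embeddings from the ID photo and the selfie

def passes_liveness_check(video: bytes) -> bool:
    return False  # placeholder: confirm the requested gestures were performed live

def verify_identity(sub: Submission) -> bool:
    if not document_looks_genuine(sub.id_document):
        return False
    if face_match_score(sub.id_document, sub.selfie) < 0.80:  # illustrative threshold
        return False
    # The liveness step is the one that deepfaked selfies and video feeds aim to beat.
    return passes_liveness_check(sub.liveness_video)
```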

With the increasing use of video and selfie images, there has been a corresponding increase in the number of firms offering technology that claims to accurately match or verify identities, and some firms and cryptocurrency exchanges have built their own verification systems. But while those companies work in one direction, criminals work to defeat their efforts: Gemini reports that threat actors have shifted to software such as DeepFaceLab and Avatarify.

These tools leverage advancements in machine learning, neural networks, and artificial intelligence (AI) to create “deepfake” counterfeits. Deepfakes are images or videos in which the content has been manipulated so that an individual’s appearance or voice looks or sounds like that of someone else. At the current moment, widely available deepfake detection technology lags behind deepfake creation technology; counterfeits can only be detected after careful analysis using specialized AI, which has a 65% detection rate.
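
To give a concrete sense of what “careful analysis using specialized AI” can look like in practice, here is a minimal PyTorch sketch that scores a single frame with a binary real-vs-deepfake classifier. The model weights, file names, and architecture choice are assumptions made for illustration; they do not reflect Gemini's tooling or the detectors behind the 65% figure.

```python
# Minimal sketch (hypothetical): score one video frame with a binary
# real-vs-deepfake classifier. The weights file is assumed to exist from
# prior fine-tuning; nothing here is a specific vendor's detector.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# A ResNet-18 assumed to have been fine-tuned elsewhere to emit a single
# logit for P(frame is a deepfake); "deepfake_detector.pt" is hypothetical.
detector = models.resnet18(num_classes=1)
detector.load_state_dict(torch.load("deepfake_detector.pt"))
detector.eval()

frame = preprocess(Image.open("selfie_frame.jpg")).unsqueeze(0)
with torch.no_grad():
    score = torch.sigmoid(detector(frame)).item()

print(f"estimated deepfake probability: {score:.2f}")
# With detection rates around 65%, a single score like this is far from decisive,
# which is why services also lean on liveness checks like the gestures described above.
```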

Video: Demonstration of deepfake technology and implications for malicious use (full video via NOVA PBS Official; https://www.youtube.com/watch?v=T76bK2t2r8g).

Read more on Gemini Advisory for details on some of the verification services and software currently available.

Categories: Commentaries and Analyses, Financial Sector
