DataBreaches.Net

Deepfakes Expose Cracks in Virtual ID Verification

Posted on January 27, 2021 by Dissent

One of the things I have come to understand from reading research reports from GeminiAdvisory.io is that criminals are nimble and creative as conditions change, markets shift, or new security protocols are adopted. So now that financial institutions, cryptocurrency exchanges, and other businesses are deploying more sophisticated techniques to verify identity virtually, how are criminals responding? When it comes to facial identification, Gemini analysts have noted an increasing number of posts on dark web forums about face-change technology that manipulates selfies or videos. The resulting images and videos are known as “deepfakes.”

The technology they are trying to defeat is still being developed and refined, but is reportedly considered fairly convenient and secure. Gemini reports that many companies now

require users to upload an official ID, a selfie, or a specifically constructed selfie based on instructions such as holding up fingers or holding a note. Some companies have gone as far as requiring a live video feed in which the user must perform specific gestures and movements.
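The challenge-based flow described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's actual implementation: the service picks a random instruction the user cannot predict in advance, then checks that the submitted video shows that gesture. The function and list names below are invented for the example, and the vision step is stubbed out.

```python
import random

# Hypothetical pool of randomized liveness challenges of the kind the
# quoted passage describes (gestures, held-up fingers, handwritten notes).
CHALLENGES = [
    "hold up three fingers",
    "turn your head to the left",
    "hold a handwritten note with today's date",
]

def issue_challenge(rng=random):
    """Pick an unpredictable instruction for the user to perform on camera.

    Randomness is the point: a pre-recorded or pre-rendered deepfake
    cannot anticipate which gesture will be requested.
    """
    return rng.choice(CHALLENGES)

def verify_response(challenge, detected_action):
    """Stand-in for the computer-vision check that the submitted video
    actually shows the requested gesture -- the hard part in practice,
    and the part real-time face-swap tools increasingly try to defeat.
    """
    return detected_action == challenge
```

A real system would replace `verify_response` with a video-analysis model; the sketch only shows why the instructions must be randomized per session.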

With the increasing use of video and selfie images, there has been a corresponding increase in the number of firms offering technology that claims to accurately match or verify identities, and some firms and cryptocurrency exchanges have built their own verification systems. But while they work in one direction, criminals work to defeat their efforts: Gemini reports that threat actors have shifted to using software such as DeepFaceLab and Avatarify.

These tools leverage advancements in machine learning, neural networks, and artificial intelligence (AI) to create “deepfake” counterfeits. Deepfakes are images or videos in which the content has been manipulated so that an individual’s appearance or voice looks or sounds like that of someone else. At the current moment, widely available deepfake detection technology lags behind deepfake creation technology; counterfeits can only be detected after careful analysis using specialized AI, which has a 65% detection rate.

Video: Demonstration of deepfake technology and its implications for malicious use (full video via NOVA PBS Official; https://www.youtube.com/watch?v=T76bK2t2r8g).

Read more on Gemini Advisory for details on some of the verification services and software that are currently available.


Category: Commentaries and Analyses, Financial Sector
