DataBreaches.Net


Deepfakes Expose Cracks in Virtual ID Verification

Posted on January 27, 2021 by Dissent

One thing I have come to understand from reading research reports from GeminiAdvisory.io is that criminals are quite nimble and creative as conditions change, the market shifts, or new security protocols are adopted. So now that financial institutions, cryptocurrency exchanges, and businesses are deploying more sophisticated techniques to verify identity virtually, how are criminals responding? When it comes to facial identification, Gemini analysts have noted an increasing number of posts on dark web forums about face-change technology applied to selfies or videos. The resulting images are known as “deepfakes.”

The technology they are trying to defeat is still being developed and refined, but it is reportedly considered fairly convenient and secure. Gemini reports that many companies now

require users to upload an official ID, a selfie, or a specifically constructed selfie based on instructions such as holding up fingers or holding a note. Some companies have gone as far as requiring a live video feed in which the user must perform specific gestures and movements.

With the increasing use of video and selfie images, there has been a corresponding increase in the number of firms offering technology that claims to accurately match or verify identities, and some firms and cryptocurrency exchanges have their own verification systems in place. But even as those defenses improve, criminals are working to defeat them. Gemini reports that threat actors have shifted to using software such as DeepFaceLab and Avatarify.

These tools leverage advancements in machine learning, neural networks, and artificial intelligence (AI) to create “deepfake” counterfeits. Deepfakes are images or videos in which the content has been manipulated so that an individual’s appearance or voice looks or sounds like that of someone else. At present, widely available deepfake detection technology lags behind deepfake creation technology; counterfeits can only be detected after careful analysis using specialized AI, which has a 65% detection rate.

Video: Demonstration of deepfake technology and implications for malicious use (full video via NOVA PBS Official; https://www.youtube.com/watch?v=T76bK2t2r8g).

Read more on Gemini Advisory for details on some of the verification services and software that are currently available.


Category: Commentaries and Analyses, Financial Sector
