Deepfakes and Wire Fraud: How to Protect Transactions from AI Impersonators

What are deepfakes and how are they being used to commit wire fraud?

AI is used to generate a deepfake, a convincing avatar of a genuine participant in a business transaction, which cybercriminals can use to commit wire fraud.
Written by: Tom Cronkright

Read time: 3 min

Category: Artificial Intelligence

Published on: Jun 24, 2024

Key Takeaways

  • Deepfakes are AI-generated photos, videos, or audio that convincingly mimic authentic content. They are often used to impersonate real people without their permission.
  • Deepfakes can be used to impersonate legitimate, trusted parties to a transaction, manipulate wire transfer instructions, and divert funds to a cybercriminal’s bank account.
  • Adopt sophisticated authentication software that includes “liveness detection” to confirm identities, recognize deepfakes, and prevent wire fraud.

Despite some unease about the impacts of artificial intelligence (AI), the technology has real benefits for a range of applications—everything from saving time on tedious manual data entry tasks to more accurately diagnosing cavities at the dentist. But for every benefit, there is an equally legitimate concern about potential harm resulting from AI.

Deepfakes are the latest AI threat to earn the attention of consumer advocates and federal lawmakers. They have reignited public debate about privacy, intellectual property, disinformation, and consent. Yet the use of deepfakes to commit financial crimes hasn’t garnered the same scrutiny. Cybercriminals are exploiting this relative lack of awareness to commit real estate wire fraud with increasing frequency.

So how can you, as a real estate professional, protect your business assets and clients from deepfake-enabled fraud? Let's dive in.

What are deepfakes?

Deepfakes are AI-generated photos, videos, or audio that convincingly mimic authentic content. They are often used to impersonate real people without their permission. Deepfake technology works by ingesting original audiovisual data and using it to generate avatars matching the look, facial expressions, tone, style, gestures, and other unique characteristics of a real person. 

This technology enables face-swapping (generating a digital version of a real person’s face and applying it to another body in a video or photo) and speech synthesis (converting text into speech in the voice of a real person). Unlike Snapchat’s face-swapping effect or the photo editing tools in Photoshop, deepfake technology is frequently used with malicious intent rather than for creativity or amusement, and deepfakes are disturbingly difficult to recognize compared to the digitally enhanced media those apps produce.

How can deepfakes be used to commit wire fraud?

There are myriad positive applications for deepfake technology, such as dubbing media into other languages, but it is too often used for social engineering: manipulating people into trusting an impersonator so a crime can succeed. Deepfakes can be used to impersonate legitimate, trusted parties to a transaction, manipulate wire transfer instructions, and divert funds to a cybercriminal’s bank account.

For example, a fraudster could use deepfake technology to imitate a lender or real estate professional on a video conference call. Cloaked as a legitimate party, they could instruct a property seller to wire their mortgage payoff to the fraudster’s bank. Similarly, scammers can use deepfake AI to imitate sellers and convince title companies or banks to send the buyer’s funds to an account the scammer controls, where the money is difficult to trace or recover.

Deepfakes have become so sophisticated that even experienced professionals cannot always tell the difference between an AI-generated voice or face on a video call and the real thing. Take the example of a finance employee who mistakenly wired $25 million to fraudsters, convinced he was following his Chief Financial Officer’s instructions from a recent video conference call. The fraudsters used deepfake voices and face-swapping to convincingly imitate multiple colleagues on the call, all of whom appeared to sign off on the transfer.

Rapid advancements in technology are creating new ways for scammers to trick businesses and individuals.

Identifying deepfakes is even more difficult when they are combined with SIM-swapping and caller ID spoofing. These techniques let fraudsters take over or disguise phone numbers and manipulate caller ID so that it appears they are calling from a trusted business. If a fraudster calls from a familiar number and has a recognizable voice, it’s no wonder that even savvy professionals fall victim to tech-enabled wire fraud.

Deepfakes can even fool other AI-enabled technologies, including some identity verification tools. Certain software now collects users’ biometrics from facial scans and uses them to confirm the user’s identity before authorizing their logins. Fraudsters can use face-swapping during the biometric scans to pass as authentic users. Once they’ve logged in, they may be able to authorize wire transfers to their accounts.
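To make the gap concrete, here is a minimal sketch in Python of why a face match alone can be defeated by a face-swapped feed and where a liveness check belongs before a wire is authorized. The function names are hypothetical placeholders for illustration, not any vendor’s real API.

```python
# Illustrative sketch only, not any vendor's real API. It shows why a face match
# by itself can be defeated by a face-swapped video feed, and where a liveness
# check fits before a wire transfer is authorized.

def face_matches_enrolled_user(frames: list, user_id: str) -> bool:
    # Placeholder for a biometric comparison against the user's enrolled template.
    # A convincing deepfake face swap can pass this check, because the synthetic
    # face is built to look like the enrolled user.
    raise NotImplementedError("stand-in for a real face-matching service")


def capture_is_live(frames: list) -> bool:
    # Placeholder for liveness detection: depth cues, micro-movements, or a
    # challenge-response prompt that replayed or AI-generated video cannot satisfy.
    raise NotImplementedError("stand-in for a real liveness-detection service")


def may_authorize_wire(frames: list, user_id: str) -> bool:
    if not face_matches_enrolled_user(frames, user_id):
        return False  # the face on camera does not match the account holder
    if not capture_is_live(frames):
        return False  # the face matches, but the feed is a recording or synthetic render
    return True  # both identity and liveness confirmed before any wire is authorized
```

A system that stops after the first check is the one a face-swapping fraudster can beat; the second check is what the next section is about.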

How can I protect transactions from deepfakes and avoid wire fraud?

Lawmakers have introduced the DEEPFAKES Accountability Act, but Congress has yet to pass legislation that would penalize deepfake criminals. In the meantime, professionals can disrupt deepfakes and wire fraud with the right training and technology.

Many cybersecurity training programs focus on helping employees recognize phishing emails and spoofed websites, but additional training is necessary to identify deepfake encounters. Professionals can learn the red flags that may indicate the use of face-swapping or speech synthesis during video calls:

  • Some facial features may lack definition.
  • Color, lighting, or texture may appear inconsistent across a participant’s face.
  • Gestures and facial expressions may be slightly unnatural, like unblinking eyes.
  • The participant’s face may be familiar, but their clothing and background are unusual.
  • A voice may sound convincing, but the speaker may use uncharacteristic vernacular.

High-quality deepfakes rely on significant source material, such as thousands of recordings or images of the real human target. These convincing versions are generally only possible when impersonating someone with a large online presence and digital footprint, like a celebrity or politician. In most cases, when limited data is available, AI cannot generate a convincing impersonation. Staff and clients can learn to spot the glitches in these deepfakes and disrupt fraud.

The best line of defense is sophisticated authentication software for high-value transactions. Solutions like CertifID are engineered to outsmart deepfakes, SIM swapping, phishing, and other tech-enabled fraud tactics. The software verifies the identity and banking details of every participant before a wire transfer by matching biometrics from a selfie photo to that participant’s government-issued ID. While some technologies can’t distinguish between a deepfake selfie and an authentic one, CertifID’s technology includes “liveness detection” to confirm genuine human selfies and reject synthetic renderings.
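As a simplified illustration only, and not CertifID’s actual implementation, the sketch below outlines the general sequence this kind of verification follows before funds are released. Every type and function name here is a hypothetical placeholder.

```python
# Hedged sketch of a generic pre-wire verification workflow. The helpers are
# hypothetical placeholders, not CertifID's implementation; they only illustrate
# the sequence of checks described above.

from dataclasses import dataclass


@dataclass
class Participant:
    name: str
    selfie_frames: list        # frames from a live selfie capture
    id_document_image: bytes   # photo page of the government-issued ID
    bank_account: str
    routing_number: str


def selfie_is_live(frames: list) -> bool:
    # Placeholder liveness check: rejects replayed video and synthetic renderings.
    raise NotImplementedError


def selfie_matches_id_photo(frames: list, id_image: bytes) -> bool:
    # Placeholder biometric match between the live selfie and the ID document photo.
    raise NotImplementedError


def bank_details_belong_to(participant: Participant) -> bool:
    # Placeholder confirmation that the account is held by the named participant.
    raise NotImplementedError


def cleared_for_wire(p: Participant) -> bool:
    # Every participant must pass all three checks before wire instructions are trusted.
    return (
        selfie_is_live(p.selfie_frames)
        and selfie_matches_id_photo(p.selfie_frames, p.id_document_image)
        and bank_details_belong_to(p)
    )
```

The ordering is the point: identity and liveness are confirmed before any banking detail is trusted, so a synthetic face never gets the chance to redirect funds.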

CertifID empowers professionals to authorize wire transfers with confidence that funds are entering the correct bank accounts and reaching the real account holders. Lean on our technology to identify deepfakes and disrupt fraud so you can focus on leading your business. Learn more.

Tom Cronkright

Co-founder & Executive Chairman

Tom Cronkright is the Executive Chairman of CertifID, a technology platform designed to safeguard electronic payments from fraud. He co-founded the company in response to a wire fraud he experienced and the rising instances of real estate wire fraud. He also serves as the CEO of Sun Title, a leading title agency in Michigan. Tom is a licensed attorney, real estate broker, title insurance producer and nationally recognized expert on cybersecurity and wire fraud.
