An AI-Generated Deepfake Costs A Company $25 Million
Well, we seem to have an example of one of the worst-case scenarios that many envisioned when it comes to AI. By that I mean this story, where fraudsters used AI-generated deepfakes to impersonate the CFO of a multinational company and trick a finance employee into sending them over $25 million:
This incident marks the first of its kind in Hong Kong involving a large sum and the use of deepfake technology to simulate a multi-person video conference where all participants (except the victim) were fabricated images of real individuals. The scammers were able to convincingly replicate the appearances and voices of targeted individuals using publicly available video and audio footage. The Hong Kong police are currently investigating the case, with no arrests reported yet.
The scam was initially uncovered following a phishing attempt, when an employee in the finance department of the company’s Hong Kong branch received what seemed to be a phishing message, purportedly from the company’s UK-based chief financial officer, instructing them to execute a secret transaction. Despite initial doubts, the employee was convinced enough by the presence of the CFO and others in a group video call to make 15 transfers totaling HK$200 million to five different Hong Kong bank accounts. Officials realized the scam occurred about a week later, prompting a police investigation.
Kevin Vreeland, General Manager of North America at Veridas, had this to say:
“The presentation attack employed by the threat actors targeting this multinational company for millions showcased a high level of sophistication. The employee initially followed proper protocols, correctly identifying the attack as potentially rooted in phishing. However, the escalation of the incident highlights how artificial intelligence has given attackers a leg up and created a plethora of security challenges for organizations, particularly in the era of widespread remote work.
With the evolution of artificial intelligence and increased identity-based security threats, companies must implement updated and improved methods of verification and authentication. These measures should focus on detecting the liveness and proof-of-life of their employees. Currently, there are companies developing biometric solutions focused on how to face the new forms of fraud, through a robust biometric engine and aligned to quality and security certifications, such as NIST and iBeta.
It’s also important that companies educate their employees about the dangers of deepfakes similar to other types of scams. Deepfakes usually contain inconsistencies when there is movement. For example, an ear might have certain irregularities, or the iris doesn’t show the natural reflection of light.”
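As a toy illustration of the kind of artifact Vreeland describes, here is a minimal sketch of a heuristic check on the iris's specular highlight. It assumes some upstream computer-vision step has already extracted the maximum pixel brightness inside the iris region for each video frame; the function name, thresholds, and input format are all hypothetical, not any vendor's actual detection method:

```python
from statistics import mean, pstdev

def flag_missing_specular_highlight(eye_brightness, min_level=200, max_jitter=15):
    """Heuristic liveness check on a per-frame brightness track.

    eye_brightness: max pixel intensity (0-255) inside the iris region,
    one value per video frame. A live eye usually shows a stable, bright
    specular reflection of the light source; generated faces often lack
    it or let it flicker between frames. Returns True when the track
    looks suspicious.
    """
    if mean(eye_brightness) < min_level:      # highlight absent or too dim
        return True
    if pstdev(eye_brightness) > max_jitter:   # highlight flickers frame to frame
        return True
    return False

# A steady bright highlight passes; a dim one is flagged.
print(flag_missing_specular_highlight([230, 228, 231, 229]))  # False
print(flag_missing_specular_highlight([120, 118, 125, 119]))  # True
```

Real detection engines combine many such signals (ears, motion, lighting) with learned models; the point of the sketch is only that these artifacts are, in principle, measurable.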
For an example of the inconsistencies Kevin Vreeland describes in the last paragraph of his comment, look at the Apple Vision Pro Persona feature. With his points in mind, you'll spot exactly the kinds of artifacts he's talking about.
This case highlights the challenges posed by AI in the hands of threat actors. We all need to change how we evaluate what we see and hear so that we can protect ourselves from the threats that are sure to come now that threat actors have found ways to use AI for criminal gain.
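One simple form of the liveness verification Vreeland calls for is challenge-response: the verifier picks an unpredictable action at call time, so a pre-rendered deepfake cannot anticipate it. This is a minimal sketch of the idea, with made-up gestures and a made-up time window, not a description of any real product:

```python
import os
import time

GESTURES = ("turn your head left", "blink twice", "hold up three fingers")

def issue_challenge():
    """Verifier side: pick an unpredictable gesture and note when it was issued."""
    gesture = GESTURES[os.urandom(1)[0] % len(GESTURES)]
    return gesture, time.monotonic()

def verify_response(challenge, performed, issued_at, max_delay=10.0):
    """Accept only if the requested gesture was performed within the window.

    A recording prepared in advance cannot contain a gesture that was
    chosen randomly after the call began.
    """
    on_time = (time.monotonic() - issued_at) <= max_delay
    return on_time and performed == challenge

challenge, t0 = issue_challenge()
print(verify_response(challenge, challenge, t0))  # True
print(verify_response(challenge, "wave", t0))     # False
```

Real-time face-swapping tools can partially defeat naive gesture challenges, which is why commercial systems pair challenges with the biometric and artifact analysis described above.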
UPDATE: Shawn Loveland, COO of Resecurity, had this comment:
The deepfake market is a multifaceted domain involving academia, hobbyists, emerging technology, commercial services, and threat actors.
Initially, deepfakes were developed by researchers as a byproduct of machine learning and AI studies. However, the technology quickly spread beyond academic circles to hobbyists, enthusiasts, and commercial services that also contribute to building deepfake tools, often sharing them on forums and open-source platforms. Some of these services are marketed to cybercriminals and fraudsters, as threat actors have determined the technology is valuable for scams, identity theft, and misinformation campaigns.
The actual size of the dark-market deepfake industry is challenging to determine due to its secretive nature, as malicious actors conceal their use of the technology. Similarly, the size of the commercial deepfake market is hard to determine due to its rapid evolution and marketing hype/misinformation. Moreover, because the barrier to entry for new deepfake services remains relatively low, we can expect the number of scenarios that exploit the technology to grow.
There is a growing demand for deepfake content, specifically in the entertainment, gaming, and advertising sectors. This includes using deepfake technology for creating films, marketing campaigns, and virtual customer service representatives. Unfortunately, there is also a dark side to the technology, which involves the creation of illegal deepfakes. These are used to produce fake pornographic content, impersonate individuals for fraudulent purposes, or spread fake news.
And the spectrum ranges widely. On one end, legitimate companies use similar technology for benign purposes like dubbing movies and creating digital avatars. On the other, a significant portion of the deepfake market is associated with cybercrime, including the creation of non-consensual adult content, extortion, and undermining public trust in media.
The rise of deepfake technology is a cause for concern for organizations across the globe. This technology has dual-use capabilities, which can be used for beneficial and malicious purposes. Although deepfakes have legitimate uses, their potential for harm, particularly in cybercrime, makes them a serious issue that requires an active and robust response from individuals, businesses, and governments alike.
Deepfakes violate the terms of use (TOU) or terms of service (TOS) of many online commercial platforms, especially when used to impersonate others, spread misinformation, or create non-consensual adult content. Most social media platforms, content-sharing services, and online communities have specific guidelines against posting deceptive or abusive content and infringing on another person’s rights. It is recommended that potential TOU and TOS issues be reported to the commercial service hosting or distributing the content.
However, despite the rules and regulations established by many online platforms, services that cater to threat actors continue to offer deepfake capabilities, which keeps them readily available for malicious use.
The emergence of deepfakes has caused concerns about verifying digital identities, protecting media content integrity, and preventing potential political manipulation. Businesses must invest in detection technology and training to avoid fraud and protect their reputations.
It is worth noting that deepfakes aren’t just a theoretical threat. They have already been used to impersonate executives for financial gain and to create false narratives that sway public opinion or move stock prices.
Ultimately, the problem with deepfakes is an ever-changing one. The technology and its usage are evolving rapidly, and those who use deepfakes to cause harm are also improving their methods to avoid detection. Regulations and laws are still struggling to keep up with this technology, but there is an increasing movement to create legislation to combat the malicious use of deepfakes.
This entry was posted on February 5, 2024 at 12:31 pm and is filed under Commentary with tags Scam. You can follow any responses to this entry through the RSS 2.0 feed.
You can leave a response, or trackback from your own site.