Phishing Attacks: AI Used in Virtual Meeting to Steal Millions


To be completely upfront, this article isn't strictly healthcare related. The story it tells didn't happen to a healthcare organization, but it has major implications for the future of cybersecurity and HIPAA compliance. Phishing attacks are nothing new. In fact, you probably receive dozens of phishing emails each week. Luckily, most are poorly crafted and would likely fool only the most trusting and naive users. In addition, increased awareness and training have helped reduce the number of successful phishing attacks around the world. That is changing, however. When the possible reward is millions of dollars, attackers will use some truly impressive tactics to achieve their goals. In this article, we will look at an example of attackers going to lengths that most would have considered impossible.

Quick Primer on Phishing Attacks Today

Everyone is likely familiar with the different types of phishing. You have received emails pretending to be from your bank, from Amazon, or from the IRS, all trying to get you to click a link or open an attachment. You may also be familiar with text-message phishing, known as smishing, where you receive a link via a text message (SMS). When that link is clicked, depending on the type of malware used, your device can be compromised. Phishing, in all of its forms, is the single most dangerous type of cyberattack because of its simplicity. Attackers can craft a campaign, send the email to millions of users, and in the end it's a numbers game: even if just 0.001% of recipients click the link, the attacker stands a good chance of a payday.
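
To see why this numbers game favors the attacker, here is a rough back-of-the-envelope sketch. Every figure in it (campaign volume, click rate, compromise rate, payout) is an illustrative assumption, not data from any real campaign.

```python
# Back-of-the-envelope economics of a mass phishing campaign.
# All figures below are illustrative assumptions for the example.

emails_sent = 10_000_000   # messages sent in the campaign
click_rate = 0.00001       # 0.001% of recipients click the link
compromise_rate = 0.10     # fraction of clicks that yield a usable foothold
avg_payout = 5_000         # average value extracted per compromise (USD)
campaign_cost = 500        # bulk sending infrastructure is cheap (USD)

clicks = emails_sent * click_rate
compromises = clicks * compromise_rate
expected_return = compromises * avg_payout

print(f"Clicks: {clicks:.0f}")                      # 100
print(f"Compromises: {compromises:.0f}")            # 10
print(f"Expected return: ${expected_return:,.0f}")  # $50,000
print(f"Return on cost: {expected_return / campaign_cost:.0f}x")  # 100x
```

Even with these deliberately pessimistic rates, the attacker comes out far ahead of the cost of sending, which is why mass phishing persists despite filtering and training.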

Spear phishing takes this further by tailoring the phishing email to a specific target. Most of the phishing emails you receive are broad and may include nothing more personal than your name. A spear phishing email, by contrast, contains as much information specific to you as possible to convince you that it is legitimate. For example, it may appear to come from a person in your company and include their signature and contact information, the company logo, and references to topics they would normally discuss with you. Spear phishing requires a good deal more time for planning and preparation, but the payoff can be a lot higher. This leads us to the focus of our article.

Deep Fakes Are Being Used in Phishing

A deep fake video uses existing footage of a person to train a model that can then make the subject appear to say just about anything. If enough video exists to train the artificial intelligence (AI) software, the subject can be made to do or say whatever the attacker needs for the attack to work.

Here is an example of a deep fake video used for a purpose the subject would never have agreed to: a video of Vladimir Putin created for a marketing campaign about the dangers to American democracy. It was made over three years ago, and the technology has improved greatly since then.

Not all deep fakes are made with malicious intent. Check out this content creator who places Tom Cruise deep fakes in everyday situations.

This brings us to deep fakes in phishing. If attackers have access to enough video of key members of a company, they can create a video of one of those people saying things designed to induce the victim to act. For smaller companies where everyone works in the same office, this is much less of a threat. But imagine a company spread across the globe, where employees have met each other via video meetings but perhaps never in person. The danger becomes much more apparent in these situations. Imagine taking a Zoom call with a manager based in London. You have had these sorts of meetings numerous times, and this one looks and feels just like every other one. What you don't know is that this meeting is actually a pre-recorded deep fake video that anticipates the flow of the conversation and gives you specific instructions for a task to perform; a task that is a scam and benefits only the attacker. That is what happened in the example below.

AI-Generated Video Conference Used to Steal Millions of Dollars

According to the South China Morning Post (SCMP), unknown attackers targeted a multinational company with a deep fake video scam that cost the company HK$200 million (US$25.6 million). On a video conference call in which every participant other than the intended victim was a deep fake, the attackers convinced the victim to initiate money transfers. The victim knew each of the other participants; because the meeting used the company's normal video conference system and because he recognized everyone on the call, it was all the more convincing. It was the first attack of its kind in Hong Kong, and the results were significant.

The attackers used publicly available videos of the other people in the meeting to create very convincing representations. These were then mapped onto the faces of the attackers so they could mimic the company executives.

What Can You Do to Detect Deep Fake Email Phishing?

The first step in preventing these attacks is to secure your email system. The attack above was initiated through a spear phishing email. While no technology can block all forms of phishing, a good filtering system on your email servers, along with endpoint protection, will greatly reduce the number of opportunities an attacker has to get a message through to you.

There are many email filtering services to choose from, whether standalone gateways or filtering built into your email platform.

Ensure that you have email filtering set up to block as much phishing as possible, and then rely on endpoint protection to defend your network against the messages that make it through.
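
Most filters also lean on the sender-authentication records (SPF, DKIM, DMARC) that a domain publishes in DNS, so it is worth confirming yours exist. Below is a minimal sketch that checks a domain for SPF and DMARC TXT records using the third-party dnspython library; example.com is a placeholder to be replaced with your own domain.

```python
# Quick check for SPF and DMARC DNS records on a domain.
# Requires the third-party dnspython package: pip install dnspython
import dns.resolver

def get_txt_records(name: str) -> list[str]:
    """Return all TXT record strings for a DNS name, or [] if none exist."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
        return [b"".join(r.strings).decode() for r in answers]
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []

domain = "example.com"  # placeholder: substitute your own domain

# SPF lives in a TXT record on the domain itself; DMARC under _dmarc.<domain>.
spf = [r for r in get_txt_records(domain) if r.startswith("v=spf1")]
dmarc = [r for r in get_txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]

print("SPF record:  ", spf[0] if spf else "MISSING")
print("DMARC record:", dmarc[0] if dmarc else "MISSING")
```

A missing record doesn't mean you are under attack, but it does mean receiving filters have less to work with when deciding whether mail claiming to be from your domain is genuine.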

But what can you do if you think you’re already watching a deep fake video? How can you tell?

The first test is to ask the person to turn their head to the left or right. Deep fake models have a lot of data for a person's face, but not nearly as much for a profile view. The turn can cause the video to warp, show areas of the head that do not look real, or blur during the motion.
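
Automated detectors look for related signals. As an illustration of the blurring cue described above, here is a minimal sketch that flags sudden drops in frame sharpness in a saved recording of a call, using OpenCV's Laplacian-variance blur measure. The file name and threshold are assumptions for the example, and real deep fake detection is considerably harder than this.

```python
# Flag frames whose sharpness drops suddenly, a crude proxy for the
# warping/blurring that can appear when a deep fake subject turns their head.
# Requires OpenCV: pip install opencv-python
import cv2

VIDEO_PATH = "recorded_call.mp4"  # placeholder path to a saved meeting recording
DROP_RATIO = 0.4                  # flag if sharpness falls below 40% of the running average

cap = cv2.VideoCapture(VIDEO_PATH)
running_avg = None
frame_idx = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Variance of the Laplacian is a standard single-number sharpness measure.
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    if running_avg is not None and sharpness < running_avg * DROP_RATIO:
        print(f"Frame {frame_idx}: sharpness {sharpness:.0f} "
              f"vs average {running_avg:.0f} -- possible warp/blur")
    # Exponential moving average tracks the recording's typical sharpness.
    running_avg = sharpness if running_avg is None else 0.95 * running_avg + 0.05 * sharpness
    frame_idx += 1

cap.release()
```

A drop flagged here could just as easily be network compression or camera motion, so treat it as a prompt to verify through another channel, not as proof of a fake.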

Another test is to tell the person that their lighting isn't very good and ask them to use another light source. This should cast new light on the person's face. If the new lighting never appears, it is likely because the deep fake algorithm cannot render it; it has no training data for that condition.
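
The same idea can be checked crudely in software: after you ask for a new light source, the average brightness of the video should shift noticeably. Here is a minimal sketch under the same assumptions as above (a saved recording at a placeholder path); the frame offsets marking "before" and "after" your request are also placeholders.

```python
# Compare average frame brightness before and after a lighting-change request.
# Requires OpenCV and NumPy: pip install opencv-python numpy
import cv2
import numpy as np

def mean_brightness(path: str, start_frame: int, num_frames: int) -> float:
    """Average grayscale intensity over a window of frames."""
    cap = cv2.VideoCapture(path)
    cap.set(cv2.CAP_PROP_POS_FRAMES, start_frame)
    values = []
    for _ in range(num_frames):
        ok, frame = cap.read()
        if not ok:
            break
        values.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).mean())
    cap.release()
    return float(np.mean(values))

VIDEO_PATH = "recorded_call.mp4"  # placeholder path to a saved recording

# Placeholder frame offsets: ~3 seconds before and after the request at 30 fps.
before = mean_brightness(VIDEO_PATH, start_frame=0, num_frames=90)
after = mean_brightness(VIDEO_PATH, start_frame=300, num_frames=90)

# A real lighting change should move average brightness noticeably;
# a flat response is consistent with the model failing to render new light.
print(f"Before: {before:.1f}  After: {after:.1f}  Delta: {after - before:+.1f}")
```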

But in the end, if you suspect something is off, simply pick up the phone after the call is over and contact the person directly to make absolutely sure.

Following these steps can help reduce your risk of becoming a victim of deep fake phishing attacks.