Worldwide concern is growing about the negative effects that deepfakes can have on society, and for good reason. In 2019, an employee of a UK-based energy company was tricked into believing he was on the phone with his boss, the CEO of the German parent company, who asked him to transfer $243,000 to a Hungarian supplier.
Of course, the employee was not talking to the real CEO, but to a fraudster who impersonated the real CEO through voice-changing AI.
This kind of social engineering attack is not entirely new. In fact, in 2019, cybersecurity researchers identified several successful deepfake audio attacks on companies. In each case, the "CEO" called a financial officer to request an urgent transfer. The real CEO's voice had been taken from earnings calls, YouTube videos, TED Talks and other recordings and fed into an AI program that allowed fraudsters to mimic it.
These types of incidents are the audio version of what are known as deepfake videos, which have caused global panic in the last few years. As we get used to the existence of deepfakes, this can affect our trust in any videos we watch or audio recordings we hear, including the real ones. Videos that once used to be the ultimate form of truth, transcending edited images that can be easily altered, can now also deceive us.
What are deepfakes?
Deepfakes are fake video and audio recordings of individuals, designed to make it look like they have said and done things they never actually did. "Deep" refers to the "deep learning" technology used to produce the media, and "fake" to its artificial nature.
In most cases, a person's face is superimposed on someone else's body, or their actual figure is altered in such a way that they appear to say and do things that they have never done.
The term was coined in 2017, when a user on the social media platform Reddit posted a fake adult video showing the faces of some Hollywood celebrities. Later, the user also published the machine learning code used to make the video.
Can we detect and stop deepfakes?
Right now, researchers and companies are exploring how they can use AI to identify and thus avoid falling for deepfakes. New advances are starting to emerge that are designed to help us identify which images and footage are real and which are fake.
For example, Facebook, Microsoft, the Partnership on AI coalition and academics from several universities have created a competition to help improve the detection of deepfakes. It aims to encourage people to produce technology that anyone can use to detect when deepfake material and false information have been created.
The Deepfake Detection Challenge includes a dataset and leaderboard, along with grants and prizes, to motivate participants to design new methods to identify and stop fake footage intended to deceive others.
Yet this does not stop fake media from being created, shared, seen and heard by millions of people on the internet and social media before it is removed. And without doubt, once malicious material is distributed, it can be extremely difficult to minimise the damage.
How can you recognise deepfake videos?
Until some highly reliable technical solutions are designed, everyone should learn to identify the telltale signs of deepfakes. So here are the mistakes to look for:
Blinking - According to research, natural eye blinking tends to be poorly reproduced in deepfake videos, so the person may blink unusually rarely.
Head position - Sometimes the edges of the face of the person being imitated will be blurred or poorly aligned with the head.
Fake-looking skin - If the face looks unusually smooth, as if it has been edited, it could be a sign of a deepfake.
Slow speech and different intonation - Sometimes you will notice that the person being imitated speaks quite slowly, or there is not quite a match between the real person's voice and the fake one.
A generally odd look and feel - Ultimately, you should rely on your common sense. Sometimes you can simply sense that there's something wrong with a video.
Today, it can still be easy to spot some deepfakes. But as the technology advances, they will gradually become much harder to detect.
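The blinking cue mentioned above has actually been operationalised in detection research through the so-called eye aspect ratio (EAR): a simple geometric measure that drops sharply when the eye closes. Below is a minimal, illustrative Python sketch of the idea. It is not a production detector; the landmark coordinates, the threshold value and the toy input series are all assumptions for the example, and in practice the six eye landmarks per frame would come from a face-landmark detector (such as dlib or MediaPipe, not shown here).

```python
# Minimal sketch of the eye aspect ratio (EAR) used in blink-detection
# research. Assumes six (x, y) eye landmarks per frame are already
# available from a face-landmark detector (not shown here).
import math


def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks ordered around the eye contour.
    EAR is high when the eye is open and drops sharply during a blink."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    # Two vertical eye distances over one horizontal eye distance.
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)


def count_blinks(ear_per_frame, threshold=0.2):
    """Count dips of the EAR below the threshold and back up again.
    An unnaturally low blink count over a long clip is one warning sign."""
    blinks, below = 0, False
    for ear in ear_per_frame:
        if ear < threshold and not below:
            below = True
        elif ear >= threshold and below:
            blinks += 1
            below = False
    return blinks


# Toy example: a synthetic EAR series containing two dips (two blinks).
series = [0.30, 0.31, 0.12, 0.10, 0.29, 0.30, 0.11, 0.28]
print(count_blinks(series))  # -> 2
```

The 0.2 threshold is an empirical choice from the blink-detection literature and would need tuning per camera and subject; a video of a talking head with almost no dips over several minutes would be suspicious.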
Deepfakes can have consequences for many
Here are some of the areas where deepfakes can do serious damage:
1. Elections
Deepfakes can influence elections, as they can put words in the mouths of politicians and make it look like they have done or said things they never did. Deepfake producers can target their content at popular social media channels, where it can instantly go viral. A deepfake video is an effective way to create fake news.
2. Criminal cases
False evidence can be used against people in court, so that they are accused of crimes they did not commit. This can put innocent people in jail. Conversely, guilty people can be set free on the basis of false evidence.
3. The stock market
Deepfakes can be used to manipulate stock prices when altered footage of influential people making certain statements is distributed. Imagine what would happen if a fake video showed the CEO of a company like Apple, Amazon or Google admitting to something illegal. Back in 2008, Apple's stock briefly dropped around 10% on the back of a false report that Steve Jobs had suffered a serious heart attack.
4. Online bullying
Deepfake technology can also be used to amplify cyberbullying, especially as it is now becoming widely available. People can easily be victimised when manipulated media of them is posted online, typically on social media. Or they can be blackmailed by cyber criminals who threaten to leak the footage if, for example, they do not pay a certain amount.
5. Business reputation
Someone may make false statements about your business to destabilise and discredit it. Malicious actors can make it appear that you or someone in your organisation admits to having been involved in consumer fraud, bribery or sexual abuse. Clearly, these kinds of false statements can damage your organisation's reputation and be difficult to disprove.
What can you do about it?
Because of current loopholes in the law, producers of deepfakes often cannot be prosecuted. But the proposed Deepfakes Accountability Act aims to create measures to criminalise this type of fake media.
Such measures could mean that the maker of a deepfake would be required to disclose that the footage has been altered, and failing to do so would be a crime. Rules like these are essential to protect the victims of deepfakes, as well as the public, from misinformation.
How can you protect your business from artificial intelligence and deepfakes?
Your competitors may resort to deepfake blackmail to try to eliminate you from the industry or spread misinformation about your business.
No matter how good technical deepfake detection becomes, it will not prevent manipulated media from being shared and reaching large numbers of people before it is caught. So the best defence is to teach your employees to identify fake footage and to question anything in your organisation that seems suspicious.
1. Train your staff
The topic of deepfakes can become part of your cybersecurity training. For example, if your employees receive an unexpected call from the "CEO" asking them to transfer $1 million to a bank account, they should first question whether the person on the other end of the line is who they claim to be. A good countermeasure is to agree on a few security questions that must be asked to confirm the identity of the caller.
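As an illustration of that countermeasure, here is a minimal sketch of a challenge-response identity check. The questions, answers and salt below are all invented for the example; in a real deployment the questions would be agreed in person, the salt would be randomly generated per employee, and nothing would ever be sent over email or chat where an attacker could intercept it.

```python
# Minimal sketch of a challenge-response check for high-risk phone requests.
# All questions, answers and the salt are invented for illustration.
import hashlib
import hmac


def hash_answer(answer, salt):
    """Store only salted hashes of agreed answers, never the answers."""
    return hashlib.sha256(salt + answer.strip().lower().encode()).hexdigest()


SALT = b"per-employee-random-salt"  # placeholder; use a real random salt
KNOWN_ANSWERS = {
    "project codename?": hash_answer("bluebird", SALT),
    "last onsite meeting city?": hash_answer("copenhagen", SALT),
}


def caller_verified(responses):
    """responses: dict of question -> answer given by the caller.
    Every agreed question must be answered correctly."""
    for question, expected in KNOWN_ANSWERS.items():
        given = responses.get(question, "")
        # Constant-time comparison avoids leaking how close a guess was.
        if not hmac.compare_digest(hash_answer(given, SALT), expected):
            return False
    return True


print(caller_verified({"project codename?": "Bluebird",
                       "last onsite meeting city?": "Copenhagen"}))  # True
print(caller_verified({"project codename?": "unknown"}))  # False
```

The point of the sketch is the process, not the code: a voice alone never authorises a transfer, and the caller must pass every pre-agreed challenge before a request is acted on.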
2. Monitor your brand's online presence
Your brand's presence is probably already being monitored online. So make sure there's someone in your organisation watching out for fake content involving your organisation, and if anything suspicious turns up, they should have a plan to deal with it.
3. Be transparent
If you become the victim of a deepfake, make sure your stakeholders are aware of the targeted attack. Trying to ignore what happened, or assuming people won't believe what they saw or heard, will not make the problem go away. Instead, centre your PR efforts on communicating that someone at your company has been impersonated and on highlighting the artificial nature of the distributed footage.
The dangers of deepfakes are real and should not be underestimated. A single ill-intentioned rumour can destroy your business. So you, both as an individual and an organisation, should be prepared to deal with the threat of deepfakes.
Sofie Meyer is a copywriter and phishing aficionado here at Moxso. She has a master's degree in Danish and a great interest in cybercrime, which resulted in a master's thesis project on phishing.