The world of cybercrime never stands still, and hackers are constantly keeping up to date with technological developments. This of course includes the development of AI, which is a double-edged sword as it is used in both cybersecurity and cybercrime.
One of the ways hackers exploit AI for cybercrime is through generative AI and large language models (LLMs).
Read on to learn how to avoid the serious consequences of AI-driven phishing, such as data breaches, far-reaching cyberattacks like ransomware, the associated financial costs, and attacks on your friends, family and colleagues.
Generative AI plays a key role
Generative AI and large language models such as ChatGPT or Gemini are tools that we can all benefit from for different purposes. But just as you can use AI as a tool to efficiently solve personal or professional tasks, cybercriminals can do the same.
Although many AI models are designed to refuse requests that support malicious activity, cybercriminals have found ways around these safeguards. They have discovered that carefully crafted prompts can bypass the technology's built-in restrictions. This way, they can use generative AI to streamline their work, just like you can.
How AI is changing phishing
Hackers leverage generative AI for different types of phishing and social engineering with one goal in mind: to get you to take the bait.
In the past, phishing was often characterized by poorly written language or very generic content. Most people have either experienced it themselves or heard of someone who received an email from a wealthy prince, or who supposedly won an improbably large sum of money that could be claimed by clicking a link or sharing account details. Fortunately, most people have learned that such offers are too good to be true and are clear signs of phishing. Hackers know this too, and that's why phishing has changed.
The spread of generative AI has accelerated this development. Below, we look at some of the ways hackers exploit its possibilities.
Spear phishing
Businesses are at particular risk of business email compromise (BEC) now that hackers make widespread use of AI for phishing attacks. With generative AI, hackers can easily tailor an attack to a specific recipient, for example by analyzing a person's behavior and interests on social media. They collect personal information and then use a large language model to turn it into credible, targeted phishing email content. As a result, these phishing emails are typically very specific.
One type of spear phishing that is important to be aware of is CEO fraud. In this type of attack, the hackers will impersonate your boss. The hackers use generative AI to mimic your boss's writing style to increase credibility. The content will typically still be very specific, and the hackers may have found information about both you and your boss to create a convincing attack.
Vishing
Vishing, short for voice phishing, is increasingly used by hackers to trick people out of large sums of money or confidential information. Victims are often called by some form of authority, such as hackers pretending to be from the police or the bank. Imagine getting a call from the bank saying that there is suspicious activity on your account and that they need your credentials to restore security. Once you provide certain information, the person on the other end of the line can access your account - and empty it.

You might think you'd never fall for this kind of trick. But hackers can use generative AI to mimic the voice of your real bank advisor to increase credibility. Similarly, they can use the voices of other people you know, such as friends, family members, colleagues or your boss. This is a common phenomenon, and you don't have to be particularly naive to fall into the trap.
Social engineering
Hackers use generative AI to convince and emotionally manipulate you into taking the bait. They can do this in several ways. Often they will try to build up your trust over the course of several emails, which means you can end up having a longer conversation with a hacker. This increases credibility and can make you believe you're communicating with someone you know or an authority you trust. Hackers use generative AI to create well-crafted, compelling replies that keep the conversation flowing.
Protect yourself from AI-driven phishing
While hackers use different methods to find useful information about their victims, and it's close to impossible to completely avoid phishing emails reaching your inbox, there are some habits you can adopt to reduce the risk of hackers successfully deceiving you.
Prevent and minimize the risk
There are several things you can do to limit hackers' ability to target you with sophisticated, personalized phishing. Start by limiting your visibility on social media: make your profiles and accounts private, and restrict who can see personal information such as your full name, date of birth, email address and location. Additionally, consider whether it's really necessary to share details like family relationships, vacation destinations or job titles when posting. Think about what information could be misused against you in a hacking attack or identity theft.
Continuous employee training
In order to protect yourself from AI-driven hacking, or indeed any kind of hacking attack, it's important to be aware of the threat. Cybersecurity training is a great tool to ensure that employees are familiar with the threat, know the specific warning signs and how to protect themselves and their colleagues with safe behavior. Good cybersecurity training keeps users up to date on the latest threats and trends in the cyber landscape.
Know the red flags
In order to identify AI-driven phishing, you need to know exactly what to look out for. This is especially important because AI is what makes phishing so targeted and sophisticated. For the same reason, it's a good idea to look beyond the content of the email itself - the part that will typically be highly personal and tailored to you - and focus on more reliable signals.
Instead, keep an eye out for:
- The sender address: Always check the legitimacy of the sender address. Unless the hacker is writing from a hacked email account, which in principle could belong to someone you know, the sender address will often reveal whether it's phishing. If the hackers are pretending to be your boss or a colleague, the address may look like the real one, but there will often be a small change - a single swapped letter, or a different domain name. Look closely to make sure the email address matches the real one exactly.
- Links, files and requests: The biggest red flag in phishing is, and will always be, links, attachments and requests for confidential information. As soon as an email contains a link or a file, or asks you to share your credentials or pay an invoice, make it a habit to stop before you do anything. If the email appears to come from someone you know, ask them to confirm or deny the request via a separate, secure channel - such as a phone call or, to be completely sure, in person.
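To make the sender-address check concrete, here is a minimal sketch of how a lookalike domain (one letter swapped, or a near-identical name) can be flagged automatically by comparing it against a small allow-list of trusted domains. The domain names and the 0.8 similarity threshold are illustrative assumptions, not a production filter - real mail security also relies on mechanisms like SPF, DKIM and DMARC.

```python
# Minimal lookalike-domain check - a sketch, not a production filter.
# TRUSTED_DOMAINS and the threshold are hypothetical example values.
from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"example.com", "yourcompany.com"}  # assumed allow-list


def sender_domain(address: str) -> str:
    """Extract the domain part of an email address."""
    return address.rsplit("@", 1)[-1].lower()


def check_sender(address: str, threshold: float = 0.8) -> str:
    """Classify a sender domain as trusted, a lookalike, or unknown."""
    domain = sender_domain(address)
    if domain in TRUSTED_DOMAINS:
        return "trusted"
    for trusted in TRUSTED_DOMAINS:
        # High similarity without an exact match suggests a deliberate
        # lookalike, e.g. a single swapped letter ("examp1e.com").
        if SequenceMatcher(None, domain, trusted).ratio() >= threshold:
            return f"lookalike of {trusted}"
    return "unknown"


print(check_sender("boss@example.com"))    # trusted
print(check_sender("boss@examp1e.com"))    # lookalike of example.com
print(check_sender("boss@randomsite.net")) # unknown
```

The point of the sketch is the habit it encodes: an address that is almost, but not exactly, a trusted one deserves more suspicion than a completely unfamiliar one.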
The impact of AI on the cyber threat
According to the Danish Centre for Cyber Security, hackers' misuse of generative AI does not change the overall threat assessment, which is already rated very high. Instead, generative AI is seen as one of several tools available to cybercriminals that help drive threats and shape the threat level.
As the Centre emphasizes, it's important to remember that generative AI is not inherently capable of planning and executing cyberattacks. However, hackers can use it as a tool to produce the content of phishing emails, for example. They can use large language models to turn personal information into coherent, linguistically correct text for phishing attacks. Similarly, they can use deep learning-based generative AI to make a familiar voice say whatever will get you to take the bait.
Phishing in the future
Unfortunately, we are looking into a future where phishing is likely to become increasingly sophisticated. Hackers are quick to exploit the tools made possible by AI. Therefore, it's extremely important to stay on top of the threat, and as mentioned, cybersecurity training is an excellent tool to help with this.
The most important takeaway when it comes to phishing is to always be critical when you receive a link or file, or when someone asks you to share confidential information - no matter who the sender is. If you're ever in doubt about its legitimacy, do not act on it.
Emilie Hartmann
Emilie is responsible for Moxso’s content and communications efforts, including the words you are currently reading. She is passionate about raising awareness of human risk and cybersecurity - and connecting people and tech.