ChatGPT is an incredible technology used by many around the world. But not everyone approaches it with good intentions: hackers have found ways to use ChatGPT to their advantage, getting it to generate and modify malware code. Here we shed light on the dark side of ChatGPT and how hackers are exploiting it.
The rapid evolution of AI
We've already written about how ChatGPT has changed, and will continue to change, cybersecurity - but now we're going to take a closer look at the dark side of ChatGPT. While the technology benefits many, not everyone uses it for well-intentioned activities.
AI and ChatGPT have changed the way we work and use technology. They have proven useful, even life-changing, in healthcare, and helpful in financial institutions, agriculture, transportation and education. AI is a disruptive technology, and we need to keep up with it: development is happening at an accelerating pace, which means vulnerabilities can emerge faster than they can be addressed, leaving windows of time in which real damage can be done.
This means the cybersecurity industry is busy and businesses rely on it to keep up. Hackers and cybercriminals have figured out how to use ChatGPT and AI to their advantage in the development of new malware and malicious code.
The way around the filters
OpenAI, the company behind ChatGPT, launched the chatbot in November 2022 as a preview for consumers to try and test. ChatGPT uses machine learning to communicate with the user, drawing on the vast amounts of text it was trained on to provide answers and solutions.
What sets ChatGPT apart from other chatbots and programs is that it can write software and code in different programming languages, correct code and explain complex topics.
For safety, ChatGPT is programmed to filter out inappropriate content, meaning requests for illegal actions that could harm others: in principle, it will refuse to write malware code or phishing emails. At least, that was the situation before hackers found ways around the chatbot's filters.
The filters can be bypassed simply by phrasing questions and commands in a certain way, and this is exactly what hackers do. They avoid trigger words like malware, phishing and hacking; without them, the chatbot does not detect the malicious intent and does not refuse the request.
Hacking for beginners
The hacker community has therefore gained a high-tech tool to code malware and produce phishing. This means more instances of Malware-as-a-Service (MaaS), and thus more criminals able to carry out hacking attacks. MaaS makes it much easier for criminals to launch malware attacks, as all the technical work lies with the malware provider. On top of that, much of the actual coding of malware can now be handed over to ChatGPT: where creating malicious code used to require hacking expertise, the work can now be left to artificial intelligence.
ChatGPT can help novice hackers perform hacking attacks as long as they ask the right questions to the machine. This significantly increases the cyber threat, as many more malicious actors have suddenly appeared on the scene.
Not only can cybercriminals use ChatGPT to code malware; they can also ask it to produce convincing phishing in any language. In the past, one of the most characteristic features of phishing was poor, error-ridden language, which made it easy to recognize that a fake sender was trying to lure money out of a victim.
But now it has become far more difficult to recognize whether the emails in our inbox are legitimate. Our big challenge is to see through the hackers and their methods.
More languages for more features
There's a good reason why ChatGPT got cybercriminals excited. With easier ways to hack, it's easier for them to make money.
A few months after the release of ChatGPT, several hackers took to forums on the dark web explaining how to use the technology to reconstruct and modify existing malware code. This makes it easier and more convenient to create new malware.
As mentioned, ChatGPT can rewrite malware code in several different programming languages, which means that the hacker ultimately has many different tools to hack a victim's device and software.
There are many different programming languages, and each has its own purpose: some are used to create websites, others for financial trading or for analyzing statistics and stocks. ChatGPT can work across a wide range of them.
Remember cybersecurity
It is difficult for the average user to prevent hackers from developing malware code. OpenAI, however, is constantly working to improve ChatGPT and can strengthen the filters so the chatbot better recognizes when a hacker is on the other end of the technology.
We, as consumers, can only secure our own technology to avoid hacking attacks. We can do this by keeping our devices and software up to date, patching any holes in the software code. It's these holes that hackers exploit when launching cyber attacks.
In addition, we need to pay attention to our inbox. As mentioned, hackers and scammers are getting even better at tricking us in phishing emails by using proper language and convincing content.
When we receive an email or message, we should always check who the sender is, look at their domain name and consider what the email is about. If the email contains links, don't click them; instead, hover over a link to see which website it actually leads to. You can also check whether the address starts with HTTPS or HTTP: the "s" stands for secure and means the connection to the website is encrypted. Bear in mind, though, that HTTPS only secures the connection - it does not guarantee that the website itself is legitimate.
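The checks above can be sketched in code. Below is a minimal Python illustration of two of them: comparing the domain a link displays with the domain it actually points to, and flagging plain HTTP. The function name and the example domains are hypothetical, and real phishing detection involves far more signals than this.

```python
from urllib.parse import urlparse

def link_looks_suspicious(display_text: str, actual_href: str) -> bool:
    """Flag a link when the text shown to the reader points to a different
    domain than the underlying href, or when the href uses plain HTTP.
    Note: passing these checks does NOT prove a link is safe."""
    actual = urlparse(actual_href)
    # Plain HTTP: the connection is not encrypted.
    if actual.scheme != "https":
        return True
    # If the visible text itself looks like an address, compare domains.
    shown_url = display_text if "://" in display_text else "https://" + display_text
    shown = urlparse(shown_url)
    if shown.netloc and shown.netloc.lower() != actual.netloc.lower():
        return True
    return False

# The displayed text claims one site, but the link leads elsewhere:
print(link_looks_suspicious("mybank.example", "https://login.attacker.example/steal"))
# A link whose visible text and destination agree, over HTTPS:
print(link_looks_suspicious("mybank.example", "https://mybank.example/login"))
```

This mirrors what hovering over a link does manually: it exposes the real destination so you can compare it with what the email claims.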
Awareness training is designed to help you and your colleagues to be aware of cybersecurity in the workplace, but just as much when you're not at work. By being aware and vigilant, you are more likely to avoid becoming the next victim of the malware developed with ChatGPT.
Caroline is a copywriter here at Moxso alongside her studies. She is doing her Master's in English and specializes in translation and the psychology of language. Both fields deal with communication between people and how to create a common understanding - elements she brings into the copywriting work she does here at Moxso.