How ChatGPT is changing cybersecurity

ChatGPT can be used for inventing recipes, writing code and drafting essays. But it can be used for other things too - is ChatGPT as good for us as we think?

01-05-2023 - 6 minute read. Posted in: cybercrime.


ChatGPT took the world by storm when it emerged. The technology is used by many different people for many different purposes: high school students, engineers and college students use it, and some even use it to get quick and easy recipes or to write a function for their computer. The use of ChatGPT is wide, but is it really that good?

The sophisticated AI

When ChatGPT first came out, it was the user-friendly interface and the AI-generated answers that amazed many. With a single prompt, ChatGPT can give the user a detailed answer of a kind that earlier AI technologies could not provide.

Developed by OpenAI, ChatGPT is a large language model trained on a huge dataset. The knowledge the system holds spans a truly vast range of topics. This means it can easily and quickly give the user the answer they are looking for - probably the biggest reason why everyone from tech giants to ordinary users was so impressed with the technology.

With technology like this, you can't help but ask: what are the downsides of ChatGPT - and similar technologies? The answers ChatGPT provides are incredibly convincing and realistic - you might not realize that a computer actually wrote them. ChatGPT can therefore be used to make a recipient believe they are communicating with a human - and not a machine.

When you think of AI assistants, many people think of Google Assistant or Amazon's Alexa, which you can ask about the weather or a piece of music. When you ask them questions, you can expect short, concrete answers. These older AI systems can rarely give a long, nuanced answer or hold an actual conversation with the user.


Unlike Google's and Amazon's assistants, ChatGPT takes a prompt and turns it into an in-depth response that can fill entire paragraphs. On top of that, ChatGPT remembers the earlier turns of a conversation and can build on them when asked a follow-up question.

However, ChatGPT has its limitations, just like its competitors at Google and Amazon. ChatGPT does not answer questions that fall outside the scope of its dataset. It is also important to remember that ChatGPT's answers are not necessarily correct. The AI system does not "think" like humans; it generates answers based on patterns in data gathered from the internet. The information it draws on may therefore simply be wrong, or it may reflect the biases of whoever originally posted it.

Risks of AI assistants

As mentioned above, there is a risk that ChatGPT may not give you correct answers to your questions. However, there are also other risks of using AI technology.

ChatGPT can be so convincing in its answers that you believe a human being is sitting on the other side of the messages. And convincing technology invites exploitation.

Cybercriminals steal personal data and gain access to companies' (as well as individuals') software systems - and once inside, they can reach almost anything they want:

  • Personal data
  • Documents and files
  • Programs and software
  • Bank details

The list could go on. The point is that hackers use phishing to scam people by email. They write emails that use social engineering to pressure the victim - for example, imposing a time limit on the response makes the victim more likely to click on links or download attachments.
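The time-pressure pattern described above is something even a crude filter can look for. The sketch below is purely illustrative - the phrase list and the raw-IP-link check are assumptions for the example, not the rules of any real spam filter:

```python
import re

# Assumed, illustrative list of urgency phrases common in phishing emails.
URGENCY_PHRASES = [
    "act now",
    "within 24 hours",
    "account will be suspended",
    "verify immediately",
    "final warning",
]


def phishing_signals(email_text: str) -> list:
    """Return a list of simple red flags found in an email body."""
    text = email_text.lower()
    signals = [phrase for phrase in URGENCY_PHRASES if phrase in text]
    # Links pointing at a raw IP address instead of a named domain
    # are a classic phishing tell, so flag those too.
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", text):
        signals.append("link to raw IP address")
    return signals


sample = ("Your account will be suspended! Verify immediately at "
          "http://192.168.0.1/reset")
print(phishing_signals(sample))
```

Real email security products combine many more signals (sender reputation, link analysis, attachment scanning), but the principle is the same: urgency plus a suspicious link is a red flag.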

The new phishing

But one of the things that has exposed scammers so far is the poor language in the phishing emails they send. If the victim is to believe a legitimate sender is writing to them, the language has to match that sender - a genuine company rarely writes English bad enough to be noticed.

This is where ChatGPT comes into play. As described, one of the things that really surprised people was the quality of ChatGPT's language. A cybercriminal in Russia, India, Ghana or Finland who doesn't write convincing English can have ChatGPT do it for them.

A hacker can, for example, ask ChatGPT to write an email pretending to be Google, telling a customer to reset their password. ChatGPT will produce a text that looks exactly like a legitimate email - the hacker only has to fill in the recipient's name and the malicious link. Phishing has suddenly become much easier for cybercriminals.

ChatGPT does have some safeguards. You cannot directly ask it to create a text requesting personal data. However, if a recipient of a ChatGPT-generated phishing email replies that they are not sure they are writing to a legitimate person, ChatGPT can generate a convincing reply - and the recipient may end up handing over the data the hacker needs.

The future of AI-generated phishing

It may become more difficult to tell if the email you have received is written by a real person or if it is ChatGPT that has produced it.

Fortunately, several tools exist that try to identify AI-generated text. They cannot guarantee that a text was created by AI, but they can give an indication of which parts of a message sound machine-generated.

In addition, organizations are hopefully becoming more aware of the threat that AI can pose. Several schools and educational institutions have already banned ChatGPT because it is so capable - and this is probably just the beginning, as one can imagine it being restricted in many other contexts.

It will become easier to pretend to be someone else, or to let ChatGPT and other AI-powered systems solve your problems for you. Again, it is important to remember that ChatGPT's answers are based on data from the internet - and you can't always rely on what's on the world wide web.

Author Caroline Preisler


Caroline is a copywriter here at Moxso alongside her studies. She is doing her Master's in English, specializing in translation and the psychology of language. Both fields deal with communication between people and how to create a common understanding - elements she incorporates into the copywriting work she does here at Moxso.
