2024: The year of AI

2024 has already proven to be a year full of development in the AI world. Here, we take a closer look at how AI shapes modern technology.

04-04-2024 - 6 minute read. Posted in: case.

2024 has already proven to be a year full of development in the AI world, from chatbots and virtual and augmented reality to cyberattacks assisted by AI.

We'll take a closer look at how the world has changed since the introduction of artificial intelligence, and at how it has shaped the modern technology we know and use today.

A new generation of AI

If you have the slightest interest in technology, you've no doubt heard the term artificial intelligence, or AI. Even people who aren't particularly interested in tech have heard of the phenomenon, as it took the world by storm when it first appeared.

Artificial intelligence mimics human behavior and intelligence. It humanizes machines, which are trained to learn human tendencies and mannerisms.

AI was created to automate tasks, ease our workload, and improve processes, both human-centered processes and technological operations. Early on, AI was used for virtual assistants like Google Assistant and Apple's Siri. These assistants rely on machine learning that analyzes the structure of natural language to carry out user commands, and they have become a central part of many users' everyday lives.

Emergence of chatbots

Another technological wonder that surprised many was the emergence of chatbots. Many larger corporations have developed chatbots to serve their customers and to improve both our work and private lives.

Most chatbots are accessible to the general public, and it's almost only our imagination that limits what we can use them for. Common uses include:

  • Writing shorter assignments and articles
  • Creating images
  • Analyzing text, sound, or images
  • Creating recipes or meal plans
  • Drafting patient records and analyses
  • Brainstorming ideas

Another emerging technology that has seen the light of day is virtual and augmented reality. VR and AR have changed the way we work and, for example, play video games, with specialized headsets that absorb us into virtual and augmented worlds. A newer arrival of 2024 is Apple's Vision Pro headset, which uses cameras to show you the real world while notifications, videos, and messages appear in your field of view, whether you're walking down the street or sitting in a meeting.

This is just the beginning of what developers can do with AI, and of how easily it can be woven into our everyday lives.

Cyberattacks based on AI

AI gives users an endless amount of possibilities. Unfortunately, it also gives malicious actors new ways to execute cyberattacks.

With new technology come new devices and software to hack. Users usually have to provide personal information to create an account for the software in question, or to log in to social media platforms via their headset.

This, however, means that confidential and vulnerable data is tied to your device, giving hackers yet another target. Hacking attacks and data breaches involving VR/AR have fortunately not become widespread yet, but they may well be the next thing hackers target.

Another way hackers exploit AI is by using chatbots to write malicious code. When chatbots first came around, they had limited filtering and no extensive security features to prevent hackers from getting the bots to write malicious code and software.

This meant that hackers could get their hands on new kinds of code they perhaps hadn't been able to write before, and that security systems had no proper measures against.

Fortunately, the big chatbot developers have since built extensive filtering systems, so if a malicious actor enters a prompt like "write code for installing malware on Gmail user accounts", the chatbot recognizes certain words and patterns and denies the request. This has made it a lot harder for hackers to exploit chatbots, but it has not entirely stopped them from using the bots to write code; they just have to be far more creative and sophisticated in their prompts.
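To get a feel for the idea, here is a deliberately simplified sketch of keyword-based prompt filtering. Real chatbot safety systems are far more sophisticated (trained classifiers, contextual moderation, and more); the function name and word list below are hypothetical illustrations, not any vendor's actual implementation.

```python
# Toy keyword filter: deny a prompt if it contains any blocked term.
# BLOCKED_TERMS is an invented example list, not a real product's filter.
BLOCKED_TERMS = {"malware", "ransomware", "keylogger", "phishing"}

def filter_prompt(prompt: str) -> str:
    """Return 'denied' if the prompt contains a blocked term, else 'allowed'."""
    words = prompt.lower().split()
    if any(term in words for term in BLOCKED_TERMS):
        return "denied"
    return "allowed"

print(filter_prompt("write code for installing malware on user accounts"))  # denied
print(filter_prompt("write code that sorts a list of names"))               # allowed
```

A filter this naive is trivially bypassed by rephrasing, which is exactly why attackers who are "creative in their prompts" remain a problem and why production systems layer on much stronger defenses.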

Since the filtering of legitimate chatbots came around, tools like WormGPT have emerged as a counterpart to them, and to OpenAI's ChatGPT in particular. WormGPT is built on similar large language model technology but operates without the security measures found in the mainstream chatbots.

Laws concerning AI

The emergence and subsequent rise of the technology has broadened the use of AI among the general public and in businesses. This has raised concerns about how AI is used, as many businesses process personal and confidential data on a daily basis.

Since AI-based technology hasn't proven to be completely safe when it comes to hacking attacks and data sharing, many companies, and even countries, find it a tricky phenomenon to handle.

As a result, organizations have introduced regulations and laws to limit the use of AI. One of the biggest institutions, the EU, has passed its AI legislation, the AI Act, which was approved to take effect in 2024.

The AI Act categorizes AI systems by risk, ranging from "unacceptable" to "minimal". The category reflects how much risk a system poses to users' safety and rights, and thus how strictly it will be regulated.
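The tier names below come from the AI Act itself; the example systems and one-line summaries are simplified illustrations of the kinds of obligations each tier carries, not legal guidance.

```python
# Simplified sketch of the AI Act's four risk tiers and what they imply.
# The example systems are commonly cited illustrations, not an official list.
EXAMPLES = {
    "social scoring of citizens": "unacceptable",  # banned outright
    "CV screening for hiring": "high",             # strict requirements apply
    "customer-service chatbot": "limited",         # transparency duties
    "spam filter": "minimal",                      # largely unregulated
}

def obligations(tier: str) -> str:
    """Return a one-line summary of what a risk tier implies."""
    return {
        "unacceptable": "prohibited",
        "high": "conformity assessment and oversight required",
        "limited": "transparency obligations",
        "minimal": "no specific obligations",
    }[tier]

print(obligations(EXAMPLES["spam filter"]))  # no specific obligations
```

The design point is that obligations scale with risk: the higher the tier, the heavier the requirements before a system may be deployed.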

The AI Act also creates more transparency around companies' use of AI systems. Companies will be held accountable for any misuse and punished if it is discovered. It also means that AI systems will face stricter requirements before they can be put to use.

Besides the EU, many institutions and companies have added AI rules to their own policies, since AI has proven to pose a security risk as well as a means of, for example, cheating on assignments or compromising test results, whether in schools, corporate settings, or healthcare systems.

The year ahead

While AI has proven to be a helpful tool for many, it has also proven to be a tool that can be exploited.

This calls for regulation and careful system management, from developers as well as from the organizations using this wondrous technology, as we'll probably see further development of AI and new ways to use it throughout 2024.

What's important is to be cautious with the technology and stay up to date on regulations; organizations might also consider adopting a set of rules for the use of AI. This will strengthen corporate guidelines as well as an organization's general cybersecurity.

Author Caroline Preisler


Caroline is a copywriter here at Moxso alongside her studies. She is doing her Master's in English, specializing in translation and the psychology of language. Both fields deal with communication between people and how to create a common understanding, and these elements are incorporated into the copywriting work she does here at Moxso.
