The capabilities of artificial intelligence (AI) are impressive - but they also harbor dangers. It has now been officially confirmed that cybercriminals have used OpenAI's ChatGPT in several cases to develop malware. The AI company disclosed this in a report documenting more than 20 cases in which ChatGPT was involved in cyberattacks and malware creation. But how did this happen, and what does it mean for the future of cybersecurity?

How cybercriminals use ChatGPT

According to the report "Influence and Cyber Operations: An Update", state-sponsored hacker groups from countries such as China and Iran have exploited ChatGPT's capabilities. They used it not only to create new malware but also to improve existing malware, with a particular focus on debugging malicious code and generating content for phishing campaigns. Phishing attacks, in which deceptively genuine-looking emails or websites are used to obtain users' personal data, can be made even more convincing with the help of ChatGPT.

The "CyberAv3ngers" group from Iran, which is said to be linked to the Islamic Revolutionary Guards, used ChatGPT for an even more dangerous task: they used AI to investigate vulnerabilities in industrial control systems. This could potentially lead to attacks on critical infrastructure that could have a devastating impact on entire countries.

No major breakthroughs - but growing risk

Although OpenAI's report confirms that cybercriminals are using ChatGPT for their purposes, the company offers some reassurance: so far it has observed no significant breakthroughs in malware development attributable to ChatGPT, nor any increase in successful malware attacks. Nevertheless, the risk remains that the technology will continue to be misused and that the threat landscape could worsen in the future.

Particularly alarming is that ChatGPT has also been used to help create malware, distributed via phishing, that steals user data such as contacts, call logs and location information. This type of attack poses a serious threat to individuals, businesses and even governments.

Liability issues: Who bears responsibility?

An important aspect in the discussion about the misuse of AI technologies is the question of liability. If a chatbot like ChatGPT is used for criminal purposes, who is responsible? Former US federal prosecutor Edward McAndrew points out that companies like OpenAI could potentially be held accountable if their technologies are used to commit cybercrimes.

Reference is often made here to Section 230 of the Communications Decency Act. This law protects platform operators: they cannot be held responsible for illegal content created by their users, only for content they have created themselves. In the case of AI-generated content such as malware code, however, this protection may not apply. Since ChatGPT generates the malicious code itself, OpenAI could be held legally responsible.

What happens next?

The publication of the report clearly shows that AI technology brings both opportunities and risks. OpenAI emphasizes that it is taking steps to prevent the misuse of its models, including safeguards designed to keep ChatGPT from being used for illegal purposes. Nevertheless, it will be crucial that governments, companies and technology developers work together to prevent the misuse of AI and strengthen cybersecurity.

Conclusion: Artificial intelligence such as ChatGPT has the potential to revolutionize many areas of life - but it also harbors dangers. The OpenAI report shows that cybercriminals are already trying to use this technology for malicious purposes. Although no major breakthroughs in malware development through AI have been made so far, the misuse of AI technology must be taken seriously. It is up to the tech industry and policymakers to find ways to both encourage innovation and ensure security.
