Artificial intelligence (AI) has advanced rapidly in recent years and is being used in a growing number of areas. One of the best-known chatbots is ChatGPT, developed by OpenAI. Millions of people use the AI application every day to look up information or hold conversations. However, OpenAI is now under pressure: the US Federal Trade Commission (FTC) is investigating the company over possible violations of consumer protection laws.

Risk to data protection and reputation

The FTC has raised concerns about how the popular chatbot ChatGPT protects personal data and people's reputations. According to a report in the "Washington Post" (https://www.washingtonpost.com/technology/2023/07/13/ftc-openai-chatgpt-sam-altman-lina-khan/), the agency has requested information from OpenAI in a 20-page letter in order to better understand the risks associated with the artificial intelligence. Neither the FTC nor OpenAI has yet officially commented on the allegations.

Generative AI and the use of personal data

ChatGPT and similar AI models are based on so-called generative AI, which is trained on large amounts of data, including personal posts on social media. In addition, user input, known as "prompts", is used to train the AI further. This practice has raised data protection concerns. Google is facing a similar lawsuit, in which the company is accused of using personal and copyrighted information without permission to train its AI applications.

Concerns of the European supervisory authorities

The European supervisory authorities have also expressed concerns, particularly with regard to the use of personal data in chatbots and other AI-based services. In Italy, ChatGPT was temporarily blocked (https://www.spiegel.de/netzwelt/netzpolitik/openai-bessert-beim-datenschutz-nach-chatgpt-wieder-in-italien-verfuegbar-a-d8d9d24f-8f34-4882-a8e4-dc5bc1c7fd84) but later reinstated. A final decision on the admissibility of the service is still pending, however. One of the main concerns of data protection advocates is the spread of false information and defamatory statements by services such as ChatGPT.

The FTC's demands and the challenges for OpenAI

The FTC is demanding detailed information from OpenAI about how ChatGPT was trained and what safeguards the company has put in place to prevent potentially harmful false claims. While AI providers emphasize that their models do not necessarily reflect the truth, the services are gaining traction and being integrated into a wide range of applications. The apparent eloquence of chatbots, however, often misleads human users. Even experts cannot always distinguish fact from fiction, as shown by the case of two lawyers in New York who were fined for including unverified claims from ChatGPT in their court filings.

Outlook for increased regulation of AI applications

The FTC's investigation into OpenAI marks a further step towards tighter regulation of AI applications. Privacy and the protection of personal data are key concerns that must be taken into account in the development and use of AI technologies. Ideally, these investigations will lead to clear guidelines that ensure the protection of privacy and the responsible use of AI systems.
