Is OpenAI's Chatbot ChatGPT Mishandling Personal Data?
As artificial intelligence (AI) technology reaches into more areas of our lives, concerns about privacy and data protection are coming to the fore. Most recently, OpenAI, a leading AI developer, became the subject of a complaint filed by Noyb, an Austria-based data protection advocacy group. Noyb claims that ChatGPT, the popular chatbot developed by OpenAI, processes personal data inaccurately. The case raises questions about whether AI applications comply with privacy rules in the European Union (EU).
According to the complaint, an unnamed public figure used OpenAI’s ChatGPT chatbot to look up information about himself, but the chatbot repeatedly returned false information. The complainant then asked OpenAI to correct or delete the data, but the company denied the request, saying that correction was not possible, and also refused to provide information about its training data.
Noyb data protection lawyer Maartje de Graaf criticized the situation: “If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology must comply with legal requirements, not the other way around.”
Noyb’s complaint was filed with the Austrian data protection authority. It requests an investigation into OpenAI’s data processing practices and into how its large language models ensure accuracy when handling personal data. “It is clear that companies are currently unable to ensure that chatbots such as ChatGPT comply with EU law when processing data about individuals,” De Graaf said.
The complaint highlights broader problems with how AI chatbots process data, and it is not the first such incident in Europe. An investigation in December 2023 found that Microsoft’s Bing AI chatbot (since renamed Copilot) provided misleading information about local elections in Germany and Switzerland. Similarly, Google’s Gemini chatbot drew criticism for producing historically inaccurate, so-called “woke” images in its image generator. Such examples raise further concerns about the quality and impartiality of the data used to train AI models.
The European Union has strict data protection rules under the General Data Protection Regulation (GDPR), which imposes clear responsibilities on companies for the processing and protection of personal data. Organizations like Noyb pursue strategic litigation and media campaigns to ensure these rules are enforced.
OpenAI has not yet issued an official response to Noyb’s complaint. Still, the incident underlines that AI developers must pay closer attention to data protection and transparency. Companies need to be clearer about how they collect, use, and protect user data, and the impartiality and accuracy of the data used to train AI models should likewise be scrutinized.
As AI technology continues to develop rapidly, the ethical and legal questions it raises must be confronted. By sparking debate in this area, the complaint against OpenAI could prove an important step toward the responsible development and use of AI applications.