The Italian Data Protection Authority blocks ChatGPT, so we asked ChatGPT how it would defend itself.
The Italian Data Protection Authority (Garante per la Protezione dei Dati Personali, or Garante Privacy), with provision [web doc n. 9870832] of March 31, 2023, urgently imposed a temporary limitation on processing (Article 58(2)(f) of the GDPR) against the US company OpenAI LLC, data controller for Italian data subjects through its well-known and controversial artificial intelligence application, ChatGPT.
The provision, which takes immediate effect, alleges violations of Articles 5, 6, 8, 13, and 25 of the GDPR on the following grounds:
failure to provide data subjects with the information required by Article 13 of the GDPR
absence of a suitable legal basis for the processing of personal data used to train the algorithms underlying the platform
inaccuracy of some of the personal data that ChatGPT returns in its interactions with users
absence of any age verification for the ChatGPT service, even though the platform's contractual terms reserve its use to those over 13 years old; because of this lack of filters, minors are exposed to responses unsuitable for their degree of self-awareness.
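To make the age-verification complaint concrete, here is a minimal sketch of the kind of sign-up age gate the Garante found missing. All names here are hypothetical, and real compliance would require a verified age signal rather than a self-declared birth date:

```python
from datetime import date
from typing import Optional

MINIMUM_AGE = 13  # threshold stated in ChatGPT's contractual terms


def age_on(birth_date: date, today: date) -> int:
    """Return the number of completed years between birth_date and today."""
    years = today.year - birth_date.year
    # Subtract one year if this year's birthday has not occurred yet.
    if (today.month, today.day) < (birth_date.month, birth_date.day):
        years -= 1
    return years


def may_use_service(birth_date: date, today: Optional[date] = None) -> bool:
    """Hypothetical sign-up check: allow only users aged MINIMUM_AGE or over."""
    today = today or date.today()
    return age_on(birth_date, today) >= MINIMUM_AGE
```

A self-declared date of birth, as sketched above, is the weakest form of age gating; the provision's point is that even this minimal filter was absent.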
We submitted the full provision to ChatGPT-4 with the following question: the provision is directed against OpenAI's ChatGPT; how would you defend yourself?
Below is the response generated by ChatGPT-4, which we strongly hope will follow its own impeccable advice:
If the provision were directed against ChatGPT of OpenAI, there would be several defense strategies that could be adopted to respond to the accusations and demonstrate compliance with the GDPR. Here are some possible actions:
Revision of the legal bases for data processing: OpenAI should carefully review the legal bases for the personal data processing carried out by ChatGPT and ensure either that users' explicit and informed consent has been obtained, or that it can rely on another valid legal basis provided for by the GDPR, such as the performance of a contract or legitimate interest.
Improving transparency and information: OpenAI could review and update its privacy and cookie policies to ensure that they are clear, accessible, and complete regarding the purposes and legal bases for the processing of personal data by ChatGPT. In addition, it may be useful to provide users with specific information about the operation of ChatGPT and the measures taken to protect their data.
Implementing adequate technical and organizational measures: OpenAI should ensure that adequate measures have been taken to protect the personal data processed by ChatGPT, such as anonymization, pseudonymization, or the implementation of advanced security solutions. It is also important to ensure that OpenAI staff are adequately trained on GDPR and internal data protection procedures.
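As one illustration of the measures mentioned above, pseudonymization can be sketched as replacing a direct identifier with a keyed hash. This is a minimal sketch under stated assumptions: the key name and value are placeholders, and in practice the key would be held in a separate key-management system so the pseudonym cannot be reversed or linked without it (the separation is what distinguishes pseudonymization under Article 4(5) of the GDPR from plain hashing):

```python
import hmac
import hashlib

# Placeholder key: in a real system this would live in a key vault,
# stored separately from the pseudonymized data.
SECRET_KEY = b"stored-in-a-separate-key-vault"


def pseudonymize(identifier: str, key: bytes = SECRET_KEY) -> str:
    """Return a stable pseudonym for a direct identifier (e.g. an e-mail
    address) using HMAC-SHA256. The same input and key always yield the
    same pseudonym, so records can still be linked for analysis."""
    return hmac.new(key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()
```

Because the mapping is deterministic for a given key, pseudonymized records remain usable for training or analytics, while re-identification requires access to the separately stored key.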
Data Protection Impact Assessment (DPIA): In the event that ChatGPT poses high risks to the rights and freedoms of individuals, OpenAI could carry out a DPIA to identify and mitigate such risks, in accordance with Article 35 of the GDPR.
Appointment of a Data Protection Officer (DPO): If not already present, OpenAI could appoint a DPO to monitor compliance with the GDPR, provide advice, and cooperate with supervisory authorities, as provided for by Articles 37-39 of the GDPR.
Cooperation with supervisory authorities: OpenAI should cooperate with the Italian Garante and the other supervisory authorities, providing all the information they request and demonstrating its compliance with the GDPR.
By resorting to these defense strategies, OpenAI could demonstrate compliance with the GDPR and mitigate potential consequences arising from the provision. However, it is important to remember that this is a hypothetical response, and the specific situation may require further actions or different approaches.