Investigation by Canadian privacy authorities into OpenAI results in joint report and recommendations to improve privacy provisions within ChatGPT models

May 6, 2026

A report was issued today summarizing a joint investigation into OpenAI and ChatGPT by the Office of the Information and Privacy Commissioner (OIPC) of Alberta, along with its counterparts in Quebec and British Columbia, as well as the Office of the Privacy Commissioner of Canada (OPC).

The investigation concluded that the initial development of ChatGPT was not compliant with provincial and federal privacy laws and identified a number of privacy issues that were present in the training and deployment of certain ChatGPT models. The report contains a number of recommendations and also notes that OpenAI has begun to take steps to address privacy concerns relating to the ongoing development and operation of ChatGPT.

The investigation was originally triggered by a complaint in April 2023 alleging collection, use and disclosure of personal information without consent. The joint investigation was announced in May 2023. It examined whether OpenAI’s collection, use and disclosure of the personal information of individuals in Canada, in relation to its AI-powered chatbot, ChatGPT, complied with federal and provincial private sector privacy laws. The investigation examined the workings of certain versions or models of ChatGPT that were in use at the time the investigation began in 2023. These models were ChatGPT 3.5 and ChatGPT 4.

“From the Alberta perspective, I want to note first that it is unfortunate and disappointing that technology companies have moved ahead so quickly with new developments and innovations, without first ensuring that they are adhering to privacy legislation,” said Alberta Information and Privacy Commissioner Diane McLeod. “Our investigation found that OpenAI did not appear to turn its mind adequately to privacy compliance in its development and deployment of ChatGPT, which is very troubling. The first ChatGPT model was launched in 2022, nearly two decades after privacy law in Canada first applied to the private sector.”

Throughout the course of the investigation and in response to findings and recommendations made by the regulators, OpenAI began to take steps to improve privacy protections in the ongoing development and delivery of ChatGPT. However, further work is required.

“An important aspect of this investigation is that each of the four regulators investigated compliance with the specific legislation that they oversee,” said McLeod. “As a result, the conclusions reached by each office varied due to the differences in the laws that they enforce. In the case of our office, we were investigating whether the development of ChatGPT was compliant with Alberta’s Personal Information Protection Act or PIPA, which governs private organizations such as corporations, unincorporated associations, professional regulatory organizations, trade unions and partnerships.”

British Columbia’s Personal Information Protection Act is similar to Alberta’s law, and both OIPC Alberta and OIPC BC found that OpenAI’s models are based on data scraped from publicly accessible websites, for which OpenAI has not obtained, and cannot obtain, consent under PIPA-BC and PIPA-Alberta. While both offices are encouraged by the compliance measures OpenAI has taken since this investigation began, and by those it has further committed to implement, these measures are not sufficient to meet the foundational requirement for consent in the Alberta and BC laws. Despite this finding, OIPC Alberta joins OIPC BC, the Commission d’accès à l’information du Québec (CAI) and the OPC in making the joint recommendations and in monitoring the implementation of the measures to which OpenAI has committed.

“The privacy laws we currently have in Canada were drafted and enacted during a time when today’s incredible advancements in technologies, such as AI, would have strained believability,” added McLeod. “Legislators now face the challenge of modernizing privacy laws in ways that will enable AI companies to continue to develop these innovative technologies, but only in a manner that safeguards privacy, reduces potential harms to citizens, and requires accountability and transparency. My hope is that OpenAI has learned from this investigation and that other technology companies that are developing and deploying AI or other novel technologies also learn from this report that privacy must be a top priority and cannot be an afterthought.”

Given the potential impacts of AI on the privacy of Canadians, the federal and provincial privacy authorities conducted this joint investigation to leverage combined expertise and resources to enforce privacy laws effectively while avoiding duplication of efforts. This collaboration demonstrates a shared commitment to protecting Canadians’ fundamental right to privacy.

Please view the report and the overview at the links below:

Through the OIPC, the Information and Privacy Commissioner performs the responsibilities set out in Alberta’s access to information and privacy laws, the Access to Information Act, the Protection of Privacy Act, the Freedom of Information and Protection of Privacy Act during the transition period, the Health Information Act, and the Personal Information Protection Act. The Commissioner operates independently of government.

For more information:

Elaine Schiman
Communications Manager
Office of the Information and Privacy Commissioner of Alberta
communications@oipc.ab.ca
Mobile: (587) 983-8766

www.oipc.ab.ca

Backgrounder to News Release: Summary of joint investigation into OpenAI’s ChatGPT

The Office of the Privacy Commissioner of Canada (OPC), along with the Commission d’accès à l’information du Québec, the Office of the Information and Privacy Commissioner for British Columbia, and the Office of the Information and Privacy Commissioner of Alberta, conducted a joint investigation into OpenAI’s ChatGPT to assess whether the company’s collection, use, and disclosure of Canadians’ personal information complied with federal and provincial privacy laws.

  • Overview and key findings
  • Jurisdictional differences and investigative outcomes
  • OpenAI’s response and future commitments
  • Key takeaways for organizations

Overview and key findings

The investigation focused on ChatGPT’s early models, examining how OpenAI sourced its training data – including content scraped from publicly accessible websites, licensed datasets, and user interactions – and whether it adhered to key privacy principles such as consent, transparency, and data accuracy.

The regulators’ findings highlighted privacy concerns related to the scale and sensitivity of data collected, and the adequacy of user consent, among other issues. As a result, the regulators concluded that the way that OpenAI had initially trained ChatGPT was not compliant with federal and provincial privacy laws. Specifically, the regulators found:

  • Overcollection of personal information: OpenAI gathered vast amounts of personal information without adequate safeguards to prevent use of that information to train its models. This could include sensitive details such as individuals’ health conditions and political views, as well as information about children.
  • Lack of valid consent and transparency: OpenAI did not obtain valid consent for the collection of personal information, as required under privacy laws. Many users were unaware that their data was collected and used to train ChatGPT. OpenAI did not clearly explain that personal information collected from publicly-accessible sources could include data from social media, discussion forums, and other similar websites.
  • Factual inaccuracies and fabricated “hallucinations”: OpenAI provided insufficient notifications about potential inaccuracies in ChatGPT responses. Until recently, it had not conducted an assessment to validate the accuracy of any personal information included in ChatGPT responses to user prompts.
  • Access, correction and deletion: OpenAI did not provide all individuals with an easily accessible and effective mechanism to access, correct, and delete their personal information.
  • Lack of accountability: OpenAI released ChatGPT without having fully addressed known privacy risks, and without establishing data-deletion rules. This exposed individuals to risks of harm, including privacy breaches, inaccuracy of information, and discrimination on the basis of information provided about them.

Jurisdictional differences and investigative outcomes

While privacy legislation in British Columbia, Alberta, and Québec is considered substantially similar to the federal private-sector privacy law, each office investigated compliance with the specific law that it oversees. The conclusions reached by each office therefore varied due to the differences in the laws that they enforce.

Privacy authority: Office of the Privacy Commissioner of Canada (OPC)
Applicable law: Personal Information Protection and Electronic Documents Act (PIPEDA)
Finding: Complaint is well-founded and conditionally resolved.
Notes: The OPC considers that the measures implemented, or that will be implemented, by OpenAI will significantly reduce the residual risk of harm to individuals associated with the collection, use, and disclosure of their personal information in the development and deployment of ChatGPT models.

Privacy authority: Office of the Information and Privacy Commissioner for British Columbia (OIPC-BC)
Applicable law: Personal Information Protection Act – BC
Finding: Complaint is well-founded and unresolved.
Notes: The OIPC-BC determined that OpenAI’s models, based on scraped data, are in contravention of PIPA-BC’s consent requirements, which set different criteria than PIPEDA. However, OIPC-BC acknowledged OpenAI’s efforts to improve compliance.

Privacy authority: Office of the Information and Privacy Commissioner of Alberta (OIPC-AB)
Applicable law: Personal Information Protection Act – AB
Finding: Complaint is well-founded and unresolved.
Notes: The OIPC-AB determined that OpenAI’s models, based on scraped data, are in contravention of PIPA-AB’s consent requirements, which set different criteria than PIPEDA. However, OIPC-AB acknowledged OpenAI’s efforts to improve compliance.

Privacy authority: Commission d’accès à l’information du Québec (CAI)
Applicable law: Act respecting the protection of personal information in the private sector
Finding: Complaint is well-founded and conditionally resolved on the issues of appropriate purposes, individual rights, and accountability; well-founded and unresolved on the issue of consent. No findings were issued on the complaint elements related to openness and accuracy, given the specificities of Quebec’s law.
Notes: The CAI has made specific recommendations with respect to consent and retention to bring OpenAI into compliance with Quebec’s private-sector privacy act. The CAI intends to monitor OpenAI’s implementation of the joint recommendations, as well as its own specific recommendations.

OpenAI’s response and future commitments

OpenAI has indicated that it has already put in place measures that address some of the concerns raised in the report of findings, most importantly by significantly limiting the use of personal and sensitive information to train new ChatGPT models. OpenAI has indicated that it has retired its earlier ChatGPT models that were trained in a manner that contravened Canadian privacy laws.

OpenAI has indicated that its current models powering ChatGPT were developed and deployed using these new safeguards, which have helped to improve its privacy practices by:

  • Limiting use of personal information: OpenAI has implemented a filtering tool to detect and mask personal information (such as names or phone numbers) in publicly accessible internet data and licensed datasets used to train its models. The tool significantly reduces the amount of private and sensitive information used in training.
  • Improving accuracy: OpenAI has introduced a new web search feature which, when activated, conducts real-time web searches and references specific web sources for the content output by ChatGPT, allowing users to verify information independently.
  • Improving access: OpenAI has improved the auto-response email that users receive when they submit an access request, better explaining how different types of personal information can be accessed.
  • Facilitating corrections: OpenAI leverages the web search feature to process correction requests, allowing the models to retrieve up-to-date publicly accessible information about an individual and use that information in its response.
  • Enhancing correction and deletion: OpenAI has developed a technical solution to block specific personal details about a public figure from appearing in model outputs, ensuring that ChatGPT continues to provide access to relevant public information while respecting privacy rights.
  • Implementing retention policies: OpenAI has implemented formal retention policies and schedules to govern the retention and deletion of personal information processed in connection with ChatGPT.

Future improvements

OpenAI has also committed to implementing additional measures within specific timeframes to improve openness, access, retention, and children’s privacy:

  • [Concurrently with the publication of the Report of Findings] OpenAI will publish more information explaining its privacy practices, including information about the sources of content used to train its models.
  • [Within three months of the issuance of the Report of Findings] OpenAI will provide notice that chats may be reviewed and used to train models, and advise users not to share sensitive information, before the individual inputs their first user prompt in the signed-out ChatGPT web version.
  • [Within six months of the issuance of the Report of Findings] OpenAI will make it easier to understand and use the data exports that it provides to users who request their personal information. It will also better explain the avenues available to users who want to challenge the completeness, accuracy, or nature of the information provided.
  • [Within six months of the issuance of the Report of Findings] OpenAI will confirm to the offices that it has implemented strong protection for future datasets which are retired and used only as historical references, so they are not used for active model development. The company will also regularly review whether these datasets should still be kept.
  • [Within six months of the issuance of the Report of Findings] OpenAI will test protective measures for the minor family members of public figures, who are themselves not public figures, to ensure that the models refuse requests for their name or date of birth.

OpenAI will provide quarterly reports to the OPC and its provincial partners to demonstrate compliance with the above commitments until they have all been met.

Key takeaways for organizations

Organizations have a responsibility to ensure that products and services using AI comply with existing domestic privacy legislation and regulation – both federal and provincial – as well as applicable international requirements.

The Principles for responsible, trustworthy and privacy-protective generative AI technologies can help support organizations in developing, providing or using generative AI in Canada.

Further resources for organizations: