The OpenAI logo is seen displayed on a cellphone with an image on a computer screen generated by ChatGPT's Dall-E text-to-image model, Friday, Dec. 8, 2023, in Boston. (AP Photo/Michael Dwyer)

OpenAI did not respect Canadian privacy laws in developing ChatGPT, probe finds

May 6, 2026 | 1:00 AM

OTTAWA — OpenAI failed to respect Canadian privacy laws when training its artificial intelligence-powered ChatGPT chatbot, federal and provincial watchdogs have found.

The conclusion came Wednesday in a report on a joint investigation by federal privacy commissioner Philippe Dufresne and his counterparts from British Columbia, Alberta and Quebec.

ChatGPT, released in November 2022, is a popular conversation-style tool that responds to online users’ prompts with a wide range of information almost instantly — responses that may or may not be accurate.

The privacy watchdogs found OpenAI’s collection of information to train its models was overly broad, resulting in the compilation and use of sensitive personal details.

They said this could include data about individuals’ health conditions and political views, as well as information concerning children.

The probe found OpenAI did not clearly explain that personal information collected from publicly accessible sources could include data from social media, discussion forums and other similar websites.

“OpenAI launched ChatGPT without having fully addressed known privacy issues,” Dufresne said in French at a news conference. “This exposed Canadians to potential risks of harm such as breaches and discrimination on the basis of information about them.”

The privacy regulators said OpenAI provided inadequate notifications about potential inaccuracies in ChatGPT responses, and until recently had not conducted an assessment to validate the accuracy of any personal information included in responses.

OpenAI also did not provide all individuals with an easily accessible and effective mechanism to access, correct and delete their personal information, the watchdogs said.

Dufresne said OpenAI took important steps to improve privacy protections and has also agreed to implement further measures to address his office’s concerns.

“These measures will significantly limit the personal information that is used to train new ChatGPT models, and will better protect the fundamental right to privacy of Canadians,” he said. “They will also make Canadians more aware of the implications of using ChatGPT.”

The watchdogs said current models powering ChatGPT were developed and deployed using the new safeguards, which has helped to improve privacy practices by limiting use of personal information, improving accuracy, facilitating corrections and governing the retention and deletion of personal information.

The report said OpenAI has also retired its earlier ChatGPT models that were trained in a manner that contravened Canadian privacy laws.

Upon announcing his office’s investigation in April 2023 in response to a complaint, Dufresne said AI technology and its effects on privacy were priority issues. The four privacy watchdogs decided to pursue a joint probe to make best use of their expertise and avoid duplication.

Privacy legislation in British Columbia, Alberta and Quebec is considered substantially similar to the federal private-sector privacy law, but each jurisdiction looked at whether OpenAI’s actions complied with the specific laws they oversee.

OpenAI came under intense scrutiny after Jesse Van Rootselaar fatally shot eight people Feb. 10 in Tumbler Ridge, B.C., including six children, before killing herself.

The company had banned the mass shooter from using ChatGPT due to worrisome interactions but did not alert law enforcement. The shooter circumvented the ban by using a second account.

Dufresne said the matter “really raises a separate issue of, what should be the duty of organizations in terms of disclosing risks to authorities? And that’s an important question. Canadians have to be protected.”

“And this may lead to some amendments,” he added. “This may well be taken up by Parliament at the end of the day.”

Artificial Intelligence Minister Evan Solomon recently said the federal government would weigh information provided by OpenAI on the shooting before taking action.

The privacy watchdogs said OpenAI has committed to implementing various measures within specific time frames, including publication of more information about its privacy practices and the sources of content used to train its models.

A spokesperson for OpenAI said “following a collaborative process with the commissioners,” the company published a blog post to help explain to Canadian users how their data could be used for training of AI models, the privacy safeguards that are in place, and the controls users have available.
“We care very deeply about protecting our users’ privacy,” Shane Bauer said in an email.

The privacy watchdogs said within three months OpenAI is to provide notice that chats may be reviewed and used to train models, and advise users not to share sensitive information.

They added that within six months OpenAI is to:

— make it easier to understand and use the data exports that it provides to users who request their personal information;

— better explain the avenues available to users who want to challenge the completeness, accuracy or nature of the information provided;

— confirm it has implemented strong protection for future datasets which are retired and used only as historical references so they are not used for active model development; and

— test protective measures for the children of public figures to ensure the models refuse requests for their name or date of birth.

OpenAI will provide quarterly reports to demonstrate compliance with these commitments until they have all been met, the privacy regulators added.

This report by The Canadian Press was first published May 6, 2026.

Jim Bronskill and Anja Karadeglija, The Canadian Press