

Industry committee  I cannot speak to specific examples, but I will say that representation is certainly a problem across the AI life cycle—in the data collection, in the data creation, and all the way to how models are tested, on which groups they are tested and how they are deployed.

February 14th, 2024, Committee meeting

Christelle Tessono

Industry committee  The applications that we use on a daily basis include social media, for example. Companies use recommendation and moderation systems that categorize users to sell them products or show them content knowing that it will interest them. For children, this creates mental health issues.

February 14th, 2024, Committee meeting

Christelle Tessono

Industry committee  The EU act creates different thresholds of reporting and transparency requirements for companies deploying different types of AI systems. In Canada, we have reporting and transparency requirements for only a specific class of systems. It means we're more exclusionary. The EU includes more systems within its scope.

February 14th, 2024, Committee meeting

Christelle Tessono

Industry committee  Yes. I think rights are certainly very important to have, but acting on those rights creates an unfair burden on the everyday person. For example, I contested the use of my data. It was a financially, emotionally and physically exhausting process. I did that when I was living in the U.S. as a researcher at Princeton.

February 14th, 2024, Committee meeting

Christelle Tessono

Industry committee  I understand what you're saying. The deficiency arises when there's not an independent commissioner who is empowered to proactively investigate situations and commission audits. Yes, it would be illegal, but it would be dealt with at the courts, and that would take a lot of time and resources.

February 14th, 2024, Committee meeting

Christelle Tessono

Industry committee  Excellent question. I'm happy to respond in French. I know that, so far in Canada, we have six cases involving Black people who were misidentified by facial recognition systems and who lost their refugee status as a result. These cases are currently before the Supreme Court of Canada.

February 14th, 2024, Committee meeting

Christelle Tessono

Industry committee  That's a really good question. I would say that it is important to have in place legislation on artificial intelligence in the country, and I think that legislation should work towards facilitating collaboration across different sectors and departments. What is happening right now in the country is that we have departments working on their own guidelines and their own standards without being able to speak to other experts in other departments—

February 14th, 2024, Committee meeting

Christelle Tessono

Industry committee  It is my understanding that departments in Canada are doing similar work; it's just that they don't have the same powers that agencies and commissions in the United States have. The FTC, for example, can issue orders and fines and penalties and such, but I don't think that is the case for Canada.

February 14th, 2024, Committee meeting

Christelle Tessono

Industry committee  With the code of conduct, the main flaw is that it is voluntary. Companies can choose to adopt it, but it doesn't mean they're obliged to. In order to protect Canadians against harms caused by generative AI, things need to be enforceable.

February 14th, 2024, Committee meeting

Christelle Tessono

Industry committee  Personally, as a researcher, I don't think so. I think the code of conduct is something that industry would have a lot more to say about. What I'll say is that the code of conduct is part of a bigger puzzle on the regulation of artificial intelligence. It's not the only piece needed in order to safeguard Canadians against harms.

February 14th, 2024, Committee meeting

Christelle Tessono

Industry committee  Thank you for your question. The amendment to AIDA proposed by the minister would call for a class of systems that would be considered high-impact, and the class of systems would be subject to a schedule, which would be updated through regulations, if my memory is correct. The European Union, in contrast, has, in its law, explicit systems that are considered unacceptable.

February 14th, 2024, Committee meeting

Christelle Tessono

Industry committee  Yes. I think this is an issue that reflects the lack of conversations between Canadian Heritage and Justice, which is handling the online harms bill, and the people who are handling AIDA. I don't know if they're talking to each other, but the fact that there are already concerns with industry actors speaks to the importance of having collaboration among different departments in the country.

February 14th, 2024, Committee meeting

Christelle Tessono

Industry committee  No, I don't think so.

February 14th, 2024, Committee meeting

Christelle Tessono

Industry committee  No, I wouldn't agree with that, because we already have systems being actively deployed, and we can build on how they are being applied to develop frameworks that are flexible as well. I think it's really a question of building a regulatory infrastructure that is flexible and also inclusive of the different stakeholders involved in the deployment, development and design of AI systems.

February 14th, 2024, Committee meeting

Christelle Tessono

Industry committee  Thank you. Transparency reporting requirements are very useful to policy-makers, researchers and journalists who understand systems and how to better address them, but for the everyday person who is facing these systems, I am reminded of this expression in French: an ounce of prevention is worth a pound of cure.

February 14th, 2024, Committee meeting

Christelle Tessono