
Information & Ethics committee  It's not always clear what your image is used for. I don't know that phones these days do this, but they could be collecting that data and using it to train other models. Consent isn't always clear, and that applies to all of these cases.

April 4th, 2022 | Committee meeting

Angelina Wang

Information & Ethics committee  There's a new version of training called “federated learning”, where you can keep the image on your device the entire time, but you still send an update. You tell the model how it should adjust its parameters such that it can better classify your own image. In this case, the

April 4th, 2022 | Committee meeting

Angelina Wang
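
A minimal sketch of the federated-learning flow described in the excerpt above, assuming a toy logistic-regression model and hypothetical helper names (`local_update`, `aggregate`); the point it illustrates is that the raw image stays on the device and only a parameter update is sent to the server:

```python
import numpy as np

def local_update(weights, image, label, lr=0.1):
    """Compute a gradient step on-device; the raw image is never transmitted."""
    pred = 1 / (1 + np.exp(-image @ weights))   # model's current guess (logistic-regression stand-in)
    grad = (pred - label) * image               # gradient of the log-loss for this one image
    return -lr * grad                           # the only thing that leaves the device

def aggregate(weights, updates):
    """Server averages the clients' updates and applies them to the shared model."""
    return weights + np.mean(updates, axis=0)

# One toy round of federated training: three devices, each holding one private image.
rng = np.random.default_rng(0)
weights = np.zeros(8)
private_data = [(rng.normal(size=8), rng.integers(0, 2)) for _ in range(3)]

updates = [local_update(weights, img, lab) for img, lab in private_data]  # computed locally
weights = aggregate(weights, updates)                                      # only updates are shared
print(weights)
```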

Information & Ethics committee  Yes, I do. We understand too little right now. We shouldn't deploy them yet, if ever.

April 4th, 2022 | Committee meeting

Angelina Wang

Information & Ethics committee  The ones I can think of are HireVue and some of these interviewing platforms.

April 4th, 2022 | Committee meeting

Angelina Wang

Information & Ethics committee  I think that is for me. What the REVISE tool mostly does is try to find different patterns and correlations present in datasets that are likely to propagate into models that are trained on the dataset. It is not guaranteed by any means to find all the possible correlation

April 4th, 2022 | Committee meeting

Angelina Wang
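
The tool referred to above appears to be a dataset auditor in the spirit of Ms. Wang's REVISE work. A rough sketch of the kind of check such a tool might run, here a simple co-occurrence comparison on fabricated annotations; the function names and threshold are illustrative, not the tool's actual interface:

```python
def cooccurrence_rate(annotations, group, concept):
    """Fraction of images labelled with `group` that also contain `concept`."""
    group_imgs = [a for a in annotations if group in a["labels"]]
    if not group_imgs:
        return 0.0
    return sum(concept in a["labels"] for a in group_imgs) / len(group_imgs)

def flag_skewed_concepts(annotations, group_a, group_b, threshold=0.2):
    """Report concepts whose co-occurrence differs sharply between two groups:
    the kind of correlation a model trained on the dataset could pick up."""
    concepts = {l for a in annotations for l in a["labels"]} - {group_a, group_b}
    flagged = []
    for c in concepts:
        gap = cooccurrence_rate(annotations, group_a, c) - cooccurrence_rate(annotations, group_b, c)
        if abs(gap) >= threshold:
            flagged.append((c, round(gap, 2)))
    return flagged

# Tiny fabricated annotation set, only to show the shape of the check.
annotations = [
    {"labels": {"woman", "kitchen"}}, {"labels": {"woman", "kitchen"}},
    {"labels": {"woman", "office"}},  {"labels": {"man", "office"}},
    {"labels": {"man", "office"}},    {"labels": {"man", "kitchen"}},
]
print(flag_skewed_concepts(annotations, "woman", "man"))
# e.g. [('kitchen', 0.33), ('office', -0.33)] -> skews worth reviewing
```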

Information & Ethics committee  Sure, yes. Two of the points that I brought up are interpretability and brittleness. For brittleness, bad actors are able to just trick the model in different ways. In the specific study I'm referring to, they print a particular pattern on a pair of glasses, and through this, t

April 4th, 2022 | Committee meeting

Angelina Wang
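
The brittleness point refers to adversarial examples such as the printed-glasses attack. A toy illustration of the general idea, a fast-gradient-sign-style perturbation of a linear classifier's input; this is a generic sketch, not the specific study's method:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Toy linear "recognition" model: score > 0.5 means "match".
weights = np.array([1.0, -2.0, 0.5, 1.5])
image   = np.array([0.8, -0.3, 1.0, 0.2])

# Fast-gradient-sign-style perturbation: move each input feature a bounded
# amount (epsilon) in the direction that most reduces the match score.
epsilon = 0.5
adversarial = image - epsilon * np.sign(weights)

print("original score: ", round(float(sigmoid(image @ weights)), 3))        # ~0.90 -> "match"
print("perturbed score:", round(float(sigmoid(adversarial @ weights)), 3))  # ~0.43 -> "no match"
```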

Information & Ethics committee  It's very hard to think about, because none of these technologies are ever going to be used in a vacuum, and they're always situated in a particular social context. Even if you had some sort of facial recognition system that worked perfectly, or at least the same across different

April 4th, 2022 | Committee meeting

Angelina Wang

Information & Ethics committee  I'm also not familiar with this.

April 4th, 2022 | Committee meeting

Angelina Wang

Information & Ethics committee  Thank you. I think that each model is developed in the context of the setting that it's made in, and so models developed in Asia also have lots of biases. They are just a different set of biases than models that have been developed by Canadians or Americans. For example

April 4th, 2022 | Committee meeting

Angelina Wang

Information & Ethics committee  I'm sorry. I don't think I'm familiar enough with that.

April 4th, 2022 | Committee meeting

Angelina Wang

Information & Ethics committee  I think that because you can acquire facial images without any sort of consent, and there are so many errors, and you don't really know why a model would make a particular decision, that would go against human rights.

April 4th, 2022 | Committee meeting

Angelina Wang

Information & Ethics committee  Bias amplification refers to a notion of bias that is often thought of as just a correlation in the data. This correlation could be between some particular demographic group and some concept that they are stereotypically related to. Because machine learning models are trying to p

April 4th, 2022 | Committee meeting

Angelina Wang
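
One way to make the bias-amplification idea above concrete is to compare how often a group and a concept co-occur in the training data with how often the trained model predicts them together; amplification means the model's rate exceeds the data's rate. A small sketch on fabricated counts, in the spirit of, but not reproducing, the published bias-amplification metrics:

```python
def cooccurrence(pairs, group, concept):
    """P(concept | group) estimated from (group, concept) pairs."""
    group_pairs = [p for p in pairs if p[0] == group]
    return sum(p[1] == concept for p in group_pairs) / len(group_pairs)

# Fabricated example: in the training data, "cooking" co-occurs with "woman"
# 66% of the time; the trained model predicts that pairing 84% of the time.
train_pairs = [("woman", "cooking")] * 66 + [("woman", "other")] * 34
model_pairs = [("woman", "cooking")] * 84 + [("woman", "other")] * 16

data_rate  = cooccurrence(train_pairs, "woman", "cooking")   # 0.66
model_rate = cooccurrence(model_pairs, "woman", "cooking")   # 0.84

print(f"dataset correlation: {data_rate:.2f}")
print(f"model correlation:   {model_rate:.2f}")
print(f"bias amplification:  {model_rate - data_rate:+.2f}")  # positive -> amplified
```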

Information & Ethics committee  Sure, in predictive policing, if communities of colour and different neighbourhoods with higher proportions of Black citizens have higher levels of crime, then predictive policing models may predict those communities in the future to be more likely to have crime, even if

April 4th, 2022 | Committee meeting

Angelina Wang
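
The predictive-policing example above amounts to a feedback loop: neighbourhoods with more recorded crime get more patrols, patrols generate more records, and the next round of training sees an even stronger pattern. A toy simulation with entirely made-up numbers, only to show how an initial recording disparity can grow even when the true rates are identical:

```python
# Two neighbourhoods with identical true crime rates. Neighbourhood A starts
# with slightly more historical recorded crime, purely because it was patrolled more.
true_incidents = {"A": 100, "B": 100}      # actual incidents per period (identical)
recorded_history = {"A": 60, "B": 40}      # what the dataset says so far

for period in range(5):
    # The "predictive" model sends most patrols wherever the historical data
    # shows more crime (a greedy allocation).
    hot = max(recorded_history, key=recorded_history.get)
    patrol_share = {n: (0.8 if n == hot else 0.2) for n in recorded_history}

    # Only patrolled crime gets recorded, so the dataset grows fastest where
    # the patrols already are, regardless of the identical true rates.
    for n in recorded_history:
        recorded_history[n] += true_incidents[n] * patrol_share[n]

    totals = {n: int(v) for n, v in recorded_history.items()}
    print(f"period {period}: recorded totals = {totals}")
```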