Thank you so much, and thank you to the witnesses for being here.
This is our last day of listening to witnesses. All of the witnesses have contributed to a very interesting conversation on the subject, bringing forward many points that complement one another.
I want to speak to a point that's been raised by more than one person, specifically about how machine learning works and where the data comes from. I think you just referenced data coming from many, many different sources. If the data we're using to build the AI through machine learning reflects existing biases, there's no question that bias will be embedded in the technology we're building.
Technology mirrors society as a whole. Here's a good example. If AI were being used in the judicial system, it might be trained on, let's say, the last 70 years of court cases. If we acknowledge that those 70 years of decisions reflect systemic barriers, then an AI built through machine learning on those datasets would be making decisions based on that data, with the same bias embedded in it.
The big question is this. I think Mr. Soucy brought up the fact that we need to be careful that the technology we're putting forward doesn't set bias against some workers. I guess my question for the union representative is this: how do we use the collective agreement process, and how do we hold companies accountable, when the datasets they're using are often kept in a black box that's not shared with the public? These algorithms are private.
How do we strike a balance between what's being built and how it serves workers in general?
That question is for Mr. Soucy.