Thank you, Ms. Gaudreau.
I'll try to answer as quickly as possible, because there were a couple of questions.
For the first part, you made reference to our Viewshare program, which allows people who have put content on the site to share in the revenue from it. A person whose identification we do not have, or who is not part of the model program, cannot take part in that. It's not as though a random person can upload something to the site and get paid for it; the system doesn't work like that. You have to be part of the model program or the content partner program, in which case we have either signed contracts, the 2257 documentation, or the identification of the individuals who uploaded it.
I wanted to ensure that at least that much was understood. You can't just put something up on the site and randomly get a cheque for it. The system doesn't work that way; it would be too open to fraud.
To your other concern, about a video being uploaded, removed and then re-uploaded: as we said before, we dealt with a third-party vendor for many years, and we are still dealing with that vendor. What we saw is that over a long period of time, with the many variables introduced as a video is exchanged, for example when it is re-encoded or otherwise reprocessed, matching it again can become harder. When we created SafeGuard, we had that in mind.
We do a frame-by-frame analysis. Basically, in one second of video you have 30 frames. We analyze the frames, so if we have to reconstitute the image.... Now, obviously there are algorithms and things like that to enhance it. We saw it as an issue, and that's why we started developing this two years ago. We're finally ready to get it out. We're using it on photos already, it will be made available for videos within this month, and then we're going to make it available to any other website on the Internet that wants to use it. We not only saw an issue with that, but there also wasn't a centralized—
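[The frame-by-frame matching described above can be illustrated with a rough sketch: sample individual frames from a video, compute a compact perceptual hash of each, and compare those hashes against the fingerprints of previously removed material. The sketch below is only an assumed illustration of that general idea, not the actual SafeGuard implementation; the average-hash method, the function names, the sampling rate and the thresholds are all hypothetical.]

```python
# Minimal sketch of frame-by-frame video fingerprinting (illustrative only;
# NOT the SafeGuard implementation). Assumes opencv-python and numpy.
import cv2
import numpy as np

def frame_hash(frame: np.ndarray) -> int:
    """64-bit average hash: shrink to 8x8 grayscale, threshold each pixel at the mean."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (8, 8), interpolation=cv2.INTER_AREA)
    bits = (small > small.mean()).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)

def video_fingerprint(path: str, sample_every: int = 30) -> list[int]:
    """Hash roughly one frame per second, assuming ~30 frames per second as in the testimony."""
    cap = cv2.VideoCapture(path)
    hashes, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % sample_every == 0:
            hashes.append(frame_hash(frame))
        index += 1
    cap.release()
    return hashes

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes; small distances mean visually similar frames."""
    return bin(a ^ b).count("1")

def likely_reupload(candidate: list[int], known: list[int], max_distance: int = 10) -> bool:
    """Flag a candidate video if most of its sampled frames closely match a known fingerprint."""
    if not candidate or not known:
        return False
    matches = sum(1 for h in candidate if any(hamming(h, k) <= max_distance for k in known))
    return matches / len(candidate) >= 0.8  # assumed threshold
```

[Because the comparison tolerates a few flipped bits per frame, a re-encoded or reprocessed copy can still match the stored fingerprint even though its bytes differ, which is the point the witness raises about videos changing as they are exchanged.]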