That's interesting. Computers never make mistakes. They will calculate the same answer every single time in exactly the same way. The only errors that are ever made are human errors in the programming of that computer.
The reason I'm asking this is that when it comes to the analysis of data, as we look at the review of this legislation, I would suggest that what's actually being passed is not information but data, so I don't know if the act is properly named. That's a moot point.
However, if I were an analyst working on a large set of data—we can call it metadata or we can call it whatever we want—it will come in all shapes and forms. It will be shared in different varieties, different formats, and different platforms, depending on the agency that shares that information and depending on whether it's domestically or internationally sourced, and I would want as much data as I could possibly get to run analytics on. Do you believe that we should be limiting ourselves in the amount of data that we actually have? There's a good discussion here at the table.
I'll give you an analogy. I'm a fisherman as well. Why would you want to limit me to fishing in a certain corner of the lake? Wouldn't you rather just change the nature of the hook that I use and allow me to fish everywhere?