Actually, I was referring to trouble with automated data use. This comes back to something Ms. Bates mentioned earlier. The expression “open data” does not just mean data that is public and freely accessible; it also means data in a format that computer applications can use.
Some data from the Canadian government are perfectly accessible to people. I had no problem viewing a certain number of maps and accessing certain data. It just so happens, though, that some of the data is in zipped Word files. As a human being, I have no problem unzipping and reading a Word file. However, that unstructured data is much more difficult to analyze directly with a computer, unless you use natural language processing technologies to extract the unstructured text and give it a structure that can be used.
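To illustrate the point above, here is a minimal sketch in Python. A .docx Word file is itself a ZIP archive whose text lives inside XML markup; the file names and sample content here are hypothetical, invented only to show that a program can recover the words but not the structure a data analysis would need.

```python
import io
import re
import zipfile

def extract_docx_text(data: bytes) -> str:
    """Pull the raw prose out of a .docx, which is itself a ZIP archive."""
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        xml = zf.read("word/document.xml").decode("utf-8")
    # Crude tag stripping: this recovers the words, but none of the
    # structure (tables, headings, fields) automated analysis would need.
    return " ".join(re.sub(r"<[^>]+>", " ", xml).split())

# Build a toy "zipped Word file" in memory to stand in for a download.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("word/document.xml",
                "<w:document><w:t>Budget 2024: $1.5M</w:t></w:document>")

print(extract_docx_text(buf.getvalue()))  # prints "Budget 2024: $1.5M"
```

A human reads this output effortlessly; a program still has to guess which part is a label, a year, or an amount, which is exactly the gap NLP extraction tries to fill.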
One solution would be to take suitable data sets and publish them in formats that are already structured. CSV files are one example of structured data. It is also possible to go all the way to a true RDF format, the champion of “reusability”.
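By contrast with the Word-file case, structured formats like CSV need no extraction step at all. A minimal sketch, using an invented sample row purely for illustration:

```python
import csv
import io

# Hypothetical CSV release of the same kind of figures: each field is
# already labeled, so a program can use the data directly.
raw = "program,year,amount\nBudget,2024,1500000\n"

rows = list(csv.DictReader(io.StringIO(raw)))
total = sum(int(row["amount"]) for row in rows)
print(rows[0]["program"], total)  # prints "Budget 1500000"
```

Every value arrives with a column name and can be summed, filtered, or joined immediately; RDF goes further still by also making the relationships between values explicit.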
I am not an expert in every format, for example those used for visual geographic data or maps, nor in how those files separate the underlying digital data from the part that enables them to be displayed. But there is certainly a subset of that data that could be stored in tabular format.
The idea is to structure the information. That is what will make the data more open in the intended sense. The fact that the data is accessible to Canadians poses no problem whatsoever, but that is not what actually makes for open data. You may have 190,000 data sets and go through them trying to find something of interest. However, the principle of open data is about having more easily reusable formats.