Thank you, Mr. Chair.
I do support the idea of a federal advisory council, as everyone here today has testified. This technology is moving very fast. It poses new opportunities and new challenges. Bringing in top expertise in an advisory role is an excellent idea.
Of the three topics I would most want to address, the first is how to use the technology to augment labour rather than automate it. I don't think we should take it as a given that augmentation will necessarily occur. Countries steer technologies. Nuclear energy is used by North Korea solely for offensive weapons. It's used by Japan solely for energy generation; Japan has no offensive nuclear weapons. That's a choice made by a country; it's not a characteristic of the technology.
How to use it well to augment workers is the first thing.
The second thing is protection for workers. As I noted, undue surveillance, high-stakes decision-making by opaque algorithms, and AI's appropriation of workers' creative work without compensation should be regulated. We have fair use when it comes to intellectual property, but the laws were not written for AI.
The final thing I would raise is visibility into these technologies. They are opaque. They're making high-stakes decisions, and often the creators of these technologies will not even disclose what sources of data have been used for training. I don't think that's acceptable.
I think there is a public interest in making sure that machines making important decisions (and valuable decisions; I use and support AI) are understandable to regulators and to consumers.