Thank you very much.
I'm pleased to be here to testify as an individual.
I'm a strategic advisor in artificial intelligence. I've spent my entire career using AI technology, which became available in the early 2000s. I worked in operational research, artificial intelligence, and applied mathematics. I developed tools and software that have been used around the world. In 2016, I founded Element AI and was the company's president until it was sold to ServiceNow in 2021.
I have frequently collaborated internationally. For two years, I was the co‑chair of the working group on innovation and marketing for the Global Partnership on Artificial Intelligence. I also represented Canada on the European Commission's high-level expert group on artificial intelligence. Canada was the only country to have participated that was not in the European Union. I co‑chaired the drafting of the main deliverable on regulation and investment for trustworthy artificial intelligence.
I was involved in many events held by the Organization for Economic Co‑operation and Development and the Institute of Electrical and Electronics Engineers, in addition to many other international contributions. I was also a member of federal sectoral economic strategy tables for digital industries.
Despite Canada's track record in artificial intelligence research, and its undeniable contribution to basic research, it has gradually been losing its leadership role. It's important to be aware of the fact that we are no longer at the forefront. Our researchers now have limited resources. Conducting research and understanding what is happening in this field today is extremely expensive, and many innovations will emerge in the private sector. It's a fact. Much of the work being published by researchers has been done in collaboration with foreign firms, because that's how they can get access to the resources needed to train models and conduct tests, so that they can continue to publish and come up with new ideas.
Canada has always been somewhat less competitive than the United States, and although things have not gotten worse, they haven't improved either. Artificial intelligence is an essential technology that I like to compare, quite literally, to energy: we're talking about intelligence, know-how and capabilities. It's a technology that is already being deployed in every industry and every sphere of life. Absolutely no corner of society is unaffected by it.
What I would like to underscore is the importance of not treating artificial intelligence homogeneously, just as the various regulations and statutes for oil, natural gas and electricity are not so treated. I could even start breaking it down into all the subsidiary aspects of production for each of these resources. It's very difficult to treat artificial intelligence in the same way for each of its applications. Everything is moving forward very quickly and it's highly complex, and when you put all the facts together, we feel overwhelmed. That, unfortunately, is what we hear all too often in the media. We've been here for quite a while and we've already heard words like "fear" and "advancement." There has also been talk of uncertainty about the future.
So, to return to the subject at hand, yes, it's absolutely urgent to take action. I am in no way hinting that measures ought not to be taken, but they ought to be appropriate for the situation now facing us.
We are facing a rapidly evolving, complex situation that affects every sphere of society. It's important to avoid adopting a single, straightforward and overly forceful response. What would happen if we took that kind of approach? We would perhaps protect ourselves, but it would certainly prevent us from taking advantage of opportunities and promoting the kind of economic development and productivity growth that would enrich the whole country. That's simply a fact. We can't deal with every single potential situation, because it would be too complex.
If we try to do everything and cover all aspects, our regulations will be too vague, ineffective and misunderstood. The economic outcome of vague regulation—you know this better than I do—will be that investments will not flow in. If consequences are unclear or definitions left until later, companies will simply invest elsewhere. It's a highly mobile digital field. Many Canadian workers compile data and train models in the United States, on behalf of our companies and our universities, beyond the reach of our own rules. It's important to be aware of that.
I believe that these are the key elements. They are central to our deliberations about how to write the rules, and in particular the way that they will be fine-tuned. Not only that, but they will guide the effort required to do the work properly and come up with a clear and accurate regulatory framework that promotes investment. With a framework like that, we'll know exactly what we are going to get if we make such and such an investment, and we'll understand exactly what the costs will be to provide transparency, to be able to publish data and to check that they have been anonymized.
That would enable organizations to invest as much as they and we want. If we are clear, organizations will be able to do the computations and decide whether or not to invest in Canada and deploy their services here. It will then be up to us to determine whether the bar has been set too high and whether the criteria are overly restrictive.
Vague regulations would guarantee that nothing will happen. Companies will simply go elsewhere because it's too easy to do so. Various other elements are on my list, and I will summarize them. Please excuse me for not having done so prior to my presentation. I will send the committee all the details and recommendations with respect to the adjustments that should be made.
In this regulatory framework, I believe that transparency will be very important if there is to be a climate of trust. It's important to ensure that users of the technology are aware that they are interacting with it. Some questions and subjects arise in all industries. It's important to be able to know what we are getting.
I'm talking about the underlying principles: stating what services we can access, their parameters and their specifications. If a service changes or its model is updated, that would enable us to assess the repercussions of using it. There are also all the other principles that would ensure people are not being manipulated and that require compliance with ethical and other standards. These are fundamental principles that must be part of the regulatory framework.
One of my most serious concerns is the lack of specificity and the possibility that the law would be too broad in scope. I learned a lesson from my participation in what led to the European Union's artificial intelligence law. Europe tried to come up with exhaustive legislative measures that attempted to include almost everything. However, many of the recommendations made by the committee at the time focused on the need to work with industry, the need for accuracy and avoiding a piece of legislation that tried to cover everything.
Of course, something new always comes up. It could be generative artificial intelligence or the next generation of artificial intelligence as applied to cybersecurity, health and all aspects of the economy, services and our lives. There's always something that has to be amended or altered.
My view is that caution is needed in this respect, along with an extremely surgical approach: developing regulations specific to each and every industry sector, with that sector's assistance, the automobile sector for instance.