I'm interested in the question of AI. I see that it gets promoted a lot in smart governance. We can use AI to help fight climate change. I'm not making that up; I saw that. We can use AI, and it will help to mitigate natural disasters.
No offence to my colleagues on the government side, but governments love things with all the bells and whistles, things that seem magical and promise miracle cures.
I'm interested in the disenfranchisement of citizens through the use of AI, and the way some people end up as winners and others as losers in the digital and social realms. The assumptions built into AI models will have social impacts.
How do we, in our committee work, ensure that when we're talking about AI, we apply an ethical and transparent lens so that people are not targeted or disenfranchised because they don't fit the algorithm?