Thank you.
I think each model is developed in the context of the particular study that produced it, so models developed in Asia also have many biases. They are just a different set of biases from those in models developed by Canadians or Americans.
For example, studies of object recognition tools have shown that they are not as good at recognizing the same objects, such as soap, when the images come from a different country than the one where the training dataset was collected.
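As a rough illustration of how such a disparity can be surfaced, here is a minimal sketch that groups a model's predictions by the region where each image was collected and compares per-region accuracy. The records, field names, and regions are hypothetical placeholders, not data from any study cited here.

```python
# Minimal sketch: measure per-region accuracy of an object-recognition model.
# All records below are hypothetical placeholders.
from collections import defaultdict

# Each record: (region the image was collected in, true label, model's predicted label)
predictions = [
    ("north_america", "soap", "soap"),
    ("north_america", "soap", "soap"),
    ("south_asia",    "soap", "food"),
    ("south_asia",    "soap", "soap"),
    ("east_africa",   "soap", "container"),
]

correct = defaultdict(int)
total = defaultdict(int)
for region, truth, predicted in predictions:
    total[region] += 1
    if predicted == truth:
        correct[region] += 1

# Large gaps between regions suggest the training data under-represents
# how the same object appears in some parts of the world.
for region in sorted(total):
    accuracy = correct[region] / total[region]
    print(f"{region}: {accuracy:.0%} top-1 accuracy")
```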
There are ways to mitigate this, but doing so requires involving many different people with different perspectives, because there really is no universal viewpoint. I don't think there is ever a way to remove all of the biases from a model, because biases themselves are relative to a particular societal context.