I'm happy to answer this.
It's a very important question: how we both identify bias as it's picked up by algorithms and then mitigate it once we know it's there. I think the simplest way to explain the problem is that we live in a biased world, and we're training algorithms and AI with data about the world, so it's inevitable that they pick up these biases and can end up replicating them, or even creating new biases that we're not aware of.
We tend to think of bias in terms of protected attributes—things such as ethnicity, gender or religion, things that are historically protected for very good reasons. What's interesting about AI is that it can create entirely new sets of biases that don't map onto those characteristics or even characteristics that are humanly interpretable or humanly comprehensible. Detecting those sorts of biases in particular is very difficult and requires looking essentially at the set of decisions or outputs of an algorithmic system to try to identify when there is disparate impact upon particular groups, even if they are not legally protected groups.
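To make that concrete, here is a minimal sketch of one common output-level check of that kind: comparing selection rates across groups in a set of decisions and flagging disparate impact. The column names and the 0.8 "four-fifths" threshold are illustrative assumptions, not any particular system's method, and the same check could in principle be run over groups discovered in the data rather than predefined ones.

```python
# Minimal sketch: flag groups whose selection rate falls well below the
# best-off group's rate in a table of decision records.
import pandas as pd

def disparate_impact_report(decisions: pd.DataFrame,
                            group_col: str = "group",      # assumed column names
                            outcome_col: str = "selected",
                            threshold: float = 0.8) -> pd.DataFrame:
    """Compute each group's selection rate and its ratio to the best-off group."""
    rates = decisions.groupby(group_col)[outcome_col].mean()
    ratios = rates / rates.max()
    report = pd.DataFrame({"selection_rate": rates,
                           "ratio_to_best": ratios,
                           "flagged": ratios < threshold})
    return report.sort_values("ratio_to_best")

# Example usage with made-up decision records.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "C", "C"],
    "selected": [1,   1,   0,   1,   0,   0,   0,   1,   1],
})
print(disparate_impact_report(decisions))
```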
Besides that, there is quite a bit of research, and methods are being developed, to detect gaps in the representativeness of data and also to detect proxies for protected attributes that may or may not be known in the training phase. For example, postal code is a very strong proxy for ethnicity in some cases. Much of that work is about discovering more proxies like that.
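As a rough illustration of how such a proxy might be detected, the sketch below simply tests how well one candidate feature predicts a protected attribute on held-out data; accuracy far above the majority-class baseline suggests the feature is carrying that information. The dataset, column names and model choice are illustrative assumptions rather than a standard method.

```python
# Sketch: does a single feature (e.g. postal code) predict a protected attribute?
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def proxy_strength(df: pd.DataFrame, feature: str, protected: str):
    """Cross-validated accuracy of predicting the protected attribute from one feature,
    alongside the majority-class baseline for comparison."""
    X = pd.get_dummies(df[[feature]].astype(str))   # one-hot encode the candidate feature
    y = df[protected]
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    accuracy = cross_val_score(model, X, y, cv=5).mean()
    baseline = y.value_counts(normalize=True).max()
    return accuracy, baseline

# Example usage with made-up records in which postal code perfectly tracks ethnicity.
records = pd.DataFrame({
    "postal_code": ["111", "111", "222", "222", "111", "222"] * 5,
    "ethnicity":   ["X",   "X",   "Y",   "Y",   "X",   "Y"]   * 5,
})
acc, base = proxy_strength(records, feature="postal_code", protected="ethnicity")
print(f"held-out accuracy {acc:.2f} vs. majority baseline {base:.2f}")
```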
Again, there are many types of testing, including automated methods and auditing methods, whereby essentially you are doing some sort of analysis of the training data, of the algorithm while it's performing processing, and of the sets of decisions that it produces.
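As a very simple illustration of the training-data stage of such an audit, the sketch below compares each group's share of the training data with a reference distribution, for instance census figures. The groups and reference shares here are made up for illustration.

```python
# Sketch: measure how far each group's share of the training data deviates
# from a reference (e.g. population) distribution.
from collections import Counter

def representation_gaps(training_groups, reference_shares):
    """Return observed share minus reference share for each group."""
    counts = Counter(training_groups)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        gaps[group] = observed - expected   # negative means under-represented
    return gaps

# Example usage with made-up training data and reference shares.
training_groups = ["A"] * 700 + ["B"] * 250 + ["C"] * 50
reference_shares = {"A": 0.6, "B": 0.3, "C": 0.1}
print(representation_gaps(training_groups, reference_shares))
```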
There is, then, no simple answer to how you do it, but there are methods available at all stages.