One area we examine is the use of artificial intelligence in medical diagnosis. We apply three criteria: Can the system approve or deny consequential services? Does it infringe on human rights or human dignity? Are there health and safety issues at stake?
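To make that rubric concrete, here is a minimal sketch of how the three questions could be encoded as a triage checklist. The `UseCaseReview` type and `needs_full_review` function are hypothetical illustrations, not actual committee tooling.

```python
from dataclasses import dataclass

@dataclass
class UseCaseReview:
    """Hypothetical triage record for a proposed AI use case."""
    approves_or_denies_services: bool  # Can it approve/deny consequential services?
    affects_rights_or_dignity: bool    # Does it infringe on human rights or dignity?
    has_health_safety_impact: bool     # Are health and safety issues at stake?

def needs_full_review(case: UseCaseReview) -> bool:
    """A 'yes' to any of the three criteria triggers a full committee review."""
    return (case.approves_or_denies_services
            or case.affects_rights_or_dignity
            or case.has_health_safety_impact)

# Example: a diagnostic model headed for the emergency room floor
cxr_model = UseCaseReview(
    approves_or_denies_services=False,
    affects_rights_or_dignity=False,
    has_health_safety_impact=True,  # patient care is directly affected
)
assert needs_full_review(cxr_model)
```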
In one case, researchers were training artificial intelligence algorithms on chest X-rays. They then wanted to deploy the system on the emergency room floor, and they said, “Hey, this might impact patients. We need to understand how this works.” Our committee came together. We reviewed the datasets, including the validity of the open-source dataset and the number of people represented in it. Then we provided guidance back to the researchers who were putting this in place. The guidance was along the lines of, “This is not for clinical use,” because software as a medical device is a completely different area that requires its own certifications.
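As an illustration of the kind of dataset review described here, the sketch below runs basic sanity checks on an open-source chest X-ray dataset's metadata: how many distinct patients it covers, how its labels are distributed, and whether patient-level data leakage is a risk. The file name `metadata.csv` and the columns `patient_id` and `finding_labels` are assumptions for illustration, not the actual dataset the committee reviewed.

```python
import pandas as pd

# Hypothetical metadata file; column names are assumptions for illustration.
meta = pd.read_csv("metadata.csv")  # columns: image_id, patient_id, finding_labels

# How many distinct patients back the dataset? A small patient pool
# inflates apparent sample size, since each patient contributes many images.
n_images = len(meta)
n_patients = meta["patient_id"].nunique()
print(f"{n_images} images from {n_patients} patients "
      f"(~{n_images / n_patients:.1f} images per patient)")

# Label distribution: severe class imbalance is a validity concern
# for any model trained on the data.
label_counts = (
    meta["finding_labels"]
    .str.split("|")   # assume multi-label entries are '|'-separated
    .explode()
    .value_counts()
)
print(label_counts)

# Flag patient-level leakage risk: train/test splits must be made
# by patient, not by image, or evaluation results will be optimistic.
if n_images / n_patients > 1.0:
    print("Multiple images per patient: split by patient_id, not by image.")
```

The images-per-patient check matters because splitting by image rather than by patient lets near-duplicate scans of the same person appear in both training and test sets, overstating the model's performance.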
However, if the goal is to assess whether artificial intelligence can learn from these scans, then that would be an appropriate use. That's how we tend to look at that kind of situation.