Good afternoon.
Thank you for your question. It's a good one.
I was at a telecommunications symposium recently, and one of the issues discussed was how reliable AI systems can be when they are trained on data that aren't entirely reliable. For example, when the Internet is used to train an AI system, it captures everything out there, some of it true and some of it false.
How do you make sure an AI system trained on those data is reliable?
When that question was put to business people in the telecommunications sector, they all evaded it. The reason I'm telling you that story is that, afterwards, I spoke with the person moderating the panel discussion. She is herself a technology expert, and she said that the only way to make sure the data are high quality is to require companies to disclose where the data used to train their systems came from. Developers would have to tell companies purchasing AI software whether the systems were trained on data pulled from the Internet, private corporate data, academic data or government data.
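To make that idea concrete, here is a minimal sketch of what such a provenance disclosure could look like as a simple record handed to a purchaser. The categories follow the ones just mentioned, but the field names and structure are only hypothetical illustrations, not drawn from any existing standard or regulation.

```python
# A minimal sketch of a training-data provenance disclosure.
# All field and category names here are hypothetical illustrations,
# not any standard or regulatory requirement.
from dataclasses import dataclass
from enum import Enum


class SourceCategory(Enum):
    INTERNET = "internet"            # data pulled from the public Internet
    PRIVATE_CORPORATE = "private"    # a company's own proprietary data
    ACADEMIC = "academic"            # research or academic datasets
    GOVERNMENT = "government"        # government or public-sector data


@dataclass
class TrainingDataDisclosure:
    """One entry describing where a slice of the training data came from."""
    category: SourceCategory
    description: str                 # e.g. "web crawl text", "internal call logs"
    share_of_training_data: float    # rough fraction of the training set, 0.0-1.0


# Example: what a developer might hand to a company buying the AI system.
disclosure = [
    TrainingDataDisclosure(SourceCategory.INTERNET, "web crawl text", 0.7),
    TrainingDataDisclosure(SourceCategory.ACADEMIC, "published research corpora", 0.2),
    TrainingDataDisclosure(SourceCategory.GOVERNMENT, "open government records", 0.1),
]

for entry in disclosure:
    print(f"{entry.category.value}: {entry.description} "
          f"({entry.share_of_training_data:.0%} of training data)")
```

The point of a record like this is simply that a purchaser could see, at a glance, how much of the training data came from the open Internet versus more controlled sources.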