That's a great question.
We see a lot of personal damage, such as psychosis or people taking their own lives. Much of it stems from a lack of education about what AI is and where its boundaries lie. We go directly to a platform like ChatGPT the way we would go to Google, asking it questions. There is no mention that AI will hallucinate some meaningful fraction of the time, or that it is trained to tell us what we want to hear.
Right now, we have a kind of implicit relationship in which we trust AI as we would a doctor, a Ph.D., or someone who has passed the bar exam. The damage comes from the fact that it still hallucinates to such a degree. In that regard, I think users really lack education.
