Thank you so much.
I'm Joel Finkelstein, the chief science officer and the founder of the Network Contagion Research Institute.
Our organization profiles a lot of different threats facing governments, democracy and vulnerable communities. There are two that I want to bring to the attention of lawmakers today, because I think they're highly emblematic of the kinds of threats that lawmakers often can't see, that platforms themselves have challenges policing, and that have, I think, an intrinsic capacity for a profound breakout in the near future, in ways that could create terrible harms for society and for vulnerable communities.
The first one that we talk about a lot is child harms. There's been a surge of online child harms through deceptive practices using AI.
The second is platform-scale manipulation by state actors. In this case, we're talking about TikTok.
In the first case, we found that cybercriminal syndicates in West Africa were using AI to impersonate beautiful women, complete with videos, pictures and images. They would speak to teenagers. There was a 1,000% increase in cases where they would impersonate women to lure these teenagers into compromising positions and then “sextort” them. This has led to 21 suicides, several of them in Canada, of troubled children who were sextorted this way.
You can well imagine how this is going to be applied to the elderly. Platforms are terrible at policing this. This criminal syndicate from Nigeria was passing out manuals on how to do this on TikTok, YouTube and Scribd. That is facilitating a breakout of this kind of crime, and it is only one example of something that has the capacity to severely disarm lawmakers as it begins interfering with other processes among the elderly and youth.
These kinds of catfishing schemes and harms are very challenging to police. We need investigative mechanisms to understand them and unearth them more rapidly in order to address them. I sent you reports on that and I encourage everyone to take a look.
The other issue is not just individual actors who are empowered by technology, but the manipulation of entire platforms. NCRI performed research on TikTok, with its 1.5 billion users, and looked at inexplicable discrepancies in material that is sensitive to the Chinese Communist Party. We examined hashtags on Israel, Ukraine and Kashmir, as well as hashtags pertaining to Tibet and the South China Sea.
In some cases, these hashtags were 50 times more prevalent on comparable platforms than they were on TikTok, an incredible discrepancy that argued for the mass suppression of some information and the promotion of other information through a charm offensive.
Genocide denial.... These problems are rampant on TikTok in a way that creates an “Alice in Wonderland” reality for 1.5 billion users. Our social psychology analysis suggests that this is impactful and alters the psychology of users towards a more friendly, pro-China stance on a massive scale.
Understanding these kinds of problems requires that parliamentarians and democratic bodies have greater insight and investigative capacity at their fingertips, so that they can rapidly explore and understand emerging threats before those threats get the better of them.
I will cede the rest of my time.