Thank you for the question. It's a great question.
We spend a lot of time at the GDI thinking through how we define the problem. So many of the mainstream or common definitions of disinformation rest on a sort of false dichotomy of true and false. People say that disinformation is intentionally lying on the Internet, but I always say that this simplistic definition doesn't pass what I like to call "the Santa Claus test": if it were really just about someone intentionally lying on the Internet, we would be clamouring to deplatform every mention of Santa Claus, and we're clearly not doing that.
We look at disinformation through the lens of what we call "adversarial narrative conflict". This is any time someone is peddling an intentionally misleading narrative, either implicitly or explicitly, often by combining cherry-picked elements of the truth with falsehoods. Quoting someone accurately but out of context, and then defending it by saying, "Well, that was just quoting them accurately," without presenting the fuller picture, is an example of cherry-picking an element of the truth to craft a potentially misleading narrative.
Any time someone is intentionally peddling one of those misleading narratives that, in our view, is adversarial against an at-risk group or individual, an institution like science or medicine, or a democratically elected government, and that, most importantly, carries with it a risk of harm, that to us is disinformation.