That's a superb question. The accuracy of citizen science data is paramount; otherwise the data risk contaminating and hindering the scientific process, which would of course be the exact opposite of what we all intend.
We have a number of different safeguards. One thing we do in our program, a practice replicated in many others, is to ask citizen scientists to submit their observations in the form of a digital photograph. That is very easy to do these days with cellphone technology. That's one layer.
When that photograph is submitted, we have a panel of experts, people who are the very best folks in the country at identifying butterflies. They can look at that photograph and say, “Okay, this is what you think it is.” Then we can do other things as well. We can say, “Okay, you just said you saw a monarch butterfly, but it's January. I think you might be thinking of something else.” We can do little checks like that. We can evaluate the known flight seasons of different species and say, “Okay, this is when this butterfly is reasonably active, and there's a little bit of error on either side of that time. Is it possible you could have seen this butterfly in this place at this time?”
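As a rough illustration of that kind of flight-season plausibility check, here is a minimal sketch in Python. The species names, season dates, and two-week tolerance window are hypothetical placeholders, not the program's actual reference data or rules.

```python
from datetime import date, timedelta

# Hypothetical flight seasons (stored against a fixed reference year so only
# month/day matter). These dates are illustrative, not the program's data.
FLIGHT_SEASONS = {
    "monarch": (date(2000, 6, 1), date(2000, 9, 30)),
    "mourning cloak": (date(2000, 3, 15), date(2000, 10, 31)),
}

# Allow a little bit of error on either side of the documented flight season.
TOLERANCE = timedelta(days=14)

def is_plausible(species: str, observed: date) -> bool:
    """Return True if the observation date falls within the species'
    known flight season, plus or minus a small tolerance window."""
    season = FLIGHT_SEASONS.get(species)
    if season is None:
        return False  # unknown species: flag for expert review instead
    start, end = season
    # Compare month/day only, so the check works for any year of observation.
    normalized = observed.replace(year=2000)
    return (start - TOLERANCE) <= normalized <= (end + TOLERANCE)

# A monarch reported in January would be flagged as implausible.
print(is_plausible("monarch", date(2024, 1, 12)))  # False
print(is_plausible("monarch", date(2024, 7, 4)))   # True
```

In practice a flagged record would not simply be discarded; it would be routed back for expert review, as described above.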
There are a number of different layers. We don't use unvalidated data for science purposes; we use only the material that has gone through several independent quality checks.
That's a really good question.
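To make that last point concrete, here is a minimal sketch, in the same spirit as the one above, of keeping only records that passed every independent check. The record structure and check names are hypothetical placeholders, not the program's actual data model.

```python
# Hypothetical observation records with the results of each quality check.
observations = [
    {"species": "monarch", "photo_verified": True, "expert_confirmed": True, "season_plausible": True},
    {"species": "monarch", "photo_verified": True, "expert_confirmed": False, "season_plausible": False},
]

CHECKS = ("photo_verified", "expert_confirmed", "season_plausible")

# Only records that pass every independent check go forward for analysis;
# everything else is set aside rather than silently mixed in.
validated = [obs for obs in observations if all(obs[check] for check in CHECKS)]
print(len(validated))  # 1
```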