Variation between observers is one of the trickiest biases to account for in observational data. If 10 birders independently walk around a park for an hour and record what they see, they’ll each end up with a different checklist of birds. This is not a bad thing, as long as we can understand these differences. By understanding this variation, we can ensure that every eBird checklist is as valuable as it possibly can be. Beginning birders can submit complete checklists knowing that they’re still collecting valuable data, and anyone using eBird data for analysis can minimize inter-observer biases. And all you have to do is go eBirding! Thanks to lead author Ali Johnston for the summary below of her recent work on observer expertise, most recently “Estimates of observer expertise improve species distributions from citizen science data,” published in Methods in Ecology and Evolution. This new paper builds upon the 2015 paper (with Ali as co-author) that described estimating observer expertise from species accumulation curves.
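A species accumulation curve simply tracks how many unique species an observer has recorded as their checklists pile up; experienced birders tend to accumulate species faster. The sketch below is a rough illustration of that idea only, not the paper’s actual statistical model. The observer labels, species codes, and the simple per-checklist score are all invented for the example.

```python
# Toy illustration (not the published model): comparing observers by
# how quickly their species totals accumulate across checklists.

def accumulation_curve(checklists):
    """Cumulative count of unique species after each checklist."""
    seen = set()
    curve = []
    for checklist in checklists:
        seen.update(checklist)
        curve.append(len(seen))
    return curve

# Invented example data: each inner list is one checklist of species codes.
observers = {
    "experienced": [["amro", "bcch", "rbnu"],
                    ["amro", "heth", "ybsa"],
                    ["bcch", "wbnu", "dowo"]],
    "beginner":    [["amro"], ["amro", "bcch"], ["amro"]],
}

for name, lists in observers.items():
    curve = accumulation_curve(lists)
    # A crude expertise proxy: unique species accumulated per checklist.
    score = curve[-1] / len(lists)
    print(name, curve, round(score, 2))
```

Even on this toy data, the curve for the experienced observer climbs steeply while the beginner’s flattens quickly, which is the signal the expertise estimate is built on.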
Your valuable eBird sightings contribute to many scientific studies. They have helped scientists discover more about bird migration, investigate species hybridization, map the distributions of rare birds, optimize bird conservation, and so much more. It is this link between birdwatchers and science that makes eBird so powerful.
It’s great to know that, as a birdwatcher myself, the birds I report contribute to so many scientific advances. However, as a relatively inexperienced birdwatcher, I sometimes wonder whether my checklists are good enough to contribute to all this high-quality science. We all know those experienced birders who can identify a flyover shorebird from a single distant call, or confidently call out a warbler as it flits through the trees. I know that my checklists will generally include far fewer species than the checklists from these experienced birders. There might have been some songbird calls I struggled to identify, or some distant ducks that I couldn’t place to species. For this reason, my finger sometimes hovers over the complete checklist question, wondering whether my list really qualifies as a complete checklist. If I tick this checklist as ‘complete’, am I adding poor data into the database? Not at all!
We’ve recently published research that uses eBird data to demonstrate how every eBird checklist can be useful in scientific research. We developed an approach that allows us to identify the more experienced birders, the average birdwatchers, and those, like me, who are just learning how to birdwatch. When this expertise information is included in analyses, the statistical algorithms take account of the different levels of skill among different birdwatchers. We found that estimates of bird distributions improved when we accounted for these differences among birdwatchers.
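One simple way to picture how an analysis can “take account of” observer skill is to let the chance of a species being recorded depend on both the site and the observer. The sketch below is a toy logistic model for illustration only; the coefficient values, the standardized expertise scores, and the `detection_prob` function are all invented and are not the model from the paper.

```python
import math

# Hedged illustration: a toy model where the probability that a species
# is recorded on a checklist depends on site quality AND observer
# expertise. All coefficients and inputs are made up for this example.

def detection_prob(site_quality, expertise, b0=-1.0, b_site=2.0, b_exp=1.5):
    """Logistic model: P(species recorded) given site and observer terms."""
    z = b0 + b_site * site_quality + b_exp * expertise
    return 1 / (1 + math.exp(-z))

# Same site, two observers with different (standardized) expertise scores.
site = 0.6
print(round(detection_prob(site, expertise=1.0), 2))   # skilled observer
print(round(detection_prob(site, expertise=-1.0), 2))  # beginner
```

Because expertise is a separate term, the model attributes a beginner’s shorter list to the observer rather than to the birds being absent, which is exactly the correction that sharpens the distribution estimates.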
By including skill level in the analyses of eBird data, we can be confident that data from all eBirders can contribute to science. When I miss an overhead Pectoral Sandpiper call or fail to identify a distant female duck, the analysis accounts for the fact that I’m still learning some of these species and that I generally detect fewer species than those who can identify just about everything they encounter. This means that the ‘complete checklist’ question really asks whether this was a complete checklist for me: did I include all the species that I was able to identify? Complete checklists are very important for using eBird data, and now I can confidently tick ‘complete’ because all the species that I was able to identify were included. This research is another step forward in making the bird sightings contributed to eBird as useful as possible for bird science and conservation.
If you do research that uses eBird data, and want your work featured for the eBird community as an eBird Science post, please write to us and include the words “eBird Science” in the subject.