Automating the Surveillance of Mosquito Vectors from Trapped Specimens Using Computer Vision Techniques


Mona Minakshi (University of South Florida)
Pratool Bharti (Northern Illinois University)
Willie B. McClinton III (University of South Florida)
Jamshidbek Mirzakhalov (University of South Florida)
Ryan M. Carney (University of South Florida)
Sriram Chellappan (University of South Florida)


Session: 3.3. Data-driven sustainability

Abstract: Among all animals, mosquitoes are responsible for the most human deaths worldwide. Notably, not all mosquito species spread diseases; only a select few are competent vectors. In the event of a disease outbreak, an important first step is surveillance of vectors (i.e., those mosquitoes capable of spreading diseases). To do this today, public health workers lay several mosquito traps in the area of interest, in which hundreds of mosquitoes get trapped. Taxonomists must then identify only the vectors among these hundreds to gauge their density. This process is currently manual, requires complex expertise and training, and relies on visual inspection of each trapped specimen under a microscope; it is slow, tedious, and self-limiting. This paper presents an innovative solution to this problem. Our technique assumes the presence of an embedded camera (similar to those in smartphones) that can take pictures of trapped mosquitoes; the techniques proposed here then process these images to automatically classify genus and species. Our CNN model, based on Inception-ResNet V2 and transfer learning, yielded an overall accuracy of 80% in classifying mosquitoes when trained on 25,867 images of 250 trapped mosquito vector specimens captured via many smartphone cameras. In particular, the accuracy of our model in classifying Aedes aegypti and Anopheles stephensi mosquitoes (both of which are especially deadly vectors) is among the highest. We also present important lessons learned and the practical impact of our techniques.
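
The transfer-learning setup described in the abstract can be sketched as follows. This is not the authors' code; the number of classes, input size, and classifier head are illustrative assumptions. It freezes a pretrained Inception-ResNet V2 backbone (available in Keras) and trains only a small classification head on the mosquito photos.

```python
# Sketch (assumptions, not the paper's implementation): transfer learning
# with Inception-ResNet V2 in Keras for mosquito genus/species classification.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_SPECIES = 9          # assumed number of genus/species classes
IMG_SIZE = (299, 299)    # Inception-ResNet V2's native input size

# Pretrained backbone; in practice use weights="imagenet" for transfer
# learning (weights=None here only avoids the download in this sketch).
backbone = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights=None, input_shape=IMG_SIZE + (3,))
backbone.trainable = False  # freeze the pretrained convolutional features

# Small trainable head on top of the frozen backbone.
model = models.Sequential([
    backbone,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.5),
    layers.Dense(NUM_SPECIES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training would then be: model.fit(train_images, train_labels, ...)
# on the smartphone photographs of trapped specimens.
```

Freezing the backbone is the standard first stage of transfer learning when the labeled dataset (here, 25,867 images of 250 specimens) is small relative to ImageNet-scale pretraining data; selected top layers can later be unfrozen for fine-tuning.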