Name
#47 Classification of Gender and Systemic Disease from External Eye Photographs using Deep Learning
Speakers
Content Presented on Behalf of
DHA
Services/Agencies represented
US Army
Session Type
Posters
Room#/Location
Prince Georges Exhibit Hall A/B
Focus Areas/Topics
Medical Technology
Learning Outcomes
1. Describe the means by which external eye photographs can be used to detect health conditions.
2. Discuss the potential of remote diagnosis/screening using smartphone cameras for military health readiness and care.
3. Discuss the results of explainability analysis in illustrating deep learning models.
Description
Background: Artificial intelligence (AI) and machine learning (ML) have been explored for medical diagnostics and screening across an array of imaging modalities. We investigated the utility of AI/ML in the diagnosis of health conditions by developing deep learning models trained on external eye photographs taken with a smartphone camera.
Methods: A dataset of over 25,000 external eye images and corresponding clinical data points collected at various locations throughout India was utilized. The images and clinical data were used in the training and testing of deep learning models for the classification of gender and the detection of systemic diseases, including hypertension, diabetes, and COVID-19. External eye images were captured via smartphone camera by trained technicians, and data collection complied with relevant research ethics regulations. Explainability techniques, including saliency maps, were employed to provide insight into the deep learning models.
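The saliency maps named above attribute a model's prediction to individual input pixels via the gradient of the output with respect to the input. A minimal sketch of the idea, using a hypothetical logistic-regression "model" over a flattened image (not the study's actual architecture, whose gradients would be computed by a deep learning framework):

```python
import numpy as np

# Hypothetical toy setup: an 8x8 "image" scored by a sigmoid linear model.
rng = np.random.default_rng(0)
H, W = 8, 8
weights = rng.normal(size=H * W)   # stand-in for trained model weights
image = rng.random(H * W)          # stand-in for an external eye photo

def predict(x, w):
    """Sigmoid score for the positive class."""
    return 1.0 / (1.0 + np.exp(-x @ w))

# For s = sigmoid(x . w), the input gradient is ds/dx = s * (1 - s) * w.
# The saliency map is the absolute gradient, reshaped to image dimensions.
s = predict(image, weights)
saliency = np.abs(s * (1.0 - s) * weights).reshape(H, W)

# Pixels with the largest saliency contributed most to the prediction.
row, col = np.unravel_index(np.argmax(saliency), saliency.shape)
print(f"most influential pixel: ({row}, {col})")
```

In the actual study, the same gradient-based attribution would be computed through the trained convolutional network, highlighting which regions of ocular anatomy drove each classification.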
Results: The developed ML models had an 81.0% accuracy (75.0% sensitivity, 84.7% specificity) in classifying gender and an 80.2% accuracy (47.4% sensitivity, 84.0% specificity) in detecting hypertension. Models showed a 92.0% accuracy (18.7% sensitivity, 96.3% specificity) in detecting diabetes and an 83.3% accuracy (52.9% sensitivity, 85.5% specificity) in detecting COVID-19. Explainability analysis demonstrated that the developed ML models made classifications based on relevant regions of external ocular anatomy.
Conclusions: Deep learning models were developed for the diagnosis of health conditions using external eye images taken by a smartphone camera, exhibiting moderate accuracy despite marked class imbalances. Our findings represent a novel diagnostic approach with the potential to develop into a remote, highly accessible diagnostic screening alternative for healthcare professionals and patients. This work shows particular promise as an effective means of diagnosis and screening in the context of military health readiness and care; further work is warranted to expand upon and address the limitations of this study.