Explainable Features in Classification of Neonatal Thermograms

Date

2020

Publisher

IEEE

Abstract

Although deep learning models achieve high classification performance (over 90% accuracy), there is very little research on the explainability of these models. Yet in computer-assisted diagnosis, explaining why a decision was made, and diagnosing why a model fails to train, is crucial if medical professionals are to evaluate that decision. In this study, a convolutional neural network (CNN) model was trained on 190 thermal images of 38 different neonates hospitalized in the Neonatal Intensive Care Unit of the Faculty of Medicine, Selcuk University, to perform unhealthy-healthy classification, and the outputs of its intermediate layers were visualized. The train, validation, and test accuracies of the model were 97.38%, 97.36%, and 94.73%, respectively. Visualizing the intermediate layer outputs showed that, when performing the unhealthy-healthy classification, the CNN filters learn characteristics of the infant (edges, tissue, body, temperature) rather than of the background (incubator, measurement cables).
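
The abstract's explainability claim rests on visualizing intermediate convolutional layer outputs to see what each filter responds to. Below is a minimal sketch of that general technique in Keras/TensorFlow. The paper's actual architecture, input resolution, and layer names are not given in this record, so the small CNN, the 224x224 single-channel input, and the layer names conv1/conv2 are illustrative assumptions, not the authors' model.

```python
# Minimal sketch of intermediate-layer (feature-map) visualization with Keras.
# The CNN below is a hypothetical stand-in; the paper's model is not specified here.
import numpy as np
from tensorflow.keras import layers, models

# Hypothetical small CNN for binary (unhealthy/healthy) thermogram classification.
model = models.Sequential([
    layers.Input(shape=(224, 224, 1)),   # single-channel thermal image (assumed size)
    layers.Conv2D(16, 3, activation="relu", name="conv1"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu", name="conv2"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),
])

# Sub-model exposing the feature maps of each convolutional layer.
feature_extractor = models.Model(
    inputs=model.inputs,
    outputs=[model.get_layer(n).output for n in ("conv1", "conv2")],
)

thermogram = np.random.rand(1, 224, 224, 1).astype("float32")  # stand-in for a real image
conv1_maps, conv2_maps = feature_extractor.predict(thermogram)

# Each channel is one learned filter's response map over the input.
print(conv1_maps.shape, conv2_maps.shape)  # (1, 222, 222, 16) (1, 109, 109, 32)
```

In practice each channel of the returned feature maps would be plotted as an image; activations that concentrate on the infant's body rather than on the incubator background are the kind of evidence behind the abstract's conclusion.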

Description

28th Signal Processing and Communications Applications Conference (SIU) -- October 5-7, 2020 -- held online

Keywords

classification, convolutional neural network, explainable artificial intelligence, neonate

WoS Q

N/A

Scopus Q

N/A

Source

2020 28TH SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS CONFERENCE (SIU)

SCOPUS™ Citations

1

checked on Feb 03, 2026
