Authors: Örnek, Ahmet Haydar; Ceylan, Murat
Date accessioned: 2021-12-13
Date available: 2021-12-13
Date of issue: 2020
ISBN: 978-1-7281-7206-4
ISSN: 2165-0608
URI: https://hdl.handle.net/20.500.13091/1069
Conference: 28th Signal Processing and Communications Applications Conference (SIU), OCT 05-07, 2020, Electronic Network
Abstract: Although deep learning models achieve high classification performance (above 90% accuracy), research on the explainability of these models is very limited. However, explaining why a decision is made in computer-assisted diagnosis, and determining why a deep learning model fails to train, is crucial if medical professionals are to evaluate the decision. In this study, 190 thermal images of 38 different neonates hospitalized in the Neonatal Intensive Care Unit of the Faculty of Medicine, Selcuk University were used to train a convolutional neural network (CNN) model for unhealthy-healthy classification and to visualize its intermediate layer outputs. The train, validation and test accuracies of the model were 97.38%, 97.36% and 94.73%, respectively. Visualizing the intermediate layer outputs shows that the CNN filters learn the characteristics of the baby (edges, tissue, body, temperature) rather than the background (incubator, measurement cables) when performing unhealthy-healthy classification.
Language: tr
Rights: info:eu-repo/semantics/closedAccess
Keywords: classification; convolutional neural network; explainable artificial intelligence; neonate
Title: Explainable Features in Classification of Neonatal Thermograms
Type: Conference Object
Scopus ID: 2-s2.0-85100297526
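
The technique the abstract centers on, visualizing the intermediate (feature-map) outputs of a trained CNN, can be illustrated with a short sketch. The example below is a minimal, hypothetical Keras sketch: the architecture, the input size (128x128 single-channel thermograms), and the layer names conv1/conv2 are assumptions chosen for illustration, not the model or tooling actually used in the paper.

```python
# Minimal sketch of intermediate-layer (feature-map) visualization for a CNN.
# The architecture below is a hypothetical stand-in; the paper's model is not
# specified in this record.
import numpy as np
import matplotlib.pyplot as plt
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical small CNN for single-channel thermal images (assumed 128x128).
model = keras.Sequential([
    layers.Input(shape=(128, 128, 1)),
    layers.Conv2D(16, 3, activation="relu", name="conv1"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu", name="conv2"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),  # unhealthy vs. healthy
])

# Auxiliary model that exposes the outputs of the convolutional layers.
feature_extractor = keras.Model(
    inputs=model.inputs,
    outputs=[model.get_layer(n).output for n in ("conv1", "conv2")],
)

# Random stand-in for a thermogram; real data would be used in practice.
thermogram = np.random.rand(1, 128, 128, 1).astype("float32")
activations = feature_extractor(thermogram)

# Plot the first 8 feature maps of each convolutional layer. With a trained
# model, maps responding to the infant's body rather than the incubator
# background would indicate the behavior the paper reports.
for layer_name, maps in zip(("conv1", "conv2"), activations):
    fig, axes = plt.subplots(1, 8, figsize=(16, 2))
    for i, ax in enumerate(axes):
        ax.imshow(maps[0, :, :, i], cmap="inferno")
        ax.axis("off")
    fig.suptitle(f"{layer_name} feature maps")
plt.show()
```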