Title: Explainable Features in Classification of Neonatal Thermograms
Authors: Örnek, Ahmet Haydar
Ceylan, Murat
Keywords: classification
convolutional neural network
explainable artificial intelligence
Issue Date: 2020
Publisher: IEEE
Abstract: Although deep learning models achieve high classification performance (over 90% accuracy), research on the explainability of these models is very limited. Yet explaining why a decision was made in computer-assisted diagnosis, and determining why a model fails to train, is crucial if medical professionals are to evaluate the decision. In this study, 190 thermal images of 38 different neonates hospitalized in the Neonatal Intensive Care Unit of the Faculty of Medicine, Selcuk University were used to train a convolutional neural network (CNN; Turkish: ESA) model for unhealthy-healthy classification and to visualize its intermediate layer outputs. The train, validation and test accuracies of the model were 97.38%, 97.36% and 94.73%, respectively. Visualizing the intermediate layer outputs showed that the CNN filters learn the characteristics of the baby (edges, tissue, body, temperature) rather than the background (incubator, measurement cables) when performing unhealthy-healthy classification.
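The abstract's key observation, that the CNN's filters respond to the baby's edges and body rather than the incubator background, can be illustrated with a minimal NumPy sketch (not the authors' code): convolving a synthetic "thermogram" with an edge-detecting kernel, the pattern early CNN filters typically converge to, yields a feature map that activates only at the body boundary.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (really cross-correlation, as in CNN layers)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Hypothetical toy thermogram: a warm square (the body) on a cool background.
img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0

# Sobel-style vertical-edge kernel.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

fmap = np.abs(conv2d(img, sobel_x))
print(fmap.shape)             # (14, 14)
print(fmap[:, :2].max())      # 0.0 -- uniform background gives no response
print(fmap.max() > 0)         # True -- the body boundary does respond
```

Inspecting such feature maps at each layer is the visualization strategy the abstract describes: if the strong activations sit on the infant rather than on the incubator or cables, the filters have learned the clinically relevant features.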
Description: 28th Signal Processing and Communications Applications Conference (SIU) -- OCT 05-07, 2020 -- ELECTR NETWORK
ISBN: 978-1-7281-7206-4
ISSN: 2165-0608
Appears in Collections:Mühendislik ve Doğa Bilimleri Fakültesi Koleksiyonu
Scopus İndeksli Yayınlar Koleksiyonu / Scopus Indexed Publications Collections
WoS İndeksli Yayınlar Koleksiyonu / WoS Indexed Publications Collections

Files in This Item:
794.72 kB, Adobe PDF (restricted until 2030-01-01)
