Authors: Örnek, Ahmet Haydar; Ceylan, Murat
Date available: 2024-04-20; Date issued: 2024
ISSN: 0765-0019; E-ISSN: 1958-5608
DOI: https://doi.org/10.18280/ts.410105
Handle: https://hdl.handle.net/20.500.13091/5366

Abstract: Deep learning models are proficient at predicting target classes, but they also need to explain their predictions. Explainable Artificial Intelligence (XAI) offers a promising solution by providing both transparency and object detection capabilities to classification models. Mask detection plays a crucial role in ensuring the safety and well-being of individuals by preventing the spread of infectious diseases. A new visual XAI method called HayCAM+ is proposed to address the limitations of the previous method, HayCAM, such as the need to select the number of filters as a hyperparameter and the use of fully-connected layers. When object detection is performed using activation maps created via various methods, including GradCAM, EigenCAM, GradCAM++, LayerCAM, HayCAM, and HayCAM+, it is found that HayCAM+ provides the best results with an IoU score of 0.3740 (GradCAM: 0.1922, GradCAM++: 0.2472, EigenCAM: 0.3386, LayerCAM: 0.2476, HayCAM: 0.3487) and a Dice score of 0.5376 (GradCAM: 0.3153, GradCAM++: 0.3923, EigenCAM: 0.5003, LayerCAM: 0.3928, HayCAM: 0.5098). By using dynamic dimension reduction to eliminate unrelated filters in the last convolutional layer, HayCAM+ generates more focused activation maps. The results demonstrate that HayCAM+ is an advanced activation map method for explaining decisions and detecting objects using deep classification models.

Language: en
Access rights: info:eu-repo/semantics/openAccess
Keywords: class activation mapping; explainable artificial intelligence; HayCAM; deep learning; visual explanation; weakly-supervised object detection; Artificial Intelligence; Neural Networks
Title: Improving Explainability in CNN-Based Classification of Mask Images with HayCAM+: An Enhanced Visual Explanation Technique
Type: Article
DOI: 10.18280/ts.410105
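The abstract evaluates activation-map-based object detection with IoU and Dice scores. The sketch below illustrates, under assumed choices not taken from the paper, how a class activation map might be binarized into a detection mask and scored against a ground-truth mask; the `cam_to_mask` helper and its 0.5 threshold are hypothetical, and only the IoU and Dice definitions are standard.

```python
import numpy as np

def cam_to_mask(cam: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Normalize an activation map to [0, 1] and binarize it.

    The 0.5 cutoff is an illustrative choice, not the paper's setting.
    """
    cam = cam - cam.min()
    cam = cam / (cam.max() + 1e-8)
    return (cam >= threshold).astype(np.uint8)

def iou_score(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over Union between two binary masks."""
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float(intersection / union) if union > 0 else 0.0

def dice_score(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice coefficient (pixel-wise F1) between two binary masks."""
    intersection = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    return float(2 * intersection / total) if total > 0 else 0.0

# Toy example: score a stand-in activation map against a stand-in ground-truth mask.
cam = np.random.rand(224, 224)           # placeholder for a CAM (e.g., GradCAM output)
gt_mask = np.zeros((224, 224), np.uint8)
gt_mask[60:160, 80:180] = 1              # placeholder for an annotated object region
pred_mask = cam_to_mask(cam)
print(f"IoU:  {iou_score(pred_mask, gt_mask):.4f}")
print(f"Dice: {dice_score(pred_mask, gt_mask):.4f}")
```

In this setup a more focused activation map, such as the one the abstract attributes to HayCAM+, would yield a binarized mask that overlaps the annotated region more tightly and therefore raises both scores.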