PubMed İndeksli Yayınlar Koleksiyonu / PubMed Indexed Publications Collections
Permanent URI for this collection: https://hdl.handle.net/20.500.13091/5
Browsing PubMed İndeksli Yayınlar Koleksiyonu / PubMed Indexed Publications Collections by Department "Fakülteler, Mühendislik ve Doğa Bilimleri Fakültesi, Elektrik-Elektronik Mühendisliği Bölümü" (Faculties, Faculty of Engineering and Natural Sciences, Department of Electrical-Electronics Engineering)
Now showing 1 - 17 of 17
Article (Citation - WoS: 9, Citation - Scopus: 11)
Adrenal Tumor Segmentation Method for MR Images (ELSEVIER IRELAND LTD, 2018)
Barstuğan, Mücahid; Ceylan, Rahime; Asoğlu, Semih; Cebeci, Hakan; Koplay, Mustafa
Background and objective: Adrenal tumors, which occur on the adrenal glands, are usually detected incidentally. The liver, spleen, spinal cord, and kidney surround the adrenal glands, so tumors on the adrenal glands can be adherent to other organs; this complicates adrenal tumor segmentation. In addition, low contrast, non-standardized shape and size, and the homogeneity or heterogeneity of the tumors pose further segmentation problems. Methods: This study proposes a computer-aided diagnosis (CAD) system that segments adrenal tumors while addressing the above problems. The proposed hybrid method incorporates several image processing techniques: active contour, adaptive thresholding, contrast limited adaptive histogram equalization (CLAHE), image erosion, and region growing. Results: The performance of the proposed method was assessed on 113 magnetic resonance (MR) images using seven metrics: sensitivity, specificity, accuracy, precision, Dice coefficient, Jaccard rate, and structural similarity index (SSIM). The proposed method achieved success rates of 74.84%, 99.99%, 99.84%, 93.49%, 82.09%, 71.24%, and 99.48% for these metrics, respectively. Conclusions: This study presents a new method for adrenal tumor segmentation that avoids some of the problems preventing accurate segmentation, especially for cyst-based tumors. (C) 2018 Elsevier B.V. All rights reserved.

Article
Automatic Phase Reversal Detection in Routine EEG (CHURCHILL LIVINGSTONE, 2020)
Yıldırım, Sema; Koçer, Hasan Erdinç; Ekmekçi, Ahmet Hakan
The electroencephalograph (EEG), a valuable tool in clinical evaluation, is readily available, safe, and provides information about brain function. EEG interpretation is important for the diagnosis of neurological disorders. Long-term EEG data may be required to document and study phenomena such as epileptic activity and phase reversal (PR). However, analysis of long-term EEG by an expert neurologist is time-consuming and demanding. Therefore, an automatic PR detection method for analyzing long-term EEG is described in this study. The presented technique was applied to pathological EEG recordings obtained from two datasets gathered retrospectively at Selcuk University Hospital (SUH) and Boston Children's Hospital (BCH). With this method, PRs in the datasets were detected and then compared with those identified by a specialist physician. Two tests were carried out on the SUH dataset, and the classification success of the method was 83.22% for test 1 and 85.19% for test 2. For the BCH dataset, three tests were carried out for two different position values. The highest classification success of these six tests was 75% for test 5, while the lowest was 58.33% for test 6. Overall, the method detected PR with a success rate of 84.20% for SUH and 66.7% for BCH. According to these results, automatically determining PR, which is known to be indicative of neurological disorders, and presenting it to the expert will accelerate the interpretation of long-term EEG recordings.
Article (Citation - WoS: 67, Citation - Scopus: 108)
Classification of Coronavirus (COVID-19) From X-Ray and CT Images Using Shrunken Features (WILEY, 2021)
Öztürk, Şaban; Özkaya, Umut; Barstuğan, Mücahid
Necessary screenings must be performed to control the spread of COVID-19 in daily life and to make a preliminary diagnosis of suspicious cases. The long duration of pathological laboratory tests and suspicious test results led researchers to focus on different fields. Fast and accurate diagnoses are essential for effective interventions against COVID-19, and the information obtained from X-ray and computed tomography (CT) images is vital for clinical diagnosis. Therefore, this study aims to develop a machine learning method for the detection of viral epidemics by analyzing X-ray and CT images. Images belonging to six situations, including coronavirus images, are classified using a two-stage data enhancement approach. Since the number of images in the dataset is deficient and unbalanced, a shallow image augmentation approach was used in the first stage. Because the newly created dataset is still insufficient to train a deep architecture, it is more convenient to analyze these images with hand-crafted feature extraction methods. The synthetic minority over-sampling technique (SMOTE) algorithm is the second data enhancement step of this study. Finally, the feature vector is reduced in size by using stacked auto-encoder and principal component analysis methods to remove interconnected features. The results show that the proposed method performs well, especially for diagnosing COVID-19 quickly and effectively, and it may serve as a source of inspiration for future studies on deficient and unbalanced datasets.
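The two-stage enhancement described in this abstract, oversampling a small unbalanced feature set and then shrinking the feature vector, can be illustrated with a short sketch. This is a minimal, hypothetical example that assumes hand-crafted features are already extracted into a NumPy array; it uses imbalanced-learn's SMOTE and scikit-learn's PCA as stand-ins for the paper's exact pipeline, and the stacked auto-encoder stage is omitted.

```python
# Minimal sketch: SMOTE oversampling followed by PCA feature shrinking.
# X (n_samples x n_features) stands in for hand-crafted features, y for labels;
# the data below are random placeholders, not the paper's dataset.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 78))              # placeholder feature vectors
y = np.array([0] * 250 + [1] * 50)          # unbalanced labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)    # balance the classes
pca = PCA(n_components=20).fit(X_bal)                            # shrink the feature vector
clf = SVC().fit(pca.transform(X_bal), y_bal)
print("test accuracy:", clf.score(pca.transform(X_te), y_te))
```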
Article (Citation - WoS: 237, Citation - Scopus: 311)
CNN-Based Transfer Learning-BiLSTM Network: A Novel Approach for COVID-19 Infection Detection (ELSEVIER, 2021)
Aslan, Muhammet Fatih; Ünlerşen, Muhammed Fahri; Sabancı, Kadir; Durdu, Akif
Coronavirus disease 2019 (COVID-19), which emerged in Wuhan, China in 2019 and has spread rapidly all over the world since the beginning of 2020, has infected millions of people and caused many deaths. For this pandemic, which is still in effect, mobilization has started all over the world, and various restrictions and precautions have been taken to prevent the spread of the disease. In addition, infected people must be identified in order to control the infection. However, due to the inadequate number of reverse transcription polymerase chain reaction (RT-PCR) tests, chest computed tomography (CT) has become a popular tool to assist the diagnosis of COVID-19. In this study, two deep learning architectures are proposed that automatically detect positive COVID-19 cases using chest CT X-ray images. Lung segmentation (preprocessing) of the CT images given as input to these architectures is performed automatically with artificial neural networks (ANN). Since both architectures contain the AlexNet architecture, the recommended method is a transfer learning application. However, the second proposed architecture is a hybrid structure, as it contains a bidirectional long short-term memory (BiLSTM) layer that also takes temporal properties into account. While the COVID-19 classification accuracy of the first architecture is 98.14%, this value is 98.70% for the second, hybrid architecture. The results show that the proposed architectures perform very well in infection detection; the study therefore contributes to previous work in terms of both deep architectural design and high classification success. (C) 2020 Elsevier B.V. All rights reserved.
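A hedged PyTorch sketch of the kind of hybrid described here follows: an AlexNet backbone whose convolutional features are reshaped into a short sequence and passed to a BiLSTM. The reshaping and all layer sizes are assumptions made for illustration, not the authors' published configuration.

```python
# Hypothetical sketch of an AlexNet + BiLSTM hybrid classifier (PyTorch).
# Treating the spatial positions of the feature map as a sequence is an
# illustrative assumption; the paper's exact layer sizes are not reproduced here.
import torch
import torch.nn as nn
from torchvision import models

class AlexNetBiLSTM(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        backbone = models.alexnet(weights=None)   # untrained here; swap in ImageNet weights for transfer learning
        self.features = backbone.features         # convolutional feature extractor
        self.bilstm = nn.LSTM(input_size=256, hidden_size=128,
                              batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * 128, num_classes)

    def forward(self, x):
        f = self.features(x)                       # (B, 256, 6, 6) for 224x224 input
        seq = f.flatten(2).permute(0, 2, 1)        # 36 spatial positions as a sequence
        out, _ = self.bilstm(seq)
        return self.fc(out[:, -1, :])              # classify from the last step

model = AlexNetBiLSTM()
logits = model(torch.randn(4, 3, 224, 224))        # 4 dummy chest images
print(logits.shape)                                # torch.Size([4, 2])
```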
Article (Citation - WoS: 7, Citation - Scopus: 12)
A Comprehensive Study of Brain Tumour Discrimination Using Phase Combinations, Feature Rankings, and Hybridised Classifiers (SPRINGER HEIDELBERG, 2020)
Koyuncu, Hasan; Barstuğan, Mücahid; Öziç, Muhammet Üsame
The binary categorisation of brain tumours is challenging owing to the complexity of tumours: identical tumour types can differ widely in shape, size, and intensity features. Accordingly, framework designs should be optimised for two tasks, feature analysis and classification. Owing to the difficulty of the problem, few studies consider the binary classification of three-dimensional (3D) brain tumours. In this paper, the discrimination of high-grade glioma (HGG) and low-grade glioma (LGG) is accomplished by designing various frameworks based on 3D magnetic resonance imaging (3D MRI) data, integrating diverse phase combinations, feature-ranking approaches, and hybrid classifiers. Feature analyses use first-order statistics (FOS), examining different phase combinations alongside the single phases (T1c, FLAIR, T1, and T2), and five feature-ranking approaches (Bhattacharyya, entropy, ROC, t-test, and Wilcoxon) to determine the most appropriate input to the classifier. Hybrid classifiers based on neural networks (NN) are considered owing to their robustness and their record in medical pattern classification. State-of-the-art optimisation methods are used to form the hybrid classifiers: dynamic weight particle swarm optimisation (DW-PSO), chaotic dynamic weight particle swarm optimisation (CDW-PSO), and Gauss-map-based chaotic particle swarm optimisation (GM-CPSO). The integrated frameworks, DW-PSO-NN, CDW-PSO-NN, and GM-CPSO-NN, are evaluated on the BraTS 2017 challenge dataset comprising 210 HGG and 75 LGG samples. The 2-fold cross-validation test method and seven metrics (accuracy, AUC, sensitivity, specificity, g-mean, precision, f-measure) are used to evaluate the frameworks. In the experiments, the most effective framework uses FOS, data including three phase combinations, the Wilcoxon feature-ranking approach, and the GM-CPSO-NN method. Consequently, this framework achieved scores of 90.18% (accuracy), 85.62% (AUC), 95.24% (sensitivity), 76% (specificity), 85.08% (g-mean), 91.74% (precision), and 93.46% (f-measure) for the HGG/LGG discrimination of 3D brain MRI data.

Article (Citation - WoS: 93, Citation - Scopus: 123)
COVID-19 Diagnosis Using State-of-the-Art CNN Architecture Features and Bayesian Optimization (Pergamon-Elsevier Science Ltd, 2022)
Aslan, Muhammet Fatih; Sabancı, Kadir; Durdu, Akif; Ünlerşen, Muhammed Fahri
The 2019 coronavirus outbreak, called COVID-19, which originated in Wuhan, negatively affected the lives of millions of people, and many people died from the infection. To prevent the spread of the disease, which is still in effect, various restriction decisions have been taken all over the world, and the number of COVID-19 tests has been increased to quarantine infected people. However, due to problems in the supply of RT-PCR tests and the ease of obtaining computed tomography and X-ray images, imaging-based methods have become very popular in the diagnosis of COVID-19, and studies using such images to classify COVID-19 have increased. This paper presents a classification method for computed tomography chest images in the COVID-19 Radiography Database using features extracted by popular convolutional neural network (CNN) models (AlexNet, ResNet18, ResNet50, Inceptionv3, DenseNet201, InceptionResNetv2, MobileNetv2, GoogLeNet). The determination of the hyperparameters of the machine learning (ML) algorithms by Bayesian optimization and ANN-based image segmentation are the two main contributions of this study. First, lung segmentation is performed automatically from the raw image with artificial neural networks (ANNs). To ensure data diversity, data augmentation is applied to the COVID-19 class, which has fewer images than the other two classes. These images are then applied as input to the CNN models, and the features extracted from each CNN model are given as input to four ML algorithms, namely support vector machine (SVM), k-nearest neighbors (k-NN), naive Bayes (NB), and decision tree (DT), for classification. To achieve the best classification accuracy, the hyperparameters of each ML algorithm are determined using Bayesian optimization. With the classification made using these hyperparameters, the highest success is obtained as 96.29% with the DenseNet201 model and the SVM algorithm. The sensitivity, precision, specificity, MCC, and F1-score values for this structure are 0.9642, 0.9642, 0.9812, 0.9641, and 0.9453, respectively. These results show that ML methods with optimally tuned hyperparameters can produce successful results.
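The general pattern in the study above, deep features from a pretrained CNN feeding a classical classifier whose hyperparameters are tuned by Bayesian optimization, can be sketched briefly. The sketch assumes scikit-optimize's BayesSearchCV as the optimizer and uses random placeholder features, so it is illustrative rather than a reproduction of the paper's setup.

```python
# Illustrative sketch: deep-feature vectors + SVM with Bayesian hyperparameter search.
# X would normally hold CNN features (e.g. penultimate-layer activations);
# random data is used here as a placeholder.
import numpy as np
from sklearn.svm import SVC
from skopt import BayesSearchCV            # scikit-optimize
from skopt.space import Real, Categorical

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 512))            # placeholder deep features
y = rng.integers(0, 3, size=200)           # three placeholder classes

search = BayesSearchCV(
    SVC(),
    {"C": Real(1e-2, 1e3, prior="log-uniform"),
     "gamma": Real(1e-4, 1e0, prior="log-uniform"),
     "kernel": Categorical(["rbf", "linear"])},
    n_iter=20, cv=3, random_state=0)
search.fit(X, y)
print("best params:", search.best_params_, "cv accuracy:", search.best_score_)
```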
Article (Citation - WoS: 9, Citation - Scopus: 14)
COVID-19 Discrimination Framework for X-Ray Images by Considering Radiomics, Selective Information, Feature Ranking, and a Novel Hybrid Classifier (ELSEVIER, 2021)
Koyuncu, Hasan; Barstuğan, Mücahid
In medical imaging procedures for the detection of coronavirus, apart from medical tests, confirmation of the diagnosis has special significance, and imaging is also useful for detecting the damage caused by COVID-19. Chest X-ray imaging is frequently used to diagnose COVID-19 and different pneumonias. This paper presents a task-specific framework to detect coronavirus in X-ray images. Binary classification of three different labels (healthy, bacterial pneumonia, and COVID-19) was performed on two differentiated data sets in which corona is stated as positive. First-order statistics, the gray level co-occurrence matrix, the gray level run length matrix, and the gray level size zone matrix were analyzed to form fifteen sub-data sets and to ascertain the necessary radiomics. Two normalization methods are compared to make the data meaningful, and five feature-ranking approaches (Bhattacharyya, entropy, ROC, t-test, and Wilcoxon) are examined to provide the necessary information to a state-of-the-art classifier based on Gauss-map-based chaotic particle swarm optimization and neural networks. The proposed framework was designed according to these analyses of radiomics, normalization approaches, and filter-based feature-ranking methods. In the experiments, seven metrics were evaluated to objectively determine the results: accuracy, area under the receiver operating characteristic (ROC) curve, sensitivity, specificity, g-mean, precision, and f-measure. The proposed framework showed promising scores on two X-ray-based data sets, with the accuracy and area under the ROC curve exceeding 99% for the classification of coronavirus vs. others.

Article (Citation - WoS: 3, Citation - Scopus: 5)
COVID-19 Isolation Control Proposal via UAV and UGV for Crowded Indoor Environments: Assistive Robots in the Shopping Malls (Frontiers Media SA, 2022)
Aslan, Muhammet Fatih; Hasikin, Khairunnisa; Yusefi, Abdullah; Durdu, Akif; Sabancı, Kadir; Azizan, Muhammad Mokhzaini
Artificial intelligence researchers have conducted various studies to reduce the spread of COVID-19. Unlike those studies, this paper addresses not early infection diagnosis but the prevention of COVID-19 transmission in social environments. One relevant line of work concerns social distancing, since keeping distance is proven to reduce person-to-person transmission. In this study, the Robot Operating System (ROS) simulates a shopping mall using Gazebo, and customers are monitored by a Turtlebot and an Unmanned Aerial Vehicle (UAV, DJI Tello). By analyzing frames captured by the Turtlebot, a particular person is identified and followed through the shopping mall. The Turtlebot is a wheeled robot that follows a person without contact and is used as a shopping cart; a customer therefore does not touch a shopping cart that someone else has come into contact with, and shopping is also made easier. The UAV detects people from above and determines the distance between them, so a warning system can be created by detecting places where social distance is neglected. A Histogram of Oriented Gradients (HOG)-Support Vector Machine (SVM) detector is applied by the Turtlebot to detect humans, and a Kalman filter is used for human tracking. SegNet is used for semantic segmentation of people and for measuring distance via the UAV. This paper proposes a new robotic study to prevent infection and shows that the system is feasible.
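The HOG-SVM person detection step mentioned in the abstract above can be reproduced in a few lines with OpenCV's built-in people detector. This is a generic sketch of that technique under stated assumptions (the video source is a placeholder), not the authors' ROS node.

```python
# Generic HOG + SVM person detection with OpenCV, the kind of detector a ground
# robot's camera could run. The video source (0) is a placeholder; in the paper
# the frames come from the Turtlebot.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(0)
ret, frame = cap.read()
if ret:
    # Detect people; each box is (x, y, w, h) in pixel coordinates.
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8), padding=(8, 8), scale=1.05)
    for (x, y, w, h) in boxes:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("detections.png", frame)
cap.release()
```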
Article (Citation - WoS: 11, Citation - Scopus: 12)
Deep Learning-Based Approaches To Improve Classification Parameters for Diagnosing COVID-19 From CT Images (SPRINGER, 2024)
Yaşar, Hüseyin; Ceylan, Murat
Patients infected with the COVID-19 virus develop severe pneumonia, which can lead to death. Radiological evidence has demonstrated that the disease causes interstitial involvement in the lungs and lung opacities, as well as bilateral ground-glass opacities and patchy opacities. In this study, new pipeline suggestions are presented and their performance is tested with the aim of decreasing the number of false-negative (FN), false-positive (FP), and total misclassified images (FN + FP) in the diagnosis of COVID-19 (COVID-19/non-COVID-19 and COVID-19 pneumonia/other pneumonia) from CT lung images. A total of 4320 CT lung images, of which 2554 were related to COVID-19 and 1766 to non-COVID-19, were used for the test procedures in the COVID-19/non-COVID-19 classification. Similarly, a total of 3801 CT lung images, of which 2554 were related to COVID-19 pneumonia and 1247 to other pneumonia, were used for the test procedures in the COVID-19 pneumonia/other pneumonia classification. A 24-layer convolutional neural network (CNN) architecture was used for the classification processes. Within the scope of this study, results of two experiments were obtained by using CT lung images with and without local binary pattern (LBP) application, and sub-band images were obtained by applying the dual-tree complex wavelet transform (DT-CWT) to these images. New classification results were then calculated from these two results by using the five pipeline approaches presented in this study. For the COVID-19/non-COVID-19 classification, the highest sensitivity, specificity, accuracy, F-1, and AUC values obtained without the pipeline approaches were 0.9676, 0.9181, 0.9456, 0.9545, and 0.9890, respectively; using the pipeline approaches, the values were 0.9832, 0.9622, 0.9577, 0.9642, and 0.9923, respectively. For the COVID-19 pneumonia/other pneumonia classification, the highest sensitivity, specificity, accuracy, F-1, and AUC values obtained without the pipeline approaches were 0.9615, 0.7270, 0.8846, 0.9180, and 0.9370, respectively; using the pipeline approaches, the values were 0.9915, 0.8140, 0.9071, 0.9327, and 0.9615, respectively. The results show that classification success can be increased, while keeping the per-image processing time low, by using the proposed pipeline approaches.
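The LBP and DT-CWT preprocessing mentioned above can be sketched as follows, assuming scikit-image for the local binary pattern and the dtcwt Python package for the dual-tree complex wavelet transform. The parameters (neighbourhood, radius, number of levels) are illustrative guesses rather than the paper's settings.

```python
# Sketch of the two input variants: an LBP image and DT-CWT sub-band magnitudes.
# Parameters (P, R, nlevels) are illustrative; the paper's settings may differ.
import numpy as np
from skimage.feature import local_binary_pattern
import dtcwt

ct_slice = np.random.rand(256, 256)                  # placeholder for a CT lung slice

# Local binary pattern image (uniform patterns, 8 neighbours, radius 1).
lbp_image = local_binary_pattern(ct_slice, P=8, R=1, method="uniform")

# Dual-tree complex wavelet transform: keep the low-pass band and the
# magnitudes of the first-level complex high-pass sub-bands.
pyramid = dtcwt.Transform2d().forward(ct_slice, nlevels=2)
lowpass = pyramid.lowpass
highpass_mag = np.abs(pyramid.highpasses[0])         # six oriented sub-bands at half resolution
print(lbp_image.shape, lowpass.shape, highpass_mag.shape)
```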
Article (Citation - WoS: 10, Citation - Scopus: 16)
Design and Validation of Multichannel Wireless Wearable sEMG System for Real-Time Training Performance Monitoring (HINDAWI LTD, 2019)
Örücü, Serkan; Selek, Murat
Monitoring of training performance and physical activity has become indispensable for athletes. Wireless technologies are now widely used to monitor muscle activation, to assess the sporting performance of athletes, and to examine training efficiency. Being able to monitor performance during training is especially necessary for beginner-level athletes to train healthily in sports such as weightlifting and bodybuilding. For this purpose, a new system consisting of a 4-channel wireless wearable sEMG circuit and analysis software has been proposed to detect dynamic muscle contractions and to be used in real-time training performance monitoring and analysis. The analysis software, using a Haar wavelet filter with threshold cutting, provides performance analysis based on moving RMS and %MVC. The validity of the data obtained from the system was investigated by comparison with a biomedical reference system; in this comparison, values of 90.95% ± 3.35 for the left biceps brachii (BB) and 90.75% ± 3.75 for the right BB were obtained. The power and %MVC analysis output of the system was tested during the participants' training at the gym, and the training efficiency was measured as 96.87% ± 2.74.
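The moving RMS and %MVC normalization mentioned for the sEMG analysis software can be illustrated with a few lines of NumPy. The synthetic signal, the 100 ms window, and the use of the signal's own peak RMS as a stand-in for the maximum voluntary contraction (MVC) value are all placeholders, and the Haar wavelet denoising step is omitted.

```python
# Illustrative sketch: moving RMS of an sEMG signal and %MVC normalization.
# Signal, window length, and MVC reference are placeholders; wavelet denoising is omitted.
import numpy as np

fs = 1000                                        # assumed sampling rate (Hz)
t = np.arange(0, 5, 1 / fs)
emg = np.random.randn(t.size) * (0.5 + 0.5 * np.sin(2 * np.pi * 0.5 * t))  # synthetic burst-like sEMG

def moving_rms(x, win):
    """Sliding-window root mean square (same length as the input)."""
    return np.sqrt(np.convolve(x ** 2, np.ones(win) / win, mode="same"))

rms = moving_rms(emg, win=100)                   # 100 ms window at 1 kHz
mvc_rms = rms.max()                              # stand-in for the RMS recorded during an MVC trial
percent_mvc = 100 * rms / mvc_rms                # %MVC: activation relative to maximum voluntary contraction
print(percent_mvc.max(), percent_mvc.mean())
```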
Article (Citation - WoS: 11, Citation - Scopus: 10)
An Extensive Study for Binary Characterisation of Adrenal Tumours (SPRINGER HEIDELBERG, 2019)
Koyuncu, Hasan; Ceylan, Rahime; Asoğlu, Semih; Cebeci, Hakan; Koplay, Mustafa
On the adrenal glands, benign tumours generally change the hormone equilibrium, and malign tumours usually tend to spread to nearby tissues and to the organs of the immune system. These features can give a clue about the type of adrenal tumour; however, they cannot always be observed, and different tumour types can be confused because they have similar shape, size, and intensity features on scans. To support the evaluation process, a biopsy is applied, which carries injury and complication risks. In this study, we handle the binary characterisation of adrenal tumours using dynamic computed tomography images, with the aim of excluding the use of additional imaging modalities and the biopsy process. The dataset consists of 8 subtypes of adrenal tumours and represents a worst-case scenario in which all handicaps against tumour classification are present. Histogram, grey level co-occurrence matrix, and wavelet-based features are investigated to reveal the most effective features for the identification of adrenal tumours. Binary classification is performed using four promising algorithms that have proven themselves on binary medical pattern classification. For this purpose, optimised neural networks are examined on six datasets inspired by the aforementioned features, and an efficient framework is offered for use before a biopsy. Accuracy, sensitivity, specificity, and AUC are used to evaluate the performance of the classifiers. Consequently, malign/benign characterisation is performed by the proposed framework with success rates of 80.7%, 75%, 82.22%, and 78.61% for these metrics, respectively.

Article (Citation - WoS: 36, Citation - Scopus: 42)
HVIOnet: A Deep Learning-Based Hybrid Visual-Inertial Odometry Approach for Unmanned Aerial System Position Estimation (Pergamon-Elsevier Science Ltd, 2022)
Aslan, Muhammet Fatih; Durdu, Akif; Yusefi, Abdullah; Yılmaz, Alper
Sensor fusion is used to solve the localization problem in autonomous mobile robotics by integrating complementary data acquired from various sensors. In this study, we adopt visual-inertial odometry (VIO), a low-cost sensor fusion method that integrates inertial data with images using a deep learning (DL) framework to predict the position of an unmanned aerial system (UAS). The developed system has three steps. The first step extracts features from images acquired from a platform camera and uses a convolutional neural network (CNN) to project them to a visual feature manifold. Next, temporal features are extracted from the onboard Inertial Measurement Unit (IMU) data using a bidirectional long short-term memory (BiLSTM) network and are projected to an inertial feature manifold. The final step estimates the UAS position by fusing the visual and inertial feature manifolds via a BiLSTM-based architecture. The proposed approach is tested with the public EuRoC (European Robotics Challenge) dataset and with simulation data generated within the Robot Operating System (ROS). The results on the EuRoC dataset show that the proposed approach achieves position estimates comparable to previous popular VIO methods. In addition, in the experiment with the simulation dataset, the UAS position is successfully estimated with a root mean square error (RMSE) of 0.167. The obtained results demonstrate that the proposed deep architecture is useful for UAS position estimation. (c) 2022 Elsevier Ltd. All rights reserved.
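A hedged PyTorch sketch of the fusion step described in the HVIOnet abstract follows: visual and inertial feature sequences are concatenated and passed through a BiLSTM that regresses position. The feature dimensions and sequence length are made-up placeholders, and the sketch is an interpretation of the three-step description rather than the published network.

```python
# Hypothetical sketch of BiLSTM-based fusion of visual and inertial feature
# sequences for position regression; all dimensions are placeholders.
import torch
import torch.nn as nn

class FusionBiLSTM(nn.Module):
    def __init__(self, visual_dim=256, inertial_dim=64, hidden=128):
        super().__init__()
        self.bilstm = nn.LSTM(visual_dim + inertial_dim, hidden,
                              batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 3)                     # regress (x, y, z) position

    def forward(self, visual_seq, inertial_seq):
        fused = torch.cat([visual_seq, inertial_seq], dim=-1)    # (B, T, visual_dim + inertial_dim)
        out, _ = self.bilstm(fused)
        return self.head(out[:, -1, :])                          # position at the last step

model = FusionBiLSTM()
visual = torch.randn(2, 10, 256)      # e.g. CNN features for 10 frames
inertial = torch.randn(2, 10, 64)     # e.g. BiLSTM-encoded IMU features, time-aligned
print(model(visual, inertial).shape)  # torch.Size([2, 3])
```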
Article (Citation - WoS: 23, Citation - Scopus: 31)
A New Deep Learning Pipeline To Detect COVID-19 on Chest X-Ray Images Using Local Binary Pattern, Dual Tree Complex Wavelet Transform and Convolutional Neural Networks (SPRINGER, 2021)
Yaşar, Hüseyin; Ceylan, Murat
In this study, which aims at early diagnosis of COVID-19 using X-ray images, the deep learning approach was used and automatic classification of images was performed with convolutional neural networks (CNN). The first training-test data set used in the study contained 230 X-ray images, of which 150 were COVID-19 and 80 were non-COVID-19, while the second contained 476 X-ray images, of which 150 were COVID-19 and 326 were non-COVID-19. Classification results are thus provided for two data sets containing predominantly COVID-19 images and predominantly non-COVID-19 images, respectively. A 23-layer CNN architecture and a 54-layer CNN architecture were developed for the study. Results were obtained using the chest X-ray images directly in the training-test procedures, using the sub-band images obtained by applying the dual-tree complex wavelet transform (DT-CWT) to those images, and, in repeated experiments, using images obtained by applying the local binary pattern (LBP) to the chest X-ray images. Four new result-generation pipeline algorithms were additionally put forward so that the experimental results could be combined and the success of the study improved. Training was carried out using the k-fold cross-validation method, with k chosen as 23 for the first and second training-test data sets. Considering the highest average results of the experiments, the sensitivity, specificity, accuracy, F-1 score, and area under the receiver operating characteristic curve (AUC) values for the first training-test data set were 0.9947, 0.9800, 0.9843, 0.9881, and 0.9990, respectively, while for the second training-test data set they were 0.9920, 0.9939, 0.9891, 0.9828, and 0.9991, respectively. Finally, all the images were combined and the training and testing processes were repeated for a total of 556 X-ray images, comprising 150 COVID-19 and 406 non-COVID-19 images, using 2-fold cross-validation. The highest average sensitivity, specificity, accuracy, F-1 score, and AUC values for this last training-test data set were 0.9760, 1.0000, 0.9906, 0.9823, and 0.9997, respectively.

Article (Citation - WoS: 47, Citation - Scopus: 64)
A Novel Comparative Study for Detection of COVID-19 on CT Lung Images Using Texture Analysis, Machine Learning, and Deep Learning Methods (SPRINGER, 2021)
Yaşar, Hüseyin; Ceylan, Murat
The COVID-19 virus outbreak that emerged in China at the end of 2019 caused a huge and devastating effect worldwide. In patients with severe symptoms of the disease, pneumonia develops due to the COVID-19 virus, causing intense involvement and damage in the lungs. Although the disease emerged only recently, many literature studies have already revealed these effects on the lungs with the help of lung CT imaging. In this study, 1,396 lung CT images in total (386 COVID-19 and 1,010 non-COVID-19) were subjected to automatic classification. A convolutional neural network (CNN), one of the deep learning methods, was used for automatic classification of CT images of the lungs for early diagnosis of COVID-19. In addition, k-nearest neighbors (k-NN) and support vector machine (SVM) classifiers were used to compare the classification success of deep learning with machine learning. Within the scope of the study, a 23-layer CNN architecture was designed and used as a classifier, and training and testing processes were also performed for the AlexNet and MobileNetv2 CNN architectures. The classification results were also calculated for the case of increasing the number of training images of the first 23-layer CNN architecture by 5, 10, and 20 times using data augmentation methods. To reveal the effect of the change in the number of images in the training and test clusters on the results, two different training and testing processes, 2-fold and 10-fold cross-validation, were performed. Thanks to these detailed calculations, a comprehensive comparison of the success of the texture analysis method, machine learning, and deep learning in COVID-19 classification from CT images was made. The highest mean sensitivity, specificity, accuracy, F-1 score, and AUC values obtained were 0.9197, 0.9891, 0.9473, 0.9058, and 0.9888, respectively, for 2-fold cross-validation, and 0.9404, 0.9901, 0.9599, 0.9284, and 0.9903, respectively, for 10-fold cross-validation.
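The machine-learning side of such a comparison (k-NN and SVM evaluated with 2-fold and 10-fold cross-validation) can be sketched with scikit-learn. The feature matrix below is a random placeholder standing in for texture features of CT slices, so the printed numbers are meaningless; only the evaluation pattern is shown.

```python
# Sketch of the ML baseline comparison with 2-fold and 10-fold cross-validation.
# X stands in for texture features of CT slices; random data is used as a placeholder.
import numpy as np
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 60))
y = rng.integers(0, 2, size=400)   # placeholder COVID-19 vs non-COVID-19 labels

for name, clf in [("k-NN", KNeighborsClassifier(n_neighbors=5)), ("SVM", SVC(kernel="rbf"))]:
    for k in (2, 10):
        cv = StratifiedKFold(n_splits=k, shuffle=True, random_state=0)
        scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
        print(f"{name}, {k}-fold: mean accuracy = {scores.mean():.3f}")
```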
Article (Citation - WoS: 3, Citation - Scopus: 2)
A Novel Study for Automatic Two-Class COVID-19 Diagnosis (Between COVID-19 and Healthy, Pneumonia) on X-Ray Images Using Texture Analysis and 2-D/3-D Convolutional Neural Networks (Springer, 2022)
Yaşar, Hüseyin; Ceylan, Murat
The pandemic caused by the COVID-19 virus affects the world widely and heavily. When examining CT, X-ray, and ultrasound images, radiologists must first determine whether there are signs of COVID-19 in the images; that is, COVID-19/healthy detection is made. The second determination is the separation of pneumonia caused by the COVID-19 virus from pneumonia caused by a bacterium or a virus other than COVID-19. This distinction is key in determining the treatment and isolation procedure to be applied to the patient. In this study, which aims to diagnose COVID-19 early using X-ray images, automatic two-class classification was carried out under four different headings: COVID-19/Healthy, COVID-19 Pneumonia/Bacterial Pneumonia, COVID-19 Pneumonia/Viral Pneumonia, and COVID-19 Pneumonia/Other Pneumonia. For this study, 3405 COVID-19, 2780 bacterial pneumonia, 1493 viral pneumonia, and 1989 healthy images, obtained by combining eight different open-access data sets, were used. Besides using the original X-ray images alone, classification results were also obtained using images produced with the local binary pattern (LBP) and local entropy (LE). The classification procedures were repeated for the original images and the LBP and LE images combined in various ways. Two-dimensional (2-D) and three-dimensional (3-D) convolutional neural network (CNN) architectures were used as classifiers; MobileNetv2, ResNet101, and GoogLeNet were used as 2-D CNNs, and a 24-layer 3-D CNN architecture was also designed and used. Our study is the first to analyze the effect of diversifying the input data type on the classification results of 2-D/3-D CNN architectures. The results indicate that diversifying X-ray images with texture analysis methods and including them as CNN input provides significant improvements in the results, and that the 3-D CNN architecture can be an important alternative for achieving high classification performance.

Article (Citation - WoS: 71, Citation - Scopus: 98)
Residual LSTM Layered CNN for Classification of Gastrointestinal Tract Diseases (ACADEMIC PRESS INC ELSEVIER SCIENCE, 2021)
Öztürk, Şaban; Özkaya, Umut
Nowadays, considering the number of patients per specialist doctor, the scale of the need for automatic medical image analysis methods is clear. These systems, which are highly advantageous compared to manual analysis in terms of both cost and time, benefit from artificial intelligence (AI). AI mechanisms that mimic the decision-making process of a specialist improve their diagnostic performance day by day as technology develops. In this study, an AI method is proposed to effectively classify gastrointestinal (GI) tract image datasets containing a small number of labeled samples. The proposed method uses the convolutional neural network (CNN) architecture, widely accepted as the most successful automatic classification approach, as a backbone. In our approach, a shallowly trained CNN architecture needs to be supported by a strong classifier to classify unbalanced datasets robustly. For this purpose, the features from each pooling layer of the CNN architecture are passed to an LSTM layer, and a classification is made by combining all LSTM layers. All experiments are carried out using AlexNet, GoogLeNet, and ResNet to evaluate the contribution of the proposed residual LSTM structure fairly. In addition, three experiments are carried out with 2000, 4000, and 6000 samples to determine the effect of the sample count on the proposed method. The performance of the proposed method is higher than that of other state-of-the-art methods.
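One possible reading of "features from each pooling layer are passed to an LSTM layer and combined" is sketched below in PyTorch: pooled feature maps from several CNN stages are each globally averaged, fed through their own small LSTM, and the LSTM outputs are concatenated for classification. This is an illustrative interpretation with a toy CNN and made-up channel sizes, not the authors' published layer configuration.

```python
# Hypothetical sketch: per-pooling-stage features -> per-stage LSTMs -> combined classifier.
# The toy CNN and channel sizes are placeholders for an AlexNet/GoogLeNet/ResNet backbone.
import torch
import torch.nn as nn

class PooledLSTMClassifier(nn.Module):
    def __init__(self, stage_channels=(32, 64, 128), hidden=64, num_classes=8):
        super().__init__()
        self.stages = nn.ModuleList()
        in_ch = 3
        for ch in stage_channels:                     # toy CNN stages, each ending in pooling
            self.stages.append(nn.Sequential(
                nn.Conv2d(in_ch, ch, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)))
            in_ch = ch
        self.lstms = nn.ModuleList(nn.LSTM(ch, hidden, batch_first=True)
                                   for ch in stage_channels)
        self.fc = nn.Linear(hidden * len(stage_channels), num_classes)

    def forward(self, x):
        outs = []
        for stage, lstm in zip(self.stages, self.lstms):
            x = stage(x)
            vec = x.mean(dim=(2, 3)).unsqueeze(1)     # global average pool -> one-step sequence
            h, _ = lstm(vec)
            outs.append(h[:, -1, :])
        return self.fc(torch.cat(outs, dim=1))        # combine all LSTM outputs

model = PooledLSTMClassifier()
print(model(torch.randn(2, 3, 64, 64)).shape)         # torch.Size([2, 8])
```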
Article (Citation - WoS: 96, Citation - Scopus: 120)
Skin Lesion Segmentation With Improved Convolutional Neural Network (SPRINGER, 2020)
Öztürk, Şaban; Özkaya, Umut
Recently, the incidence of skin cancer has increased considerably and is seriously threatening human health. Automatic detection of this disease, for which early detection is critical to human life, is quite challenging. Factors such as undesirable residues (hair, ruler markers), indistinct boundaries, variable contrast, shape differences, and color differences in skin lesion images make automatic analysis difficult. To overcome these challenges, a highly effective segmentation method based on a fully convolutional network (FCN) is presented in this paper. The proposed improved FCN (iFCN) architecture is used for the segmentation of full-resolution skin lesion images without any pre- or post-processing; it supports the residual structure of the FCN architecture with spatial information. This more advanced residual scheme enables more precise detection of details at the edges of the lesion and an analysis independent of skin color. The method offers two contributions: determining the center of the lesion and clarifying the edge details despite the undesirable effects. Two publicly available datasets, the IEEE International Symposium on Biomedical Imaging (ISBI) 2017 Challenge and PH2 datasets, are used to evaluate the performance of the iFCN method. The mean Jaccard index is 78.34%, the mean Dice score is 88.64%, and the mean accuracy is 95.30% for the ISBI 2017 test dataset; for the PH2 test dataset, the mean Jaccard index is 87.1%, the mean Dice score is 93.02%, and the mean accuracy is 96.92%.
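The Jaccard index and Dice score used to report the segmentation results above are straightforward to compute from binary masks; a minimal NumPy sketch follows, assuming predicted and ground-truth masks of the same shape.

```python
# Minimal sketch: Jaccard index and Dice score for binary segmentation masks.
import numpy as np

def jaccard_and_dice(pred, truth):
    """pred, truth: boolean arrays of the same shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    total = pred.sum() + truth.sum()
    jaccard = intersection / union if union else 1.0
    dice = 2 * intersection / total if total else 1.0
    return jaccard, dice

# Toy example with two overlapping square masks.
pred = np.zeros((100, 100), bool); pred[20:60, 20:60] = True
truth = np.zeros((100, 100), bool); truth[30:70, 30:70] = True
print(jaccard_and_dice(pred, truth))
```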

