Bilgisayar ve Bilişim Fakültesi Koleksiyonu
Permanent URI for this collection: https://hdl.handle.net/20.500.13091/10834
Browsing Bilgisayar ve Bilişim Fakültesi Koleksiyonu by Language "en"
Now showing 1 - 20 of 195
Article
A 3D U-Net Based on Early Fusion Model: Improvement, Comparative Analysis With State-of-the-Art Models and Fine-Tuning (Konya Teknik Univ, 2024) Kayhan, Beyza; Uymaz, Sait Ali
Multi-organ segmentation is the process of identifying and separating multiple organs in medical images. This segmentation allows for the detection of structural abnormalities by examining the morphological structure of organs. Performing this process quickly and precisely has become an important issue in today's conditions. In recent years, researchers have used various technologies for the automatic segmentation of multiple organs. In this study, improvements were made to increase the multi-organ segmentation performance of a 3D U-Net based fusion model combining the HSV and grayscale color spaces, and the improved model was compared with state-of-the-art models. Training and testing were performed on the MICCAI 2015 dataset published at Vanderbilt University, which contains 3D abdominal CT images in NIfTI format. The model's performance was evaluated using the Dice similarity coefficient. In the tests, the liver showed the highest Dice score. Considering the average Dice score across all organs and comparing it with other models, the fusion approach was observed to yield promising results.

Article Citation - WoS: 1
Academic Text Clustering Using Natural Language Processing (2022) Taşkıran, Fatma; Kaya, Ersin
Accessing data is very easy nowadays. However, to use these data efficiently, it is necessary to extract the right information from them. Categorizing these data so that the needed information can be reached in a short time provides great convenience. Moreover, research in the academic field generally relies on text-based data such as articles, papers, and theses. Natural language processing and machine learning methods are used to extract the right information from these text-based data. In this study, abstracts of academic papers are clustered.
Text data from academic paper abstracts are preprocessed using natural language processing techniques. Vectorized word representations are extracted from the preprocessed data with Word2Vec and BERT word embeddings, and these representations are clustered with four clustering algorithms.

Article Citation - WoS: 29 Citation - Scopus: 54
AlexNet Architecture Variations With Transfer Learning for Classification of Wound Images (Elsevier B.V., 2023) Eldem, H.; Ülker, E.; Işıklı, O.Y.
In the medical world, wound care and follow-up is one of the issues gaining importance day by day. Accurate and early recognition of wounds can reduce treatment costs. In the field of computer vision, deep learning architectures have received great attention recently. The achievements of existing pre-trained architectures in describing (classifying) data from many real-world image sets are well established. However, to increase the success of these architectures in a specific area, some improvements and enhancements can be made to the architecture. In this paper, the classification of pressure and diabetic wound images was performed with high accuracy. Six new AlexNet architecture variations (3Conv_Softmax, 3Conv_SVM, 4Conv_Softmax, 4Conv_SVM, 6Conv_Softmax, 6Conv_SVM) were created with different numbers of Convolution, Pooling, and Rectified Linear Activation (ReLU) layers. The classification performances of the proposed models were investigated using the Softmax and SVM classifiers separately. A new, original Wound Image Database was created for the performance measures. According to the experimental results obtained for this database, the model with 6 Convolution layers (6Conv_SVM) was the most successful of the proposed methods, with 98.85% accuracy, 98.86% sensitivity, and 99.42% specificity.
The 6Conv_SVM model was also tested on diabetic and pressure wound images in the public Medetec dataset, and 95.33% accuracy, 95.33% sensitivity, and 97.66% specificity values were obtained. The proposed method provides high performance compared to the pre-trained AlexNet architecture and other state-of-the-art models in the literature. The results showed that the proposed 6Conv_SVM architecture can be used by the relevant departments in the medical world, with good performance in medical tasks such as examining and classifying wound images and following up on the wound healing process. © 2023 Karabuk University

Article Citation - WoS: 3 Citation - Scopus: 6
Analysis of Machine Learning Classification Approaches for Predicting Students' Programming Aptitude (MDPI, 2023) Çetinkaya, Ali; Baykan, Ömer Kaan; Kırgız, Havva
With the increasing prevalence and significance of computer programming, a crucial challenge for teachers and parents is to identify students adept at computer programming and direct them to relevant programming fields. As most studies on students' coding abilities focus on elementary, high school, and university students in developed countries, we aimed to determine the coding abilities of middle school students in Turkey. We first administered a three-part spatial test to 600 secondary school students, of whom 400 completed the survey and the 20-level Classic Maze course on Code.org. We then employed four machine learning (ML) algorithms, namely support vector machine (SVM), decision tree, k-nearest neighbor, and quadratic discriminant, to classify the coding abilities of these students using spatial test and Code.org platform data. SVM yielded the most accurate results and can thus be considered a suitable ML technique for determining the coding abilities of participants.
This article promotes quality education and coding skills for workforce development and sustainable industrialization, aligned with the United Nations Sustainable Development Goals.

Article Citation - WoS: 7 Citation - Scopus: 12
Analyzing the Effect of Data Preprocessing Techniques Using Machine Learning Algorithms on the Diagnosis of COVID-19 (Wiley, 2022) Erol, Gizemnur; Uzbaş, Betül; Yücelbaş, Cüneyt; Yücelbaş, Sule
Real-time polymerase chain reaction (RT-PCR), known as the swab test, is a diagnostic test that can diagnose COVID-19 disease through respiratory samples in the laboratory. Due to the rapid spread of the coronavirus around the world, the RT-PCR test has become insufficient for obtaining fast results. For this reason, the need for diagnostic methods to fill this gap has arisen, and machine learning studies have started in this area. On the other hand, working with medical data is challenging because the data are inconsistent, incomplete, difficult to scale, and very large. Additionally, some poor clinical decisions, irrelevant parameters, and limited medical data adversely affect the accuracy of the studies performed. Therefore, considering that datasets containing COVID-19 blood parameters are fewer in number than other medical datasets today, this study aims to improve these existing datasets. In this direction, to obtain more consistent results in COVID-19 machine learning studies, the effect of data preprocessing techniques on the classification of COVID-19 data was investigated. First, categorical feature encoding and feature scaling were applied to a dataset with 15 features containing blood data of 279 patients, including gender and age information. Then, the missing values in the dataset were imputed using both the K-nearest neighbor (KNN) algorithm and multiple imputation by chained equations (MICE).
Data balancing was done with the synthetic minority oversampling technique (SMOTE). The effect of the data preprocessing techniques on the ensemble learning algorithms bagging, AdaBoost, and random forest, and on the popular classifiers KNN, support vector machine, logistic regression, artificial neural network, and decision tree, has been analyzed. The highest accuracies obtained with the bagging classifier were 83.42% and 83.74% with the KNN and MICE imputations, respectively, when applying SMOTE. On the other hand, the highest accuracy reached with the same classifier without SMOTE was 83.91%, for the KNN imputation. In conclusion, certain data preprocessing techniques are examined comparatively, their effect on success is presented, and the importance of the right combination of data preprocessing for achieving success has been demonstrated by experimental studies.

Article Citation - WoS: 2
Apneic Events Detection Using Different Features of Airflow Signals (MEHRAN UNIV ENGINEERING & TECHNOLOGY, 2019) Göğüş, Fatma Zehra; Tezel, Gülay
Apneic-event based sleep disorders are very common and greatly affect people's daily lives. However, diagnosing these disorders by detecting apneic events is very difficult. Studies show that analyses of airflow signals are effective in the diagnosis of apneic-event based sleep disorders. According to these studies, diagnosis can be performed by detecting the apneic episodes of the airflow signals. This work deals with the detection of apneic episodes in airflow signals belonging to the Apnea-ECG (Electrocardiogram) and MIT-BIH (Massachusetts Institute of Technology, Boston's Beth Israel Hospital) databases. In order to accomplish this task, three representative feature sets, namely a classic feature set, an amplitude feature set, and a descriptive model feature set, were created.
The performance of these feature sets was evaluated individually and in combination, with the aid of the random forest classifier, to detect apneic episodes. Moreover, effective features were selected with the OneR Attribute Eval feature selection algorithm to obtain higher performance. The 28 features selected for the Apnea-ECG database and the 31 selected for the MIT-BIH database, out of 54 features, were applied to the classifier to compare achievements. As a result, the highest classification accuracies were obtained with the use of the effective features: 96.21% for the Apnea-ECG database and 92.23% for the MIT-BIH database. The kappa values (91.80% and 81.96%) are also quite good and support the classification accuracies for both databases. The results of the study are quite promising for determining apneic events on a minute-by-minute basis.

Article Citation - WoS: 3
Application of ABM to Spectral Features for Emotion Recognition (MEHRAN UNIV ENGINEERING & TECHNOLOGY, 2018) Demircan, Semiye; Örnek, Humar Kahramanlı
ER (Emotion Recognition) from speech signals has lately been among the attractive subjects. As is known, feature extraction and feature selection are the most important processing steps in ER from speech signals. The aim of the present study is to select the most relevant spectral feature subset. The proposed method is based on feature selection with an optimization algorithm among the features obtained from speech signals. Firstly, MFCC (Mel-Frequency Cepstrum Coefficients) were extracted from the EmoDB. Several statistical values, such as maximum, minimum, mean, standard deviation, skewness, kurtosis, and median, were obtained from the MFCC. The next step of the study was feature selection, which was performed in two stages: in the first stage, ABM (Agent-Based Modelling), which has rarely been applied in this area, was applied to the actual features. In the second stage, the Opt-aiNET optimization algorithm was applied in order to choose the agent group giving the best classification success.
The last step of the study is classification. An ANN (Artificial Neural Network) and 10-fold cross-validation were used for classification and evaluation. The application was limited in scope to three emotions. As a result, it was seen that the classification accuracy rose after applying the proposed method. The method showed promising performance with spectral features.

Conference Object
An Application of Tree Seed Algorithm for Optimization of 50 and 100 Dimensional Numerical Functions (Institute of Electrical and Electronics Engineers Inc., 2021) Güngör, İmral; Emiroğlu, Bülent Gürsel; Uymaz, S.A.; Kıran, Mustafa Servet
The Tree-Seed Algorithm (TSA) is an optimization algorithm created by observing how, in nature, seeds scattered around trees grow and become new trees. In this study, TSA is applied to optimize high-dimensional functions. In previous studies, the performance of the tree-seed algorithm has been proven for the optimization of low-dimensional functions. Thus, in addition to having been run on 30-dimensional functions before, it is also applied here to solve 50- and 100-dimensional numerical functions. The developed tree-seed algorithm is based on using multiple solution update mechanisms instead of a single mechanism. In the experiments, the CEC2015 benchmark functions are used, and the developed tree-seed algorithm is compared with the base TSA, the artificial bee colony, particle swarm optimization, and some variants of the differential evolution algorithm. Experimental results are reported as the mean, maximum, and minimum solutions and the standard deviation of 30 different runs. As a result, the studies show that the developed algorithm gives successful results.
© 2021 IEEE.

Article Citation - WoS: 3 Citation - Scopus: 4
Approaches To Automated Land Subdivision Using Binary Search Algorithm in Zoning Applications (Ice Publishing, 2022) Koç, İsmail; Çay, Tayfun; Babaoğlu, İsmail
The planned development of urban areas depends on zoning applications. Although zoning practices are performed using different techniques, the parcelling operations that shape the future view of the city are the same. Preparing the parcelling plans is an important step that has a direct impact on ownership structure and reallocation. Parcelling operations are traditionally handled manually by a technician. This is a serious problem in terms of time and cost. In this study, taking the zoning legislation into account, the production of a preliminary land subdivision plan has been performed automatically for a region of Konya, one of the major cities in Turkey. The parcelling processes have been performed in three different ways: the first technique is parcelling with edge values, the second is parcelling with area values, and the third is parcelling using both edge and area values together. For the entire parcelling process, the area of the parcel has been calculated using the Gauss method. Moreover, to effectively determine the boundaries and to calculate the parcel area in the parcelling process, the binary search technique has been used in all the methods. The experimental results show that the parcelling operations were carried out very quickly and successfully.

Article
Automatic Localization of Cephalometric Landmarks Using Convolutional Neural Networks (2021) Nourdine Mogham Njikam Mohamed; Uzbaş, Betül
Experts have brought forward interesting and effective methods to address critical medical analysis problems. One of these fields of research is cephalometric analysis.
During the analysis of the dental and skeletal relationships of the human skull, cephalometric analysis plays an important role, as it facilitates the interpretation of the bone, tooth, and soft tissue structures of the patient. It is used during oral, craniofacial, and maxillofacial surgery and during treatments in orthodontic and orthopedic departments. The automatic localization of cephalometric landmarks reduces possible human errors and saves time. To perform automatic localization of cephalometric landmarks, a deep learning model inspired by the U-Net model has been proposed. The 19 cephalometric landmarks that are generally determined manually by experts are obtained automatically using this model. The cephalometric X-ray image dataset created in the context of the IEEE 2015 International Symposium on Biomedical Imaging (ISBI 2015) is used, and data augmentation is applied to it for this experiment. A success detection rate (SDR) of 74% was achieved within the range of 2 mm, 81.4% within 2.5 mm, 86.3% within 3 mm, and 92.2% within 4 mm.

Article Citation - Scopus: 1
Automatic Sleep Stage Classification for the Obstructive Sleep Apnea (Trans Tech Publications Ltd, 2023) Özsen, Seral; Koca, Yasin; Tezel, Gülay; Solak, Fatma Zehra; Vatansev, Hulya; Kucukturk, Serkan
Automatic sleep scoring systems have received much more attention in the last decades. Although a wide variety of studies have been carried out in this subject area, the accuracies are still below acceptable limits for applying these methods to real-life data. One can find many high-accuracy studies in the literature using a standard database, but when it comes to using real data, reaching such high performance is not straightforward. In this study, five distinct datasets were prepared using 124 persons, including 93 unhealthy and 31 healthy persons. These datasets consist of time-, nonlinear-, Welch-, discrete wavelet transform-, and Hilbert-Huang transform-based features.
By applying k-NN, decision tree, ANN, SVM, and bagged tree classifiers to these feature sets in various combinations with feature selection, the highest classification accuracy was sought. The maximum classification accuracy was obtained with the bagged tree classifier, at 95.06%, using 14 features out of a total of 136. This accuracy is relatively high compared with the literature for a real-data application.

Article Citation - WoS: 2
B-Spline Curve Approximation by Utilizing Big Bang-Big Crunch Method (TECH SCIENCE PRESS, 2020) İnik, Özkan; Ülker, Erkan; Koç, İsmail
The location of knot points and the estimation of the number of knots are undoubtedly among the most difficult problems in B-Spline curve approximation. In the literature, different researchers have used more than one optimization algorithm to solve this problem. In this paper, the Big Bang-Big Crunch method (BB-BC), one of the evolutionary optimization algorithms, was introduced, and the approximation of B-Spline curve knots was conducted by this method. The technique of reverse engineering was implemented for the curve knot approximation. The knot locations and the number of knots were randomly selected in the curve approximation performed using the BB-BC method. The experiments were carried out utilizing seven different test functions for the curve approximation. The performance of the BB-BC algorithm was examined on these functions, and the results were compared with earlier studies performed by other researchers. In comparison with the other studies, it was observed that though the number of knots in the BB-BC algorithm was high, the algorithm approximated the B-Spline curves with minor error.

Conference Object
Binary African Vultures Optimization Algorithm for Z-Shaped Transfer Functions (2023) Baş, Emine
Metaheuristic algorithms are of great importance in solving binary optimization problems.
The African Vulture Optimization algorithm (AVO) is a swarm intelligence-based heuristic algorithm created by imitating the living behavior of African vultures. In this study, the AVO, which has been proposed in recent years, is restructured to solve binary optimization problems; thus, the Binary AVO (BAVO) is proposed. Four different z-shaped transfer functions are chosen to convert the continuous search space to a binary search space. Variations of BAVO are defined according to the transfer function used (BAVO1, BAVO2, BAVO3, and BAVO4). The success of these variations was tested on thirteen classic test functions containing unimodal and multimodal functions. Three different dimensions were used in the study (5, 10, and 20). Each test function was run ten times independently, and the average, standard deviation, best, and worst values were obtained. According to the results, the most successful of these variations was identified: the BAVO4 variant achieved better results at higher dimensions. The success of BAVO with z-shaped transfer functions was demonstrated for the first time in this study.

Article Citation - WoS: 30 Citation - Scopus: 36
Binary Aquila Optimizer for 0-1 Knapsack Problems (Pergamon-Elsevier Science Ltd, 2023) Baş, Emine
The optimization process entails determining the best values for various system characteristics in order to complete the system design at the lowest possible cost. In general, real-world applications and issues in artificial intelligence and machine learning are discrete, constrained, or unconstrained. Optimization approaches have a high success rate in tackling such situations. As a result, several sophisticated heuristic algorithms based on swarm intelligence have been presented in recent years. Various academics in the literature have worked on such algorithms and have effectively addressed many difficulties. Aquila Optimizer (AO) is one such algorithm.
Aquila Optimizer (AO) is a recently suggested, population-based heuristic algorithm created by imitating the behavior of the Aquila in nature while catching its prey. The AO algorithm was originally developed to solve continuous optimization problems. In this study, the AO structure has been updated to solve binary optimization problems. Problems encountered in the real world do not always have continuous values; many involve discrete values. Therefore, algorithms that solve continuous problems need to be restructured to solve discrete optimization problems as well. Binary optimization problems constitute a subgroup of discrete optimization problems. In this study, a new algorithm (BAO) is proposed for binary optimization problems. The most successful variant, the BAO-T algorithm, was created by testing the success of BAO with eight different transfer functions. Transfer functions play an active role in converting the continuous search space to a binary search space. BAO has also been developed further by adding crossover and mutation methods to the candidate solution step (BAO-CM). The success of the proposed BAO-T and BAO-CM algorithms has been tested on the knapsack problem, which is widely chosen among binary optimization problems in the literature. The knapsack problem instances are divided into three benchmark groups in this study: a total of sixty-three low-, medium-, and large-scale knapsack problems were used as test datasets. The performances of the BAO-T and BAO-CM algorithms were examined in detail, and the results were clearly shown with graphs. In addition, the results of the BAO-T and BAO-CM algorithms have been compared with new heuristic algorithms proposed in the literature in recent years, and their success has been proven.
According to the results, BAO-CM performed better than BAO-T and can be suggested as an alternative algorithm for solving binary optimization problems.

Article Citation - WoS: 44 Citation - Scopus: 47
Binary Artificial Algae Algorithm for Feature Selection (Elsevier, 2022) Türkoğlu, Bahaeddin; Uymaz, Sait Ali; Kaya, Ersin
In this study, binary versions of the Artificial Algae Algorithm (AAA) are presented and employed to determine the ideal attribute subset for classification processes. AAA is a recently proposed algorithm inspired by the living behavior of microalgae, which had not yet been applied to ideal attribute subset determination (feature selection) processes. AAA can effectively search the feature space for the ideal attribute combination that minimizes a designed objective function. The proposed binary versions of AAA are employed to determine the ideal attribute combination that maximizes classification success while minimizing the number of attributes. These versions utilize the original AAA, with its continuous values mapped to binary using an appropriate threshold function. In order to demonstrate the performance of the presented binary artificial algae algorithm model, an experimental study was conducted with seven recent high-performance optimization algorithms. Several evaluation metrics are used to accurately evaluate and analyze the performance of these algorithms over twenty-five datasets with different difficulty levels from the UCI Machine Learning Repository. The experimental results and statistical tests verify the performance of the presented algorithms in increasing classification accuracy compared to other state-of-the-art binary algorithms, which confirms the capability of the AAA algorithm in exploring the attribute space and deciding the most valuable features for classification problems. © 2022 Elsevier B.V.
All rights reserved.

Article Citation - WoS: 31 Citation - Scopus: 32
A Binary Artificial Bee Colony Algorithm and Its Performance Assessment (PERGAMON-ELSEVIER SCIENCE LTD, 2021) Kıran, Mustafa Servet
The artificial bee colony algorithm, ABC for short, is a swarm-based optimization algorithm proposed for solving continuous optimization problems. Due to its simple but effective structure, some binary versions of the algorithm have been developed. In this study, we focus on a modification of its xor-based binary version, called binABC. The solution update rule of the basic ABC is replaced with an xor logic gate in the binABC algorithm, and binABC works on a discretely structured solution space. The rest of the components in binABC are the same as in the basic ABC algorithm. In order to improve the local search capability and convergence characteristics of binABC, a stigmergic behavior-based update rule for the onlooker bees of binABC and an extended version of the xor-based update rule are proposed in the present study. The developed version of binABC is applied to solve a modern benchmark problem set (CEC2015). To validate the performance of the proposed algorithm, a series of comparisons is conducted on this problem set. The proposed algorithm is first compared with the basic ABC and binABC on the CEC2015 set. After its performance validation, six binary versions of the ABC algorithm are considered for comparison, and a comprehensive comparison among state-of-the-art variants of swarm intelligence and evolutionary computation algorithms is conducted on this set of functions. Finally, an uncapacitated facility location problem set, a pure binary optimization problem, is considered for the comparison of the proposed algorithm with binary variants of the ABC algorithm.
The experimental results and comparisons show that the proposed algorithm is as successful and effective in solving binary optimization problems as its basic version is in solving continuous optimization problems.

Conference Object
Binary Fox Optimization Algorithm Based U-Shaped Transfer Functions for Knapsack Problems (2023) Baş, Emine
This paper examines a new optimization algorithm called the Fox optimizer (FOX), which mimics the foraging behavior of foxes while hunting in nature. When the literature is examined, it is seen that there is no version of FOX that solves binary optimization problems. In this study, the continuous search space is converted to a binary search space by U-shaped transfer functions, and BinFOX is proposed. There are four U-shaped transfer functions in the literature. Based on these transfer functions, four BinFOX variants are derived (BinFOX1, BinFOX2, BinFOX3, and BinFOX4). With the BinFOX variants, 25 well-known 0-1 knapsack problems in the literature have been solved and their success has been demonstrated. The best, worst, mean, standard deviation, time, and gap values of each variant were calculated. According to the results, the most successful BinFOX variant was determined. The success of BinFOX with U-shaped transfer functions was demonstrated for the first time in this study.

Article Citation - WoS: 20 Citation - Scopus: 24
A Binary Social Spider Algorithm for Continuous Optimization Task (SPRINGER, 2020) Baş, Emine; Ülker, Erkan
The social spider algorithm (SSA) is a new heuristic algorithm based on spider behaviors. The original study of this algorithm was proposed to solve continuous problems. In this paper, the binary version of SSA (binary SSA) is introduced to solve binary problems. Currently, there is insufficient focus on the binary version of SSA in the literature. The main part of the binary version is the transfer function. The transfer function is responsible for mapping the continuous search space to a discrete search space.
In this study, four transfer functions divided into two families, S-shaped and V-shaped, are evaluated. Thus, four different variations of binary SSA are formed: binary SSA-Tanh, binary SSA-Sigm, binary SSA-MSigm, and binary SSA-Arctan. Two different techniques (SimSSA and LogicSSA) are developed for the candidate solution production schema in binary SSA. SimSSA is used to measure similarities between two binary solutions. With SimSSA, binary SSA's ability to discover new points in the search space has been increased; thus, binary SSA is able to find the global optimum instead of local optima. LogicSSA, which is inspired by logic gates, a popular method in recent years, has been used to avoid local minimum traps. By these two techniques, the exploration and exploitation capabilities of binary SSA in the binary search space are improved. Eighteen unimodal and multimodal standard benchmark optimization functions are employed to evaluate the variations of binary SSA. To select the best variations of binary SSA, a comparative study is presented. The Wilcoxon signed-rank test has been applied to the experimental results of the variations of binary SSA. Compared to well-known evolutionary and recently developed methods in the literature, the performance of the binary SSA variations is quite good. In particular, the binary SSA-Tanh and binary SSA-Arctan variations showed superior performance.

Article Citation - WoS: 32 Citation - Scopus: 35
A Binary Social Spider Algorithm for Uncapacitated Facility Location Problem (PERGAMON-ELSEVIER SCIENCE LTD, 2020) Baş, Emine; Ülker, Erkan
In order to find efficient solutions to complex real-world problems, computer science methods, and especially heuristic algorithms, are often used. Heuristic algorithms can give optimal solutions for large-scale optimization problems in an acceptable period. The Social Spider Algorithm (SSA), a heuristic algorithm based on spider behaviors, is studied here.
The original study of this algorithm was proposed to solve continuous problems. In this paper, the binary version of the Social Spider Algorithm, called the Binary Social Spider Algorithm (BinSSA), is proposed for binary optimization problems. BinSSA is obtained from SSA by transforming the continuous search space to a binary search space with four transfer functions. Thus, the BinSSA variations BinSSA1, BinSSA2, BinSSA3, and BinSSA4 are created. The steps of the original SSA are updated for BinSSA. The random walking schema in SSA is replaced by a candidate solution schema in BinSSA. Two new methods (a similarity measure and a logic gate) are used in the candidate solution production schema to increase the exploration and exploitation capacity of BinSSA. The performance of both techniques on BinSSA is examined, and this version is named BinSSA(Sim&Logic). The local search and global search performance of BinSSA is increased by these two methods. Three different studies are performed with BinSSA. In the first study, the performance of BinSSA is tested on the classic eighteen unimodal and multimodal benchmark functions. Thus, the best variation of BinSSA and BinSSA(Sim&Logic) is determined to be BinSSA4(Sim&Logic). BinSSA4(Sim&Logic) has been compared with other heuristic algorithms on the CEC2005 and CEC2015 functions. In the second study, uncapacitated facility location problems (UFLPs), which are pure binary optimization problems, are solved with BinSSA(Sim&Logic). BinSSA is tested on fifteen low-, middle-, and large-scale UFLP samples, and the obtained results are compared with eighteen state-of-the-art algorithms. In the third study, we solved UFL problems on a different dataset, named M*, with BinSSA(Sim&Logic). The results of BinSSA(Sim&Logic) are compared with the Local Search (LS), Tabu Search (TS), and Improved Scatter Search (ISS) algorithms. The obtained results have shown that BinSSA offers quality and stable solutions. © 2020 Elsevier Ltd.
All rights reserved.

Article Citation - WoS: 10 Citation - Scopus: 11
A Binary Sparrow Search Algorithm for Feature Selection on Classification of X-Ray Security Images (Elsevier Ltd, 2024) Babalik, A.; Babadag, A.
In today's world, especially in public places, strict security measures are being implemented. Among these measures, the most common is the inspection of the contents of people's belongings, such as purses, knapsacks, and suitcases, through X-ray imaging to detect prohibited items. However, this process is typically performed manually by security personnel. It is an exhausting task that demands continuous attention and concentration, making it prone to errors. Additionally, the detection and classification of overlapping and occluded objects can be challenging. Therefore, automating this process can be highly beneficial for reducing errors and improving overall efficiency. In this study, a framework consisting of three fundamental phases for the classification of prohibited objects was proposed. In the first phase, a deep neural network was trained using X-ray images to extract features; a convolutional neural network model was utilized for this feature extraction. In the second phase, the features that best represent the object were selected. Feature selection helps eliminate redundant features, leading to efficient use of memory, reduced computational costs, and improved classification accuracy owing to the decrease in the number of features. For this phase, the Sparrow Search Algorithm was binarized and proposed as binISSA, and feature selection was implemented using it. In the final phase, classification was performed on the selected features using the K-Nearest Neighbors (KNN) and Support Vector Machine (SVM) algorithms. The performances of the convolutional neural network and the proposed framework were compared.
In addition, the performance of the proposed framework was compared with that of other state-of-the-art meta-heuristic algorithms. The proposed method increased the classification accuracy of the network from 0.9702 to 0.9763 using both the KNN and SVM (linear kernel) classifiers. The total number of features extracted by the deep neural network was 512. With the application of the proposed binISSA, the average number of features was reduced to 25.33 using the KNN classifier and 32.70 using the SVM classifier. The results indicate a notable reduction in the number of features extracted from the convolutional neural network and an improvement in classification accuracy. © 2024 Elsevier B.V.
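Many of the binary metaheuristics listed above (BAVO, BAO, BinFOX, binary SSA, BinSSA, binISSA) share one core mechanism: a transfer function that maps each continuous position value into [0, 1], after which the value is thresholded into a 0/1 bit. A minimal sketch of that step is shown below; the function names `s_shaped`, `v_shaped`, and `binarize` are illustrative and not taken from any of the cited papers.

```python
import math
import random

def s_shaped(x):
    """S-shaped (sigmoid) transfer function: maps a real value into [0, 1]."""
    return 1.0 / (1.0 + math.exp(-x))

def v_shaped(x):
    """V-shaped transfer function: |tanh(x)| also maps into [0, 1]."""
    return abs(math.tanh(x))

def binarize(position, transfer=s_shaped, rng=random.random):
    """Turn a continuous position vector into a 0/1 solution by comparing
    each dimension's transfer value against a random threshold."""
    return [1 if transfer(x) > rng() else 0 for x in position]

# Under the S-shaped function, a strongly positive dimension almost always
# maps to 1 and a strongly negative one to 0; the V-shaped family instead
# measures distance from zero, so both extremes tend to flip to 1.
bits = binarize([10.0, -10.0], s_shaped, lambda: 0.5)
```

The papers above differ mainly in which family (S-, V-, U-, or Z-shaped) they evaluate and in how the thresholded bit interacts with the previous solution.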
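The segmentation studies in this collection evaluate performance with the Dice similarity coefficient. A minimal sketch of that metric for flat 0/1 masks follows; the function name and the list-based mask representation are illustrative assumptions, not taken from the cited study.

```python
def dice_coefficient(pred, target):
    """Dice similarity coefficient of two binary masks given as flat
    0/1 sequences: 2*|A intersect B| / (|A| + |B|)."""
    intersection = sum(p and t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    # Convention: two empty masks count as a perfect match.
    return 2.0 * intersection / total if total else 1.0

# Perfect overlap scores 1.0; partial overlap scores between 0 and 1.
score = dice_coefficient([1, 1, 0, 0], [1, 0, 0, 0])  # 2*1 / (2+1)
```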

