Faculty of Computer and Informatics Collection (Bilgisayar ve Bilişim Fakültesi Koleksiyonu)
Permanent URI for this collection: https://hdl.handle.net/20.500.13091/10834
Browsing the Faculty of Computer and Informatics Collection by Access Right "info:eu-repo/semantics/closedAccess"
Now showing 1 - 20 of 95
Article | Citation - WoS: 7 | Citation - Scopus: 12
Analyzing the Effect of Data Preprocessing Techniques Using Machine Learning Algorithms on the Diagnosis of Covid-19 (Wiley, 2022)
Erol, Gizemnur; Uzbaş, Betül; Yücelbaş, Cüneyt; Yücelbaş, Sule
Real-time polymerase chain reaction (RT-PCR), known as the swab test, is a diagnostic test that can diagnose COVID-19 through respiratory samples in the laboratory. Due to the rapid spread of the coronavirus around the world, the RT-PCR test has become insufficient for obtaining fast results. For this reason, the need for diagnostic methods to fill this gap has arisen, and machine learning studies have started in this area. On the other hand, working with medical data is challenging because the data are often inconsistent, incomplete, difficult to scale, and very large. Additionally, poor clinical decisions, irrelevant parameters, and limited medical data adversely affect the accuracy of the studies performed. Therefore, since datasets containing COVID-19 blood parameters are fewer in number than other medical datasets, this study aims to make better use of the existing ones. To obtain more consistent results in COVID-19 machine learning studies, the effect of data preprocessing techniques on the classification of COVID-19 data was investigated. First, categorical feature encoding and feature scaling were applied to a dataset of 15 features containing blood data of 279 patients, including gender and age information. Then, missing values in the dataset were imputed using both the K-nearest neighbor (KNN) algorithm and multiple imputation by chained equations (MICE). Data balancing was performed with the synthetic minority oversampling technique (SMOTE). The effect of these preprocessing techniques on the ensemble learning algorithms bagging, AdaBoost, and random forest, and on the popular classifiers KNN, support vector machine, logistic regression, artificial neural network, and decision tree, was analyzed. The highest accuracies obtained with the bagging classifier when SMOTE was applied were 83.42% and 83.74% with KNN and MICE imputation, respectively. On the other hand, the highest accuracy reached with the same classifier without SMOTE was 83.91% for KNN imputation. In conclusion, certain data preprocessing techniques are examined comparatively, their effect on classification success is presented, and the importance of the right combination of preprocessing steps is demonstrated by experimental studies.
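
As a rough sketch of the preprocessing chain this record describes (KNN imputation, scaling, SMOTE balancing, bagging classification), the snippet below wires the equivalent scikit-learn and imbalanced-learn pieces together. The synthetic data, the 15-feature layout, and all parameter values are illustrative assumptions, not the study's dataset or configuration; it also assumes scikit-learn >= 1.2 for the `estimator` argument of `BaggingClassifier`.

```python
# Illustrative pipeline: KNN imputation of missing values, feature scaling,
# SMOTE balancing, and a bagging classifier. Data and parameters are placeholders.
import numpy as np
from sklearn.impute import KNNImputer
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.ensemble import BaggingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(0)
X = rng.normal(size=(279, 15))           # 279 patients, 15 features (synthetic stand-in)
X[rng.random(X.shape) < 0.1] = np.nan    # introduce some missing values
y = (rng.random(279) < 0.3).astype(int)  # imbalanced labels (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

imputer = KNNImputer(n_neighbors=5)      # KNN imputation of missing entries
scaler = StandardScaler()                # feature scaling
X_tr = scaler.fit_transform(imputer.fit_transform(X_tr))
X_te = scaler.transform(imputer.transform(X_te))

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)  # balance the classes

clf = BaggingClassifier(estimator=KNeighborsClassifier(), n_estimators=50, random_state=0)
clf.fit(X_bal, y_bal)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```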
Conference Object
An Application of Tree Seed Algorithm for Optimization of 50 and 100 Dimensional Numerical Functions (Institute of Electrical and Electronics Engineers Inc., 2021)
Güngör, İmral; Emiroğlu, Bülent Gürsel; Uymaz, S.A.; Kıran, Mustafa Servet
The Tree-Seed Algorithm (TSA) is an optimization algorithm inspired by the natural process in which seeds scattered around trees grow into new trees. In this study, TSA is applied to optimize high-dimensional functions. Previous studies have demonstrated the performance of the algorithm on low-dimensional functions; thus, in addition to the 30-dimensional functions used before, the algorithm has also been applied to 50- and 100-dimensional numerical functions. The improved version of the algorithm is based on the use of multiple solution update mechanisms instead of a single one. In the experiments, the CEC2015 benchmark functions are used, and the developed tree seed algorithm is compared with the basic TSA, artificial bee colony, particle swarm optimization, and some variants of the differential evolution algorithm. Experimental results are reported as the mean, maximum, and minimum solutions and the standard deviation over 30 independent runs. The experiments show that the developed algorithm gives successful results. © 2021 IEEE.
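
For readers unfamiliar with TSA, the sketch below shows the seed-generation step of the basic algorithm under its usual formulation, where the search-tendency parameter ST decides whether a dimension is updated toward the best tree or relative to a random tree. This is a generic illustration of the standard update rules, not the multi-mechanism variant developed in the record above; the population size, bounds, and ST value are placeholders.

```python
# Minimal sketch of the basic Tree-Seed Algorithm seed-update step.
# Assumes the standard TSA formulation; the paper's variant may differ.
import numpy as np

def produce_seed(trees, i, best, st, rng, low, high):
    n, dim = trees.shape
    seed = trees[i].copy()
    r = rng.integers(n)
    while r == i:                       # pick a random tree different from tree i
        r = rng.integers(n)
    for j in range(dim):
        alpha = rng.uniform(-1.0, 1.0)  # scaling factor in [-1, 1]
        if rng.random() < st:           # search tendency chooses the update rule
            seed[j] = trees[i, j] + alpha * (best[j] - trees[r, j])
        else:
            seed[j] = trees[i, j] + alpha * (trees[i, j] - trees[r, j])
    return np.clip(seed, low, high)

# usage: generate one seed for tree 0 of a random 10-tree population in 50 dimensions
rng = np.random.default_rng(1)
trees = rng.uniform(-100, 100, size=(10, 50))
best = trees[0]                          # stand-in for the best tree found so far
print(produce_seed(trees, 0, best, st=0.1, rng=rng, low=-100, high=100)[:5])
```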
Article | Citation - WoS: 3 | Citation - Scopus: 4
Approaches To Automated Land Subdivision Using Binary Search Algorithm in Zoning Applications (Ice Publishing, 2022)
Koç, İsmail; Çay, Tayfun; Babaoğlu, İsmail
The planned development of urban areas depends on zoning applications. Although zoning practices are performed using different techniques, the parcelling operations that shape the future view of the city are the same. Preparing the parcelling plans is an important step that has a direct impact on ownership structure and reallocation. Parcelling operations are traditionally handled manually by a technician, which is a serious problem in terms of time and cost. In this study, taking the zoning legislation into account, a preliminary land subdivision plan has been produced automatically for a region of Konya, one of the major cities in Turkey. The parcelling processes have been performed in three different ways: the first technique is parcelling with edge values, the second is parcelling with area values, and the third is parcelling using both edge and area values together. For the entire parcelling process, the area of each parcel has been calculated using the Gauss method. Moreover, to determine the boundaries effectively and to calculate the parcel area, the binary search technique has been used in all the methods. The experimental results show that the parcelling operations were carried out very quickly and successfully.
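
The record above names two concrete tools, the Gauss (shoelace) area formula and binary search over a parcel boundary; the toy example below shows how the two can be combined to place a straight cut so that a parcel reaches a target area. The rectangular block geometry and the target value are illustrative only and do not reproduce the study's parcelling procedure.

```python
# Gauss (shoelace) polygon area plus a binary search that slides a vertical cut
# across a rectangular block until the left piece reaches a target parcel area.

def gauss_area(poly):
    """Shoelace formula for the area of a simple polygon given as (x, y) vertices."""
    s = 0.0
    n = len(poly)
    for k in range(n):
        x1, y1 = poly[k]
        x2, y2 = poly[(k + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def cut_position_for_area(block, target, tol=1e-6):
    """Binary-search the x-coordinate of a vertical cut so that the left piece
    of an axis-aligned rectangular block has the target area."""
    (x0, y0), (x1, y1) = block          # lower-left and upper-right corners
    lo, hi = x0, x1
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        left = [(x0, y0), (mid, y0), (mid, y1), (x0, y1)]
        if gauss_area(left) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

block = ((0.0, 0.0), (100.0, 40.0))          # 100 m x 40 m block
print(cut_position_for_area(block, 1000.0))  # cut at x = 25 gives a 1000 m^2 parcel
```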
Article | Citation - Scopus: 1
Automatic Sleep Stage Classification for the Obstructive Sleep Apnea (Trans Tech Publications Ltd, 2023)
Özsen, Seral; Koca, Yasin; Tezel, Gülay Tezel; Solak, Fatma Zehra; Vatansev, Hulya; Kucukturk, Serkan
Automatic sleep scoring systems have received much more attention in recent decades. Although a wide variety of studies have addressed this subject, accuracies are still below the level required to apply these methods to real-life data. One can find many high-accuracy studies in the literature using a standard database, but when it comes to real data, reaching such high performance is not straightforward. In this study, five distinct datasets were prepared from 124 persons, including 93 unhealthy and 31 healthy persons. These datasets consist of time-domain, nonlinear, Welch, discrete wavelet transform, and Hilbert-Huang transform features. By applying k-NN, decision tree, ANN, SVM, and bagged tree classifiers to these feature sets in various configurations with feature selection, the highest classification accuracy was sought. The maximum classification accuracy, 95.06%, was obtained with the bagged tree classifier using 14 features out of a total of 136. This accuracy is relatively high compared with the literature for a real-data application.

Article | Citation - WoS: 2
B-Spline Curve Approximation by Utilizing Big Bang-Big Crunch Method (TECH SCIENCE PRESS, 2020)
İnik, Özkan; Ülker, Erkan; Koç, İsmail
The location of knot points and the estimation of the number of knots are known as one of the most difficult problems in B-spline curve approximation. In the literature, different researchers have used various optimization algorithms to solve this problem. In this paper, the Big Bang-Big Crunch method (BB-BC), one of the evolutionary optimization algorithms, is introduced and the approximation of B-spline curve knots is conducted with it. The technique of reverse engineering was implemented for the curve knot approximation. Knot locations and the number of knots were selected randomly in the curve approximation performed with the BB-BC method. The experiments were carried out on seven different test functions for curve approximation. The performance of the BB-BC algorithm was examined on these functions, and the results were compared with earlier studies. In comparison with the other studies, it was observed that although the number of knots in the BB-BC algorithm was high, the algorithm approximated the B-spline curves with a smaller error.
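
For context on the optimizer used in the record above, here is a minimal sketch of one Big Bang-Big Crunch iteration under its commonly cited formulation: the crunch phase collapses the population to a fitness-weighted centre of mass, and the bang phase scatters new candidates around it with a spread that shrinks over iterations. The sphere objective, bounds, and population size are placeholders, and the paper's knot-optimization specifics are not modelled.

```python
# One Big Bang-Big Crunch (BB-BC) step: fitness-weighted centre of mass,
# then new candidates scattered around it with a shrinking spread.
import numpy as np

def bbbc_step(pop, fitness, iteration, low, high, rng):
    # centre of mass: candidates with better (smaller) fitness weigh more
    w = 1.0 / (fitness + 1e-12)
    centre = (pop * w[:, None]).sum(axis=0) / w.sum()
    # spread shrinks as iterations progress
    sigma = (high - low) / (iteration + 1)
    new_pop = centre + rng.standard_normal(pop.shape) * sigma
    new_pop[0] = pop[np.argmin(fitness)]       # simple elitism: keep the best candidate
    return np.clip(new_pop, low, high)

rng = np.random.default_rng(2)
pop = rng.uniform(-5, 5, size=(30, 8))         # 30 candidates, 8 decision variables
sphere = lambda x: (x ** 2).sum(axis=1)        # placeholder objective
for it in range(1, 51):
    pop = bbbc_step(pop, sphere(pop), it, -5.0, 5.0, rng)
print("best value:", sphere(pop).min())
```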
Article | Citation - WoS: 30 | Citation - Scopus: 36
Binary Aquila Optimizer for 0-1 Knapsack Problems (Pergamon-Elsevier Science Ltd, 2023)
Baş, Emine
The optimization process entails determining the best values for various system characteristics in order to finish the system design at the lowest possible cost. In general, real-world applications and issues in artificial intelligence and machine learning are discrete, unconstrained, or constrained. Optimization approaches have a high success rate in tackling such situations, and several sophisticated heuristic algorithms based on swarm intelligence have been presented in recent years. Various researchers in the literature have worked on such algorithms and have effectively addressed many difficulties. Aquila Optimizer (AO) is one such algorithm: a recently proposed, population-based heuristic created by imitating the behavior of the Aquila in nature while catching its prey. The AO algorithm was originally developed to solve continuous optimization problems. In this study, the AO structure is updated to solve binary optimization problems. Problems encountered in the real world do not always have continuous values; many involve discrete values. Therefore, algorithms that solve continuous problems need to be restructured to solve discrete optimization problems as well. Binary optimization problems constitute a subgroup of discrete optimization problems. In this study, a new algorithm for binary optimization problems (BAO) is proposed. The most successful variant, BAO-T, was created by testing BAO with eight different transfer functions; transfer functions play an active role in converting the continuous search space into the binary search space. BAO has also been extended by adding crossover and mutation methods to the candidate solution step (BAO-CM). The success of the proposed BAO-T and BAO-CM algorithms has been tested on the knapsack problem, which is widely used in binary optimization studies in the literature. The knapsack instances are divided into three benchmark groups, and a total of sixty-three low-, medium-, and large-scale knapsack problems were used as test datasets. The performances of the BAO-T and BAO-CM algorithms were examined in detail, and the results are clearly presented with graphics. In addition, the results of BAO-T and BAO-CM have been compared with heuristic algorithms proposed in the literature in recent years, and their success has been demonstrated. According to the results, BAO-CM performed better than BAO-T and can be suggested as an alternative algorithm for solving binary optimization problems.
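
Since transfer functions are central to BAO-T, the sketch below illustrates the general idea: an S-shaped (sigmoid) transfer function turns a continuous position into bit probabilities, and an infeasible knapsack solution is repaired greedily. This follows the generic transfer-function scheme, not the paper's exact BAO-T or BAO-CM operators; the item data and the repair rule are assumptions.

```python
# Mapping a continuous position to a binary 0-1 knapsack solution via an
# S-shaped transfer function, with a greedy repair of overweight solutions.
import numpy as np

def s_shaped(x):
    """S-shaped (sigmoid) transfer function: continuous value -> probability of a 1-bit."""
    return 1.0 / (1.0 + np.exp(-x))

def binarize(position, rng):
    return (rng.random(position.shape) < s_shaped(position)).astype(int)

def knapsack_value(bits, values, weights, capacity):
    """Greedy repair: while overweight, drop the selected item with the worst value/weight ratio."""
    bits = bits.copy()
    while weights[bits == 1].sum() > capacity:
        chosen = np.flatnonzero(bits)
        worst = chosen[np.argmin(values[chosen] / weights[chosen])]
        bits[worst] = 0
    return bits, values[bits == 1].sum()

rng = np.random.default_rng(3)
values = rng.integers(10, 100, size=20).astype(float)
weights = rng.integers(5, 40, size=20).astype(float)
capacity = 0.5 * weights.sum()

position = rng.normal(size=20)            # a continuous position from any optimizer
bits, profit = knapsack_value(binarize(position, rng), values, weights, capacity)
print(bits, profit)
```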
Article | Citation - WoS: 44 | Citation - Scopus: 47
Binary Artificial Algae Algorithm for Feature Selection (Elsevier, 2022)
Türkoğlu, Bahaeddin; Uymaz, Sait Ali; Kaya, Ersin
In this study, binary versions of the Artificial Algae Algorithm (AAA) are presented and employed to determine the ideal attribute subset for classification. AAA is a recently proposed algorithm inspired by the living behavior of microalgae, and it has not yet been consistently applied to feature selection (determining the ideal attribute subset). AAA can effectively search the feature space for the attribute combination that minimizes a designed objective function. The proposed binary versions of AAA are employed to determine the attribute combination that maximizes classification success while minimizing the number of attributes. The original AAA is used in these versions, with its continuous values mapped to binary ones through an appropriate threshold function. To demonstrate the performance of the presented binary artificial algae algorithm, an experimental study was conducted against seven recent high-performance optimization algorithms. Several evaluation metrics are used to evaluate and analyze the performance of these algorithms over twenty-five datasets with different difficulty levels from the UCI Machine Learning Repository. The experimental results and statistical tests verify the performance of the presented algorithms in increasing classification accuracy compared to other state-of-the-art binary algorithms, which confirms the capability of the AAA algorithm in exploring the attribute space and selecting the most valuable features for classification problems. (C) 2022 Elsevier B.V. All rights reserved.

Article | Citation - WoS: 31 | Citation - Scopus: 32
A Binary Artificial Bee Colony Algorithm and Its Performance Assessment (PERGAMON-ELSEVIER SCIENCE LTD, 2021)
Kıran, Mustafa Servet
The artificial bee colony algorithm, ABC for short, is a swarm-based optimization algorithm proposed for solving continuous optimization problems. Due to its simple but effective structure, several binary versions of the algorithm have been developed. In this study, we focus on modifying its XOR-based binary version, called binABC. The solution update rule of the basic ABC is replaced with an XOR logic gate in binABC, so binABC works on a discretely structured solution space; the remaining components are the same as in the basic ABC algorithm. To improve the local search capability and convergence characteristics of binABC, a stigmergic behavior-based update rule for the onlooker bees and an extended version of the XOR-based update rule are proposed in the present study. The developed version of binABC is applied to a modern benchmark problem set (CEC2015), and a series of comparisons are conducted on this set to validate its performance. The proposed algorithm is first compared with the basic ABC and binABC on the CEC2015 set. After this validation, six binary versions of the ABC algorithm are considered for comparison, and a comprehensive comparison among state-of-the-art variants of swarm intelligence and evolutionary computation algorithms is conducted on the same set of functions. Finally, an uncapacitated facility location problem set, a pure binary optimization problem, is used to compare the proposed algorithm with the binary variants of ABC. The experimental results and comparisons show that the proposed algorithm is as successful and effective in solving binary optimization problems as its basic version is in solving continuous optimization problems.
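
The XOR-based update mentioned above can be pictured with a small sketch: a solution is moved toward a randomly chosen neighbour by copying, with some probability, the bits in which the two solutions differ. This follows the general xor-gate idea rather than binABC's exact rule; the flip rate and bit-string length are illustrative.

```python
# XOR-style binary update: flip the bits of x toward a neighbour where they
# differ, gated by a random mask. Illustrates the xor-gate idea described above.
import numpy as np

def xor_update(x, neighbour, rng, rate=0.5):
    """Copy neighbour's bit, with probability `rate`, at each position where x differs from it."""
    diff = np.bitwise_xor(x, neighbour)               # 1 where the two solutions disagree
    mask = (rng.random(x.shape) < rate).astype(int)   # random gate deciding which diffs to copy
    return np.bitwise_xor(x, np.bitwise_and(diff, mask))

rng = np.random.default_rng(4)
x = rng.integers(0, 2, size=16)
neighbour = rng.integers(0, 2, size=16)
print(x, neighbour, xor_update(x, neighbour, rng), sep="\n")
```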
Article | Citation - WoS: 20 | Citation - Scopus: 24
A Binary Social Spider Algorithm for Continuous Optimization Task (SPRINGER, 2020)
Baş, Emine; Ülker, Erkan
The social spider algorithm (SSA) is a new heuristic algorithm based on spider behaviors, originally proposed for solving continuous problems. In this paper, a binary version of SSA (binary SSA) is introduced to solve binary problems; currently, there is little focus on binary versions of SSA in the literature. The main component of the binary version is the transfer function, which is responsible for mapping the continuous search space to a discrete one. In this study, four transfer functions divided into two families, S-shaped and V-shaped, are evaluated, forming four variations of binary SSA: binary SSA-Tanh, binary SSA-Sigm, binary SSA-MSigm, and binary SSA-Arctan. Two further techniques (SimSSA and LogicSSA) are developed in the candidate solution production scheme. SimSSA measures the similarity between two binary solutions; with it, binary SSA's ability to discover new points in the search space is increased, so it can find the global optimum instead of local optima. LogicSSA, inspired by logic gates, a popular approach in recent years, is used to avoid local minima traps. With these two techniques, the exploration and exploitation capabilities of binary SSA in the binary search space are improved. Eighteen unimodal and multimodal standard benchmark optimization functions are employed to evaluate the variations of binary SSA, and a comparative study is presented to select the best ones. The Wilcoxon signed-rank test has been applied to the experimental results. Compared to well-known evolutionary and recently developed methods in the literature, the performance of the binary SSA variations is quite good. In particular, the binary SSA-Tanh and binary SSA-Arctan variations showed superior performance.

Article | Citation - WoS: 32 | Citation - Scopus: 35
A Binary Social Spider Algorithm for Uncapacitated Facility Location Problem (PERGAMON-ELSEVIER SCIENCE LTD, 2020)
Baş, Emine; Ülker, Erkan
In order to find efficient solutions to complex real-world problems, computer science and especially heuristic algorithms are often used. Heuristic algorithms can give optimal solutions for large-scale optimization problems in an acceptable time. The Social Spider Algorithm (SSA), a heuristic algorithm based on spider behaviors, is studied here; the original study proposed it for continuous problems. In this paper, a binary version of the algorithm, called the Binary Social Spider Algorithm (BinSSA), is proposed for binary optimization problems. BinSSA is obtained from SSA by transforming the continuous search space into a binary search space with four transfer functions, creating the variations BinSSA1, BinSSA2, BinSSA3, and BinSSA4. The steps of the original SSA are updated for BinSSA: the random walking scheme in SSA is replaced by a candidate solution scheme. Two new methods (a similarity measure and a logic gate) are used in the candidate solution production scheme to increase the exploration and exploitation capacity of BinSSA; this version is named BinSSA(Sim&Logic), and the contribution of both techniques is examined. Local and global search performance of BinSSA is increased by these two methods. Three different studies are performed with BinSSA. In the first, the performance of BinSSA is tested on the classic eighteen unimodal and multimodal benchmark functions; the best variation is determined as BinSSA4(Sim&Logic), which is then compared with other heuristic algorithms on the CEC2005 and CEC2015 functions. In the second study, uncapacitated facility location problems (UFLPs), which are pure binary optimization problems, are solved with BinSSA(Sim&Logic); BinSSA is tested on fifteen low-, middle-, and large-scale UFLP instances, and the results are compared with eighteen state-of-the-art algorithms. In the third study, UFL problems on a different dataset named M* are solved with BinSSA(Sim&Logic), and the results are compared with the Local Search (LS), Tabu Search (TS), and Improved Scatter Search (ISS) algorithms. The obtained results show that BinSSA offers high-quality and stable solutions. (c) 2020 Elsevier Ltd. All rights reserved.

Article | Citation - WoS: 10 | Citation - Scopus: 11
A Binary Sparrow Search Algorithm for Feature Selection on Classification of X-Ray Security Images (Elsevier Ltd, 2024)
Babalik, A.; Babadag, A.
In today's world, especially in public places, strict security measures are being implemented. Among these measures, the most common is the inspection of the contents of people's belongings, such as purses, knapsacks, and suitcases, through X-ray imaging to detect prohibited items. However, this process is typically performed manually by security personnel. It is an exhausting task that demands continuous attention and concentration, making it prone to errors, and the detection and classification of overlapping and occluded objects can be challenging. Therefore, automating this process can be highly beneficial for reducing errors and improving overall efficiency. In this study, a framework consisting of three fundamental phases for the classification of prohibited objects is proposed. In the first phase, a deep neural network is trained on X-ray images to extract features; a convolutional neural network model is utilized for this purpose. In the second phase, the features that best represent the object are selected; feature selection eliminates redundant features, leading to more efficient use of memory, reduced computational cost, and improved classification accuracy owing to the smaller number of features. Here, the Sparrow Search Algorithm is binarized and proposed as binISSA for feature selection. In the final phase, classification is performed on the selected features using the K-Nearest Neighbors (KNN) and Support Vector Machine (SVM) algorithms. The performance of the convolutional neural network alone and of the proposed framework are compared, and the framework is also compared with other state-of-the-art meta-heuristic algorithms. The proposed method increased the classification accuracy of the network from 0.9702 to 0.9763 using both the KNN and SVM (linear kernel) classifiers. The total number of features extracted by the deep neural network was 512; with the proposed binISSA, the average number of features was reduced to 25.33 with the KNN classifier and 32.70 with the SVM classifier. The results indicate a notable reduction in the number of features and an improvement in classification accuracy. © 2024 Elsevier B.V.
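
Wrapper-style feature selection of the kind used in the two binary feature-selection records above typically minimizes a fitness that trades classification error against the number of selected features. The sketch below shows such a fitness with a KNN classifier; the breast-cancer dataset, the weight alpha, and the KNN settings are illustrative assumptions, not the papers' setups.

```python
# Wrapper fitness for binary feature selection: weighted sum of classification
# error (from KNN cross-validation) and the fraction of selected features.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)

def fitness(mask, alpha=0.99):
    """mask: binary vector, 1 = feature kept. Lower fitness is better."""
    if mask.sum() == 0:
        return 1.0                                    # selecting nothing is worst
    acc = cross_val_score(KNeighborsClassifier(5), X[:, mask == 1], y, cv=5).mean()
    return alpha * (1.0 - acc) + (1.0 - alpha) * mask.sum() / mask.size

rng = np.random.default_rng(5)
mask = rng.integers(0, 2, size=X.shape[1])            # a random candidate subset
print("fitness of a random subset:", fitness(mask))
```

Any binary optimizer (binary AAA, binISSA, and so on) can then be plugged in to minimize this fitness over candidate masks.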
Article | Citation - WoS: 11 | Citation - Scopus: 13
BinGSO: Galactic Swarm Optimization Powered by Binary Artificial Algae Algorithm for Solving Uncapacitated Facility Location Problems (Springer London Ltd, 2022)
Kaya, Ersin
Population-based optimization methods are frequently used in solving real-world problems because they can solve complex problems in a reasonable time and at an acceptable level of accuracy. Many optimization methods in the literature are either used directly or adapted as binary versions to solve binary optimization problems. One of the biggest challenges faced by both binary and continuous optimization methods is the balance between exploration and exploitation; this balance must be well established to reach the optimum solution. At this point, the galactic swarm optimization (GSO) framework, which can use traditional optimization methods internally, stands out. In this study, the binary galactic swarm optimization (BinGSO) approach, which uses the binary artificial algae algorithm as the main search algorithm in GSO, is proposed. The performance of the proposed binary approach is evaluated on uncapacitated facility location problems (UFLPs), which are complex due to their NP-hard structure. The parameter analysis of the BinGSO method was performed using the 15 Cap problems. Then, BinGSO was compared with both traditional binary optimization methods and state-of-the-art methods on the Cap problems. Finally, the performance of BinGSO on the M* problems was examined, and the results were compared with those of state-of-the-art methods. The evaluation showed that BinGSO is more successful than the other methods through its ability to balance exploration and exploitation in UFLPs.

Article | Citation - WoS: 11 | Citation - Scopus: 13
Boosting Galactic Swarm Optimization With ABC (SPRINGER HEIDELBERG, 2019)
Kaya, Ersin; Uymaz, Sait Ali; Koçer, Barış
Galactic swarm optimization (GSO) is a new global metaheuristic optimization algorithm. It manages multiple sub-populations to explore the search space efficiently, and a superswarm is then recruited from the best solutions found. GSO is in fact a framework: the search method in both the sub-populations and the superswarm can be chosen freely. In the original work, particle swarm optimization is used as the search method in both phases. In this work, the performance of state-of-the-art and well-known methods is tested under the GSO framework. Experiments show that the artificial bee colony algorithm performs best under the GSO framework, compared with the other algorithms both within the framework and in their original forms.

Article | Citation - WoS: 21 | Citation - Scopus: 24
Boosting the Oversampling Methods Based on Differential Evolution Strategies for Imbalanced Learning (Elsevier, 2021)
Korkmaz, Sedat; Sahman, Mehmet Akif; Çınar, Ahmet Cevahir; Kaya, Ersin
The class imbalance problem is a challenging problem in the data mining area. To overcome the low classification performance associated with imbalanced datasets, sampling strategies are used to balance them. Oversampling is a technique that increases the minority class samples in various proportions. In this work, 16 different DE strategies are used for oversampling imbalanced datasets to improve classification. The main aim is to determine the best strategy in terms of the Area Under the receiver operating characteristic (ROC) Curve (AUC) and the Geometric Mean (G-Mean). 44 imbalanced datasets are used in the experiments, with Support Vector Machines (SVM), k-Nearest Neighbor (kNN), and Decision Tree (DT) classifiers. The best results are produced by the 6th DEBOHID strategy (DSt6), the 1st strategy (DSt1), and the 3rd strategy (DSt3) with the kNN, DT, and SVM classifiers, respectively. The obtained results outperform 9 state-of-the-art oversampling methods in terms of the AUC and G-Mean metrics. (C) 2021 Elsevier B.V. All rights reserved.
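
To make the DE-based oversampling idea concrete, the sketch below generates synthetic minority samples with a DE/rand/1-style mutation applied only to minority-class instances until the classes are balanced. The scale factor F and the mutation form are illustrative and do not reproduce the sixteen DEBOHID strategies compared in the record above.

```python
# DE-style oversampling sketch: synthetic minority samples are built from
# DE/rand/1-like mutants of existing minority samples.
import numpy as np

def de_oversample(X_min, n_new, f=0.5, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    synth = []
    for _ in range(n_new):
        i, r1, r2 = rng.choice(len(X_min), size=3, replace=False)
        synth.append(X_min[i] + f * (X_min[r1] - X_min[r2]))  # DE/rand/1-style mutant
    return np.vstack(synth)

rng = np.random.default_rng(6)
X_min = rng.normal(loc=2.0, size=(20, 4))      # 20 minority samples, 4 features
X_maj = rng.normal(loc=0.0, size=(80, 4))      # 80 majority samples
X_new = de_oversample(X_min, len(X_maj) - len(X_min), rng=rng)
print("balanced minority size:", len(X_min) + len(X_new))
```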
Article | Citation - WoS: 24 | Citation - Scopus: 34
Boundary Constrained Voxel Segmentation for 3D Point Clouds Using Local Geometric Differences (PERGAMON-ELSEVIER SCIENCE LTD, 2020)
Sağlam, Ali; Makineci, Hasan Bilgehan; Baykan, Nurdan Akhan; Baykan, Ömer Kaan
In 3D point cloud processing, the spatial continuity of points is convenient for segmenting point clouds obtained by 3D laser scanners, RGB-D cameras, and LiDAR (light detection and ranging) systems. In real life, the surface features of objects and structures give meaningful information that enables them to be identified and distinguished. Segmenting points by using their local plane directions (normals), estimated from point neighborhoods, is a widely used method in the literature: the angle difference between two nearby local normals measures the continuity between the two planes. However, real surfaces are not simply planes; they can also take other forms, such as cylinders, smooth transitions, and spheres. The voxel-based method developed in this paper addresses this by inspecting only the local curvatures with a new merging criterion and using a non-sequential region growing approach. The prominent feature of the proposed method is that it pairs, one-to-one, all of the adjoining boundary voxels between two adjacent segments to examine the curvatures of all pairwise connections. The method uses only one parameter apart from the unit point group size (voxel size), and it does not use a mid-level over-segmentation process such as supervoxelization. It checks the local surface curvatures using unit normals close to the boundary between two growing adjacent segments. Another contribution of this paper is a set of effective solutions for noise units that do not have surface features. The method has been applied to one indoor and four outdoor datasets, and visual and quantitative segmentation results are presented. As quantitative measurements, the accuracy (the number of correctly segmented points over all points) and the F1 score (based on the precision and recall of the reference segments) are used. The results on the five datasets show that, according to both measures, the proposed method is the fastest and achieves the best mean scores among the methods tested. (C) 2020 Elsevier Ltd. All rights reserved.
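
The normal-angle test that this kind of segmentation builds on can be sketched briefly: estimate the unit normal of each local point group by PCA and merge two adjacent groups only if the angle between their normals is small. The synthetic planes and the 10-degree threshold below are illustrative; the paper's boundary-voxel pairing and curvature criterion are not reproduced.

```python
# PCA normal estimation for two local point groups and an angle-difference
# merge test, the basic building block of normal-based segmentation.
import numpy as np

def unit_normal(points):
    """Normal of a local point group = direction of least variance (classic PCA normal)."""
    centred = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return vt[-1]

def should_merge(points_a, points_b, max_angle_deg=10.0):
    na, nb = unit_normal(points_a), unit_normal(points_b)
    cos = abs(np.dot(na, nb))                        # abs() ignores normal orientation
    angle = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    return angle <= max_angle_deg

rng = np.random.default_rng(7)
plane_a = np.c_[rng.random((50, 2)), 0.01 * rng.random(50)]        # roughly z = 0
plane_b = np.c_[rng.random((50, 2)), 1.0 + 0.01 * rng.random(50)]  # parallel plane
print(should_merge(plane_a, plane_b))  # True: the test only compares orientations
```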
Book Part | Citation - Scopus: 15
Chaos Theory in Metaheuristics (Elsevier, 2023)
Türkoğlu, B.; Uymaz, S.A.; Kaya, E.
Metaheuristic optimization is the technique of finding the most suitable solution among the possible solutions for a particular problem. We encounter many such problems in the real world, including timetabling, path planning, packing, the traveling salesman problem, trajectory optimization, and engineering design. The two main problems faced by all metaheuristic algorithms are getting stuck in local optima and premature convergence. To overcome these problems and achieve better performance, chaos theory is incorporated into metaheuristic optimization. Chaotic maps are employed to balance exploration and exploitation efficiently and to improve the performance of algorithms in terms of both local optima avoidance and convergence speed. The literature shows that chaotic maps can significantly boost the performance of metaheuristic optimization algorithms. In this chapter, chaos theory and chaotic maps are briefly explained, the use of chaotic maps in metaheuristics is presented, and an enhanced version of GSA with chaotic maps is shown as an application. © 2023 Elsevier Inc. All rights reserved.
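
A minimal example of the chapter's theme: a chaotic sequence from the logistic map replaces a uniform random number when scaling the step size of a search, which is one common way chaotic maps are embedded in metaheuristics. The hill-climbing host algorithm, bounds, and map parameters below are placeholders, not the GSA application discussed in the chapter.

```python
# Logistic-map chaotic sequence driving the step size of a simple greedy search.
import numpy as np

def logistic_map(x0=0.7, mu=4.0, n=1000):
    """Generate n values of the logistic map x_{k+1} = mu * x_k * (1 - x_k)."""
    seq = np.empty(n)
    x = x0
    for k in range(n):
        x = mu * x * (1.0 - x)
        seq[k] = x
    return seq

chaos = logistic_map()
rng = np.random.default_rng(8)
sphere = lambda x: float((x ** 2).sum())        # placeholder objective
best = rng.uniform(-5, 5, size=10)
for c in chaos:
    step = c * rng.normal(size=10)              # chaotic value scales the move
    candidate = np.clip(best + step, -5, 5)
    if sphere(candidate) < sphere(best):        # greedy acceptance
        best = candidate
print("best value found:", sphere(best))
```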
Article | Citation - WoS: 24 | Citation - Scopus: 30
Classification of Physiological Disorders in Apples Fruit Using a Hybrid Model Based on Convolutional Neural Network and Machine Learning Methods (Springer London Ltd, 2022)
Büyükarıkan, Birkan; Ülker, Erkan
Physiological disorders in apples are due to post-harvest conditions, so automatic identification of physiological disorders is important for obtaining agricultural information. Image processing is one of the techniques that can help extract the features of physiological disorders. However, during image acquisition the images can be affected by changes in brightness caused by different lighting conditions, which changes the classification results. In recent years, the convolutional neural network (CNN) has been a successful approach for automatically obtaining deep features from raw images in image classification problems. This study aims to classify physiological disorders using machine learning (ML) methods based on deep features extracted from images acquired under different lighting conditions. The datasets were created from acquired images (1080 images) and augmented images (4320 images). Deep features were extracted using five popular pre-trained CNN models, and these features were classified using five ML methods. The highest average accuracies were obtained with the VGG19(fc6) + SVM method on dataset-1 and dataset-2, at 96.11% and 96.09%, respectively. With this study, physiological disorders can be determined early, and the necessary precautions can be taken before and after harvest.

Article | Citation - WoS: 3 | Citation - Scopus: 3
Classification of Physiological Disorders in Apples Using Deep Convolutional Neural Network Under Different Lighting Conditions (Springer, 2023)
Büyükarıkan, Birkan; Ülker, Erkan
Non-destructive testing of apples, an important product in the world fresh fruit trade, for physiological disorders can be done with a computer vision system. However, in a vision system, images may be affected by the brightness values created by different lighting conditions, so algorithms that detect physiological disorders accurately and quickly are a necessity. By using a convolutional neural network (CNN), which enables easy extraction of features from images, determining physiological disorders becomes easier. This study aims to classify images of apples with physiological disorders obtained under different lighting conditions using CNN models. A dataset was created with images of different light colors, angles, and distances, including physiological disorder images. A 5-fold cross-validation method was applied to improve the generalization ability of the models, and the CNN models were trained end-to-end. In addition, the Friedman hypothesis test and the post-hoc Nemenyi test were performed to compare the evaluation indicators of the different CNN models. The average accuracy, precision, recall, and F1-score of the Xception model were 0.996, 0.994, 0.996, and 0.998, respectively. In classification accuracy, this model is followed by ResNet101, MobileNet, ResNet152, ResNet18, ResNet34, ResNet50, EfficientNetB0, AlexNet, VGG16, and VGG19. Finally, Xception performed well according to the Friedman/Nemenyi test results.

Article | Citation - WoS: 25 | Citation - Scopus: 31
Clustering Analysis Through Artificial Algae Algorithm (Springer Heidelberg, 2022)
Türkoğlu, Bahaeddin; Uymaz, Sait Ali; Kaya, Ersin
Clustering analysis is widely used in many areas such as document grouping, image recognition, web search, business intelligence, bioinformatics, and medicine. Many algorithms with different clustering approaches have been proposed in the literature. Because they are easy and straightforward, partitioning methods such as K-means and K-medoids are the most commonly used algorithms. However, these are greedy methods that gradually improve clustering quality, are highly dependent on initial parameters, and can get stuck in local optima. For this reason, heuristic optimization methods have also been used for clustering in recent years; these methods can provide successful results because they have mechanisms to escape local optima. In this study, for the first time, the Artificial Algae Algorithm (AAA) is used for clustering and compared with ten well-known bio-inspired metaheuristic clustering approaches. The efficiency of the proposed AAA clustering is evaluated using statistical analysis, convergence rate analysis, Wilcoxon's test, and rankings over different cluster evaluation measures on 25 well-known public datasets with different difficulty levels (features and instances). The results demonstrate that the AAA clustering method provides more accurate solutions, with a higher convergence rate, than the other heuristic clustering techniques.

Article | Citation - WoS: 8 | Citation - Scopus: 9
A Comparative Study of Swarm Intelligence and Evolutionary Algorithms on Urban Land Readjustment Problem (ELSEVIER, 2021)
Koç, İsmail; Babaoğlu, İsmail
Land readjustment and redistribution (LR) is a land management tool that supports regular urban development with the contribution of landowners. The main purpose of LR is to transform irregularly developed land parcels into suitable forms. Since many criteria must be handled simultaneously to solve LR problems, classical mathematical methods can be insufficient due to time limitations. Because LR problems are similar in structure to traveling salesman and typical scheduling problems, they are a kind of NP-hard combinatorial optimization problem, and metaheuristic algorithms are therefore used instead of classical methods. In this study, an effective problem-specific objective function is first proposed to address the main criteria of the problem. In addition, a map-based crossover operator and three different mutation operators are proposed for LR, and a hybrid approach is implemented by using those operators together. Furthermore, since the optimal value of the real-world problem cannot be estimated exactly, a synthetic dataset is proposed as a benchmarking set for LR so that the success of algorithms can be objectively evaluated. This dataset consists of 5 problems according to the number of parcels (20, 40, 60, 80, and 100), each with 4 sub-problems in terms of the number of landowners per parcel (1, 2, 3, and 4), giving 20 kinds of problems in total. Artificial bee colony, particle swarm optimization, differential evolution, genetic, and tree-seed algorithms are run under equal conditions on the proposed synthetic dataset. When the experimental results are examined, the genetic algorithm is the most effective algorithm in terms of both speed and performance. Although the artificial bee colony obtains better results than the genetic algorithm in a few problems, it is the second most successful algorithm in terms of performance, and in terms of time it is nearly as successful as the genetic algorithm. The results of the differential evolution, particle swarm optimization, and tree-seed algorithms are similar to each other in terms of solution quality. In conclusion, the statistical tests clearly show that the genetic algorithm is the most effective technique for solving LR problems in terms of speed, performance, and robustness. (C) 2020 Elsevier B.V. All rights reserved.

