Bilgisayar ve Bilişim Fakültesi Koleksiyonu
Permanent URI for this collection: https://hdl.handle.net/20.500.13091/10834
Browsing Bilgisayar ve Bilişim Fakültesi Koleksiyonu by WoS Q "Q1"
Article | Citation - WoS: 29 | Citation - Scopus: 54
AlexNet Architecture Variations With Transfer Learning for Classification of Wound Images (Elsevier B.V., 2023) Eldem, H.; Ülker, E.; Işıklı, O.Y.
In the medical world, wound care and follow-up is an issue of growing importance. Accurate and early recognition of wounds can reduce treatment costs. In the field of computer vision, deep learning architectures have recently received great attention. The achievements of existing pre-trained architectures in classifying many real-world image sets are well established. However, to increase the success of these architectures in a particular domain, improvements can be made to the architecture. In this paper, the classification of pressure and diabetic wound images was performed with high accuracy. Six new AlexNet architecture variations (3Conv_Softmax, 3Conv_SVM, 4Conv_Softmax, 4Conv_SVM, 6Conv_Softmax, 6Conv_SVM) were created with different numbers of Convolution, Pooling, and Rectified Linear Unit (ReLU) layers. Classification performances of the proposed models are investigated using the Softmax and SVM classifiers separately. A new original Wound Image Database was created for performance measurement. According to the experimental results obtained on this database, the model with 6 Convolution layers (6Conv_SVM) was the most successful of the proposed methods, with 98.85% accuracy, 98.86% sensitivity, and 99.42% specificity. The 6Conv_SVM model was also tested on diabetic and pressure wound images in the public Medetec dataset, where 95.33% accuracy, 95.33% sensitivity, and 97.66% specificity were obtained. The proposed method provides high performance compared to the pre-trained AlexNet architecture and other state-of-the-art models in the literature.
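The accuracy, sensitivity, and specificity figures reported throughout this collection come from binary confusion-matrix counts; a minimal sketch (the counts below are illustrative, not the paper's data):

```python
def binary_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity (recall), and specificity from confusion counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    return accuracy, sensitivity, specificity

# Hypothetical counts for illustration only:
acc, sens, spec = binary_metrics(tp=86, tn=86, fp=1, fn=1)
```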
The results showed that the proposed 6Conv_SVM architecture can be used by the relevant departments in the medical world, with good performance in tasks such as examining and classifying wound images and following up the wound process. © 2023 Karabuk University

Article | Citation - WoS: 30 | Citation - Scopus: 36
Binary Aquila Optimizer for 0-1 Knapsack Problems (Pergamon-Elsevier Science Ltd, 2023) Baş, Emine
The optimization process entails determining the best values for various system characteristics in order to complete the system design at the lowest possible cost. Real-world applications and problems in artificial intelligence and machine learning are often discrete, constrained, or unconstrained. Optimization approaches have a high success rate in tackling such situations, and as a result, several sophisticated heuristic algorithms based on swarm intelligence have been presented in recent years. Various researchers have worked on such algorithms and have effectively addressed many difficulties. Aquila Optimizer (AO) is one such algorithm: a recently proposed population-based heuristic created by imitating the behavior of the Aquila in nature as it catches its prey. The AO algorithm was developed to solve continuous optimization problems in their original form. In this study, the AO structure has been updated to solve binary optimization problems. Problems encountered in the real world do not always have continuous values; many involve discrete values. Therefore, algorithms that solve continuous problems need to be restructured to solve discrete optimization problems as well. Binary optimization problems constitute a subgroup of discrete optimization problems. In this study, a new algorithm for binary optimization problems (BAO) is proposed.
The most successful variant, BAO-T, was created by testing BAO with eight different transfer functions. Transfer functions play an active role in converting the continuous search space to the binary search space. BAO was also extended by adding crossover and mutation methods to the candidate solution step (BAO-CM). The success of the proposed BAO-T and BAO-CM algorithms has been tested on the knapsack problem, which is widely used as a binary optimization benchmark in the literature. The knapsack instances are divided into three benchmark groups in this study: a total of sixty-three low-, medium-, and large-scale knapsack problems were used as test datasets. The performances of the BAO-T and BAO-CM algorithms were examined in detail and the results are clearly shown with graphics. In addition, the results of BAO-T and BAO-CM have been compared with recent heuristic algorithms proposed in the literature, and their success has been demonstrated. According to the results, BAO-CM performed better than BAO-T and can be suggested as an alternative algorithm for solving binary optimization problems.

Article | Citation - WoS: 44 | Citation - Scopus: 47
Binary Artificial Algae Algorithm for Feature Selection (Elsevier, 2022) Türkoğlu, Bahaeddin; Uymaz, Sait Ali; Kaya, Ersin
In this study, binary versions of the Artificial Algae Algorithm (AAA) are presented and employed to determine the ideal attribute subset for classification. AAA is a recently proposed algorithm inspired by the living behavior of microalgae, which had not yet been applied to feature selection. AAA can effectively search the feature space for the ideal attribute combination, minimizing a designed objective function. The proposed binary versions of AAA are employed to find the attribute combination that maximizes classification success while minimizing the number of attributes.
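Both the transfer functions tested for BAO-T and the binary AAA rely on mapping a continuous position to bits; a minimal sketch with the common S-shaped (sigmoid) function (the specific function family is an assumption — these papers test several S- and V-shaped variants):

```python
import math
import random

def sigmoid_transfer(x):
    """S-shaped transfer function: squashes a continuous value into [0, 1]."""
    return 1.0 / (1.0 + math.exp(-x))

def binarize(position, rng=None):
    """Set each bit to 1 with probability given by the transfer function."""
    rng = rng or random.Random()
    return [1 if rng.random() < sigmoid_transfer(x) else 0 for x in position]
```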
The original AAA is retained in these versions, while its continuous values are mapped through an appropriate threshold function and restricted to binary values. To demonstrate the performance of the presented binary Artificial Algae Algorithm, an experimental study was conducted against seven recent high-performance optimization algorithms. Several evaluation metrics are used to evaluate and analyze the performance of these algorithms over twenty-five datasets of varying difficulty from the UCI Machine Learning Repository. The experimental results and statistical tests verify the performance of the presented algorithms in increasing classification accuracy compared to other state-of-the-art binary algorithms, confirming the capability of AAA in exploring the attribute space and selecting the most valuable features for classification problems. (C) 2022 Elsevier B.V. All rights reserved.

Article | Citation - WoS: 31 | Citation - Scopus: 32
A Binary Artificial Bee Colony Algorithm and Its Performance Assessment (Pergamon-Elsevier Science Ltd, 2021) Kıran, Mustafa Servet
The artificial bee colony algorithm, ABC for short, is a swarm-based optimization algorithm proposed for solving continuous optimization problems. Due to its simple but effective structure, several binary versions of the algorithm have been developed. In this study, we focus on a modification of its XOR-based binary version, called binABC. In binABC, the solution update rule of the basic ABC is replaced with an XOR logic gate, and binABC works on a discretely structured solution space; the remaining components are the same as in the basic ABC algorithm. To improve the local search capability and convergence characteristics of binABC, a stigmergic behavior-based update rule for the onlooker bees and an extended version of the XOR-based update rule are proposed in the present study.
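The XOR-gate update at the heart of binABC can be sketched as follows (a simplified illustration, not the paper's exact onlooker rule; `phi` is an assumed mixing probability):

```python
import random

def xor_update(current, neighbor, phi=0.5, rng=None):
    """Binary solution update in the spirit of binABC: each bit of the
    current solution is XORed with the corresponding neighbor bit with
    probability phi, and kept unchanged otherwise."""
    rng = rng or random.Random()
    return [b ^ n if rng.random() < phi else b
            for b, n in zip(current, neighbor)]
```

With phi = 0 the solution is unchanged; with phi = 1 every bit is XORed with the neighbor.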
The developed version of binABC is applied to a modern benchmark problem set (CEC2015). To validate the performance of the proposed algorithm, a series of comparisons is conducted on this problem set. The proposed algorithm is first compared with the basic ABC and binABC on the CEC2015 set. After this validation, six binary versions of the ABC algorithm are considered, and a comprehensive comparison with state-of-the-art swarm intelligence and evolutionary computation algorithms is conducted on the same set of functions. Finally, an uncapacitated facility location problem set, a pure binary optimization problem, is used to compare the proposed algorithm with the binary variants of ABC. The experimental results and comparisons show that the proposed algorithm is as successful and effective in solving binary optimization problems as its basic version is in solving continuous optimization problems.

Article | Citation - WoS: 32 | Citation - Scopus: 35
A Binary Social Spider Algorithm for Uncapacitated Facility Location Problem (Pergamon-Elsevier Science Ltd, 2020) Baş, Emine; Ülker, Erkan
Computer science, and heuristic algorithms in particular, are often used to find efficient solutions to complex real-world problems. Heuristic algorithms can give near-optimal solutions for large-scale optimization problems in an acceptable time. The Social Spider Algorithm (SSA), a heuristic algorithm based on spider behaviors, is studied here. The original algorithm was proposed to solve continuous problems. In this paper, a binary version of the Social Spider Algorithm, called the Binary Social Spider Algorithm (BinSSA), is proposed for binary optimization problems. BinSSA is obtained from SSA by transforming the continuous search space into a binary search space with four transfer functions. Thus, four variations are created: BinSSA1, BinSSA2, BinSSA3, and BinSSA4.
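The uncapacitated facility location problem used as a pure binary benchmark in these studies evaluates a 0/1 open-facility vector as follows (a standard-formulation sketch; the instance numbers are made up):

```python
def uflp_cost(open_bits, fixed_costs, service_costs):
    """UFLP objective: fixed cost of every open facility plus, for each
    customer, the cheapest service cost among the open facilities.
    service_costs[j][i] = cost of serving customer j from facility i."""
    opened = [i for i, bit in enumerate(open_bits) if bit]
    if not opened:
        return float("inf")  # no facility open: infeasible
    total = sum(fixed_costs[i] for i in opened)
    total += sum(min(row[i] for i in opened) for row in service_costs)
    return total

# Tiny made-up instance: 2 facilities, 2 customers.
cost = uflp_cost([0, 1], fixed_costs=[4, 3], service_costs=[[2, 5], [6, 1]])
```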
The steps of the original SSA are updated for BinSSA. The random walking scheme in SSA is replaced by a candidate solution scheme in BinSSA. Two new methods (a similarity measure and a logic gate) are used in the candidate solution production scheme to increase the exploration and exploitation capacity of BinSSA; the resulting variant is named BinSSA(Sim&Logic). The performance of both techniques on BinSSA is examined; local and global search performance is improved by these two methods. Three studies are performed with BinSSA. In the first, the performance of BinSSA is tested on eighteen classic unimodal and multimodal benchmark functions; the best variation is determined to be BinSSA4(Sim&Logic), which is then compared with other heuristic algorithms on the CEC2005 and CEC2015 functions. In the second study, uncapacitated facility location problems (UFLPs), which are pure binary optimization problems, are solved with BinSSA(Sim&Logic). BinSSA is tested on fifteen low-, middle-, and large-scale UFLP instances and the results are compared with eighteen state-of-the-art algorithms. In the third study, UFL problems on a different dataset named M* are solved with BinSSA(Sim&Logic), and the results are compared with the Local Search (LS), Tabu Search (TS), and Improved Scatter Search (ISS) algorithms. The results show that BinSSA offers high-quality and stable solutions. (c) 2020 Elsevier Ltd. All rights reserved.

Article | Citation - WoS: 10 | Citation - Scopus: 11
A Binary Sparrow Search Algorithm for Feature Selection on Classification of X-Ray Security Images (Elsevier Ltd, 2024) Babalik, A.; Babadag, A.
In today's world, especially in public places, strict security measures are being implemented.
Among these measures, the most common is the inspection of the contents of people's belongings, such as purses, knapsacks, and suitcases, through X-ray imaging to detect prohibited items. However, this process is typically performed manually by security personnel. It is an exhausting task that demands continuous attention and concentration, making it prone to errors. Additionally, the detection and classification of overlapping and occluded objects can be challenging. Therefore, automating this process can be highly beneficial for reducing errors and improving overall efficiency. In this study, a three-phase framework for the classification of prohibited objects was proposed. In the first phase, a convolutional neural network was trained on X-ray images to extract features. In the second phase, the features that best represent the object were selected; feature selection eliminates redundant features, leading to efficient use of memory, reduced computational cost, and improved classification accuracy owing to the decrease in the number of features. For this phase, the Sparrow Search Algorithm was binarized and proposed as binISSA. In the final phase, classification was performed on the selected features using the K-Nearest Neighbors (KNN) and Support Vector Machine (SVM) algorithms. The performance of the convolutional neural network alone and of the proposed framework were compared, and the framework was also compared with other state-of-the-art meta-heuristic algorithms. The proposed method increased the classification accuracy of the network from 0.9702 to 0.9763 using both the KNN and SVM (linear kernel) classifiers.
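Wrapper-style feature selection such as binISSA typically minimizes a fitness that trades classification error against the fraction of features kept; a common formulation (the weighting scheme is an assumption, not necessarily the paper's exact objective):

```python
def fs_fitness(error_rate, n_selected, n_total, alpha=0.99):
    """Weighted wrapper fitness: mostly classification error, with a small
    penalty for keeping more features. Lower is better."""
    return alpha * error_rate + (1.0 - alpha) * (n_selected / n_total)
```

For example, a subset with 0.1 error using 256 of 512 features scores 0.99 * 0.1 + 0.01 * 0.5.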
The total number of features extracted by the deep neural network was 512. With the proposed binISSA, the average number of selected features was reduced to 25.33 with the KNN classifier and 32.70 with the SVM classifier. The results indicate a notable reduction in the features extracted from the convolutional neural network and an improvement in classification accuracy. © 2024 Elsevier B.V.

Article | Citation - WoS: 21 | Citation - Scopus: 24
Boosting the Oversampling Methods Based on Differential Evolution Strategies for Imbalanced Learning (Elsevier, 2021) Korkmaz, Sedat; Sahman, Mehmet Akif; Çınar, Ahmet Cevahir; Kaya, Ersin
The class imbalance problem is a challenging problem in data mining. To overcome the low classification performance on imbalanced datasets, sampling strategies are used to balance them. Oversampling is a technique that increases the minority class samples in various proportions. In this work, 16 different DE strategies are used for oversampling imbalanced datasets for better classification. The main aim is to determine the best strategy in terms of the Area Under the receiver operating characteristic (ROC) Curve (AUC) and Geometric Mean (G-Mean) metrics. 44 imbalanced datasets are used in the experiments, with Support Vector Machines (SVM), k-Nearest Neighbor (kNN), and Decision Tree (DT) as classifiers. The best results are produced by the 6th DEBOHID strategy (DSt6), the 1st DEBOHID strategy (DSt1), and the 3rd DEBOHID strategy (DSt3) with the kNN, DT, and SVM classifiers, respectively. The obtained results outperform 9 state-of-the-art oversampling methods in terms of the AUC and G-Mean metrics. (C) 2021 Elsevier B.V.
All rights reserved.

Article | Citation - WoS: 24 | Citation - Scopus: 34
Boundary Constrained Voxel Segmentation for 3D Point Clouds Using Local Geometric Differences (Pergamon-Elsevier Science Ltd, 2020) Sağlam, Ali; Makineci, Hasan Bilgehan; Baykan, Nurdan Akhan; Baykan, Ömer Kaan
In 3D point cloud processing, the spatial continuity of points is convenient for segmenting point clouds obtained by 3D laser scanners, RGB-D cameras, and LiDAR (light detection and ranging) systems. In real life, the surface features of both objects and structures carry meaningful information that enables them to be identified and distinguished. Segmenting points using their local plane directions (normals), estimated from point neighborhoods, is a widely used method in the literature: the angle between two nearby local normals measures the continuity between the two planes. However, real surfaces are not simply planes; they also occur in other forms, such as cylinders, smooth transitions, and spheres. The voxel-based method developed in this paper addresses this by inspecting only the local curvatures with a new merging criterion and a non-sequential region growing approach. The prominent feature of the proposed method is that it pairs, one-to-one, all of the adjoining boundary voxels between two adjacent segments in order to examine the curvatures of all pairwise connections. The method uses only one parameter besides the unit point group (voxel) size, and it does not use a mid-level over-segmentation process such as supervoxelization. It checks the local surface curvatures using unit normals close to the boundary between two growing adjacent segments. Another contribution of this paper is a set of effective solutions for noise units that lack surface features.
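The normal-angle continuity test that underlies this kind of region growing can be sketched as follows (the 10-degree threshold is an illustrative assumption, not the paper's parameter):

```python
import math

def normal_angle(n1, n2):
    """Angle in radians between two unit surface normals; small angles
    indicate a smooth continuation between adjacent local planes."""
    dot = sum(a * b for a, b in zip(n1, n2))
    dot = max(-1.0, min(1.0, dot))  # clamp against rounding error
    return math.acos(dot)

def same_surface(n1, n2, max_angle=math.radians(10.0)):
    """Merge criterion sketch: adjacent units belong to one segment when
    their normals differ by less than the threshold."""
    return normal_angle(n1, n2) <= max_angle
```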
The method has been applied to one indoor and four outdoor datasets, and visual and quantitative segmentation results are presented. As quantitative measures, the accuracy (the fraction of correctly segmented points over all points) and the F1 score (based on the precision and recall of the reference segments) are used. The results over the five datasets show that, by both measures, the proposed method is the fastest and achieves the best mean scores among the methods tested. (C) 2020 Elsevier Ltd. All rights reserved.

Article | Citation - WoS: 17 | Citation - Scopus: 15
Chaotic Golden Ratio Guided Local Search for Big Data Optimization (Elsevier - Division Reed Elsevier India Pvt Ltd, 2023) Koçer, Havva Gül; Türkoğlu, Bahaeddin; Uymaz, Sait Ali
Biological systems in which order arises from disorder inspire many metaheuristic optimization techniques. Self-organization and evolution are behaviors common to chaos and optimization algorithms. Chaos can be defined as an ordered state of disorder that is hypersensitive to initial conditions; it can therefore help create order out of disorder. In this work, the Golden Ratio Guided Local Search method is improved with inspiration from chaos and named Chaotic Golden Ratio Guided Local Search (CGRGLS). Chaos is used as a random number generator in the proposed method: the coefficient in the adaptive step-size equation is derived from the Singer chaotic map. The proposed method was evaluated by using CGRGLS as the local search component of the MLSHADE-SPA algorithm. The experiments were carried out on the electroencephalographic signal decomposition based optimization problems, known as the Big Data optimization problem (Big-Opt), introduced at the Congress on Evolutionary Computation (CEC 2015) Big Data Competition.
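The Singer chaotic map mentioned above iterates a quartic polynomial on [0, 1]; a minimal sketch of using it as a pseudo-random stream (mu = 1.07 is a typical choice from the chaotic-map literature, not necessarily the paper's setting):

```python
def singer_map(x, mu=1.07):
    """One iteration of the Singer chaotic map on [0, 1]."""
    return mu * (7.86 * x - 23.31 * x ** 2 + 28.75 * x ** 3 - 13.302875 * x ** 4)

def chaotic_sequence(x0=0.5, n=5, mu=1.07):
    """Generate n chaotic values, usable in place of uniform random numbers."""
    values, x = [], x0
    for _ in range(n):
        x = singer_map(x, mu)
        values.append(x)
    return values
```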
Experimental results show that the local search method developed using chaotic maps improves the performance of the algorithm. © 2023 Karabuk University. Publishing services by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

Article | Citation - WoS: 8 | Citation - Scopus: 9
A Comparative Study of Swarm Intelligence and Evolutionary Algorithms on Urban Land Readjustment Problem (Elsevier, 2021) Koç, İsmail; Babaoğlu, İsmail
Land readjustment and redistribution (LR) is a land management tool that supports regular urban development with the contribution of landowners. The main purpose of LR is to transform irregularly developed land parcels into suitable forms. Since many criteria must be handled simultaneously to solve LR problems, classical mathematical methods can be insufficient due to time limitations. LR problems are structurally similar to traveling salesman and typical scheduling problems, and are thus NP-hard combinatorial optimization problems, so metaheuristic algorithms are used instead of classical methods. In this study, an effective problem-specific objective function is first proposed to address the main criteria of the problem. In addition, a map-based crossover operator and three different mutation operators are proposed for LR, and a hybrid approach is implemented by using those operators together. Furthermore, since the optimal value of the real-world problem cannot be exactly estimated, a synthetic dataset is proposed as an LR benchmark, so that the success of algorithms can be objectively evaluated. The dataset consists of 5 problems with 20, 40, 60, 80 and 100 parcels, each with 4 sub-problems of 1, 2, 3 and 4 landowners per parcel.
The dataset therefore contains 20 kinds of problems. In this study, the artificial bee colony, particle swarm optimization, differential evolution, genetic, and tree seed algorithms are used. In the experimental studies, the five algorithms are run under equal conditions on the proposed synthetic dataset. The results show the genetic algorithm to be the most effective in terms of both speed and performance. Although the artificial bee colony obtains better results than the genetic algorithm on a few problems, it is the second most successful algorithm in terms of performance, and in terms of time it is nearly as successful as the genetic algorithm. The results of the differential evolution, particle swarm optimization, and tree seed algorithms are similar to each other in solution quality. In conclusion, the statistical tests clearly show that the genetic algorithm is the most effective technique for solving LR problems in terms of speed, performance, and robustness. (C) 2020 Elsevier B.V. All rights reserved.

Article | Citation - WoS: 1 | Citation - Scopus: 8
Comparison Between SSA and SSO Algorithms Inspired by the Behavior of the Social Spider for Constrained Optimization (Springer, 2021) Baş, Emine; Ülker, Erkan
Heuristic algorithms are often used to find solutions to complex real-world problems. In this paper, the Social Spider Algorithm (SSA) and Social Spider Optimization (SSO), heuristic algorithms built on spider behaviors, are considered. The performances of the two algorithms are compared in terms of six criteria.
These are: the fitness values of the spider population obtained in different dimensions; the number of candidate solutions obtained in each iteration; the best, worst, and average fitness values of the candidate solutions obtained in each iteration; and the running time of each iteration. The results of SSA and SSO are subjected to the Wilcoxon signed-rank test. Various unimodal, multimodal, and hybrid standard benchmark functions are studied to compare the performance of SSO and SSA, and on these functions the two algorithms are also compared with well-known evolutionary methods and recently developed methods from the literature. The results show that each algorithm has advantages over the other in different respects, and both perform well relative to the other algorithms.

Article | Citation - WoS: 20 | Citation - Scopus: 18
Comparison of Different Optimization Based Land Reallocation Models (Elsevier Sci Ltd, 2020) Uyan, Mevlüt; Tongur, Vahit; Ertunç, Ela
Land reallocation, an optimization problem in engineering, is the process of reallocating parcels to pre-determined blocks according to the preferences of landowners. In practice, this is done manually and takes weeks or even months. The length of this process affects both the cost of the project and its acceptability to the landowners, and thus the success of the project, because the success of land consolidation projects is determined by the satisfaction of the landowners. For these reasons, optimization-based land reallocation studies have been carried out extensively in recent years. However, these methods are not used in practice, and reallocation is still done manually.
Therefore, for the first time in this study, two new reallocation models were developed to solve this problem using the Migrating Birds Optimization and Simulated Annealing algorithms, and the results of these methods were compared in a real project area. Additionally, the results were compared with the conventional (manual) reallocation method to evaluate the performance of the developed methods. Both proposed methods produced a successful and practicable reallocation plan in a very short time compared with the conventional one.

Article | Citation - WoS: 15 | Citation - Scopus: 19
A Comprehensive Analysis of Grid-Based Wind Turbine Layout Using an Efficient Binary Invasive Weed Optimization Algorithm With Levy Flight (Pergamon-Elsevier Science Ltd, 2022) Koç, İsmail
Wind energy has attracted great attention in recent years due to the increasing demand for alternative energy sources. Gathering the maximum amount of energy from wind is directly related to the layout of the turbines in a wind farm. This study focuses on the grid-based wind turbine layout problem in an area of 2 km x 2 km. To solve this problem, 9 novel grids of 11 x 11 through 19 x 19 are proposed, in addition to the 10 x 10 and 20 x 20 turbine grids already used in the literature. A new repair operator is proposed that accounts for the distance constraint between two adjacent cells, making the obtained solutions feasible. Furthermore, a Levy-flight-based IWO (LFIWO) algorithm is developed to optimize the layout of the turbines. The basic IWO algorithm and the proposed LFIWO algorithm are compared on 11 different turbine layouts, under equal conditions, with the help of 10 different binary versions. According to the comparisons over the 11 grids, LFIWO performs much better than the IWO algorithm.
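A Levy-flight step like the one LFIWO adds to IWO is commonly drawn with Mantegna's algorithm, which mixes many small moves with occasional long jumps (a generic sketch, not the paper's exact implementation):

```python
import math
import random

def levy_step(beta=1.5, rng=None):
    """One heavy-tailed Levy step via Mantegna's algorithm."""
    rng = rng or random.Random()
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = rng.gauss(0.0, sigma)
    v = rng.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)
```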
In addition, the proposed 19 x 19 grid gives the best results among the grids. Compared with other studies in the literature, LFIWO surpasses the other algorithms in terms of solution quality. As a result, the proposed binary version of LFIWO can be stated to be a competitive and effective binary algorithm.

Article | Citation - WoS: 14 | Citation - Scopus: 18
D-MOSG: Discrete Multi-Objective Shuffled Gray Wolf Optimizer for Multi-Level Image Thresholding (Elsevier - Division Reed Elsevier India Pvt Ltd, 2021) Karakoyun, Murat; Gülcü, Şaban; Kodaz, Halife
Segmentation is an important step of image processing that directly affects its success. Among the methods used for image segmentation, histogram-based thresholding is a very popular approach. To apply it, many methods such as Otsu, Kapur, and Renyi have been proposed to produce the thresholds that segment the image optimally. These methods each have their own characteristics and are successful on particular images, so better results may be obtained by combining objective functions with different characteristics. In this study, thresholding, originally a single-objective problem, is treated as a multi-objective problem using the Otsu and Kapur methods together, and the discrete multi-objective shuffled gray wolf optimizer (D-MOSG) algorithm is proposed for multi-level thresholding segmentation. Experiments clearly show that D-MOSG achieves superior results compared with the other algorithms tested. (C) 2021 Karabuk University.
Publishing services by Elsevier B.V.

Article | Citation - WoS: 36 | Citation - Scopus: 41
DEBOHID: A Differential Evolution Based Oversampling Approach for Highly Imbalanced Datasets (Pergamon-Elsevier Science Ltd, 2021) Kaya, Ersin; Korkmaz, Sedat; Şahman, Mehmet Akif; Çınar, Ahmet Cevahir
The class distribution of the samples in a dataset is one of the critical factors affecting classification success. Classifiers trained on imbalanced datasets classify majority class samples more successfully than minority class samples. Oversampling, which increases the minority class samples, is a frequently used remedy for class imbalance, and many oversampling methods have been presented over more than two decades. Differential Evolution (DE) is a metaheuristic algorithm that achieves successful results in many domains, in large part because of its effective candidate generation mechanism. In this work, we propose a novel oversampling method based on the differential evolution algorithm for highly imbalanced datasets, named DEBOHID (a differential evolution based oversampling approach for highly imbalanced datasets). To show the success of DEBOHID, 44 highly imbalanced datasets are used in the experiments, and the results are compared with nine state-of-the-art oversampling methods. To show that the experimental results are independent of the classifier, Support Vector Machines (SVM), k-Nearest Neighbor (kNN), and Decision Tree (DT) are used as classifiers. The AUC and G-Mean metrics are used for performance measurement.
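The DE candidate-generation mechanism that DEBOHID builds on creates a synthetic minority sample from three existing ones; a DE/rand/1-style sketch (the parameter F and the sampling scheme are generic DE choices, not necessarily the paper's exact settings):

```python
import random

def de_synthetic_sample(minority, F=0.8, rng=None):
    """Mutant vector v = x_r1 + F * (x_r2 - x_r3) over three distinct
    minority-class samples, used as a new synthetic minority sample."""
    rng = rng or random.Random()
    r1, r2, r3 = rng.sample(range(len(minority)), 3)
    return [a + F * (b - c)
            for a, b, c in zip(minority[r1], minority[r2], minority[r3])]
```

With F = 0 the mutant collapses onto an existing minority sample; larger F pushes the synthetic sample further along the difference vector.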
The experimental results and statistical analyses demonstrate the success of DEBOHID.

Article | Citation - WoS: 14 | Citation - Scopus: 16
Determination of Damage Levels of RC Columns With a Smart System Oriented Method (Springer, 2020) Doğan, Gamze; Arslan, Musa Hakan; Baykan, Ömer Kaan
In this study, a method that is fast, economical, and satisfactory in accuracy has been developed to determine the post-earthquake damage level of reinforced concrete columns from the damage image on the column surface. To represent the Turkish building stock, reinforced concrete columns were produced complying with the 2007 and 2018 Turkish Earthquake Codes (TEC-2007 and TBEC-2018), and, to represent the existing building stock built before 2000, non-code-compliant reinforced concrete columns were also produced. A total of 12 full-scale reinforced concrete columns with square cross sections were tested under reversible cyclic lateral load and axial force resembling earthquake loading. For each cycle, a dataset was created by matching the surface images taken from designated regions of the columns with the damage levels specified in TEC-2007 and TBEC-2018, based on the load-displacement values measured on the column during the experiment. The experimental study yielded a total of 390 damage images across the load and displacement levels. Image processing was performed in MATLAB to isolate the cracks on the column surface, and parameters such as total crack area, total crack length, maximum crack length, and maximum crack width were extracted to represent the amount of damage on the column.
The crack features were classified with machine learning classifiers (support vector machines, decision trees, K-nearest neighbors, discriminant analysis, and ensemble algorithms), and the damage states of the columns were estimated. The estimation accuracy of the classifiers ranges from 64% to 80%. This study shows that the proposed intelligent system is open to further development and is a good alternative to existing conventional approaches for determining column damage.
Article Citation - WoS: 10 Citation - Scopus: 13 Discrete Artificial Algae Algorithm for Solving Job-Shop Scheduling Problems (Elsevier B.V., 2022) Şahman, Mehmet Akif; Korkmaz, Sedat
The Job-Shop Scheduling Problem (JSSP) is an NP-hard problem that can be solved with both exact methods and heuristic algorithms. As the dimensionality increases, exact methods cannot produce solutions in reasonable time, whereas heuristic algorithms can produce optimal or near-optimal results for high-dimensional JSSPs. In this work, novel versions of the Artificial Algae Algorithm (AAA) are proposed to solve discrete optimization problems. Three encoding schemes (Random-Key (RK), Smallest Position Value (SPV), and Ranked-Order Value (ROV)) were integrated with AAA to solve JSSPs. In addition, these three encoding schemes are compared for the first time in this study. In the experiments, 48 JSSP instances with 36 to 300 dimensions were solved with 24 different approaches obtained by integrating the 3 encoding schemes into 8 state-of-the-art algorithms. The comparative and detailed analysis shows that the best makespan values were obtained by integrating the SPV encoding scheme into AAA.
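The Smallest Position Value (SPV) encoding that performed best here maps a continuous position vector onto a permutation by ranking its components, which is what lets a continuous algorithm like AAA drive a discrete schedule. A minimal sketch (the interpretation of the permutation as an operation sequence is a common convention, assumed here):

```python
import numpy as np

def spv_decode(position):
    """Smallest Position Value decoding: the index of the smallest component
    is scheduled first, the next smallest second, and so on."""
    return list(np.argsort(position))
```

For example, the position vector `[0.7, -1.2, 0.3, 2.0]` decodes to the sequence `[1, 2, 0, 3]`, so the algorithm can keep updating real-valued positions while the schedule is always a valid permutation.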
Article Citation - WoS: 15 Citation - Scopus: 21 Discrete Social Spider Algorithm for the Traveling Salesman Problem (SPRINGER, 2021) Baş, Emine; Ülker, Erkan
Heuristic algorithms are often used to find solutions to complex real-world problems. For optimization problems, these algorithms can provide solutions close to the global optimum in an acceptable time. The Social Spider Algorithm (SSA) is one of the more recently proposed heuristic algorithms and is based on the behavior of spiders. It was originally proposed to solve continuous optimization problems. In this paper, SSA is rearranged to solve discrete optimization problems. The Discrete Social Spider Algorithm (DSSA) is developed by adding explorer spiders and novice spiders in the discrete search space, which increases DSSA's exploration and exploitation capabilities. The performance of the proposed DSSA is investigated on traveling salesman benchmark problems. The Traveling Salesman Problem (TSP) is one of the standard test problems used in the performance analysis of discrete optimization algorithms. DSSA has been tested on thirty-eight low-, middle-, and large-scale TSP benchmark datasets and compared with eighteen well-known algorithms from the literature. Experimental results show that the performance of DSSA is especially good on low- and middle-scale TSP datasets. DSSA can be used as an alternative discrete algorithm for discrete optimization tasks.
Article Citation - WoS: 15 Citation - Scopus: 17 Discrete Tree Seed Algorithm for Urban Land Readjustment (Pergamon-Elsevier Science Ltd, 2022) Koç, İsmail; Atay, Yılmaz; Babaoğlu, İsmail
Land readjustment and redistribution (LR) is an important approach used to realize development plans by converting rural land to urban land while also providing urban infrastructure.
The LR problem, a complex and challenging real-world problem, is a discrete optimization problem: its structure is similar to the TSP (Traveling Salesman Problem) and to scheduling problems, which are combinatorial optimization problems. Since classical mathematical methods are insufficient for solving NP (nondeterministic polynomial) optimization problems due to time limitations, meta-heuristic optimization algorithms are commonly used for these kinds of problems. In this paper, meta-heuristic algorithms including the genetic algorithm, particle swarm optimization, differential evolution, the artificial bee colony algorithm, and the tree-seed algorithm are used to solve LR problems. Each algorithm applies spatial-based crossover and mutation operators tailored to the LR problem. Moreover, a synthetic dataset is used so that the quality of the obtained solution is acceptable to everyone and an optimal solution can be proved easily. Using the suggested spatial-based crossover and mutation operators, the aim is to find the ideal solution on the synthetic dataset. In addition, five different modifications of TSA (Tree-Seed Algorithm) are performed and used to solve LR problems. All modified versions of TSA change only the seed-reproduction mechanism. The novel TSA approaches are named tcTSA (tournament current), tbTSA (tournament best), pbTSA (personal-best based), t2TSA (double tournament), and elTSA (elitism based). In the experimental studies, the hybrid approach, which includes the crossover and mutation operators, is applied to all of the algorithms under equal conditions for a fair comparison.
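The five TSA variants differ only in how the guiding individual for seed reproduction is chosen. A double-tournament pick in the spirit of t2TSA might look like the following; this is a hedged sketch under assumptions (the tournament size `k`, the tie-breaking, and the use of the winner as a guide are illustrative, not the paper's exact definition):

```python
import random

def tournament(fitness, k=2, rng=random):
    """Return the index of the fittest of k randomly drawn individuals
    (minimization: lower fitness is better)."""
    contenders = rng.sample(range(len(fitness)), k)
    return min(contenders, key=lambda i: fitness[i])

def double_tournament_guide(fitness, k=2, rng=random):
    """Run two independent tournaments and keep the fitter of the two
    winners, mimicking a 'double tournament' guide selection."""
    a = tournament(fitness, k, rng)
    b = tournament(fitness, k, rng)
    return a if fitness[a] <= fitness[b] else b
```

The other variants would swap this selection out, e.g. always using the global best (tbTSA) or a personal best (pbTSA) as the guide.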
According to the experimental results on this dataset, t2TSA in particular outperforms all the other algorithms in terms of solution quality and time.
Article Citation - WoS: 58 Citation - Scopus: 76 A Discrete Tree-Seed Algorithm for Solving Symmetric Traveling Salesman Problem (ELSEVIER - DIVISION REED ELSEVIER INDIA PVT LTD, 2020) Çınar, Ahmet Cevahir; Korkmaz, Sedat; Kıran, Mustafa Servet
The Tree-Seed Algorithm (TSA) is a recently developed, nature-inspired, population-based iterative search algorithm. TSA was proposed for solving continuous optimization problems, inspired by the relations between trees and their seeds. Constrained and binary versions of TSA exist in the literature, but there is no discrete version of TSA in which the decision variables are represented as integer values. In the present work, the basic TSA is redesigned by integrating swap, shift, and symmetry transformation operators in order to solve permutation-coded optimization problems, and the result is called DTSA. In the basic TSA, the solution update rules apply to decision variables defined in a continuous solution space; in the proposed DTSA, these rules are replaced with the transformation operators. To investigate the performance of DTSA, well-known symmetric traveling salesman problems are considered in the experiments. The obtained results are compared with well-known metaheuristic algorithms and their variants, such as Ant Colony Optimization (ACO), Genetic Algorithm (GA), Simulated Annealing (SA), State Transition Algorithm (STA), Artificial Bee Colony (ABC), Black Hole (BH), and Particle Swarm Optimization (PSO). Experimental results show that DTSA is another qualified and competitive solver for discrete optimization.
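The three transformation operators that replace TSA's continuous update rules can be sketched on a tour as follows. This is a minimal illustration, and interpreting "symmetry" as segment reversal is an assumption on our part; the paper's operator details may differ.

```python
def swap(tour, i, j):
    """Exchange the cities at positions i and j."""
    t = tour[:]
    t[i], t[j] = t[j], t[i]
    return t

def shift(tour, i, j):
    """Remove the city at position i and reinsert it at position j."""
    t = tour[:]
    t.insert(j, t.pop(i))
    return t

def symmetry(tour, i, j):
    """Reverse the segment between positions i and j (inclusive),
    the classic 2-opt style move."""
    t = tour[:]
    t[i:j + 1] = reversed(t[i:j + 1])
    return t
```

Each operator maps a valid permutation to a valid permutation, so a seed produced from a tree's tour is always a feasible TSP solution.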

