Bilgisayar ve Bilişim Fakültesi Koleksiyonu (Computer and Informatics Faculty Collection)
Permanent URI for this collection: https://hdl.handle.net/20.500.13091/10834
Browsing Bilgisayar ve Bilişim Fakültesi Koleksiyonu by Scopus Q "Q2"
Now showing 1 - 20 of 31

Article | Citation - WoS: 3 | Citation - Scopus: 6
Analysis of Machine Learning Classification Approaches for Predicting Students' Programming Aptitude (MDPI, 2023). Çetinkaya, Ali; Baykan, Ömer Kaan; Kırgız, Havva.
With the increasing prevalence and significance of computer programming, a crucial challenge facing teachers and parents is to identify students adept at computer programming and direct them to relevant programming fields. As most studies on students' coding abilities focus on elementary, high school, and university students in developed countries, we aimed to determine the coding abilities of middle school students in Turkey. We first administered a three-part spatial test to 600 secondary school students, of whom 400 completed the survey and the 20-level Classic Maze course on Code.org. We then employed four machine learning (ML) algorithms, namely support vector machine (SVM), decision tree, k-nearest neighbor, and quadratic discriminant analysis, to classify the coding abilities of these students using the spatial test and Code.org platform data. SVM yielded the most accurate results and can thus be considered a suitable ML technique for determining the coding abilities of participants. This article promotes quality education and coding skills for workforce development and sustainable industrialization, aligned with the United Nations Sustainable Development Goals.

Article | Citation - WoS: 7 | Citation - Scopus: 12
Analyzing the Effect of Data Preprocessing Techniques Using Machine Learning Algorithms on the Diagnosis of COVID-19 (Wiley, 2022). Erol, Gizemnur; Uzbaş, Betül; Yücelbaş, Cüneyt; Yücelbaş, Sule.
Real-time polymerase chain reaction (RT-PCR), known as the swab test, is a diagnostic test that can diagnose COVID-19 through respiratory samples in the laboratory. Due to the rapid spread of the coronavirus around the world, the RT-PCR test has become insufficient for obtaining fast results, so the need for diagnostic methods to fill this gap has arisen and machine learning studies have started in this area. On the other hand, working with medical data is challenging because such data are often inconsistent, incomplete, difficult to scale, and very large. Additionally, poor clinical decisions, irrelevant parameters, and limited medical data adversely affect the accuracy of the resulting studies. Therefore, considering that datasets containing COVID-19 blood parameters are currently fewer in number than other medical datasets, this study aims to improve these existing datasets. To obtain more consistent results in COVID-19 machine learning studies, the effect of data preprocessing techniques on the classification of COVID-19 data was investigated. First, categorical feature encoding and feature scaling were applied to a dataset with 15 features containing blood data of 279 patients, including gender and age information. Then, missing values were imputed using both the k-nearest neighbor (KNN) algorithm and multiple imputation by chained equations (MICE). Data balancing was performed with the synthetic minority oversampling technique (SMOTE). The effect of these preprocessing techniques on the ensemble learning algorithms bagging, AdaBoost, and random forest, and on the popular classifiers KNN, support vector machine, logistic regression, artificial neural network, and decision tree, was then analyzed. The highest accuracies obtained with the bagging classifier when SMOTE was applied were 83.42% and 83.74% with KNN and MICE imputation, respectively; the highest accuracy reached with the same classifier without SMOTE was 83.91% with KNN imputation. In conclusion, several data preprocessing techniques are examined comparatively, their effect on classification success is presented, and the importance of the right combination of preprocessing steps is demonstrated experimentally.
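
The entry above describes a concrete tabular pipeline: categorical encoding, scaling, KNN or MICE imputation, SMOTE balancing, then ensemble and classical classifiers. The sketch below is a minimal, hedged illustration of one such combination (KNN imputation plus SMOTE plus bagging), assuming scikit-learn and imbalanced-learn; the synthetic data stands in for the 279-patient blood dataset, which is not reproduced in this listing.

```python
# Hedged sketch of the preprocessing-and-classification pipeline described above
# (KNN imputation + scaling + SMOTE + bagging). The data is synthetic; the original
# 279-patient blood-parameter dataset is not available here.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.impute import KNNImputer
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from imblearn.over_sampling import SMOTE  # requires the imbalanced-learn package

# Stand-in for the blood-parameter dataset: 15 features, imbalanced classes.
X, y = make_classification(n_samples=279, n_features=15, weights=[0.7, 0.3], random_state=0)
rng = np.random.default_rng(0)
X[rng.random(X.shape) < 0.05] = np.nan          # inject missing values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

imputer = KNNImputer(n_neighbors=5)             # MICE could be swapped in via IterativeImputer
scaler = StandardScaler()
X_tr = scaler.fit_transform(imputer.fit_transform(X_tr))
X_te = scaler.transform(imputer.transform(X_te))

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)   # balance only the training fold

clf = BaggingClassifier(random_state=0).fit(X_bal, y_bal)
print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```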

Article | Citation - WoS: 2
B-Spline Curve Approximation by Utilizing Big Bang-Big Crunch Method (Tech Science Press, 2020). İnik, Özkan; Ülker, Erkan; Koç, İsmail.
Locating knot points and estimating the number of knots are among the most difficult problems in B-spline curve approximation. In the literature, different researchers have used more than one optimization algorithm to solve this problem. In this paper, the Big Bang-Big Crunch method (BB-BC), one of the evolutionary optimization algorithms, is introduced and then used to approximate B-spline curve knots. The technique of reverse engineering was implemented for curve knot approximation, and the knot locations and the number of knots were selected randomly in the curve approximation performed with the BB-BC method. The experiments used seven different test functions for curve approximation. The performance of the BB-BC algorithm was examined on these functions, and the results were compared with earlier studies. In comparison with the other studies, it was observed that although the number of knots used by the BB-BC algorithm was high, the algorithm approximated the B-spline curves with a small error.

Article | Citation - WoS: 11 | Citation - Scopus: 13
Boosting Galactic Swarm Optimization with ABC (Springer Heidelberg, 2019). Kaya, Ersin; Uymaz, Sait Ali; Koçer, Barış.
Galactic swarm optimization (GSO) is a new global metaheuristic optimization algorithm. It manages multiple sub-populations to explore the search space efficiently, and a superswarm is then recruited from the best solutions found. GSO is effectively a framework: the search method used in the sub-populations and in the superswarm can be chosen independently. In the original work, particle swarm optimization is used as the search method in both phases. In this work, the performance of state-of-the-art and well-known methods is tested under the GSO framework. Experiments show that the artificial bee colony algorithm performs best under the GSO framework, compared both with the other algorithms under the framework and with the original algorithms.

Article | Citation - WoS: 25 | Citation - Scopus: 31
Clustering Analysis Through Artificial Algae Algorithm (Springer Heidelberg, 2022). Türkoğlu, Bahaeddin; Uymaz, Sait Ali; Kaya, Ersin.
Clustering analysis is widely used in many areas such as document grouping, image recognition, web search, business intelligence, bioinformatics, and medicine. Many algorithms with different clustering approaches have been proposed in the literature. Because they are simple and straightforward, partitioning methods such as k-means and k-medoids are the most commonly used algorithms. These are greedy methods that gradually improve clustering quality, are highly dependent on initial parameters, and get stuck in local optima. For this reason, heuristic optimization methods have also been used for clustering in recent years; these methods can provide successful results because they have mechanisms for escaping local optima. In this study, the Artificial Algae Algorithm (AAA) was used for clustering for the first time and compared with ten well-known bio-inspired metaheuristic clustering approaches. The efficiency of the proposed AAA clustering is evaluated using statistical analysis, convergence-rate analysis, Wilcoxon's test, and rankings over different cluster evaluation measures on 25 well-known public datasets with different difficulty levels (features and instances). The results demonstrate that the AAA clustering method provides more accurate solutions with a higher convergence rate than the other heuristic clustering techniques.
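
The clustering entry above treats clustering as an optimization problem for a metaheuristic to solve. A common encoding, sketched below under that assumption, is a flat vector of k centroids scored by the within-cluster sum of squared errors. The Artificial Algae Algorithm itself is not available in standard libraries, so SciPy's differential evolution stands in here as a generic population-based optimizer; only the encoding and objective are the illustrated idea, not the paper's algorithm.

```python
# Hedged sketch: metaheuristic-driven clustering with a centroid-vector encoding and an
# SSE objective. Differential evolution is a stand-in optimizer, not the paper's AAA.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import load_iris

X = load_iris().data
k, d = 3, X.shape[1]

def sse(flat_centroids: np.ndarray) -> float:
    centroids = flat_centroids.reshape(k, d)
    # distance of every point to every centroid, then sum of squared distances to the nearest one
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return float((dists.min(axis=1) ** 2).sum())

# one (min, max) bound per centroid coordinate, taken from the data range
bounds = [(X[:, j].min(), X[:, j].max()) for _ in range(k) for j in range(d)]
result = differential_evolution(sse, bounds, seed=0, maxiter=200, tol=1e-6)
print("best SSE found:", result.fun)
```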

Article | Citation - WoS: 5 | Citation - Scopus: 7
Convolutional Neural Network-Based Apple Images Classification and Image Quality Measurement by Light Colors Using the Color-Balancing Approach (Springer, 2023). Büyükarıkan, Birkan; Ülker, Erkan.
The appearance of an object is affected by the color and quality of the light on its surface and by the location of the lighting source. Color-balancing methods can solve the problems caused by light changes; they increase the visibility of an image by adjusting color and clarity. The study examines the classification performance of pre-trained CNN models on images of physiological disorders in apples captured under different light colors and processed with color-balancing models. Physiological disorders were classified with 0.949 accuracy by the ResNet50V2 model on the sharpness dataset under green light. With the proposed approaches, performance increased compared with the original dataset, and the best results for all light colors were obtained on the sharpness dataset type. In addition, image quality was measured using MSE, PSNR, and SSIM; PSNR increased for the warm-white and cold-white sharpness dataset types and for the green-light CLAHE dataset type. Finally, the experimental studies showed that color balancing significantly affects classification success.

Article | Citation - WoS: 1 | Citation - Scopus: 1
Empirical Evaluation of Leveraging Named Entities for Arabic Sentiment Analysis (Zarka Private Univ, 2020). Mulki, Hala; Haddad, Hatem; Gridach, Mourad; Babaoglu, İsmail.
Social media reflects the attitudes of the public towards specific events. Events are often related to persons, locations, or organizations, the so-called named entities (NEs), which makes NEs sentiment-bearing components. In this paper, we go beyond NE recognition to the exploitation of sentiment-annotated NEs in Arabic sentiment analysis. We develop an algorithm that detects the sentiment of an NE based on the majority of attitudes expressed towards it. This enables tagging NEs with appropriate sentiment labels and thus including them in a sentiment analysis framework with two models: supervised and lexicon-based. Both models were applied to datasets of multi-dialectal content. The results revealed that NEs have no considerable impact on the supervised model, while employing NEs in the lexicon-based model improved the classification performance and outperformed most of the baseline systems.
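
The named-entity entry above tags each NE with the majority sentiment of the labeled texts that mention it and then folds those tags into a lexicon. The sketch below illustrates only that majority-vote idea; the tiny English example data and helper names are hypothetical, and a real system would run NE recognition on Arabic text rather than the hard-coded entity list used here.

```python
# Hedged sketch of majority-vote sentiment tagging for named entities, then adding the
# result to a polarity lexicon. Example data is illustrative only, not from the paper.
from collections import Counter, defaultdict

labeled_texts = [
    ("great match for Barcelona today", "pos"),
    ("Barcelona played brilliantly", "pos"),
    ("disappointed by Barcelona defense", "neg"),
]
entity_mentions = defaultdict(list)           # NE -> sentiment labels of texts mentioning it
for text, label in labeled_texts:
    for entity in ("Barcelona",):             # a real system would run NE recognition here
        if entity in text:
            entity_mentions[entity].append(label)

# Majority vote per named entity, then fold the result into a polarity lexicon.
ne_lexicon = {ne: Counter(labels).most_common(1)[0][0] for ne, labels in entity_mentions.items()}
print(ne_lexicon)                              # {'Barcelona': 'pos'}
```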

Article | Citation - WoS: 3 | Citation - Scopus: 4
Encoder-Decoder Semantic Segmentation Models for Pressure Wound Images (Taylor & Francis Ltd, 2023). Eldem, Huseyin; Ülker, Erkan; Işıklı, Osman Yasar.
Segmentation of wound images is important for efficient wound treatment, so that appropriate treatment methods can be recommended quickly. Manual wound measurement is subjective for an overall assessment, so establishing a high-performance automatic segmentation system is of great importance for wound care. Machine learning methods make high-performance wound segmentation possible, and great success can be achieved with deep learning, a sub-branch of machine learning that has recently been used in image analysis (classification, segmentation, etc.). In this study, pressure wound segmentation was addressed with different encoder-decoder-based segmentation models, all implemented on the Medetec pressure wound image dataset. In the experiments, the FCN, PSP, UNet, SegNet, and DeepLabV3 segmentation architectures were evaluated with five-fold cross-validation. Model performance was measured, and the most successful architecture was MobileNet-UNet with 99.67% accuracy.

Article | Citation - WoS: 14 | Citation - Scopus: 27
Enhanced Coati Optimization Algorithm for Big Data Optimization Problem (Springer, 2023). Baş, Emine; Yıldızdan, Gülnur.
The recently proposed Coati Optimization Algorithm (COA) is a swarm-based intelligence algorithm. In this study, COA is developed further and an Enhanced COA (ECOA) is proposed. There is an imbalance between the exploitation and exploration capabilities of COA. To balance exploration and exploitation in the search space, the algorithm is improved with two modifications that preserve population diversity for a longer period during local and global searches, eliminating some of the drawbacks of COA's search strategies. The performance of COA and ECOA was tested on four different test groups. COA and ECOA were first compared on twenty-three classic CEC functions in three dimensions (10, 20, and 30). ECOA was then tested on CEC-2017 with twenty-nine functions and on CEC-2020 with ten functions, and its success was demonstrated in different dimensions (5, 10, and 30). Finally, ECOA was shown to be successful over different iteration budgets (300, 500, and 1000) on high-dimensional Big Data Optimization Problems (BOP). Friedman and Wilcoxon tests were performed on the results, and the outcomes were analyzed in detail. According to the results, ECOA outperformed COA in all comparisons. To further demonstrate its success, seven recently proposed algorithms (EMA, FHO, SHO, HBA, SMA, SOA, and JAYA) were selected from the literature and compared with ECOA and COA. On the classical test functions, ECOA achieved the best results, surpassing all other compared algorithms; it achieved the second-best results on the CEC-2020 test functions and ranked in the top four on the CEC-2017 and BOP test functions. According to these results, ECOA can be used as an alternative algorithm for solving small, medium, and large-scale continuous optimization problems.
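
The ECOA entry above reports Friedman and Wilcoxon tests over benchmark results, a standard way to check whether one optimizer's per-function scores differ significantly from another's. The sketch below shows that kind of comparison with SciPy; the numbers are made up and do not come from the paper.

```python
# Hedged sketch of the non-parametric comparisons mentioned above: Wilcoxon signed-rank
# for a pairwise ECOA-vs-COA comparison and the Friedman test across several algorithms,
# over per-function best objective values. All values below are illustrative placeholders.
from scipy.stats import wilcoxon, friedmanchisquare

# Best objective value per benchmark function (lower is better), one list per algorithm.
ecoa = [0.01, 1.2, 3.4, 0.0, 2.1, 0.5, 9.8, 0.3]
coa  = [0.05, 1.9, 3.9, 0.1, 2.6, 0.9, 11.2, 0.4]
jaya = [0.04, 2.5, 4.1, 0.2, 2.4, 1.1, 10.5, 0.6]

stat, p = wilcoxon(ecoa, coa)                   # paired test on per-function differences
print(f"Wilcoxon ECOA vs COA: p = {p:.4f}")

stat, p = friedmanchisquare(ecoa, coa, jaya)    # rank-based test across all three algorithms
print(f"Friedman test over 3 algorithms: p = {p:.4f}")
```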

Article | Citation - WoS: 1 | Citation - Scopus: 1
Enhancing Classification Accuracy Through Feature Extraction: A Comparative Study of Discretization and Clustering Approaches on Sensor-Based Datasets (Springer London Ltd, 2023). Esme, Engin.
Accuracy in a classification problem is directly related to the ability of the features to represent the differences between classes adequately. In sensor-based datasets, measurements taken from the sensor form the feature vectors. Measuring a given physical signal with different sensors enables it to be expressed with various feature vectors, which is why sensor fusion is preferred in data acquisition. However, each new sensor added to a system brings problems such as more complex sensing and supply circuitry, extra energy consumption, signal-sampling complexity, and time consumption. On the other hand, where sensor fusion cannot be applied, the data from a single sensor may be insufficient to represent the classes. To avoid these problems, discretization and clustering approaches are suitable for deriving more features from fewer sensors; the aim is to improve classifier accuracy by deriving new feature vectors that can represent the sensor data. This research evaluates the contribution of clustering and discretization, used as feature extraction methods, to classification accuracy. Three widely used machine learning techniques are investigated on the Perfume, Wine, Seeds, and Gas datasets from the UCI repository. This comprehensive empirical study indicates that classifier accuracy improves by up to 20% on datasets obtained from some sensors when both discretization and clustering are used as feature extraction methods.

Article | Citation - WoS: 4 | Citation - Scopus: 6
Enhancing Signer-Independent Recognition of Isolated Sign Language Through Advanced Deep Learning Techniques and Feature Fusion (MDPI, 2024). Akdağ, Ali; Baykan, Ömer Kaan.
Sign Language Recognition (SLR) systems are crucial bridges facilitating communication between deaf or hard-of-hearing individuals and the hearing world. Existing SLR technologies, while advancing, often grapple with challenges such as accurately capturing the dynamic and complex nature of sign language, which includes both manual and non-manual elements like facial expressions and body movements. These systems sometimes fall short in environments with different backgrounds or lighting conditions, hindering their practical applicability and robustness. This study introduces an innovative approach to isolated sign language word recognition using a novel deep learning model that combines the strengths of residual three-dimensional (R3D) and temporally separated (R(2+1)D) convolutional blocks. The R3(2+1)D-SLR network model demonstrates a superior ability to capture the intricate spatial and temporal features crucial for accurate sign recognition. Our system fuses features of the signer's body, hands, and face, extracted using the R3(2+1)D-SLR model, and employs a Support Vector Machine (SVM) for classification. By utilizing pose data rather than RGB data, it demonstrates remarkable improvements in accuracy and robustness across various backgrounds. With this pose-based approach, the proposed system achieved 94.52% and 98.53% test accuracy in signer-independent evaluations on the BosphorusSign22k-general and LSA64 datasets, respectively.
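
The sign-language entry above ends with a simple but important stage: per-video feature vectors for body, hands, and face are concatenated and classified with an SVM. The sketch below shows only that fusion-plus-SVM stage, assuming scikit-learn; the random arrays are stand-ins for embeddings from the paper's R3(2+1)D-SLR backbone, which is not reproduced here.

```python
# Hedged sketch of feature fusion by concatenation followed by SVM classification.
# The random "features" are placeholders, not outputs of the paper's network.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_videos, n_classes = 200, 10
body  = rng.normal(size=(n_videos, 512))        # stand-in embedding per input stream
hands = rng.normal(size=(n_videos, 512))
face  = rng.normal(size=(n_videos, 512))
labels = rng.integers(0, n_classes, size=n_videos)

X = np.concatenate([body, hands, face], axis=1)  # simple feature fusion by concatenation
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, stratify=labels, random_state=0)

clf = SVC(kernel="linear").fit(X_tr, y_tr)
print("accuracy on random stand-in features:", accuracy_score(y_te, clf.predict(X_te)))
```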

Article | Citation - WoS: 4 | Citation - Scopus: 5
Evaluating the Attributes of Remote Sensing Image Pixels for Fast K-Means Clustering (2019). Sağlam, Ali; Baykan, Nurdan Akhan.
Clustering is an important stage in many data mining applications: data elements are grouped according to their similarities. One of the best-known clustering algorithms is k-means, which requires the number of clusters as a parameter and runs iteratively. Like many other image processing applications, remote sensing image processing usually needs a clustering stage. Remote sensing images provide more information about the environment as multispectral sensor and laser technologies develop. In the dataset used in this paper, infrared (IR) values and digital surface maps (DSM) are supplied in addition to the red (R), green (G), and blue (B) color values of the pixels. However, remote sensing images are very large (6000 × 6000 pixels for each image in the dataset used), and clustering such large images over their multiple attributes directly consumes too much time. Several studies in the literature accelerate the k-means algorithm; one of them is the normalized distance value (NDV)-based fast k-means algorithm, which benefits from the speed of a histogram-based approach and uses the multiple attributes of the pixels. In this paper, we evaluate the effects of these attributes on the correctness of the clustering process under different color space transformations and distance measurements. We report the results as peak signal-to-noise ratio and structural similarity index values using two different types of reference data (the source images and the ground-truth images) separately. Finally, we report accuracy-based results to evaluate both the clustering outputs and the reliability of the NDV-based measurement methods presented in the paper.
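
The remote-sensing entry above clusters pixels described by several attributes at once (R, G, B, IR, and a surface-model value). The sketch below illustrates that per-pixel multi-attribute clustering with scikit-learn's MiniBatchKMeans as a generic fast stand-in; the paper's NDV-based fast k-means is not part of standard libraries, and the random image here stands in for the 6000 × 6000 dataset tiles.

```python
# Hedged sketch: cluster pixels on stacked (R, G, B, IR, DSM) attributes.
# MiniBatchKMeans is a stand-in for the NDV-based fast k-means of the paper.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.default_rng(0)
h, w = 256, 256
rgb = rng.integers(0, 256, size=(h, w, 3)).astype(np.float32)
ir  = rng.integers(0, 256, size=(h, w, 1)).astype(np.float32)
dsm = rng.normal(loc=100.0, scale=5.0, size=(h, w, 1)).astype(np.float32)

# Stack the per-pixel attributes into one feature vector per pixel: (R, G, B, IR, DSM).
features = np.concatenate([rgb, ir, dsm], axis=2).reshape(-1, 5)
features = (features - features.mean(axis=0)) / features.std(axis=0)   # scale attributes comparably

labels = MiniBatchKMeans(n_clusters=6, random_state=0, n_init=3).fit_predict(features)
label_image = labels.reshape(h, w)              # cluster map with the original image shape
print("cluster sizes:", np.bincount(labels))
```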

Article | Citation - WoS: 14 | Citation - Scopus: 15
Experimental and Numerical Investigation of RC Column Strengthening with CFRP Strips Subjected to Low-Velocity Impact Load (Techno-Press, 2021). Mercimek, Ömer; Anıl, Özgür; Ghoroubi, Rahim; Sakin, Shaimaa; Yılmaz, Tolga.
Reinforced concrete (RC) square columns are vulnerable to sudden dynamic impact loads such as vehicle crashes into highway or seaway bridges, rock falls, and collisions of masses driven by floods and landslides. In this experimental study, RC square columns strengthened with and without CFRP strips and subjected to sudden low-velocity lateral impact loading were investigated. A drop-hammer testing machine was used to apply the impact loading to the columns. The test specimens were manufactured with square cross-sections at a 1/3 geometric scale; six specimens were manufactured and tested. The main variables were the application point of the impact loading and the CFRP strip spacing. A 9.0 kg mass was allowed to fall freely from a height of 1.0 m to apply the impact loading on the columns. During the impact tests, accelerations, impact force, column mid-point displacements, and CFRP strip strains were measured. The general behavior of the test specimens, the collapse mechanisms, and the acceleration, displacement, impact load, and strain time histories were interpreted, and the load-displacement relationships were obtained. The experimental data were used to investigate the effect of the variables on the impact performance of the RC columns. Strengthening reinforced concrete columns that were designed with insufficient shear strength and shear reinforcement and produced with low-strength concrete by wrapping them with CFRP strips significantly improves their behavior under sudden dynamic impact loading and increases their performance. As a result of the increased stiffness of the specimens strengthened with CFRP strips, the accelerations due to the impact loading increased, the displacements decreased, the number of shear cracks decreased, and the damage was limited. Moreover, finite element analyses of the tested specimens were performed in ABAQUS to further investigate the impact behavior.

Article | Citation - WoS: 1 | Citation - Scopus: 1
Histological Tissue Classification With a Novel Statistical Filter-Based Convolutional Neural Network (Wiley, 2024). Ünlükal, Nejat; Ülker, Erkan; Solmaz, Merve; Uyar, Kübra; Tasdemir, Sakir.
Deep networks have attracted considerable interest in the literature and have enabled the solution of recent real-world applications. Thanks to filters that perform feature extraction, the convolutional neural network (CNN) is recognized as an accurate, efficient, and trustworthy deep learning technique for image-based problems. High-performing CNNs are computationally demanding even when they produce good results in a variety of applications, because their large number of parameters limits their reuse on low-performance central processing units. To address these limitations, we propose a novel statistical filter-based CNN (HistStatCNN) for image classification. The convolution kernels of the designed CNN model were initialized using continuous statistical methods. The proposed filter initialization approach was evaluated on a novel histological dataset and on various histopathological benchmark datasets. To demonstrate the efficiency of the statistical filters, three unique parameter sets and a mixed parameter set of statistical filters were applied to the designed CNN model for the classification task. According to the results, the accuracies of the GoogleNet, ResNet18, ResNet50, and ResNet101 models were 85.56%, 85.24%, 83.59%, and 83.79%, respectively, while HistStatCNN improved accuracy to 87.13% on the histological data classification task. Moreover, the proposed filter generation approach increased average accuracy rates when tested on various histopathological benchmark datasets. The experimental results validate that the proposed statistical filters enhance network performance with simpler CNN models.
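
The HistStatCNN entry above initializes convolution kernels from continuous statistical distributions rather than the framework default. The sketch below shows only that generic idea of overwriting a layer's kernels with samples from a chosen distribution; the paper's exact distributions and parameter sets are not given in this listing, so a plain normal distribution is used as a stand-in, and PyTorch is an assumption rather than the authors' stated framework.

```python
# Hedged sketch: initialize a conv layer's kernels from a continuous statistical
# distribution. The normal distribution and its parameters are placeholders.
import numpy as np
import torch
import torch.nn as nn

def statistical_init(conv: nn.Conv2d, mean: float = 0.0, std: float = 0.05, seed: int = 0) -> None:
    """Overwrite the layer's kernels with samples from a chosen continuous distribution."""
    rng = np.random.default_rng(seed)
    samples = rng.normal(loc=mean, scale=std, size=tuple(conv.weight.shape))
    with torch.no_grad():
        conv.weight.copy_(torch.as_tensor(samples, dtype=conv.weight.dtype))

conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
statistical_init(conv)
x = torch.randn(1, 3, 64, 64)                    # a dummy RGB patch
print(conv(x).shape)                             # torch.Size([1, 16, 64, 64])
```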

Article | Citation - WoS: 3 | Citation - Scopus: 3
Implementation of the Land Reallocation Problem Using NSGA-II and PESA-II Algorithms: A Case Study in Konya/Turkey (Taylor & Francis Ltd, 2022). Haber, Zeynep; Uguz, Harun; Haklı, Hüseyin.
Land consolidation is one of the essential tools for increasing productivity in agricultural production. Among the land consolidation stages, land reallocation is the most important, complex, and time-consuming step. For these reasons, the use of computer technology to optimize this process is inevitable. This study used reallocation models based on the PESA-II and NSGA-II optimization algorithms to solve the reallocation problem. The methods were compared with optimization algorithms from the literature and with the conventional allocation produced by a technician. The applied algorithms achieved successful results in terms of parcel number, average parcel size, and reallocation cost.

Article | Citation - WoS: 39 | Citation - Scopus: 51
An Improved Artificial Bee Colony Algorithm for Balancing Local and Global Search Behaviors in Continuous Optimization (Springer Heidelberg, 2020). Haklı, Hüseyin; Kıran, Mustafa Servet.
The artificial bee colony (ABC) algorithm is a population-based iterative optimization algorithm proposed for solving optimization problems with continuously structured solution spaces. Although ABC is equipped with powerful global search capability, this capability can cause poor intensification around the solutions found and slow convergence. These issues originate from the search equations used by the employed and onlooker bees, which update only one decision variable at each trial. To address these drawbacks of the basic ABC algorithm, we introduce six search equations: three are used by the employed bees and the rest by the onlooker bees. Moreover, each onlooker agent can modify three dimensions (decision variables) of a food source, which represents a candidate solution to the optimization problem, at each attempt. The proposed ABC variant is applied to the basic, CEC2005, CEC2014, and CEC2015 benchmark functions. The obtained results are compared with those of state-of-the-art variants of the basic ABC algorithm, the artificial algae algorithm, particle swarm optimization and its variants, the gravitational search algorithm and its variants, and others. Comparisons cover solution quality, robustness, and convergence characteristics. The results and comparisons experimentally validate the proposed ABC variant and its success in solving the continuous optimization problems addressed in the study.

Article | Citation - WoS: 20 | Citation - Scopus: 28
Integration Search Strategies in Tree Seed Algorithm for High Dimensional Function Optimization (Springer Heidelberg, 2020). Güngör, İmral; Emiroğlu, Bülent Gürsel; Çınar, Ahmet Cevahir; Kıran, Mustafa Servet.
The tree-seed algorithm (TSA) is a population-based intelligent optimization algorithm developed for solving continuous optimization problems, inspired by the relationship between trees and their seeds. The locations of trees and seeds correspond to possible solutions of the optimization problem in the search space. With this model, lower-dimensional continuous optimization problems are solved effectively, but performance decreases dramatically on higher-dimensional problems. To address this issue in the basic TSA, this study proposes an integration of different solution update rules for solving high-dimensional continuous optimization problems. Based on the search tendency parameter, a control parameter peculiar to TSA, five update rules and a withering process are used to generate seeds for the trees.
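
For readers unfamiliar with TSA, the entry above builds on its seed-generation step, where the search tendency (ST) parameter decides whether a seed dimension moves relative to the best tree or relative to a random tree. The sketch below is a minimal rendering of that basic rule as it is usually formulated, not the paper's extended set of five update rules or its withering process.

```python
# Hedged sketch of the basic tree-seed algorithm (TSA) seed-generation rule: per dimension,
# with probability ST move relative to the best tree, otherwise relative to a random tree.
import numpy as np

rng = np.random.default_rng(0)

def generate_seed(trees: np.ndarray, i: int, best: np.ndarray, st: float = 0.1) -> np.ndarray:
    """Produce one seed for tree i following the commonly stated basic TSA rule."""
    n_trees, dim = trees.shape
    r = rng.choice([k for k in range(n_trees) if k != i])   # a random tree other than tree i
    seed = trees[i].copy()
    for j in range(dim):
        alpha = rng.uniform(-1.0, 1.0)
        if rng.random() < st:                                # exploit: move towards the best tree
            seed[j] = trees[i, j] + alpha * (best[j] - trees[r, j])
        else:                                                # explore: move relative to a random tree
            seed[j] = trees[i, j] + alpha * (trees[i, j] - trees[r, j])
    return seed

trees = rng.uniform(-5.0, 5.0, size=(10, 30))               # 10 trees in a 30-dimensional space
sphere = lambda x: float(np.sum(x ** 2))                     # simple benchmark objective
best = trees[np.argmin([sphere(t) for t in trees])]
print("seed fitness:", sphere(generate_seed(trees, i=0, best=best)))
```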
The performance of the proposed method is investigated on twelve basic 30-dimensional numerical benchmark functions and the CEC (Congress on Evolutionary Computation) 2015 test suite, and it is compared with the artificial bee colony algorithm, particle swarm optimization, the genetic algorithm, pure random search, and differential evolution variants. Experimental comparisons show that the proposed method is better than the basic method in terms of solution quality, robustness, and convergence characteristics.

Article | Citation - WoS: 10 | Citation - Scopus: 7
A Jaya-Based Approach to Wind Turbine Placement Problem (Taylor & Francis Inc, 2023). Aslan, Murat; Gündüz, Mesut; Kıran, Mustafa Servet.
Renewable energy resources are natural, clean, economical, and inexhaustible. Wind energy is an important clean, cheap, and easily applicable energy source, and energy generation from wind technology has therefore grown day by day in competition with fossil-fuel power production methods. As the number of turbines located in a wind farm increases, the average power obtained from each turbine decreases appreciably due to wake effects within the farm; optimal placement of the turbines is therefore required to obtain the optimum wind energy from the farm. When the site where the wind turbines are located is considered as an N x N grid, a wind turbine can be placed in each cell of the grid, and whether or not a turbine is placed in a given cell can be modeled as a binary optimization problem. In this study, a Jaya-based binary optimization algorithm is proposed to determine which cells are used for wind turbine placement. To demonstrate the efficiency of the proposed approach, two different test cases are considered, and the solutions produced are compared with those of swarm intelligence and evolutionary computation methods. According to the experiments and comparisons, the Jaya-based binary approach shows superior performance to the compared approaches in terms of cost and power effectiveness: its efficiency is 92.2% with 30 turbines placed on a 10 x 10 grid and 95.7% with 43 turbines placed on a 20 x 20 grid.

Article | Citation - WoS: 10 | Citation - Scopus: 9
Land Reallocation Model With Simulated Annealing Algorithm (Taylor & Francis Ltd, 2021). Ertunç, Ela; Uyan, Mevlüt; Tongur, Vahit.
A land consolidation project has many stages. Land reallocation, in which many factors play a role, is the most important stage and forms the basis of the project. In this study, a new optimization-based reallocation model was developed to perform block reallocation by evaluating the requests of landowners. The reallocation produced by the developed method also resets the block spaces automatically. The most powerful aspect of the method is speed: while the reallocation phase of land consolidation projects normally takes weeks or months, this method can be completed in minutes, contributing to projects in terms of both time and cost.
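
The simulated-annealing entry above assigns parcels to blocks while accounting for landowner requests. The sketch below is a generic annealing loop over that kind of assignment: a neighbor moves one parcel, and worse moves are accepted with a temperature-dependent probability. The cost function, capacities, and data here are hypothetical placeholders, not the paper's reallocation model.

```python
# Hedged sketch of a simulated-annealing loop for a parcel-to-block assignment problem.
# The cost terms (unmet owner requests, block overflow) are illustrative only.
import math
import random

random.seed(0)
n_parcels, n_blocks = 40, 5
preferred = [random.randrange(n_blocks) for _ in range(n_parcels)]   # each owner's requested block
capacity = n_parcels // n_blocks

def cost(assign):
    unmet = sum(1 for p, b in enumerate(assign) if b != preferred[p])     # unmet owner requests
    overflow = sum(max(0, assign.count(b) - capacity) for b in range(n_blocks))
    return unmet + 10 * overflow                                          # weight block overflow heavily

current = [random.randrange(n_blocks) for _ in range(n_parcels)]
best, best_cost, temp = list(current), cost(current), 10.0
while temp > 0.01:
    neighbor = list(current)
    neighbor[random.randrange(n_parcels)] = random.randrange(n_blocks)    # move one parcel
    delta = cost(neighbor) - cost(current)
    if delta <= 0 or random.random() < math.exp(-delta / temp):           # Metropolis acceptance
        current = neighbor
        if cost(current) < best_cost:
            best, best_cost = list(current), cost(current)
    temp *= 0.995                                                         # geometric cooling
print("best cost found:", best_cost)
```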

Review | Citation - WoS: 7 | Citation - Scopus: 4
A Literature Review on Deep Learning Algorithms for Analysis of X-Ray Images (Springer Heidelberg, 2023). Seyfi, Gökhan; Esme, Engin; Yılmaz, Merve; Kıran, Mustafa Servet.
Since the invention of the X-ray beam, it has been used for useful applications in various fields, such as medical diagnosis, fluoroscopy, radiation therapy, and computed tomography. It is also widely used in the security field to identify prohibited or illegal materials through X-ray imaging. However, these procedures generally depend on the human factor: an operator detects prohibited objects from pseudo-color images projected onto a computer screen. Because these processes are prone to error, much work has gone into automating them. Initial research on this topic consisted mainly of machine learning methods using hand-crafted features; the more recently developed deep learning methods have since proved more successful. For this reason, deep learning algorithms are a trend in recent studies, and the number of publications in areas such as X-ray imaging has increased. We therefore survey the studies published in the literature on deep learning-based X-ray imaging to attract new readers and provide new perspectives.

