Authors: Korkmaz, Sedat; Sahman, Mehmet Akif; Çınar, Ahmet Cevahir; Kaya, Ersin
Date accessioned: 2022-01-30
Date available: 2022-01-30
Date issued: 2021
ISSN: 1568-4946
eISSN: 1872-9681
DOI: https://doi.org/10.1016/j.asoc.2021.107787
Handle: https://hdl.handle.net/20.500.13091/1701
Abstract: The class imbalance problem is a challenging problem in the data mining area. To overcome the low classification performance on imbalanced datasets, sampling strategies are used to balance them. Oversampling is a technique that increases the number of minority class samples in various proportions. In this work, 16 different Differential Evolution (DE) strategies are used for oversampling imbalanced datasets to improve classification. The main aim of this work is to determine the best strategy in terms of the Area Under the receiver operating characteristic (ROC) Curve (AUC) and Geometric Mean (G-Mean) metrics. 44 imbalanced datasets are used in the experiments. Support Vector Machines (SVM), k-Nearest Neighbor (kNN), and Decision Tree (DT) are used as classifiers. The best results are produced by the 6th Debohid Strategy (DSt6), the 1st Debohid Strategy (DSt1), and the 3rd Debohid Strategy (DSt3) using the kNN, DT, and SVM classifiers, respectively. The obtained results outperform 9 state-of-the-art oversampling methods in terms of the AUC and G-Mean metrics. (C) 2021 Elsevier B.V. All rights reserved.
Language: en
Rights: info:eu-repo/semantics/closedAccess
Keywords: Imbalanced Datasets; Differential Evolution; Oversampling; Imbalanced Learning; Class Imbalance; Differential Evolution Strategies; Preprocessing Method; Global Optimization; Software Tool; Smote; Classification; Algorithms; Keel
Title: Boosting the Oversampling Methods Based on Differential Evolution Strategies for Imbalanced Learning
Type: Article
DOI: 10.1016/j.asoc.2021.107787
Scopus ID: 2-s2.0-85113352660
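The two ideas the abstract relies on can be sketched briefly: a DE-style mutation that combines minority-class samples to synthesize a new one, and the G-Mean metric used (alongside AUC) for evaluation. This is a minimal illustrative sketch of the generic DE/rand/1 mutation and the standard G-Mean formula, not the paper's actual DEBOHID strategies or implementation; the function names are hypothetical.

```python
import math
import random

def de_rand1_sample(minority, F=0.8):
    # Illustrative DE/rand/1-style mutation: pick three distinct minority
    # samples r1, r2, r3 and combine them as r1 + F * (r2 - r3). The paper's
    # 16 DE strategies vary this mutation scheme; this shows only the idea.
    r1, r2, r3 = random.sample(minority, 3)
    return [a + F * (b - c) for a, b, c in zip(r1, r2, r3)]

def g_mean(tp, fn, tn, fp):
    # Geometric mean of sensitivity (minority recall) and specificity
    # (majority recall): sqrt(sensitivity * specificity).
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return math.sqrt(sensitivity * specificity)
```

Because G-Mean multiplies the per-class recalls, a classifier that ignores the minority class scores 0 regardless of its accuracy on the majority class, which is why it is a common metric for imbalanced learning.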