Bilgisayar ve Bilişim Fakültesi Koleksiyonu
Permanent URI for this collection: https://hdl.handle.net/20.500.13091/10834
Browsing Bilgisayar ve Bilişim Fakültesi Koleksiyonu by Title
Now showing 1 - 20 of 230
Doctoral Thesis: 3d Lidar Nokta Bulutu İşlemede Sınır Gözetimli Voksel Tabanlı Bir Segmentasyon Yöntemi Geliştirilmesi (Konya Teknik Üniversitesi, 2020). Sağlam, Ali; Baykan, Nurdan.

Structures and objects in indoor and outdoor environments can be scanned with Lidar (light detection and ranging) systems and transferred to digital media as colored, three-dimensional (3D) point clouds. The points that make up this 3D point cloud data, which provides detailed information about the scanned structures and objects, arrive unordered, without being placed in an organized data structure. Advances in Lidar technology have improved the quality of point cloud data (lower positional error and higher resolution) but have also brought very large volumes of unorganized data. Segmentation is the process of grouping data that are similar in their properties and spatially close, which reduces the number of elements to be processed and allows more meaningful information to be extracted from very large data. Segmentation is therefore highly important for computer vision applications, including 3D point cloud processing, that must deal with large amounts of data, and making it deliver results of the desired quality and within the desired time on complex data has become a research topic in its own right. In this thesis, a new voxel-based segmentation method is developed so that segmentation, which strongly affects the success of 3D point cloud applications, can be performed more accurately and more quickly. The developed method performs segmentation using simple geometric properties, such as the inclination angles and centroids of the planar structures formed by local groups of surface points. Within the thesis, taking into account the characteristics of the datasets used in the literature, one indoor and two outdoor environments were scanned with a terrestrial Lidar system to obtain three different 3D point clouds. The raw point data were preprocessed (downsampling, cropping, and denoising) according to the intended use of each dataset, reference segments were prepared, and three datasets were created. In addition, two more segmentation datasets were obtained from the literature, so a total of five datasets were used in the segmentation comparison. After the datasets were obtained, development proceeded along two parallel tracks up to the quantitative comparison of the methods. The first track covered implementing octree-based voxelization of the data, a plane re-fitting preprocessing step for voxels that do not exhibit planarity, and the proposed segmentation method itself. The second covered selecting segmentation methods that have proven successful in the literature, obtaining or re-implementing them, and implementing the accuracy and F1-score calculations used for the quantitative comparison. Once all development and implementation stages were complete, the accuracy and F1-score results of each method's segmentation output on the datasets were collected, and the methods were compared in terms of both success and running time. With a mean accuracy of 0.81 and a mean F1 score of 0.69, the developed method outperformed, in both segmentation success and speed, the other methods in the literature that likewise segment using the geometric properties of the points. Color differences between points were also incorporated into the segmentation at various weighting ratios, which improved the results on the indoor dataset, where color quality is high. The thesis then examined the use of the developed method, with different parameter values, as an intermediate step in raw point cloud classification on S3DIS, a large indoor semantic segmentation dataset from the literature. For classification, the data were first segmented with the developed method, a feature vector was extracted from each segment, and these feature vectors were then classified. This segmentation-based classification was applied separately with two classifiers, Support Vector Machine (SVM) and Random Forest (RF), and the results were compared using the accuracy and F1 score of the point class labels. The points of the raw point cloud were classified with 0.76 accuracy and 0.48 F1 score by SVM, versus 0.83 accuracy and 0.70 F1 score by RF; for the data and feature sets used, RF therefore outperformed SVM.
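The method above relies on fitting local planes to voxels and comparing simple geometric properties such as inclination angle and centroid. The minimal NumPy sketch below is not the thesis's algorithm; it only illustrates that primitive with PCA-based plane fitting, and the toy point sets and the 10-degree angle threshold are illustrative assumptions.

```python
import numpy as np

def voxel_plane(points):
    """Fit a plane to one voxel's points via PCA.

    Returns the centroid and the unit normal (eigenvector of the
    smallest eigenvalue of the covariance matrix).
    """
    centroid = points.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov((points - centroid).T))
    normal = eigvecs[:, 0]                 # direction of least variance
    return centroid, normal

def similar_inclination(n1, n2, max_angle_deg=10.0):
    """Decide whether two voxel planes have a similar inclination angle."""
    cos_angle = abs(np.dot(n1, n2))        # ignore normal orientation
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return angle <= max_angle_deg

# Toy example: two nearly coplanar voxels of points.
rng = np.random.default_rng(0)
v1 = rng.uniform(size=(50, 2)) @ np.array([[1, 0, 0.01], [0, 1, 0.02]])
v2 = rng.uniform(size=(50, 2)) @ np.array([[1, 0, 0.03], [0, 1, 0.00]])
c1, n1 = voxel_plane(v1)
c2, n2 = voxel_plane(v2)
print(similar_inclination(n1, n2))         # expected: True
```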
Article: A 3d U-Net Based on Early Fusion Model: Improvement, Comparative Analysis With State-Of Models and Fine-Tuning (Konya Teknik Univ, 2024). Kayhan, Beyza; Uymaz, Sait Ali.

Multi-organ segmentation is the process of identifying and separating multiple organs in medical images. This segmentation allows structural abnormalities to be detected by examining the morphological structure of the organs. Carrying out the process quickly and precisely has become an important issue in today's conditions, and in recent years researchers have used various technologies for the automatic segmentation of multiple organs. In this study, improvements were made to increase the multi-organ segmentation performance of the 3D U-Net based fusion model combining the HSV and grayscale color spaces, and the model was compared with state-of-the-art models. Training and testing were performed on the MICCAI 2015 dataset published at Vanderbilt University, which contains 3D abdominal CT images in NIfTI format. The model's performance was evaluated using the Dice similarity coefficient. In the tests, the liver showed the highest Dice score. Considering the average Dice score of all organs and comparing it with other models, the fusion approach yields promising results.

Article: Academic Text Clustering Using Natural Language Processing (2022). Taşkıran, Fatma; Kaya, Ersin. Citations: WoS 1.

Accessing data is very easy nowadays. However, to use these data efficiently, it is necessary to extract the right information from them, and categorizing the data so that the needed information can be reached quickly provides great convenience. Moreover, research in the academic field generally relies on text-based data such as articles, papers, or theses. Natural language processing and machine learning methods are used to extract the right information from these text-based data. In this study, abstracts of academic papers are clustered. The abstract texts are preprocessed using natural language processing techniques, vectorized word representations are extracted from the preprocessed data with Word2Vec and BERT word embeddings, and these representations are clustered with four clustering algorithms.
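As a rough illustration of the embed-then-cluster pipeline described above, the sketch below averages Word2Vec word vectors per abstract and clusters them with k-means. The toy corpus, the preprocessing, and the choice of k-means are assumptions for the example; the paper's BERT embeddings and its four clustering algorithms are not reproduced here.

```python
# Minimal embed-then-cluster sketch (assumes gensim >= 4.0 and scikit-learn).
import re
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

abstracts = [
    "deep learning improves medical image segmentation",
    "convolutional networks segment organs in ct images",
    "metaheuristic optimization solves the knapsack problem",
    "swarm intelligence algorithms optimize benchmark functions",
]

def preprocess(text):
    # Lowercase and keep alphabetic tokens only; a stand-in for the
    # paper's NLP preprocessing steps.
    return re.findall(r"[a-z]+", text.lower())

tokens = [preprocess(a) for a in abstracts]
w2v = Word2Vec(sentences=tokens, vector_size=50, window=5, min_count=1, seed=1)

# Represent each abstract as the mean of its word vectors.
doc_vectors = np.array([np.mean([w2v.wv[t] for t in doc], axis=0) for doc in tokens])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(doc_vectors)
print(labels)
```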
Article: Ağaç-tohum Algoritmasının Cuda Destekli Grafik İşlem Birimi Üzerinde Paralel Uygulaması (2018). Çınar, Ahmet Cevahir; Kıran, Mustafa Servet. Citations: WoS 6, Scopus 9.

As the amount of collected data has grown in recent years, so has the need to develop efficient computation methods. Because real-world problems are usually hard, interest is also growing in swarm intelligence and evolutionary computation methods, which do not guarantee the optimal solution but can deliver a near-optimal solution in reasonable time. In addition, where the data or the operations of a serial method can be parallelized, a need to develop parallel algorithms has emerged. This study takes up the population-based tree-seed algorithm, introduced to the literature in recent years, and develops a parallel version of it on the CUDA platform. The performance of the parallel version was analyzed on benchmark functions and compared with that of the serial version. The problem dimensionality of the benchmark functions was set to 10, and performance was analyzed under different population and block counts. The experiments showed that the parallel version achieves a speedup of up to 184.65 times over the serial version for some problems.

Article: Alexnet Architecture Variations With Transfer Learning for Classification of Wound Images (Elsevier B.V., 2023). Eldem, H.; Ülker, E.; Işıklı, O.Y. Citations: WoS 29, Scopus 54.

In the medical world, wound care and follow-up is an issue that is gaining importance day by day, and accurate, early recognition of wounds can reduce treatment costs. In the field of computer vision, deep learning architectures have received great attention recently. The achievements of existing pre-trained architectures in classifying many real-world image sets are well established; however, to increase the success of these architectures in a particular area, improvements and enhancements can be made to the architecture. In this paper, pressure and diabetic wound images were classified with high accuracy. Six new AlexNet architecture variations (3Conv_Softmax, 3Conv_SVM, 4Conv_Softmax, 4Conv_SVM, 6Conv_Softmax, 6Conv_SVM) were created with different numbers of Convolution, Pooling, and Rectified Linear Activation (ReLU) layers. The classification performance of the proposed models is investigated using a Softmax classifier and an SVM classifier separately. A new, original Wound Image Database was created for the performance measurements. According to the experimental results obtained on this database, the model with 6 convolution layers (6Conv_SVM) was the most successful of the proposed methods, with 98.85% accuracy, 98.86% sensitivity, and 99.42% specificity. The 6Conv_SVM model was also tested on the diabetic and pressure wound images in the public Medetec dataset, where it obtained 95.33% accuracy, 95.33% sensitivity, and 97.66% specificity. The proposed method provides high performance compared to the pre-trained AlexNet architecture and other state-of-the-art models in the literature. The results showed that the proposed 6Conv_SVM architecture can be used by the relevant departments in the medical world, with good performance in tasks such as examining and classifying wound images and following up the wound process. © 2023 Karabuk University
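The paper above builds on the general pattern of feeding convolutional features to an SVM. The sketch below shows that pattern with the stock pre-trained AlexNet from torchvision (assumed version 0.13 or later) as a frozen feature extractor; the authors' 3/4/6-convolution variants and their wound image database are not reproduced, and the dummy image and the commented-out SVC usage are placeholders.

```python
import numpy as np
import torch
from PIL import Image
from torchvision.models import alexnet, AlexNet_Weights
from sklearn.svm import SVC

weights = AlexNet_Weights.DEFAULT          # stock ImageNet weights
model = alexnet(weights=weights).eval()
preprocess = weights.transforms()

def extract_features(images):
    """images: list of PIL images; returns an (N, 9216) feature matrix."""
    batch = torch.stack([preprocess(img) for img in images])
    with torch.no_grad():
        x = model.avgpool(model.features(batch))
    return torch.flatten(x, 1).numpy()

# Dummy image stand-in, just to show the shapes involved.
img = Image.fromarray(np.zeros((224, 224, 3), dtype=np.uint8))
print(extract_features([img]).shape)       # (1, 9216)

# Hypothetical usage with real, already loaded images and labels:
# clf = SVC(kernel="linear").fit(extract_features(train_images), train_labels)
# print(clf.score(extract_features(test_images), test_labels))
```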
Article: Analysis of Machine Learning Classification Approaches for Predicting Students' Programming Aptitude (MDPI, 2023). Çetinkaya, Ali; Baykan, Ömer Kaan; Kırgız, Havva. Citations: WoS 3, Scopus 6.

With the increasing prevalence and significance of computer programming, a crucial challenge ahead of teachers and parents is to identify students adept at computer programming and direct them to relevant programming fields. As most studies on students' coding abilities focus on elementary, high school, and university students in developed countries, we aimed to determine the coding abilities of middle school students in Turkey. We first administered a three-part spatial test to 600 secondary school students, of whom 400 completed the survey and the 20-level Classic Maze course on Code.org. We then employed four machine learning (ML) algorithms, namely support vector machine (SVM), decision tree, k-nearest neighbor, and quadratic discriminant analysis, to classify the coding abilities of these students using the spatial test and Code.org platform data. SVM yielded the most accurate results and can thus be considered a suitable ML technique to determine the coding abilities of participants. This article promotes quality education and coding skills for workforce development and sustainable industrialization, aligned with the United Nations Sustainable Development Goals.

Article: Analyzing the Effect of Data Preprocessing Techniques Using Machine Learning Algorithms on the Diagnosis of Covid-19 (Wiley, 2022). Erol, Gizemnur; Uzbaş, Betül; Yücelbaş, Cüneyt; Yücelbaş, Sule. Citations: WoS 7, Scopus 12.

Real-time polymerase chain reaction (RT-PCR), known as the swab test, is a diagnostic test that can diagnose COVID-19 from respiratory samples in the laboratory. Due to the rapid spread of the coronavirus around the world, the RT-PCR test has become insufficient for obtaining fast results. For this reason, the need for diagnostic methods to fill this gap has arisen, and machine learning studies have started in this area. On the other hand, working with medical data is challenging because the data are inconsistent, incomplete, difficult to scale, and very large. Additionally, poor clinical decisions, irrelevant parameters, and limited medical data adversely affect the accuracy of such studies. Therefore, considering that datasets containing COVID-19 blood parameters are fewer in number than other medical datasets, the aim is to improve these existing datasets. In this direction, to obtain more consistent results in COVID-19 machine learning studies, the effect of data preprocessing techniques on the classification of COVID-19 data was investigated. First, categorical feature encoding and feature scaling were applied to a dataset with 15 features containing blood data of 279 patients, including gender and age information. Then, the missing values in the dataset were imputed using both the k-nearest neighbor (KNN) algorithm and multiple imputation by chained equations (MICE). Data balancing was performed with the synthetic minority oversampling technique (SMOTE). The effect of these data preprocessing techniques on the ensemble learning algorithms bagging, AdaBoost, and random forest, and on the popular classifiers KNN, support vector machine, logistic regression, artificial neural network, and decision tree, was analyzed. The highest accuracies obtained with the bagging classifier when SMOTE was applied were 83.42% and 83.74% with the KNN and MICE imputations, respectively, while the highest accuracy reached with the same classifier without SMOTE was 83.91% for the KNN imputation. In conclusion, several data preprocessing techniques are examined comparatively, their effect on success is presented, and the importance of the right combination of data preprocessing for achieving success is demonstrated experimentally.
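Below is a minimal sketch of the preprocessing chain described above (imputation of missing values, SMOTE oversampling, then a bagging classifier), assuming scikit-learn and imbalanced-learn are available. The synthetic data and parameter values stand in for the 279-patient blood dataset; scikit-learn's IterativeImputer could be swapped in for a MICE-style imputation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.impute import KNNImputer
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE

# Synthetic, imbalanced stand-in for the blood-parameter dataset.
X, y = make_classification(n_samples=300, n_features=15,
                           weights=[0.8, 0.2], random_state=0)
X[np.random.default_rng(0).random(X.shape) < 0.05] = np.nan   # inject missing values

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Impute missing values with KNN (a MICE-style IterativeImputer could be used instead).
imputer = KNNImputer(n_neighbors=5)
X_train = imputer.fit_transform(X_train)
X_test = imputer.transform(X_test)

# Balance the training classes with SMOTE, then train a bagging ensemble.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_train, y_train)
clf = BaggingClassifier(n_estimators=50, random_state=0).fit(X_res, y_res)
print("test accuracy:", clf.score(X_test, y_test))
```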
Article: Apneic Events Detection Using Different Features of Airflow Signals (MEHRAN UNIV ENGINEERING & TECHNOLOGY, 2019). Göğüş, Fatma Zehra; Tezel, Gülay. Citations: WoS 2.

Apneic-event based sleep disorders are very common and greatly affect people's daily lives, yet diagnosing these disorders by detecting apneic events is very difficult. Studies show that analyses of airflow signals are effective in diagnosing apneic-event based sleep disorders; according to these studies, diagnosis can be performed by detecting the apneic episodes of the airflow signals. This work deals with the detection of apneic episodes in airflow signals from the Apnea-ECG (electrocardiogram) and MIT-BIH (Massachusetts Institute of Technology, Boston's Beth Israel Hospital) databases. To accomplish this task, three representative feature sets were created: a classic feature set, an amplitude feature set, and a descriptive model feature set. The performance of these feature sets was evaluated individually and in combination with the random forest classifier to detect apneic episodes. Moreover, effective features were selected with the OneR Attribute Eval feature selection algorithm to obtain higher performance. Of the 54 features, the 28 selected for the Apnea-ECG database and the 31 selected for the MIT-BIH database were applied to the classifier to compare achievements. As a result, the highest classification accuracies were obtained with the selected effective features: 96.21% for the Apnea-ECG database and 92.23% for the MIT-BIH database. The kappa values are also quite good (91.80% and 81.96%) and support the classification accuracies for both databases. The results of the study are quite promising for determining apneic events on a minute-by-minute basis.

Article: Application of Abm To Spectral Features for Emotion Recognition (MEHRAN UNIV ENGINEERING & TECHNOLOGY, 2018). Demircan, Semiye; Örnek, Humar Kahramanlı. Citations: WoS 3.

Emotion recognition (ER) from speech signals has lately been among the attractive subjects. As is known, feature extraction and feature selection are the most important processing steps in ER from speech signals. The aim of the present study is to select the most relevant spectral feature subset. The proposed method is based on feature selection with an optimization algorithm among the features obtained from speech signals. First, MFCCs (Mel-frequency cepstrum coefficients) were extracted from EmoDB. Several statistical values, namely the maximum, minimum, mean, standard deviation, skewness, kurtosis, and median, were obtained from the MFCCs. The next step was feature selection, performed in two stages: in the first stage, ABM (Agent-Based Modelling), which has hardly been applied in this area, was applied to the actual features; in the second stage, the Opt-aiNET optimization algorithm was applied to choose the agent group giving the best classification success. The last step is classification: an ANN (artificial neural network) with 10-fold cross-validation was used for classification and evaluation. The application was restricted to a narrow scope of three emotions. As a result, the classification accuracy was seen to rise after applying the proposed method, which showed promising performance with spectral features.
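As a small illustration of the spectral feature extraction step above, the sketch below computes MFCCs with librosa and summarizes each coefficient with the listed statistics. The file name and parameter values are placeholders, and the ABM/Opt-aiNET selection and ANN classification stages are not shown.

```python
import numpy as np
import librosa
from scipy.stats import skew, kurtosis

# Hypothetical speech file; replace with a real EmoDB utterance path.
y, sr = librosa.load("utterance.wav", sr=None)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)     # shape: (13, frames)

# One statistic per coefficient, mirroring the statistics named in the abstract.
features = np.concatenate([
    mfcc.max(axis=1), mfcc.min(axis=1), mfcc.mean(axis=1),
    mfcc.std(axis=1), skew(mfcc, axis=1), kurtosis(mfcc, axis=1),
    np.median(mfcc, axis=1),
])
print(features.shape)   # 7 statistics * 13 coefficients = (91,)
```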
Conference Object: An Application of Tree Seed Algorithm for Optimization of 50 and 100 Dimensional Numerical Functions (Institute of Electrical and Electronics Engineers Inc., 2021). Güngör, İmral; Emiroğlu, Bülent Gürsel; Uymaz, S.A.; Kıran, Mustafa Servet.

The Tree-Seed Algorithm (TSA) is an optimization algorithm created by observing how, in natural life, seeds scattered around trees grow into new trees. In this study, TSA is applied to optimize high-dimensional functions. Previous studies have demonstrated the performance of the tree seed algorithm on low-dimensional functions; thus, in addition to the 30-dimensional functions used before, it is applied here to 50- and 100-dimensional numerical functions. The improvement to the tree seed algorithm is based on using multiple solution update mechanisms instead of a single one. In the experiments, the CEC2015 benchmark functions are used, and the developed tree seed algorithm is compared with the base TSA, artificial bee colony, particle swarm optimization, and some variants of the differential evolution algorithm. Experimental results are reported as the mean, maximum, and minimum solutions and the standard deviation over 30 independent runs. The studies show that the developed algorithm gives successful results. © 2021 IEEE.

Article: Approaches To Automated Land Subdivision Using Binary Search Algorithm in Zoning Applications (Ice Publishing, 2022). Koç, İsmail; Çay, Tayfun; Babaoğlu, İsmail. Citations: WoS 3, Scopus 4.

The planned development of urban areas depends on zoning applications. Although zoning practices are performed using different techniques, the parcelling operations that shape the future view of the city are the same. Preparing the parcelling plans is an important step with a direct impact on ownership structure and reallocation. Parcelling operations are traditionally handled manually by a technician, which is a serious problem in terms of time and cost. In this study, taking the zoning legislation into account, a preliminary land subdivision plan was produced automatically for a region of Konya, one of the major cities of Turkey. The parcelling process was performed in three different ways: the first technique is parcelling with edge values, the second is parcelling with area values, and the third is parcelling using both edge and area values together. Throughout the parcelling process, the parcel area is calculated using the Gauss method. Moreover, to determine the boundaries effectively and to calculate the parcel area during parcelling, the binary search technique is used in all the methods. The experimental results show that the parcelling operations were carried out very quickly and successfully.
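The abstract above names two computational primitives: the Gauss (shoelace) formula for parcel area and a binary search used while fixing parcel boundaries. The sketch below illustrates them on a made-up convex block by searching for a vertical cut line that yields a target parcel area; real parcelling involves legal and geometric constraints that are not modeled here.

```python
def shoelace_area(pts):
    # Gauss (shoelace) formula for the area of a simple polygon of (x, y) tuples.
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2.0

def clip_left_of(pts, cut_x):
    # Clip a convex polygon against the half-plane x <= cut_x (Sutherland-Hodgman).
    out, n = [], len(pts)
    for i in range(n):
        (x1, y1), (x2, y2) = pts[i], pts[(i + 1) % n]
        in1, in2 = x1 <= cut_x, x2 <= cut_x
        if in1:
            out.append((x1, y1))
        if in1 != in2:                      # the edge crosses the cut line
            t = (cut_x - x1) / (x2 - x1)
            out.append((cut_x, y1 + t * (y2 - y1)))
    return out

def find_cut(pts, target_area, tol=1e-6):
    lo = min(p[0] for p in pts)
    hi = max(p[0] for p in pts)
    while hi - lo > tol:                    # area grows monotonically with cut_x
        mid = (lo + hi) / 2.0
        if shoelace_area(clip_left_of(pts, mid)) < target_area:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

block = [(0, 0), (100, 0), (100, 40), (0, 40)]      # a 100 x 40 block, area 4000
print(find_cut(block, target_area=1000))             # expected: ~25.0
```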
Master Thesis: Arazi Toplulaştırma Çalışmasında Çok Amaçlı Optimizasyon Tabanlı Dağıtım (Konya Teknik Üniversitesi, 2020). Ortaçay, Zeynep; Uğuz, Harun.

Many of the problems we encounter in real life require optimization. Some of these problems have a single objective and some have several. Single-objective problems are solved with methods called single-objective optimization algorithms, but problems with more than one objective are difficult to solve with these methods; for such problems, methods called multi-objective optimization algorithms are used. Land consolidation (LC) aims to replace small, scattered parcels with large, contiguous ones. Because the reallocation (distribution) step of LC involves more than one criterion, it is defined as a multi-objective optimization problem. PESA-II, NSGA-II, and a proposed hybrid NSGA-II algorithm were used to solve this problem, and the results were compared with those in the literature. According to the results, the multi-objective optimization algorithms achieved successful values.

Article: Automatic Localization of Cephalometric Landmarks Using Convolutional Neural Networks (2021). Nourdine Mogham Njikam Mohamed; Uzbaş, Betül.

Experts have brought forward interesting and effective methods to address critical medical analysis problems, and one of these fields of research is cephalometric analysis. In the analysis of the teeth and the skeletal relationships of the human skull, cephalometric analysis plays an important role, as it facilitates the interpretation of the bone, tooth, and soft tissue structures of the patient. It is used during oral, craniofacial, and maxillofacial surgery and during treatments in orthodontic and orthopedic departments. Automatic localization of cephalometric landmarks reduces possible human errors and saves time. To perform automatic localization of cephalometric landmarks, a deep learning model inspired by the U-Net model has been proposed. The 19 cephalometric landmarks that are generally determined manually by experts are obtained automatically with this model. The cephalometric X-ray image dataset created for the IEEE 2015 International Symposium on Biomedical Imaging (ISBI 2015) is used, and data augmentation is applied to it for this experiment. A success detection rate (SDR) of 74% was achieved within 2 mm, 81.4% within 2.5 mm, 86.3% within 3 mm, and 92.2% within 4 mm.
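A short sketch of the success detection rate (SDR) metric reported above: the fraction of predicted landmarks whose distance to the ground truth lies within a given radius. The coordinates below are made-up 2D points in millimeters.

```python
import numpy as np

def sdr(pred, truth, radius_mm):
    """Fraction of landmarks with Euclidean error <= radius_mm."""
    errors = np.linalg.norm(pred - truth, axis=1)
    return float(np.mean(errors <= radius_mm))

pred  = np.array([[10.0, 12.0], [35.5, 40.2], [70.1, 22.0]])
truth = np.array([[10.5, 12.4], [38.0, 41.0], [70.0, 22.1]])

for r in (2.0, 2.5, 3.0, 4.0):
    print(f"SDR@{r}mm = {sdr(pred, truth, r):.2f}")
```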
Article: Automatic Sleep Stage Classification for the Obstructive Sleep Apnea (Trans Tech Publications Ltd, 2023). Özsen, Seral; Koca, Yasin; Tezel, Gülay; Solak, Fatma Zehra; Vatansev, Hulya; Kucukturk, Serkan. Citations: Scopus 1.

Automatic sleep scoring systems have received much more attention in the last decades. Although a wide variety of methods have been used in this subject area, the accuracies are still below acceptable limits for applying these methods to real-life data. One can find many high-accuracy studies in the literature that use a standard database, but when it comes to real data, reaching such high performance is not straightforward. In this study, five distinct datasets were prepared from 124 persons, 93 unhealthy and 31 healthy. These datasets consist of time-domain, nonlinear, Welch, discrete wavelet transform, and Hilbert-Huang transform features. By applying k-NN, decision tree, ANN, SVM, and bagged tree classifiers to these feature sets in various ways, with feature selection, the highest classification accuracy was sought. The maximum classification accuracy, 95.06%, was obtained with the bagged tree classifier using 14 of the 136 features. This accuracy is relatively high compared with the literature for a real-data application.

Article: Aydınlatma Özniteliği Kullanılarak Evrişimsel Sinir Ağı Modelleri İle Meyve Sınıflandırma (2020). Büyükarıkan, Birkan; Ülker, Erkan.

Illumination refers to the natural or artificial sources that make objects appear as they are. In image processing applications in particular, illumination is necessary for capturing the object information in an image completely and accurately. However, changes in the type, brightness, and position of the light source alter the object's appearance, color, shadow, or apparent size and cause the object to be perceived differently. For this reason, using a powerful artificial intelligence technique to distinguish such images makes separating the classes easier. Convolutional Neural Networks (CNNs), an artificial intelligence method, can extract features automatically and, because learning takes place while the network is trained, easily identify the salient features. The ALOI-COL dataset, which consists of 1000 classes captured under 12 different color temperatures, was used in this study. The fruit images in ALOI-COL, comprising 29 classes, were classified using the CNN architectures AlexNet, VGG16, and VGG19. The images were augmented with image processing techniques so that 51 images were obtained per class. The study examined two setups, with 80-20 and 60-40 train-test splits. After 50 epochs, the test data were classified with 100% accuracy by the AlexNet (80-20) and VGG16 (60-40) architectures and with 86.49% accuracy by the VGG19 (80-20) architecture.

Article: B-Spline Curve Approximation by Utilizing Big Bang-Big Crunch Method (TECH SCIENCE PRESS, 2020). İnik, Özkan; Ülker, Erkan; Koç, İsmail. Citations: WoS 2.

The location of knot points and the estimation of the number of knots are undoubtedly among the most difficult problems in B-spline curve approximation. In the literature, different researchers have used more than one optimization algorithm to solve this problem. In this paper, the Big Bang-Big Crunch method (BB-BC), one of the evolutionary optimization algorithms, is introduced and then used to approximate the B-spline curve knots. The technique of reverse engineering was implemented for the curve knot approximation. In the curve approximation performed with the BB-BC method, the knot locations and the number of knots were selected randomly. The experiments were carried out on seven different test functions for curve approximation. The performance of the BB-BC algorithm was examined on these functions, and its results were compared with earlier studies by other researchers. In comparison with the other studies, it was observed that although the number of knots in the BB-BC algorithm was high, the algorithm approximated the B-spline curves with a small error.
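As a rough illustration of the quantity such a knot-placement search optimizes, the sketch below fits a least-squares cubic B-spline for one candidate set of interior knots with SciPy and reports the approximation error. The test curve and knot positions are arbitrary; the paper searches knot locations with Big Bang-Big Crunch rather than evaluating a single fixed placement.

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

# Noisy test curve standing in for the reverse-engineered data points.
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + 0.05 * np.random.default_rng(0).normal(size=x.size)

def spline_error(interior_knots, k=3):
    # Full knot vector: boundary knots repeated k+1 times plus the interior knots.
    t = np.r_[(x[0],) * (k + 1), np.sort(interior_knots), (x[-1],) * (k + 1)]
    spl = make_lsq_spline(x, y, t, k=k)
    return np.sum((spl(x) - y) ** 2)        # squared approximation error

print(spline_error(np.array([0.25, 0.5, 0.75])))   # error for one candidate placement
```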
Master Thesis: Bilgisayarlı Tomografi Görüntülerinde Derin Öğrenme Tabanlı Çoklu Organ Segmentasyonu (Konya Teknik Üniversitesi, 2022). Kayhan, Beyza; Uymaz, Sait Ali.

With advancing technology, the most important developments in healthcare are being realized through medical imaging techniques. Detailed imaging of the internal structure of the body provides information about the state of the organs, and the resulting images are evaluated and interpreted by radiologists. In medical image analysis, recognizing organs and tissues is the first stage of disease diagnosis and treatment planning; however, recognizing organs in medical images is a difficult and time-consuming task. In this study, a computer-aided automatic system that segments multiple organs in abdominal computed tomography images was implemented to assist radiologists. Since deep learning models achieve high success in segmentation, as in other computer vision areas, a fully convolutional neural network was used for the automatic multi-organ segmentation. The dataset provided by Vanderbilt University for its multi-organ segmentation challenge (MICCAI 2015 Multi-Atlas Abdomen Labeling Challenge) was used; the files in the dataset are 3D abdominal CT volumes in NIfTI format. Images in the HSV color space were derived from these volumes, and a fusion approach was proposed that uses a two-stage 3D U-Net model combining different color spaces. The Dice similarity coefficient was used to evaluate the proposed model; in testing, the organ with the highest Dice score was the liver and the organ with the lowest score was the left adrenal gland. Considering the mean score over all organs, the automatic segmentation system built to assist radiologists was found to be successful and promising.
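A minimal sketch of the Dice similarity coefficient used above to score the segmentations, Dice = 2|A ∩ B| / (|A| + |B|) for binary masks. The tiny masks are made up; in the thesis the score is computed per organ on 3D CT volumes.

```python
import numpy as np

def dice(pred, truth, eps=1e-8):
    """Dice similarity coefficient between two binary masks of the same shape."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

pred  = np.array([[0, 1, 1], [0, 1, 0]])
truth = np.array([[0, 1, 0], [0, 1, 1]])
print(dice(pred, truth))   # 2*2 / (3+3) = 0.67
```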
Conference Object: Binary African Vultures Optimization Algorithm for Z-Shaped Transfer Functions (2023). Baş, Emine.

Metaheuristic algorithms are of great importance in solving binary optimization problems. The African Vultures Optimization algorithm (AVO) is a swarm intelligence-based heuristic algorithm created by imitating the way African vultures live. In this study, the recently proposed AVO is restructured to solve binary optimization problems, yielding the Binary AVO (BAVO). Four different z-shaped transfer functions are chosen to convert the continuous search space into a binary search space, and BAVO variants are defined according to the transfer function used (BAVO1, BAVO2, BAVO3, and BAVO4). The success of these variants was tested on thirteen classic test functions containing unimodal and multimodal functions, at three different dimensions (5, 10, and 20). Each test function was run ten times independently, and the average, standard deviation, best, and worst values were obtained. According to these results, the most successful variant was identified; BAVO4 achieved better results at higher dimensions. The success of BAVO with z-shaped transfer functions was demonstrated for the first time in this study.

Article: Binary Aquila Optimizer for 0-1 Knapsack Problems (Pergamon-Elsevier Science Ltd, 2023). Baş, Emine. Citations: WoS 30, Scopus 36.

The optimization process entails determining the best values for various system characteristics in order to complete the system design at the lowest possible cost. Real-world applications and problems in artificial intelligence and machine learning are generally constrained, unconstrained, or discrete, and optimization approaches have a high success rate in tackling such situations. As a result, several sophisticated heuristic algorithms based on swarm intelligence have been presented in recent years; various researchers have worked on such algorithms and effectively addressed many difficulties. The Aquila Optimizer (AO) is one such algorithm: a recently proposed population-based optimization strategy created by imitating the behavior of the Aquila in nature while catching its prey. In its original form, the AO algorithm was developed to solve continuous optimization problems. In this study, the AO structure is updated to solve binary optimization problems. Problems encountered in the real world do not always have continuous values; some have discrete values, so algorithms that solve continuous problems need to be restructured to solve discrete optimization problems as well. Binary optimization problems constitute a subgroup of discrete optimization problems, and a new algorithm for binary optimization problems (BAO) is proposed here. The most successful variant, BAO-T, was created by testing BAO with eight different transfer functions; transfer functions play an active role in converting the continuous search space into a binary search space. BAO was also extended with crossover and mutation methods applied to the candidate solutions (BAO-CM). The success of the proposed BAO-T and BAO-CM algorithms was tested on the knapsack problem, which is widely chosen for binary optimization in the literature. The knapsack instances are divided into three benchmark groups, and a total of sixty-three low-, medium-, and large-scale knapsack problems were used as test datasets. The performances of BAO-T and BAO-CM were examined in detail and the results are shown clearly with graphics; in addition, their results were compared with new heuristic algorithms proposed in the literature in recent years, and their success was proven. According to the results, BAO-CM performed better than BAO-T and can be suggested as an alternative algorithm for solving binary optimization problems.
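Both binary algorithms above rely on a transfer function that maps a continuous position update to a bit. The sketch below shows the common S-shaped (sigmoid) and V-shaped (|tanh|) forms only as an illustration of the mechanism; the specific z-shaped functions and the eight transfer functions evaluated in these papers are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def s_shaped(x):
    return 1.0 / (1.0 + np.exp(-x))         # S-shaped: probability that the bit is 1

def v_shaped(x):
    return np.abs(np.tanh(x))                # V-shaped: probability of flipping the bit

def binarize_s(x):
    return (rng.random(x.shape) < s_shaped(x)).astype(int)

def binarize_v(x, current_bits):
    flip = rng.random(x.shape) < v_shaped(x)
    return np.where(flip, 1 - current_bits, current_bits)

x = rng.normal(size=8)                        # a continuous candidate solution
print(binarize_s(x))
print(binarize_v(x, current_bits=np.zeros(8, dtype=int)))
```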
Article: Binary Artificial Algae Algorithm for Feature Selection (Elsevier, 2022). Türkoğlu, Bahaeddin; Uymaz, Sait Ali; Kaya, Ersin. Citations: WoS 44, Scopus 47.

In this study, binary versions of the Artificial Algae Algorithm (AAA) are presented and employed to determine the ideal attribute subset for classification. AAA is a recently proposed algorithm inspired by the living behavior of microalgae that had not previously been applied consistently to feature selection. AAA can effectively search the feature space for the attribute combination that minimizes a designed objective function. The proposed binary versions of AAA are employed to determine the attribute combination that maximizes classification success while minimizing the number of attributes. The original AAA is utilized in these versions, while its continuous values are restricted to a threshold using an appropriate threshold function after flattening them. To demonstrate the performance of the presented binary artificial algae algorithm, an experimental study was conducted against seven recent high-performance optimization algorithms. Several evaluation metrics are used to evaluate and analyze the performance of these algorithms over twenty-five datasets of different difficulty levels from the UCI Machine Learning Repository. The experimental results and statistical tests verify the performance of the presented algorithms in increasing classification accuracy compared to other state-of-the-art binary algorithms, confirming the capability of the AAA algorithm in exploring the attribute space and deciding the most valuable features for classification problems. (C) 2022 Elsevier B.V. All rights reserved.
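A small sketch of the kind of wrapper objective such a binary feature-selection algorithm minimizes: a weighted sum of the classification error and the fraction of selected attributes. The alpha weight, the k-NN evaluator, and the dataset are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)

def fitness(mask, alpha=0.99):
    """Lower is better: weighted classification error plus feature ratio."""
    mask = np.asarray(mask, dtype=bool)
    if not mask.any():                       # empty subsets are invalid
        return 1.0
    acc = cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=5).mean()
    error = 1.0 - acc
    return alpha * error + (1.0 - alpha) * (mask.sum() / mask.size)

rng = np.random.default_rng(0)
candidate = rng.integers(0, 2, size=X.shape[1])   # one binary solution vector
print(fitness(candidate))
```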

