Browsing by Author "Yusefi, Abdullah"
Now showing 1 - 11 of 11
Article | Citation - WoS: 3 | Citation - Scopus: 5
COVID-19 Isolation Control Proposal via UAV and UGV for Crowded Indoor Environments: Assistive Robots in the Shopping Malls (Frontiers Media SA, 2022)
Aslan, Muhammet Fatih; Hasikin, Khairunnisa; Yusefi, Abdullah; Durdu, Akif; Sabancı, Kadir; Azizan, Muhammad Mokhzaini
Artificial intelligence researchers have conducted various studies to reduce the spread of COVID-19. Unlike most of these, this paper does not address early infection diagnosis but instead aims to prevent the transmission of COVID-19 in social environments. One such line of work concerns social distancing, a measure proven to limit person-to-person transmission. In this study, a shopping mall is simulated in Gazebo under the Robot Operating System (ROS), and customers are monitored by a TurtleBot and an Unmanned Aerial Vehicle (UAV, DJI Tello). By analyzing the frames captured by the TurtleBot, a particular person is identified and followed through the shopping mall. The TurtleBot is a wheeled robot that follows people without contact and serves as a shopping cart, so a customer never has to touch a cart that someone else has handled, which also makes shopping easier. The UAV detects people from above and measures the distances between them, so a warning system can flag places where social distancing is neglected. A Histogram of Oriented Gradients (HOG) descriptor with a Support Vector Machine (SVM) is applied on the TurtleBot to detect humans, and a Kalman filter is used for tracking, while SegNet is run on the UAV to segment people semantically and measure distances. The paper proposes a new robotic approach to preventing infection and shows that the system is feasible.
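As an illustration of the detection-and-tracking pipeline summarized above, the sketch below combines OpenCV's default HOG-SVM people detector with a constant-velocity Kalman filter. The video source ("mall_camera.mp4"), noise covariances, and single-target tracking logic are assumptions made for the example, not the configuration reported in the paper.

```python
# Illustrative sketch: HOG-SVM person detection plus a constant-velocity
# Kalman filter, in the spirit of the pipeline described above. The video
# source and parameters are assumptions, not the authors' configuration.
import cv2
import numpy as np

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

# 4D state (x, y, vx, vy), 2D measurement (x, y), constant-velocity model.
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], dtype=np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], dtype=np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

cap = cv2.VideoCapture("mall_camera.mp4")   # hypothetical robot camera stream
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    prediction = kf.predict()               # predicted (x, y, vx, vy)
    if len(boxes) > 0:
        x, y, w, h = boxes[0]               # track the first detection only
        center = np.array([[x + w / 2], [y + h / 2]], dtype=np.float32)
        kf.correct(center)
    cv2.circle(frame, (int(prediction[0, 0]), int(prediction[1, 0])),
               5, (0, 0, 255), -1)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) == 27:                # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```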
Article
Data Mining in a Smart Traffic Light Control System Based on Image Processing and KNN Classification Algorithm (2020)
Yusefi, Abdullah; Altun, Adem Alpaslan; Sungur, Cemil
In today's world, communication, transportation, and the movement of people and goods are essential, and completing them in the shortest possible time is vital. Over the past decade, the significant increase in the number of passengers and vehicles, combined with the limited capacity of transportation arteries, has made it necessary to apply new technologies to intelligent traffic control and management. Intelligent transportation systems (ITS) use advanced information processing, telecommunications, and electronic control technologies to meet transportation needs; their purpose is to streamline traffic on important and sensitive routes while providing safety, information, timely traffic control, and optimal use of the capacity of transport arteries. This paper presents a new method for extracting the traffic parameters of a signalized highway using image processing and the KNN classification algorithm of data mining. These parameters are the duration of the red light, the volume of passing vehicles, and the volume of pedestrians crossing the highway during the green phase. A Data Mining Traffic Light Control System is then introduced which receives these three parameters and optimizes the traffic signal timing. Finally, a common two-phase highway is simulated in MATLAB, and the results of the image processing algorithms and the Data Mining Traffic Light Control System designed for it are evaluated.
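The KNN step described above could look roughly like the sketch below, where scikit-learn's KNeighborsClassifier maps the three extracted parameters to a signal-timing decision. The feature values, class labels, and k are invented for illustration and are not taken from the paper.

```python
# Hypothetical sketch of the KNN stage: classify a traffic state from the three
# extracted parameters (red-phase duration, vehicle volume, pedestrian volume)
# into a signal-timing class. Values and labels are illustrative only.
from sklearn.neighbors import KNeighborsClassifier
import numpy as np

# Columns: red-phase duration [s], vehicles per cycle, pedestrians per cycle.
X_train = np.array([
    [30, 12,  4],
    [30, 35,  2],
    [45, 50,  1],
    [45,  8, 15],
    [60, 70,  3],
    [60, 10, 25],
])
# Labels: 0 = keep timing, 1 = extend green, 2 = extend pedestrian phase.
y_train = np.array([0, 1, 1, 2, 1, 2])

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)

current_state = np.array([[45, 60, 2]])   # measured by the image-processing stage
print(knn.predict(current_state))         # -> suggested timing class
```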
Article
DJI Tello ile ROS Tabanlı Haritalandırma Simülasyonu (ROS-Based Mapping Simulation with the DJI Tello) (2020)
Buyukkelek, Ahmet Furkan; Dağadası, Mert; Türkmenoğlu, Yasin; Yusefi, Abdullah; Durdu, Akif
The importance of unmanned systems grows every day. These systems, which can make their own decisions, are used in many fields, above all the military. Designed to minimize the human factor and to save time and cost, unmanned systems have also brought new problems with them. For robots with autonomous motion capability, these problems are localization and how robot behaviors should be controlled. In this study, the localization of an Unmanned Aerial Vehicle (UAV), the DJI Tello, in an indoor environment is investigated. The vehicle, which can move without GPS, maintains its position with a Vision Positioning System while also executing the commands it is given. Hector SLAM, one of the SLAM techniques commonly used for mapping and localization, is employed, and the obtained results are analyzed. The study is carried out in the Gazebo simulation environment, since it closely approximates real-world conditions and allows the prepared flight algorithms to be tested safely without damaging the surroundings.

Article | Citation - WoS: 36 | Citation - Scopus: 42
HVIONet: A Deep Learning Based Hybrid Visual-Inertial Odometry Approach for Unmanned Aerial System Position Estimation (Pergamon-Elsevier Science Ltd, 2022)
Aslan, Muhammet Fatih; Durdu, Akif; Yusefi, Abdullah; Yılmaz, Alper
Sensor fusion is used to solve the localization problem in autonomous mobile robotics applications by integrating complementary data acquired from various sensors. In this study, we adopt Visual-Inertial Odometry (VIO), a low-cost sensor fusion method that integrates inertial data with images, in a Deep Learning (DL) framework to predict the position of an Unmanned Aerial System (UAS). The developed system has three steps. The first step extracts features from images acquired by the platform camera and uses a Convolutional Neural Network (CNN) to project them onto a visual feature manifold. Next, temporal features are extracted from the platform's Inertial Measurement Unit (IMU) data using a Bidirectional Long Short-Term Memory (BiLSTM) network and projected onto an inertial feature manifold. The final step estimates the UAS position by fusing the visual and inertial feature manifolds via a BiLSTM-based architecture. The proposed approach is tested on the public EuRoC (European Robotics Challenge) dataset and on simulation data generated within the Robot Operating System (ROS). The results on the EuRoC dataset show that the proposed approach achieves position estimates comparable to previous popular VIO methods. In addition, in the experiment with the simulation dataset, the UAS position is estimated with a Root Mean Square Error (RMSE) of 0.167. The obtained results demonstrate that the proposed deep architecture is useful for UAS position estimation.

Conference Object
Model Predictive Control for Reliable and Efficient Path Tracking in Autonomous Vehicles (Institute of Electrical and Electronics Engineers Inc., 2025)
Toy, Ibrahim; Yusefi, Abdullah; Durdu, Akif
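A minimal PyTorch sketch of the kind of CNN + BiLSTM fusion described in the HVIONet abstract above is shown below: a small CNN embeds the image, a bidirectional LSTM encodes the IMU window, and a second BiLSTM regresses position from the concatenated features. The layer sizes, dimensions, and toy tensors are assumptions, not the published HVIONet architecture.

```python
# Minimal sketch of BiLSTM-based visual-inertial fusion. Layer sizes and the
# toy tensors are assumptions, not the published HVIONet architecture.
import torch
import torch.nn as nn

class VisualInertialFusion(nn.Module):
    def __init__(self, visual_dim=256, imu_dim=6, hidden=128):
        super().__init__()
        self.visual_encoder = nn.Sequential(          # stand-in CNN feature extractor
            nn.Conv2d(3, 16, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, visual_dim),
        )
        self.imu_encoder = nn.LSTM(imu_dim, hidden, batch_first=True,
                                   bidirectional=True)
        self.fusion = nn.LSTM(visual_dim + 2 * hidden, hidden,
                              batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 3)          # (x, y, z) position

    def forward(self, image, imu_seq):
        v = self.visual_encoder(image)                 # (B, visual_dim)
        i, _ = self.imu_encoder(imu_seq)               # (B, T, 2*hidden)
        i = i[:, -1, :]                                # summary of the IMU window
        fused = torch.cat([v, i], dim=1).unsqueeze(1)  # (B, 1, visual_dim + 2*hidden)
        out, _ = self.fusion(fused)
        return self.head(out[:, -1, :])

model = VisualInertialFusion()
pos = model(torch.randn(2, 3, 64, 64), torch.randn(2, 10, 6))
print(pos.shape)  # torch.Size([2, 3])
```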
In recent years there have been countless studies on autonomous vehicles, and the field keeps growing. Given this growth, planning and control, which hold an important place in autonomous vehicles, come to the fore. In this study, a path tracking algorithm based on Model Predictive Control (MPC) is developed for autonomous vehicle control. MPC essentially predicts the future behavior of the system and minimizes a generated cost function with optimization methods. In the proposed algorithm, control inputs are calculated over a prediction horizon using the vehicle dynamic model and the reference path in order to optimize the vehicle's progression. To add an obstacle avoidance mechanism to the system, obstacle locations are detected from an occupancy grid map generated with a three-dimensional LiDAR and added to the cost function. Simulation and real-world tests show that the MPC algorithm can follow the reference path optimally while avoiding obstacles.
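The sketch below illustrates the core MPC idea from this abstract: roll a simple motion model out over a prediction horizon and minimize a cost that combines path-tracking error, control effort, and an obstacle penalty. The kinematic model, cost weights, reference path, and obstacle term are toy assumptions; the paper itself uses a vehicle dynamic model and a LiDAR-based occupancy grid.

```python
# Toy MPC sketch: minimize tracking error plus an obstacle penalty over a
# prediction horizon. Model, weights, and reference are illustrative only.
import numpy as np
from scipy.optimize import minimize

DT, N = 0.1, 10                                    # time step, horizon length
reference = np.column_stack([np.linspace(0, 5, N), np.zeros(N)])  # straight path
obstacle = np.array([2.5, 0.4])                    # hypothetical obstacle position

def rollout(x0, controls):
    """Simulate unicycle states for N steps from x0 = (x, y, heading)."""
    states, x = [], np.array(x0, dtype=float)
    for v, w in controls.reshape(N, 2):            # speed, yaw rate per step
        x = x + DT * np.array([v * np.cos(x[2]), v * np.sin(x[2]), w])
        states.append(x.copy())
    return np.array(states)

def cost(controls, x0):
    states = rollout(x0, controls)
    tracking = np.sum((states[:, :2] - reference) ** 2)
    effort = 1e-2 * np.sum(controls ** 2)
    clearance = np.sum(np.exp(-np.sum((states[:, :2] - obstacle) ** 2, axis=1) / 0.1))
    return tracking + effort + 5.0 * clearance

x0 = [0.0, 0.0, 0.0]
u0 = np.zeros(2 * N)
res = minimize(cost, u0, args=(x0,), method="L-BFGS-B")
print("first control action:", res.x[:2])          # apply only the first input
```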
Conference Object
Narrow Space Warning and Slope Control System Compatible with ADAS (IEEE, 2023)
Toy, İbrahim; Yusefi, Abdullah; Durdu, Akif
The number of studies on autonomous vehicle systems is increasing day by day. Autonomous vehicles can perform various tasks without human intervention; however, the environments where these tasks are performed contain locations that can pose a danger in terms of width and height, generally referred to as narrow spaces. To minimize the accident rate, the autonomous vehicle must detect these narrow spaces ahead of it with its onboard sensors. In this study, a narrow space detection algorithm is created by incorporating width detection, height detection, and positive and negative obstacle detection into the autonomous vehicle algorithms. The algorithm uses data from a 16-layer, 100-meter-range LiDAR sensor manufactured by Velodyne. When the width and height measurements obtained from the sensor data do not match the vehicle dimensions, the user is informed through a visual warning message on an interface. In addition, hills too steep for the vehicle to climb (positive obstacles) and drops it cannot descend (negative obstacles) are identified by measuring the slope. According to the results of the study, the average error rate is 2.7% for width measurements, 1.84% for height measurements, and 2.22% for the slope measurements used in positive and negative obstacle detection. The outputs of this study can also be integrated into advanced driver assistance systems (ADAS).

Article
ORB-SLAM 2D Reconstruction of Environment for Indoor Autonomous Navigation of UAVs (2020)
Yusefi, Abdullah; Durdu, Akif; Sungur, Cemil
In this paper, a simple and economical yet efficient autonomous mapping and navigation system for unmanned aerial vehicles is presented. To realize this system, three modules are implemented. The first module constructs a 3D model of the environment while the drone navigates autonomously and is based on ORB-SLAM, one of the leading monocular SLAM algorithms; for the autonomous navigation of the system, a vision-based line tracking method is proposed. The second module then performs a real-time transformation of the 3D map into a 2D grid map: while most 3D-to-2D map conversion studies use OctoMaps as an intermediate representation, we present a threshold-based method that converts the 3D map directly into 2D without any intermediate component. Finally, the third module uses the A* path planning algorithm to navigate the drone to the goal pose in the constructed 2D grid map, relying only on IMU-aided Adaptive Monte Carlo Localization (AMCL) combined with monocular camera information. The experimental results indicate that the proposed system is efficient enough to be used on low-cost drones equipped with only a monocular camera and limited onboard processing resources.
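A minimal sketch of a threshold-based 3D-to-2D conversion in the spirit of the second module above is given below: map points whose height falls inside an assumed obstacle band are projected directly onto a 2D occupancy grid, with no intermediate OctoMap. The point cloud, height thresholds, and grid resolution are placeholders, not values from the paper.

```python
# Sketch of a threshold-based 3D-to-2D conversion: points in an assumed
# "obstacle band" of heights are projected onto a 2D occupancy grid.
import numpy as np

def map_points_to_grid(points, z_min=0.2, z_max=1.8, resolution=0.05, size=200):
    """points: (N, 3) array of SLAM map points in the world frame."""
    grid = np.zeros((size, size), dtype=np.uint8)           # 0 = free, 1 = occupied
    band = points[(points[:, 2] > z_min) & (points[:, 2] < z_max)]
    cells = np.floor(band[:, :2] / resolution).astype(int) + size // 2
    valid = (cells >= 0).all(axis=1) & (cells < size).all(axis=1)
    grid[cells[valid, 1], cells[valid, 0]] = 1
    return grid

cloud = np.random.uniform(-4, 4, size=(5000, 3))             # placeholder map points
occupancy = map_points_to_grid(cloud)
print(occupancy.sum(), "occupied cells")
```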
Doctoral Thesis
Otonom Sistemler için Sensör Füzyon ve Görsel Tabanlı Konumlandırma (Sensor Fusion and Vision-Based Localization for Autonomous Systems) (Konya Teknik Üniversitesi, 2024)
Yusefi, Abdullah; Sungur, Cemil
Autonomous mobile robot applications that, like humans, make their own decisions and carry out their tasks require robots to perform human-like duties. An autonomous robot must understand the geometric structure of its environment, localize itself accurately, and use this information to generate a motion trajectory toward the designated goal point. Especially when a single sensor is insufficient, sensor fusion, obtained by combining different sensors, plays an important role in mobile robot localization. In recent years, thanks to advances in processing speed, more attention has been paid to Visual Odometry (VO) methods that use low-cost monocular cameras. In addition to cameras, VIO solutions that incorporate low-cost Inertial Measurement Unit (IMU) sensors have also come to be preferred to support localization. Traditional geometry-based solutions often struggle to represent the complex world accurately and to produce reliable results; therefore, Artificial Intelligence based solutions are increasingly replacing them thanks to their easy adaptability to different environments. In light of the above, this thesis proposes two applications for developing autonomous vehicles in cases where a single sensor is insufficient. The first presents a deep learning based hybrid architecture that estimates the position of a UAV moving indoors from visual and IMU information. The second realizes a different AI-based VIO application, with a different fusion technique, that successfully estimates the position of an autonomous vehicle by processing consecutive camera images. Both applications offer novel methods for autonomous vehicle localization, tend to outperform previous studies, and are shown to be suitable for real-time systems.

Article
Performance and Trade-Off Evaluation of SIFT, SURF, FAST, STAR and ORB Feature Detection Algorithms in Visual Odometry (2020)
Yusefi, Abdullah; Durdu, Akif
In recent years there has been a great deal of research in visual odometry, which has led to practical applications such as vision-based measurement in robotics and automotive technology. Direct, feature-based, and hybrid methods are the three common approaches to the visual odometry problem, and given the general belief that feature-based approaches are faster, they have been widely adopted in recent years. This study therefore calculates the transformation matrix between sequential two-dimensional images using invariant features that can estimate changes in camera rotation and translation. In the algorithm, the two steps of keypoint detection and outlier removal are performed using five different local feature detection algorithms (SURF, SIFT, FAST, STAR, ORB) and the RANdom SAmple Consensus (RANSAC) algorithm, respectively. The influence of each detector, its intrinsic parameters, and dynamic noise on the accuracy of the transformation matrix is evaluated and analyzed in terms of rotational MSE and computational runtime.

Conference Object
Real-Time Safety Helmets and Vests Detection in Industrial Environments Using YOLO (Institute of Electrical and Electronics Engineers Inc., 2025)
Souare, Mamady Cheick; Toy, Ibrahim; Yusefi, Abdullah; Durdu, Akif
Worker safety is a critical concern in industrial and construction environments, where hazardous conditions can pose significant risks to employees. Ensuring that workers wear appropriate safety equipment, such as safety helmets and vests, is essential in preventing serious workplace injuries and illnesses. However, traditional monitoring methods may be insufficient for effectively detecting whether workers are adhering to safety regulations; manual inspections, while common, are time-consuming and difficult to implement consistently across large worksites. This paper explores the application of the You Only Look Once (YOLO) object detection algorithm to automatically detect safety helmets and vests in real time. By combining deep learning and computer vision methods, the implemented solution aims to enhance workplace safety compliance by providing an efficient, scalable, and accurate method for monitoring workers. The real-time nature of YOLO enables swift identification of safety violations, allowing prompt corrective actions. This approach has the potential to significantly improve worker protection while reducing reliance on manual inspection processes, ultimately contributing to a safer and more efficient working environment.

Article | Citation - WoS: 18 | Citation - Scopus: 21
The YTU Dataset and Recurrent Neural Network Based Visual-Inertial Odometry (Elsevier Sci Ltd, 2021)
Gürtürk, Mert; Yusefi, Abdullah; Aslan, Muhammet Fatih; Soycan, Metin; Durdu, Akif; Masiero, Andrea
Visual Simultaneous Localization and Mapping (VSLAM) and Visual Odometry (VO) are fundamental problems that must be tackled properly to enable autonomous and effective movement of vehicles and robots supported by vision-based positioning systems. This study presents a publicly shared dataset for SLAM research: a dataset collected at Yildiz Technical University (YTU) in an outdoor area with an acquisition system mounted on a terrestrial vehicle. The acquisition system includes two cameras, an inertial measurement unit, and two GPS receivers, all calibrated and synchronized. To demonstrate the usefulness of the introduced dataset, the study also applies Visual-Inertial Odometry (VIO) to the KITTI dataset, and it proposes a new recurrent neural network-based VIO rather than merely introducing a new dataset. The effectiveness of the proposed method is demonstrated by comparing it with the state-of-the-art ORB-SLAM2 and OKVIS methods. The experimental results show that the YTU dataset is robust enough to be used in benchmarking studies and that the proposed deep learning-based VIO is more successful than the other two traditional methods.
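For readers reproducing this kind of benchmarking, the sketch below shows a common way to score an estimated trajectory against a ground-truth reference using absolute position RMSE. The trajectories here are synthetic placeholders and do not reproduce the paper's results or its exact evaluation protocol.

```python
# Simple trajectory evaluation sketch: absolute position RMSE between an
# estimated trajectory and a time-aligned reference. Data is synthetic.
import numpy as np

def position_rmse(estimated, reference):
    """Both inputs are (N, 3) arrays of time-aligned positions."""
    errors = np.linalg.norm(estimated - reference, axis=1)
    return np.sqrt(np.mean(errors ** 2))

t = np.linspace(0, 10, 200)
reference = np.column_stack([t, np.sin(t), np.zeros_like(t)])   # ground truth
estimated = reference + np.random.normal(scale=0.05, size=reference.shape)
print(f"RMSE: {position_rmse(estimated, reference):.3f} m")
```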

