Browsing by Author "Yusefi, A."
Now showing 1 - 8 of 8
Conference Object (Citation - WoS: 3, Citation - Scopus: 12)
Camera/LiDAR Sensor Fusion-Based Autonomous Navigation
(Institute of Electrical and Electronics Engineers Inc., 2024) Yusefi, A.; Durdu, A.; Toy, I.
This research presents a novel approach for autonomous navigation of Unmanned Ground Vehicles (UGV) using a camera and LiDAR sensor fusion system. The proposed method is designed to achieve a high rate of obstacle detection, distance estimation, and obstacle avoidance. To capture object shape more completely and mitigate object occlusion, which frequently occurs in camera-based object recognition, the 3D point cloud obtained from the LiDAR depth sensor is used. The proposed camera and LiDAR sensor fusion design balances the benefits and drawbacks of the two sensors to produce a detection system that is more reliable than either sensor alone. The region proposal is then passed to the UGV's autonomous navigation system, which re-plans its route and navigates accordingly. The experiments were conducted on a UGV system with high obstacle avoidance and fully autonomous navigation capabilities. The outcomes demonstrate that the proposed technique can successfully maneuver the UGV and detect obstacles in real-world situations. © 2024 IEEE.

Conference Object (Citation - WoS: 1, Citation - Scopus: 2)
Enhanced Obstacle Detection in Autonomous Vehicles Using 3D LiDAR Mapping Techniques
(Institute of Electrical and Electronics Engineers Inc., 2024) Tokgoz, M.E.; Yusefi, A.; Toy, I.; Durdu, A.
In this study, a method utilizing a 3D LiDAR (Light Detection and Ranging) sensor for mapping and obstacle detection in autonomous vehicles has been developed. The LiDAR sensor employs laser beams to detect the positions and distances of surrounding objects. Data from the LiDAR were processed to generate 2D maps from the 3D point cloud. During this process, obstacles within the vehicle's navigable height range were identified and distinguished from objects that would not impede its movement. A filtering method then removed points not belonging to these obstacles to create the map. In experimental studies, it was observed that the developed method can accurately detect challenging obstacles such as fences made of thin wires. Consequently, this method holds the potential to offer more reliable and safer obstacle detection for autonomous vehicles. © 2024 IEEE.

Article (Citation - WoS: 4, Citation - Scopus: 6)
A Generalizable D-VIO and Its Fusion With GNSS/IMU for Improved Autonomous Vehicle Localization
(Institute of Electrical and Electronics Engineers Inc., 2023) Yusefi, A.; Durdu, A.; Bozkaya, F.; Tiglioglu, S.; Yilmaz, A.; Sungur, C.
An autonomous vehicle must be able to locate itself precisely and reliably in a large-scale outdoor area. To enhance the localization of an autonomous vehicle based on Global Navigation Satellite System (GNSS)/Camera/Inertial Measurement Unit (IMU) data when GNSS signals are interfered with or corrupted by reflected signals, a multi-step correction filter is used to smooth the inaccurate GNSS measurements. The proposed solution fuses data from several sensors to compensate for each sensor's individual weaknesses. Additionally, this work proposes a Generalizable Deep Visual-Inertial Odometry (GD-VIO) to locate the vehicle during GNSS outages. The algorithms suggested in this research have been tested in real-world experiments, demonstrating that they deliver accurate and trustworthy vehicle pose estimation. © IEEE.
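
The abstract above mentions a multi-step correction filter for smoothing GNSS fixes corrupted by multipath, but does not give its form. As a rough illustration of that kind of smoothing, here is a minimal constant-velocity Kalman filter over one position axis in Python; the class name, noise values, and time step are assumptions, not the authors' filter.

    # Illustrative sketch: smoothing noisy GNSS fixes with a constant-velocity
    # Kalman filter. The paper's multi-step correction filter is not described
    # in the abstract, so names and parameters here are assumptions.
    import numpy as np

    class GnssSmoother:
        def __init__(self, dt=0.1, q=0.5, r=3.0):
            # State: [position, velocity] on one axis; constant-velocity model.
            self.F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
            self.H = np.array([[1.0, 0.0]])              # we observe position only
            self.Q = q * np.eye(2)                       # process noise (assumed)
            self.R = np.array([[r]])                     # GNSS noise (assumed, m)
            self.x = np.zeros((2, 1))
            self.P = np.eye(2) * 10.0

        def update(self, z):
            # Predict with the motion model.
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q
            # Correct with the (possibly multipath-corrupted) GNSS measurement.
            y = np.array([[z]]) - self.H @ self.x
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + K @ y
            self.P = (np.eye(2) - K @ self.H) @ self.P
            return float(self.x[0])  # smoothed position estimate

Feeding successive fixes for one axis (for example, UTM easting) through update() yields a smoothed track; the paper additionally falls back on GD-VIO when GNSS drops out entirely.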
Conference Object (Citation - Scopus: 5)
Improved Dead Reckoning Localization Using IMU Sensor
(Institute of Electrical and Electronics Engineers Inc., 2022) Toy, I.; Durdu, A.; Yusefi, A.
In the upcoming years, autonomous vehicle technology will advance quickly and spread widely. The most crucial component of such systems is vehicle localization, or position estimation. The Global Navigation Satellite System (GNSS) is one of the most advanced positioning systems currently in use. However, GNSS signals can be interrupted or degraded by interference, and while GNSS delivers precise information outdoors, it cannot provide a solution indoors. This paper proposes a position estimation technique based on an inertial measurement unit (IMU), a sensor usable both indoors and outdoors. The proposed IMU dead reckoning method uses novel techniques to filter noisy data and accurately determine the position of the vehicle, so it can supplement a GNSS-based vehicle localization system in GNSS-denied scenarios. The IMU's accelerometer, magnetometer, and gyroscope data are used to calculate the vehicle's velocity, orientation, and position. Experimental results on a real autonomous vehicle demonstrate that the system is effective, with average rotation and translation errors of 1.03 degrees and 1.04 meters, respectively. © 2022 IEEE.

Conference Object (Citation - Scopus: 3)
Localization Using Two Different IMU Sensor-Based Dead Reckoning System
(Institute of Electrical and Electronics Engineers Inc., 2024) Toy, I.; Durdu, A.; Yusefi, A.
Dead reckoning estimates the current position, speed, and direction of a moving object from previously known position information. Localization determines an object's location on a map and is commonly categorized into human and vehicle localization. Autonomous vehicles rely on accurate vehicle localization for effective task execution. While the Global Navigation Satellite System (GNSS) is a popular method, weak or absent signals pose challenges. This study uses Inertial Measurement Unit (IMU) sensors for localization, integrating a second IMU to enhance accuracy. The dead reckoning system achieves rotation and translation errors of 1.02 degrees and 1.41 meters with a single IMU, and 1.01 degrees and 1.04 meters when data from two IMU sensors are fused. © 2024 IEEE.
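
Both dead-reckoning papers above build a pose estimate by integrating IMU measurements. The sketch below shows only the textbook core of planar dead reckoning (gyro yaw rate to heading, body-frame acceleration to velocity and position); the papers' novel noise-filtering steps and magnetometer fusion are not reproduced, and the function name and interface are hypothetical.

    # Illustrative 2D dead-reckoning sketch: heading from integrated gyro yaw
    # rate, position from integrated body-frame acceleration. The papers'
    # filtering techniques are not public; this shows only the core integration.
    import math

    def dead_reckon(samples, dt):
        """samples: iterable of (ax, ay, yaw_rate) in the body frame
        (m/s^2, m/s^2, rad/s); dt: sample period in seconds."""
        x = y = vx = vy = yaw = 0.0
        for ax, ay, yaw_rate in samples:
            yaw += yaw_rate * dt                  # integrate gyro -> heading
            c, s = math.cos(yaw), math.sin(yaw)
            ax_w = c * ax - s * ay                # rotate accel into world frame
            ay_w = s * ax + c * ay
            vx += ax_w * dt                       # integrate accel -> velocity
            vy += ay_w * dt
            x += vx * dt                          # integrate velocity -> position
            y += vy * dt
        return x, y, yaw

Pure integration like this drifts without bound as sensor noise accumulates, which is why the papers above add noise filtering and, in the second paper, fuse a second IMU.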
Conference Object (Citation - Scopus: 8)
Performance Comparison of Extreme Learning Machines and Other Machine Learning Methods on WBCD Data Set
(Institute of Electrical and Electronics Engineers Inc., 2021) Keskin, O.S.; Durdu, A.; Aslan, M.F.; Yusefi, A.
Breast cancer is one of the most common forms of cancer among women worldwide. Artificial intelligence studies are growing in an effort to reduce mortality and enable the early diagnosis needed for appropriate treatment. In this study, the Extreme Learning Machine (ELM) method, one of the machine learning approaches, is applied to the Wisconsin Breast Cancer Diagnostic (WBCD) dataset, and the findings are compared to those of other machine learning methods. For this purpose, the same dataset is also classified using Multi-Layer Perceptron (MLP), Sequential Minimal Optimization (SMO), Decision Tree Learning (J48), Naive Bayes (NB), and K-Nearest Neighbor (KNN) methods. According to the results of the study, the ELM approach is more successful than the other approaches on the WBCD dataset. It is also worth noting that as the number of neurons in the ELM grows, so does the learning ability of the network; beyond a certain number of neurons, however, test performance begins to decline sharply. Finally, the ELM's performance is compared to the results of other studies in the literature. © 2021 IEEE.

Book Part (Citation - Scopus: 11)
A Tutorial: Mobile Robotics, SLAM, Bayesian Filter, Keyframe Bundle Adjustment and ROS Applications
(Springer Science and Business Media Deutschland GmbH, 2021) Aslan, Muhammet Fatih; Durdu, Akif; Yusefi, A.; Sabancı, Kadir; Sungur, C.
Autonomous mobile robots, an important research topic today, are often developed for smart industrial environments where they interact with humans. For autonomous movement in an unknown environment, a mobile robot must solve three main problems: localization, mapping, and path planning. Robust path planning depends on successful localization and mapping, and both problems can be addressed with Simultaneous Localization and Mapping (SLAM) techniques. Since SLAM requires sequential sensor information, eliminating sensor noise is crucial for each subsequent measurement and prediction. The recursive Bayesian filter is a statistical method for sequential state estimation and is therefore essential for autonomous mobile robots and SLAM techniques. This study deals with the relationship between SLAM and Bayesian methods for autonomous robots. Additionally, keyframe Bundle Adjustment (BA) based SLAM, which includes state-of-the-art methods, is investigated. SLAM is an active research area in which new algorithms are constantly being developed to increase accuracy, so this study aims to be a detailed and easily understandable resource for new SLAM researchers. ROS (Robot Operating System)-based SLAM applications are also given for better understanding. In this way, the reader obtains the theoretical basis and application experience to develop alternative methods related to SLAM. © 2021, The Author(s), under exclusive license to Springer Nature Switzerland AG.
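
The tutorial entry above centers on the recursive Bayesian filter. As a self-contained illustration of its predict/correct cycle, here is a standard histogram (discrete Bayes) filter for 1D localization along a circular corridor; the world model and probability values are made-up teaching values, not content from the chapter.

    # Illustrative recursive Bayes filter (histogram filter) for 1D robot
    # localization on a circular corridor: a textbook form of the recursive
    # Bayesian update the tutorial discusses, not code from the chapter.
    def predict(belief, motion_noise=0.1):
        # Motion model: the robot moves one cell right; it may slip and stay.
        n = len(belief)
        new = [0.0] * n
        for i, p in enumerate(belief):
            new[(i + 1) % n] += p * (1 - motion_noise)  # intended move
            new[i] += p * motion_noise                   # slipped, stayed put
        return new

    def correct(belief, measurement, world, hit=0.8, miss=0.2):
        # Measurement model: weight cells whose landmark matches the reading.
        weighted = [p * (hit if world[i] == measurement else miss)
                    for i, p in enumerate(belief)]
        total = sum(weighted)
        return [w / total for w in weighted]

    world = ['door', 'wall', 'wall', 'door', 'wall']
    belief = [1 / len(world)] * len(world)      # uniform prior
    belief = correct(belief, 'door', world)     # sense a landmark
    belief = predict(belief)                    # move one cell

Each correct() call sharpens the belief with a sensor reading and each predict() blurs it with motion uncertainty; this alternation is the statistical backbone the tutorial builds SLAM on.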
Article
A Unified Monocular Vision-Based Driving Model for Autonomous Vehicles With Multi-Task Capabilities
(Institute of Electrical and Electronics Engineers Inc., 2025) Azak, S.; Bozkaya, F.; Tiglioglu, S.; Yusefi, A.; Durdu, A.
The recent progress in autonomous driving primarily relies on sensor-rich systems, encompassing radars, LiDARs, and advanced cameras, to perceive the environment. However, human drivers showcase an impressive ability to drive based solely on visual perception. This study introduces an end-to-end method for predicting the steering angle and vehicle speed exclusively from a monocular camera image. Alongside the color image, which conveys scene texture and appearance details, a monocular depth image and a semantic segmentation image are internally derived and incorporated, offering insights into spatial and semantic environmental structure and yielding a total of three input images. LSTM units are also employed to capture temporal features. The proposed model demonstrates a significant enhancement in RMSE compared to the state of the art, achieving improvements of 44.96% for the steering angle and 4.39% for the speed on the Udacity dataset. Furthermore, tests on the CARLA and Sully Chen datasets yield results that outperform those reported in the literature. Extensive ablation studies are also conducted to showcase the effectiveness of each component. These findings highlight the potential of self-driving systems using visual input alone. © 2025 IEEE.
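
To make the three-input, multi-task design above concrete, here is a heavily simplified PyTorch sketch: one small CNN encoder per input image (color, depth, segmentation), an LSTM over the per-frame features, and separate regression heads for steering angle and speed. All layer sizes, and the use of single-channel depth and segmentation inputs, are assumptions; this is not the authors' published architecture.

    # Illustrative PyTorch sketch of the three-input, multi-task idea. Layer
    # sizes and input formats are assumptions, not the paper's architecture.
    import torch
    import torch.nn as nn

    class DrivingModel(nn.Module):
        def __init__(self, feat=128, hidden=256):
            super().__init__()
            def encoder(in_ch):
                # Tiny per-image CNN encoder producing a fixed-size feature.
                return nn.Sequential(
                    nn.Conv2d(in_ch, 24, 5, stride=2), nn.ReLU(),
                    nn.Conv2d(24, 48, 5, stride=2), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                    nn.Linear(48, feat))
            self.rgb_enc = encoder(3)    # color image
            self.depth_enc = encoder(1)  # derived monocular depth image
            self.seg_enc = encoder(1)    # derived segmentation map (class ids)
            self.lstm = nn.LSTM(3 * feat, hidden, batch_first=True)
            self.steer_head = nn.Linear(hidden, 1)  # steering angle
            self.speed_head = nn.Linear(hidden, 1)  # vehicle speed

        def forward(self, rgb, depth, seg):
            # Inputs: (batch, time, channels, H, W). Encode each frame,
            # concatenate the three feature streams, then model time with LSTM.
            b, t = rgb.shape[:2]
            def run(enc, x):
                return enc(x.flatten(0, 1)).view(b, t, -1)
            feats = torch.cat([run(self.rgb_enc, rgb),
                               run(self.depth_enc, depth),
                               run(self.seg_enc, seg)], dim=-1)
            out, _ = self.lstm(feats)
            last = out[:, -1]            # feature of the last time step
            return self.steer_head(last), self.speed_head(last)

A training loop would regress both heads jointly, for example with an RMSE-style loss on steering angle and speed, matching the multi-task framing of the abstract.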

