Authors: Yusefi, A.; Durdu, A.; Toy, I.
Date Accessioned: 2024-06-01
Date Available: 2024-06-01
Issue Date: 2024
ISBN: 9798350329940
DOI: https://doi.org/10.1109/INFOTEH60418.2024.10495974
Handle: https://hdl.handle.net/20.500.13091/5616
Sponsors: Digitalni ozon Banja Luka; DWELT Software Banja Luka; et al.; MTEL Banja Luka; Municipality of East Ilidza; Municipality of East Stari Grad
Conference: 23rd International Symposium INFOTEH-JAHORINA, INFOTEH 2024, 20 March 2024 through 22 March 2024, Code 199053
Abstract: This research presents a novel approach to autonomous navigation of Unmanned Ground Vehicles (UGVs) using a camera and LiDAR sensor fusion system. The proposed method is designed to achieve a high rate of obstacle detection, distance estimation, and obstacle avoidance. To capture object geometry thoroughly and to mitigate the object occlusion problem that frequently arises in camera-based object recognition, the 3D point cloud obtained from the LiDAR depth sensor is used. The proposed camera and LiDAR sensor fusion design balances the benefits and drawbacks of the two sensors to produce a detection system that is more reliable than either sensor alone. The resulting region proposals are then provided to the UGV's autonomous navigation system, which re-plans its route and navigates accordingly. The experiments were conducted on a UGV system with high obstacle avoidance and fully autonomous navigation capabilities. The results demonstrate that the proposed technique can successfully maneuver the UGV and detect obstacles in real-world situations. © 2024 IEEE.
Language: en
Access Rights: info:eu-repo/semantics/closedAccess
Author Keywords: Autonomous Navigation; Camera/LiDAR Sensor Fusion; Deep Learning; Obstacle Avoidance; YOLOv7
Indexed Keywords: Air navigation; Collision avoidance; Deep learning; Ground vehicles; Intelligent vehicle highway systems; Navigation systems; Object recognition; Obstacle detectors; Optical radar; Autonomous navigation; Camera/LiDAR sensor fusion; Deep learning; Distance estimation; High rate; Obstacles avoidance; Obstacles detection; Sensor fusion; Sensor fusion systems; YOLOv7; Cameras
Title: Camera/Lidar Sensor Fusion-Based Autonomous Navigation
Type: Conference Object
DOI: 10.1109/INFOTEH60418.2024.10495974
Scopus ID: 2-s2.0-85192144342
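Illustration: the abstract describes fusing camera detections with LiDAR depth for obstacle distance estimation. The minimal Python sketch below shows one common way such a fusion can be wired together, assuming known camera intrinsics K and a LiDAR-to-camera extrinsic [R|t]: LiDAR points are projected into the image, and the points falling inside a detector bounding box (e.g. from YOLOv7) yield a distance estimate. The calibration values, function names, and bounding box here are placeholders for illustration, not details taken from the paper.

    # Sketch of camera/LiDAR fusion for obstacle distance estimation.
    # All calibration values below are illustrative placeholders, not from the paper.
    import numpy as np

    def project_lidar_to_image(points_lidar, K, R, t):
        """Project Nx3 LiDAR points into pixel coordinates; return pixels and depths."""
        pts_cam = points_lidar @ R.T + t          # transform into the camera frame
        pts_cam = pts_cam[pts_cam[:, 2] > 0.1]    # keep points in front of the camera
        pix_h = pts_cam @ K.T                     # homogeneous pixel coordinates
        pix = pix_h[:, :2] / pix_h[:, 2:3]        # perspective division
        return pix, pts_cam[:, 2]

    def estimate_box_distance(pix, depths, box):
        """Median depth of LiDAR points falling inside a detector box (x1, y1, x2, y2)."""
        x1, y1, x2, y2 = box
        inside = (pix[:, 0] >= x1) & (pix[:, 0] <= x2) & \
                 (pix[:, 1] >= y1) & (pix[:, 1] <= y2)
        return float(np.median(depths[inside])) if inside.any() else None

    if __name__ == "__main__":
        # Placeholder intrinsics, identity extrinsics, and a synthetic cloud for demo only.
        K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
        R, t = np.eye(3), np.zeros(3)
        cloud = np.random.uniform([-2, -1, 2], [2, 1, 12], size=(2000, 3))
        pix, depths = project_lidar_to_image(cloud, K, R, t)
        print(estimate_box_distance(pix, depths, box=(280, 200, 360, 280)))

The median depth is used rather than the minimum so that a few stray points projected into the box do not dominate the estimate; the actual fusion and region-proposal logic used by the authors may differ.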