GCRIS
Browsing by Author "Tiglioglu, S."

Now showing 1 - 3 of 3
    Article
    Citation - WoS: 4
    Citation - Scopus: 6
    A Generalizable D-VIO and Its Fusion With GNSS/IMU for Improved Autonomous Vehicle Localization
    (Institute of Electrical and Electronics Engineers Inc., 2023) Yusefi, A.; Durdu, A.; Bozkaya, F.; Tiglioglu, S.; Yilmaz, A.; Sungur, C.
    An autonomous vehicle must be able to locate itself precisely and reliably in large-scale outdoor areas. To enhance the localization of an autonomous vehicle based on a Global Navigation Satellite System (GNSS)/Camera/Inertial Measurement Unit (IMU) setup when GNSS signals are interfered with or obstructed by reflected signals, a multi-step correction filter is used to smooth the inaccurate GNSS data. The proposed solutions integrate large amounts of data from several sensors to compensate for each sensor's individual weaknesses. Additionally, this work proposes a Generalizable Deep Visual Inertial Odometry (GD-VIO) to better locate the vehicle in the event of GNSS outages. The algorithms suggested in this research have been tested in real-world experiments, demonstrating that they deliver accurate and trustworthy vehicle pose estimation.
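    The abstract mentions a multi-step correction filter that smooths degraded GNSS fixes. The paper's actual filter design is not given here, so the following is only a minimal sketch of that idea under assumed parameters: gate out implausible position jumps (e.g. from multipath), then exponentially smooth the accepted fixes. The names `correct_gnss`, `max_jump`, and `alpha` are hypothetical.

    ```python
    import numpy as np

    def correct_gnss(positions, max_jump=5.0, alpha=0.3):
        """Illustrative multi-step GNSS correction (hypothetical sketch):
        step 1 rejects fixes that jump implausibly far between samples,
        step 2 exponentially smooths the remaining fixes."""
        corrected = [np.asarray(positions[0], dtype=float)]
        for p in positions[1:]:
            p = np.asarray(p, dtype=float)
            prev = corrected[-1]
            # Step 1: outlier gating -- discard fixes that jump farther
            # than the vehicle could plausibly travel between samples.
            if np.linalg.norm(p - prev) > max_jump:
                p = prev  # hold the last good estimate (e.g. multipath)
            # Step 2: exponential smoothing of the accepted fix.
            corrected.append(alpha * p + (1 - alpha) * prev)
        return corrected
    ```

    In a full GNSS/IMU/VIO fusion stack, the smoothed fixes would then be fused with inertial and visual-odometry estimates (e.g. in a Kalman-style filter) rather than used directly.
    
    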
    Article
    A Unified Monocular Vision-Based Driving Model for Autonomous Vehicles With Multi-Task Capabilities
    (Institute of Electrical and Electronics Engineers Inc., 2025) Azak, S.; Bozkaya, F.; Tiglioglu, S.; Yusefi, A.; Durdu, A.
    Recent progress in autonomous driving primarily relies on sensor-rich systems, encompassing radars, LiDARs, and advanced cameras, to perceive the environment. However, human-operated vehicles showcase an impressive ability to drive based solely on visual perception. This study introduces an end-to-end method for predicting the steering angle and vehicle speed exclusively from a monocular camera image. Alongside the color image, which conveys scene texture and appearance details, a monocular depth image and a semantic segmentation image are internally derived and incorporated, offering insights into spatial and semantic environmental structures. This results in a total of three input images. Moreover, LSTM units are also employed to acquire temporal features. The proposed model demonstrates a significant enhancement in RMSE compared to the state-of-the-art, achieving a notable improvement of 44.96% for the steering angle and 4.39% for the speed on the Udacity dataset. Furthermore, tests on the CARLA and Sully Chen datasets yield results that outperform those reported in the literature. Extensive ablation studies are also conducted to showcase the effectiveness of each component. These findings highlight the potential of self-driving systems using visual input alone.
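    The abstract describes fusing three per-frame inputs (color, derived depth, derived segmentation) and passing temporal features through LSTM units. The paper's actual architecture and dimensions are not given here; this is only a sketch of the two ideas with hypothetical names (`fuse_frame`, `lstm_step`) and a common LSTM gate ordering (input, forget, output, candidate).

    ```python
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def fuse_frame(rgb_feat, depth_feat, seg_feat):
        # Hypothetical fusion: concatenate per-modality feature vectors
        # (color, monocular depth, semantic segmentation) into one input.
        return np.concatenate([rgb_feat, depth_feat, seg_feat])

    def lstm_step(x, h, c, W, U, b):
        """One LSTM step over the fused per-frame features.
        W maps the input x, U maps the previous hidden state h,
        and b is the bias; gates are stacked as [i, f, o, g]."""
        z = W @ x + U @ h + b
        n = h.size
        i, f, o = sigmoid(z[:n]), sigmoid(z[n:2 * n]), sigmoid(z[2 * n:3 * n])
        g = np.tanh(z[3 * n:])          # candidate cell update
        c = f * c + i * g               # new cell state
        h = o * np.tanh(c)              # new hidden state
        return h, c
    ```

    Running `lstm_step` over consecutive fused frames yields a hidden state that a pair of output heads could map to steering angle and speed; in practice one would use a deep-learning framework's LSTM rather than this hand-rolled cell.
    
    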
    Conference Object
    Citation - Scopus: 1
    Use of the YOLO Algorithm for Traffic Sign Detection in Autonomous Vehicles and Improvement Using Data Replication Methods
    (Institute of Electrical and Electronics Engineers Inc., 2023) Budak, S.; Bozkaya, F.; Akmaz, M.Y.; Tiglioglu, S.; Boynukara, C.; Kazancı, O.; Budak, Z.H.Y.
    Autonomous vehicles use many technologies and methods to detect and act on surrounding objects. The most common among these is an algorithm called YOLO (You Only Look Once), which quickly detects objects in an image and classifies them accurately. This study examines the use of the YOLO algorithm for traffic sign detection in autonomous vehicles and how this algorithm can be improved. First, the basic principles and working mechanisms of the YOLO algorithm are explained. Then, it is explained in detail how this algorithm can be used for traffic sign detection in autonomous vehicles. Various models were trained using the YOLO algorithm and a dataset created from real-world data, and the trained models were tested on real-time systems. Finally, suggestions for improving the YOLO algorithm are presented, and how it can be improved further in the future is discussed.
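    The title mentions improving detection through "data replication methods", which the abstract does not detail. A common reading is expanding a small labelled traffic-sign set by duplicating each sample with light photometric jitter before training; the sketch below assumes that interpretation, and `replicate_samples`, `copies`, and the brightness-jitter choice are all hypothetical.

    ```python
    import random

    def replicate_samples(dataset, copies=3, seed=0):
        """Hypothetical data-replication step: expand a labelled
        traffic-sign dataset by duplicating each image with random
        brightness jitter, keeping the label unchanged. Each image is
        modeled as a flat list of 0-255 pixel intensities."""
        rng = random.Random(seed)
        augmented = []
        for image, label in dataset:
            augmented.append((image, label))  # keep the original sample
            for _ in range(copies):
                factor = rng.uniform(0.7, 1.3)  # random brightness factor
                jittered = [min(255, int(px * factor)) for px in image]
                augmented.append((jittered, label))
        return augmented
    ```

    Brightness jitter is only one example; in practice a YOLO training pipeline would combine several such augmentations (scaling, cropping, color shifts), while avoiding transforms like horizontal flips that change the meaning of a traffic sign.
    
    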