GCRIS Repository
Browsing by Author "Cinar, Ilkay"

Now showing 1 - 1 of 1
    Article
    Real-Time and Fully Automated Robotic Stacking System with Deep Learning-Based Visual Perception
    (MDPI, 2025) Ozer, Ali Sait; Cinar, Ilkay
Highlights
  • The proposed framework represents a fully deployable AI-driven automation system that enhances operational accuracy, flexibility, and efficiency.
  • It establishes a benchmark for smart manufacturing solutions that integrate machine vision, robotics, and industrial communication technologies.
  • The study contributes to the advancement of Industry 4.0 practices by validating an intelligent production model applicable to real industrial environments.

What are the main findings?
  • A real-time image processing framework was developed in Python using YOLOv5 models and integrated directly into an industrial production line.
  • The system combined object classification results with a Siemens S7-1200 PLC via Profinet communication, enabling synchronized control of the robotic arm, conveyor motors, and sensors.

What are the implications of the main findings?
  • The integration of deep learning-based visual perception with PLC-controlled automation enables seamless communication between vision and mechanical components in industrial settings.
  • The validated framework demonstrates scalability and real-world applicability, offering an effective solution for multi-class object detection and robotic stacking in manufacturing environments.

Abstract
This study presents a fully automated, real-time robotic stacking system based on deep learning-driven visual perception, designed to optimize classification and handling tasks on industrial production lines. The proposed system integrates a YOLOv5s-based object detection algorithm with an ABB IRB6640 robotic arm via a programmable logic controller and the Profinet communication protocol. Using a camera mounted above a conveyor belt and a Python-based interface, 13 different types of industrial bags were classified and sorted. The trained model achieved high validation performance with an mAP@0.5 score of 0.99 and demonstrated 99.08% classification accuracy in initial field tests. Following environmental and mechanical optimizations, such as adjustments to lighting, camera angle, and cylinder alignment, the system reached 100% operational accuracy during real-world applications involving 9600 packages over five days. With an average cycle time of 10-11 s, the system supports a processing capacity of up to six items per minute, exhibiting robustness, adaptability, and real-time performance. This integration of computer vision, robotics, and industrial automation offers a scalable solution for future smart manufacturing applications.
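The abstract describes a pipeline in which YOLOv5s detections made in Python are pushed to a Siemens S7-1200 PLC over Profinet to drive the stacking sequence. The following is a minimal sketch of that vision-to-PLC handoff logic only; the confidence threshold, sentinel value, byte layout, and function names are illustrative assumptions, not details from the publication.

```python
# Hypothetical sketch of the vision-to-PLC handoff described in the abstract.
# The thresholds, sentinel, and payload layout below are assumptions for
# illustration; they are not specified in the paper.

CONFIDENCE_THRESHOLD = 0.5   # assumed cut-off for accepting a detection
NUM_CLASSES = 13             # 13 industrial bag types, per the abstract
NO_OBJECT = 0xFF             # assumed sentinel meaning "nothing to stack"

def select_class(detections):
    """Pick the highest-confidence detection above the threshold.

    `detections` is a list of (class_id, confidence) tuples, as one might
    extract from a YOLOv5 inference result. Returns a class id in 0..12,
    or NO_OBJECT if no detection qualifies.
    """
    best_id, best_conf = NO_OBJECT, CONFIDENCE_THRESHOLD
    for class_id, conf in detections:
        if 0 <= class_id < NUM_CLASSES and conf >= best_conf:
            best_id, best_conf = class_id, conf
    return best_id

def encode_plc_payload(class_id):
    """Pack the decision into a 2-byte payload for a PLC data block:
    byte 0 = class id, byte 1 = a 'new result' handshake flag.

    In a real deployment this payload would be written to the S7-1200
    with an S7/Profinet client library (e.g. python-snap7's
    Client.db_write); that choice is an assumption, not the paper's.
    """
    return bytes([class_id & 0xFF, 0x01])

if __name__ == "__main__":
    # Example frame with two candidate bags; class 4 wins on confidence.
    frame_detections = [(4, 0.97), (9, 0.61)]
    chosen = select_class(frame_detections)
    print(chosen, encode_plc_payload(chosen))
```

The handshake flag in byte 1 mirrors the kind of synchronization the paper reports between the vision result and the PLC-controlled robotic arm and conveyor: the PLC can clear the flag once the stacking cycle consumes the result, so a stale classification is never acted on twice.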