Real-Time and Fully Automated Robotic Stacking System with Deep Learning-Based Visual Perception
| dc.contributor.author | Ozer, Ali Sait | |
| dc.contributor.author | Cinar, Ilkay | |
| dc.date.accessioned | 2025-12-24T21:38:19Z | |
| dc.date.available | 2025-12-24T21:38:19Z | |
| dc.date.issued | 2025 | |
| dc.description.abstract | Highlights: The proposed framework represents a fully deployable AI-driven automation system that enhances operational accuracy, flexibility, and efficiency. It establishes a benchmark for smart manufacturing solutions that integrate machine vision, robotics, and industrial communication technologies. The study contributes to the advancement of Industry 4.0 practices by validating an intelligent production model applicable to real industrial environments. What are the main findings? A real-time image processing framework was developed in Python using YOLOv5 models and directly integrated into an industrial production line. The system successfully combined object classification results with a Siemens S7-1200 PLC via Profinet communication, enabling synchronized control of the robotic arm, conveyor motors, and sensors. What are the implications of the main findings? The integration of deep learning-based visual perception with PLC-controlled automation enables seamless communication between vision and mechanical components in industrial settings. The validated framework demonstrates scalability and real-world applicability, offering an effective solution for multi-class object detection and robotic stacking in manufacturing environments. Abstract: This study presents a fully automated, real-time robotic stacking system based on deep learning-driven visual perception, designed to optimize classification and handling tasks on industrial production lines. The proposed system integrates a YOLOv5s-based object detection algorithm with an ABB IRB6640 robotic arm via a programmable logic controller and the Profinet communication protocol. Using a camera mounted above a conveyor belt and a Python-based interface, 13 different types of industrial bags were classified and sorted. The trained model achieved high validation performance with an mAP@0.5 score of 0.99 and demonstrated 99.08% classification accuracy in initial field tests. Following environmental and mechanical optimizations, such as adjustments to lighting, camera angle, and cylinder alignment, the system reached 100% operational accuracy during real-world operation involving 9600 packages over five days. With an average cycle time of 10-11 s, the system supports a processing capacity of up to six items per minute, exhibiting robustness, adaptability, and real-time performance. This integration of computer vision, robotics, and industrial automation offers a scalable solution for future smart manufacturing applications. | en_US |
| dc.description.sponsorship | Scientific Research Projects Coordinatorship of Selcuk University [25601075] | en_US |
| dc.description.sponsorship | This research was funded by the Scientific Research Projects Coordinatorship of Selcuk University, grant number 25601075. | en_US |
| dc.identifier.doi | 10.3390/s25226960 | |
| dc.identifier.issn | 1424-8220 | |
| dc.identifier.scopus | 2-s2.0-105022903654 | |
| dc.identifier.uri | https://doi.org/10.3390/s25226960 | |
| dc.identifier.uri | https://hdl.handle.net/123456789/12735 | |
| dc.language.iso | en | en_US |
| dc.publisher | MDPI | en_US |
| dc.relation.ispartof | Sensors | en_US |
| dc.rights | info:eu-repo/semantics/openAccess | en_US |
| dc.subject | Computer Vision | en_US |
| dc.subject | Industrial Automation | en_US |
| dc.subject | Programmable Logic Controller Integration | en_US |
| dc.subject | Real-Time Object Detection | en_US |
| dc.subject | Robotic Stacking | en_US |
| dc.subject | Smart Manufacturing | en_US |
| dc.title | Real-Time and Fully Automated Robotic Stacking System with Deep Learning-Based Visual Perception | en_US |
| dc.type | Article | en_US |
| dspace.entity.type | Publication | |
| gdc.author.scopusid | 57730535300 | |
| gdc.author.scopusid | 57224821251 | |
| gdc.author.wosid | Özer, Ali Sait/GQZ-2616-2022 | |
| gdc.author.wosid | Cinar, Ilkay/GLS-2427-2022 | |
| gdc.bip.impulseclass | C5 | |
| gdc.bip.influenceclass | C5 | |
| gdc.bip.popularityclass | C5 | |
| gdc.coar.access | open access | |
| gdc.coar.type | text::journal::journal article | |
| gdc.description.department | Konya Technical University | en_US |
| gdc.description.departmenttemp | [Ozer, Ali Sait] Konya Tech Univ, Dept Control & Automat Technol, TR-42250 Konya, Turkiye; [Cinar, Ilkay] Selcuk Univ, Dept Comp Engn, TR-42250 Konya, Turkiye | en_US |
| gdc.description.issue | 22 | en_US |
| gdc.description.publicationcategory | Article - International Peer-Reviewed Journal - Institutional Faculty Member | en_US |
| gdc.description.scopusquality | Q1 | |
| gdc.description.startpage | 6960 | |
| gdc.description.volume | 25 | en_US |
| gdc.description.woscitationindex | Science Citation Index Expanded | |
| gdc.description.wosquality | Q2 | |
| gdc.identifier.openalex | W4416209010 | |
| gdc.identifier.pmid | 41305167 | |
| gdc.identifier.wos | WOS:001624459400001 | |
| gdc.index.type | WoS | |
| gdc.index.type | Scopus | |
| gdc.index.type | PubMed | |
| gdc.oaire.impulse | 0.0 | |
| gdc.oaire.influence | 2.4895952E-9 | |
| gdc.oaire.keywords | Article | |
| gdc.oaire.popularity | 2.7494755E-9 | |
| gdc.openalex.collaboration | National | |
| gdc.opencitations.count | 0 | |
| gdc.plumx.mendeley | 3 | |
| gdc.plumx.scopuscites | 0 | |
| gdc.scopus.citedcount | 0 | |
| gdc.virtual.author | Özer, Ali Sait | |
| gdc.wos.citedcount | 0 | |
| relation.isAuthorOfPublication | ea2ff530-9a3e-4a0f-b651-8660510b7766 | |
| relation.isAuthorOfPublication.latestForDiscovery | ea2ff530-9a3e-4a0f-b651-8660510b7766 |
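
As a companion to the abstract above, the vision-to-PLC integration it describes (YOLOv5 detection running in a Python process, with class results handed to a Siemens S7-1200 that drives the robot and conveyor) can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the ultralytics/yolov5 torch-hub interface, the python-snap7 client, the PLC address, the data-block layout, the camera index, and the `best.pt` weights file are all hypothetical choices introduced here for illustration.

```python
# Minimal sketch of the vision-to-PLC loop described in the abstract.
# Assumptions (not from the paper): the ultralytics/yolov5 torch-hub API,
# the python-snap7 library for S7 communication, camera index 0, DB1 as
# the shared data block, and a weights file named "best.pt".

import cv2
import snap7
import torch

PLC_IP = "192.168.0.1"   # hypothetical PLC address
DB_NUMBER = 1            # hypothetical data block holding the class code

# Load a custom-trained YOLOv5 model (the paper uses YOLOv5s with 13
# industrial bag classes).
model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")
model.conf = 0.5  # confidence threshold; tune for the deployment

# Connect to the S7-1200. Note: python-snap7 speaks S7comm over the same
# Ethernet port that carries Profinet; DB access typically requires
# "optimized block access" to be disabled on the target data block.
plc = snap7.client.Client()
plc.connect(PLC_IP, rack=0, slot=1)

cap = cv2.VideoCapture(0)  # camera mounted above the conveyor

try:
    while True:
        ok, frame = cap.read()
        if not ok:
            continue

        # OpenCV delivers BGR; YOLOv5's numpy path expects RGB.
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        results = model(rgb)
        detections = results.xyxy[0]  # rows: x1, y1, x2, y2, conf, class

        if len(detections):
            # Take the highest-confidence detection as the bag type.
            best = detections[detections[:, 4].argmax()]
            class_id = int(best[5])

            # Write the class code as one byte at DB offset 0; the PLC
            # program then routes the robotic arm and conveyor motors.
            plc.db_write(DB_NUMBER, 0, bytearray([class_id]))
finally:
    cap.release()
    plc.disconnect()
```

Keeping the Python side limited to writing a single class code, while the PLC program retains authority over the robotic arm, conveyor motors, and sensors, mirrors the synchronized control described in the abstract and keeps motion-critical logic on the deterministic controller.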
