Browsing by Author "Ozer, Ali Sait"
Now showing 1 - 2 of 2
Conference Object
Enhanced DSOGI-PLL Based Control Strategy for DSTATCOM (IEEE, 2025) Ozer, Ali Sait; Karaca, Hulusi
A self-excited induction generator (SEIG) is highly preferred in standalone wind power generation systems due to its robust structure. For the voltage and frequency generated by the SEIG to remain stable and constant, active and reactive power control is required, and a DSTATCOM is widely used for this purpose. The effectiveness of the DSTATCOM depends on the control algorithm, which generates reference currents from the load currents, SEIG voltages, and frequency. When the SEIG feeds nonlinear and unbalanced loads, poorly filtered load currents cause the active components, reactive components, and frequency to be misestimated. In this paper, an enhanced DSOGI-PLL (EDSOGI-PLL) based control algorithm with superior filtering capability is proposed for voltage and frequency control of the SEIG. The proposed algorithm is tested under nonlinear load and nonlinear unbalanced load conditions, and the results clearly demonstrate the superiority of the EDSOGI-PLL based algorithm.
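The enhanced filter design itself is not reproduced on this page. As background only, the standard second-order generalized integrator quadrature signal generator (SOGI-QSG), which a DSOGI-PLL instantiates twice for the alpha and beta voltage components, is commonly written with the transfer functions below; here omega_r denotes the resonance (PLL-estimated) frequency and k the gain setting the filter bandwidth, both standard notation rather than symbols taken from this paper:

```latex
% SOGI-QSG: D(s) is band-pass, in phase with the input v;
% Q(s) is the quadrature (90-degree lagging) output used by the PLL.
D(s) = \frac{v'(s)}{v(s)} = \frac{k\,\omega_r s}{s^2 + k\,\omega_r s + \omega_r^2},
\qquad
Q(s) = \frac{q v'(s)}{v(s)} = \frac{k\,\omega_r^2}{s^2 + k\,\omega_r s + \omega_r^2}
```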
Article
Real-Time and Fully Automated Robotic Stacking System with Deep Learning-Based Visual Perception (MDPI, 2025) Ozer, Ali Sait; Cinar, Ilkay
Highlights
The proposed framework represents a fully deployable AI-driven automation system that enhances operational accuracy, flexibility, and efficiency. It establishes a benchmark for smart manufacturing solutions that integrate machine vision, robotics, and industrial communication technologies. The study contributes to the advancement of Industry 4.0 practices by validating an intelligent production model applicable to real industrial environments.
What are the main findings?
A real-time image processing framework was developed in Python using YOLOv5 models and directly integrated into an industrial production line. The system combined object classification results with a Siemens S7-1200 PLC via Profinet communication, enabling synchronized control of the robotic arm, conveyor motors, and sensors.
What are the implications of the main findings?
The integration of deep learning-based visual perception with PLC-controlled automation enables seamless communication between vision and mechanical components in industrial settings. The validated framework demonstrates scalability and real-world applicability, offering an effective solution for multi-class object detection and robotic stacking in manufacturing environments.
Abstract
This study presents a fully automated, real-time robotic stacking system based on deep learning-driven visual perception, designed to optimize classification and handling tasks on industrial production lines. The proposed system integrates a YOLOv5s-based object detection algorithm with an ABB IRB6640 robotic arm via a programmable logic controller and the Profinet communication protocol. Using a camera mounted above a conveyor belt and a Python-based interface, 13 different types of industrial bags were classified and sorted. The trained model achieved high validation performance with an mAP@0.5 score of 0.99 and demonstrated 99.08% classification accuracy in initial field tests. Following environmental and mechanical optimizations, such as adjustments to lighting, camera angle, and cylinder alignment, the system reached 100% operational accuracy in real-world runs involving 9600 packages over five days. With an average cycle time of 10-11 s, the system supports a processing capacity of up to six items per minute, exhibiting robustness, adaptability, and real-time performance. This integration of computer vision, robotics, and industrial automation offers a scalable solution for future smart manufacturing applications.
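The abstract describes a Python vision loop pushing YOLOv5 classifications to a Siemens S7-1200. A minimal sketch of that pattern follows; it loads a pretrained YOLOv5s model via torch.hub and writes the winning class index into a PLC data block using python-snap7, a common S7 read/write library. The paper itself communicates over Profinet, and the camera index, PLC address, and data-block layout below are illustrative assumptions, not the authors' configuration.

```python
# Sketch: YOLOv5 detections feeding a class ID to an S7-1200 data block.
# Assumptions (not from the paper): camera index 0, PLC at 192.168.0.10,
# class ID written as one byte at offset 0 of DB1, python-snap7 transport
# (requires PUT/GET access to be enabled on the S7-1200).
import cv2
import torch
import snap7

model = torch.hub.load("ultralytics/yolov5", "yolov5s")  # pretrained YOLOv5s

plc = snap7.client.Client()
plc.connect("192.168.0.10", rack=0, slot=1)

cap = cv2.VideoCapture(0)  # camera above the conveyor
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # model expects RGB
    det = model(rgb).xyxy[0]  # rows of (x1, y1, x2, y2, conf, class)
    if len(det):
        # Send the class index of the most confident detection.
        cls_id = int(det[det[:, 4].argmax(), 5])
        plc.db_write(1, 0, bytearray([cls_id]))  # DB1, byte 0
cap.release()
plc.disconnect()
```

In a production cell like the one described, the PLC side would poll that data block and dispatch the robot, conveyor, and sensor logic accordingly; keeping the vision process stateless and writing only a compact result keeps the Python loop within the 10-11 s cycle budget the abstract reports.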

