
(1) Yan Kai Tan

(2) Kar Mun Chin

(3) * Yeh Huann Goh

(4) Tsung Heng Chiew

(5) Terence Sy Horng Ting

(6) Ge MA

(7) Chong Keat How

*corresponding author
Abstract
The assembly of fastening components traditionally relies on labour-intensive human-machine collaboration, which incurs high costs. Existing methods often assume fixed component positions or rely on markers for guidance, which require extra effort to place and maintain. This study develops an intelligent control system for a vision-equipped robotic arm that autonomously assembles fastening components in industrial settings, enhancing flexibility and reducing labour costs. The system integrates object detection with edge and ellipse detection, alongside filtering techniques, to accurately locate the centres of the fastening components. The key contribution is the system's ability to perform autonomous assembly without predefined positions, enhancing flexibility in varied environments. YOLOv8 detects the bolt and nut, after which edge and ellipse detection pinpoint the centre coordinates. A depth camera and kinematic calculations enable accurate 3D positioning for pick-and-place tasks. In experiments conducted under randomly arranged conditions, the system achieved over 99% detection accuracy, with fewer than 1% of targets undetected. It achieved an 87% average success rate for picking fastening components ranging from M6 to M18, and a 90% success rate for precise placement. The system also proved robust across component sizes, with a minor increase in orientation errors for smaller components, attributed to depth estimation challenges. Future work could explore alternative methods of depth data collection to improve accuracy. These results confirm the system's reliability in flexible assembly tasks and its potential to reduce costs by minimising manual involvement in industrial settings.
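The 3D positioning step described above, converting a detected centre pixel and a depth reading into a camera-frame pick point, follows the standard pinhole camera model. The sketch below is not the authors' code; the intrinsics values (`fx`, `fy`, `cx`, `cy`) and the `deproject` helper are hypothetical examples of how such a step might look, assuming the depth camera reports metric depth at the ellipse-centre pixel.

```python
# Minimal sketch (not the authors' implementation): back-projecting a
# detected centre pixel plus a depth reading into 3D camera coordinates
# using the pinhole model. A real depth camera supplies its own
# calibrated intrinsics; the values below are illustrative only.

from dataclasses import dataclass

@dataclass
class Intrinsics:
    fx: float  # focal length in pixels, x axis
    fy: float  # focal length in pixels, y axis
    cx: float  # principal point, x (pixels)
    cy: float  # principal point, y (pixels)

def deproject(u: float, v: float, depth_m: float, K: Intrinsics) -> tuple:
    """Back-project pixel (u, v) at range depth_m into camera coordinates (metres)."""
    x = (u - K.cx) * depth_m / K.fx
    y = (v - K.cy) * depth_m / K.fy
    return (x, y, depth_m)

# Hypothetical 640x480 camera; a pixel at the principal point maps to the optical axis.
K = Intrinsics(fx=615.0, fy=615.0, cx=320.0, cy=240.0)
print(deproject(320.0, 240.0, 0.50, K))  # -> (0.0, 0.0, 0.5)
```

The resulting camera-frame point would then be transformed into the robot base frame via the arm's kinematics before commanding the pick, as the abstract outlines.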
Keywords: Robotic Vision-Based Assembly; YOLOv8 Object Detection; Ellipse Detection; 3D Positioning; Depth Camera
DOI: https://doi.org/10.31763/ijrcs.v5i1.1705
Copyright (c) 2024 Yan Kai Tan, Kar Mun Chin, Yeh Huann Goh, Tsung Heng Chiew, Terence Sy Horng Ting, Ge MA, Chong Keat How

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
International Journal of Robotics and Control Systems
e-ISSN: 2775-2658
Website: https://pubs2.ascee.org/index.php/IJRCS
Email: ijrcs@ascee.org
Organized by: Association for Scientific Computing Electronics and Engineering (ASCEE), Peneliti Teknologi Teknik Indonesia, Department of Electrical Engineering, Universitas Ahmad Dahlan and Kuliah Teknik Elektro
Published by: Association for Scientific Computing Electronics and Engineering (ASCEE)
Office: Jalan Janti, Karangjambe 130B, Banguntapan, Bantul, Daerah Istimewa Yogyakarta, Indonesia