Accurate Robot Navigation Using Visual Invariant Features and Dynamic Neural Fields

(1) Younès Raoui* (Mohammed V University in Rabat, Morocco)
(2) Nouzha Elmennaoui (Mohammed V University in Rabat, Morocco)
*corresponding author

Abstract


Robot navigation systems rely on Simultaneous Localization and Mapping (SLAM) and obstacle avoidance. Maps are built for the robot with computer vision methods that require high repeatability for consistent feature tracking, while obstacle avoidance needs an efficient tool for fusing data from multiple sensors. This research improves SLAM accuracy and obstacle avoidance using advanced visual processing and dynamic neural fields (DNF). We propose two key methods: (1) an enhanced multiscale Harris detector based on steerable filters for robust feature extraction, achieving around 90% repeatability; and (2) a dynamic neural field algorithm that predicts the optimal heading angle by integrating visual descriptors and LIDAR data. Experimental results for the first method show that the new feature detector achieves high accuracy and outperforms existing methods; its invariance to image orientation makes it insensitive to robot rotations. When applied to monocular SLAM, it yielded precise estimates of the robot's positions. For the second method, the results show that the dynamic neural field algorithm ensures efficient obstacle avoidance by fusing the gist of the image with LIDAR data, resulting in more accurate and consistent navigation than laser-only methods. In conclusion, the study presents significant advances in robot navigation through robust feature detection for SLAM and effective obstacle avoidance using dynamic neural fields, enhancing precision and reliability and paving the way for future innovations in autonomous robotic applications.
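To make the heading-selection idea of the second method concrete, the following is a minimal sketch of an Amari-type dynamic neural field [1] defined over candidate heading angles, in which obstacle evidence from LIDAR inhibits headings and a visual (gist-like) cue excites them; the selected heading is the angle at which the field activation peaks. The function name `dnf_heading`, the way the two inputs are combined, and all parameter values are illustrative assumptions for this sketch, not the authors' implementation.

```python
import numpy as np

def dnf_heading(lidar_ranges, visual_salience, dt=0.05, tau=0.5, h=-2.0,
                steps=200, sigma_exc=0.3, c_exc=3.0, c_inh=1.0):
    """Relax an Amari-type field u(phi) over heading angles and return the
    heading at the activation peak. Names and parameter values are
    illustrative assumptions, not the paper's implementation."""
    n = len(lidar_ranges)
    angles = np.linspace(-np.pi, np.pi, n, endpoint=False)

    # External input: a visual salience cue excites a heading, while nearby
    # obstacles (small LIDAR range) suppress the corresponding heading.
    obstacle = 1.0 / np.maximum(np.asarray(lidar_ranges, dtype=float), 1e-3)
    s = np.asarray(visual_salience, dtype=float) - obstacle / obstacle.max()

    # Lateral interaction kernel: local excitation, global inhibition,
    # using wrap-around distance on the circle of headings.
    d = np.abs(angles[:, None] - angles[None, :])
    d = np.minimum(d, 2.0 * np.pi - d)
    w = c_exc * np.exp(-d**2 / (2.0 * sigma_exc**2)) - c_inh

    # Euler integration of tau * du/dt = -u + h + s + (w * f(u))
    u = np.full(n, h)
    for _ in range(steps):
        f = 1.0 / (1.0 + np.exp(-u))                   # sigmoidal firing rate
        du = -u + h + s + (w @ f) * (2.0 * np.pi / n)  # discretized convolution
        u += (dt / tau) * du

    return angles[np.argmax(u)]

# Usage example (hypothetical data): an obstacle straight ahead and a visual
# cue about 1.2 rad to one side should pull the selected heading toward the cue.
ranges = np.full(72, 4.0)
ranges[30:42] = 0.4
theta = np.linspace(-np.pi, np.pi, 72, endpoint=False)
salience = np.exp(-0.5 * ((theta + 1.2) / 0.3) ** 2)
print(dnf_heading(ranges, salience))
```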


Keywords


Robot Navigation; Monocular SLAM; Visual Feature Points; Dynamic Neural Fields; Obstacle Avoidance

   

DOI

https://doi.org/10.31763/ijrcs.v4i4.1545
      


References


[1] S.-i. Amari, “Dynamics of pattern formation in lateral-inhibition type neural fields,” Biological Cybernetics, vol. 27, pp. 77–87, 1977, https://doi.org/10.1007/BF00337259.

[2] R. Mur-Artal and J. D. Tardós, “ORB-SLAM2: An Open-Source SLAM System for Monocular, Stereo, and RGB-D Cameras,” IEEE Transactions on Robotics, vol. 33, no. 5, pp. 1255–1262, 2017, https://doi.org/10.1109/TRO.2017.2705103.

[3] D. DeTone, T. Malisiewicz, and A. Rabinovich, “SuperPoint: Self-Supervised Interest Point Detection and Description,” 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 337–33712, 2018, https://doi.org/10.1109/CVPRW.2018.00060.

[4] M. Dusmanu et al., “D2-Net: A Trainable CNN for Joint Description and Detection of Local Features,” 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8084–8093, 2019, https://doi.org/10.1109/CVPR.2019.00828.

[5] A. F. de Araújo, “Deep image features for instance-level recognition and matching,” Proceedings of the 2nd Workshop on Structuring and Understanding of Multimedia heritAge Contents, 2020, https://doi.org/10.1145/3423323.3423414.

[6] J. Revaud, P. Weinzaepfel, C. R. De Souza, N. Pion, G. Csurka, Y. Cabon, and M. Humenberger, “R2D2: Repeatable and reliable detector and descriptor,” Advances in Neural Information Processing Systems, pp. 12405–12415, 2019, https://arxiv.org/pdf/1906.06195.

[7] D. Mishkin, F. Radenovic, and J. Matas, “Repeatability is not enough: Learning affine regions via discriminability,” European Conference on Computer Vision, pp. 287–304, 2018, https://doi.org/10.1007/978-3-030-01240-3_18.

[8] Z. Luo, L. Zhou, X. Bai, Y. Yao, J. Li, Z. Hu, M. Chai, and L. Quan, “ASLFeat: Learning Local Features of Accurate Shape and Localization,” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6589–6598, 2020, https://openaccess.thecvf.com/content_CVPR_2020/papers/Luo_ASLFeat_Learning_Local_Features_of_Accurate_Shape_and_Localization_CVPR_2020_paper.pdf.

[9] M. Dusmanu et al., “D2-Net: A Trainable CNN for Joint Description and Detection of Local Features,” 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8084–8093, 2019, https://doi.org/10.1109/CVPR.2019.00828.

[10] Z. Luo et al., “ContextDesc: Local Descriptor Augmentation With Cross-Modality Context,” 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2522–2531, 2019, https://doi.org/10.1109/CVPR.2019.00263.

[11] F. Darmon, M. Aubry, and P. Monasse, “Learning to guide local feature matches,” 2020 International Conference on 3D Vision (3DV), pp. 1127–1136, 2020, https://doi.org/10.1109/3DV50981.2020.00123.

[12] J. Corsetti, D. Boscaini, and F. Poiesi, “Revisiting fully convolutional geometric features for object 6D pose estimation,” 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), pp. 2095–2104, 2023, https://doi.org/10.1109/ICCVW60793.2023.00224.

[13] Q. Chang, Q. Liu, X. Yang, Y. Huang, F. Ren, and Y. Cui, “The relocalization of SLAM tracking based on spherical cameras,” IEEE Access, vol. 9, pp. 159764–159783, 2021, https://doi.org/10.1109/ACCESS.2021.3130928.

[14] Y. Ono, E. Trulls, P. Fua, and K. M. Yi, “LF-Net: Learning local features from images,” Proceedings of the 32nd International Conference on Neural Information Processing Systems, pp. 6234–6244, 2018, https://arxiv.org/pdf/1805.09662.

[15] S. A. K. Tareen and Z. Saleem, “A comparative analysis of SIFT, SURF, KAZE, AKAZE, ORB, and BRISK,” 2018 International Conference on Computing, Mathematics and Engineering Technologies (iCoMET), pp. 1–10, 2018, https://doi.org/10.1109/ICOMET.2018.8346440.

[16] D. A. Jatmiko and S. U. Prini, “Study and performance evaluation binary robust invariant scalable keypoints (BRISK) for underwater image stitching,” IOP Conference Series: Materials Science and Engineering, vol. 879, 2020, https://doi.org/10.1088/1757-899X/879/1/012111.

[17] R. Rahmania, M. S. Anggreainy, I. H. Kartowisastro, and W. Budiharto, “Identification of clustering BRISK keypoint feature in grocery product using the elbow method,” 2024 IEEE International Conference on Industry 4.0, Artificial Intelligence, and Communications Technology (IAICT), pp. 106–111, 2024, https://doi.org/10.1109/IAICT62357.2024.10617640.

[18] Q. Mu, Y. Wang, S. Guo, and Z. Li, “Indoor visual odometry algorithm based on adaptive feature fusion,” 2022 2nd International Conference on Algorithms, High Performance Computing and Artificial Intelligence (AHPCAI), pp. 121–127, 2022, https://doi.org/10.1109/AHPCAI57455.2022.10087841.

[19] L. Ruotsalainen, A. Morrison, M. Mäkelä, J. Rantanen, and N. Sokolova, “Improving computer vision based perception for collaborative indoor navigation,” IEEE Sensors Journal, vol. 22, no. 6, pp. 4816–4826, 2022, https://doi.org/10.1109/JSEN.2021.3106257.

[20] G. D. Molina, M. Hansen, J. Getchius, R. S. Christensen, J. A. Christian, S. M. Stewart, and T. Crain, “AAS 22-113: Visual odometry for precision lunar landing,” 2022, https://www.semanticscholar.org/paper/AAS-22-113-VISUAL-ODOMETRY-FOR-PRECISION-LUNAR-Molina-Hansen/fdb89e10da6e2587f5701e4d693958cc245f0353.

[21] F. Bellavia and D. Mishkin, “HarrisZ+: Harris corner selection for next-gen image matching pipelines,” Pattern Recognition Letters, vol. 158, pp. 141–147, 2022, https://doi.org/10.1016/j.patrec.2022.04.022.

[22] E. Riba, D. Mishkin, D. Ponsa, E. Rublee, and G. R. Bradski, “Kornia: an open source differentiable computer vision library for PyTorch,” 2020 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 3663–3672, 2020, https://doi.org/10.1109/WACV45572.2020.9093363.

[23] Y. Jin, D. Mishkin, A. Mishchuk, J. Matas, P. Fua, K. M. Yi, and E. Trulls, “Image matching across wide baselines: From paper to practice,” International Journal of Computer Vision, vol. 129, pp. 517–547, 2021, https://doi.org/10.1007/s11263-020-01385-0.

[24] A. Fontan, J. Civera, and M. Milford, “AnyFeature-VSLAM: Automating the usage of any feature into visual SLAM,” Robotics: Science and Systems (RSS), 2024, https://doi.org/10.15607/rss.2024.xx.084.

[25] R. Elvira, J. D. Tardós, and J. M. M. Montiel, “ORBSLAM-Atlas: a robust and accurate multi-map system,” 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 6253–6259, 2019, https://doi.org/10.1109/IROS40897.2019.8967572.

[26] J. Orr and A. Dutta, “Multi-agent deep reinforcement learning for multi-robot applications: A survey,” Sensors, vol. 23, no. 7, p. 3625, 2023, https://doi.org/10.3390/s23073625.

[27] K. Sviatov, A. Miheev, S. Sukhov, Y. Lapshov, and S. Rapp, “Detection of obstacle features using neural networks with attention in the task of autonomous navigation of mobile robots,” Computational Science and Its Applications – ICCSA 2020, pp. 1013–1026, 2020, https://doi.org/10.1007/978-3-030-58817-5_72.

[28] S. S. Gu, E. Holly, T. P. Lillicrap, and S. Levine, “Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates,” 2017 IEEE International Conference on Robotics and Automation (ICRA), pp. 3389–3396, 2017, https://doi.org/10.1109/ICRA.2017.7989385.

[29] L. Schulze and H. Lipson, “High-degrees-of-freedom dynamic neural fields for robot self-modeling and motion planning,” 2024 IEEE International Conference on Robotics and Automation (ICRA), pp. 3064–3070, 2024, https://doi.org/10.1109/ICRA57147.2024.10611047.

[30] J. Zhang, L. Jin, T. Deng, and M. Tahir, “Editorial: Learning and control in robotic systems aided with neural dynamics,” Frontiers in Neurorobotics, vol. 17, 2023, https://doi.org/10.3389/fnbot.2023.1193119.

[31] J. Li, J. Chavez-Galaviz, K. Azizzadenesheli, and N. Mahmoudian, “Dynamic obstacle avoidance for USVs using cross-domain deep reinforcement learning and neural network model predictive controller,” Sensors, vol. 23, no. 7, p. 3572, 2023, https://doi.org/10.3390/s23073572.

[32] O. Bastani and S. Li, “Safe reinforcement learning via statistical model predictive shielding,” Robotics: Science and Systems XVII, 2021, https://doi.org/10.15607/RSS.2021.XVII.026.

[33] H. Xing and L. Yu, “Target-driven multi-input mapless robot navigation with deep reinforcement learning,” Journal of Physics: Conference Series, vol. 2513, 2023, https://doi.org/10.1088/1742-6596/2513/1/012004.

[34] G. Chen, L. Pan, Y. Chen, P. Xu, Z. Wang, P. Wu, J. Ji, and X. Chen, “Deep reinforcement learning of map-based obstacle avoidance for mobile robot navigation,” SN Computer Science, vol. 2, no. 417, 2021, https://doi.org/10.1007/s42979-021-00817-z.

[35] K. N. V. S. Varma and S. L. Kumari, “Robotic vision based obstacle avoidance for navigation of unmanned aerial vehicle using fuzzy rule based optimal deep learning model,” Evolutionary Intelligence, vol. 17, pp. 2193–2212, 2024, https://doi.org/10.1007/s12065-023-00881-9.

[36] B. Wang, Y. Sun, N. Zhao, and G. Gui, “Learn to Coloring: Fast Response to Perturbation in UAV-Assisted Disaster Relief Networks,” IEEE Transactions on Vehicular Technology, vol. 69, no. 3, pp. 3505–3509, 2020, https://doi.org/10.1109/TVT.2020.2967124.

[37] K.-T. Wei and B. Ren, “A method on dynamic path planning for robotic manipulator autonomous obstacle avoidance based on an improved RRT algorithm,” Sensors, vol. 18, no. 2, p. 571, 2018, https://doi.org/10.3390/s18020571.

[38] R. Cimurs, J. H. Lee, and I. H. Suh, “Goal-oriented obstacle avoidance with deep reinforcement learning in continuous action space,” Electronics, vol. 9, no. 3, p. 411, 2020, https://doi.org/10.3390/electronics9030411.

[39] Z. Chu, F. Wang, T. Lei, and C. Luo, “Path Planning Based on Deep Reinforcement Learning for Autonomous Underwater Vehicles Under Ocean Current Disturbance,” IEEE Transactions on Intelligent Vehicles, vol. 8, no. 1, pp. 108–120, 2023, https://doi.org/10.1109/TIV.2022.3153352.

[40] J. Cai, A. Du, X. Liang, and S. Li, “Prediction-based path planning for safe and efficient human-robot collaboration in construction via deep reinforcement learning,” Journal of Computing in Civil Engineering, vol. 37, no. 1, 2023, https://doi.org/10.1061/(ASCE)CP.1943-5487.0001056.

[41] C. Yan, G. Chen, Y. Li, F. Sun, and Y. Wu, “Immune deep reinforcement learning-based path planning for mobile robot in unknown environment,” Applied Soft Computing, vol. 145, p. 110601, 2023, https://doi.org/10.1016/j.asoc.2023.110601.

[42] H. Li, Z. Li, N. Ü. Akmandor, H. Jiang, Y. Wang, and T. Padır, “StereoVoxelNet: Real-Time Obstacle Detection Based on Occupancy Voxels from a Stereo Camera Using Deep Neural Networks,” 2023 IEEE International Conference on Robotics and Automation (ICRA), pp. 4826–4833, 2023, https://doi.org/10.1109/ICRA48891.2023.10160924.

[43] P. Wenzel, T. Schön, L. Leal-Taixé, and D. Cremers, “Vision-based mobile robotics obstacle avoidance with deep reinforcement learning,” 2021 IEEE International Conference on Robotics and Automation (ICRA), pp. 14360–14366, 2021, https://doi.org/10.1109/ICRA48506.2021.9560787.

[44] R. V. W. Putra, A. Marchisio, F. Zayer, J. Dias, and M. Shafique, “Embodied neuromorphic artificial intelligence for robotics: Perspectives, challenges, and research development stack,” arXiv, 2024, https://doi.org/10.48550/arXiv.2404.03325.

[45] N. Adiuku, N. P. Avdelidis, G. Tang, and A. Plastropoulos, “Improved hybrid model for obstacle detection and avoidance in robot operating system framework (rapidly exploring random tree and dynamic windows approach),” Sensors, vol. 24, no. 7, p. 2262, 2024, https://doi.org/10.3390/s24072262.

[46] J. Zhao, S. Liu, and J. Li, “Research and implementation of autonomous navigation for mobile robots based on SLAM algorithm under ROS,” Sensors, vol. 22, no. 11, p. 4172, 2022, https://doi.org/10.3390/s22114172.

[47] R. Grieben, J. P. Spencer, and G. Schöner, “Visual selective attention: Priority is all you need,” Proceedings of the Annual Meeting of the Cognitive Science Society, vol. 46, 2024, https://doi.org/10.1037/rev0000245.

[48] S. Sehring, R. Koebe, S. Aerdker, and G. Schöner, “A neural dynamic model autonomously drives a robot to perform structured sequences of action intentions,” Proceedings of the Annual Meeting of the Cognitive Science Society, vol. 46, 2024, https://escholarship.org/uc/item/3gh831nw.

[49] D. Sabinasz, M. Richter, and G. Schöner, “Neural dynamic foundations of a theory of higher cognition: the case of grounding nested phrases,” Cognitive Neurodynamics, vol. 18, pp. 557–579, 2024, https://doi.org/10.1007/s11571-023-10007-7.

[50] S. Kamkar, H. A. Moghaddam, R. Lashgari, and W. Erlhagen, “Brain-inspired multiple-target tracking using dynamic neural fields,” Neural Networks, vol. 151, pp. 121–131, 2022, https://doi.org/10.1016/j.neunet.2022.03.026.

[51] A. Oliva and A. Torralba, “Modeling the shape of the scene: a holistic representation of the spatial envelope,” International Journal of Computer Vision, vol. 42, no. 3, pp. 145–175, 2001, https://doi.org/10.1023/A:1011139631724.

[52] P. Kao, M. Chahine, A. Ray, M. Lechner, A. Amini, and D. Rus, “Robust flight navigation with liquid neural networks,” Doctoral dissertation, Massachusetts Institute of Technology, 2022.

[53] M. Chahine, R. Hasani, P. Kao, A. Ray, R. Shubert, M. Lechner, A. Amini, and D. Rus, “Robust flight navigation out of distribution with liquid neural networks,” Science Robotics, vol. 8, no. 77, 2023, https://doi.org/10.1126/scirobotics.adc8892.

[54] C. Vorbach, R. Hasani, A. Amini, M. Lechner, and D. Rus, “Causal navigation by continuous-time neural networks,” arXiv, 2021, https://arxiv.org/pdf/2106.08314.




Copyright (c) 2024 Younès Raoui, Nouzha Elmennaoui

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

 

