
Research Article
Open access

Robot intelligent perception and control

Suhang Ma 1*
  • 1 School of Electromechanical and Vehicle Engineering, Chongqing Jiaotong University, Chongqing, China    
  • *corresponding author 1679384078@qq.com
Published on 25 October 2024 | https://doi.org/10.54254/2755-2721/95/20241763
ACE Vol.95
ISSN (Print): 2755-2721
ISSN (Online): 2755-273X
ISBN (Print): 978-1-83558-641-9
ISBN (Online): 978-1-83558-642-6

Abstract

In recent years, with the rapid development of robotics, intelligent perception and control has become the core research direction in the field of robotics. This technology plays a key role in many fields such as industrial automation, medical assistance, and autonomous driving, significantly improving production efficiency and service quality, and also promoting the intelligent processes of various industries. On the basis of an in-depth analysis of robot intelligent perception and control technology, this paper systematically reviews the development of visual and non-visual perception technology and discusses the application of multi-modal perception in complex environments. The latest development of motion control and force control technology is analyzed, and its importance in fine operation is expounded. The future development direction of robot intelligent perception and control technology is proposed, and the necessity of enhancing real-time robustness and system integration is emphasized, which provides theoretical support and technical reference for promoting the continuous innovation of robot intelligent perception and control technology.

Keywords:

Intelligent perception, intelligent control, machine vision, multi-sensor fusion.


1. Introduction

Robot intelligent perception and control refers to the process of enabling robots to autonomously perceive their environment and make appropriate control decisions through advanced algorithms and technologies. Intelligent perception involves acquiring environmental data with various sensors (such as cameras, LiDAR, and depth sensors) and understanding the environment through data fusion and analysis techniques. Intelligent control refers to decision-making and control based on perceptual data to ensure that the robot can perform tasks efficiently.
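As a schematic illustration of this perceive-fuse-decide loop, consider the following minimal sketch. The sensors are simulated and all function names, weights, and thresholds are hypothetical; a real robot would read from camera, LiDAR, or depth-sensor drivers.

```python
# Minimal sketch of the perceive-fuse-decide-act cycle described above.
# All readings are simulated; the names and numbers are illustrative only.
import random

def read_lidar():
    """Simulated LiDAR: distance to the nearest obstacle, in metres."""
    return 2.0 + random.gauss(0, 0.05)

def read_ultrasonic():
    """Simulated ultrasonic ranger measuring the same distance, more noisily."""
    return 2.0 + random.gauss(0, 0.20)

def fuse(lidar_d, ultra_d, w_lidar=0.9):
    """Simple weighted fusion: trust the more precise sensor more."""
    return w_lidar * lidar_d + (1 - w_lidar) * ultra_d

def decide(distance, stop_threshold=0.5):
    """Control decision based on the fused perception."""
    return "stop" if distance < stop_threshold else "advance"

for step in range(3):
    d = fuse(read_lidar(), read_ultrasonic())
    print(f"step {step}: fused distance = {d:.2f} m -> {decide(d)}")
```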

The development of robot intelligent perception and control began in the mid-20th century. With the progress of computer and sensor technology, robots gradually evolved from simple mechanical equipment into intelligent systems. From the 1950s to the 1970s, the concept of the robot took shape, mainly in industrial automation, with basic perception and control capabilities. By the 1980s, the adoption of photoelectric and ultrasonic sensors and the introduction of PID controllers improved control accuracy. In the 1990s, computer vision and artificial intelligence techniques began to be used for robot path planning and autonomous behavior. In the 2000s, multi-sensor fusion enhanced environmental perception, and SLAM technology enabled autonomous navigation. Since the 2010s, deep learning and reinforcement learning have greatly improved robots' perception and decision-making capabilities, and the combination of cloud computing and the Internet of Things has further enhanced the performance of intelligent perception and control. Robot intelligent perception and control technology plays a crucial role in many fields of modern society, promoting the intelligence and automation of robot systems [1].

The purpose of this paper is to systematically discuss robot intelligent perception and control technology and to analyze its current development status, technical challenges, and future research directions. First, this paper introduces the composition of a robot perception system in detail, including the core technologies and application scenarios of visual and non-visual perception. Then, motion control and force control technologies in robot control systems and their performance in practical applications are discussed. Finally, the current technical challenges are summarized and possible future directions are outlined.

2. Robot perception system

The robot perception system is the core of modern robot technology; it gives the robot the ability to perceive its environment, understand changes in its surroundings, and make corresponding decisions. At present, research on robot perception systems has made remarkable progress in both visual and non-visual perception.

2.1. Visual Perception

Visual perception refers to the ability to process and understand visual information in the environment through the visual system, which enables organisms to recognize a variety of visual features such as shape, color, depth, and motion. In the field of artificial intelligence, visual perception usually refers to the technology that enables computers to "see" and understand the content in images and videos and is one of the main means for robots to obtain environmental information. With the development of computer vision and image processing technology, robot vision perception systems have been widely used in many fields.

Vision sensors: Vision sensors are used to acquire image information and the three-dimensional position of targets. Common vision sensors include monocular cameras, stereo cameras, and multispectral cameras. As the "electronic eye" that captures external information, vision sensors play a crucial role in many fields, such as machine vision systems[2] and security monitoring systems[3]. Research on vision sensors is therefore not only of theoretical significance but also meets extensive application needs.

Visual information processing technology: With the rapid development of vision sensors and society's growing demand for intelligent systems, visual information processing technology is becoming increasingly important. Visual information processing refers to obtaining image information from vision sensors and analyzing it to extract useful information, understand the scene, and support decisions[4]. Visual information processing methods use computer technology and algorithms to process and analyze visual information (such as images and videos), encompassing a series of techniques such as image processing and target recognition. Among them, image processing technology and target recognition technology have made great progress, as shown in the following table.

Table 1. Overview of visual information processing methods.

| Methods | Function | Principle | Process | Research status |
| --- | --- | --- | --- | --- |
| Image processing technology | Improve image quality and reduce noise | Algorithms analyze and transform the image | The image is treated as a pixel array for signal processing, covering feature extraction, pattern recognition, and optimization. It includes image preprocessing (such as denoising[5]), feature extraction (texture analysis[6] and similar), image recognition and classification (using CNN, RNN, and other deep learning architectures), and post-processing and model evaluation to ensure system performance | An algorithm based on retinal noise modeling[7], a new robust possibilistic clustering method[8], a filter based on the memory effect (ME)[9] |
| Target recognition technology | Identify and classify objects in images | Computers and radar are used to identify distant objects | The characteristic information of the target in the radar echo is analyzed, the physical characteristic function of the target is estimated via a mathematical multi-dimensional space transformation, and a sample set is built from the acquired data. After extensive training, the discriminant function is determined and discriminant decisions are made in the classifier | A two-stage convolutional neural network (T-SCNN)[10], a CNN-based CDRTD model built on a 64-layer SqueezeNet architecture[11] |
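To make the image-processing row of Table 1 concrete, the following minimal sketch runs a denoise-then-extract pipeline on a synthetic image. It assumes OpenCV (cv2) and NumPy are available; the kernel sizes and thresholds are illustrative choices, not values from the cited works.

```python
# A minimal preprocessing -> feature-extraction pipeline in the spirit of
# Table 1, applied to a synthetic noisy image.
import numpy as np
import cv2

# Synthetic noisy image: a bright square on a dark background.
img = np.zeros((128, 128), dtype=np.uint8)
img[32:96, 32:96] = 200
noisy = np.clip(img + np.random.normal(0, 25, img.shape), 0, 255).astype(np.uint8)

# 1. Preprocessing: Gaussian denoising.
denoised = cv2.GaussianBlur(noisy, (5, 5), sigmaX=1.5)

# 2. Feature extraction: edges as a simple low-level feature.
edges = cv2.Canny(denoised, threshold1=50, threshold2=150)

# 3. In a full system, the features (or raw pixels) would feed a CNN
#    classifier; here we simply report how many edge pixels were found.
print("edge pixels:", int(np.count_nonzero(edges)))
```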

Machine vision: Machine vision is an indispensable part of robots, yet its application in intelligent robots still faces many difficulties, such as environmental complexity and fast-moving objects. SLAM technology[12] solves these problems to a certain extent. SLAM (Simultaneous Localization and Mapping) uses visual information to help robots build maps of unknown environments and achieve autonomous positioning. Its core principle is to continuously estimate the robot's pose by integrating environmental data from various sensors and applying filtering or optimization algorithms, while synchronously constructing an environment map, so that the robot can explore and navigate independently without prior knowledge of the environment.
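The predict-correct filtering loop at the heart of this process can be shown in one dimension: odometry predicts the pose, and a range measurement to a landmark corrects it. The following is a minimal Kalman-filter sketch; the known landmark position and the noise values are illustrative simplifications, and a full SLAM system would estimate the map as well.

```python
import numpy as np

# 1-D illustration of the predict/correct loop behind filtering-based SLAM.
landmark = 10.0          # known landmark position (simplification)
x_est, P = 0.0, 1.0      # pose estimate and its variance
Q, R = 0.1, 0.5          # motion and measurement noise variances

rng = np.random.default_rng(0)
x_true = 0.0
for step in range(5):
    u = 1.0                                    # commanded forward motion
    x_true += u + rng.normal(0, np.sqrt(Q))    # true pose drifts from command
    # Predict: propagate the estimate and its uncertainty.
    x_est += u
    P += Q
    # Correct: measure range to the landmark, convert it to an implied pose.
    z = (landmark - x_true) + rng.normal(0, np.sqrt(R))
    x_meas = landmark - z
    K = P / (P + R)                            # Kalman gain
    x_est += K * (x_meas - x_est)
    P *= (1 - K)
    print(f"step {step}: true={x_true:.2f} est={x_est:.2f} var={P:.3f}")
```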

Traditional SLAM does not account for dynamic objects in complex environments, which often leads to inaccurate positioning and degraded accuracy. To solve these problems, Tian et al.[13] proposed a SLAM system based on ORB-SLAM2 for dynamic environments, effectively improving SLAM accuracy in such settings. Building on this, in 2024, Liang et al.[14] developed an RGB-D SLAM system named DIG-SLAM, which significantly improved camera pose estimation accuracy and system robustness compared with dynamic semantic SLAM in complex dynamic environments. Meanwhile, addressing the problems of view differences and high storage cost, Liu et al.[15] designed a semantic bio-inspired collaborative SLAM framework in 2024, which achieves accuracy comparable to traditional algorithms while requiring far less keyframe information, further improving positioning accuracy. These works improve SLAM technology to varying degrees, advancing machine vision and promoting its wider application in robotics.

2.2. Non-visual perception

In the field of robotics, non-visual perception plays a crucial role, significantly improving the robot's ability to perceive the environment by integrating various types of non-visual sensors. Non-visual perception refers to perception that does not depend on the visual system; it includes the ability to obtain external information and internal body state through hearing, touch, smell, taste, and interoception. These perceptual modes make it possible to understand the surrounding environment and one's own state, and to interact and navigate effectively even when visual information is unavailable or insufficient [16]. Non-visual sensors are the hardware foundation of non-visual perception and an indispensable part of it. Non-visual perception mainly includes environmental perception using LiDAR, ultrasonic sensors, tactile sensors, etc. These technologies have unique advantages in specific scenes.

Table 2. Types of non-visual sensors and their characteristics.

| Types | Principle | Specific role | Advantages | Fields of application |
| --- | --- | --- | --- | --- |
| LiDAR | Remote sensing with laser pulses | Measures distances | High precision, strong anti-interference, and long-range detection | Autonomous vehicles[17], terrain mapping[18], archaeology[19], etc. |
| Ultrasonic sensors | Ultrasonic wave propagation and echo timing | Detects objects and measures distances | Low cost, simple structure, and unaffected by light and color | Autonomous driving[20], industrial automation[21], and medical equipment[22] |
| Tactile sensors | Simulate the tactile function of human skin | Detect object contact information | Enable machines to perform more complex and fine-grained environmental interactions | Robot grasping[23], medical surgery[24], and virtual reality[25] |
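The ultrasonic row of Table 2 relies on time-of-flight ranging: the sensor emits a pulse and times the echo, and since the pulse travels to the object and back, distance = speed x time / 2. The following is a minimal sketch, assuming the speed of sound in air at room temperature:

```python
# Time-of-flight ranging as used by the ultrasonic sensors in Table 2.
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C (assumed)

def echo_time_to_distance(echo_time_s: float) -> float:
    """Convert a round-trip echo time (s) to a one-way distance (m)."""
    return SPEED_OF_SOUND * echo_time_s / 2.0

# Example: an echo arriving 5.83 ms after the pulse was emitted.
print(f"{echo_time_to_distance(5.83e-3):.2f} m")  # ~1.00 m
```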

The non-visual information captured by non-visual sensors is rich in content and varied in form. To effectively transform this information into knowledge that robots or systems can understand and act on, it is particularly important to study non-visual information processing methods. These methods process and analyze the data captured by non-visual sensors and are often used to extract and interpret information that cannot be obtained directly by the naked eye or traditional image sensors, as shown in the following table.

Table 3. Non-visual information processing methods.

| Methods | Principle | Process | Research status |
| --- | --- | --- | --- |
| Signal processing | Useful information in the signal is extracted, enhanced, and converted by sampling, filtering, amplifying, and computing on the signal | Signal acquisition, preprocessing, feature extraction (such as time-frequency analysis[26]), transformation (such as the Fourier transform[27]), filter design (such as digital filters[28]), parameter estimation, and decision-making | Communication systems[29], seismology[30], etc. |
| Data fusion technology | Computer techniques intelligently integrate and process information from multiple sensors to achieve more accurate, complete, and reliable decision-making and estimation | First, multi-source sensor data is collected; second, data features are extracted; then pattern recognition is used for target detection and recognition; finally, data fusion is carried out to improve the system's environmental perception | A scalable semantic data fusion framework[31], a feedback convolutional neural network (CNN) architecture[32] |
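To ground the signal-processing row above, here is a minimal sketch of the sample-transform-filter chain using NumPy's FFT; the signal frequencies and cutoff are illustrative.

```python
import numpy as np

# The signal-processing row of Table 3 in miniature: sample a signal,
# transform it (Fourier transform), and filter out high-frequency noise.
fs = 1000                                          # sampling rate, Hz
t = np.arange(0, 1, 1 / fs)
clean = np.sin(2 * np.pi * 5 * t)                  # 5 Hz component of interest
noisy = clean + 0.5 * np.sin(2 * np.pi * 120 * t)  # 120 Hz interference

# Transform, zero everything above a 50 Hz cutoff, transform back.
spectrum = np.fft.rfft(noisy)
freqs = np.fft.rfftfreq(len(noisy), d=1 / fs)
spectrum[freqs > 50] = 0                           # ideal low-pass filter
filtered = np.fft.irfft(spectrum, n=len(noisy))

print("max residual error vs clean signal:", float(np.abs(filtered - clean).max()))
```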

3. Robot Control System

3.1. Motion Control

The motion control system is an important part of the robot control system; it mainly involves the robot's motion planning, path planning, and execution control strategies. In recent years, motion control has made remarkable progress in both algorithm optimization and practical application.

In terms of motion planning and path planning algorithms, graph-search-based methods such as the A* algorithm[33] are still widely used in many applications, owing to their simplicity and ease of implementation. For higher-dimensional problems, sampling-based path planning algorithms such as PRM (Probabilistic Roadmap)[34] perform well and are especially suitable for robot motion planning in complex environments. On this basis, trajectory optimization algorithms such as STOMP (Stochastic Trajectory Optimization for Motion Planning)[35] further improve motion planning by optimizing the smoothness and safety of paths.
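As an illustration of the graph-search family discussed above, below is a compact textbook A* implementation on a toy occupancy grid; the grid layout and the Manhattan-distance heuristic are illustrative choices, not taken from [33].

```python
import heapq

# A* on a small occupancy grid: 0 = free cell, 1 = obstacle.
GRID = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]

def a_star(start, goal):
    def h(cell):  # admissible heuristic: Manhattan distance to the goal
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), 0, start, [start])]  # (f, g, cell, path)
    visited = set()
    while open_set:
        _, g, cell, path = heapq.heappop(open_set)
        if cell == goal:
            return path
        if cell in visited:
            continue
        visited.add(cell)
        r, c = cell
        for nr, nc in [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]:
            if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] == 0:
                heapq.heappush(open_set,
                               (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]))
    return None  # no path exists

print(a_star((0, 0), (3, 3)))
```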

Robots mainly have the following motion control strategies:

Model Predictive Control (MPC)[36]: MPC improves control accuracy and robustness in complex systems by predicting future system states and optimizing control inputs at each control cycle (a minimal numerical sketch follows this list).

Adaptive Control[37]: In view of system model uncertainty and external interference, adaptive control methods such as L1 adaptive control and gain scheduling control can dynamically adjust control parameters to improve system stability.
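The sketch referenced in the MPC item above: a minimal receding-horizon controller for a 1-D double integrator. It is unconstrained, so each optimization reduces to a least-squares problem; practical MPC adds state and input constraints and a QP solver, and all numbers here are illustrative.

```python
import numpy as np

# Receding-horizon MPC sketch for a 1-D double integrator
# (position/velocity state, acceleration input). At every step the
# controller predicts N steps ahead, solves for the input sequence,
# applies only the first input, and repeats.
dt, N, lam = 0.1, 20, 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
target = 1.0                      # desired position

# Prediction matrices: stacked future positions = Phi @ x0 + Gamma @ U.
Phi = np.vstack([np.linalg.matrix_power(A, k + 1)[0:1, :] for k in range(N)])
Gamma = np.zeros((N, N))
for k in range(N):
    for j in range(k + 1):
        Gamma[k, j] = (np.linalg.matrix_power(A, k - j) @ B)[0, 0]

x = np.array([0.0, 0.0])          # start at rest at the origin
for step in range(50):
    # Minimise ||Gamma U - (target - Phi x)||^2 + lam ||U||^2 via least squares.
    H = np.vstack([Gamma, np.sqrt(lam) * np.eye(N)])
    r = np.concatenate([np.full(N, target) - Phi @ x, np.zeros(N)])
    U = np.linalg.lstsq(H, r, rcond=None)[0]
    x = A @ x + (B * U[0]).ravel()  # apply only the first input
print("final position:", round(float(x[0]), 3))
```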

In industrial manufacturing robots, motion control systems are used for high-precision operations such as welding, assembly, and spraying to improve production efficiency and quality. For example, a two-stage method for welding-robot path planning using multi-sensor interaction[38] effectively enables automatic welding.

Motion control systems also help robots achieve autonomous navigation, path planning, and task execution in home service and medical assistance robots. For example, a slave manipulator[39] capable of coordinating the guidewire and catheter was developed to perform surgery in place of the doctor, greatly improving surgical safety.

3.2. Force Control

The force control system gives the robot the ability to sense and regulate forces, allowing it to physically interact with the environment. In recent years, force control technology has improved significantly in terms of accuracy, stability, and application range.

Force/torque sensors: These sensors measure the contact force and torque between the robot's end effector and the environment, providing real-time force feedback data. To improve the robot's stability and safety in fine operations, the contact force is generally monitored and adjusted in real time by a force feedback system so as to achieve better force control. A force feedback system is a human-computer interaction technology that simulates a real tactile feeling: it applies forces corresponding to the user's operations through mechanical devices to enhance the sense of operation in a virtual environment[40]. It provides highly realistic force feedback and strong interactivity while ensuring operational safety. With the advancement of technology, its application potential and value across many industries continue to grow.

At present, the main robot force control algorithms are the following:

Impedance control[41]: By emulating a spring-damper system, impedance control regulates the interaction force between the robot and the environment and is widely used in fine operation and human-robot collaboration (a minimal sketch follows this list).

Hybrid control[42]: Hybrid control combines position control and force control to achieve simultaneous adjustment of robot motion and contact force, which is suitable for complex operation tasks.
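The sketch referenced in the impedance-control item above: a minimal 1-D simulation of the textbook impedance law, not the controller of [41]; the virtual parameters and wall model are illustrative.

```python
# 1-D impedance-control sketch: the end effector is commanded to behave like
# a virtual mass-spring-damper,  M*a + B*v + K*(x - x_d) = F_ext,
# so contact forces are absorbed compliantly instead of being fought rigidly.
M, B, K = 1.0, 20.0, 100.0      # virtual inertia, damping, stiffness (assumed)
x_d = 0.10                      # desired position, metres (inside the wall)
wall, k_wall = 0.05, 5000.0     # wall position and stiffness (assumed)

x, v, dt = 0.0, 0.0, 0.001
for _ in range(2000):           # 2 s of simulated motion, Euler integration
    f_ext = -k_wall * (x - wall) if x > wall else 0.0   # contact force
    a = (f_ext - B * v - K * (x - x_d)) / M             # impedance law
    v += a * dt
    x += v * dt

# At steady state, the virtual spring force K*(x_d - x) balances the wall
# reaction, so the robot presses gently instead of penetrating to x_d.
print(f"settled at x = {x*1000:.1f} mm, contact force = {k_wall*(x-wall):.2f} N")
```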

In robotic arms used in manufacturing, medical surgery, and other fields, force control systems are used for precise operation and complex task execution. For example, a practical collision detection and coordinated compliance control method based on a momentum observer[43] is applied to a dual-arm robot, which effectively improves the safety of fine operation.

Medical robots, such as surgical robots, can achieve accurate surgical operations through force feedback, improving the safety and success rate of surgery. For example, an ultrasound robot integrating a force control mechanism, a force/torque measuring mechanism, and an online adjustment method[44] for scanning improves the safety and efficiency of the procedure.

4. Current challenges and future prospects

As the core support of modern robot systems, robot intelligent sensing and control technology has made remarkable progress in recent years. These technologies not only give robots the ability to autonomously perceive the environment, make real-time decisions, and perform tasks efficiently, but also promote the widespread popularization and upgrading of robot applications in many fields. However, with the continuous expansion of application scenarios and the increasing complexity of task requirements, the field of robot intelligent perception and control is also facing a series of new challenges.

Perception accuracy and real-time performance: Perception accuracy is affected by sensor noise, uncertainty, and environmental interference. For example, vision sensors can be biased under lighting changes, motion blur, and target occlusion, reducing the reliability of task execution. In addition, the complexity of perception algorithms, limited computing resources, and communication delays all affect the real-time performance of the system; in highly dynamic scenarios in particular, delays in processing large volumes of data may cause the system response to lag, affecting safety and the task success rate. To address these challenges, researchers are exploring technologies such as multi-modal sensor fusion, efficient sensing algorithms, edge computing, and distributed computing to improve the accuracy and real-time performance of sensing systems and support the application of robots in complex environments.
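As a tiny example of how multi-sensor fusion raises accuracy, the following sketch applies inverse-variance weighting, one of the simplest fusion rules; the sensor variances and readings are illustrative.

```python
import numpy as np

# Inverse-variance fusion: each sensor's reading is weighted by 1/variance,
# so noisier sensors contribute less, and the fused estimate has lower
# variance than either input alone.
def fuse(readings, variances):
    w = 1.0 / np.asarray(variances)
    est = float(np.sum(w * np.asarray(readings)) / np.sum(w))
    var = float(1.0 / np.sum(w))
    return est, var

# A camera-based and a LiDAR-based distance estimate of the same object:
est, var = fuse(readings=[2.30, 2.10], variances=[0.09, 0.01])
print(f"fused: {est:.2f} m, variance {var:.4f}")  # dominated by the LiDAR
```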

Robustness and adaptability of the control system: First, insufficient robustness can lead to unstable system behavior in the face of external disturbances or sensor noise, especially under extreme environmental conditions. Second, a lack of adaptability makes it difficult for the control system to learn and adjust quickly in changing application scenarios, limiting its flexibility across tasks. To address these challenges, researchers are exploring control methods such as model predictive control (MPC) combined with reinforcement learning to enhance the robustness and adaptability of the system. In addition, by introducing adaptive control algorithms and online learning mechanisms, robots can continuously optimize their control strategies during task execution, improving their adaptability in complex environments. In the future, with the deep integration of multi-modal sensing and control systems, the performance of robot control systems in robustness and adaptability will improve significantly, supporting a wider range of application scenarios.
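As a small illustration of the online parameter adjustment mentioned above, the following sketch implements the classic MIT-rule adaptive law for an unknown plant gain; the plant, reference model, and adaptation rate are illustrative, and this is not a production adaptive controller.

```python
# MIT-rule adaptive control in one dimension: the plant gain is unknown,
# so the feedforward gain theta is adjusted online until the closed loop
# tracks a reference model. All numbers are illustrative.
gamma = 0.5          # adaptation rate
b_true = 2.0         # unknown plant gain (the controller never reads this)
a_m, b_m = 1.0, 1.0  # reference model: dy_m/dt = -a_m*y_m + b_m*r
theta, y, y_m, dt = 0.0, 0.0, 0.0, 0.01

for k in range(3000):
    r = 1.0 if (k // 1000) % 2 == 0 else -1.0   # square-wave reference
    u = theta * r                               # adaptive feedforward control
    y += dt * (-a_m * y + b_true * u)           # plant response
    y_m += dt * (-a_m * y_m + b_m * r)          # reference model response
    e = y - y_m                                 # tracking error
    theta += dt * (-gamma * e * y_m)            # MIT rule: dtheta/dt = -gamma*e*y_m

print(f"adapted gain theta = {theta:.3f} (ideal {b_m/b_true:.3f})")
```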

System integration and complexity: The effectiveness of a robot intelligent perception and control system depends largely on the integration and cooperation of its subsystems. However, as task complexity increases, the challenge of system integration becomes more significant. Although multi-modal data fusion can improve perceptual accuracy and system robustness, it introduces complexity in data format differences, time synchronization, and computing resource consumption. In addition, system integration requires coordinating multiple hardware devices and managing complex software architectures, especially in resource-constrained embedded environments, where efficient collaboration between software and hardware is critical. The expansion of system functions leads to an exponential increase in integration complexity, which increases the difficulty of design, debugging, and maintenance and places higher demands on system scalability and reliability. Future research will focus on modular design, standardized interfaces, and automated test tools to simplify the integration process, improve maintainability and stability, and support smarter and more complex robotic systems.

5. Conclusion

Based on recent research progress in robot intelligent perception and control and its applications across fields, this paper has completed the following work:

(1) This paper reviews the development of robot intelligent perception and control technology, from improvements in visual and non-visual perception to the development of control systems, especially the optimization of motion control and force control, and introduces in detail the technological progress at each stage and its impact on robots' high-precision target recognition and autonomous navigation.

(2) Application examples of robot intelligent perception and control technology in industrial automation, medical assistance, autonomous driving, and other fields are analyzed, demonstrating how multi-sensor fusion and autonomous decision-making promote the integration and intelligent application of robot systems and laying a solid foundation for wide application in related fields.

(3) This paper discusses the challenges and future development directions in the field of robot intelligent perception and control, especially how to combine the latest sensor technology, artificial intelligence algorithms, edge computing, and other emerging technologies to further improve the autonomy and collaboration performance of robots in complex environments so as to promote technology innovation and application.

The continuous development of robot intelligent perception and control technology not only promotes technological innovation but also provides more flexible and efficient solutions for dealing with diverse needs in complex environments. The research in this paper not only provides valuable reference materials for academic research but also provides theoretical support and practical guidance for the intelligent process and technological innovation of industry.


References

[1]. He J, Gao F. Mechanism, Actuation, Perception, and Control of Highly Dynamic Multilegged Robots: A Review[J]. Chinese Journal of Mechanical Engineering, 2020, 33(1): 1-30.

[2]. Li Y, Ai J, Sun C. Online Fabric Defect Inspection Using Smart Visual Sensors[J]. Sensors, 2013, 13(4): 4659-4673.

[3]. Abdalla A A, Sagayan V A, B. H N H, et al. Optimizing Visual Sensor Coverage Overlaps for Multiview Surveillance Systems[J]. IEEE Sensors Journal, 2018, 18(11): 4544-4552.

[4]. Yap H G F, Yen H. A Survey on Sensor Coverage and Visual Data Capturing/Processing/Transmission in Wireless Visual Sensor Networks[J]. Sensors, 2014, 14(2): 3506-3527.

[5]. Singh L, Janghel R. Image denoising techniques: A brief survey[C]// Harmony Search and Nature Inspired Optimization Algorithms: Theory and Applications, ICHSA 2018. 2019: 731-740.

[6]. Reva N, Shankar S T. Plant disease identification using fuzzy feature extraction and PNN[J]. Signal, Image and Video Processing, 2023, 17(6): 2809-2815.

[7]. Itziar Z, Mateo C, Cesar D, et al. Retinal noise emulation: a novel artistic tool for cinema that also improves compression efficiency[J]. IEEE Access, 2020, 8.

[8]. Wu C, Liu T. Interval type-2 possibilistic picture C-means clustering incorporating local information for noisy image segmentation[J]. Digital Signal Processing, 2024, 149: 104492.

[9]. Qianqian C, Hexiang H, Xiaoqing X, et al. Memory Effect Based Filter to Improve Imaging Quality Through Scattering Layers[J]. IEEE Photonics Journal, 2018, 10(5): 1-10.

[10]. Wu T, Yang X, Song B, et al. T-SCNN: A Two-Stage Convolutional Neural Network for Space Target Recognition[C]// IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2019.

[11]. Nirupam D, Noor R S, Selim M H. Deep learning-based circular disk type radar target detection in complex environment[J]. Physical Communication, 2023, 58.

[12]. He B, Zhang S, Yan T, et al. A Novel Combined SLAM Based on RBPF-SLAM and EIF-SLAM for Mobile System Sensing in a Large Scale Environment[J]. Sensors, 2011, 11(11): 10197-10219.

[13]. Tian Y, Xu G, Li J, et al. Visual SLAM Based on YOLOX-S in Dynamic Scenes[C]// 2022 International Conference on Image Processing, Computer Vision and Machine Learning (ICICML). IEEE, 2022: 262-266.

[14]. Rongguang L, Jie Y, Benfa K, et al. DIG-SLAM: an accurate RGB-D SLAM based on instance segmentation and geometric clustering for dynamic indoor scenes[J]. Measurement Science and Technology, 2024, 35(1).

[15]. Liu D, Wu J, Du Y, et al. SBC-SLAM: Semantic Bioinspired Collaborative SLAM for Large-Scale Environment Perception of Heterogeneous Systems[J]. IEEE Transactions on Instrumentation and Measurement, 2024, 73.

[16]. Timmermans M, Massalimova A, Li R, et al. State-of-the-Art of Non-Radiative, Non-Visual Spine Sensing with a Focus on Sensing Forces, Vibrations and Bioelectrical Properties: A Systematic Review[J]. Sensors, 2023, 23(19).

[17]. Sun C, Sun P, Wang J, et al. Understanding LiDAR Performance for Autonomous Vehicles Under Snowfall Conditions[J]. IEEE Transactions on Intelligent Transportation Systems, 2024.

[18]. Yichen L, Shuhua Q, Kaitao L, et al. Mapping the Forest Height by Fusion of ICESat-2 and Multi-Source Remote Sensing Imagery and Topographic Information: A Case Study in Jiangxi Province, China[J]. Forests, 2023, 14(3): 454.

[19]. Pieraccini M, Miccinesi L, Conti A, et al. Integration of GPR and TLS for investigating the floor of the ‘Salone dei Cinquecento’ in Palazzo Vecchio, Florence, Italy[J]. Archaeological Prospection, 2020, 30(1): 27-32.

[20]. Wenyuan X, Chen Y, Weibin J, et al. Analyzing and Enhancing the Security of Ultrasonic Sensors for Autonomous Vehicles[J]. IEEE Internet of Things Journal, 2018, 5(6): 5015-5029.

[21]. Dai H, Zhao S, Jia Z, et al. Low-Cost Ultrasonic Distance Sensor Arrays with Networked Error Correction[J]. Sensors, 2013, 13(9): 11818-11841.

[22]. Ayodele S, Antonio V, C T. Inductive and ultrasonic multi-tier interface for low-power, deeply implantable medical devices[J]. IEEE Transactions on Biomedical Circuits and Systems, 2012, 6(4): 297-308.

[23]. Sami M, Beatriz L, Pasi K, et al. Model of tactile sensors using soft contacts and its application in robot grasping simulation[J]. Robotics and Autonomous Systems, 2012, 61(1): 1-12.

[24]. Huanran W, X P L, Shuxiang G, et al. A catheter side wall tactile sensor: design, modeling and experiments[J]. Minimally Invasive Therapy & Allied Technologies, 2010, 19(1): 52-60.

[25]. Liao X, Song W, Zhang X, et al. Hetero-contact microstructure to program discerning tactile interactions for virtual reality[J]. Nano Energy, 2019, 60: 127-136.

[26]. Stanković L, Mandić D, Daković M, et al. Time-frequency decomposition of multivariate multicomponent signals[J]. Signal Processing, 2018, 142: 468-479.

[27]. Bahri M, Hitzer S E, Ashino R, et al. Windowed Fourier transform of two-dimensional quaternionic signals[J]. Applied Mathematics and Computation, 2010, 216(8): 2366-2379.

[28]. Hong Y, Lian Y. A Memristor-Based Continuous-Time Digital FIR Filter for Biomedical Signal Processing[J]. IEEE Transactions on Circuits & Systems I: Regular Papers, 2015, 62(5): 1392-1401.

[29]. Zhang J A, Liu F, Masouros C, et al. An Overview of Signal Processing Techniques for Joint Communication and Radar Sensing[J]. 2021.

[30]. Bahia B, Jafargandomi A, Sacchi M D. Hypercomplex Processing of Vector Field Seismic Data: Toward vector-valued signal processing [Hypercomplex Signal and Image Processing][J]. IEEE Signal Processing Magazine, 2024, 41.

[31]. Al-Baltah A I, Ghani A A A, Al-Gomaei M G, et al. A scalable semantic data fusion framework for heterogeneous sensors data[J]. Journal of Ambient Intelligence and Humanized Computing, 2020: 1-20.

[32]. Cai K, Chen H, Ai W, et al. Feedback Convolutional Network for Intelligent Data Fusion Based on Near-infrared Collaborative IoT Technology[J]. IEEE Transactions on Industrial Informatics, 2021.

[33]. Yan L, Hongyan Z, Huaizhong Z, et al. IBAS: Index Based A-Star[J]. IEEE Access, 2018, 6: 11707-11715.

[34]. Zheng X, Cao J, Zhang B, et al. Path planning of PRM based on artificial potential field in radiation environments[J]. Annals of Nuclear Energy, 2024, 208: 110776.

[35]. Yanzhe W, Lai W, Kunpeng D, et al. An online collision-free trajectory generation algorithm for human–robot collaboration[J]. Robotics and Computer-Integrated Manufacturing, 2023, 80.

[36]. Sebastien G, Mario Z. Learning for MPC with stability & safety guarantees[J]. Automatica, 2022, 146.

[37]. He B, Li G. Intelligent Self-Adaptation Data Behavior Control Inspired by Speech Acts[J]. ACM Transactions on Sensor Networks, 2017, 13(2): 1-32.

[38]. Tran C C, Lin C Y. An Intelligent Path Planning of Welding Robot Based on Multisensor Interaction[J]. IEEE Sensors Journal, 2023, 23(8): 8591-8604.

[39]. Jin X, Guo S, Guo J, et al. Development of a Tactile Sensing Robot-assisted System for Vascular Interventional Surgery[J]. IEEE Sensors Journal, 2021.

[40]. Overtoom M E, Horeman T, Jansen F, et al. Haptic Feedback, Force Feedback, and Force-Sensing in Simulation Training for Laparoscopy: A Systematic Overview[J]. Journal of Surgical Education, 2018, 76(1): 242-261.

[41]. Bo Z, Fuyang S, Yirong L, et al. Robust sliding mode impedance control of manipulators for complex force-controlled operations[J]. Nonlinear Dynamics, 2023, 111(24): 22267-22281.

[42]. Han L, Zhang Y, Wang H. Hybrid Adaptive Vision-Force Control Under the Bottleneck Constraint[J]. IEEE Transactions on Control Systems Technology, 2023, 31.

[43]. Liang H, Wenfu X, Bing L, et al. Collision Detection and Coordinated Compliance Control for a Dual-Arm Robot Without Force/Torque Sensing Based on Momentum Observer[J]. IEEE/ASME Transactions on Mechatronics, 2019, 24(5): 2261-2272.

[44]. Xianqiang B, Shuangyi W, Lingling Z, et al. A Novel Ultrasound Robot with Force/torque Measurement and Control for Safe and Efficient Scanning[J]. IEEE Transactions on Instrumentation and Measurement, 2023, 72: 1-12.


Cite this article

Ma, S. (2024). Robot intelligent perception and control. Applied and Computational Engineering, 95, 248-257.

Data availability

The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.

Disclaimer/Publisher's Note

The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

About volume

Volume title: Proceedings of the 6th International Conference on Computing and Data Science

ISBN: 978-1-83558-641-9 (Print) / 978-1-83558-642-6 (Online)
Editor: Alan Wang, Roman Bauer
Conference website: https://2024.confcds.org/
Conference date: 12 September 2024
Series: Applied and Computational Engineering
Volume number: Vol. 95
ISSN: 2755-2721 (Print) / 2755-273X (Online)

© 2024 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Authors who publish this series agree to the following terms:
1. Authors retain copyright and grant the series right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this series.
2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the series's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this series.
3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See Open access policy for details).
