Analysis of the Application of Artificial Intelligence in Robotic Guide Dogs

Research Article
Open access


Tianrui Xu 1*
  • 1 Saint Joseph High School, Connecticut, USA    
  • *corresponding author txu2026@sjcadets.org
Published on 25 October 2024 | https://doi.org/10.54254/2755-2721/93/2024BJ0068
ACE Vol.93
ISSN (Print): 2755-273X
ISSN (Online): 2755-2721
ISBN (Print): 978-1-83558-627-3
ISBN (Online): 978-1-83558-628-0

Abstract

As society develops, more attention is being paid to people with disabilities, and visually impaired people form one of the largest such groups. Scientists are now seeking ways to make their lives more convenient, and robotic guide dogs, which help the visually impaired travel, have become a prominent topic. As an emerging technology, artificial intelligence (AI) provides technical support for the study of robotic guide dogs. By examining a number of current robotic guide dog models, this paper demonstrates how AI can be used to improve robotic guide dogs in several ways. The research shows that AI plays an important role in the operation of robotic guide dogs, and it also points out directions for future studies.

Keywords:

Robotic guide dogs, artificial intelligence, visually impaired

Xu, T. (2024). Analysis of the Application of Artificial Intelligence in Robotic Guide Dogs. Applied and Computational Engineering, 93, 35-39.

1. Introduction

Humans have many senses, each with its own function, and sight is one of the most important. It enables human beings to see, to read, and to observe, and many things have been created around it. Human ancestors created writing to convey information, and scientists invented lights to illuminate the night. Street signs on the road, menus in restaurants, televisions at home: all of these are made for the majority, that is, people who can see.

Visually impaired people are not a small minority: according to data from the Centers for Disease Control and Prevention, more than 7 million people in the United States were visually impaired in 2017 [1]. They have the right to be treated equally. However, today's society does not offer them much convenience; they face difficulties going out and discrimination from others. Many countries have therefore been prompted to improve their living environments. The United States has enacted the Americans with Disabilities Act, a federal civil rights law that guarantees the basic rights of persons with disabilities [2]. The Act prohibits discrimination against this group and ensures their opportunities for employment. Such regulations protect the basic lives of disabled persons. Most countries also mandate the construction of tactile paving and similar assistive infrastructure, and many organizations voluntarily help people with disabilities in many ways. For instance, Guangzhou, China is home to the world's largest school for the blind, serving students from elementary through high school and giving them equal access to education and the opportunity to learn to live independently.

However, infrastructure alone cannot enable visually impaired people to go outdoors independently. Being led by a guide dog has become one way to address this problem, though not a perfect one. According to the International Guide Dog Federation, there are on average about twenty thousand working guide dogs worldwide [3], clearly far fewer than the number of visually impaired people. In fact, training a successful guide dog costs a great deal of money and time: it involves selecting a suitable dog, a long period of training, and matching the dog with its handler. All of these conditions make it extremely difficult for a visually impaired person to obtain a guide dog.

To solve this problem, scientists began studying robotic guide dogs. As early as 1977, Tachi and colleagues used the MELDOG Mark I through IV prototypes to explore the feasibility of robotic guide dogs, researching three main components: navigation between landmarks, obstacle detection, and human-machine communication [4]. Their study was the first to confirm the feasibility of a guide dog robot. Today, scientists are still working in this area, trying to find the best solution, and many models have appeared in succession, contributing to the subject.

This paper analyzes the use of AI in robotic guide dogs. By examining existing robotic guide dog models, it classifies the uses of AI across various aspects of these robots and analyzes their feasibility and advantages. This has certain significance for the future research directions and trends of robotic guide dogs.

2. Existing operation schemes of robotic guide dogs

Currently, many universities and research programs are working to support visually impaired people by creating robotic guide dogs, designing both hardware and software toward this goal. Several teams have constructed prototypes of their robots, which are now being tested or trained. However, these robots share two main problems. Table 1 summarizes the current research progress of several representative robot models, including their application scenarios, hardware and software, and ability to self-determine.

Table 1. A selection of representative robotic guide dogs

| Robot prototype | Application scenarios | Hardware and software | Ability of self-determination |
|---|---|---|---|
| Spot [5] | Park | Camera, AI algorithms, EMCA | Operator needed |
| Mini Cheetah [6] | Narrow space (indoor) | Lidar, depth camera, force sensor, leash | Operator needed |
| (Self-developed model) [7] | Simulated urban environment | Robotic arm, mobile platform, genetic algorithm, ANN | Operator needed |

Firstly, most teams design their robots for specific application scenarios, such as crossing at a traffic light, navigating in particular areas like hospitals, or passing through narrow paths, which seriously limits these robotic dogs. Hong and his colleagues classify robotic guide dogs into two types, those applied in specific scenes and those applied in complex ones, and note that different approaches are used for each [8]. Designing a robotic dog for a specific scene reduces the complexity of the research, but it puts these robots at a great disadvantage compared to traditional guide dogs (TGD): because both their hardware and software are built to complete one specific task, they cannot operate in other scenes. A blind person who wants to travel independently from home to the hospital can achieve this with a single TGD; with scene-specific robotic guide dogs, the same person would need several different robots for daily life, at minimum one that navigates outdoors and another designed for use inside hospitals. Comparing the two situations, scene-specific robotic guide dog assistance is far less practical. Therefore, the first main problem of robotic guide dogs is that most are limited to specific scenes and cannot be applied in every situation.

Few robotic guide dogs can function across different scenes, and even those face additional challenges: they cannot move independently but need assistance from operators. In Due's study, the researcher adopts a pattern of "VIP + robodog + operator" to study how visually impaired people navigate in space, and argues that the activity cannot be completed by any one of these elements alone [5]. The study shows the difficulty of human-robot interaction, both for the robot to receive and understand human orders and for the human to receive information [5]. Hong's study likewise identifies intelligent interaction and intelligent disobedience as future directions and challenges [8]: visually impaired people sometimes give incorrect orders, so robotic guide dogs should be able to evaluate the correctness of a human order. Therefore, robotic guide dogs remain inadequate in human-robot interaction, which is the most important part of putting them into practical application.

3. Applying AI to robotic guide dogs

Applying artificial intelligence (AI) to robotic guide dogs mainly responds to the challenges described above. AI systems can perform tasks such as learning, reasoning, problem-solving, and perception by combining data collection and processing, deep learning, and neural networks, and they are particularly strong at image and speech recognition. Researchers now exploit this strength in different areas; for instance, Hu demonstrates the ability of AI to recognize images and assess bone age [9]. Similarly, AI can be applied to robotic guide dogs to analyze captured images and enhance their ability to guide visually impaired people.

Currently, AI has already been applied to robotic guide dogs in several areas, for instance using AI models to recognize objects and obstacles. Convolutional neural networks (CNNs) are among the most common AI models applied to such tasks, given their ability to analyze images and classify objects. However, engaging AI more deeply can bring further benefits. The following sections describe how AI can be more deeply integrated into robotic guide dogs and the benefits these changes bring.

3.1. Environmental understanding and perception

Robotic guide dogs recognize objects through data collection, which usually involves several steps. First, using lidar and cameras, the robot gathers data about its surroundings. Then, AI algorithms such as CNNs detect objects and obstacles. Finally, Simultaneous Localization and Mapping (SLAM) and path planning are used to navigate.
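The three steps above can be sketched in outline. The following Python stub is illustrative only: `detect_obstacles` stands in for a trained CNN fused with lidar range data, and the 2 m safety threshold is an assumed parameter, not taken from any cited model.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # object class a CNN would output, e.g. "pedestrian"
    distance_m: float  # range to the object, here taken from lidar

def detect_obstacles(labels, lidar_ranges_m):
    """Stand-in for a CNN detector fused with lidar range data."""
    # A real system would run a trained network on the camera frame;
    # this stub simply pairs pre-computed labels with measured ranges.
    return [Detection(l, d) for l, d in zip(labels, lidar_ranges_m)]

def plan_step(detections, safe_distance_m=2.0):
    """Pick the next action from the fused detections (SLAM pose omitted)."""
    nearest = min(detections, key=lambda d: d.distance_m, default=None)
    if nearest is not None and nearest.distance_m < safe_distance_m:
        return f"avoid {nearest.label}"
    return "continue"
```

For example, a pedestrian detected 1.2 m ahead yields an avoidance action, while clear surroundings let the robot continue.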

However, this alone is not enough to guide visually impaired people safely. A camera-based detector cannot react to sudden dangers, which is one reason an operator is still needed. Using AI for environmental understanding can enable robots to operate in more complex situations: with SLAM, the robot builds a computer-vision map of its surroundings, and the AI model can combine this information to calculate distances to obstacles and predict the motion trajectories of moving objects that may threaten the user's safety.

A deep understanding of the environment is important for robotic guide dogs, and AI-based scene analysis helps them make correct decisions. For example, when the robot detects that a street is busy, it can decide to slow down or change its route. AI also enables the robot to detect changes in the environment: it can process sensor data in real time and adapt to sudden changes. This advantage shows when the robotic guide dog faces different types of terrain. When it recognizes a gravel road, it slows down; with its inclinometer, cameras, and lidar, it keeps level and leads the user around potholes and uneven ground.
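As a rough sketch of this kind of scene-dependent decision, the rules and scaling factors below are invented for illustration; a real system would learn them from data rather than hard-code them.

```python
def adjust_speed(base_speed_mps, scene):
    """Scale walking speed from simple scene flags (illustrative rules)."""
    speed = base_speed_mps
    if scene.get("crowded"):              # busy street: slow down or reroute
        speed *= 0.5
    if scene.get("terrain") == "gravel":  # uneven surface: slow down further
        speed *= 0.7
    return round(speed, 2)
```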

3.2. Human-robot interaction

Human-robot interaction (HRI) is one of the most important parts of the robot because it guarantees the connection between the human and the robotic guide dog. For the robot to understand the human's orders, the HRI must be reliable, and using AI in communication can enhance this connection.

To achieve this goal, natural language processing (NLP) is involved. According to Nadkarni, natural language is far more complex than formal languages, which motivated the development of NLP for extracting meaning from human speech [10]. Familiar NLP systems include Alexa and Siri, which people use in daily life. An NLP system allows a robotic guide dog to recognize and interpret the user's speech, and the robot can ask for confirmation when it cannot understand the words. To respond to the user, the robot can produce spoken language through speech synthesis. In this way, robotic guide dogs can communicate with their users.
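A minimal sketch of this recognize-or-confirm loop, assuming the speech has already been transcribed to text by an upstream recognizer; the command vocabulary here is hypothetical.

```python
KNOWN_COMMANDS = {"stop", "go", "left", "right", "home"}

def interpret(utterance):
    """Map a transcribed utterance to a command, or ask the user to repeat."""
    matches = [w for w in utterance.lower().split() if w in KNOWN_COMMANDS]
    if len(matches) == 1:
        return ("command", matches[0])
    # Ambiguous or unrecognized: the robot replies via speech synthesis
    return ("confirm", "Please repeat the instruction.")
```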

HRI is not limited to speech; it should include other channels to ensure effective guidance, safety, and adaptation. In some cases, commanding a robotic guide dog by voice is impractical: in a noisy environment, the robot can hardly detect the user's speech accurately. Other input techniques can address this issue. One way is to apply force to the leash. Users can customize the settings, for example clenching the leash to stop the robot. By measuring the magnitude and direction of different forces, the robot can understand the user's instructions.
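The force-to-command mapping might look like the following sketch. The thresholds and the clench-to-stop convention are assumptions, standing in for the customizable settings described above.

```python
import math

def leash_command(fx_n, fy_n, grip_n):
    """Translate leash force components (newtons) into an instruction."""
    if grip_n > 30.0:               # firm clench on the leash: emergency stop
        return "stop"
    angle = math.degrees(math.atan2(fy_n, fx_n))
    if abs(angle) < 30:             # pulled forward
        return "faster"
    if abs(angle) > 150:            # pulled backward
        return "slower"
    return "turn left" if angle > 0 else "turn right"
```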

HRI also includes other functions. For instance, velocity coordination can make users more comfortable: robotic guide dogs can use their cameras and AI models to detect the user's emotions and analyze the user's walking posture in order to adjust their speed. Overall, HRI strengthens the connection between the robotic guide dog and the visually impaired user, and cooperation between robot and human is one of the most important parts of the whole process.

3.3. User customization

Because visually impaired people lack one sense, they may sometimes give wrong orders, so it is necessary for the robot to judge an order's correctness. For instance, when the user asks the robot to cross a road, the robot should first check the traffic light signal and whether surrounding vehicles threaten the user, and only then decide whether to cross.
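A toy version of this veto logic follows; the 5-second safety gap and the input representation are illustrative assumptions, not from any cited system.

```python
def may_cross(light_state, vehicle_etas_s, min_gap_s=5.0):
    """Intelligent disobedience: veto a 'cross the road' order when unsafe."""
    if light_state != "green":
        return False
    # Refuse if any approaching vehicle would arrive within the safety gap
    return all(eta > min_gap_s for eta in vehicle_etas_s)
```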

AI systems can also learn a user's behavior, walking style, and other requirements. Through long-term cooperation, the robot can continually adjust itself to meet the user's needs and ensure comfort. For instance, it can observe the user's walking behavior, such as step length and stride frequency, and from these data estimate a suitable speed and traction force. In this way, the robotic guide dog can use its AI system to cooperate with the user and improve human-robot interaction.
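For example, walking speed can be estimated directly from step length and cadence; the 10% comfort margin below is an assumed parameter.

```python
def suggested_speed(step_length_m, steps_per_min):
    """Estimate the user's walking speed (m/s) and lead slightly slower."""
    user_speed = step_length_m * steps_per_min / 60.0
    return round(user_speed * 0.9, 2)  # 10% margin so the user sets the pace
```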

In addition, by analyzing data collected over time, AI can learn the places the user visits often. When the robot is idle, it can analyze navigation data and improve route planning. This advance analysis shortens the time spent on planning and reduces the computing resources consumed en route, so in an emergency the robotic guide dog can lead the user to the hospital in the shortest time, and the saved computing power can be devoted to handling other problems along the way. In this way, the robot's ability to deal with emergencies is greatly improved.
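Learning frequently visited places can be as simple as counting past trips, as in this sketch; precomputing routes to the top destinations would then be an offline task.

```python
from collections import Counter

def frequent_destinations(past_trips, top=3):
    """Rank destinations by visit count so routes can be planned in advance."""
    return [place for place, _ in Counter(past_trips).most_common(top)]
```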

4. Discussion

Applying AI to robotic guide dogs does solve several challenges, such as HRI and environmental understanding, but it also has disadvantages. Below are some comparisons and discussion of robotic guide dogs versus traditional guide dogs.

In terms of the current level of science and technology, AI is still a young field that needs development. AI can already handle some basic tasks for robotic guide dogs, such as CNN-based perception and NLP, but these technologies still have limitations. There is no doubt that creating an intelligent mechanical guide dog for visually impaired people is challenging: these robots demand substantial computing power, which in turn places high expectations on hardware and efficient algorithms. High technical difficulty and cost are the main problems of robotic guide dogs. Compared with TGDs, however, these robots still have great advantages. They cost less in the long term: the average cost of a TGD is forty thousand dollars, and a visually impaired person needs at least five guide dogs in a lifetime [11], so the total cost exceeds two hundred thousand dollars. In contrast, the "Spot" robot made by Boston Dynamics costs less than one hundred thousand dollars. Robotic guide dogs also have a higher ceiling: as robots, they follow their programs, do not have the emotions of animals, and do not need to rest, so they are more stable while working. This leads to the following point.
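The cost comparison works out as follows, using the figures quoted above (the single-robot price is an upper bound for "Spot"):

```python
def lifetime_cost(unit_cost_usd, units_needed):
    """Total acquisition cost over a user's lifetime."""
    return unit_cost_usd * units_needed

tgd_total = lifetime_cost(40_000, 5)     # five traditional guide dogs [11]
robot_total = lifetime_cost(100_000, 1)  # one "Spot"-class robot, upper bound
```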

A TGD is a living being. Although it offers emotional companionship to visually impaired people, it is hard for them to take care of it and keep it happy [12]. Robotic guide dogs are different: they only need to be charged, or can even charge themselves at a charging station, which greatly reduces the burden on visually impaired users. TGDs and robotic guide dogs each have their own advantages, but given the scarcity of TGDs, it is necessary to develop robotic guide dogs and keep reducing their cost.

5. Conclusion

This paper has summarized the applications of AI in various aspects of robotic guide dogs and shown the importance of AI in the robots' operation. It concludes that further use and study of AI can improve the functionality and safety of robotic guide dogs and help develop more intelligent human-robot interaction.

In the future, as AI technology matures, it should be applied to robotic guide dogs more comprehensively, so that their functions fit the practical needs of visually impaired people. As the various functions of robotic guide dogs are gradually perfected, scientists should pay more attention to reducing their cost and making them affordable. It is hoped that every visually impaired person will one day own a robotic guide dog to lead them and safeguard their safety.


References

[1]. Centers for Disease Control and Prevention. "Vision Loss Prevalence." Centers for Disease Control and Prevention, 12 Oct. 2022, www.cdc.gov/vision-health-data/prevalence-estimates/vision-loss-prevalence.html. Accessed 30 Aug. 2024.

[2]. U.S. Department of Justice, Civil Rights Division. Americans with Disabilities Act. www.ada.gov/. Accessed 30 Aug. 2024.

[3]. International Guide Dog Federation. "Facts and Figures." International Guide Dog Federation, www.igdf.org.uk/about-us/facts-and-figures/. Accessed 30 Aug. 2024.

[4]. Tachi, Susumu, and Kiyoshi Komoriya. "Guide dog robot." Autonomous mobile robots: Control, planning, and architecture (1984): 360-367.

[5]. Due, Brian L. "A walk in the park with Robodog: Navigating around pedestrians using a spot robot as a “guide dog”." Space and Culture (2023): 12063312231159215.

[6]. Xiao, Anxing, et al. "Robotic guide dog: Leading a human with leash-guided hybrid physical interaction." 2021 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2021.

[7]. Bruno, Diego Renan, Marcelo Henrique de Assis, and Fernando Santos Osório. "Development of a mobile robot: Robotic guide dog for aid of visual disabilities in urban environments." 2019 Latin American Robotics Symposium (LARS), 2019 Brazilian Symposium on Robotics (SBR) and 2019 Workshop on Robotics in Education (WRE). IEEE, 2019.

[8]. Hong, Bin, et al. "Development and application of key technologies for Guide Dog Robot: A systematic literature review." Robotics and Autonomous Systems 154 (2022): 104104.

[9]. Hu, Ting-Hong, Lei Wan, and Tai-Ang Liu. "Advantages and application prospects of deep learning in image recognition and bone age assessment." Journal of Forensic Medicine 33.6 (2017): 629.

[10]. Nadkarni, Prakash M., Lucila Ohno-Machado, and Wendy W. Chapman. "Natural language processing: an introduction." Journal of the American Medical Informatics Association 18.5 (2011): 544-551.

[11]. Wirth, Kathleen E., and David B. Rein. "The economic costs and benefits of dog guides for the blind." Ophthalmic Epidemiology 15.2 (2008): 92-98.

[12]. Rickly, Jillian M., et al. "Travelling with a guide dog: Experiences of people with vision impairment." Sustainability 13.5 (2021): 2840.


Cite this article

Xu, T. (2024). Analysis of the Application of Artificial Intelligence in Robotic Guide Dogs. Applied and Computational Engineering, 93, 35-39.

Data availability

The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.

Disclaimer/Publisher's Note

The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

About volume

Volume title: Proceedings of the 2nd International Conference on Machine Learning and Automation

ISBN:978-1-83558-627-3(Print) / 978-1-83558-628-0(Online)
Editor:Mustafa ISTANBULLU, Xinqing Xiao
Conference website: https://2024.confmla.org/
Conference date: 21 November 2024
Series: Applied and Computational Engineering
Volume number: Vol.93
ISSN:2755-2721(Print) / 2755-273X(Online)

© 2024 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Authors who publish this series agree to the following terms:
1. Authors retain copyright and grant the series right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this series.
2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the series's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this series.
3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See Open access policy for details).
