1. Introduction
The introduction of artificial intelligence (AI) has had a substantial impact on the healthcare industry. Topol states that AI can facilitate clinical workflow by fostering healthcare data collection and medical information organization [1]. This suggests that AI applications in healthcare can improve the efficiency and quality of care, which explains AI's growing influence in the healthcare industry, particularly in light of numerous global health issues.

As life expectancy increases and fertility rates decline in the 21st century, the proportion of the geriatric population continues to rise. By the end of the century, more than 30 percent of the European Union's population will be 65 or older [2]. Biologically speaking, human aging is caused by the accumulation of cellular and molecular damage [3]. This implies that as people age, their health deteriorates progressively, with common conditions including hearing loss, osteoarthritis, diabetes, and dementia [4]. Nearly 95% of the elderly suffer from at least one chronic disease, and 80% suffer from two or more. Chronic diseases can reduce patients' independence, requiring elderly patients to rely on long-term care from institutions or families to perform daily tasks. According to World Health Organization statistics, by 2030 more than one-sixth of the world's population will be over 60 [4]. The many vulnerable elderly individuals in poor health will place an immense strain on limited medical resources. In 2013, the European Union had a shortage of 1.6 million healthcare employees; this shortage is anticipated to reach 4.1 million by 2030 if the trend continues [5]. The accelerated aging of the population will worsen the shortage of medical personnel.

The application of AI in medicine and healthcare can make the most of medical resources and alleviate this shortage. It would also help accomplish one of the United Nations' sustainable development goals: ensuring that people of all ages live healthy lives [6]. However, using AI tools can result in various medical errors, including algorithm errors that lead to life-threatening misdiagnoses and data integration errors that lead to unnecessary treatment [7]. In the absence of explicit legal accountability for medical errors caused by AI, patients and physicians will be hesitant to adopt AI tools in the healthcare industry, particularly given concerns about algorithm security.
2. Applications of AI in healthcare
By incorporating clinical data, AI-based medical applications have the potential to diagnose, predict, and treat diseases [1]. In cardiology, machine learning (ML) algorithms have been used to calculate the 10-year risk of developing cardiovascular disease, resulting in more accurate cardiac risk scores [8]. Moreover, AI-assisted prediction using clinical data records can be more precise than statistically derived risk models, indicating that incorporating AI into medicine and healthcare could improve patient care by allowing for more accurate diagnoses and interventions [9].
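The cited studies do not publish a single reference implementation, but the general pattern of learning a risk score from routine tabular clinical features can be sketched as follows. The listing below is a minimal illustration in Python with scikit-learn, using synthetic data in place of real patient records; the features and the 10-year event label are hypothetical and are not taken from [8] or [9].

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for tabular clinical features (e.g., age, blood pressure,
# cholesterol, smoking status, diabetes) and a binary 10-year event label.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X @ np.array([0.8, 0.5, 0.4, 0.6, 0.7]) + rng.normal(size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The predicted probability serves as the patient's risk score; discrimination
# is commonly summarized with the area under the ROC curve.
risk_score = model.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, risk_score))

In practice, the published models are trained on far richer electronic health record data and validated prospectively; the sketch only conveys the shape of the workflow.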
Similarly, by incorporating physiological data from patients with renal diseases, AI tools can predict the incidence of acute kidney injury (AKI) within 48 hours of hospitalization as well as the risk of post-operative AKI. In addition, the diagnostic accuracy of AI prediction models can be comparable to that of medical personnel, and their efficacy can be enhanced when combined with other tools [10, 11].
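The AKI prediction models themselves are not reproduced here; as a simple illustration of the kind of 48-hour signal such systems commonly build on, the widely used KDIGO criterion (a serum creatinine rise of at least 0.3 mg/dL within 48 hours) can be checked directly. The helper below is hypothetical and not part of any cited system.

def flags_aki_48h(creatinine_series):
    # creatinine_series: list of (hours_since_admission, value_mg_dl) tuples,
    # assumed sorted by time. Returns True if any pair of measurements shows
    # a rise of at least 0.3 mg/dL within a 48-hour window.
    for i, (t_early, v_early) in enumerate(creatinine_series):
        for t_late, v_late in creatinine_series[i + 1:]:
            if t_late - t_early <= 48 and v_late - v_early >= 0.3:
                return True
    return False

# A rise from 0.9 to 1.3 mg/dL within 36 hours is flagged.
print(flags_aki_48h([(0, 0.9), (24, 1.0), (36, 1.3)]))  # True

AI prediction models go further by forecasting the event before it occurs, but a rule of this form commonly serves as the label such models are trained against.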
Incorporating AI into healthcare can alleviate healthcare congestion in EU nations with insufficient medical personnel. By prioritizing patients based on their medical data and health status, predictive models utilizing AI algorithms can aid medical personnel in developing higher-quality surgical plans, thereby optimizing healthcare resources [12]. AI can also assist in analyzing emergency department patient arrivals, enabling the development of efficient resource allocation strategies [13-15].
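As a rough sketch of the forecasting component (not the neural-network models of [13-15]), expected daily arrivals can be modeled from calendar features with a simple Poisson regression; the data below is synthetic and the weekly pattern is assumed purely for illustration.

import numpy as np
from sklearn.linear_model import PoissonRegressor

# Synthetic daily ED arrival counts with a weekly pattern (busier early week).
rng = np.random.default_rng(1)
days = np.arange(365)
weekday = days % 7
true_rate = 120 + 25 * np.isin(weekday, [0, 1])
arrivals = rng.poisson(true_rate)

# Day of week, one-hot encoded, is the only feature in this toy model.
X = np.eye(7)[weekday]
model = PoissonRegressor(alpha=1e-3, max_iter=300).fit(X[:300], arrivals[:300])

# Forecast expected arrivals for held-out days; staffing and bed allocation
# can then be planned around the expected volumes.
forecast = model.predict(X[300:])
print(forecast[:7].round(1))

Real deployments add trend, seasonality, weather, and local-event features, but the principle is the same: predict demand, then allocate resources against the prediction.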
In 28 EU countries, mental illness costs more than 4% of gross domestic product [16]. AI can provide patients with conversational companionship and emotional support in the treatment of mental illness without requiring the direct involvement of healthcare personnel [17, 18]. Interactive chatbots can digitally monitor patients' emotions using voice and facial recognition sensors, providing patients with the emotional support they require.
The bureaucracy of the healthcare system requires healthcare workers to spend fifty percent of their time on administrative duties, such as acquiring patient data, which consumes valuable resources [19]. By performing these duties more efficiently and precisely, AI can save healthcare professionals valuable time and ease the staffing shortage.
3. Challenges
Utilizing AI in the healthcare industry presents numerous obstacles. As a comparatively new application, its use is not governed by established laws and regulations, which could potentially harm its users. In cases where patients are injured by medical errors, the lack of traceability in medical AI makes it difficult to assign responsibility. The involvement of AI in the diagnosis or treatment process further complicates the relationship between physicians and patients [20], which could reduce their willingness to employ AI.
Concerns over algorithmic security could threaten the viability of AI medical instruments. In 2020, for example, the AI company Cense AI suffered a data security incident that exposed the sensitive information of more than 2.5 million patients worldwide, including personal diagnosis records, private addresses, and names [21]. The risk of such incidents makes it more challenging to promote medical AI tools, because no one wants their private information, such as their name and address, to be widely distributed.
The ethical considerations of AI use further complicate the spread of AI tools in the healthcare industry. From the perspective of surveillance capitalism, individual patient data is treated as a marketable commodity, and sales strategies are devised around it to drive purchases. In addition, the pharmaceutical industry uses patient data for drug development and marketing strategies, which may raise ethical concerns [22].
Moreover, data-level errors can affect the precision of AI predictions. During ultrasound scanning, for instance, human error can cause the input data of AI tools to be inconsistent with the patient's actual condition, which can be understood as data noise. Such noise may arise from operator inexperience or uncooperative patients [23].
A 2021 survey of 6,000 people revealed that most individuals have little knowledge of how AI is used in daily life [24]. This suggests a considerable risk of data noise when AI tools are used to diagnose patients from the general population.
Even worse, existing curricula offer little instruction on how clinically trained physicians should use AI tools. A study conducted at 19 institutions in the United Kingdom revealed that medical students receive only a handful of AI-related courses [25]. Healthcare personnel therefore lack the experience needed to be proficient users of AI tools, so they cannot adequately guide and assist patients and may themselves introduce data noise. For example, a patient wearing a wedding ring may rest their hand on their chest during a scan, or an X-ray technician may leave an adhesive electrocardiogram electrode on the chest; such circular artifacts may be misidentified as known thoracic lesions, producing false-positive results [26]. The data pollution caused by these actions leads to algorithmic errors and increases the risk of misdiagnosis. The risk of erroneous diagnosis due to improper use will impede the widespread adoption of AI tools in the medical and healthcare sectors.
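The effect of such acquisition noise can be made concrete with a small synthetic experiment: a classifier trained on clean data loses accuracy when artifacts are added to its inputs at inference time. The example below is a toy illustration in Python with scikit-learn, not a model of any specific imaging pipeline.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic data in which the first two features carry the diagnostic signal.
rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("clean test accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Simulate acquisition artifacts: strong noise corrupts the informative
# features at inference time, mimicking poorly acquired or polluted inputs.
X_noisy = X_test.copy()
X_noisy[:, :2] += rng.normal(scale=3.0, size=(len(X_test), 2))
print("noisy test accuracy:", accuracy_score(y_test, model.predict(X_noisy)))

The gap between the clean and noisy accuracies mirrors the clinical concern: the model itself is unchanged, yet its error rate rises because the data fed to it no longer reflects the patient's actual condition.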
Moreover, models trained on data collected from one hospital or population may misclassify patients from another because of changes in the quality and distribution of the data [27]. This implies that dataset shift can reduce prediction accuracy even in the absence of data noise. Using optical coherence tomography (OCT), DeepMind has created an AI system that can automatically diagnose retinal diseases; however, its diagnostic error increases from 5.5% to 46% when the system is applied to images from a different device [28].
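Dataset shift of this kind can likewise be illustrated with a synthetic two-hospital example (a toy sketch, not the OCT system of [28]): a model trained on one site's measurement scale degrades when a second site's device reports the same physiology on a shifted, rescaled range.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(3)

# Hospital A: features on the scale the model will be trained on.
X_a = rng.normal(size=(2000, 3))
y_a = (X_a[:, 0] + 0.5 * X_a[:, 1] > 0).astype(int)

# Hospital B: identical physiology and labels, but the device reports values
# on a shifted, rescaled range (a simple form of dataset shift).
X_b_raw = rng.normal(size=(2000, 3))
y_b = (X_b_raw[:, 0] + 0.5 * X_b_raw[:, 1] > 0).astype(int)
X_b = 2.0 * X_b_raw + 1.5

model = LogisticRegression(max_iter=1000).fit(X_a, y_a)
print("hospital A accuracy:", accuracy_score(y_a, model.predict(X_a)))
print("hospital B accuracy:", accuracy_score(y_b, model.predict(X_b)))

Recalibrating or retraining on data from the new site typically restores performance, which is why multi-site validation is widely recommended before clinical deployment.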
Because medical AI technology is typically designed by computer and data scientists, its development often lacks input from end users such as patients, nurses, and physicians [29]. This absence of involvement can make it difficult for users to understand and employ these tools effectively, increasing the likelihood of human error when using AI tools.
In addition, the development of medical AI tools involves a large number of participants, resulting in a lack of transparency in the development process; this further inhibits the use of AI tools in diagnosis by users who do not understand how AI models operate in the real world [30]. The lack of transparency also makes it difficult to determine who is responsible for possible errors: AI developers, data administrators, physicians, or others.
Moreover, the complexity of informed consent procedures and the opacity of AI algorithms may make it difficult for patients to understand how their data is used and shared, posing potential ethical risks [31]. Establishing data protection laws to ensure the responsible use of AI tools in healthcare is therefore crucial.
4. Suggestions
To ensure that AI applications in healthcare are carried out safely and securely, algorithm safety should be strengthened through technical research, laws, regulations, and policy systems, as well as greater transparency and interpretability of deployed algorithms. The application of AI in healthcare also needs to take ethical issues into account, such as patient privacy and data protection. Developing ethical guidelines and codes, and raising education and awareness, can promote the safe application of AI in healthcare and better serve patients and healthcare organizations. When drafting guidelines and norms, the balance of interests among healthcare providers, physicians, patients, and technology companies must be considered, along with how to ensure the fairness and transparency of algorithms during their development and application.
From the user's perspective, the skills and awareness of healthcare professionals should be improved: they should receive professional training on how to use AI tools properly. Manufacturers and providers should make AI systems transparent and inform patients about how their data is used and shared. Governments and hospitals should enact laws and policies to protect data when AI is used, specifying what data can be used, how it is collected and used, and how patients' privacy is protected.
5. Conclusion
Artificial intelligence (AI) has made significant contributions to the healthcare industry by improving diagnostic accuracy and relieving pressure on limited healthcare resources. It has improved the understanding and treatment of various cardiovascular, renal, and psychological diseases. However, the advantages of AI in healthcare have been accompanied by concerns about data privacy and security, as security incidents have resulted in the disclosure of sensitive patient information. In addition, data noise and dataset shift can result in misdiagnosis, with severe consequences and liability issues. To ensure that AI tools positively impact the healthcare industry, it is essential to strengthen system security and protect patients' sensitive data from potential attacks.
In addition, patients must be fully informed and provide consent before their data is used. Although these measures may be costly and time-consuming, the long-term benefits of medical AI tools outweigh the costs. Despite the challenges it presents, AI has the potential to enhance patient outcomes and increase the efficiency of the healthcare industry.
References
[1]. Topol, E. J.: High-performance medicine: the convergence of human and artificial intelligence. Nature Medicine, 25(1), 44-56 (2019).
[2]. Eurostat.: Ageing Europe—Statistics on Population Developments. Eurostat, (2020).
[3]. WHO Homepage, https://www.who.int/news-room/fact-sheets/detail/ageing-and-health, last accessed 2022/10/01.
[4]. WHO Homepage, https://www.who.int/health-topics/health-workforce#tab=tab_1, last accessed 2019/08/07.
[5]. WHO Homepage, https://www.euro.who.int/en/health-topics/noncommunicable-diseases/mentalhealth/news/news/2012/10/depression-in-europe/depression-in-europe-facts-and-figures, last accessed 2012.
[6]. UNESCO.: Artificial Intelligence and Gender Equality: Key Findings of UNESCO’S Global Dialogue, (2020).
[7]. Europarl Homepage, https://www.europarl.europa.eu/stoa/en/document/EPRS_STU(2022)729512, last accessed 2022/01/06.
[8]. Quer, G., Arnaout, R., Henne, M., Arnaout, R.: Machine learning and the future of cardiovascular care: JACC state-of-the-art review. Journal of the American College of Cardiology 77(3), 300-313 (2021).
[9]. Jamthikar, A. D., Gupta, D., Saba, L., Khanna, N. N., Viskovic, K., Mavrogeni, S., Suri, J. S.: Artificial intelligence framework for predictive cardiovascular and stroke risk assessment models: A narrative review of integrated approaches using carotid ultrasound. Computers in Biology and Medicine 126, 104043 (2020).
[10]. Bera, K., Schalper, K. A., Rimm, D. L., Velcheti, V., Madabhushi, A.: Artificial intelligence in digital pathology—new tools for diagnosis and precision oncology. Nature Reviews Clinical Oncology 16(11), 703-715 (2019).
[11]. Bejnordi, B. E., Veta, M., Van Diest, P. J., Van Ginneken, B., Karssemeijer, N., Litjens, G., CAMELYON16 Consortium: Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with breast cancer. JAMA 318(22), 2199-2210 (2017).
[12]. Miotto, R., Li, L., Kidd, B. A., Dudley, J. T.: Deep patient: an unsupervised representation to predict the future of patients from the electronic health records. Scientific Reports 6(1), 1-10 (2016).
[13]. Berlyand, Y., Raja, A. S., Dorner, S. C., Prabhakar, A. M., Sonis, J. D., Gottumukkala, R. V., Yun, B. J.: How artificial intelligence could transform emergency department operations. The American Journal of Emergency Medicine 36(8), 1515-1517 (2018).
[14]. Menke, N. B., Caputo, N., Fraser, R., Haber, J., Shields, C., Menke, M. N.: A retrospective analysis of the utility of an artificial neural network to predict ED volume. The American Journal of Emergency Medicine 32(6), 614-617 (2014).
[15]. Jiang, S., Chin, K. S., Tsui, K. L.: A universal deep learning approach for modeling the flow of patients under different severities. Computer Methods and Programs in Biomedicine 154, 191-203 (2018).
[16]. European Commission Homepage, https://health.ec.europa.eu/system/files/2020-12/2020_healthatglance_rep_en_0, last accessed 2020/12.
[17]. Firth, J., Torous, J., Nicholas, J., Carney, R., Pratap, A., Rosenbaum, S., Sarris, J.: The efficacy of smartphone‐based mental health interventions for depressive symptoms: a meta‐analysis of randomized controlled trials. World Psychiatry 16(3), 287-298 (2017).
[18]. Mohr, D. C., Riper, H., Schueller, S. M.: A solution-focused research approach to achieve an implementable revolution in digital mental health. JAMA psychiatry 75(2), 113-114 (2018).
[19]. Clay, H., Stern, R.: Making time in general practice. Primary Care Foundation, 1-83 (2015).
[20]. Adamson, A. S., Smith, A.: Machine learning and health care disparities in dermatology. JAMA Dermatology 154(11), 1247-1248 (2018).
[21]. Alder, S.: AI company exposed 2.5 million patient records over the internet. HIPAA Journal, (2020).
[22]. Hocking, L., Parks, S., Altenhofer, M., Gunashekar, S.: Reuse of health data by the European pharmaceutical industry, (2019).
[23]. Farina, R., Sparano, A.: Errors in sonography. Errors in radiology, 79-85 (2012).
[24]. Gillespie, N., Lockey, S., Curtis, C.: Trust in artificial intelligence: A five country study, (2021).
[25]. Sit, C., Srinivasan, R., Amlani, A., Muthuswamy, K., Azam, A., Monzon, L., Poon, D. S.: Attitudes and perceptions of UK medical students towards artificial intelligence and radiology: a multicentre survey. Insights into Imaging 11, 1-6 (2020).
[26]. Yu, K. H., Kohane, I. S.: Framing the challenges of artificial intelligence in medicine. BMJ Quality & Safety 28(3), 238-241 (2019).
[27]. Wang, H. E., Landers, M., Adams, R., Subbaswamy, A., Kharrazi, H., Gaskin, D. J., Saria, S.: A bias evaluation checklist for predictive models and its pilot application for 30-day hospital readmission models. Journal of the American Medical Informatics Association 29(8), 1323-1333 (2022).
[28]. De Fauw, J., Ledsam, J. R., Romera-Paredes, B., Nikolov, S., Tomasev, N., Blackwell, S., Ronneberger, O.: Clinically applicable deep learning for diagnosis and referral in retinal disease. Nature Medicine 24(9), 1342-1350 (2018).
[29]. Von Gerich, H., Moen, H., Block, L. J., Chu, C. H., DeForest, H., Hobensack, M., Peltonen, L. M.: Artificial Intelligence-based technologies in nursing: A scoping literature review of the evidence. International Journal of nursing studies, 127, 104153 (2022).
[30]. Mora-Cantallops, M., Sánchez-Alonso, S., García-Barriocanal, E., Sicilia, M. A.: Traceability for trustworthy AI: A review of models and tools. Big Data and Cognitive Computing 5(2), 20 (2021).
[31]. Vyas, D. A., Eisenstein, L. G., Jones, D. S.: Hidden in plain sight-reconsidering the use of race correction in clinical algorithms. New England Journal of Medicine 383(9), 874-882 (2020).