1. Introduction
In recent years, artificial intelligence (AI) has emerged as a transformative technology with the potential to reshape economies worldwide and raise productivity across industries. The rapid growth of the field has drawn attention from individuals and institutions alike, prompting many researchers to explore the varied uses of the technology. This paper aims to contribute to that ongoing research by examining the economic implications of AI. The origins of AI can be traced back to the mid-20th century, when researchers began exploring the possibility of creating self-learning machines capable of performing tasks traditionally reserved for humans. Over the decades, the field has undergone significant progress, with major breakthroughs in neural networks and deep learning. In modern society, AI has already become an integral part of many industries with a wide variety of applications; yet, given the pace at which new applications continue to emerge, the technology still holds unforeseen capabilities that may change both our expectations of the future and the knowledge we currently hold.
By tracing the evolution of AI from its early research to its current state, this paper provides a comprehensive overview of the technology’s progression and its implications for the global economy. Furthermore, by analyzing the variation among different types of AI and their applications across industries, it offers insights into the potential benefits and shortcomings of AI in different contexts.
The remainder of this paper is organized as follows: Section 2 provides an overview of the origins and progression of AI; Section 3 explores the different types of AI and their applications; Section 4 discusses current trends and innovations, including natural language processing, explainable AI, and edge computing; Section 5 examines the future potential of AI; and Section 6 concludes.
2. Origins and Progression of AI
2.1. Early AI Research
The first wave of AI research was a period of intense exploration into the possibility of creating machines able to perform tasks previously done only by humans. This period began in the mid-1950s and lasted until the mid-1970s, during which researchers made significant strides and laid the foundation for future advances in the field.
The founding of AI as a discipline is generally attributed to a group of scientists and mathematicians who set out to create machines that could think and learn like humans. In 1956, John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon organized the Dartmouth Conference, widely considered the birthplace of artificial intelligence. The conference brought together leading experts to discuss the possibility of creating intelligent machines and laid the groundwork for future study in the field [1].
During the earliest years of AI research, the main focus was on developing systems that could reason and solve problems as humans do. Researchers experimented with different approaches, including rule-based systems, symbolic logic, and heuristic algorithms. One of the early successes was the General Problem Solver (GPS) developed by Herbert Simon and Allen Newell, not to be confused with the Global Positioning System that most people recognize today. It was among the first general-purpose AI programs and could solve a range of formalized problems by applying a set of predefined rules.
Nevertheless, early AI research faced significant challenges, including limited computing power and a lack of practical real-world applications. In response, the focus of the field shifted toward more specific problems such as speech and image recognition, natural language processing, and expert systems. This shift led to the emergence of a second wave of AI research, centered on developing specialized AI systems for particular tasks.
Despite these challenges, early AI research laid the foundation for future advancements and sparked a revolution in computing that has transformed the way we live and work. Today, AI is an integral part of many industries, with applications ranging from autonomous driving to virtual assistants. As the technology continues to evolve, it is likely to play an even greater role in shaping the future of technology and society.
2.2. Emergence of Machine Learning and Neural Networks
The emergence of machine learning and neural networks marks a significant milestone in the history of AI. In the early years of AI research, most systems relied on predefined rules or logic, which limited their ability to learn from new data or adapt to changing conditions. With the advent of machine learning, however, researchers were able to build systems that learn from experience and improve their performance over time.
Machine learning is a branch of AI in which algorithms are trained on data to make predictions or decisions. It comes in several variants, most notably supervised and unsupervised learning. In supervised learning, the algorithm is trained on a labeled dataset, where each data point is associated with a specific label or outcome; in unsupervised learning, the algorithm is trained on an unlabeled dataset and must identify patterns or correlations on its own. Neural networks are a class of machine learning models loosely modeled on the structure and function of the human brain: they consist of layers of interconnected nodes, or “neurons”, that process information and produce predictions or decisions [2]. They can be applied to a wide range of tasks, from image recognition to language processing.
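To make the distinction concrete, the following minimal sketch (in Python, using the scikit-learn library) trains a supervised classifier on labeled data and, separately, lets an unsupervised clustering algorithm find structure without labels. The iris dataset and the specific model choices are illustrative assumptions rather than part of the discussion above.

```python
# A minimal sketch contrasting supervised and unsupervised learning with
# scikit-learn; the iris data and the model choices are illustrative only.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Supervised learning: each sample comes with a label, and the model
# learns a mapping from features to labels.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("supervised accuracy:", clf.score(X_test, y_test))

# Unsupervised learning: no labels are given; the algorithm must find
# structure (here, three clusters) in the data on its own.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster assignments:", kmeans.labels_[:10])
```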
The emergence of machine learning and neural networks has transformed many industries, including healthcare, finance, and transportation. In healthcare, for example, machine learning algorithms can analyze medical images to support more accurate diagnoses, while in finance and logistics, neural networks are used to forecast stock prices and optimize routes.
2.3. Rise of Deep Learning & Revolutionary Discoveries
The rise of deep learning is widely regarded as a game-changer for the field of artificial intelligence. It involves training neural networks with many layers to recognize patterns and make predictions or decisions from complex data, at a level of performance earlier AI systems could not achieve. This approach has enabled breakthroughs in many areas, including image recognition, natural language processing, and robotics.
One of the most significant developments in deep learning is the convolutional neural network (CNN) for image recognition. CNNs identify patterns in images by analyzing small sub-regions and then combining the results into a final prediction, an approach that has driven advances in object detection, facial recognition, and autonomous driving.
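As an illustration of this idea, the sketch below defines a small convolutional network in PyTorch whose filters scan local sub-regions of an image before a final layer combines the results into a prediction. The input size, layer widths, and number of classes are arbitrary assumptions chosen only to keep the example self-contained.

```python
# A minimal sketch of a convolutional neural network in PyTorch: small
# convolutional filters scan sub-regions of the image, and the pooled
# results are combined by a fully connected layer into a class prediction.
# The 28x28 single-channel input and 10 classes are illustrative choices.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # local pattern detectors
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

model = SmallCNN()
dummy = torch.randn(4, 1, 28, 28)   # a batch of 4 fake grayscale images
print(model(dummy).shape)           # torch.Size([4, 10])
```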
Another important development is the use of recurrent neural networks (RNNs) for natural language processing. RNNs are designed to analyze sequences of data, which makes them well suited to tasks such as machine translation and speech recognition; virtual assistants like Siri and Google Assistant rely on such models to interpret spoken language and generate text-based responses.
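The following sketch illustrates the sequence-processing idea in PyTorch: an LSTM (a common recurrent architecture) reads a sequence of token IDs, and its final hidden state is used to classify the whole sequence. The vocabulary size, dimensions, and two output classes are illustrative assumptions, not a description of any production assistant.

```python
# A minimal sketch of a recurrent network for sequence data in PyTorch.
import torch
import torch.nn as nn

class SequenceClassifier(nn.Module):
    def __init__(self, vocab_size: int = 1000, embed_dim: int = 32,
                 hidden_dim: int = 64, num_classes: int = 2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):
        embedded = self.embed(token_ids)        # (batch, seq_len, embed_dim)
        _, (hidden, _) = self.rnn(embedded)     # hidden: (1, batch, hidden_dim)
        return self.out(hidden[-1])             # one prediction per sequence

model = SequenceClassifier()
fake_sentences = torch.randint(0, 1000, (4, 12))  # batch of 4 token sequences
print(model(fake_sentences).shape)                # torch.Size([4, 2])
```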
Deep learning has also led to significant progress in robotics and automation. For example, researchers use deep reinforcement learning to train robots to perform complex tasks such as grasping objects and manipulating tools. In this approach, the robot learns by trial and error, improving its behavior over time based on the rewards it receives [3].
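To show the trial-and-error principle in its simplest form, the sketch below implements tabular Q-learning on a toy one-dimensional “corridor” environment; real robotic systems use deep networks in place of the table, but the update rule is the same in spirit. The environment, reward scheme, and hyperparameters are purely illustrative.

```python
# A minimal sketch of learning by trial and error: tabular Q-learning on a
# toy corridor, not the deep variant used for real robots.
import random

N_STATES, ACTIONS = 6, [0, 1]          # move left (0) or right (1) along a corridor
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount factor, exploration rate
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # one value per (state, action) pair

def step(state, action):
    """Apply an action; reaching the right-most state yields reward +1 and ends the episode."""
    next_state = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

for episode in range(500):
    state, done = 0, False
    while not done:
        # Explore occasionally (or while estimates are still tied), otherwise act greedily.
        if random.random() < EPSILON or Q[state][0] == Q[state][1]:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[state][a])
        next_state, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward reward plus discounted future value.
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][action])
        state = next_state

print("learned preference for moving right:", [round(q[1] - q[0], 2) for q in Q])
```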
3. Types of AI & Their Applications
3.1. Types of AI
Artificial intelligence can be categorized into several types based on its capabilities and functionality; the two most commonly cited categories are narrow (weak) AI and general (strong) AI.
Narrow, or weak, AI refers to systems designed to perform a specific task or set of tasks; they are trained to identify patterns in data and make decisions based on those patterns. Image and facial recognition systems and the recommendation algorithms used by streaming services are typical examples of narrow AI.
General, or strong, AI, by contrast, describes systems capable of performing any intellectual task a human could do, including reasoning, problem-solving, and learning from experience, characteristics that would make them highly adaptable and versatile. Strong AI remains largely theoretical at this point, although researchers continue to make significant progress toward more capable systems.
Another category, already introduced in Section 2.2, is machine learning, which refers to systems that learn from experience and are therefore capable of self-adjustment and long-term improvement. Finally, hybrid AI combines different types of AI to create more powerful and flexible systems; for example, machine learning algorithms can be combined with natural language processing to build a chatbot that understands and responds to user inquiries effectively [4].
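A minimal sketch of such a hybrid system is shown below: a machine learning classifier (TF-IDF features plus logistic regression, via scikit-learn) recognizes the intent behind a user’s message, and a simple lookup table supplies the reply. The tiny training set, intent names, and canned responses are invented for illustration only.

```python
# A minimal sketch of a hybrid chatbot: a learned intent classifier plus a
# rule-based response lookup. Training data and replies are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "what time do you open", "when are you open", "opening hours please",
    "I want to cancel my order", "please cancel the order", "cancel my purchase",
]
train_intents = ["hours", "hours", "hours", "cancel", "cancel", "cancel"]

responses = {
    "hours": "We are open 9am-5pm, Monday to Friday.",
    "cancel": "I can help with that - could you share your order number?",
}

# Train the intent classifier on the labeled examples.
intent_model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
intent_model.fit(train_texts, train_intents)

def chatbot_reply(message: str) -> str:
    intent = intent_model.predict([message])[0]
    return responses[intent]

print(chatbot_reply("could you tell me your opening hours?"))
```

In a production system the classifier would typically be replaced by a larger language model and the lookup table by a dialogue manager, but the division of labor between the learned component and the rule-based component is the same.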
Understanding these distinctions clarifies what each kind of AI system is designed to do; without such knowledge, it is difficult to choose the right type of AI for a given task or application. To build systems that are efficient, accurate, and adaptable, researchers and developers must therefore be familiar with the strengths and limitations of each of these approaches.
3.2. Applications of AI in Different Industries
With the support of artificial intelligence, industries have gained numerous benefits, including increased efficiency, reduced production costs, and improved precision and accuracy. The following paragraphs give examples of how AI is being used across several industries.
Artificial intelligence is widely used in medicine. In radiology, AI can analyze medical images and support accurate diagnoses; in cardiology, it can analyze electrocardiograms and provide insights into heart conditions, helping physicians reach faster and more accurate conclusions. AI can also help construct personalized treatment plans by analyzing a patient’s data and predicting treatment outcomes, making treatment for different individuals more effective [5].
In finance, AI can automate and streamline routine tasks, improve customer service, and help identify potential risks. By analyzing vast amounts of data, AI helps financial institutions identify and mitigate risk and assists traders in predicting price movements and executing trades more quickly and confidently. AI-based chatbots and virtual assistants can also provide around-the-clock customer support, answering frequently asked questions and supplying account information.
In manufacturing and retail, AI is used to optimize productivity, reduce wasted time, and improve quality control throughout the production process, lowering costs by helping manufacturers avoid defective products. It can also be used to personalize the customer experience, improve supply chain efficiency, and optimize inventory management.
The use of AI in transportation is also worth mentioning: it allows travelers to see optimal route plans and anticipate potential risks and costs, and in recent years it has performed particularly well at reducing travel time through route optimization.
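As a small illustration of what route optimization involves under the hood, the sketch below runs Dijkstra’s shortest-path algorithm over a toy road graph whose edge weights stand in for travel times; the locations and weights are hypothetical, and a real navigation system would additionally fold in live traffic predictions.

```python
# A minimal sketch of route planning: Dijkstra's shortest path over a toy
# road graph. Locations and travel times are hypothetical.
import heapq

roads = {  # hypothetical travel times in minutes between locations
    "home":   {"mall": 7, "school": 4},
    "mall":   {"school": 2, "office": 6},
    "school": {"office": 12},
    "office": {},
}

def fastest_route(graph, start, goal):
    queue = [(0, start, [start])]      # (elapsed time, node, path so far)
    best = {start: 0}
    while queue:
        time, node, path = heapq.heappop(queue)
        if node == goal:
            return time, path
        for neighbor, cost in graph[node].items():
            new_time = time + cost
            if new_time < best.get(neighbor, float("inf")):
                best[neighbor] = new_time
                heapq.heappush(queue, (new_time, neighbor, path + [neighbor]))
    return float("inf"), []

print(fastest_route(roads, "home", "office"))  # (13, ['home', 'mall', 'office'])
```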
Overall, potential applications of artificial intelligence can be found in nearly every industry, offering businesses and organizations significant opportunities to improve operations and management, increase productivity, and provide better customer service. As understanding of the technology deepens, researchers and practitioners can continue to unlock its potential and drive innovation across a range of sectors.
3.3. Applications of AI in Past & Modern Society
Artificial intelligence has reshaped many aspects of modern society, from entertainment to education, and the emergence of this steadily improving technology has changed the way people live. In the past, AI was used mainly for simple tasks such as organizing data; over time, it has become far more diverse and versatile. Today it is common to encounter AI in the daily lives of people around the world, in areas such as transportation, security, and communication.
One of the most visible contributions of AI to modern society is the personal assistant, such as Siri and Alexa. These assistants are used in households to control appliances and other devices and to place orders; they use natural language processing to interpret spoken commands and machine learning algorithms to improve their performance over time.
AI has also been used to improve medical services, with applications ranging from disease diagnosis to drug development. Doctors once had to rely solely on experience and intuition to diagnose patients; with the rapid growth of AI, they can now analyze patient data to improve the accuracy and precision of their work.
In finance, AI is used to automate routine tasks and detect fraudulent activity, analyzing large amounts of financial data to identify patterns and make predictions. It is also improving transportation, with self-driving cars as the most prominent example: these vehicles use a combination of sensors and AI algorithms to navigate roads and avoid obstacles [6].
All in all, AI has become an indispensable part of modern society: AI systems are routinely used to automate tasks, analyze datasets, and make predictions. As the technology continues to develop, society’s reliance on it is likely to grow.
4. Current Trends & Innovations
4.1. Advancements in Natural Language Processing (NLP)
Natural language processing (NLP) has grown rapidly in recent years, driven largely by more advanced algorithms and machine learning models. One of the most significant breakthroughs was GPT-3, introduced by OpenAI in June 2020. The model demonstrated a remarkable grasp of human language and a striking ability to generate coherent text. With 175 billion parameters, it has been used in applications including chatbots, translation services, and content generation [7].
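For readers who want to experiment with this style of model, the sketch below uses the Hugging Face transformers library to generate text from a prompt. GPT-3 itself is accessible only through OpenAI’s hosted API, so the openly downloadable GPT-2 model is used here as a small-scale stand-in; the prompt and generation settings are illustrative.

```python
# A minimal sketch of GPT-style text generation with Hugging Face transformers.
# GPT-2 serves as an open stand-in for GPT-3; the prompt is illustrative.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "Artificial intelligence is reshaping the global economy because"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])
```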
Beyond GPT-3, notable advances in NLP include Bidirectional Encoder Representations from Transformers (BERT), developed by Google in 2018, which has been instrumental in improving search engine results, sentiment analysis, and question-answering systems [8]. The introduction of Embeddings from Language Models (ELMo), also in 2018, made significant strides in allowing NLP models to incorporate contextual information, leading to improved performance [9].
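The bidirectional context that BERT brings can be probed with a few lines of code, as in the sketch below, where the model fills in a masked word using the words on both sides of the blank. The example sentence is an illustrative assumption.

```python
# A minimal sketch of probing BERT's bidirectional context via fill-mask.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")
for candidate in fill_mask("Artificial intelligence will [MASK] the global economy."):
    print(f"{candidate['token_str']:>12}  score={candidate['score']:.3f}")
```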
These advances in NLP have allowed businesses to automate customer support, provide real-time translation, and improve content curation. Companies such as Google, Amazon, and Microsoft have invested heavily in NLP research and applications, contributing to a rapidly growing market projected to reach $43.3 billion by 2027.
4.2. Explainable Artificial Intelligence (XAI)
As AI systems become more sophisticated and more deeply integrated into daily life, the demand for transparency and accountability is growing. Explainable artificial intelligence (XAI) aims to create AI models that can provide understandable, interpretable explanations for their decisions and actions, which is particularly important in high-stakes domains such as finance, healthcare, and law enforcement.
One approach to explainability is Local Interpretable Model-agnostic Explanations (LIME), which provides insight into the decision-making process behind an individual prediction of a given model. Another technique, SHapley Additive exPlanations (SHAP), identifies the contribution of each input feature to a model’s prediction. Such techniques are essential for fostering trust in and acceptance of AI systems among stakeholders and the general public [9].
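The sketch below shows how LIME can be applied in practice: a local surrogate model explains a single prediction of a “black-box” classifier in terms of feature contributions. The breast-cancer dataset and random forest classifier are illustrative choices, not an experiment from this paper.

```python
# A minimal sketch of LIME: explaining one prediction of a black-box model
# in terms of per-feature contributions. Dataset and model are illustrative.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)
# Explain a single prediction: which features pushed the model toward its answer?
explanation = explainer.explain_instance(data.data[0], model.predict_proba, num_features=5)
for feature, weight in explanation.as_list():
    print(f"{feature:<40} {weight:+.3f}")
```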
4.3. Edge AI and the Internet of Things (IoT)
Edge AI refers to running artificial intelligence algorithms on edge devices such as smartphones, sensors, and other IoT hardware rather than relying solely on cloud-based processing. This shift enables real-time processing, lower latency, and improved data privacy. With the number of IoT devices estimated to reach 75.4 billion by 2025, demand for edge AI is increasing accordingly.
Familiar examples of edge AI include smart home devices, autonomous vehicles, and wearable health monitors. These devices collect and process large amounts of data locally, enabling faster and more efficient decision-making. Companies such as NVIDIA and Intel are investing heavily in edge AI hardware, with NVIDIA introducing the Jetson platform for edge AI applications [6].
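As a concrete (and deliberately simplified) example of preparing a model for edge deployment, the sketch below converts a tiny Keras model into a compact TensorFlow Lite file of the kind that can run on phones or IoT-class hardware; the toy model, quantization setting, and file name are assumptions for illustration.

```python
# A minimal sketch of converting a small Keras model to TensorFlow Lite for
# on-device inference. The toy model and file name are illustrative.
import tensorflow as tf

# A tiny stand-in model; in practice this would be a trained network.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # quantize to shrink the model
tflite_model = converter.convert()

with open("sensor_model.tflite", "wb") as f:
    f.write(tflite_model)
print("model size on disk:", len(tflite_model), "bytes")
```

On the device itself, the saved file would be loaded with TensorFlow Lite’s interpreter; the point here is simply that the heavy training happens offline while a compact artifact runs locally.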
5. Future Potential of AI
5.1. The Impact of AI on Job Markets and Employment
The impact of AI on job markets and employment is a complicated issue that requires a deep understanding of the technology and its potential applications. Worker displacement is one of the most significant concerns in many industries, and some job roles face a higher risk of automation than others: industries involving manual labor or routine data processing, for example, are more likely to see falling demand for human labor. Nevertheless, the adoption of AI is also creating new job opportunities, including roles such as AI trainers, explainability experts, and ethical AI specialists.
The impact of AI is not limited to worker displacement; it can also affect the wages and working conditions of those who remain employed. While AI can increase productivity and efficiency, it could also concentrate wealth and power in the hands of those who control the technology. It is therefore crucial for policymakers to develop strategies that ensure the benefits of artificial intelligence are shared broadly among all stakeholders.
One proposal for mitigating the negative effects of AI on employment is a universal basic income (UBI), which would provide a safety net for workers who lose their jobs to automation and allow them to transition to new fields without severe financial hardship. However, the feasibility and effectiveness of UBI are still debated, and more research is needed to determine its potential impact on the economy and society [10].
In summary, the impact of AI on job markets and employment is a complex issue that requires a nuanced understanding of the technology and its applications. Beyond displacing workers, AI can also create new job opportunities and raise productivity and efficiency. Policymakers and organizations must invest in education and reskilling programs to help workers adapt to the changing job market and ensure a smooth transition to an AI-driven economy, and while strategies such as UBI may benefit those affected by automation, more research is needed to evaluate their viability and effectiveness.
5.2. Ethical Considerations of AI
As AI continues to be integrated into various industries and applications, its ethical implications are becoming increasingly relevant. Issues of fairness, accountability, and transparency arise because AI systems can amplify existing societal biases and inequalities. Bias in AI systems can stem from several sources, including biased data, algorithmic bias, and design choices.
Addressing these ethical concerns requires a comprehensive approach that spans the entire AI development and deployment process: understanding the societal implications of AI, identifying potential biases, and implementing strategies to mitigate them. One such strategy is the use of diverse and representative datasets to train AI models. In addition, AI systems can be designed to be transparent, explainable, and, above all, accountable, allowing for more effective oversight and regulation.
Establishing ethical guidelines for AI requires interdisciplinary collaboration among ethicists, computer scientists, and policymakers. Such guidelines help ensure that AI is developed and deployed responsibly, upholding principles of fairness, accountability, and transparency. For instance, the European Commission’s Ethics Guidelines for Trustworthy AI set out seven key requirements for AI systems, including transparency, non-discrimination, and respect for privacy and data governance [11].
The ethical implications of AI also extend beyond technical considerations. The use of AI in decision-making processes such as criminal justice has raised concerns about its impact on human rights and individual autonomy. Addressing these issues is essential to ensuring that AI is used in a way that upholds ethical principles and respects human dignity.
5.3. AI and Healthcare
Beyond its potential in diagnostics and drug research, AI can be used to monitor patient health and detect early warning signs of disease. Wearable devices equipped with AI algorithms can track a patient’s vital signs and give the patient and their healthcare providers real-time feedback; by spotting possible health problems before they become serious and enabling prompt action, such monitoring can improve patient outcomes.
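A deliberately simplified sketch of this monitoring idea is given below: a rolling statistic over heart-rate readings flags values that deviate sharply from the patient’s recent baseline. The synthetic readings, window size, and three-standard-deviation threshold are assumptions made purely for illustration; clinical systems are far more sophisticated.

```python
# A minimal, purely illustrative sketch of baseline monitoring: flag readings
# that deviate sharply from the recent rolling average of a patient's heart rate.
import statistics

heart_rate = [72, 74, 71, 73, 75, 72, 70, 74, 73, 118, 72, 71]  # one simulated spike
WINDOW, THRESHOLD = 8, 3.0

for i in range(WINDOW, len(heart_rate)):
    recent = heart_rate[i - WINDOW:i]
    mean, stdev = statistics.mean(recent), statistics.stdev(recent)
    if stdev > 0 and abs(heart_rate[i] - mean) > THRESHOLD * stdev:
        print(f"reading {i}: {heart_rate[i]} bpm flagged (baseline {mean:.1f} +/- {stdev:.1f})")
```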
AI can also assist healthcare workers in making better decisions by evaluating and combining information from various sources, including genetic data, medical imaging, and electronic health records. As a result, patients may receive customized treatment plans based on their unique characteristics, improving outcomes and lowering healthcare costs.
However, as with any new technology, adopting AI in healthcare brings difficulties. Ensuring that AI systems are accurate and reliable is one challenge, since mistakes can have serious consequences for patient health. Data privacy and security issues must also be addressed to guarantee that patient information is protected.
5.4. AI and Sustainability
Beyond optimizing energy consumption and aiding environmental monitoring, AI can play a significant role in resource management. For example, AI can help identify and address inefficiencies in supply chains, reduce waste, and optimize resource allocation, which can yield significant cost savings and environmental benefits.
Furthermore, AI can facilitate sustainable agricultural practices by optimizing crop yields and reducing pesticide use. Many companies now use computer vision and machine learning to identify weeds and spray herbicides only where they are actually needed, reducing overall pesticide use [12].
AI can also be helpful in climate modeling and prediction, with weather forecasting as the most recognizable example. With the help of AI, scientists can produce more accurate forecasts of weather patterns and climate trends, and that information can be used to inform policy decisions and develop effective climate adaptation strategies.
Nonetheless, AI’s potential negative impacts on sustainability must also be weighed. Energy is an obvious example: the energy consumed in training and running AI models is substantial, especially as applications grow in scale. There is also a risk that AI-driven solutions will have unintended consequences, such as increasing resource consumption elsewhere. It is therefore essential to consider the trade-offs and opportunity costs carefully and to ensure that the technology is used in a responsible and sustainable manner.
6. Conclusion
In conclusion, AI has become a significant and rapidly advancing technology with the potential to change the world in many ways. Its impact on different sectors and on the economy is likely to be profound, even as concerns grow about its potential negative consequences. At the same time, the potential benefits of AI in areas such as healthcare and sustainability cannot be ignored, and as the technology continues to evolve, the scope for innovation and progress will only grow. It is therefore essential to continue investing time, effort, and resources in AI research and development while addressing ethical concerns and ensuring that the technology is used responsibly. AI presents both significant opportunities and challenges for society; its successful integration into various sectors will require collaboration among policymakers, industry leaders, and researchers to ensure that it is deployed ethically, responsibly, and for the betterment of society as a whole. The full potential of artificial intelligence is yet to be realized, but its enormous possibilities are certain, and it is up to people around the world to ensure that this potential is harnessed for a better and more sustainable future.
References
[1]. McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (1955). A proposal for the Dartmouth summer research project on artificial intelligence. AI Magazine, 27(4), 12-14.
[2]. Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning representations by back-propagating errors. Nature, 323(6088), 533-536.
[3]. LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436-444.
[4]. Russell, S. J., & Norvig, P. (2021). Artificial Intelligence: A Modern Approach (4th ed.). Pearson.
[5]. Bughin, J., Hazan, E., Ramaswamy, S., Chui, M., Allas, T., Dahlström, P., ... & Trench, M. (2017). Artificial intelligence: The next digital frontier? McKinsey Global Institute.
[6]. Kaplan, A., & Haenlein, M. (2019). Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Business Horizons, 62(1), 15-25.
[7]. Brown, T.B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., ... & Amodei, D. (2020). Language models are few-shot learners. OpenAI. Retrieved from https://openai.com/blog/language-models-are-few-shot-learners/
[8]. Devlin, J., Chang, M.W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. Google AI. Retrieved from https://ai.googleblog.com/2018/11/open-sourcing-bert-state-of-art-pre.html
[9]. Peters, M.E., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., & Zettlemoyer, L. (2018). Deep contextualized word representations. Retrieved from https://arxiv.org/abs/1802.05365
[10]. McKinsey Global Institute. (2017). Jobs lost, jobs gained: What the future of work will mean for jobs, skills, and wages. Retrieved from https://www.mckinsey.com/featured-insights/future-of-work/jobs-lost-jobs-gained-what-the-future-of-work-will-mean-for-jobs-skills-and-wages
[11]. European Commission. (2019). Ethics guidelines for trustworthy AI. Retrieved from https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
[12]. DeepMind. (2016). DeepMind AI reduces Google data centre cooling bill by 40%. Retrieved from https://deepmind.com/blog/article/deepmind-ai-reduces-google-data-centre-cooling-bill-40