1 Introduction
The integration of AI avatars within organizational structures marks a significant shift in workplace communication and management. These digital entities, designed to mimic human interactions, are poised to reshape the process of organizational socialization. While AI avatars offer the potential to enhance decision-making and streamline operations through data-driven insights, their introduction also raises critical challenges, particularly concerning employee acceptance and interaction.
This study seeks to answer the following research question: How do AI avatars, particularly in leadership roles, affect the organizational socialization of new employees, and how can Expectation Violation Theory be applied to understand employee responses to these AI-driven interactions?
2 Literature Review
2.1 Technological Influence on Employee Socialization
Expanding on the technological influence on employee interaction, Lee, Kramer, and Guo [16] posit that the characteristics of social media—persistence, editability, visibility, and relevance—significantly impact the three phases of employees' organizational socialization: anticipatory socialization, encounter phase, and metamorphosis phase. These characteristics facilitate employee adaptation to organizational culture and role adoption while presenting challenges related to impression management and privacy concerns. This framework aids in understanding how employees might interact with AI avatars.
In the workplace, social media use has been shown to benefit team and employee performance [27]. Specifically, the co-use of work-oriented social media (e.g., DingTalk) and socialization-oriented social media (e.g., WeChat) can produce a reinforcing effect that drives performance, suggesting that AI avatars may be adopted widely in scenarios where they merge the benefits of both types of platform. In a related line of research, Chu and Chu [4] examined novice hospitality employees' adoption of social intranets through the TAM framework. Their study identified perceived usefulness (PU) and perceived ease of use (PEOU) as mediators between the frequency of intranet use and the organizational socialization of new employees, implying that employee acceptance of AI avatars can be maximized if their design meets expectations of ease of use and usefulness.
2.2 The Effect of AI on the Role of Business Managers
With the continuous development of AI technologies, business managers face unprecedented opportunities and challenges. AI is changing not only how businesses conduct their operations but also the role and functions of leaders within organizations. Peña et al. [23] report that the ability to customize AI avatars can evoke different leadership styles, in turn shaping individuals' altruistic behavior and their perceptions of empowerment in leadership roles. Participants who customized the image of a democratic business leader were more likely to exhibit altruistic behaviors, whereas those who customized the image of an authoritarian business leader were not. This result suggests that tailored AI images can evoke the social behaviors and perceptions associated with particular leadership styles. Smith and Green [26] examined the influence of AI machines as a new class of followers with respect to leadership roles. They argued that more attention must be paid to ethical and moral instruction when constructing relationships between humans and AI machines, which demands both bottom-up and top-down approaches to machine ethics. In short, AI technologies bring new insights and tools for business managers, who in turn must develop new leadership skills and an organizational culture that supports their companies' adoption and use of AI.
2.3 Employee Acceptance and Integration of AI in the Workplace
Lichtenthaler [18] describes the complexity and diversity of employees' acceptance of AI, which is driven in large part by concern over losing jobs to machines; this concern relates more to overall job security and career development than to purely technical issues. In contrast, Petrat and colleagues [22] examined the acceptance of AI as a management tool from the perspective of organizational leaders. Their study showed that employee acceptance was highest when AI was introduced as a digital cognitive assistant. This AI-assisted approach to team supervision not only facilitated management but also fostered a data-driven feedback culture, illustrating how the role assigned to AI in leadership can influence employee engagement and the perceived value of AI in the workplace.
Similarly, Brenda K. Wiederhold calls for careful consideration in integrating AI technology into practice, especially in virtual environments, asserting the need to maintain human trust in AI and to remain attentive to user privacy and data security as AI systems evolve. David Allen Larson, discussing the future role of AI in dispute resolution, notes that although artificial intelligence cannot yet handle complex, interactive, interpersonal tasks, it has already assumed a substantial share of the responsibilities traditionally held by human practitioners in alternative dispute resolution.
Taken together, these studies examine socialization processes through social media and communication technologies across various organizational and technological contexts, along with their consequences for employee performance and fit. They provide a broad account of the factors that drive employee acceptance of, and willingness to use, AI avatars, highlighting the need for integration that keeps step with employees' expectations.
However, this body of work offers limited insight into how AI avatars, particularly in leadership roles, align with or deviate from employee expectations, a key determinant of their successful integration. Further research is needed on the interaction patterns between AI avatars and employees in relation to established paradigms of leadership. Moreover, the application of Expectation Violation Theory (EVT) to AI avatars has not been fully explored. EVT can shed light on employees' cognitive and emotional responses to AI-driven leadership and therefore serves as an important framework for analyzing these dynamics.
3 Theoretical Framework
The aim of this section is to develop an overall theoretical framework for assessing employee perception and acceptance of AI avatars. To that end, this study combines Expectation Violation Theory (EVT) and the Technology Acceptance Model (TAM), with EVT as the primary theory and TAM as a supporting theory.
3.1 EVT
Our framework begins with expectations for leadership interaction: EVT explains how new employees use violations of expected leadership interactions to interpret the situation and form their own perceptions.
The introduction of AI avatars may itself violate people's expectations. The response to such violations depends on the violation valence and the communicator reward valence. When newcomers perceive AI avatars as useful and easy to use, negative reactions subside more quickly, giving way to positive attitudes and improved subsequent use.
Research has shown that positive expectancy violations from AI avatars can improve users' social judgments and enhance the quality of interaction. For instance, AI avatars that produced responses beyond expectations were perceived as more credible and more engaging, in line with EVT's prediction that positive deviations lead to improved relational outcomes.
EVT was first proposed by Judee K. Burgoon in 1978 to account for people's reactions to unexpected acts or situations that arise in interpersonal contact [3].
A recent study applying EVT to interactions with AI-powered virtual influencers on social media found that deviations from the expected interaction strongly influence people's parasocial relationships and behavioral intentions [10, 14]. This positions EVT as an instrumental lens for understanding responses to AI-mediated communication, particularly in leadership representation.
In this study, EVT explains the phenomenon of AI avatars representing today's leaders: how the communication expectations that employees hold toward a leader are violated and revised when they communicate with an AI avatar instead of a human leader [3, 29].
Furthermore, research indicates that when an AI agent violates expectations by delivering responses that are more relevant, accurate, and personalized than anticipated, users are more satisfied with and trusting of the agent. Favorable expectation violations thus improve perceptions of an agent, suggesting that AI avatars designed to exceed expectations could enhance leadership effectiveness and employee acceptance.
3.2 TAM
This research also draws on TAM to address the technological dimension, considering how the perceived ease of use and perceived usefulness of AI avatars shape these perceptions and acceptance.
The Technology Acceptance Model explains how users come to accept and use a technology, with perceived usefulness and perceived ease of use as the major factors determining adoption [7].
A newer research stream has extended TAM to include factors specific to AI technologies, enabling such models to examine trust at a level comparable to human interactions, with a particular focus on AI in service industries [13].
Given that our study focuses on AI avatars, TAM helps explain why workers are more open to interacting with and accepting this novel form of leadership communication. The present research also draws on an expanded version of the Unified Theory of Acceptance and Use of Technology (UTAUT2), whose factors have been shown to explain users' behavior toward smart virtual assistants, demonstrating the explanatory power of such models for understanding the acceptance of AI-mediated communication technologies [28].
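To make this operationalization concrete, the following minimal sketch (in Python) regresses an acceptance score on the two TAM predictors. It is an illustration only: the column names `perceived_usefulness`, `perceived_ease_of_use`, and `acceptance`, as well as the toy data, are hypothetical stand-ins rather than the study's actual variables or results.

```python
# Illustrative sketch only: acceptance regressed on the two TAM predictors.
# Column names and toy data are hypothetical, not the study's variables.
import pandas as pd
import statsmodels.api as sm

def fit_tam_model(df: pd.DataFrame):
    """Fit acceptance ~ perceived_usefulness + perceived_ease_of_use by OLS."""
    predictors = sm.add_constant(df[["perceived_usefulness", "perceived_ease_of_use"]])
    return sm.OLS(df["acceptance"], predictors).fit()

if __name__ == "__main__":
    # Toy data standing in for 7-point Likert composites.
    toy = pd.DataFrame({
        "perceived_usefulness":  [5, 6, 4, 7, 3, 6, 5, 2],
        "perceived_ease_of_use": [4, 6, 5, 7, 2, 5, 6, 3],
        "acceptance":            [5, 7, 4, 7, 2, 6, 6, 2],
    })
    print(fit_tam_model(toy).summary())
```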
In conclusion, as this study examines the impact of AI avatars, this theoretical integration provides a comprehensive framework for understanding how the introduction and implementation of AI avatars in an organizational setting affects the credibility of business leaders and the acceptance of new employees.
4 Methodology
4.1 Research Design
This study adopted a quantitative research design to capture participants' views on business management via a questionnaire and the meanings conveyed by their differing response levels. According to Sis International Research (n.d.), the quantitative approach involves systematically collecting and analyzing data from various sources and using statistical tools to produce conclusive results. The advantage of quantitative research is that statistical methods can be applied to numerical data to obtain predictable results, leading to more objective and generalizable conclusions [6].
In contrast, qualitative studies are more exploratory; they typically collect verbal, behavioral, or observational data that are interpreted subjectively. Although qualitative approaches provide insight into individual respondents' experiences, they may lack the ability to generalize findings to a larger population [8]. The quantitative approach, by comparison, supports objective measurement and statistical verification of hypotheses, helping ensure the reliability and validity of the results [2].
Stratified random sampling was used. This approach ensures that different subgroups of the population are well represented, increasing the generalizability of the results [20].
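A minimal sketch of how such a proportionate stratified draw can be implemented is shown below; the sampling frame, the `department` stratum, and the 20% sampling fraction are illustrative assumptions, not the study's actual frame.

```python
# Sketch of proportionate stratified random sampling.
# The frame, stratum, and fraction below are illustrative assumptions.
import pandas as pd

def stratified_sample(frame: pd.DataFrame, stratum: str, frac: float,
                      seed: int = 42) -> pd.DataFrame:
    """Draw the same fraction from every stratum so each subgroup is represented."""
    return (frame.groupby(stratum, group_keys=False)
                 .apply(lambda g: g.sample(frac=frac, random_state=seed)))

if __name__ == "__main__":
    frame = pd.DataFrame({
        "employee_id": range(1, 101),
        "department": ["sales"] * 40 + ["engineering"] * 35 + ["hr"] * 25,
    })
    sample = stratified_sample(frame, "department", frac=0.2)
    print(sample["department"].value_counts())  # 8 sales, 7 engineering, 5 hr
```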
4.2 Question Design
The questions are designed to capture respondents' interaction with, perception of, and acceptance of AI avatars, particularly in leadership roles. They were formulated according to both theoretical and practical needs, and the data collected will be used to inform strategies for better integrating AI into organizational environments.
Demographics (Q1-Q4): These questions collect basic information about the respondent regarding gender, age, education, and duration of employment. This is useful for understanding the background of respondents and enables demographic segmentation when analyzing the data.
Workplace Integration (Q5-Q8): These questions relate to the stages of participants' socialization and adaptation to their current work environment, which may influence how comfortable they are in their roles and, in turn, how open they are to AI avatars.
Previous Experience with AI (Q9-Q11): These questions measure past exposure to AI tools, and they set a baseline of familiarity with AI technologies.
AI Avatar Interaction (Q12-Q24): The core questions of the survey focus on interaction with AI avatars, including willingness to engage, perceived effectiveness of conversations, and expectations. This block is linked directly to the study's objective of identifying the factors that make a difference to AI avatar acceptance.
Open-Ended Response (Q25): This question invites concerns and suggestions for improvement, eliciting qualitative attitudes that respondents hold toward AI avatars.
4.3 Data Collection
Data were collected through a structured questionnaire. The research instrument comprised 25 questions rated on a Likert scale from 1 to 7, where 1 stands for "Strongly Disagree" and 7 for "Strongly Agree." A scale of this kind quantifies respondents' attitudes, making them easier to count and analyze [19]. The structured format of the questionnaire ensures consistency of responses and enables the researcher to draw meaningful comparisons and conclusions [12].
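The sketch below illustrates how 7-point Likert responses of this kind could be coded and summarized item by item; the export file name `questionnaire_export.csv` and the `Q12`-`Q24` column labels are hypothetical placeholders, with Q12-Q24 standing for the AI avatar interaction block described above.

```python
# Sketch: summarising 7-point Likert items (1 = Strongly Disagree, 7 = Strongly Agree).
# The CSV file name and column labels are hypothetical placeholders.
import pandas as pd

LIKERT_MIN, LIKERT_MAX = 1, 7

def summarize_items(responses: pd.DataFrame, items: list) -> pd.DataFrame:
    """Return mean, standard deviation, and valid N for each Likert item."""
    block = responses[items].apply(pd.to_numeric, errors="coerce")
    block = block.where((block >= LIKERT_MIN) & (block <= LIKERT_MAX))  # drop out-of-range codes
    return pd.DataFrame({"mean": block.mean(), "std": block.std(), "n": block.count()})

if __name__ == "__main__":
    responses = pd.read_csv("questionnaire_export.csv")  # hypothetical export
    ai_items = [f"Q{i}" for i in range(12, 25)]           # Q12-Q24: AI avatar interaction block
    print(summarize_items(responses, ai_items).round(2))
```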
Participants were recruited during the later stages of their training programs. Company leaders distributed the questionnaires to employees as a step in the training process, which ensured a high response rate. In addition, as a generally younger demographic, new employees are more likely to have prior exposure to AI, which improves the quality and richness of the data [28]. This group's higher willingness to engage in surveys and provide detailed responses further justifies its selection [11].
The data collection platform is Wenjuanxing (https://www.wenjuan.com), a widely used online survey tool in China. With Wenjuanxing, responses are saved in real time and are immediately available for statistical analysis. Eighty-three (83) completed questionnaires were returned. Under the agreement between the respondents and the researchers, participants were assured of the confidentiality of their responses, which helps obtain fair and unbiased data [9].
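As a further illustration, the sketch below segments an acceptance composite by the demographic items (Q1-Q4) for subgroup comparison; the export file name and the column labels are assumptions for illustration only, not the platform's actual export format.

```python
# Sketch: subgroup comparison of an acceptance composite across demographic items.
# The export file name and column labels are illustrative assumptions.
import pandas as pd

def acceptance_by_group(responses: pd.DataFrame, group_col: str, item_cols: list) -> pd.Series:
    """Average the acceptance items per respondent, then take group means."""
    composite = responses[item_cols].mean(axis=1)
    return composite.groupby(responses[group_col]).mean().sort_values(ascending=False)

if __name__ == "__main__":
    responses = pd.read_csv("questionnaire_export.csv")  # the 83 returned questionnaires
    ai_items = [f"Q{i}" for i in range(12, 25)]
    for demo in ["Q1_gender", "Q2_age", "Q3_education", "Q4_tenure"]:  # assumed labels
        print(acceptance_by_group(responses, demo, ai_items), "\n")
```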
5 Conclusion
The gender ratio of the survey respondents is nearly 1:1, and they are concentrated in the first phase of employee socialization, which to a certain extent enhances the validity of the analysis and reduces confounding factors. The data show that onboarding training for most respondents is delivered by their superiors and colleagues, with few external guest lecturers and little direct training by the CEO. The stage at which employees need the most help is concentrated in the early period of work, and they hope for personalized, hands-on assistance; this aligns closely with the intended role of AI avatars. Emotional factors also play a significant role: the attitude and dedication of trainers can strongly affect employees' psychological state. Although employees' use of AI tools is not yet widespread, their tolerance for AI avatars is very high, indicating substantial potential for the broader adoption of AI avatars. However, it is worth noting that the response speed, professional competence, and accuracy of an AI avatar will strongly influence employees' willingness to accept and trust it, and the practicality and relevance of the help it provides will become an important evaluation criterion.
Authors’ Contributions
Yichen Liu and Yaqi Zhang contributed equally to this work and should be considered co-first authors.
References
[1]. Bevan, J. L., Ang, P.-C., & Fearns, J. B. (2014). Being unfriended on Facebook: An application of Expectancy Violation Theory. Computers in Human Behavior, 33, 171–178. https://doi.org/10.1016/j.chb.2014.01.029
[2]. Bryman, A. (2016). Social research methods. Oxford University Press.
[3]. Burgoon, J. K. (1978). A communication model of personal space violations: Explication and an initial test. Human Communication Research, 4(2), 129-142.
[4]. Chu, A. Z.-C., & Chu, R. J.-C. (2011). The intranet's role in newcomer socialization in the hotel industry in Taiwan – technology acceptance model analysis. The International Journal of Human Resource Management, 22(5), 1163-1179. https://doi.org/10.1080/09585192.2011.556795
[5]. Cohen, J. (2013). Statistical power analysis for the behavioral sciences. Routledge.
[6]. Creswell, J. W., & Creswell, J. D. (2017). Research design: Qualitative, quantitative, and mixed methods approaches. Sage Publications.
[7]. Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319-340.
[8]. Denzin, N. K., & Lincoln, Y. S. (Eds.). (2011). The Sage handbook of qualitative research. Sage.
[9]. Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, phone, mail, and mixed-mode surveys: The tailored design method. John Wiley & Sons.
[10]. Fan, F., Fu, L., & Jiang, Q. (2023). Virtual idols vs online influencers vs traditional celebrities: How young consumers respond to their endorsement advertising. Young Consumers. https://doi.org/10.1108/YC-08-2023-1811
[11]. Fishbein, M., & Ajzen, I. (2011). Predicting and changing behavior: The reasoned action approach. Psychology Press.
[12]. Fowler Jr., F. J. (2013). Survey research methods. Sage Publications.
[13]. Gursoy, D., Chi, O. H., Lu, L., & Nunkoo, R. (2019). Consumers acceptance of artificially intelligent (AI) device use in service delivery. International Journal of Information Management, 49, 157-169.
[14]. Gong, W., Jung, J., & Lim, J. S. (2022). Exploring parasocial relationships with AI-powered virtual influencers: An expectancy violation perspective. Frontiers in Psychology, 13, 993935.
[15]. Hutson, J., Ratican, J., & Biri, C. (2023). Essence as Algorithm: Public Perceptions of AI-Powered Avatars of Real People. DS Journal of Artificial Intelligence and Robotics, 1(2), 1-14. https://doi.org/10.59232/AIR-V1I2P101
[16]. Lee, S. K., Kramer, M. W., & Guo, Y. (2019). Social media affordances in entry-level employees’ socialization: Employee agency in the management of their professional impressions and vulnerability during early stages of socialization. New Technology, Work and Employment, 34(3), 244-259. https://doi.org/10.1111/ntwe.12147
[17]. Levin, K. A. (2006). Study design III: Cross-sectional studies. Evidence-Based Dentistry, 7(1), 24-25.
[18]. Lichtenthaler, U. (2020). Extremes of acceptance: Employee attitudes toward artificial intelligence. Journal of Business Strategy, 41(5), 39-45. https://doi.org/10.1108/JBS-12-2018-0204
[19]. Likert, R. (1932). A technique for the measurement of attitudes. Archives of Psychology.
[20]. Lohr, S. L. (2021). Sampling: Design and analysis. Chapman and Hall/CRC.
[21]. Na, S., Heo, S., Han, S., Shin, Y., & Roh, Y. (2022). Acceptance Model of Artificial Intelligence (AI)-Based Technologies in Construction Firms: Applying the Technology Acceptance Model (TAM) in Combination with the Technology–Organisation–Environment (TOE) Framework. Buildings, 12(2).
[22]. Petrat, D., Yenice, I., Bier, L., & Subtil, I. (2022). Acceptance of artificial intelligence as organizational leadership: A survey. TATuP - Zeitschrift für Technikfolgenabschätzung in Theorie und Praxis / Journal for Technology Assessment in Theory and Practice, 31(2), 64-69. https://doi.org/10.14512/tatup.31.2.64
[23]. Peña, J., et al. (2023). Virtual leaders: Can customizing authoritarian and democratic business leader avatars influence altruistic behavior and leadership empowerment perceptions? Computers in Human Behavior, 141, 107616.
[24]. Peifer, Y., et al. (2022). Artificial Intelligence and its Impact on Leaders and Leadership. Procedia Computer Science, 200, 1024–1030.
[25]. Sharma, M., & Vemuri, K. (2022). Accepting Human-like Avatars in Social and Professional Roles. ACM Transactions on Human-Robot Interaction, 11(3), Article 28. https://doi.org/10.1145/3526026
[26]. Smith, A. M., & Green, M. (2018). Artificial Intelligence and the Role of Leadership. Journal of Leadership Studies, 12(3), 85-87. https://doi.org/10.1002/jls.21605
[27]. Song, Q., Wang, Y., Chen, Y., Benitez, J., & Hu, J. (2019). Impact of the usage of social media in the workplace on team and employee performance. Information & Management, 56, Article 103160. https://doi.org/10.1016/j.im.2019.04.003
[28]. Venkatesh, V., Thong, J. Y., & Xu, X. (2012). Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Quarterly, 36(1), 157-178.
[29]. Warfield, D. (2015). Expectancy violations theory. In C. R. Berger & M. E. Roloff (Eds.), The International Encyclopedia of Interpersonal Communication (pp. 1-9). John Wiley & Sons.
Data availability
The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.