1. Introduction
In recent years, Artificial Intelligence (AI) has rapidly evolved from an auxiliary tool into a core element of decision-making in modern organisations. It now performs front-line tasks such as screening job applications and generating performance feedback. For instance, ChatGPT has been integrated into customer service platforms to automate real-time responses, and AI-powered résumé screening systems use pre-trained algorithms to assess thousands of applications [1, 2]. These changes promise higher efficiency, but they have also raised urgent questions about trust and fairness in the workplace.
As AI's involvement in decision-making grows, the boundary between human and machine responsibilities is beginning to blur. This transformation has also created organisational behaviour challenges, such as employees' difficulty in interpreting AI decisions, a lack of system transparency, and resistance triggered by hierarchical cultures. These problems have already emerged in workplaces that introduce AI without adequate communication or design considerations, manifesting as employee resistance, ethical disputes, and declining morale [3, 4]. Against this backdrop, this study seeks to answer a core question: how is AI reshaping the operational logic of organisations, and what governance strategies are required to adapt to this transformation?
The remainder of this paper is organised as follows: Section 2 discusses typical scenarios of AI application in organisational management, including AI in recruitment, smart performance evaluation, and the role of AI in predicting employees' emotions and turnover risks. Section 3 identifies the key challenges of applying AI in organisational management, namely difficulty of interpretation, lack of transparency, and employee resistance influenced by organisational culture. Section 4 proposes practical solution pathways, including interpretable AI, managerial communication, emotional support mechanisms, and a cultural shift towards participatory governance.
Understanding these dynamics is of utmost importance. AI is not merely a tool; it is also a participant in the human work system. If the integration strategy between humans and machines is not carefully formulated, the potential of AI may be undermined. Therefore, this research contributes to the literature on the interaction between AI and humans. It also provides actionable insights for managers who wish to deploy AI in an efficient, ethical, and people-oriented manner.
2. Typical scenarios of AI application in organisational management
Organisational management now features new characteristics such as data-driven approaches and automated decision-making. AI holds great potential in specific scenarios such as recruitment and performance evaluation [5].
2.1. AI in recruitment: from smart screening to virtual interviews
AI systems are now widely used in various recruitment processes, from résumé screening to interview management [6]. Firstly, Natural Language Processing (NLP) technology is used in the initial screening of résumés to extract matching information from candidates’ submissions. AI systems such as hireEZ can achieve precise matching between candidates and jobs at the semantic level. Moreover, in the pre-interview evaluation stage, platforms such as Pymetrics extract data through cognitive tests to evaluate candidates’ risk tolerance and emotional management abilities. The dimensions of AI evaluation are closely aligned with the specific requirements of positions, enabling more accurate recruitment results [7].
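To make the screening step concrete, the minimal sketch below ranks candidate résumés against a job description by cosine similarity. Commercial tools such as hireEZ rely on proprietary semantic models; here plain TF-IDF vectors stand in, and the texts are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job_description = "Data analyst with Python, SQL and dashboard experience"
resumes = [
    "Five years of Python and SQL reporting; built Tableau dashboards",
    "Front-desk receptionist experienced in scheduling and phone support",
]

# Vectorise the job advertisement and the résumés in one shared vocabulary.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([job_description] + resumes)

# Similarity of each résumé (rows 1..n) to the job description (row 0).
scores = cosine_similarity(matrix[0], matrix[1:]).flatten()
for resume, score in sorted(zip(resumes, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {resume[:50]}")
```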
HireVue employs multimodal AI technology during the interview process to analyse candidates' behaviour. The system not only interprets applicants' language but also captures subtle expressions and movements, such as the frequency of smiles and the duration of gaze pauses. The algorithm weights these multimodal signals to generate predictive interview scores. Unilever reported that after introducing HireVue, the average recruitment duration from job posting to final hire was shortened from four months to two weeks, indicating that AI management systems can significantly improve recruitment efficiency [8].
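HireVue's scoring model is proprietary, so the fragment below only illustrates the general idea of weighting normalised sub-scores from several modalities into a single interview score; the signal names, values, and weights are all hypothetical.

```python
# Hypothetical normalised sub-scores (0-1) extracted from each modality.
signals = {"verbal_content": 0.82, "smile_frequency": 0.64, "gaze_stability": 0.71}

# Illustrative weights; a real system would learn these from hiring outcomes.
weights = {"verbal_content": 0.60, "smile_frequency": 0.15, "gaze_stability": 0.25}

# A weighted sum of the multimodal signals yields the predicted score.
interview_score = sum(signals[k] * weights[k] for k in signals)
print(f"Predicted interview score: {interview_score:.2f}")
```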
2.2. Smart performance evaluation: efficiency enhancement
AI application in performance evaluation has transformed previous subjective, manager-dominated evaluation methods into objective and data-driven approaches. AI systems can process large amounts of employee data in real time, such as task completion rates, timeliness, and participation in collaboration tools. These data inputs are used to generate dashboards and performance scores, thereby influencing decisions regarding employee promotions, bonuses, or training opportunities.
AI systems are typically integrated with enterprise platforms such as Slack, Microsoft Teams, Asana or Jira to automatically track metrics including message frequency, response latency, task completion rate, schedule activities, and version control logs [4]. These multi-dimensional data are input into machine learning models, often using clustering or regression techniques, to generate performance indicators aligned with organisational benchmarks. Some platforms utilise NLP to assess tone and emotions in workplace communication, identifying potential signs of employee alienation or burnout [9]. These assessments are not conducted once but updated continuously or periodically (e.g. weekly or monthly), and visualised through dashboards to track trends over time at the individual and team levels.
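As a minimal sketch of the modelling step just described, the example below standardises a handful of assumed per-employee metrics and groups employees into behavioural profiles with k-means clustering; production platforms use far richer features and organisation-specific benchmarks.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Rows: employees. Columns (assumed): task completion rate,
# median response latency (minutes), messages per day, commits per week.
metrics = np.array([
    [0.95, 12, 35, 14],
    [0.80, 45,  8,  3],
    [0.90, 20, 28, 10],
    [0.60, 90,  5,  1],
])

# Standardise so no single metric dominates the distance measure.
scaled = StandardScaler().fit_transform(metrics)

# Cluster employees into two behavioural profiles (k chosen for the toy data).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scaled)
print(labels)  # e.g. [0 1 0 1]: two distinct engagement patterns
```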
2.3. Role of AI in predicting employees’ emotions and turnover risks
AI management systems can predict employees' emotions and voluntary turnover risk by analysing absence records and system usage data. Some systems offer interventions such as mental health tools or work plan adaptation according to employees' emotional status. AI can also track behavioural cues such as meeting load and the frequency of interactions with colleagues, while NLP can extract emotional content from text materials such as questionnaires and feedback forms [9]. The vectorised data is then fed into supervised learning models (e.g., random forest classifiers) trained on historical turnover records to estimate each employee's risk of burnout or voluntary departure.
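A minimal sketch of this pipeline, assuming a historical table of behavioural features with a binary "left within 12 months" label; the feature names and values are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Assumed features per employee: absences, weekly meeting hours,
# peer interactions per week, mean sentiment score of survey text.
X = np.array([
    [2, 10, 30,  0.6], [9, 25,  5, -0.4], [1, 12, 28,  0.5],
    [7, 30,  8, -0.2], [3, 14, 22,  0.3], [8, 28,  6, -0.5],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = voluntarily left within 12 months

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Output a per-employee turnover-risk probability, not just a hard label.
print(model.predict_proba(X_test)[:, 1])
```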
Such predictions have both diagnostic and guiding value. Advanced platforms such as Workday Peakon and Microsoft Viva Insights utilise these predictions to trigger personalised intervention paths. These may range from proactive reminders (e.g., suggesting a manager check in) to formal action plans (e.g., workload reallocation or career path re-planning). Recommendations are typically presented to HR staff or department managers through dashboards, often with confidence scores and interpretability layers explaining the key drivers of predictions [4]. Integrating AI into organisational management can reduce voluntary turnover rates by 20% to 30%. When AI is transparent and combined with human supervision, this effect is even more significant [10].
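How such risk scores trigger graded interventions can be expressed as simple threshold routing; the cut-offs and actions below are illustrative assumptions, not the actual logic of Workday Peakon or Viva Insights.

```python
def route_intervention(risk: float) -> str:
    """Map a predicted turnover-risk probability to an action tier."""
    if risk >= 0.75:
        return "formal action plan: workload reallocation, career-path review"
    if risk >= 0.50:
        return "prompt the manager to schedule a check-in this week"
    if risk >= 0.30:
        return "surface well-being resources on the employee dashboard"
    return "no action; continue monitoring"

for risk in (0.82, 0.55, 0.12):
    print(f"risk={risk:.2f} -> {route_intervention(risk)}")
```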
3. Issues and challenges of AI application in organisational management
The extensive application of AI in organisational management has also brought about issues and challenges. Recent studies have identified difficulties in AI interpretability, a lack of transparency, and employee resistance triggered by hierarchical cultures.
3.1. Employees' difficulty with AI interpretability
One pressing issue at present is that employees find it difficult to understand AI. The application of AI in organisational management often lacks sufficient explanation, resembling a "black box". Therefore, even systems developed with good intentions may be perceived by employees as biased, which in turn can discourage proactive behaviour. From a technical perspective, the issue originates in the complexity of advanced machine learning algorithms. Models such as deep neural networks and ensemble decision trees learn complex nonlinear patterns in high-dimensional data. These models can ingest hundreds of features, ranging from numerical performance indicators to qualitative inputs, and produce a single output through multiple layers of transformation. In such complex structures, it is often difficult to determine how any single input affects a specific prediction. If employees have little understanding of a model's internal workings, they will struggle to comprehend AI-generated evaluation results [11].
The issue of interpretability also raises concerns at the ethical and practical levels. Scholars have noted that algorithmic decisions lacking an interpretable mechanism are particularly detrimental in high-stakes contexts, including employee evaluation or promotion, where they erode employees' trust. Without such interpretability, AI systems can misinterpret employee data, further reducing employees' trust in the organisation [4]. Similarly, Binns et al. explained that when interpretability mechanisms are insufficient, employees cannot query AI results, leading to feelings of alienation from the organisation [12].
3.2. Lack of transparency
When the internal operation mechanism of AI tools is opaque, employees inevitably question the fairness of AI decisions. This not only undermines trust in these tools but also damages the reputation of the institutions using them.
The low transparency of AI tools is due to the proprietary nature of commercial platforms. Their underlying architecture, training data, and decision rules are usually inaccessible to employees and HR professionals. Even when organisations attempt to enhance transparency through techniques such as Shapley Additive Explanations (SHAP) or Local Interpretable Model-Agnostic Explanations (LIME), the output results are often abstract or technical. For instance, learning that “38% of your performance score is driven by the density of internal communication within the department” may be meaningless unless employees understand how this metric is defined, tracked, and compared.
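To show what raw SHAP output looks like, the sketch below fits a toy performance model and prints per-feature contributions for one employee using the shap library; the feature names are invented. The numeric attributions illustrate the point above: without further translation, they remain opaque to most employees.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Toy performance-score model over three assumed features.
feature_names = ["task_completion", "response_latency", "comm_density"]
X = np.random.default_rng(0).random((50, 3))
y = 0.5 * X[:, 0] - 0.3 * X[:, 1] + 0.2 * X[:, 2]

model = RandomForestRegressor(random_state=0).fit(X, y)

# TreeExplainer decomposes one employee's score into per-feature contributions.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X[:1])[0]
for name, value in zip(feature_names, contributions):
    print(f"{name}: {value:+.3f}")
```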
Furthermore, AI-generated evaluations often fail to cover the emotional aspects of human work. AI systems have difficulty quantifying performance in resolving interpersonal conflicts or team collaboration, but these elements are indispensable for achieving fair evaluation. If ignored, they weaken employees’ perception of the fairness of the AI evaluation. Colquitt et al. found that when employees do not understand the background of the AI evaluation or lack channels for expression, they are less likely to accept the results [13]. Supporting this, Binns et al. found that employees who did not receive adequate explanations were often reluctant to accept the assessment results [12]. Similarly, Shin argued that AI decision-making opacity was likely to reduce motivation [3]. Algorithmic inscrutability also undermines the legitimacy of digital systems by preventing effective communication between users and technology [14].
3.3. Employee resistance triggered by hierarchy culture
In organisational cultures where decision-making power is centralised, introducing AI for performance evaluation can make employees feel overly monitored. This sense of surveillance intensifies if staff are not involved in designing or rolling out the system. Burrell pointed out that in hierarchical organisational cultures, employees often accept algorithmic systems out of obedience, but this may lead to feelings of alienation from the organisation [11]. Such contradictions between obedience and unease suggest that hierarchical culture is not conducive to AI adoption.
Empirical research confirms that cultural background influences employees' internalisation of algorithmic authority. In a cross-national comparison of OECD countries and India, Agrawal et al. found that respondents from India, a country characterised by high-context communication and strict hierarchical norms, tended to attribute moral responsibility to humans and machines simultaneously and reported lower trust in AI decisions [15].
Hofstede's cultural dimensions research further supports these findings. In cultures with high uncertainty avoidance, employees demand clearer explanations of algorithmic results. Conversely, in environments with low power distance, AI may be seen as a tool that breaks down hierarchies and promotes fairness [3]. These findings indicate that resistance is closely linked to organisational and societal cultural structures. Effective integration of AI therefore requires culturally sensitive and inclusive design practices.
4. Solutions for AI application in organisational management
4.1. Technical solution: enhancing AI interpretability
Improving the interpretability of AI systems can reduce employees' resistance. Explainable AI (XAI) encompasses modelling techniques and interpretability tools that reveal the reasons behind a decision rather than merely presenting its result. From a technical perspective, interpretability can be achieved through the following three strategies:
1. Using inherently interpretable models, such as decision trees or linear models, for organisational management predictions, because their structures are open to direct human examination.
2. Feature attribution tools such as SHAP and LIME can assign importance scores to features, which help users understand the impact of changes in input variables (such as the task delay indicator) on the prediction results.
3. Counterfactual explanations, which answer the question "if this attribute had been different, would the outcome change?" This helps employees envision what actions could influence AI assessments (e.g. "if your communication frequency had been 10% higher, your retention risk would drop by X%"); a minimal sketch follows this list.
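A brute-force version of the third strategy can be sketched in a few lines: scan one feature upwards and report the smallest change that clears the model's risk flag. The model, features, and data are placeholders for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy retention-risk model over two assumed features:
# [communication_frequency, task_delay_rate], both scaled to 0-1.
X = np.array([[0.9, 0.1], [0.3, 0.6], [0.8, 0.2], [0.2, 0.7]])
y = np.array([0, 1, 0, 1])  # 1 = flagged as a retention risk
model = LogisticRegression().fit(X, y)

def counterfactual_comm(employee: np.ndarray, step: float = 0.01):
    """Smallest increase in communication frequency that clears the flag."""
    probe = employee.copy()
    while model.predict([probe])[0] == 1:
        if probe[0] >= 1.0:
            return None  # no counterfactual exists within the valid range
        probe[0] += step
    return probe[0] - employee[0]

delta = counterfactual_comm(np.array([0.3, 0.6]))
if delta is not None:
    print(f"Raising communication frequency by {delta:.2f} would clear the flag")
```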
Empirical findings show that when interpretability is improved, users report greater trust and willingness to accept AI‑driven decisions. Chaudhary et al. investigated the negative consequences of non-transparent AI and found that organisations which embed interpretability features in employee-facing analytics see higher levels of perceived fairness and lower resistance [16].
4.2. Communication solution: clear explanation, training, and role clarity
Beyond model-level transparency, organisational communication plays a pivotal role in shaping how employees perceive and interact with AI tools. Clear communication strategies—including transparent disclosure of AI’s role, function, and limitations—help reduce uncertainty and foster trust. When employees fully understand the working principle and reasons for the AI application, they are more likely to regard it as a beneficial resource. For example, when organisations emphasise training during the introduction and promotion of AI systems, employees show higher acceptance of AI management systems [17].
Organisations should enhance employee training to build a practical understanding of AI. Training should be repeated regularly so that employees can express their views and adjust their working methods accordingly. Research shows that effective communication improves employee performance in the context of AI management. Florea and Croitoru found that when managers clearly explain the purpose and decision-making logic of AI, employees display higher task commitment and acceptance [18]. Similarly, Behn et al. found that organisational training on AI management systems increases employees' trust in emotional AI analysis tools [19]. Communication and training must therefore be regarded as essential measures for successful AI integration.
4.3. Trust solution: enhancing AI's emotional support for employees
Employees' resistance to AI often stems from feelings of being monitored and from the lack of emotional sensitivity in AI systems. To alleviate this resistance, organisations can design AI systems that incorporate elements of emotional support. For example, platforms such as Humu (used by Google and Intel) consider employees' communication styles and interactions with leaders before providing performance feedback. Employees who prefer directness may receive straightforward messages such as, "You've increased your deadline efficiency this quarter by 8%. Keep it up.", while those who respond better to encouragement may receive motivating messages such as, "You really brought a burst of energy into our team this week." Similarly, platforms like Lattice and Workday (Peakon) allow managers to create custom feedback templates and automate feedback based on communication style or level of engagement.
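A rule-based version of this personalisation is straightforward to sketch; the style labels and templates below are invented for illustration and do not reflect the actual logic of Humu, Lattice, or Workday.

```python
# Hypothetical feedback templates keyed by preferred communication style.
TEMPLATES = {
    "direct": "You've increased your deadline efficiency this quarter by {pct}%. Keep it up.",
    "encouraging": "You really brought a burst of energy to the team: efficiency up {pct}%!",
}

def render_feedback(style: str, pct: float) -> str:
    """Fill the feedback template that matches the employee's style."""
    return TEMPLATES.get(style, TEMPLATES["direct"]).format(pct=pct)

print(render_feedback("encouraging", 8))
```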
Including human-in-the-loop evaluation alongside AI outputs can promote employees’ acceptance of AI recommendations. Human judgment can also strengthen fairness in AI evaluations. Watanabe et al. found that combining AI-generated feedback with human emotional support improves employees’ self-efficacy and motivation [20]. Leadership support can further help mitigate feelings of isolation and job exhaustion caused by long-term AI monitoring [21]. These findings show that embedding emotional cognition in AI-human interaction design helps foster employees’ well-being and long-term acceptance of AI systems in organisational management.
4.4. Cultural solution: cultivating a participatory, trusting organisational culture
Improvements in AI system technology or communication can help employees accept AI management. However, for sustainable, long-term application, organisations must also cultivate an inclusive and participatory organisational culture [22]. Participation involves including employees in the selection, training, and practical application of AI tools. It also promotes values such as psychological safety, openness to feedback, and shared learning (mistakes allowed, questions encouraged).
Some practical initiatives include:
1. Employee Involvement in Decision-Making
Establishing cross-functional committees that include employee representatives during AI system selection and rollout can increase legitimacy and perceived fairness [23].
2. Feedback Loops for Continuous Improvement
Encouraging frontline workers to contribute insights or suggest refinements to AI tools creates a sense of ownership and adaptive learning [24].
3. Embedding Psychological Safety into Organisational Norms
Formally integrating values that support open dialogue about AI errors—such as admitting when AI “gets it wrong”—reinforces a collective learning mindset [25].
4. Celebrating Early Success Stories Together
Publicly recognising instances where AI tools have helped individuals or teams—not just improved KPIs—helps shift the narrative from surveillance to support [26].
Ultimately, when employees feel their voices are heard and their concerns are addressed, they are more likely to engage constructively with AI systems. This reinforces a virtuous cycle where trust and participation amplify the positive impacts of technological innovation.
5. Conclusion
AI is reshaping organisational behaviour by increasing HRM efficiency, but it also introduces risks such as poor interpretability, low transparency, and hierarchy-driven resistance. To address these, organisations can enhance AI interpretability through XAI, strengthen communication and training, incorporate emotional support mechanisms, and foster participatory cultures. Future research could explore cross-cultural differences in AI acceptance. Policy should support transparent AI governance and algorithmic accountability, ensuring that AI serves organisations sustainably.
References
[1]. Dastin, J. (2018). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters, 10 October. Available at: https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G [Accessed 22 Sep. 2025].
[2]. Amulya, G.T., Fernandes, S., Prakash, A., & Ashalatha, D. (2025). An empirical study of AI in talent acquisition: Resume screening & matching, bias reduction, enhanced candidate experience. J. Innov. Employ. Res., 5(2). https://doi.org/10.52783/jier.v5i2.2492
[3]. Shin, D. (2021). The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI. Int. J. Hum.–Comput. Stud., 146, 102551. https://doi.org/10.1016/j.ijhcs.2020.102551
[4]. Raisch, S., & Krakowski, S. (2021). Artificial intelligence and management: The automation–augmentation paradox. Acad. Manage. Rev., 46(1), 192–210. https://doi.org/10.5465/amr.2018.0072
[5]. Al Samman, A.M. (2024). The use of AI in fostering and embracing organisational culture. IEEE Xplore.
[6]. Dadaboyev, S.M.U., Abdullayeva, J., Abbosova, N., et al. (2025). Role of artificial intelligence in employee recruitment: Systematic review and future research directions. Discov. Glob. Soc., 3, Article 99. https://doi.org/10.1007/s44282-025-00246-w
[7]. Chamorro-Premuzic, T., Wade, M., & Jordan, J. (2019). AI and the future of work: How artificial intelligence is transforming talent management. Harv. Bus. Rev. Available at: https://hbr.org/2019/07/ai-and-the-future-of-work [Accessed 23 Jun. 2025].
[8]. Upadhyay, A.K., & Khandelwal, K. (2018). Applying artificial intelligence: Implications for recruitment. Strateg. HR Rev., 17(5), 255–258. https://doi.org/10.1108/SHR-07-2018-0051
[9]. Sharma, P., Mittal, S., & Goyal, M. (2020). AI-enabled employee engagement: Sentiment analysis and organisational productivity. Int. J. Hum. Cap. Inf. Technol. Prof., 11(3), 1–15. https://doi.org/10.4018/IJHCITP.2020070101
[10]. Sumlin, C., De Oliveira, M.J.J., & Conde, R. (2024). Do the management process and organisational behavior modification enhance an ethical environment and organisational trust in the US and Brazil? Int. J. Organ. Anal., 33(5), 969–984.
[11]. Burrell, J. (2016). How the machine "thinks": Understanding opacity in machine learning algorithms. Big Data Soc., 3(1), 1–12. https://doi.org/10.1177/2053951715622512
[12]. Binns, R., Veale, M., Van Kleek, M., & Shadbolt, N. (2018). "It's reducing a human being to a percentage": Perceptions of justice in algorithmic decisions. CHI Conference on Human Factors in Computing Systems, 1–14.
[13]. Colquitt, J.A., Conlon, D.E., Wesson, M.J., Porter, C.O.L.H., & Ng, K.Y. (2001). Justice at the millennium: A meta-analytic review of 25 years of organisational justice research. J. Appl. Psychol., 86(3), 425–445.
[14]. Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media Soc., 20(3), 973–989. https://doi.org/10.1177/1461444816676645
[15]. Agrawal, V., Kandul, S., Kneer, M., & Christen, M. (2023). From OECD to India: Exploring cross-cultural differences in perceived trust, responsibility and reliance of AI and human experts. arXiv. Available at: https://arxiv.org/abs/2303.15411
[16]. Chaudhary, M., Gaur, L., Chakrabarti, A., Singh, G., Jones, P., & Kraus, S. (2025). An integrated model to evaluate the transparency in predicting employee churn using explainable artificial intelligence. J. Innov. Knowl., 10(3), 100700. https://doi.org/10.1016/j.jik.2025.100700
[17]. Daly, S.J., Wiewiora, A., & Hearn, G. (2025). Shifting attitudes and trust in AI: Influences on organisational AI adoption. Technol. Forecast. Soc. Change. https://www.sciencedirect.com/science/article/pii/S0040162525001398
[18]. Florea, N.V., & Croitoru, G. (2025). The impact of artificial intelligence on communication dynamics and performance in organizational leadership. Behaviors, 15(2), Article 33. https://doi.org/10.3390/behaviors15020033
[19]. Behn, O., Leyer, M., & Iren, D. (2024). Employees' acceptance of AI-based emotion analytics from speech on a group level in virtual meetings. Technol. Soc., 76, Article 102466. https://doi.org/10.1016/j.techsoc.2024.102466
[20]. Watanabe, Y., Takahashi, K., & Kono, T. (2025). AI feedback and workplace social support in enhancing employee development: The moderating role of emotional support from supervisors and colleagues. Sci. Rep., 15, Article 94985. https://doi.org/10.1038/s41598-025-94985-0
[21]. Meng, Q., Wang, S., Li, Y., & Zhao, J. (2025). Effects of employee artificial intelligence (AI) collaboration on emotional fatigue and counterproductive work behavior: The moderating role of leader emotional support. Behav. Sci., 15(5), Article 696. https://doi.org/10.3390/bs15050696
[22]. Rožman, M., Tominc, P., & Milfelner, B. (2023). Maximizing employee engagement through artificial intelligent organizational culture in the context of leadership and training of employees: Testing linear and non-linear relationships. Cogent Bus. Manag., 10(2), Article 2248732. https://doi.org/10.1080/23311975.2023.2248732
[23]. San Taslim, W. (2025). Employee involvement in AI-driven HR decision making: A systematic review. SA J. Hum. Resour. Manag., 23, 1–12. https://doi.org/10.4102/sajhrm.v23i0.2856
[24]. Vrontis, D., Arslan, M., & Pereira, V. (2023). Responsible artificial intelligence: A research agenda toward sustainable and ethical AI adoption in organizations. Technol. Forecast. Soc. Change, 187, 122273. https://doi.org/10.1016/j.techfore.2022.122273
[25]. Kim, B.J. (2025). The dark side of artificial intelligence adoption: Linking organizational AI adoption and employee depression via psychological safety. Humanit. Soc. Sci. Commun., 12(1), 5040. https://doi.org/10.1057/s41599-025-05040-2
[26]. Performio (2025). Unlocking AI productivity: Bridging human adoption gaps. Available at: https://www.performio.co/insight/unlocking-ai-productivity [Accessed 12 Sep. 2025].