1 Introduction
As artificial intelligence (AI) and operations management intersect, organisations increasingly use AI systems to make decisions, enhance operations and allocate resources. This rapid infusion of AI into operations management raises significant ethical issues of fairness, accountability, transparency and privacy. Stakeholders ranging from employees and customers to suppliers and contractors can be affected, directly or indirectly, by decisions made by AI-driven operations management systems. Fairness in AI means ensuring non-discriminatory treatment: AI systems can discriminate against particular genders or racial groups when the underlying historical data reflect bias, and algorithmic design choices can unintentionally produce similarly discriminatory outcomes. Accountability requires organisations to take responsibility for the decisions and actions of AI systems when these affect human lives. Transparency builds trust by helping stakeholders understand how AI models arrive at their decisions, but transparency efforts can clash with privacy requirements, which are often paramount in fields such as health and finance. We engage with these issues in detail, reviewing existing AI governance frameworks such as the OECD AI Principles and the EU Ethics Guidelines for Trustworthy AI, and we propose a hybrid governance framework for operations management that tailors existing frameworks to the specific needs of the industry [1]. We illustrate the proposed framework through a case study using financial data and provide an example of how to balance transparency and privacy in practice. The paper contributes to the field of AI ethics by drawing on recent work in this space and offering recommendations for organisations seeking to develop AI responsibly and ethically in operational contexts.
2 Literature Review
2.1 AI Ethics in Operations Management
With the increasing integration of AI into operations management, fairness, accountability and transparency are becoming increasingly important as drivers of trust in AI. Fairness as it relates to AI can be defined as the non-discriminatory treatment of stakeholders across all data-processing and decision-making steps. However, historical bias in data, inadvertent programming bias and the opaque nature of algorithms can cause AI systems to produce unfair outcomes, which in turn undermine stakeholder trust. Accountability implies that organisations be held responsible for AI-influenced decisions, especially where these decisions affect employees, customers or suppliers. This section explores concrete strategies for ensuring accountability, such as establishing ethics oversight boards, conducting algorithmic audits and practising responsibility by design throughout the entire AI lifecycle [2]. Transparency is another key driver of stakeholder trust in AI, as it enables stakeholders to understand AI decision-making processes; this can lead both to more widely accepted outcomes and to AI systems that are more ethically aligned with their organisational contexts. This section analyses these principles and their implications for AI governance in operations.
2.2 Data Bias in AI Systems
Bias in data can stem from historical prejudice embedded in training datasets, sampling biases in data collection, or biases introduced during preprocessing. In operations management it can have serious consequences, such as skewed resource allocation, unfair workforce management or distorted supplier relationships. This section explores the most common types of bias in AI models, such as representation bias, automation bias and bias amplification, and their compounding effect on decision-making. It also focuses on techniques for mitigating these biases: bias detection algorithms, diversified data sampling and fairness constraints applied during model training. Case studies illustrate how organisations have successfully leveraged these mitigation techniques to promote fairness and greater equity in decision-making [3].
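To make one of these bias-detection techniques concrete, the following minimal sketch (our illustration, not drawn from the cited studies) computes the demographic parity difference, the gap in favourable-outcome rates between groups, for a set of hypothetical approval decisions; all data and names are invented.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-outcome rates between groups.

    A value near 0 suggests the model grants favourable outcomes
    (e.g. approvals) at similar rates across groups; large values
    flag a potential disparate-impact problem worth auditing.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

# Hypothetical predictions (1 = approve) and a binary protected attribute.
y_pred = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
group  = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]

gap = demographic_parity_difference(y_pred, group)
print(f"Demographic parity difference: {gap:.2f}")  # 0.20 here
```

A gap well above zero would prompt a closer audit of the training data and model; fairness constraints applied during training aim to shrink exactly this kind of disparity.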
2.3 Privacy Risks in AI Operations
AI systems commonly rely on large datasets that can include sensitive personal information such as employee and customer records. Privacy risks arise when organisations do not adequately protect this information, exposing it to unauthorised access, misuse or breach. These risks are sharpened by regulatory requirements such as the GDPR and CCPA, which mandate responsible collection, storage and sharing of data. This section outlines the major privacy risks associated with AI in operational applications such as customer relationship management, employee tracking and supply chain visibility, and describes privacy-by-design principles as one way to mitigate these risks proactively: data minimisation, anonymisation and secure storage should be baked into AI design from the start [4]. The section also outlines privacy-preserving methods, such as differential privacy, secure multi-party computation and homomorphic encryption, that allow organisations to leverage AI for operational purposes without compromising sensitive data.
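As a concrete illustration of privacy by design, here is a minimal, entirely hypothetical sketch of pseudonymisation and masking applied to a customer record before it reaches an AI pipeline. Note that salted hashing of low-entropy identifiers is only a baseline; a keyed HMAC with proper key management would be stronger in practice.

```python
import hashlib

def pseudonymize(identifier: str, secret_salt: str) -> str:
    """Replace a direct identifier with a stable, salted hash.

    The same input always maps to the same pseudonym, so records can
    still be linked for analysis, but the original value cannot be
    read back without the secret salt.
    """
    return hashlib.sha256((secret_salt + identifier).encode()).hexdigest()[:16]

def mask_email(email: str) -> str:
    """Keep only enough of an e-mail address to confirm it with the customer."""
    local, _, domain = email.partition("@")
    return local[0] + "***@" + domain

record = {"customer_id": "C-10293", "email": "jane.doe@example.com", "balance": 1520.0}
safe = {
    "customer_id": pseudonymize(record["customer_id"], secret_salt="keep-me-secret"),
    "email": mask_email(record["email"]),
    "balance": record["balance"],  # non-identifying field retained (data minimisation)
}
print(safe)
```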
3 Governance Mechanisms for Reducing Data Bias and Privacy Risks
3.1 Governance Frameworks for AI in Operations
Governance frameworks are essential for implementing AI ethically and mitigating risk in operations management. Figure 1 illustrates a comprehensive AI governance process: it begins with a well-defined use case, followed by identifying and understanding the data, documenting the models and results, and verifying and monitoring the AI applications. This feedback loop keeps use cases well defined and documented and provides continuous oversight. Several international AI frameworks align with this model of ethical best practice across the lifecycle from data collection to model deployment, notably the OECD AI Principles and the EU's Ethics Guidelines for Trustworthy AI [5]. These frameworks promote responsible and transparent use of AI and offer many strengths, including enhanced transparency and risk management; their main limitation, however, is a lack of adaptability to individual industry needs. This section therefore proposes a hybrid governance framework for the unique needs of operations management, demonstrating how multiple governance strategies can be adapted to specific operational contexts to make AI use more effective and compliant [6].
Figure 1. AI Governance Framework (Source: Collibra)
3.2 Accountability and Transparency in AI Governance
Accountability and transparency are fundamental to ethical AI governance. This section describes concrete steps to embed these principles at each stage of operationalising AI, such as creating ethics review boards, conducting regular audits and involving stakeholders in every step of the process. Transparency is further enhanced by explainable AI (XAI) technologies, which let users understand how AI reaches its decisions and thereby build greater trust in it. Case studies from real-world organisations show how and where these principles can be embedded, along with lessons learned in building accountability and transparency into AI. For example, some companies have created tools for algorithmic explainability, while others have developed data documentation standards to track the entire data lifecycle and report to the public [7].
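One lightweight way to operationalise accountability, sketched below purely as an illustration (the file name, fields and model version are invented), is an append-only decision log recording which model version produced each AI-influenced decision and which human is accountable for confirming or overriding it:

```python
import json, hashlib, datetime
from dataclasses import dataclass, asdict

@dataclass
class DecisionRecord:
    """One auditable entry per AI-influenced decision."""
    timestamp: str
    model_version: str
    input_hash: str      # a hash, not raw inputs, keeps the log itself privacy-safe
    decision: str
    human_reviewer: str  # the person accountable for this decision

def log_decision(path, model_version, inputs, decision, reviewer):
    record = DecisionRecord(
        timestamp=datetime.datetime.now(datetime.timezone.utc).isoformat(),
        model_version=model_version,
        input_hash=hashlib.sha256(json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        decision=decision,
        human_reviewer=reviewer,
    )
    # Append as one JSON line so the log is easy to audit and hard to rewrite.
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision("decisions.jsonl", "credit-scorer-v1.3",
             {"income": 52000, "tenure_months": 18}, "approve", "ops.lead@example.com")
```

Such a log gives auditors and ethics boards a concrete artefact to review without storing raw personal data in the log itself.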
3.3 Case Studies: Ethical AI Governance in the Financial Sector
As an experimental setting, we designed a simulated environment that replicates the work practices of a financial institution's customer service department, in order to study the fundamental ethical challenge of privacy in AI-driven customer service. The principal privacy risks in this setting are unauthorised access to sensitive customer data, data breaches and misuse of personal information. The experimental process is shown in Table 1.
Table 1. Experimental Process
Step | Description
Data Collection and Anonymization | Synthetic customer data was generated and anonymized using GDPR-compliant pseudonymization and data masking techniques to protect identifiable information.
AI Model Development | A customer service chatbot was developed and trained to answer common inquiries, with data minimization principles ensuring access to essential data only.
Privacy Compliance Mechanisms | Privacy governance mechanisms were implemented, including automated data logging, real-time monitoring, and role-based access controls (sketched below), alongside periodic privacy audits.
Customer Satisfaction and Privacy Monitoring | A feedback system was used to gather customer satisfaction on privacy, with real-time tracking of privacy incidents like unauthorized access attempts and processing errors.
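The role-based access controls mentioned in Table 1 can be illustrated with a short, hypothetical sketch (roles, fields and records are invented; a production system would use a proper authorisation service), in which every refused access attempt is logged for the privacy audit:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("privacy-audit")

# Hypothetical role-to-field permissions for the chatbot backend.
PERMISSIONS = {
    "chatbot":         {"account_status", "last_ticket"},
    "support_agent":   {"account_status", "last_ticket", "email_masked"},
    "privacy_officer": {"account_status", "last_ticket", "email_masked", "audit_trail"},
}

def read_field(role: str, field: str, record: dict):
    """Return a field only if the role is allowed; log refusals for the audit."""
    if field in PERMISSIONS.get(role, set()):
        return record.get(field)
    log.warning("DENIED: role=%s attempted field=%s", role, field)
    return None

record = {"account_status": "active", "email_masked": "j***@example.com"}
print(read_field("chatbot", "account_status", record))  # allowed
read_field("chatbot", "email_masked", record)           # denied and logged
```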
As shown in Figure 2, the experiment produced substantial improvements in both customer satisfaction and privacy compliance. GDPR-compliant anonymisation and privacy-preserving methods increased customer satisfaction by more than 20 per cent, as users felt their personal information was being handled more carefully. Privacy compliance mechanisms such as role-based access controls and data-usage monitoring produced a measurable reduction in privacy violations: over the experimental period, unauthorised data-access attempts fell by around 30 per cent. These findings reinforce the necessity of strong privacy governance in AI-assisted customer service for banks. Institutionalising privacy safeguards such as data anonymisation, access controls and real-time monitoring improves customer confidence in online service relationships and enhances regulatory compliance [8]. This revelatory case study shows the power of governance solutions grounded in sound data privacy and provides an example that other financial organisations can apply when addressing the ethical issues of AI applications.
Figure 2. Experimental Results in Ethical AI Governance for Customer Service
4 Transparency in AI Operations: Enhancing Ethical Practices
4.1 The Role of Transparency in AI
Transparency can help build trust in and acceptance of AI-supported operations: the greater the transparency, the better the chance an organisation has of avoiding the misunderstandings about AI that could prompt resistance. There are several practical ways to make AI more transparent. The first is for organisations to disclose how key algorithms can fail, putting stakeholders on guard against situations where AI may not perform well or may produce biased results. For instance, an organisation could publish a document that explains the sources of each data point, the assumptions that underpin an AI model, and any well-known weaknesses. A second approach is to set up educational programmes that inform organisational stakeholders about AI's capabilities and limitations. Such an initiative could involve training sessions, workshops or digital resources that explain to employees, managers and even customers how AI works and how it is used within the organisation; educating stakeholders in this way helps them interpret the AI's outputs and provides a sense of control and familiarity. It is also important to keep communication channels open during the deployment phase of an AI model [9]. Regular updates detailing the progress of development, testing and application, together with opportunities to provide feedback, help ensure that stakeholders' concerns are addressed and that the AI is implemented in a way that fits organisational values and stakeholder expectations, thereby reducing resistance.
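The disclosure document described above can also be made machine-readable as a simple "model card". The sketch below is purely illustrative; every field value is invented:

```python
# A minimal machine-readable disclosure of the kind described above;
# all values are hypothetical examples, not a real model's card.
model_card = {
    "model": "demand-forecaster-v2",
    "intended_use": "Weekly inventory planning for retail stores",
    "data_sources": ["2019-2023 point-of-sale records", "public holiday calendar"],
    "assumptions": ["Demand patterns are stationary within a quarter"],
    "known_failure_modes": [
        "Underestimates demand during unprecedented promotions",
        "Trained mostly on urban stores; rural accuracy is lower",
    ],
    "last_reviewed": "2024-06-01",
}

# Publishing the card to stakeholders can be as simple as rendering it:
for key, value in model_card.items():
    print(f"{key}: {value}")
```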
4.2 Transparent Data Practices and Bias Reduction
Transparent data practices help ensure that AI systems are accurate and free from bias. This section outlines strategies for increasing transparency along the data lifecycle, from data gathering through to model training. Documenting the sources of data, how it has been processed and transformed, and the rationale behind design decisions can help teams minimise implicit bias and improve the accountability of AI systems. Data lineage tracking, automated auditing tools and bias detection algorithms are outlined as useful technologies for transparent and ethical data practice [10]. The value of cross-functional and culturally diverse teams in data processing is highlighted as a strategy for reducing implicit bias, and organisations such as Facebook and the Responsible Data Forum's 'Data Miners Diversity' group are featured for their progress towards diverse and equitable data processing.
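Data lineage tracking can be as simple as recording every transformation applied to a dataset alongside its effect. The following sketch is our own illustration, with invented data and step names:

```python
from dataclasses import dataclass, field

@dataclass
class Dataset:
    """Wraps rows together with a lineage log of every transformation."""
    rows: list
    lineage: list = field(default_factory=list)

    def transform(self, description: str, fn):
        # Apply the transformation and record what happened, for later audit.
        self.rows = fn(self.rows)
        self.lineage.append({"step": description, "rows_after": len(self.rows)})
        return self

raw = Dataset(rows=[{"age": 34, "region": "north"}, {"age": None, "region": "south"}])
(raw.transform("drop rows with missing age",
               lambda rs: [r for r in rs if r["age"] is not None])
    .transform("bucket age into decades",
               lambda rs: [{**r, "age": r["age"] // 10 * 10} for r in rs]))

for step in raw.lineage:   # the log itself is the transparency artefact
    print(step)
```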
4.3 Balancing Transparency and Privacy
Balancing these two principles is a major challenge because, in most cases, greater transparency in data processing and AI decision-making can breach privacy. This is particularly true for AI in healthcare or finance, where privacy is of utmost concern. When people call for more transparency in how AI works, they may mean different things. In many domains it is enough to ensure that stakeholders understand how AI models arrive at their conclusions; sometimes this requires providing information about the sources of data, the processing of data, or even the logic of a particular decision. But too much detail about data handling or model parameters could reveal private or confidential information. A number of technical approaches have been developed that promote transparency, by revealing the insights provided by datasets or the responses to queries, while protecting privacy by keeping individual information hidden. For example, differential privacy adds noise (or uncertainty) to a dataset or to the results of a query so that the underlying patterns can be analysed without revealing sensitive information about any individual. In a different approach, data anonymisation removes or encrypts the personally identifiable information in a dataset, and can incorporate techniques such as pseudonymisation or masking to prevent re-identification [11]. For circumstances where the data cannot be shared at all, such as in healthcare, where patient data may be subject to confidentiality regulation, or in finance, where regulatory constraints may restrict sharing, federated learning offers an innovative solution: the AI model is trained across many decentralised devices or servers, and only the model updates, not the raw data, are shared, enabling collaborative training across many sources while respecting privacy.
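To show how differential privacy trades accuracy for privacy in practice, here is a minimal sketch of the Laplace mechanism applied to a counting query; the salary figures and threshold are invented, and a real deployment would also track a privacy budget across queries:

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_count(values, predicate, epsilon: float) -> float:
    """Release a count with Laplace noise calibrated to sensitivity 1.

    A counting query changes by at most 1 when one person's record is
    added or removed, so noise drawn from Laplace(1/epsilon) gives
    epsilon-differential privacy for the released count.
    """
    true_count = sum(1 for v in values if predicate(v))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical salaries; query: how many exceed 50,000?
salaries = [38_000, 52_000, 61_000, 47_000, 55_000, 72_000]
print(dp_count(salaries, lambda s: s > 50_000, epsilon=0.5))  # noisier, stronger privacy
print(dp_count(salaries, lambda s: s > 50_000, epsilon=5.0))  # less noise, weaker privacy
```

A federated learning sketch would look different, exchanging model weight updates rather than noisy query answers, but it follows the same principle of never releasing raw records.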
5 Conclusion
As AI increasingly supports decision-making in operations management, it is vital to design it ethically from the outset in order to build trust, maintain regulatory compliance and maximise the benefits of AI-based decision-making. This paper shows that fairness, accountability, transparency and privacy are foundational principles for ethical AI governance. We evaluate existing international governance frameworks and propose a hybrid model for the unique context of operations management that identifies adaptable strategies for firms. In the financial sector, our experimental results show that privacy-preserving methods such as GDPR-compliant anonymisation, combined with compliance mechanisms such as role-based access controls, improve customer satisfaction and reduce unauthorised data access. These findings highlight the importance of managing the trade-offs between privacy and transparency in sensitive industries, and they contribute to the AI ethics discourse by providing a framework and recommendations for firms that want to use AI responsibly. Ethical AI governance is critical for protecting stakeholders, enhancing operational resilience and building the stakeholder trust that lays the foundation for sustainable AI adoption in operations management.
References
[1]. Venkatesh, V., Raman, R., & Cruz-Jesus, F. (2024). AI and emerging technology adoption: A research agenda for operations management. International Journal of Production Research, 62(15), 5367-5377.
[2]. Heyder, T., Passlack, N., & Posegga, O. (2023). Ethical management of human-AI interaction: Theory development review. The Journal of Strategic Information Systems, 32(3), 101772.
[3]. Attard-Frost, B., De los Ríos, A., & Walters, D. R. (2023). The ethics of AI business practices: A review of 47 AI ethics guidelines. AI and Ethics, 3(2), 389-406.
[4]. Camilleri, M. A. (2024). Artificial intelligence governance: Ethical considerations and implications for social responsibility. Expert Systems, 41(7), e13406.
[5]. Cebulla, A., Szpak, Z., & Knight, G. (2023). Preparing to work with artificial intelligence: Assessing WHS when using AI in the workplace. International Journal of Workplace Health Management, 16(4), 294-312.
[6]. Saeidnia, H. R. (2023). Ethical artificial intelligence (AI): Confronting bias and discrimination in the library and information industry. Library Hi Tech News.
[7]. Giovanola, B., & Tiribelli, S. (2023). Beyond bias and discrimination: Redefining the AI ethics principle of fairness in healthcare machine-learning algorithms. AI & Society, 38(2), 549-563.
[8]. Sham, A. H., et al. (2023). Ethical AI in facial expression analysis: Racial bias. Signal, Image and Video Processing, 17(2), 399-406.
[9]. Akinrinola, O., et al. (2024). Navigating and reviewing ethical dilemmas in AI development: Strategies for transparency, fairness, and accountability. GSC Advanced Research and Reviews, 18(3), 050-058.
[10]. Albahri, A. S., et al. (2023). A systematic review of trustworthy and explainable artificial intelligence in healthcare: Assessment of quality, bias risk, and data fusion. Information Fusion, 96, 156-191.
[11]. Elendu, C., et al. (2023). Ethical implications of AI and robotics in healthcare: A review. Medicine, 102(50), e36671.
Data availability
The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.