Ethical AI training is among the top decision-making priorities for C-level leaders, according to Deloitte’s latest study on AI governance, leadership, and workforce management. In its survey of 100 C-level executives, the study found that decision-making priorities revolve around AI training, risks, and governance structures. The study also analyzed leaders’ confidence in their organizations’ ethics frameworks and the impact of applying systematic governance to AI adoption and use.
Here are the key takeaways from the Deloitte report on ethical AI training.
Strong ethical AI training and governance help organizations step up technology innovation
89% of C-level leaders said ethical governance supports an organization’s ability to innovate with technology. The majority of leaders agreed their workforce can make ethical decisions about AI independently, though leaders retain the final say in almost one-fourth of cases. At larger companies, those with annual revenues of $1 billion and above, 53% of leaders feel professionals can decide on AI independently. Stronger ethical AI training and technology innovation can thus be linked to a company’s size, its revenue-generating capacity, and a governance structure that empowers individuals.
Beena Ammanath, executive director of the Global Deloitte AI Institute and Trustworthy AI leader at Deloitte LLP, said: “As leaders look to strike a balance between innovation and regulation, ethically designed governance structures are important to hold both leaders and employees accountable in the responsible use of this technology. Recruiting and upskilling to build a prepared talent pool, providing employee trainings and establishing structures of leadership are some of the tactics that have emerged to drive AI innovation with an ethical focus.”
Workforce and Board of Directors trained on ethical AI best practices
Training leads the way among governance structures related to ethical AI: over three-quarters (76%) of respondents indicated their organization conducts ethical AI training for the workforce, and 63% said they conduct ethical AI training for their organization’s board of directors. Training was followed by ethical AI review committees (46%) and ethical AI risk management frameworks (44%).
Balancing innovation with regulation (62%) emerged as the top priority among respondents regarding ethical issues in AI development and use, followed by ensuring transparency in data collection and use (59%) and addressing user and data privacy concerns (56%).
Internal AI upskilling versus lateral hiring from the talent markets
Over half of respondents indicated their organization has hired or plans to hire an AI researcher (59%) or policy analyst (53%) for roles related to ethical decision-making about AI. Notably, they are sourcing skilled employees through internal training (63%) rather than through experienced hires or academic pipelines.
Some industries are more AI-driven than others
According to Deloitte, AI is expected to have its biggest positive impact on factors such as supply chain responsibility and employee retention. Survey respondents cite supply chain responsibility (77%), brand reputation (75%), and revenue growth (73%) as the top three operational areas AI could positively impact in their organizations. As for their workforce, respondents predict AI will have a positive impact on employee retention (82%), followed by worker well-being (77%) and access to professional education (77%).
Kwasi Mitchell, Deloitte US chief purpose & DEI officer, said: “By adopting procedures designed to promote responsibility and safeguard trust, leaders can establish a culture of integrity and innovation that enables them to effectively harness the power of AI, while also advancing equity and driving impact.”
The survey is a follow-up to Deloitte’s annual “State of Ethics and Trust in Technology” report, which assessed whether and how ethical standards are being applied to emerging technology.