AI risks, compliance & leadership: why younger mentors are key
By tapping into the expertise of younger employees, companies can build a more AI-literate leadership team and future-proof the business.
AI · Education and Training · Data Protection · Leadership · Governance
Tim Clements
3/10/2025 · 3 min read


For too long, AI governance has been treated as a legal and technical issue when, in reality, it demands strategic leadership. The EU AI Act has made AI literacy a business imperative, and companies that fail to prioritise this will struggle to navigate the evolving regulatory landscape.
The solution? Reverse mentorship.
Reverse mentorship is not new; it has been around since the 1990s. I believe it's worth revisiting given the complexities of data protection, AI governance, and compliance with emerging laws and regulations. With the EU AI Act in particular, one challenge stands out: AI literacy at the leadership level. If your company is a provider or deployer of AI systems, you need to be aware of your obligations under Article 4 of the EU AI Act.
This requirement extends to decision-makers who must navigate compliance, risk assessment, and ethical considerations surrounding AI-driven solutions. Yet, many senior leaders lack the digital fluency necessary to make informed decisions in these areas.
Traditional mentorship models, where senior executives guide younger employees, are no longer fit for purpose in a world where digital fluency and emerging technologies are second nature to junior employees. Instead, companies can embrace reverse mentorship: a leadership tool where junior employees educate and guide senior executives on AI trends, data-related risks, and responsible governance.
Building a structured reverse mentorship programme
For meaningful impact, reverse mentorship must be structured, goal-oriented, and aligned with business priorities and context. Here’s how companies can implement it effectively:
1. Identify AI-savvy mentors from within
Junior employees, particularly those in data science, cybersecurity, or digital strategy roles, often have a better grasp of AI technologies than senior executives. These employees should be strategically selected to mentor senior leaders based on their expertise in AI ethics, data protection laws, and digital transformation.
2. Match mentors and mentees based on AI literacy needs
Pairing should be based on knowledge gaps. For example, an AI governance specialist could mentor a CISO on bias mitigation in machine learning models, while a privacy analyst could educate a CMO on AI-driven marketing risks and GDPR compliance.
3. Set objectives aligned with the EU AI Act
Mentorship goals should be explicitly tied to the AI literacy requirements outlined in the EU AI Act and could include objectives aligned with:
Safeguarding rights and well-being – understanding AI’s impact on fundamental rights and safety. Learning ethical AI principles and real-world risks like bias and misinformation.
Democratic oversight and accountability – gaining relevant knowledge of the EU AI Act, compliance requirements, governance structures, and best practices for aligning AI with democratic values.
Empowering informed decision-making – understanding AI capabilities, benefits, and risks; knowing where to embed AI considerations into business strategies and how to spread relevant AI literacy across teams.
Ensuring technical and ethical integrity – understanding key AI components, ethical considerations, and the importance of human oversight to prevent failures and biases.
Transparency and explainability – learning how AI makes decisions, how to communicate AI-driven outcomes clearly, and ensuring compliance with transparency laws.
Building trustworthy AI – understanding AI’s impact on jobs, building stakeholder trust, ensuring compliance, and using AI for responsible innovation.
4. Encourage open, two-way learning
While junior employees serve as mentors, learning must be reciprocal. Senior leaders should share strategic insights, helping younger employees understand the broader business context. This helps create collaborative decision-making and aligns AI governance with company goals.
5. Use real-world AI case studies
Practical examples enhance engagement. Reviewing case studies about AI incidents, such as biased hiring algorithms or data breaches due to weak governance, will help senior leaders visualise potential risks and apply learnings in their business context.
6. Measure impact and adjust accordingly
Tracking progress is essential. Companies should assess:
AI literacy improvements among senior leaders through pre- and post-programme assessments.
Implementation of AI governance frameworks post-mentorship.
Leadership engagement in AI-related decision-making and compliance strategies.
Reverse mentorship is not just about upskilling executives; it’s about creating a culture where responsible AI and data protection are embedded in leadership thinking. It's also about setting the right 'tone from the top', which has a huge impact on the rest of the company.
Last thought: avoid generic solutions that only tick a box
Like so many aspects of TechReg, compliance is a team effort involving multiple competences: interpreting legal requirements, analysing their impact in the context of the business, identifying appropriate solutions, designing, building, testing and deploying those solutions, and the project management needed to estimate and plan the work.
AI literacy is no exception. For your solution to be effective, your strategy needs to be aligned to your business context, not generic or copy/paste. And remember, executives are just one target group of stakeholders where reverse mentorship may help; your AI literacy strategy also needs to take into account the unique needs of other groups.
It's a team effort, and Purpose and Means typically supports legal teams with change management work. Get in touch if you need help scoping and developing your AI literacy programme.
Purpose and Means
Helping compliance leaders turn digital complexity into clear, actionable strategies
Based in Copenhagen, operating globally
tc@purposeandmeans.io
+45 6113 6106