Starting in February 2025, the European Union's Artificial Intelligence Act (EU AI Act) mandates an "AI literacy" requirement. This means that any provider or deployer of an AI system must ensure that anyone using the system is informed about AI basics and how to use it responsibly.
Understanding your role and responsibilities
First of all, this mandate does not just apply to companies that are headquartered in the EU. You’re also affected if:
- You’re a U.S.-based company that places an AI system on the EU market – aka you offer AI products or services to customers within the EU.
- You’re a U.S. company that employs individuals within the EU and deploys AI systems that affect these employees.
“AI system” is a broad term. Essentially, it’s any software or machine-based tool that processes data to make decisions, predictions, or recommendations, often with some level of automation or adaptability – think ChatGPT, Amazon Alexa, Tesla Autopilot, or Google Translate.
And depending on how you’re involved in an AI deployment, you’re expected to meet this mandate differently. Here’s a basic breakdown of provider vs. deployer vs. user under the EU AI Act:
Providers: Organizations developing an AI system or general-purpose AI model (or having a third party develop one for them) that they market under their own brand.
Providers are responsible for ensuring that employees involved in AI development, testing, and compliance are AI literate, and that deployers of their AI system receive proper instructions and transparency on its use.
Deployers: Organizations that integrate or use AI systems in their operations but did not develop them – this is likely you.
Deployers are responsible for training anyone who interacts with, or makes decisions based on, the AI system on its limitations and risks, and for monitoring AI performance and threats.
Users: People who interact with AI systems as a part of their job function.
Users are entitled to transparency on how AI is used in processes that impact them and to contest AI decisions that negatively affect them.
What is “AI literacy”?
The EU AI Act defines "AI literacy" only in broad terms – to meet the requirement, AI providers and deployers must ensure that users of an AI system have enough AI literacy to:
- Make informed decisions about the use and oversight of AI systems based on their role and expertise.
- Recognize the opportunities and risks associated with AI, including its impact on individuals and society.
- Understand and mitigate potential harms that AI technologies might cause, ensuring responsible deployment.
The AI Act also doesn’t set a universal standard or minimum passing level of AI literacy. Instead, it takes a context-specific approach: employers are expected to determine an acceptable level of AI literacy for their employees based on the following criteria:
- Role and responsibilities: The specific duties of employees in relation to AI systems.
- Complexity and risk-level of AI systems: More complex or high-risk AI applications require a deeper understanding.
- Organizational context: The size, resources, and industry-specific requirements of the organization.
How to stay compliant
The first thing to understand is whether the EU considers your AI system “high-risk,” because that will determine what you need to do to remain compliant (and help you avoid fines and other consequences).
An AI solution is high-risk if it:
- Directly impacts people's rights or safety (e.g., AI making hiring decisions, grading student exams, or approving medical treatments).
- Acts as a "safety component" in regulated products (e.g., an AI-powered medical diagnosis tool integrated into hospital equipment).
- Is used in a decision-making process where errors could cause serious consequences (e.g., AI in border control approving or denying visas).
If your solution is high-risk, expect mandatory compliance checks, mandated risk monitoring, and proactive incident reporting – and you may even need pre-market approval before deploying your AI system. Essentially, you will face far more scrutiny on the AI literacy of your team.
If your solution is not considered high-risk, compliance with this mandate is much less strict. You won’t be subjected to routine audits or investigations unless you have an incident with your AI system or receive a complaint; you don’t need approval to launch your AI system; and spot checks on your compliance are possible but unlikely.
That being said, keeping basic records of your AI literacy efforts is good practice to avoid any issues in the long run.
The EU AI Act does not prescribe specific steps for building AI literacy, but it does provide case studies of how compliant companies have approached it. Based on those references, here’s a plan we recommend.
1. Benchmark the current AI proficiency of your team
Based on the EU’s guidelines, your team needs a combination of AI knowledge, skill, and understanding in order to be AI literate. To assess their current strengths and weaknesses in these areas, you need to run a diagnostic or survey.
This diagnostic should test employees on their basic understanding of what AI is, how it works, and how to use it. In addition to basic multiple-choice questions, give them prompting tests to see whether they know how to safely use AI in common use cases.
Pro tip: You don’t need to build this from scratch. We’ve already built an AI Proficiency Diagnostic that we help companies run internally, and use the data to help them set AI policies and strategy. Reach out to learn more.
2. Roll out AI literacy training
Your initial AI training should cover four areas:
- Foundational AI knowledge and skill – all employees should understand how AI works, how to prompt it at a basic level, and what to use it for.
- Responsible AI & governance – this goes deeper on how to handle biases, hallucinations, and data use, as well as how to remain accountable and compliant when using AI in decision making.
- Function-specific use cases – each business unit, especially those that are more language-intensive like marketing and sales, should get specialized training on how AI can impact their work.
- Role-level AI skills – to align with the EU’s context-based approach, executives, managers, engineers, compliance teams, and general employees need training that addresses their different levels of responsibility.
If you’re in a highly regulated industry, or your AI system is considered high risk, you should also provide industry-specific training that covers the risks and considerations that are unique to your industry.
Many of the companies in the EU’s compilation of best practices use certification as a way to ensure compliance and track progress. So structure your training programs around AI literacy certifications, role-specific competency benchmarks, and measurable KPIs (e.g., percentage of employees certified, AI skill improvement rates, and risk-awareness evaluations).
These companies also tend to favor hands-on learning, including simulated use cases and ethical AI risk testing, to make sure employees can critically evaluate AI outputs.
And because AI’s capabilities are changing so rapidly, employees will need access to continuous learning in order to remain compliant with this mandate.
Pro tip: You don’t need to design all this training in-house. Section’s AI Academy is an annual membership to a suite of constantly growing certified AI courses that follow practical, hands-on frameworks – from core skills, to function-specific workshops, to leadership and strategy. We can even customize the content to your AI policies and tools.
3. Conduct and document regular audits
Rerun the diagnostic from step 1 regularly to monitor the progress of your team’s AI literacy, identify gaps in knowledge and skill, and iterate on your policies.
We recommend running it annually – so you always have a recent report of AI literacy handy, and so you can act on areas of opportunity in your AI strategy.
Start now
Even if the chances of an audit are pretty low for your company, don’t wait for a random compliance check to get your house in order. Start benchmarking the AI literacy of your team and get training underway – because beyond staying compliant, you need to stay ahead of the curve. This mandate is an opportunity to take better advantage of a technology that’s revolutionizing how we work.
And if this all seems a little daunting, don’t shoulder it alone. Section is an AI transformation partner that works with businesses of all sizes to audit, train, and transform their workforce with AI. Just get in touch.
Learn more about how Section works with companies and book a meeting with a member of our team.