News · 14 min read

EU AI Act 2026: A Practical Guide for European Companies

On August 2, 2026 the EU AI Act becomes fully enforceable, with fines of up to €35 million. An operational guide for SMEs: checklist, risk levels, sandboxes and grants.

Why is August 2, 2026 the key date for European companies?

On August 2, 2026 EU Regulation 2024/1689 — the AI Act — becomes fully applicable to high-risk artificial intelligence systems. From that date, national market surveillance authorities can issue fines up to €35 million or 7% of global turnover for the most serious violations. For European SMEs this is not a distant deadline reserved for legal departments: it is the date when any company using a CV screening tool, a credit scoring system or a conversational assistant must demonstrate it is compliant.

According to the Polytechnic of Milan AI Observatory (January 2026) and parallel findings published by market surveillance authorities across the EU, more than 60% of European companies using AI tools have not yet started a formal compliance path. The gap is not technological but informational. Large corporations have dedicated teams and in-house counsel; SMEs are often behind because they lack operational guidance that translates the Regulation's articles into concrete checklists.

This guide answers that exact need: what actually changes on August 2, 2026, how the four AI Act risk tiers work, how local implementing laws (such as Italy's Law 132/2025) interact with the European Regulation, which steps to take immediately, what fines are enforceable and how to leverage regulatory sandboxes and grants reserved for SMEs and startups. With real examples, precise timelines and a compliance checklist designed for companies under 250 employees.

August 2, 2026 does not merely introduce technical obligations: it marks the shift from preparation to effective enforcement. National market surveillance authorities have already opened preliminary inspections across Italy, Spain and France, including investigations of AI vendors commonly used by SMEs.

What are the four risk levels of the AI Act?

The AI Act adopts a risk-based approach and classifies AI systems into four categories: unacceptable risk (prohibited), high risk, limited risk and minimal risk. The category determines applicable obligations, ranging from an outright ban to free circulation with basic transparency requirements. Understanding where each system falls is the first step of any compliance journey.

1. Unacceptable risk: the bans already in force

Since February 2, 2025 the following practices have been prohibited across the EU: governmental social scoring, emotion recognition in workplaces and schools, biometric categorization based on sensitive data (ethnicity, sexual orientation, political beliefs), and untargeted scraping of facial images to build recognition databases. For companies the ban is already operational: using emotion-monitoring software to evaluate staff performance exposes the company to the maximum sanction of €35 million or 7% of global turnover.

2. High risk: the core of 2026 obligations

The most demanding category operationally. Annex III lists eight high-risk areas: biometric identification, critical infrastructure, education and vocational training, employment and workforce management, access to essential public and private services, law enforcement, migration and asylum management, administration of justice. Within these areas fall very concrete SME use cases: CV screening tools, credit scoring, employee performance evaluation, health and insurance risk management platforms.

From August 2, 2026 these systems must meet stringent requirements: documented risk management across the lifecycle, quality and representativeness of training datasets, full technical file, automatic log recording, effective human oversight, registration in the EU high-risk database, CE marking and declaration of conformity.

3. Limited risk: transparency obligations

Includes chatbots, virtual assistants and systems that generate or manipulate content (deepfakes, AI-generated images and text). The main obligation is transparency: users must be informed they are interacting with a machine and that content was generated by AI. An SME using a chatbot on its website or a voice assistant for bookings falls here, with labeling and disclosure obligations proportionate to the purpose.

4. Minimal risk: free use

Spam filters, AI in video games, basic product recommendation systems, grammar checkers. No specific obligations beyond standard consumer law and data protection rules. Most AI systems in use by SMEs today fall into this category, which is why, once systems are mapped, compliance is often more manageable than expected.

Risk Level | Example Systems | Main Obligations | Application Date
Unacceptable | Social scoring, workplace emotion recognition | Full prohibition | February 2, 2025
High | CV screening, credit scoring, HR decisions | Risk management, technical file, CE marking | August 2, 2026
Limited | Chatbots, deepfakes, AI-generated content | Transparency and labeling | August 2, 2026
Minimal | Spam filters, recommendations, games | No specific obligations | Immediate

How do national laws (e.g. Italy's Law 132/2025) integrate the AI Act?

Several EU member states have adopted implementing laws that complement the AI Act. Italy's Law 132 of September 23, 2025, in force since October 10, 2025, is the first comprehensive national framework dedicated to AI. It does not replace the European Regulation — which remains directly applicable — but integrates it along three directions: areas left to member state discretion, rules for AI use in intellectual professions and the workplace, national instruments of governance, supervision and funding. Other EU countries are following similar patterns.

Dual compliance: the risk of applying only one layer

The coexistence of European and national rules creates what legal experts call dual compliance. SMEs must simultaneously meet the technical requirements of the AI Act (risk management, documentation, human oversight) and the additional obligations of local laws (worker disclosures, human dignity safeguards, transparency on automated decisions). The most common mistake is applying only one layer: when audited, national authorities contest the violation of both, with cumulative sanctions.

Additional obligations for workplace and regulated professions

Italy's Law 132/2025 requires that AI use in HR functions — selection, monitoring, performance evaluation — be explicitly disclosed to workers, with clear information on decision logic, anti-discrimination measures and the right to request human review of automated decisions. For regulated professions (lawyers, doctors, accountants) disclosure to the client is mandatory whenever AI plays a material role in the service. France's CNIL has announced similar enforcement priorities for HR systems from autumn 2026.

Competent authorities across Europe

Supervision is distributed across multiple bodies. In Italy: AgID and ACN for technical and cybersecurity aspects, the Data Protection Authority for privacy, sectoral regulators for banking (Bank of Italy), financial services (CONSOB) and media (AGCOM). In Spain AESIA, in France CNIL, DGCCRF and Arcom, in Germany BNetzA and federal regulators. The European Commission retains direct powers on general-purpose AI (GPAI) models of systemic scope. Evolus integrates audit log and traceability tools that streamline the preparation of evidence required by each authority.

SME compliance checklist: where to start?

AI Act compliance does not require radical technology transformation: it requires order, documentation and training. SMEs can reach a solid level of conformity in 8 to 12 weeks of structured work, without excessive costs. Here are the five operational steps to complete before August 2, 2026, in priority order.

1. Map AI systems in use

Build a corporate AI systems register: all tools containing AI components (chatbots, HR software, CRM with lead scoring, marketing automation platforms, content generation tools, intelligent surveillance systems, voice assistants). For each, record vendor, purpose, estimated risk category, data processed, internal owner, adoption date. Without this map it is impossible to understand which obligations apply or prepare the required documentation.
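As an illustration, the register can live as structured data rather than a loose spreadsheet, which makes later classification and reporting easier to automate. The sketch below is a minimal example; the field names, vendor names and risk labels are our own illustrative choices, not terms mandated by the Regulation.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in the corporate AI systems register (illustrative fields)."""
    name: str
    vendor: str
    purpose: str
    risk_category: str        # estimated: "unacceptable" | "high" | "limited" | "minimal"
    data_processed: list[str]
    internal_owner: str
    adopted: date

register = [
    AISystemRecord(
        name="CV screening tool",
        vendor="ExampleHR GmbH",            # hypothetical vendor
        purpose="Pre-filter job applications",
        risk_category="high",               # employment use: Annex III
        data_processed=["CVs", "cover letters"],
        internal_owner="HR lead",
        adopted=date(2024, 5, 1),
    ),
    AISystemRecord(
        name="Website chatbot",
        vendor="ExampleBot SAS",            # hypothetical vendor
        purpose="Customer support FAQ",
        risk_category="limited",            # transparency obligations only
        data_processed=["chat transcripts"],
        internal_owner="IT lead",
        adopted=date(2025, 1, 15),
    ),
]

# Which systems need the full high-risk documentation package?
high_risk = [r.name for r in register if r.risk_category == "high"]
print(high_risk)
```

Once the register exists in this form, filtering it by risk category gives the worklist for step 2 directly.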

2. Risk classification

For each mapped system, verify its classification under the AI Act criteria: is it high-risk under Annex III? Is it limited-risk with transparency obligations? The check is not trivial: a recruiting tool using AI to filter CVs falls among high-risk systems, even when the vendor describes it as a simple decision support tool. The European Commission's official guidance, national authority opinions and sectoral guidelines are the reference points.
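A first-pass triage can be run mechanically against the Annex III area list before legal review. The keyword-based helper below is a deliberate simplification for illustration only; the abridged area labels and the function itself are our own, and the result never replaces the official guidance cited above.

```python
# Annex III high-risk areas (abridged labels, for illustration only)
ANNEX_III_AREAS = {
    "biometric identification",
    "critical infrastructure",
    "education and vocational training",
    "employment and workforce management",
    "essential services",
    "law enforcement",
    "migration and asylum",
    "administration of justice",
}

# Purposes that typically trigger only transparency obligations
TRANSPARENCY_ONLY = {"chatbot", "content generation", "deepfake"}

def triage(area: str, purpose: str) -> str:
    """Rough first-pass risk triage; legal review must confirm the result."""
    if area in ANNEX_III_AREAS:
        return "high"
    if purpose in TRANSPARENCY_ONLY:
        return "limited"
    return "minimal (verify)"

# A recruiting tool that filters CVs is high-risk even when the vendor
# markets it as mere "decision support":
print(triage("employment and workforce management", "CV filtering"))
```

The point of the sketch is the order of the checks: Annex III membership dominates, regardless of how the vendor describes the product.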

3. AI literacy: mandatory training for everyone

Article 4 of the AI Act requires AI literacy for all personnel using, developing or managing AI systems. The obligation has applied since February 2, 2025, and from August 3, 2026 national market surveillance authorities can verify compliance with it. No university degree is needed: what is required are documented training paths that enable employees to recognize system errors, understand algorithmic bias and verify the reliability of generated outputs. Documentation is essential: hour logs, topics covered, competencies acquired, certificates issued.

4. Governance and technical documentation

High-risk systems require a complete technical file: risk management policy, training dataset description, performance and bias test results, human oversight procedure, post-market monitoring plan, user instructions. Limited-risk systems require much less: user notice, AI-generated content labeling, complaint handling procedure. Evolus provides traceability, audit log and integrated reporting tools that streamline the systematic collection of this evidence over time.

5. Responsibilities and procedures

Appoint an internal AI officer (no need for a new dedicated role: it can be the existing DPO, the IT lead or a management member), define incident response procedures for serious malfunctions or discriminatory outcomes, plan an annual review of the AI register. Governance is not a static document: it is a live cycle that follows the evolution of tools adopted and of the regulatory framework.

  1. Weeks 1-2: map all AI systems in use and identify vendors
  2. Weeks 3-4: risk classification for each system and prioritization
  3. Weeks 5-8: AI literacy training and documentation production
  4. Weeks 9-10: appoint AI officer, define procedures, collect vendor disclosures
  5. Weeks 11-12: final review, internal audit, continuous improvement plan

An SME with 50 employees and 6 AI systems in use can reach compliance with an investment estimated between €8,000 and €15,000, covering training, legal counsel and documentation. The average cost of a sanction — even under the attenuated SME regime — is at least 50 times higher.

What are AI Act fines and who enforces them?

AI Act fines are the most severe ever introduced in EU digital regulation, exceeding even GDPR's maximum ceilings. Article 99 of the Regulation sets three tiers calibrated on violation severity. SMEs and startups benefit from an attenuated regime: the applicable fine is the lower of the fixed ceiling and the turnover percentage, dramatically reducing financial exposure.

The three sanction tiers under Article 99

Violation Type | Maximum Fine | Application Date
Prohibited AI practices (Art. 5) | €35M or 7% of global turnover | In force since February 2, 2025
Non-compliance of high-risk systems | €15M or 3% of global turnover | From August 2, 2026
Inaccurate information to authorities | €7.5M or 1% of global turnover | From August 2, 2026

Attenuated regime for SMEs and startups

For SMEs (fewer than 250 employees and annual turnover of up to €50 million, under the EU definition) and startups, the applicable fine is the lower of the fixed ceiling and the turnover percentage. Concrete example: an SME with €2 million turnover that violates the high-risk rules faces a maximum of 3% of €2 million, i.e. €60,000, not €15 million. A real protection that nevertheless does not exempt the company from substantive obligations: the goal is not to punish disproportionately but to keep compliance sustainable for smaller businesses.
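The attenuated regime is simple arithmetic: take the lower of the fixed ceiling and the percentage of global turnover. A minimal sketch in Python, using the turnover figures from the example above (the function name is ours):

```python
def sme_max_fine(turnover_eur: float, ceiling_eur: float, pct: float) -> float:
    """Attenuated SME regime: the lower of the fixed ceiling and the
    percentage of global turnover applies."""
    return min(ceiling_eur, turnover_eur * pct)

# High-risk non-compliance tier: €15M ceiling or 3% of turnover.
# SME with €2M turnover -> 3% of €2M = €60,000, well below the ceiling.
print(sme_max_fine(2_000_000, 15_000_000, 0.03))   # 60000.0
```

For a large enterprise the same formula caps out at the fixed ceiling instead: with €600M turnover, 3% would be €18M, so the €15M ceiling applies.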

Who issues the fines

Fines are issued by national market surveillance authorities. In Italy AgID and ACN for technical and cybersecurity aspects, the Data Protection Authority for personal data processing, sectoral regulators for regulated domains. Across Europe: AESIA in Spain, CNIL, DGCCRF and Arcom in France, BNetzA and federal regulators in Germany. The European Commission retains direct powers on general-purpose AI (GPAI) models of systemic scope, coordinating action among member states.

Regulatory sandboxes and grants: what can an SME obtain?

The AI Act does not impose only obligations: it also introduces incentives specific to SMEs and startups that intend to develop or test AI solutions. Article 62 of the Regulation provides priority, free access to national regulatory sandboxes, reduced conformity assessment fees and simplified technical documentation. At national level, Italy's Law 132/2025 establishes an investment fund of up to €1 billion for startups and SMEs active in AI, cybersecurity and emerging technologies.

What a regulatory sandbox is and how it works

A regulatory sandbox is a controlled experimentation space where an SME can test an innovative AI system under real conditions, supervised by a national authority and without incurring sanctions for the duration of the test, provided it follows the agreed plan and the authority's guidance in good faith. It is the ideal tool for innovating without getting stuck in red tape: it allows a company to validate a solution, accumulate technical evidence and reach final certification with reduced time and costs.

Incentives for innovative startups and SMEs

In 2026 innovative European startups can access particularly rich incentive systems. In Italy alone: Smart&Start Italia (zero-rate financing up to €1.5M), ON – Oltre Nuove Imprese a Tasso Zero, R&D tax credit, the new Hyperamortization 2026-2028 for 4.0 assets and AI solutions. These tools are cumulable with AI compliance support measures, making AI investment economically sustainable even for very small companies. Similar programs exist in France, Germany and Spain at national and regional level.

How to apply

To access a national sandbox, submit an application to the competent authority with a detailed project: AI system description, experimentation goals, data used, risk mitigation measures, exit plan. Selection prioritizes solutions with relevant social or industrial impact and SMEs with an innovative profile. Operational guides published on the European Commission portal and by national authorities are updated monthly: worth checking before launching a project.

Frequently asked questions on the AI Act 2026

Are SMEs under 50 employees exempt from the AI Act?

No. The AI Act applies to all companies that use or provide AI systems, regardless of size. However, attenuated measures are in place for SMEs and startups: simplified technical documentation, free access to regulatory sandboxes, fines reduced to the lower of the fixed ceiling and the turnover percentage. No total exemption, but a lighter path that makes compliance sustainable even for small businesses.

Is our customer care chatbot a high-risk system?

Generally no. Customer care chatbots fall under the AI Act's limited-risk category, with obligations mainly on transparency: informing users they are interacting with a machine and labeling AI-generated content. It becomes high-risk only if the chatbot makes automated decisions with significant impact on the user, such as granting credit or excluding from an essential service.

What happens if we use AI software from non-EU vendors?

If the AI system is used in the European Union or produces effects on European citizens, the AI Act applies regardless of the vendor's location. The deploying SME, as deployer, must verify that the provider has met its obligations (CE marking, technical documentation, declaration of conformity). Responsibility for compliant use always remains with the European company adopting the solution.

Could the August 2, 2026 deadline be postponed?

A Digital Omnibus proposal is under discussion at EU level that would postpone some high-risk deadlines until December 2, 2027. As of April 2026 the proposal has not yet been adopted: the August 2, 2026 deadline remains legally binding. Companies should proceed with compliance without relying on unconfirmed postponements.

What is the difference between the AI Act and national laws like Italy's Law 132/2025?

The AI Act is EU Regulation 2024/1689, directly applicable in all member states without transposition. National laws like Italy's Law 132/2025 integrate the AI Act in areas left to member state discretion: labor, regulated professions, healthcare, justice, public administration, governance and funding. Companies operating in those countries must comply with both layers simultaneously — so-called dual compliance.


Want to see AI in action?

Request a personalized demo and discover how artificial intelligence can transform your business processes.