EU AI Act 2025 for Indian AI Builders: GPAI Duties, High-Risk Rules, Deadlines, Fines, and Exactly How to Comply

EU AI Act 2025: If you build in India but sell or serve users in the EU, the EU Artificial Intelligence Act now applies to you on a clear schedule. The Act entered into force on 1 August 2024; GPAI (general-purpose AI model) obligations began on 2 August 2025; most high-risk AI system rules apply from 2 August 2026 (with some product-embedded timelines extending to 2027).

Non-EU providers may also need an EU authorised representative. Non-compliance can attract fines up to €35 million or 7% of global turnover. Below is a practical, step-by-step playbook tailored to Indian founders, CTOs, and counsel, with only official, citable sources. 

The Timeline: What Kicks In When (and Why It Matters)

  • 1 Aug 2024: Act enters into force. Guidance starts trickling out.
  • 2 Feb 2025: Prohibitions (unacceptable-risk uses) and AI literacy begin to apply.
  • 2 Aug 2025: GPAI obligations and governance provisions apply (plus GPAI Code of Practice and guidelines published July 2025).
  • 2 Aug 2026: Core high-risk system requirements become applicable; broader transparency obligations (e.g., deepfake disclosures) complement them.
  • 2 Aug 2027: Extended window for high-risk AI embedded in regulated products.

The Commission has repeatedly said the timeline stands; GPAI → Aug 2025, high-risk → Aug 2026. Keep your Gantt chart aligned to these dates. 

Do You Fall Under “GPAI” or “High-Risk” (or Both)?

  • GPAI model provider: You provide a general-purpose AI model (foundation/base model) that can be adapted across uses. Your obligations apply to the model itself, regardless of downstream use.
  • High-risk AI system provider: You place on the EU market an AI system used in Annex III areas (e.g., employment, education, essential services, biometrics) or safety components in regulated products.
  • You can be both: If you integrate your model into a system, you carry GPAI duties (Art 53) and any high-risk system requirements that apply.

Your GPAI To-Do List (Applies from Aug 2, 2025)

1) Draw up and maintain technical documentation on the model

Maintain living documentation explaining training & testing process, evaluations, and lifecycle info; use the AI Office’s model documentation form (in the GPAI Code of Practice) to streamline. 
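
One practical way to keep this documentation living rather than stale is to store it in a machine-readable structure versioned alongside the model. Below is a minimal sketch in Python; the field names are illustrative placeholders, and the authoritative fields are the ones in the AI Office's Model Documentation Form.

```python
# A machine-readable companion to the Model Documentation Form. Field names
# are illustrative placeholders, NOT the official form's fields; map them to
# the form before any submission.
from dataclasses import dataclass, field

@dataclass
class ModelDoc:
    model_name: str
    version: str
    training_process: str                                   # data pipeline, compute, methods
    evaluations: list[str] = field(default_factory=list)    # eval suites + result references
    known_limitations: list[str] = field(default_factory=list)
    last_updated: str = ""                                  # bump on every model release

# Update alongside each release so the docs never drift from the model.
doc = ModelDoc(
    model_name="example-gpai-model",
    version="2025.08",
    training_process="Pretraining on curated corpus; see internal pipeline spec.",
    evaluations=["internal-evals-v3", "public-benchmark-suite"],
    known_limitations=["Not evaluated for medical or legal advice."],
    last_updated="2025-08-02",
)
```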

2) Copyright compliance policy + publish a training-data summary

You must implement a policy to respect the EU Copyright Directive and publish a “sufficiently detailed summary” of the content used for training—using the official template issued July 2025. This applies to GPAI with or without systemic risk. 

3) Share info with integrators (downstream AI-system providers)

Provide necessary information/instructions to those integrating your model into AI systems—while protecting IP/confidentiality. 

4) If you’re outside the EU, appoint an EU authorised representative

Article 54 requires non-EU GPAI providers to appoint, by written mandate, an EU-based authorised representative before placing models on the EU market (with exemptions for some fully open-source models that don't pose systemic risk). 

5) If your GPAI is deemed “with systemic risk”

You will face additional duties (Article 55) such as model evaluations, adversarial testing, incident reporting, and cybersecurity controls; the Code of Practice provides a route to demonstrate compliance. 

Open-Source Models: What’s Exempt—and What Isn’t

Open-source GPAI providers may be exempt from some Article 53 items (e.g., parts of tech docs for integrators), yet must still publish the training-data summary and maintain copyright compliance; EU rep is typically not required unless the model presents systemic risk. Confirm scope carefully. 

High-Risk Systems: The Eight Pillars You Must Implement (from Aug 2, 2026)

If your system falls in Annex III (e.g., recruitment, education, essential services, biometrics), you’ll need to evidence the following:

  1. Risk management system (continuous).
  2. Data & data governance (quality, relevance, representativeness).
  3. Technical documentation (per Annex IV).
  4. Record-keeping (logs & traceability; a minimal logging sketch follows this list).
  5. Transparency & information to deployers (instructions, limitations).
  6. Human oversight (design for effective human control).
  7. Accuracy, robustness & cybersecurity (state performance & resilience).
  8. Post-market monitoring & incident reporting (ongoing vigilance).
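
The Act does not fix a log format for pillar 4, but authorities will expect automatically generated, traceable records. Here is a minimal sketch using Python's standard logging module; the JSONL layout and field names are illustrative assumptions, not regulatory requirements.

```python
# Append-only decision log supporting record-keeping and human oversight.
# The schema is an illustrative assumption; align fields with your Annex IV dossier.
import json
import logging
import uuid
from datetime import datetime, timezone

audit_log = logging.getLogger("ai_audit")
audit_log.addHandler(logging.FileHandler("ai_audit.jsonl"))
audit_log.setLevel(logging.INFO)

def log_decision(model_version: str, input_hash: str, output: str, reviewer: str) -> str:
    """Write one traceable record per automated decision; returns the event id."""
    event_id = str(uuid.uuid4())
    audit_log.info(json.dumps({
        "event_id": event_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_hash": input_hash,     # hash rather than raw input to limit PII in logs
        "output": output,
        "human_reviewer": reviewer,   # evidence for the human-oversight pillar
    }))
    return event_id
```

Hashing inputs rather than logging them raw keeps the audit trail traceable without turning your logs into a second copy of personal data.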

CE Marking for High-Risk AI

High-risk systems need CE marking (or digital CE) to show conformity; include the notified body ID where applicable. 

FRIA: Fundamental Rights Impact Assessment (for some deployers)

Before deployment, certain public bodies or public-service providers and some Annex III use-cases must complete a FRIA to assess rights impacts (e.g., bias, exclusion, due process). If you supply such customers, expect FRIA inputs in RFPs. 

Limited-Risk & Transparency: Don’t Forget the Labels

If you run chatbots or generate synthetic media (deepfakes), you must tell people they're interacting with AI and label AI-generated or manipulated content, subject to limited exceptions such as certain law-enforcement contexts. Keep design patterns clear and accessible, as in the sketch below.
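
In practice the disclosure should travel with the product surface itself, not sit in documentation. A minimal sketch of a chat API response that carries its own bot disclosure, assuming a JSON-over-HTTP service; the field names (`is_ai`, `disclosure`) are illustrative assumptions, not Article 50 terminology.

```python
# A minimal bot-disclosure wrapper for chat responses. Field names are
# illustrative assumptions; the legal requirement is that users know they
# are interacting with AI unless that is obvious from context.
from dataclasses import dataclass, asdict

@dataclass
class ChatReply:
    text: str
    is_ai: bool = True                                  # machine-readable flag for clients
    disclosure: str = "You are chatting with an AI assistant."

def reply(model_output: str) -> dict:
    """Wrap model output with the disclosure the client UI must render."""
    return asdict(ChatReply(text=model_output))

# Example payload the front end renders, disclosure included:
# {"text": "...", "is_ai": true, "disclosure": "You are chatting with an AI assistant."}
```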

Penalties: The New Cost of Non-Compliance

  • Banned practices: up to €35m or 7% global turnover.
  • Most other breaches (incl. high-risk/GPAI obligations): up to €15m or 3%.
  • Incorrect/misleading info to authorities: up to €7.5m or 1%.

SME calibrations exist, but don't bank on leniency.

Official Tools You Can Actually Use

  • GPAI Code of Practice (Transparency, Copyright, Safety/Security chapters) + Model Documentation Form; signatory option available.
  • Training-data summary template (with explanatory notice).
  • Commission FAQs/Guidelines clarifying the scope of GPAI duties and compute thresholds for “systemic risk”.

India → EU: Three Fast Tracks to Compliance

Track A — You provide a GPAI model (APIs, SDKs, on-prem)

  • Appoint an EU authorised representative (Article 54) if you’re outside the EU and not fully exempt as open-source. Document mandate, contacts, and 10-year retention.
  • Publish your training-data summary using the AI Office template, link it from your model card, and keep it updated with major version changes.
  • Copyright policy: implement rightsholder opt-outs (e.g., TDM reservations), notice-and-action workflows, and training-set governance to align with the Copyright Directive; see the opt-out sketch after this list.
  • Model docs: fill the model documentation form from the Code; map risks, evals, red-team/adversarial tests (if systemic risk).
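
On the opt-out point above, one widely used machine-readable signal is robots.txt. Below is a minimal sketch of honouring it at ingestion time, using only the Python standard library; the crawler agent name is a placeholder, and robots.txt is only one of the opt-out channels a Copyright Directive policy may need to cover (metadata and terms of service are others).

```python
# Check a site's robots.txt before ingesting a URL into a training set.
# "ExampleTrainingBot" is a placeholder agent name; network errors are not
# handled in this sketch.
from urllib.parse import urlparse, urlunparse
from urllib import robotparser

def may_ingest(url: str, agent: str = "ExampleTrainingBot") -> bool:
    """Return False if the site's robots.txt disallows our training crawler."""
    parts = urlparse(url)
    robots_url = urlunparse((parts.scheme, parts.netloc, "/robots.txt", "", "", ""))
    rp = robotparser.RobotFileParser()
    rp.set_url(robots_url)
    rp.read()                         # fetches robots.txt over the network
    return rp.can_fetch(agent, url)

# Example: skip pages whose publishers have opted out.
# if may_ingest("https://example.com/article"): ingest(url)
```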

Track B — You ship a high-risk AI system (Annex III)

  • Build the Annex IV tech dossier early (architecture, datasets provenance, testing, limits, human oversight).
  • Conformity assessment route: self-assessment vs notified body (depends on the module and whether harmonised standards are used). Affix CE mark at go-live.
  • FRIA readiness: prepare FRIA-ready inputs for public-sector buyers or where mandated (Article 27).

Track C — You do both (GPAI → product)

  • Treat model and system as separate compliance tracks; reuse artefacts (evals, data sheets) but avoid gaps—GPAI compliance doesn’t replace high-risk duties.

Your Product Marketing Isn’t Immune: Transparency Labels

If your app includes AI chat or generates images/video/audio, build UI labels that meet Article 50 (e.g., “AI-generated” overlays, bot disclosures). Spain has already flagged major fines aligned with the EU regime for unlabeled deepfakes; expect more national enforcement to follow.
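
For the image case, here is a minimal sketch of a visible label, assuming the Pillow library is available; the wording, placement, and styling are illustrative guesses, so verify them against the Article 50 text and any national guidance.

```python
# Stamp a visible "AI-generated" banner onto a synthetic image.
# Label wording and placement are illustrative assumptions, not mandated text.
from PIL import Image, ImageDraw

def label_ai_generated(path_in: str, path_out: str, text: str = "AI-generated") -> None:
    """Overlay a semi-transparent disclosure banner in the bottom-left corner."""
    img = Image.open(path_in).convert("RGBA")
    overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    # Rough banner width from the default bitmap font's character size.
    draw.rectangle([(10, img.height - 40), (30 + 8 * len(text), img.height - 10)],
                   fill=(0, 0, 0, 160))
    draw.text((20, img.height - 35), text, fill=(255, 255, 255, 255))
    Image.alpha_composite(img, overlay).convert("RGB").save(path_out)
```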

Enforcement & Fines: Who Polices, What’s at Stake

Member-state market-surveillance authorities and the EU AI Office coordinate enforcement; GPAI breaches can draw up to 3%/€15m, while prohibited practices can reach 7%/€35m. Document why your use case is not prohibited/high-risk, and keep your dossiers inspection-ready. 

Frequently Misunderstood (Indian Builders’ Edition)

  • “I host in India, so EU law doesn’t apply.” False. If your model/system output is used in the EU, the Act can apply.
  • “Open-source means no obligations.” Not quite—copyright policy and training summary still apply (unless specific exclusions), and systemic risk changes the calculus.
  • “GPAI compliance covers my hiring AI.” No—system rules are separate, and Annex III areas (like employment) have their own bar.

Ship to Europe with Confidence

A 30-day sprint plan for Indian AI teams

  • Day 1–5: Classify your product(s): GPAI vs system; check Annex III; draft your regulatory scope memo.
  • Day 6–10: For GPAI—start the model documentation form, adopt the training-data summary template, lock a copyright policy.
  • Day 11–15: If non-EU GPAI, shortlist and mandate an EU authorised representative (Article 54).
  • Day 16–25: If high-risk system—build the Annex IV dossier, plan conformity assessment, and CE route.
  • Day 26–30: Implement Article 50 UI labels (bots, deepfakes); write your post-market monitoring SOP; run an internal mock audit.

Bookmark these official pages for your SOP:

  • Application timeline & overview (EC), GPAI Code of Practice, GPAI FAQs/Guidelines, Training-data summary template, AI Act Service Desk (Article pages).

FAQs: EU AI Act 2025

Q1. We’re an Indian startup offering an API to EU customers. Do the GPAI rules apply?

If your model is made available on the EU market, Article 53 applies from Aug 2, 2025; if you’re outside the EU, appoint an authorised representative (Art 54). 

Q2. What exactly must my training-data summary include?

Use the AI Office template; it must be “sufficiently detailed” to help rightsholders exercise their rights—not a token list. Keep it updated per release. 

Q3. Does open-source status exempt us from all GPAI duties?

No. Some tech-doc obligations are relaxed, but copyright policy and the training-data summary remain; systemic-risk models face stricter duties. 

Q4. How do I know if my system is high-risk?

Check Annex III categories and Article 6 classification rules; if in scope, build the Annex IV dossier and plan CE marking. 

Q5. Who must run a FRIA—and when?

Certain public bodies/public-service deployers and specific Annex III cases must perform a FRIA before deployment. Suppliers should be prepared to provide inputs. 

Q6. What are the fines?

Up to €35m/7% for prohibited practices, €15m/3% for other breaches (incl. GPAI/high-risk obligations), €7.5m/1% for misleading regulators. 

Q7. Are there official tools to help?

Yes: GPAI Code of Practice, Guidelines/FAQs, and the training-data summary template—all from the Commission/AI Office. 

Q8. Are deadlines likely to slip?

Despite industry pressure, the Commission has stuck with the schedule; plan on Aug 2025/26 milestones. 
