
See How Easily You Can Break the Law With AI

The EU AI Act is not a policy exercise—it’s a liability framework. Certain AI practices are prohibited, others require documented controls, logging, human oversight, and incident processes. This article breaks down your obligations as a deployer or provider and includes a one-page readiness scorecard to identify your biggest legal gaps in 10 minutes.

You don’t need a complex AI system to create legal risk. In most companies, a single prompt, a copied spreadsheet, or a “quick check” with an AI tool is enough. No intent. No bad faith. Just everyday work.

 

Let’s look at some example use cases that quietly break the law.

This article shows simple, common ways AI is already being used across procurement, HR, and finance, and how those seemingly harmless actions can violate data protection, employment law, or internal controls. AI compliance is not about stopping innovation. It is about understanding how easily the line is crossed without anyone noticing.

1. Procurement (Sourcing / Purchasing)

1.1 Supplier evaluation using generative AI

A buyer uploads bid data, service descriptions, or email correspondence into an AI tool to compare suppliers or generate rankings.

Risk:

  • Confidential contract and pricing data leaves the organization
  • No traceability of how evaluations are generated (bias, hallucinations)
  • Potential violations of NDAs and competition law
  • Decisions are not audit-proof or properly documented (see the logging sketch after this list)
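
To make “audit-proof” concrete, here is a minimal sketch of the kind of record that supports traceability. It is illustrative only: the function name, log path, and fields are assumptions, not a prescribed format.

```python
import hashlib
import json
from datetime import datetime, timezone

# Illustrative only: an append-only log for AI-assisted evaluation steps,
# so an auditor can later reconstruct who asked what, with which data.
LOG_PATH = "ai_usage_log.jsonl"  # assumed internal location

def log_ai_evaluation(user: str, tool: str, purpose: str,
                      input_summary: str, output_summary: str) -> None:
    """Record an AI-assisted step without duplicating confidential raw data."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "purpose": purpose,
        # Hash instead of raw text: traceability without copying bid data.
        "input_sha256": hashlib.sha256(input_summary.encode()).hexdigest(),
        "output_summary": output_summary,
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Example: a buyer documents an AI-generated supplier ranking.
log_ai_evaluation(
    user="buyer_042",
    tool="internal-llm",
    purpose="supplier comparison, RFQ 2024-17",
    input_summary="bid data for three suppliers",
    output_summary="ranked supplier B first, citing price and lead time",
)
```

A record like this does not make the evaluation correct, but it makes it reviewable, which is what “properly documented” means in practice.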

1.2 Contract drafting or clause review using AI

AI is used to “quickly review” procurement contracts or suggest alternative clauses.

Risk:

  • Legally incorrect or incomplete clauses are adopted
  • Responsibility for legal assessment becomes unclear
  • False sense of security: “the AI has checked it” replaces proper legal review

1.3 Preparing price negotiations with AI

A prompt such as “How can I push this supplier to lower their price?”, including specific supplier information.

Risk:

  • Sensitive business relationships are disclosed externally
  • AI generates aggressive or unethical negotiation strategies
  • Reputational damage if such usage becomes known

2. Human Resources

2.1 Applicant pre-selection using AI

CVs and application documents are uploaded to AI tools to create rankings or shortlists.

Risk:

  • Processing of highly sensitive personal data
  • Discrimination due to training bias (age, gender, origin)
  • Violations of GDPR and anti-discrimination law
  • Lack of transparency towards applicants

2.2 Performance evaluations via AI-generated summaries

Managers use AI to summarize or assess employee feedback, goal achievement, or meeting notes.

Risk:

  • Subjective or biased assessments are “objectified”
  • Employee data is processed without clear purpose limitation
  • Escalation and liability risks in termination or promotion decisions

2.3 Drafting warnings or termination letters

HR uses AI to “efficiently formulate” employment law documents.

Risk:

  • Legally flawed wording
  • Lack of case-specific consideration
  • High legal risk in labor court proceedings

3. Finance (Accounting and Controlling)

3.1 Financial data analysis using AI

The controlling team uploads P&L data, cash flow figures, or forecasts into AI tools to analyze deviations or cost-saving potential.

Risk:

  • Highly sensitive financial data leaves the organization
  • No control over storage or secondary use
  • Violations of internal control systems (ICS)

3.2 Automated forecasts and predictions

AI is used to generate revenue or liquidity forecasts that directly influence management decisions.

Risk:

  • False assumptions or hallucinations remain undetected
  • Management relies on unvalidated models (see the sketch after this list)
  • Liability risks due to wrong decisions
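
What “validating” a forecast could mean in its simplest form: compare what the AI predicted for past periods with what actually happened, before trusting its next prediction. The numbers and the tolerance below are made up for illustration.

```python
# Illustrative only: back-check an AI forecast against realized figures.
actuals     = [102.0, 98.5, 110.0, 105.0]   # realized revenue, last 4 quarters
ai_forecast = [100.0, 101.0, 104.0, 106.0]  # what the AI had predicted for them

def mape(actual: list[float], forecast: list[float]) -> float:
    """Mean absolute percentage error: a basic sanity metric for forecasts."""
    return 100 * sum(abs(a - f) / abs(a)
                     for a, f in zip(actual, forecast)) / len(actual)

error = mape(actuals, ai_forecast)
print(f"MAPE over the last four quarters: {error:.1f}%")

# An assumed tolerance a finance team might set; not a standard.
if error > 5.0:
    print("Not validated: flag for manual review before any decision.")
else:
    print("Within tolerance, but still requires human sign-off.")
```

Even a check this simple moves the model from “trusted because it sounds plausible” to “trusted because it was tested”.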

3.3 Support for financial statements or valuations

AI assists with provisions, valuations, or commentary for annual financial statements.

Risk:

  • Blurring of decision preparation and decision-making
  • Lack of traceability for auditors
  • Compliance and governance risks

Clear conclusion

These use cases are not “wrong” in the sense of being explicitly forbidden — they are risky because:

  • Responsibility becomes unclear
  • Data is used in an uncontrolled manner
  • Decisions are no longer explainable
  • Business units use AI as a shortcut without governance

This is exactly where meaningful AI governance comes in: not as a brake on innovation, but as clarity about who is allowed to do what, with which data, for which purpose — and who ultimately carries responsibility.
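
What that clarity can look like in practice: below is a minimal sketch of governance rules written down as data, with a check in front of the AI tool. Every name in it (the roles, data classes, and the is_allowed helper) is an assumption for illustration, not a prescribed standard.

```python
# Illustrative only: permissions as explicit data instead of everyday habit.
# (role, data_class) -> purposes that are allowed; anything unlisted is denied.
POLICY = {
    ("buyer", "public_market_data"):   {"market research", "draft wording"},
    ("buyer", "supplier_pricing"):     set(),  # never leaves the organization
    ("hr", "applicant_data"):          set(),  # GDPR: no upload to external AI
    ("controller", "aggregated_kpis"): {"deviation analysis"},
}

def is_allowed(role: str, data_class: str, purpose: str) -> bool:
    """True only if this role may use this data class for this purpose."""
    return purpose in POLICY.get((role, data_class), set())

# The same tool, very different answers depending on the data involved:
print(is_allowed("buyer", "public_market_data", "market research"))  # True
print(is_allowed("buyer", "supplier_pricing", "price negotiation"))  # False
print(is_allowed("hr", "applicant_data", "shortlisting"))            # False
```

The particular data structure is beside the point; what matters is that the rules are explicit, checkable before data leaves the company, and owned by someone identifiable.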