Get ahead of AI regulation

Hannia Zia, VP Product

Feb 5, 2025

In both Europe and the United States, regulators have established a broad array of rules pushing for transparency and explainability when AI is used in key workflows across many sectors.

Regulation also targets accuracy: companies may face hefty fines if their AI “hallucinates” and misleads the end user.

Europe + UK

EU AI Act: Penalties of up to €7.5 million or 1% of annual global turnover for supplying incorrect or misleading information to authorities. High-risk AI systems must be transparent, explainable, and auditable; non-compliance with those obligations carries fines of up to €15 million or 3% of turnover. High-risk uses include:

  • Recruitment

  • Medical devices/decisions (software included)

  • Creditworthiness

  • Benefits

  • Health and life insurance

  • Safety of critical infrastructure (e.g., energy, transport)

GDPR (General Data Protection Regulation) - Article 22: Individuals have the right not to be subject to solely automated decisions that significantly affect them, and must be given meaningful information about the logic involved.

Financial Conduct Authority (FCA) expectations for AI: AI models used in UK financial services must be explainable and accurate under existing conduct and governance rules.

US

Equal Credit Opportunity Act (ECOA) & Fair Credit Reporting Act (FCRA): When AI/ML models are used for credit decisions, lenders must provide reasons for adverse decisions.

FDA Regulations for AI in Healthcare: AI-driven medical decisions must be interpretable.

New York City Local Law 144: Requires businesses to conduct independent bias audits of their automated employment decision tools and to notify candidates that such tools are in use.
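The bias audits required by Local Law 144 are built around impact ratios: each group's selection rate divided by the highest selection rate among all groups. A minimal sketch (the function names and the audit data are illustrative, not from any official tooling):

```python
def selection_rate(selected, total):
    """Fraction of applicants in a group who were selected."""
    return selected / total if total else 0.0

def impact_ratios(group_outcomes):
    """Impact ratio per group: its selection rate divided by the
    highest selection rate among all groups (1.0 = parity)."""
    rates = {g: selection_rate(s, t) for g, (s, t) in group_outcomes.items()}
    best = max(rates.values())
    return {g: (r / best if best else 0.0) for g, r in rates.items()}

# Illustrative (made-up) audit data: {group: (selected, applicants)}
data = {"group_a": (40, 100), "group_b": (25, 100)}
print(impact_ratios(data))  # group_b selected at 62.5% of group_a's rate
```

A ratio well below 1.0 for any category is the kind of disparity an audit would surface for scrutiny.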

National Defense Authorization Act for Fiscal Year 2020: Directs the ethical use of artificial intelligence in military systems, emphasizing reliability, transparency, and governability.

Algorithmic Accountability Act: Proposed legislation that would require companies to assess the privacy, security, and bias impacts of AI systems used in decision-making processes that affect consumers.

Consumer Financial Protection Bureau (CFPB) Enforcement: Actions against companies whose AI systems produce inaccurate consumer credit results.

Federal Trade Commission (FTC) Enforcement: Penalties under the FTC Act for AI systems that provide false or misleading information to consumers.

How UnlikelyAI’s tech can help

  • Explainability: UnlikelyAI provides transparent, auditable reasoning for decisions, unlike black-box LLMs.

  • Accuracy: Minimizes hallucinations by using LLMs only for narrow, low-risk tasks (e.g., data extraction).

  • Flexibility: Handles sparse or evolving information by flagging "don’t know" scenarios for human review.

  • Regulatory Compliance: Tailored for high-stakes industries (e.g., healthcare, legal, finance) where accuracy and compliance are critical.
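The "don't know" flagging described above can be sketched as a simple confidence gate that abstains rather than guesses. This is a hypothetical illustration of the pattern, not UnlikelyAI's actual API; the `Answer` type, confidence score, and threshold are all assumptions:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Answer:
    text: str
    confidence: float   # assumed 0.0-1.0 score from the reasoning system
    reasoning: list     # auditable chain of steps behind the answer

def route(answer: Optional[Answer], threshold: float = 0.9):
    """Return the answer only when the system is confident;
    otherwise flag the case for human review instead of guessing."""
    if answer is None or answer.confidence < threshold:
        return {"status": "needs_human_review", "answer": None}
    return {"status": "answered", "answer": answer.text,
            "audit_trail": answer.reasoning}

print(route(Answer("Eligible", 0.97, ["rule 4.2 matched"])))
print(route(Answer("Eligible", 0.55, ["weak match"])))  # routed to a human
```

The design point is that abstention is an explicit, auditable outcome: a low-confidence case produces a review record rather than a plausible-sounding hallucination.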