USA Business Today

Using AI? You May Be Legally Required to Say So

New laws across the U.S. are forcing companies to be more transparent about their use of AI—especially in hiring, pricing, and customer service.

As artificial intelligence continues to infiltrate every corner of modern business—from chatbots and credit scoring to hiring and personalization—the regulatory landscape is catching up fast. And in 2025, that means disclosure is no longer optional.

A growing wave of state laws, agency guidance, and enforcement actions is requiring companies to be upfront when using AI-driven tools—especially when those tools make or influence decisions that affect people’s rights, opportunities, or finances.

If your business uses AI and hasn’t updated its privacy policy, UX language, or operational workflows, you could be one audit—or one lawsuit—away from a major compliance issue.

Here’s what’s happening, where, and what you need to do.

What Is AI Disclosure?

AI disclosure refers to a company’s obligation to inform users when a process or decision is made (or influenced) by artificial intelligence or automated systems.

It’s no longer enough to hide behind “smart tools” or “digital optimization.” Regulators want clear, understandable notices—especially in areas where AI is:

  • Evaluating humans (like hiring or lending)
  • Making decisions with real-world consequences
  • Replacing human interaction (such as in customer service)

The Legal Landscape in 2025

As of this year, no federal law mandates AI disclosures nationwide—but several states and cities do, and more are joining them.

🟨 New York City

Passed Local Law 144, now in full effect, requiring:

  • Companies using automated employment decision tools to perform bias audits
  • Public disclosure of AI usage in hiring decisions
  • Applicant notification before any AI-powered screening

🟦 California

Passed the DELETE Act, which enhances data privacy and:

  • Requires companies to disclose data usage that contributes to AI training
  • Expands consumer rights to know how their data influences automated outcomes

🟥 Illinois

Illinois is best known for BIPA, which governs biometric data, but recent court rulings suggest that AI-based facial recognition and other automation tools require explicit notice and consent.

🟩 Colorado, Vermont, and Washington

These states are drafting bills that would require:

  • Notices when AI is used in high-risk areas (employment, housing, finance)
  • Mandatory opt-outs or human review options


Federal Pressure Is Building

While Congress has yet to pass comprehensive AI regulation, multiple federal agencies have issued strong guidance:

  • The FTC has warned that undisclosed or deceptive AI use may violate Section 5 of the FTC Act
  • The White House “Blueprint for an AI Bill of Rights” (2022) laid the groundwork, encouraging:
    • Notice and explanation
    • Access to human alternatives
    • Protections against algorithmic discrimination

Though non-binding, this framework is shaping enforcement expectations and industry standards.

Where AI Is Most Legally Sensitive

Not every AI use case triggers disclosure requirements—but these high-risk categories are being targeted by new rules:

Area | Examples | Disclosure Risk
Hiring & HR Tech | Resume screening, interview scoring | 🚨 High
Finance & Credit | Loan approvals, credit scoring, insurance rates | 🚨 High
Customer Service | Chatbots, automated email responses | ⚠️ Medium
Healthcare & Wellness | Diagnosis tools, treatment recommendations | 🚨 High
Personalized Pricing | Dynamic pricing based on user profile/behavior | ⚠️ Medium
Biometrics & Surveillance | Facial recognition, emotion detection | 🚨 High

If your AI tool evaluates, scores, or decides on people in these contexts, it should come with a disclosure.


What Businesses Should Be Doing in 2025

If you’re using AI in your workflows—either internally or via a vendor—you need a compliance strategy. Here’s how to start:

✅ 1. Audit Your AI Usage

Make a list of where automation or AI is involved:

  • Are you using ChatGPT or a chatbot?
  • Are resumes filtered by an algorithm?
  • Are prices adjusted dynamically based on user behavior?

Even third-party tools can trigger liability.
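For teams that want to track this systematically, the audit can start as a simple inventory in code. The sketch below is illustrative only—the tool names, areas, and risk labels are invented examples, not legal categories:

```python
# Illustrative AI-usage inventory: map each tool to the area it touches
# and whether a third-party vendor runs it. All entries are examples.
HIGH_RISK_AREAS = {"hiring", "credit", "healthcare", "biometrics"}

ai_inventory = [
    {"tool": "resume-screener", "area": "hiring", "vendor": True},
    {"tool": "support-chatbot", "area": "customer_service", "vendor": False},
    {"tool": "dynamic-pricing", "area": "pricing", "vendor": True},
]

def needs_disclosure_review(entry: dict) -> bool:
    """Flag tools in high-risk areas, plus any vendor-supplied tool."""
    return entry["area"] in HIGH_RISK_AREAS or entry["vendor"]

flagged = [e["tool"] for e in ai_inventory if needs_disclosure_review(e)]
print(flagged)  # ['resume-screener', 'dynamic-pricing']
```

Even this toy version captures the key audit questions: where is AI in use, who operates it, and does it sit in a high-risk area.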

✅ 2. Disclose Clearly and Proactively

Don’t hide behind vague language like “technology-assisted decisions.” Use direct phrasing:

“This conversation is powered by AI.”
“Your application will be reviewed by an automated system.”
“Pricing may vary based on automated personalization.”

Put disclosures at the point of interaction—not buried in privacy policies.
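In a chatbot, “point of interaction” can be as simple as prefixing the first automated reply with a plain-language notice. A minimal sketch, using the sample wording above (the function name is hypothetical):

```python
# Sketch: surface the AI notice where the user actually interacts,
# not only in a privacy policy.
AI_NOTICE = "This conversation is powered by AI."

def wrap_bot_reply(reply: str, first_message: bool) -> str:
    """Prefix the first automated reply in a session with the AI notice."""
    if first_message:
        return f"{AI_NOTICE}\n\n{reply}"
    return reply

print(wrap_bot_reply("How can I help you today?", first_message=True))
```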

✅ 3. Provide a Human Alternative

Several proposed laws—and the FTC’s guidance—stress the importance of access to human review:

  • Let users appeal automated decisions
  • Offer a contact option if they want to speak with a person
  • Flag areas where AI is used but not the final decision-maker
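One way to operationalize the bullets above is a routing rule that sends adverse automated outcomes, appeals, and explicit requests to a person. The statuses and field names below are invented for illustration:

```python
# Sketch of a human-review escalation path. Field names and statuses
# are placeholders, not drawn from any specific law.
def route_decision(decision: dict, user_requested_human: bool) -> str:
    """Route a decision either to automatic finalization or human review."""
    if user_requested_human:
        return "human_review"
    if decision.get("automated") and decision.get("outcome") == "deny":
        # Adverse automated outcomes get a second look by default.
        return "human_review"
    return "auto_finalize"

print(route_decision({"automated": True, "outcome": "deny"}, False))  # human_review
```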

✅ 4. Update Privacy Policies and UX Copy

Make sure your privacy policy:

  • Discloses AI-powered features
  • Lists categories of personal data used to power those features
  • Explains users’ rights (opt-outs, appeals, etc.)

Your website and app UI should also match what your policy says—consistency matters in court.
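That consistency can even be checked mechanically: compare the AI features your product actually exposes against those your policy discloses. A toy version, with placeholder feature names:

```python
# Toy consistency check: every AI feature live in the product should
# also appear in the privacy policy's disclosures. Names are placeholders.
policy_disclosed = {"support-chatbot", "resume-screener"}
features_in_product = {"support-chatbot", "resume-screener", "dynamic-pricing"}

undisclosed = features_in_product - policy_disclosed
if undisclosed:
    print(f"Undisclosed AI features: {sorted(undisclosed)}")
```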


Consequences of Noncompliance

So what happens if you don’t disclose?

  • Lawsuits: Under state privacy laws like California’s CCPA or Illinois’ BIPA, nondisclosure can lead to class-action lawsuits and statutory damages.
  • Fines: The FTC can fine companies for “deceptive practices,” including failure to disclose automated decision-making.
  • Brand damage: Hidden AI use (especially in hiring or pricing) can quickly spiral into PR crises, especially if bias or unfairness is uncovered.


Final Thought: Transparency Is the New Standard

If 2023 was about experimenting with AI and 2024 was about integrating it, 2025 is about being honest with your users. Customers, candidates, and regulators are asking tougher questions—not just about what your AI does, but whether you told people it was doing it.

AI isn’t going away. But if your business wants to keep using it without facing legal headaches, the message is simple:

Use it. Disclose it. And give people the right to question it.
