AI in Hiring: What U.S. Employers Must Know About NYC Local Law 144


As artificial intelligence reshapes recruitment across the United States, New York City has emerged as a regulatory pioneer with Local Law 144. This groundbreaking legislation mandates bias audits for automated employment decision tools (AEDTs), setting a precedent that could transform how American employers approach AI-driven hiring nationwide.

Understanding NYC Local Law 144: Core Requirements for U.S. Employers

NYC Local Law 144, enacted in 2021 and enforced since July 5, 2023, represents the first comprehensive regulation addressing algorithmic bias in employment decisions. The law applies to any employer or employment agency using automated tools to evaluate candidates or employees for positions within New York City—regardless of where the company is headquartered.


What Qualifies as an Automated Employment Decision Tool?

An AEDT is any AI-powered system that provides simplified outputs—such as scores, rankings, or recommendations—used to substantially assist or replace human decision-making in hiring or promotions. Common examples include:

  • Resume screening software that automatically filters candidates
  • Video interview platforms with AI-powered assessment capabilities
  • Chatbots conducting initial candidate evaluations
  • Predictive analytics tools scoring applicant fit
  • Machine learning algorithms ranking candidates for roles

The Three Pillars of Compliance

1. Independent Bias Audits

Employers must engage independent third-party auditors to conduct annual bias assessments. These audits evaluate disparate impact across demographic categories, focusing on race, ethnicity, and sex—including intersectional combinations like "Hispanic women" or "Asian men."

For binary pass/fail systems, auditors calculate the selection rate: the number of candidates advanced divided by total candidates within each demographic group. For continuous scoring systems, auditors measure the scoring rate—the proportion of candidates scoring above the median within each category.

The critical metric is the impact ratio: each group's rate compared to the highest-performing group. Following the traditional four-fifths rule, ratios below 80% may signal potential bias requiring remediation.
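The arithmetic behind these metrics is straightforward. A minimal sketch in Python, using hypothetical candidate counts (not from any real audit) for a binary pass/fail screening tool:

```python
# Sketch of the selection-rate and impact-ratio math described above.
# Group names and counts are hypothetical, for illustration only.

def selection_rate(advanced: int, total: int) -> float:
    """Share of candidates in a group who advanced past the AEDT."""
    return advanced / total

# Hypothetical pass/fail screening results per demographic group.
groups = {
    "Group A": selection_rate(80, 100),   # 0.80
    "Group B": selection_rate(60, 100),   # 0.60
    "Group C": selection_rate(45, 100),   # 0.45
}

# Impact ratio: each group's rate divided by the highest group's rate.
best = max(groups.values())
impact_ratios = {g: rate / best for g, rate in groups.items()}

for g, ratio in impact_ratios.items():
    flag = "review" if ratio < 0.80 else "ok"   # four-fifths rule
    print(f"{g}: selection rate {groups[g]:.2f}, impact ratio {ratio:.2f} ({flag})")
```

Here Group B's impact ratio is 0.60 / 0.80 = 0.75, below the four-fifths threshold, so it would be flagged for review. For continuous scoring systems, the same ratio calculation applies, with the scoring rate (share of a group scoring above the overall median) substituted for the selection rate.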


2. Transparent Candidate Notification

Organizations must provide candidates with at least 10 business days' notice before deploying AEDTs in their evaluation. This notification must clearly specify:

  • That an automated tool will be used
  • Which job qualifications and characteristics the AEDT assesses
  • The types and sources of data collected
  • The company's data retention policy
  • Options for alternative assessment methods, where reasonable

3. Public Disclosure of Audit Results

Employers must publish audit summaries on the employment section of their websites, including:

  • The date of the most recent bias audit
  • Summary of results showing impact ratios by demographic category
  • The distribution date of the AEDT version audited

Geographic Scope: Why All U.S. Employers Should Pay Attention

While NYC Local Law 144 technically applies only to positions located in New York City, its practical impact extends nationwide. Under the DCWP implementing rules, even a fully remote role can fall within scope if it is associated with an NYC office, making the law a concern for employers headquartered anywhere in the United States.


Moreover, NYC's law signals a broader regulatory trend. Illinois already requires notice and consent for AI analysis of video interviews, California is weighing its own AI accountability measures, and federal proposals are circulating. Forward-thinking employers are treating LL 144 compliance as a template for responsible AI governance that will position them favorably as regulations evolve.

Penalties and Enforcement

Non-compliance carries tangible consequences. The New York City Department of Consumer and Worker Protection (DCWP) imposes:

  • $500 for a first violation, and for each additional violation occurring on the same day as the first
  • $500 to $1,500 for each subsequent violation
  • Daily penalties—each day of non-compliance constitutes a separate violation
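To make the daily-accrual point concrete, here is a rough estimate using the statutory figures above. This is illustrative arithmetic only; actual penalty amounts are determined by DCWP case by case:

```python
# Rough penalty-range estimate for N days of continued non-compliance
# with a single AEDT requirement. Assumes one violation per day: $500 on
# the first day, then $500-$1,500 for each subsequent day. Illustrative
# only; DCWP determines actual amounts.

def penalty_range(days: int) -> tuple[int, int]:
    if days < 1:
        return (0, 0)
    low = 500 + (days - 1) * 500
    high = 500 + (days - 1) * 1500
    return (low, high)

# Thirty days of non-compliance:
low, high = penalty_range(30)
print(f"${low:,} to ${high:,}")  # $15,000 to $44,000
```

Because each day counts as a separate violation, even a single overlooked tool can accumulate five-figure exposure within a month.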

Beyond financial penalties, non-compliance poses significant reputational risks. In an era where candidates increasingly scrutinize employer ethics, failing to address algorithmic bias can damage your employer brand and competitive positioning in talent markets.

Practical Compliance Roadmap for U.S. Employers

Step 1: Inventory Your AI Tools

Conduct a comprehensive audit of your HR technology stack. Identify all systems that could qualify as AEDTs, including applicant tracking systems (ATS), assessment platforms, and interview technologies. Document where and how these tools influence employment decisions.

Step 2: Assess Data Availability

Determine whether you have the demographic data necessary for bias audits. Many organizations discover gaps in their data collection practices—particularly regarding race and ethnicity—that must be addressed before audits can proceed.

Step 3: Select an Independent Auditor

Choose qualified third-party auditors with demonstrated expertise in algorithmic fairness assessment. The auditor must be genuinely independent—vendors of the AEDT itself cannot conduct the bias audit.


Step 4: Develop Candidate Communication Protocols

Create clear, accessible notice templates that explain AEDT usage to candidates. Establish processes for handling alternative assessment requests and document how you respond to candidate concerns.

Step 5: Establish Ongoing Monitoring

Compliance isn't a one-time event. Implement continuous monitoring systems to track AEDT performance between annual audits, ensuring you catch and address emerging bias patterns before they become compliance issues.

Beyond Compliance: Building Ethical AI Hiring Practices

Smart employers view Local Law 144 not as a burden but as an opportunity. By proactively addressing algorithmic bias, organizations can:

  • Enhance diversity by identifying and eliminating hidden barriers in selection processes
  • Strengthen employer brand by demonstrating commitment to fairness
  • Reduce legal risk beyond just LL 144 compliance, including Title VII and ADA considerations
  • Improve hiring quality by ensuring AI tools truly identify the best candidates rather than perpetuating historical biases

Frequently Asked Questions

Does Local Law 144 apply to remote positions?

It can. Under the DCWP implementing rules, the law covers positions located in New York City at least part of the time, as well as fully remote positions associated with an NYC office, so a remote hire can fall within scope even if the employee never works on site.

Can we use an AEDT that shows bias in the audit?

The law doesn't explicitly prohibit using tools that show disparate impact. However, using such tools may violate federal, state, and local anti-discrimination laws. Most employers choose to remediate bias before deployment or discontinue problematic tools.

How often must bias audits be conducted?

Audits must be completed at least annually. The audit used for compliance must be no more than one year old at the time the AEDT is used.

What if a demographic category is too small for analysis?

Auditors may exclude categories representing less than 2% of the data from bias calculations. However, you must still disclose the number of individuals in these excluded categories.
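A short sketch of how this exclusion threshold works in practice, assuming hypothetical counts (the disclosure obligation for excluded groups still applies):

```python
# Sketch of the small-category exclusion described above: categories under
# 2% of the audited individuals may be left out of the impact-ratio math,
# but their counts must still be disclosed. Counts are hypothetical.

counts = {"Group A": 500, "Group B": 300, "Group C": 15}
total = sum(counts.values())  # 815 individuals audited

included = {g: n for g, n in counts.items() if n / total >= 0.02}
excluded = {g: n for g, n in counts.items() if n / total < 0.02}

print("Included in impact ratios:", included)
print("Excluded but disclosed:   ", excluded)  # Group C is ~1.8% of data
```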

The Future of AI Hiring Regulation in the United States

NYC Local Law 144 is just the beginning. The European Union's AI Act includes similar provisions for high-risk AI systems in employment. Several U.S. states are considering or have passed their own AI transparency requirements. Illinois mandates notification for AI video interview analysis. California is exploring comprehensive AI accountability legislation.

Federal agencies are also engaged. The EEOC has issued guidance on AI and discrimination, signaling increased scrutiny of algorithmic hiring tools. The White House has released a Blueprint for an AI Bill of Rights emphasizing algorithmic fairness.

Forward-thinking employers are getting ahead of this regulatory wave by establishing robust AI governance frameworks now—using NYC's law as a practical starting point.

Take Action: Share This Guide

Found this guide helpful? Share it with your HR and legal teams to ensure everyone understands the implications of NYC Local Law 144, and help spread awareness of responsible AI hiring practices across your professional network.
