Hochul Scales Back New York AI Bill

Emily Lauderdale

Two New York Democrats say Gov. Kathy Hochul stripped key protections from a proposed AI safety bill after industry pushback, escalating a fight over how the state should police artificial intelligence. Assemblymember Alex Bores and State Sen. Andrew Gounardes, who backed the measure in Albany, argue the governor’s office removed safeguards they considered central to the bill’s purpose.


The dispute highlights a growing national debate: how to encourage new technology while preventing harm to workers, consumers, and public agencies. It also exposes tension inside the Democratic Party over the balance between economic growth and guardrails for emerging tools.

Background: A State Searching for Guardrails

States across the country are weighing new AI rules as federal action remains uncertain. New York City already regulates hiring tools under Local Law 144, which requires bias audits and disclosures for automated employment decision systems. That framework has set expectations for transparency in the workplace. State lawmakers have been exploring whether to extend similar checks into other areas where AI is used, such as government services, health, and finance.

Hochul has promoted tech investment during her tenure, including efforts to attract AI research and data center projects. Supporters of strong rules say guardrails can build public trust and prevent discrimination and safety failures. Industry groups warn that sweeping mandates could saddle startups with costs and paperwork, pushing jobs to other states.


What Changed—and Why It Matters

Bores and Gounardes say the governor’s team removed the bill’s core protections. They contend these changes would leave the public without clear information about how AI systems are built, tested, and monitored for bias or safety risks. Bores also says he was personally targeted by industry leaders, suggesting a hard-edged pressure campaign around the bill.

While the lawmakers did not list every provision affected, debates in New York and elsewhere often center on three questions:

  • Should companies disclose when AI is used in decisions affecting people’s jobs, healthcare, credit, or housing?
  • Should independent audits be required to test for bias, safety flaws, or security gaps?
  • Who enforces compliance, and what penalties apply when AI systems cause harm?

Supporters of stronger rules say clear standards would protect workers and consumers and reduce legal uncertainty. Critics say early, strict mandates could lock in approaches before the technology stabilizes.

Pressure From Industry and a Clash of Agendas

The lawmakers’ claim of “pressure from industry leaders” fits a familiar pattern in state tech policy. Technology firms and trade associations often seek flexible rules, arguing that rapid product cycles make static regulation risky. Civil rights and labor groups counter that the cost of inaction will fall on people denied jobs, loans, or services by opaque systems.

Gounardes, a Brooklyn Democrat, has backed digital privacy and safety measures in past sessions. Bores, who represents parts of Manhattan, has focused on tech governance and public-sector modernization. Their alliance on AI safety suggests a wider bloc in the legislature that wants clearer standards for high-impact tools.


Albany Dynamics and Next Steps

Disagreements between the executive branch and lawmakers are common late in the session, when policy and the state’s tech strategy collide. Negotiations can produce compromise language, sunset clauses, or pilot programs to test new rules before expanding them. If the governor’s office prefers a lighter touch, lawmakers may push for incremental steps: disclosures, risk assessments, or a task force with deadlines and public reporting.

The stakes are significant for organizations already using AI to screen resumes, detect fraud, route public benefits, or analyze medical data. Without clear statewide rules, companies face a patchwork of local requirements and potential lawsuits. Advocates worry that weak oversight will allow biased or unsafe systems to spread.

What It Means for New Yorkers

For workers, the outcome could determine whether they learn when automated tools influence hiring or promotions, and whether those tools are tested for fairness. For consumers, it could shape how lenders, insurers, and landlords use algorithms in decisions that affect daily life. Public agencies face pressure to adopt new tools while maintaining accuracy, privacy, and due process.

Business leaders say predictability matters most. Clear, narrow rules can reduce risk and help companies plan investments. Advocates say transparency and meaningful enforcement will protect the public while allowing responsible innovation.

The clash over the bill shows how hard it is to write policy that keeps pace with technology. As negotiations continue, watch for revised text, public hearings, and whether lawmakers can secure audits, disclosures, or enforcement mechanisms with real teeth. The final deal will signal how New York plans to balance AI growth with public safety—an approach other states are likely to study.



Emily is a news contributor and writer for SelfEmployed. She writes on what's going on in the business world and tips for how to get ahead.