Jinbo Chen is the founder of Valis Residential, a proptech company based at the UNC Innovation Center in Chapel Hill, North Carolina, focused on AI infrastructure for multifamily leasing and demand capture. Opinions are the author’s own.
Picture a standard Saturday at a class A property. The leasing office is closed. An artificial intelligence-powered chatbot handles incoming inquiries.
Prospect A asks about a balcony view. The AI responds instantly with photos, pricing and an application link. Prospect A secures the unit.
Ten minutes later, Prospect B asks about wheelchair accessibility for the same unit. The AI triggers its safety fallback: "I'm unable to answer specific accessibility questions. A leasing agent will be in touch Monday morning."
By Monday, the unit is gone.

In the eyes of fair housing law, this is not a technical glitch. It's a documented, two-tiered service system — frictionless access for a standard inquiry, a 48-hour barrier for a protected class. Under the Fair Housing Act, that's Disparate Impact. Intent doesn't matter.
The cases are already there
Operators who haven't been tracking the docket are behind.
In November 2024, a federal judge approved a $2.275 million settlement against SafeRent Solutions for producing discriminatory outcomes against Black, Hispanic and Housing Choice Voucher applicants. SafeRent didn't intend discrimination. Neither did the landlords who deployed its tool. The algorithm simply weighted certain data in ways that produced disparate outcomes at scale. Under the FHA, that was sufficient — and both the vendor and the operators faced liability.
In 2023, a private fair housing nonprofit sued Harbor Group Management after its AI leasing chatbot was found to systematically screen out Housing Choice Voucher holders. In August 2024, the U.S. Department of Justice sued RealPage for antitrust violations related to its AI pricing algorithm. By January 2025, Greystar and five other major operators had been added as co-defendants.
Three cases. Three different AI applications. Three different legal theories. One thing in common: operators who deployed AI without understanding what it was producing, and found out in federal court.
"FHA-compliant" is not an architecture
Every AI vendor will tell you their system is compliant. What they usually mean is that they've injected a compliance instruction into the system prompt — something like "do not discriminate."
That is not compliance. It's a wishful instruction to a probabilistic engine.
Large language models generate responses based on statistical patterns, not legal reasoning. A system told not to discriminate cannot reliably distinguish between a pet rent question and an ESA accommodation request. It cannot evaluate whether routing a wheelchair user to a Monday callback — while processing a balcony inquiry in real time — constitutes a documented Fair Housing violation. It predicts the next word. At scale, across thousands of interactions, those predictions create an evidentiary record that operators cannot see.
HUD's May 2024 guidance made this explicit: Algorithmic tools are subject to the FHA, disparate outcomes are actionable regardless of intent, and "we didn't know" is not a defense. Disability discrimination represented 54.59% of all fair housing complaints in 2024. AI leasing tools are now the primary point of contact for exactly these interactions — accessibility questions, ESA requests, accommodation inquiries — every night, every weekend, outside business hours.
Why the regulatory pullback makes this more dangerous, not less
In late 2025, HUD moved to de-prioritize Disparate Impact enforcement. Many operators read this as reduced exposure.
The opposite is true.
Private nonprofit fair housing organizations processed 74% of all housing discrimination complaints in 2024, versus HUD's 4.85%. These organizations operate entirely outside the administration's enforcement priorities — and they now have AI monitoring tools that can test a property's leasing chatbot remotely, anonymously and at scale. A fair housing org can run dozens of protected-class test inquiries against a live AI system, document every response and build an evidentiary case in an afternoon, from a laptop, without ever visiting the property.
ADA digital accessibility lawsuits surged 20% in 2025, approaching 5,000 filings. Forty percent are now filed by self-represented plaintiffs using AI tools to identify violations and draft complaints. Twelve state attorneys general are actively pursuing AI discrimination claims under state law, where protections often exceed federal standards.
The federal enforcement ceiling is lowering. The private litigation floor is rising. That gap is where cases are being built — at your properties, right now.
What operators can do this quarter
Compliance is not a prompt, it's architecture. Here are three practical steps:
Audit outputs, not system prompts. Run a systematic sample of protected-class-adjacent inquiries against your live AI system and document what it actually produces. The gap between the instruction and the output is your legal exposure.
Enforce equal transactional access as a hard constraint. Every sensitive inquiry — accessibility, ESA, Housing Choice Vouchers, accommodation modifications — must result in the same application link, the same tour scheduling access and the same documentation any other prospect receives. Not a referral to a human. Not a promise of follow-up. The same outcome quality, in the same interaction.
Require contractual accountability from vendors. The SafeRent settlement established that both vendor and operator face liability. "The vendor handles it" has no standing in a Fair Housing complaint. Operators must retain audit access to system outputs, specify how sensitive categories are handled, and maintain termination rights when the architecture doesn't meet the standard.
One question worth asking today
Your AI leasing assistant keeps a complete transcript of every interaction it has ever had.
Fair housing organizations are testing systems like yours right now. The question is not whether your vendor told you it's compliant. The question is: what do your transcripts actually say?
Most operators don't know. That's what the next wave of complaints is counting on.
Click here to sign up to receive multifamily and apartment news like this article in your inbox every weekday.