Most property managers don't know their AI screening tool is breaking the law. They find out when the lawsuit lands.
That's the uncomfortable reality behind a conversation Findigs CEO Steve Carroll had recently with ApartmentBuildings.com about AI bias in tenant screening. The problem isn't theoretical. Courts are settling these cases, regulators are issuing guidance, and the liability is landing on operators, not just the vendors who sold them the tool.
The numbers are concrete. SafeRent paid $2.275 million to resolve a class action alleging its algorithm discriminated against minority applicants and housing-voucher holders. TransUnion settled with the FTC and CFPB for $15 million over consumer reporting violations. In May 2024, HUD applied the Fair Housing Act's disparate impact standard directly to AI-based screening tools, making operators co-liable for the systems they deploy. Colorado went further in 2024, requiring annual impact assessments and discrimination-risk disclosures from vendors.
The mechanism behind this liability is subtle. AI models don't flag race or national origin. They flag zip codes. Income types. Employment structures. Those proxies correlate tightly with protected classes, and the model doesn't care. It just optimizes for whatever it was trained on. If the training data reflected historical discrimination in lending or housing, the model amplifies it.
Steve named three warning signs operators should watch for now: approval rates that fluctuate in ways that don't track industry norms, especially across income types or geographies; applicant disputes that rise month over month; and disputes that cluster within particular applicant groups, such as self-employed workers or gig-income earners, whose income profiles can stand in for protected classes. Any one of those patterns is worth a hard look at the tool generating the decisions.
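Operators don't need access to a vendor's model internals to start watching for these patterns. A few lines over their own application data can surface them. The sketch below is illustrative only: the column names and the 0.8 threshold (the common "four-fifths rule" heuristic) are assumptions for the example, not the legal standard under the Fair Housing Act.

```python
# Minimal sketch: compare approval rates across applicant groups and flag
# gaps worth a closer look. "group" and "approved" are hypothetical column
# names; 0.8 is the common four-fifths heuristic, not a legal threshold.
import pandas as pd

def disparity_report(df: pd.DataFrame,
                     group_col: str = "group",
                     outcome_col: str = "approved") -> pd.DataFrame:
    """Approval rate per group and its ratio to the highest-rate group."""
    rates = df.groupby(group_col)[outcome_col].mean().rename("approval_rate")
    report = rates.to_frame()
    report["ratio_to_top"] = report["approval_rate"] / report["approval_rate"].max()
    report["flag"] = report["ratio_to_top"] < 0.8  # review anything below 4/5ths
    return report.sort_values("ratio_to_top")

# Example with made-up data: income type as the grouping variable.
apps = pd.DataFrame({
    "group": ["W2", "W2", "self_employed", "self_employed", "gig", "gig"],
    "approved": [1, 1, 1, 0, 1, 0],
})
print(disparity_report(apps))
```

Run the same report monthly, sliced by income type and geography, and the first two warning signs (drifting approval rates, rising disputes) become trends you can see rather than surprises you discover in a demand letter.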
The industry's response to these cases has mostly been to issue statements and point fingers. That's not enough anymore. Steve's observation cuts to the core of it: "The operators have had enough of vendors who are proud of their successes but blame others when things go wrong."
Findigs took a different position. It introduced a contractual fraud guarantee: if Findigs approves a fraudulent application, the company shares the financial consequence. That's accountability written into the contract, not the marketing deck. No other vendor in the category has done this.
Fair housing liability from AI screening isn't coming. It's here. Operators using black-box tools with no audit trail, no disparate-impact monitoring, and no contractual accountability are holding risk they likely haven't priced. The legal standard is moving fast, and the vendors selling "AI-powered screening" without the ability to explain their decisions won't survive the next wave of enforcement.
The right question for any operator evaluating a screening platform is not whether the AI is smart. It's whether the vendor stands behind every decision it makes.
Source: Steve Carroll's conversation with ApartmentBuildings.com