The Fair Housing Act (“FHA”), enacted more than fifty years ago, prohibits discriminatory practices in housing. The FHA makes it illegal to “make unavailable or deny . . . a dwelling to any person” or “discriminate against any person in the terms, conditions, or privileges of sale or rental of a dwelling, or in the provision of services or facilities in connection therewith” because of that person’s race, color, religion, sex, familial status, national origin, or disability.[i] In many jurisdictions, it is also illegal to discriminate on the basis of source of income (e.g., Section 8 vouchers).[ii]
But recent technological advancements have raised new questions about the statute’s reach—both in terms of which entities may be liable for violating the FHA and what new technologies may run afoul of the statute’s prohibitions. For example, companies that use, facilitate, or support digital advertising need to be particularly cognizant of the FHA’s purview. And lenders that utilize algorithms or contract with third parties that use proprietary technology to evaluate prospective applicants need to be conscious of potentially discriminatory methods or effects.
Artificial Intelligence – Efficiency and Exposure
It is no secret that artificial intelligence (“AI”) provides both companies and consumers with powerful and effective tools to manage and sift through the vast amounts of data constantly at our fingertips. A simple Google search for your favorite TV show will likely reveal thousands or even millions of articles. Search engines around the world are now programmed to learn the products you like and then instantaneously deliver targeted electronic advertisements on the basis of clickstream data, search data, purchase data, and profile data. AI in your car can scan your surroundings and detect dangers long before the human brain can comprehend a threat; AI in your music can recommend new favorite songs; AI in your thermostat can anticipate and adjust the temperature in your home. AI promises to revolutionize the legal industry. Without doubt, AI and the technological revolution we are currently experiencing are changing the way we experience life. But like all revolutions, unintended consequences abound. Among them is the potential for unintentional illegal discrimination.
All companies that trade in AI—including the internet power brokers—are attempting to navigate a world in which their customers, the purchasers of advertising, demand and pay for powerful tools that target their intended customers. And on its face, there is nothing wrong with providing the consumer with products they likely desire. But what happens when these advertisements unintentionally discriminate, and thus violate existing consumer protection laws such as the FHA?
Federal Challenges to Algorithmic Abilities
The Department of Housing and Urban Development (“HUD”) is reportedly investigating multiple companies for their use of AI and its algorithmic ability to deliver advertising to homeseekers. Specifically, these tools allow advertisers to direct ads to particular individuals while excluding members of certain protected classes, thereby discriminating in violation of the FHA.
In the civil context, in August 2018, plaintiffs sued CoreLogic Rental Property Solutions, a leading consumer reporting agency specializing in tenant screening, for alleged discrimination. In March 2019, plaintiffs defeated CoreLogic’s motion to dismiss.[iii] CoreLogic offers a product called “CrimSAFE” to assist landlords in determining whether to accept a prospective tenant based on his or her criminal history. Plaintiffs alleged that CrimSAFE’s proprietary algorithm did not take into account, among other things, certain factors bearing on whether an applicant posed any actual threat to safety or property. On this basis, plaintiffs alleged multiple violations of the FHA, including that CoreLogic’s practice of making automatic decisions via its algorithm based on the mere existence of a criminal record or charge—without a holistic assessment of each tenant—had an unlawful disparate impact on Latinos and African Americans. Although CoreLogic argued that the FHA applied only to housing providers, that there was an insufficient nexus between its policies and the denial of housing, and that plaintiffs could not state a claim for disparate treatment or disparate impact, the District of Connecticut denied the motion to dismiss on all counts.
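To make the disparate-impact theory at issue concrete, the sketch below shows one common statistical screen for it: comparing group selection rates under an automated rule and applying the “four-fifths” (80%) rule of thumb drawn from employment-discrimination practice. The figures and group labels are invented for illustration; this is not the analysis from the CoreLogic litigation, only a minimal example of how a facially neutral automated screen can be tested for disparate effects.

```python
def selection_rate(approved: int, total: int) -> float:
    """Fraction of applicants in a group who pass the automated screen."""
    return approved / total

def adverse_impact_ratio(protected_rate: float, reference_rate: float) -> float:
    """Ratio of the protected group's selection rate to the reference
    group's. Under the four-fifths rule of thumb, values below 0.8 are
    often treated as preliminary evidence of potential disparate impact."""
    return protected_rate / reference_rate

# Invented counts: applicants approved by a hypothetical automated
# criminal-record screen, broken out by group.
reference_rate = selection_rate(approved=90, total=100)  # 0.90
protected_rate = selection_rate(approved=54, total=100)  # 0.54

ratio = adverse_impact_ratio(protected_rate, reference_rate)
flagged = ratio < 0.8  # below the four-fifths threshold -> warrants review
print(f"adverse impact ratio = {ratio:.2f}; flagged for review: {flagged}")
```

A ratio of 0.60 in this invented example would fall well below the 0.8 threshold, which is the kind of statistical disparity plaintiffs typically plead at the outset of a disparate-impact claim; actual litigation, of course, involves far more rigorous statistical and legal analysis.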
Algorithms Under Attack?
Legislators are taking notice of the potential inadvertent discriminatory effects of algorithm-driven technology. Namely, on April 10, 2019, three U.S. Senators, including presidential candidate Cory Booker, introduced the Algorithmic Accountability Act.[iv] This legislation would require companies to “study and fix flawed computer algorithms that result in inaccurate, unfair, biased, or discriminatory decisions impacting Americans.”[v] The legislation would grant the Federal Trade Commission (“FTC”) broad enforcement authority, including the ability to treat algorithm-based discrimination as an unfair or deceptive act or practice (“UDAP”)—a category of violation that the FTC and the states have successfully used to extract millions of dollars in fines from offending companies.[vi]
Although the passage of the Algorithmic Accountability Act is far from a sure bet, we can be certain that new laws and ever-changing precedent will continue to evolve with the digital AI revolution. This means that all companies need to be vigilant in their understanding of how technology affects their existing legal obligations, especially under anti-discrimination laws such as the FHA. As HUD’s General Counsel Paul Compton stated, “Even as we confront new technologies, the fair housing laws enacted over half a century ago remain clear—discrimination in housing-related advertising is against the law.”[vii]
When in doubt, companies contemplating new technology—particularly technology that relies on algorithms—should seek the advice of counsel to ensure its effects do not inadvertently violate the law. Although AI promises exciting developments and cost savings, with great efficiency comes responsibility.
[i] 42 U.S.C. § 3604(a), (f)(1); 42 U.S.C. § 3604(b), (f)(2).
[ii] Poverty & Race Research Action Council, Expanding Choice: Practical Strategies for Building a Successful Housing Mobility Program (Jan. 30, 2019), https://prrac.org/pdf/AppendixB.pdf.
[iii] Connecticut Fair Hous. Ctr. v. Corelogic Rental Prop. Sols., LLC, 369 F. Supp. 3d 362 (D. Conn. 2019).
[iv] Algorithmic Accountability Act of 2019, S. ____, 116th Cong. (2019).
[v] Press Release, Senator Cory Booker, Booker, Wyden, Clarke Introduce Bill Requiring Cos. to Target Bias in Corp. Algorithms (Apr. 10, 2019), https://www.booker.senate.gov/?p=press_release&id=903.
[vi] 15 U.S.C. § 45(a)(1). UDAP laws are foundational consumer protection laws that grew out of the 1914 Federal Trade Commission Act (the “FTC Act”). In prohibiting “unfair or deceptive acts or practices,” Congress intentionally framed the FTC Act’s language in broad terms so that courts could develop and refine definitions of “unfair or deceptive practices.” Both the federal and state governments have a broad scope in the type of conduct under their purview—telemarketing, automobile sales, mortgage lending, credit repair organizations, and other marketplace transactions are regularly subject to UDAP actions.
[vii] See Press Release, Dep’t of Hous. and Urban Dev., HUD Charges Facebook with Hous. Discrimination over Co.’s Targeted Advert. Practices (Mar. 28, 2019), https://www.hud.gov/press/press_releases_media_advisories/HUD_No_19_035.