Introduction
The Fair Housing Act (“FHA”), enacted more than fifty years ago, prohibits discriminatory practices in housing. The FHA makes it illegal to “make unavailable or deny . . . a dwelling to any person” or “discriminate against any person in the terms, conditions, or privileges of sale or rental of a dwelling, or in the provision of services or facilities in connection therewith” because of that person’s race, color, religion, sex, familial status, national origin, or disability.[i] In many jurisdictions, it is also illegal to discriminate on the basis of source of income (e.g., Section 8 vouchers).[ii]
But recent technological advancements have raised new questions about the statute’s reach—both in terms of which entities may be liable for violating the FHA and which new technologies may run afoul of the statute’s prohibitions. For example, companies that use, facilitate, or support digital advertising need to be particularly mindful of the FHA’s reach. And lenders that use algorithms, or that contract with third parties using proprietary technology to evaluate prospective applicants, need to be alert to potentially discriminatory methods or effects.
Artificial Intelligence – Efficiency and Exposure
It is no secret that artificial intelligence (“AI”) provides both companies and consumers with powerful and effective tools to manage and sift through the myriad data constantly at our fingertips. A simple Google search of your favorite TV show will likely reveal thousands or even millions of articles. Search engines around the world are now programmed to learn the products you like and then instantaneously deliver targeted electronic advertisements on the basis of clickstream data, search data, purchase data, and profile data. AI in your car can scan your surroundings and detect dangers long before the human brain can comprehend a threat; AI in your music can recommend new favorite songs; AI in your thermostat can anticipate and adjust the temperature in your home. AI promises to revolutionize the legal industry. Without doubt, AI and the technological revolution we are currently experiencing are changing the way we experience life. But like all revolutions, unintended consequences abound. Among them is the potential for unintentional illegal discrimination.
All companies that trade in AI—including the internet power brokers—are attempting to navigate a world in which their customers, the purchasers of advertising, demand and pay for powerful tools that target their intended customers. And on its face, there is nothing wrong with providing the consumer with products they likely desire. But what happens when these advertisements unintentionally discriminate, and thus violate existing consumer protection laws such as the FHA?
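To make the mechanism concrete, consider a minimal sketch (entirely synthetic and hypothetical; every name, ZIP code, and probability below is invented for illustration) of how a facially neutral targeting rule that never looks at a protected attribute can still reach one group far more often than another, simply by keying on a correlated proxy such as geography:

```python
# Hypothetical illustration with synthetic data: an ad-targeting rule that
# uses only ZIP code (a facially neutral feature) can produce a disparate
# impact across groups when group membership correlates with ZIP code,
# as it often does where housing is segregated.

import random

random.seed(0)

def make_population(n=10_000):
    """Generate synthetic people; group is correlated with ZIP code."""
    people = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        # Group A lives mostly in ZIP 11111; group B mostly in ZIP 22222.
        if group == "A":
            zip_code = "11111" if random.random() < 0.8 else "22222"
        else:
            zip_code = "11111" if random.random() < 0.2 else "22222"
        people.append({"group": group, "zip": zip_code})
    return people

def show_housing_ad(person):
    """Facially neutral rule: advertise only to residents of ZIP 11111."""
    return person["zip"] == "11111"

def selection_rates(people):
    """Fraction of each group that is shown the housing ad."""
    shown = {"A": 0, "B": 0}
    total = {"A": 0, "B": 0}
    for p in people:
        total[p["group"]] += 1
        if show_housing_ad(p):
            shown[p["group"]] += 1
    return {g: shown[g] / total[g] for g in shown}

people = make_population()
rates = selection_rates(people)
# A rough disparity measure: ratio of the lower selection rate to the
# higher one (the "four-fifths" heuristic borrowed from employment law
# is sometimes used as an informal benchmark).
ratio = min(rates.values()) / max(rates.values())
print(rates, round(ratio, 2))
```

Run as written, the rule shows the ad to roughly 80% of group A but only roughly 20% of group B, even though the code never consults the group label at all. This is the pattern at the heart of the FHA questions discussed here: intent is absent, yet the effect is sharply skewed.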
Federal Challenges to Algorithmic Abilities
The Department of Housing and Urban Development...