Proposed Algorithmic Accountability Act Targets Bias in Artificial Intelligence
In Short
The Situation: There have been numerous reports that computer algorithms used in artificial intelligence ("AI") systems have created or contributed to biased and discriminatory outcomes. To reduce such bias and discrimination, Senators Cory Booker (D-NJ) and Ron Wyden (D-OR) recently proposed the Algorithmic Accountability Act of 2019 ("Act"), with Rep. Yvette Clarke (D-NY) sponsoring an equivalent bill in the House. The Act is the first federal legislative effort to regulate AI systems across industries in the United States, and it reflects a growing and legitimate concern regarding the lawful and ethical implementation of AI.
The Result: The Act authorizes and directs the Federal Trade Commission ("FTC") to issue and enforce regulations that will require certain persons, partnerships, and corporations using, storing, or sharing consumers' personal information to conduct impact assessments and "reasonably address in a timely manner" any identified biases or security issues.
Looking Ahead: The Act would affect AI systems used not only by technology companies, but also by banks, insurance companies, retailers, and many other consumer businesses. Entities that develop, acquire, and/or utilize AI must be cognizant of the potential for biased decision-making and outcomes resulting from its use. Such entities should make efforts now to mitigate those potential biases and to take corrective action when bias is found.
Background: The Potential for Bias in AI
Employed across industries, AI applications unlock smartphones using facial recognition, make driving decisions in autonomous vehicles, recommend entertainment options based on user preferences, assist the process of pharmaceutical development, judge the creditworthiness of potential homebuyers, and screen applicants for job interviews. AI automates, accelerates, and improves data processing by finding patterns in the data, adapting to new data, and learning from experience. In theory, AI is objective; in reality, AI systems are informed by human intelligence, which is far from perfect. Humans typically select the data used to train machine learning algorithms and create parameters for the machines to "learn" from new data over time. Even without discriminatory intent, the training data may reflect unconscious or historic bias. For example, if the training data shows that people of a certain gender or race have fulfilled certain criteria in the past, the algorithm may "learn" to select those individuals to the exclusion of others.
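To make that mechanism concrete, the following is a minimal sketch, in Python with NumPy and scikit-learn and entirely hypothetical data: a model trained on historically skewed hiring records reproduces the skew even though group membership is never an explicit input, because a correlated proxy feature stands in for it.

```python
# Toy illustration only (hypothetical data): a model trained on historically
# biased hiring records learns to reproduce that bias via a proxy feature,
# even though group membership is never used as an input.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)               # 0 = historically favored group, 1 = not
skill = rng.normal(0, 1, n)                 # true qualification, identical across groups
# Historical labels: past decisions favored group 0 regardless of skill.
hired = (skill + 1.5 * (group == 0) + rng.normal(0, 0.5, n)) > 1.0

# A feature correlated with group (e.g., a neighborhood code) acts as a proxy.
proxy = group + rng.normal(0, 0.3, n)
X = np.column_stack([skill, proxy])

model = LogisticRegression().fit(X, hired)
pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted selection rate {pred[group == g].mean():.2f}")
# Despite identical skill distributions, the model selects group 0 far more
# often: the historical bias has been encoded through the correlated proxy.
```

The point of the sketch is simply that removing a protected attribute from the inputs does not, by itself, remove the bias that the training labels already carry.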
The Intent of the Act
The Act was introduced on the heels of several widely publicized reports of AI leading to biased or discriminatory outcomes. Concerned that algorithms may exacerbate discrimination, Senator Wyden explained that the Act "requires companies to study the algorithms they use, identify bias in these systems and fix any discrimination or bias they find." Senator Booker noted that the Act represents a "key step toward ensuring more accountability" from companies using AI software. As Representative Clarke put it: "Algorithms shouldn't have an exemption from our anti-discrimination laws."
Key Provisions of the Act
As a preliminary matter, the Act would apply to any "covered entity," meaning any person, partnership, or corporation that is subject to the FTC's jurisdiction and that: (i) makes more than $50 million per year; (ii) possesses or controls personal information on at least one million people or devices; or (iii) primarily acts as a data broker that buys and sells consumer data.
Within two years of the Act's enactment, the FTC is charged with requiring such "covered entities" to conduct impact assessments for any existing or new: (i) "high-risk" automated decision systems; and (ii) "high-risk" information systems.
An "automated decision system" is any computational process that employs AI or machine learning to make or facilitate making a decision that affects consumers (e.g., product recommendations based on a user's search history or past buying habits). An "information system" is a "process, automated or not, that involves personal information" but does not qualify as an "automated decision system."
"High risk" systems subject to the impact assessments include those that:
- Pose a significant risk to the privacy or security of consumers' personal information, or a significant risk of resulting in or contributing to inaccurate, unfair, biased, or discriminatory decisions affecting consumers;
- Make or facilitate human decision-making based on systematic and extensive evaluations of consumers (including attempts to analyze or predict sensitive aspects of their lives) and alter the legal rights of consumers or otherwise affect consumers;
- Involve the personal information of consumers regarding race, religion, health, gender, gender identity, criminal convictions or arrests, and other factors;
- Monitor public places; and
- Meet any other criteria established by the FTC.
An impact assessment of a "high-risk" automated decision system must evaluate the system and its training data "for impacts on accuracy, fairness, bias, discrimination, privacy, and security" and must include, among other things, a description of how long the system stores personal information and results, what information about the system is available to consumers, and the extent to which consumers have access to the results of the system and may correct or object to those results. An impact assessment for a "high-risk" information system (called a "data protection impact assessment" under the Act) evaluates the extent to which the system protects the privacy and security of the personal information it processes.
The Act provides that "if reasonably possible," impact assessments are to be performed in consultation with external third parties, including independent auditors and independent technology experts. Moreover, the assessments are not "one and done"; rather, they will have to be conducted as often as the FTC determines is necessary (and likely as often as necessary to assess the ever-changing state of the AI system, its decision-making, and outcomes). Finally, the assessments cannot be avoided with consumer waivers. However, consumers would not necessarily be privy to any impact assessment, as the decision to make an assessment public would be left to the covered entity in its sole discretion.
What Now?
Employers utilizing AI already must be cognizant of the potential for disparate treatment or, more likely, disparate impact claims under equal employment opportunity laws such as Title VII, the Age Discrimination in Employment Act, or the Americans with Disabilities Act as a result of algorithmic bias. Whether or not the Act (or a modified version of it) ultimately becomes law, companies utilizing AI should be prepared for additional government oversight of AI and increased consumer scrutiny regarding the lawful and ethical use of AI. The Act does not offer a perfect solution—significant questions remain as to whether it goes far enough in imposing neutral or third-party "checks" on AI systems, how its requirements would be policed and enforced, whether it requires sufficient transparency to consumers and the public, and whether even the most well-meaning companies can keep pace with their constantly evolving AI technologies. Nevertheless, the Act marks a meaningful step in advancing the dialogue on this important issue.
Entities reliant on AI systems can take steps now to proactively address potential bias issues. While the appropriate steps will depend on the entity, the nature of the AI, and other factors, companies with direct input and control over the development of AI can:
- Evaluate the development processes used for the systems and the system outputs;
- Develop training programs for those engaged in AI development and data processing to raise awareness of inherent biases in the data;
- Implement an audit system for regularly checking the input data and results generated by the AI;
- Document key decision-making in AI software development;
- Develop AI tools that improve the traceability of AI decisions to provide real-time insights into how decisions are made (see the illustrative sketch following this list); and
- Increase transparency to consumers regarding data and AI use.
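As a purely illustrative sketch of the traceability point above (hypothetical model, feature names, and threshold; assuming a simple linear scoring model so that per-feature contributions are just weight times value), the snippet below logs each automated decision with enough context to reconstruct later how the outcome was reached:

```python
# Hypothetical sketch: log every automated decision with enough context to
# audit it later. Per-feature contributions here assume a linear scoring model
# (contribution = weight * feature value); other model types would need other
# explanation methods.
import json
import time

MODEL_VERSION = "screening-model-2019.04"      # hypothetical identifier
WEIGHTS = {"years_experience": 0.8, "skills_match": 1.2, "distance_km": -0.1}
THRESHOLD = 2.0

def score_and_log(applicant_id, features, log_file):
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    score = sum(contributions.values())
    record = {
        "timestamp": time.time(),
        "model_version": MODEL_VERSION,
        "applicant_id": applicant_id,
        "features": features,
        "contributions": contributions,   # why the score came out as it did
        "score": score,
        "decision": "advance" if score >= THRESHOLD else "reject",
    }
    log_file.write(json.dumps(record) + "\n")
    return record["decision"]

with open("decision_log.jsonl", "a") as log:
    score_and_log("A-1001", {"years_experience": 3, "skills_match": 0.9, "distance_km": 12}, log)
```

Richer models would need other explanation techniques, but the logging pattern, recording the model version, inputs, contributions, and resulting decision, is what makes after-the-fact audits possible.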
Companies that do not develop AI in-house, but instead utilize AI systems developed by third parties, can make efforts to understand those third parties' policies and procedures for eliminating bias in such systems. They can also try to structure contracts with these third parties to limit the company's liability for legal violations arising from decisions made by the system. Moreover, employers who utilize AI in making employment decisions (e.g., hiring, discipline, termination) must be aware of the potential for algorithmic bias and should consider evaluating whether the AI system is producing a disparate impact based on age, race, sex, national origin, or any other protected class.
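As one illustration of such an evaluation, the sketch below (in Python, with made-up screening results) applies the "four-fifths" rule of thumb drawn from the EEOC's Uniform Guidelines: a group whose selection rate falls below 80% of the highest group's rate is generally treated as a signal of possible adverse impact warranting further review.

```python
# Hypothetical illustration of a disparate-impact check using the "four-fifths"
# (80%) rule of thumb: a group's selection rate below 80% of the highest
# group's rate is generally regarded as evidence of possible adverse impact.
from collections import defaultdict

def adverse_impact_ratios(decisions):
    """decisions: iterable of (group_label, selected_bool) pairs."""
    selected, total = defaultdict(int), defaultdict(int)
    for group, sel in decisions:
        total[group] += 1
        selected[group] += int(sel)
    rates = {g: selected[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}, rates

# Example with made-up screening results:
outcomes = [("A", True)] * 60 + [("A", False)] * 40 \
         + [("B", True)] * 30 + [("B", False)] * 70
ratios, rates = adverse_impact_ratios(outcomes)
for g in sorted(ratios):
    flag = "  <-- below 0.8, review" if ratios[g] < 0.8 else ""
    print(f"group {g}: selection rate {rates[g]:.2f}, impact ratio {ratios[g]:.2f}{flag}")
```

A ratio below 0.8 is not a legal conclusion by itself, but it is a simple, repeatable metric a company can monitor as part of its regular audits.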
Three Key Takeaways
- As AI applications become ubiquitous across industries, so does the potential for bias and discrimination. Understanding the inherent biases in underlying data and developing automated decision systems with explainable results will be key to addressing and correcting unfair, inaccurate, biased, and discriminatory AI systems.
- Whether the Act becomes law in its present form, in a modified form, or not at all, algorithmic bias is a significant issue, and consumer businesses can take steps to address it now.
- Qualified individuals both within and outside the company should be selected and empowered to investigate and rectify bias and security flaws in AI systems. Outside consultants can advise on best practices. Legal counsel should be consulted regarding potential legislation and regulation (within and outside the United States) and regarding how to handle and document assessments, disclosures, and corrective actions.