Advocates push to fight algorithm bias in state contracts
They are used in virtually every sector, helping decision-makers mete out punishment, choose a medical treatment, price a product or hire a new team member, for example. Algorithms, also called automated decision systems (ADS), are both ubiquitous and powerful, but invisible to most people. And critics say that, because of the way they are created, they can be biased and can serve to perpetuate racial discrimination, economic disenfranchisement, and loss of opportunity.
Civil rights groups, academic researchers and now state lawmakers are increasingly shining a light on algorithmic bias and calling for reforms, citing the harms and the potential for bias to undermine trust in the public sector.
State Assemblymember Ed Chau in December 2020 introduced the Automated Decision Systems Accountability Act of 2021. As amended in March 2021, AB 13 would create the first statewide algorithmic accountability framework for high-risk public-sector algorithms. Certain bids for goods or services that rely on an ADS would require an impact assessment that includes tests of the system for risks posed to privacy, security or personal information, and for risks that may result in inaccurate, unfair, biased, or discriminatory decisions.
The bill also would require the California Department of Technology to establish and make public ADS guidelines by January 2023, and state agencies would be required to submit a high-risk ADS accountability report to the department. These reports would include a description of any disparate impacts from use of the system and a detailed mitigation plan, submitted within 10 days of awarding such a contract. AB 13 is now before the Assembly Appropriations Committee.
“If thoughtfully designed and implemented, algorithm-driven decision systems can assist government by improving operations and in the delivery of services,” Chau said. “However, government also has the responsibility to ensure these systems do not harm the legal rights, health and well-being of individuals. It is therefore necessary to establish criteria for the procurement of high-risk algorithms to minimize the risk of discriminatory impacts resulting from their design and application.”
AB 13 is sponsored by the Oakland-based Greenlining Institute, which in February released a 36-page report explaining algorithms, how they are used across various sectors, both public and private, and why they can perpetuate discriminatory practices. The bill is opposed by a coalition of technology industry and business groups that argue it would slow down the government procurement process and force companies to expose proprietary business information.
A computer algorithm is a set of rules or instructions that applies statistical patterns and data analytics to make a prediction or solve a problem. According to the Greenlining Institute, algorithms are increasingly used to sort job applicants’ resumes; decide eligibility for social services; determine who sees ads for jobs, housing, and loans; choose which employees to hire, fire or promote; determine access to housing, credit, and health insurance; predict crime and recidivism risk; and decide on health care treatment plans.
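To make the definition above concrete, here is a minimal sketch of what such a decision system can look like: a statistical score combined with a fixed rule that turns the score into a decision. Every name, weight and threshold below is hypothetical, invented purely for illustration; it does not represent any system described in the report.

```python
# Toy automated decision system (ADS): a statistical score
# plus a rule that converts the score into a decision.
# All weights and thresholds are hypothetical.

def default_risk_score(age: int, income: float, debt: float) -> float:
    """Toy statistical model: higher debt-to-income raises predicted risk."""
    dti = debt / income if income > 0 else 1.0
    return 0.6 * dti + 0.4 * (1.0 if age < 25 else 0.0)

def loan_decision(age: int, income: float, debt: float) -> str:
    """Toy rule: approve only when the predicted risk is low enough."""
    return "approve" if default_risk_score(age, income, debt) < 0.5 else "deny"

print(loan_decision(age=40, income=60_000, debt=12_000))  # prints "approve"
print(loan_decision(age=22, income=30_000, debt=15_000))  # prints "deny"
```

Even this tiny sketch shows where bias can enter: the choice of inputs (here, age) and the weights attached to them fully determine who gets approved.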
The problem is that a poorly designed ADS – one whose input data or predictor variables favor one group over another – can produce unfair, biased, or inaccurate results and cause disproportionate harm to low-income families and communities of color. For example, an ADS may take certain characteristics of a person, such as age, gender or income, and predict their likelihood of defaulting on a loan – information then used to approve or deny that loan.
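The mechanism is easiest to see with a toy demonstration. In the hypothetical sketch below – the data, the ZIP codes and the "learned" weight are all invented – a model never sees group membership, yet a facially neutral predictor (ZIP code) that is correlated with group membership reproduces a group-based disparity between equally qualified applicants:

```python
# Hypothetical data: two groups of equally qualified applicants
# (same credit scores), who happen to live in different ZIP codes.
applicants = [
    # (group, zip_code, credit_score)
    ("A", "94601", 700), ("A", "94601", 710), ("A", "94601", 705),
    ("B", "94301", 700), ("B", "94301", 710), ("B", "94301", 705),
]

# Hypothetical weight the model "learned" from historical data.
ZIP_PENALTY = {"94601": 60, "94301": 0}

def approve(zip_code: str, credit_score: int) -> bool:
    # The model never sees "group", yet the ZIP-code weight
    # acts as a proxy for it.
    return credit_score - ZIP_PENALTY[zip_code] >= 680

for group in ("A", "B"):
    rows = [a for a in applicants if a[0] == group]
    rate = sum(approve(z, s) for _, z, s in rows) / len(rows)
    print(f"group {group}: approval rate {rate:.0%}")
# prints: group A: approval rate 0%
#         group B: approval rate 100%
```

This is the pattern advocates describe: no protected characteristic appears anywhere in the code, but the outcome splits cleanly along group lines.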
At a California Fair Employment and Housing Council hearing in April on potential anti-bias regulations for algorithms, Maeve Elise Brown, executive director at Housing and Economic Rights Advocates, explained the potential harm in this scenario.
In a Council press release, Brown was quoted as saying that the lending criteria built into the algorithmic decision-making may actually be “proxies for race and gender that appear facially neutral but may result in targeting of particular lending decisions based on personal characteristics and directed towards legally protected groups.”
In government programs, poorly designed algorithms can unfairly deny people services they are entitled to receive. In Arkansas, for example, hundreds of people saw their Medicaid benefits cut because of the algorithm developer’s coding errors and miscalculations.
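Errors of the kind reported in Arkansas need not involve bias at all; a single inverted comparison can silently cut benefits for the people who need them most. The sketch below is purely hypothetical – it is not the actual Arkansas system or formula – and only illustrates how small a benefits-cutting bug can be:

```python
# Hypothetical benefits formula (NOT the actual Arkansas system).
# Intended rule: weekly care hours = base hours plus one extra hour
# per need point above the threshold.

BASE_HOURS = 30
THRESHOLD = 10

def weekly_hours_correct(need_score: int) -> int:
    return BASE_HOURS + max(0, need_score - THRESHOLD)

def weekly_hours_buggy(need_score: int) -> int:
    # Bug: the subtraction is reversed, so higher-need recipients
    # get *fewer* extra hours instead of more.
    return BASE_HOURS + max(0, THRESHOLD - need_score)

for score in (8, 12, 20):
    print(score, weekly_hours_correct(score), weekly_hours_buggy(score))
# A recipient with need score 20 should get 40 hours,
# but the buggy version grants only 30.
```

Because such systems often run without human review of individual results, a bug like this can affect every recipient before anyone notices – which is why AB 13's impact assessments include accuracy testing, not just bias testing.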
The Greenlining Institute recommends laws mandating algorithm transparency and accountability, both to ensure that algorithms comply with applicable laws and to build public trust and confidence. AB 13 aims to do that by requiring the Department of Technology to publish online the ADS impact assessment submitted by the contractor and the state’s accountability report.