
AI activists warn California officials against hidden threats to privacy and civil liberties
Tesla’s Elon Musk labeled artificial intelligence as our “biggest existential threat.” Apple’s Tim Cook described its potential perils as “profound.” And Microsoft’s Bill Gates likened A.I. to nuclear energy, projecting it to be “both promising and dangerous.”
Yet for Ashley Casovan, A.I.’s most significant and immediate threat has nothing to do with machine monarchs or dystopian futures. To Casovan, the executive director of the advocacy group Responsible A.I., the real problem today stems from A.I.’s inconspicuous decision-making. Depending on its programming or data, she said, A.I. can make biased judgments, or even outright errors, and governments should be watching.
“We want to make sure that when thinking about policy development or any sort of practices related to implementing AI in government—but really anywhere—that they are being done in a way that is responsible,” Casovan said, speaking to the California Department of Technology on June 30. “So, we need to design, build, and deploy them throughout their entire life cycle in a way that’s fair and impacts people in a positive way.”
Casovan’s organization, which operates under the acronym “RAI,” is warning California regulators of A.I.’s potential harm if left unchecked and advocating the use of its certification system to safeguard governments from A.I. bias in lending, fraud detection, health systems, employment, and automated diagnosis. With A.I.’s explosive growth in recent years, RAI is urging states to step up their scrutiny of tech vendors.
A study by the consulting firm PwC estimated that by 2030, A.I. will contribute more than $15.7 trillion to the global economy annually, a figure greater than the current contributions of China and India combined. Further, researchers at Fortune Business Insights project that the A.I. industry itself will grow from $47.5 billion in 2021 to $360.4 billion in 2028, a compound annual growth rate of 33.6 percent.
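Those growth figures are internally consistent: at a 33.6 percent compound annual rate, a $47.5 billion market would grow to roughly $47.5 billion × 1.336⁷ ≈ $361 billion over the seven years from 2021 to 2028, in line with the projected $360.4 billion.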
RAI sees these figures as a wake-up call for government policymaking.
“A.I. systems are completely ubiquitous and becoming more and more prolific, and we really need to understand then those policy measures….” Casovan said. “How do we work with government, industry, academia, and civil society to put safeguards around these different types of tools and systems.”
Seeing the hidden hazards
A.I.’s growth notwithstanding, it may be easy to dismiss its risks. That indifference may come from its portrayal as a future technology, or from its promotion as a tool so advanced that it seems out of reach for local government.
To visualize the risks and dangers, Casovan recommended agencies apply a broader definition, one that is scalable, starts at basic algorithms, and extends upward into analytics, machine learning, and the cutting-edge deep learning systems employed by the likes of Google. The wide lens is pragmatic: it boils A.I. down to any technology that can automate decision-making or significantly influence it.
“When we talk about artificial intelligence, we don’t know necessarily what that means. And for me, I’m not really interested in defining artificial intelligence,” Casovan said. “I think that, especially when we’re thinking about this from a policy perspective, it’s really important to go where people are.”
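To make that broad definition concrete, consider a minimal, hypothetical sketch; the scoring rule, field names, and cutoff below are invented for illustration and are not drawn from any real system. Even a few lines of code that approve or deny an application automatically fall within the kind of decision-making Casovan describes:

# A hypothetical, minimal decision rule. Under the broad definition discussed
# above, even this simple script counts as "A.I." because it automates a
# decision that affects a person. The weights and cutoff are invented.

def approve_loan(applicant: dict) -> bool:
    """Approve if an internal score clears a fixed threshold."""
    score = 0.6 * applicant["credit_score"] / 850 + 0.4 * applicant["income"] / 100_000
    return score >= 0.55  # cutoff chosen arbitrarily for illustration

print(approve_loan({"credit_score": 700, "income": 45_000}))  # True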
Calling out top threats, Casovan highlighted predictive policing, automated health decision-making, misuse of biometrics (such as facial recognition), and A.I.-based hiring systems as especially problematic. Yet she said improper use of A.I., whether for service design or funding decisions, can also be an issue if the data is bad or the algorithm is biased.
Take predictive policing. At face value, using historical crime data to project which areas and people are most likely to commit crimes may seem like a positive use of technology. Yet if mismanaged, predictive policing can turn into harassment, as officers increase surveillance on targeted suspects and neighborhoods without any crimes to point to.
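A toy sketch, with entirely invented numbers, illustrates the mismanagement risk: a “predictive” model that simply ranks neighborhoods by past recorded incidents largely reproduces where police patrolled before, not where crime necessarily occurs.

# Hypothetical illustration with made-up data: ranking neighborhoods by
# historically recorded incidents mirrors past enforcement patterns, so the
# most-patrolled areas keep getting patrolled the most.

historical_recorded_incidents = {
    "Neighborhood A": 120,  # heavily patrolled in the past
    "Neighborhood B": 35,
    "Neighborhood C": 15,   # rarely patrolled, so few incidents were recorded
}

def rank_for_patrol(records: dict) -> list:
    """Rank neighborhoods by past recorded incidents (a naive 'prediction')."""
    return sorted(records, key=records.get, reverse=True)

print(rank_for_patrol(historical_recorded_incidents))
# ['Neighborhood A', 'Neighborhood B', 'Neighborhood C']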
An article from the Brookings Institution listed numerous examples of error-prone predictive policing: data-driven programs led officers to add innocent citizens to gang databases in Los Angeles, label clusters of low-income children as potential lifelong criminals in Pasco County, Florida, and tie a fictional gang name to protestors in Phoenix.
“There is no evidence that predictive policing or gang databases have been or can be cleaned of bias,” said Brookings’ Angel Diaz, who is also a lecturer in law at the University of California, Los Angeles. “While there are important efforts to uncover and mitigate the ways that racial bias can infect machine learning, there is no excuse for continued funding and deployment of policing tools whose only consistent track record is a string of scandals.”
Outside the justice system, A.I. issues have popped up across numerous industries. The insurer Lemonade was caught bragging about using facial recognition to deny claims and increase profits. Amazon had to scrap an A.I. recruiting system after learning it discriminated against women. Google’s image recognition A.I. mistakenly labeled African Americans as gorillas. And the list goes on.
“When I say ‘harm,’ I can mean it in the most subtle sense,” Casovan said. “It could be a lack of access to credit. It could be not being recommended for a job because of a facial recognition system…these are the things we’re working on now.”
Policies, certifications, and other solutions
In their role as industry regulators, governments may feel hard-pressed to find a way to monitor and enforce rules on a highly sophisticated industry like A.I. Departments are often overwhelmed with daily operations, budgets can be tight, and tech expertise is out of reach. Yet one regulatory tool that has proved highly effective in California, and nationally, is statewide legislation and the costly legal liabilities that come with non-compliance.
On Jan. 1, 2020, the California Consumer Privacy Act (CCPA) took effect, requiring companies to disclose to consumers what personal data they collect and what they share with third parties, and to give consumers the right to opt out of such sharing. The law applies not only to businesses with a physical address in California but also to those that do business with California consumers, a caveat that essentially gives the CCPA national reach. The legislation is strengthened by the California Privacy Rights and Enforcement Act, which also applies to A.I. applications gathering data. Whether data is compiled directly or taken from multiple sources and data services, companies still have to account for their data consumption.
“Companies that are not in compliance not only run the risk of financial ramifications through fines but also put their brand reputation on the line,” Christy Wyatt, CEO of the tech company Absolute, told Forbes. “Today’s modern enterprises, those that want to win, need to be laser-focused on transparency and trust–and ready for rapid response when that trust is misplaced.”
Outside of legislation, RAI has designed an A.I. certification system, which, in simple terms, is a framework and evaluation service that helps organizations and government agencies prevent bias and errors in analytics and A.I. programs. Casovan said RAI designed the certification in 2019 with support from the Global AI Action Alliance of the World Economic Forum (WEF).
The certification has four performance levels to rank a system and measures factors such as accountability, data quality, robustness, bias and fairness, as well as interoperability and “explainability” (the ability to explain how an A.I. processes data and comes to its conclusions).
The process supplies certification applicants with assessment tools and resources, then provides a professional assessment and a technical audit, followed by an officially awarded ranking. Casovan said the group’s goals are to do away with black-box A.I. (a term describing scenarios where few know how an A.I. is operating) and to ensure legal compliance for organizations wherever they may be operating.
“We are doing this in partnership with the World Economic Forum and we’re building off of various work that’s been out there,” Casovan said. “Things like Google’s What-If Tool or IBM’s Fairness 360 Toolkit, these and other resources have really been driving…our work in defining what fairness means and how to measure it in a mathematical way.”
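As a minimal sketch of what measuring fairness “in a mathematical way” can look like, the snippet below computes a disparate impact ratio, a simplified stand-in for the kinds of metrics toolkits such as IBM’s AI Fairness 360 provide; the numbers and group labels are invented, and this is not RAI’s actual certification test.

# Hypothetical example: disparate impact compares the rates at which two groups
# receive a favorable outcome, such as loan approval. All values are invented.

def disparate_impact(approved_a: int, total_a: int, approved_b: int, total_b: int) -> float:
    """Ratio of group A's approval rate to group B's approval rate."""
    return (approved_a / total_a) / (approved_b / total_b)

ratio = disparate_impact(approved_a=30, total_a=100, approved_b=60, total_b=100)
print(f"disparate impact: {ratio:.2f}")  # 0.50
# A common rule of thumb (the "four-fifths rule") flags ratios below 0.8 as
# potential evidence of bias worth investigating.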