AI Now report urges government, industry, and public to work together to strengthen algorithm accountability

With technology developing faster than oversight, transparency all along the labor chain is necessary to protect companies - and governments - from embarrassing and possibly illegal results

Edited by Miranda Spivack

Governments and private companies using artificial intelligence to make significant decisions should be much more transparent about their work - and quit claiming the details are trade secrets they can keep from public scrutiny, says a new report from the AI Now Institute, based at New York University.

The recommendation is part of the institute’s 2018 annual report, which also includes several other proposals to enhance transparency and accountability in the rapidly developing predictive and programmatic analytics field. Among them, the report recommends:

  • Expanded oversight by government agencies of their own AI use, and increased regulation of the AI industry. This would include greater scrutiny of AI’s promises compared with what it actually delivers.

  • Greater involvement by the public in challenges to algorithmic use and in policy discussions.

  • Development in academia of interdisciplinary curricula for AI programs, enabling students to consider the societal effects of their work.

  • Protection of whistleblowers who challenge AI’s impacts.

  • Limits on use by private companies and governments of trade secrecy to restrict public information about AI’s impact.

The use and study of algorithms and AI have expanded significantly in the last two decades, but, as AI Now noted when it was founded in 2017, little attention has been paid to how their use is affecting daily life. The institute is among the first academically sponsored initiatives to focus primarily on AI’s societal effects, creating, in addition to its annual reports, other explanatory material for policymakers, advocates, and litigators. The overarching theme promoted by the group is the public’s need to pressure government and businesses for greater transparency in places where algorithms and AI are employed - even when the operators are private entities.

“AI companies should waive trade secrecy and other legal claims that would prevent algorithmic accountability in the public sector,” the 2018 report said. “Governments and public institutions must be able to understand and explain how and why decisions are made, particularly when people’s access to healthcare, housing, and employment is on the line.”

Many recent AI research advances have been driven by private companies, and the report calls for transparency all along the labor chain, well beyond implementation. AI Now suggested that companies themselves should value and encourage the airing of employee concerns about the impact of AI as a means of internal accountability. By designing mechanisms that encourage discussion and dissent, private companies and governments can promote more equitable outcomes from AI use and potentially protect themselves from embarrassing and possibly illegal results.

“[T]here’s usually some PR nightmare that leads to a government choosing to end a contract with a vendor or it was already time limited as a pilot or something where they received outside grants,” Rashida Richardson, Director of Policy Research at AI Now, said in an interview. “But with the exception of where there’s a grant requirement to study the outcomes, it doesn’t seem like any of the same forms of accountability that would exist elsewhere in government - they’re oddly missing.”

The dangers of affect recognition, in particular, a type of facial recognition that purports to identify internal mental states by examining outward appearance, featured prominently in the report as one of the most concerning developments in law enforcement’s use of AI. Though much current research on affect recognition has been discredited, it is, nonetheless, being used in technologies increasingly employed by law enforcement agencies throughout the country, the report said.

“The case of affect detection reveals how machine learning systems can easily be used to intensify forms of classification and discrimination,” the report said, “even when the basic foundations of these theories remain controversial among psychologists.”

As an example, a test of “Rekognition,” Amazon’s facial recognition software, conducted by the American Civil Liberties Union last summer, revealed that the program misidentified 28 members of Congress as individuals previously arrested for a crime.

The company, which has evolved from an online shopping website into a vast commercial enterprise that offers a wide range of internet services and computing offerings, said in a statement following the ACLU’s findings that “the Rekognition results can be significantly skewed by using a facial database that is not appropriately representative.”
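To make the mechanics of such a test concrete, here is a minimal sketch of how a face search against a photo collection might be run with the AWS Rekognition API from Python using boto3. The collection name, image file, and similarity threshold below are illustrative assumptions, not details from the report or the ACLU’s methodology; the point is that both the match threshold and the makeup of the underlying photo collection shape who gets flagged.

```python
# Hypothetical sketch: searching a face photo against a pre-built collection of
# arrest photos with AWS Rekognition, roughly the kind of comparison at issue
# in the ACLU's test. The collection name, file path, and threshold are
# illustrative assumptions, not details drawn from the report.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

COLLECTION_ID = "arrest-photos-demo"  # assumed name for an existing face collection


def search_face(image_path: str, threshold: float = 80.0):
    """Return collection faces that Rekognition scores at or above the given
    similarity threshold for the face found in image_path."""
    with open(image_path, "rb") as f:
        response = rekognition.search_faces_by_image(
            CollectionId=COLLECTION_ID,
            Image={"Bytes": f.read()},
            FaceMatchThreshold=threshold,
            MaxFaces=5,
        )
    return response["FaceMatches"]


# A lower threshold yields more candidate "matches" - and more false ones.
for match in search_face("member_of_congress.jpg"):
    print(match["Face"]["FaceId"], round(match["Similarity"], 1))
```

In a setup like this, the results depend as much on what is in the collection and how the threshold is tuned as on the photo being searched, which is why Amazon pointed to the representativeness of the facial database in its response.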

While the biased nature of training data in many law enforcement systems has been recognized at this point, the mechanisms for fixing these systems once they’re in place are less well-defined.

“The next task now is addressing these harms,” the institute’s report noted. “This is particularly urgent given the scale at which these systems are deployed, the way they function to centralize power and insight in the hands of the few, and the increasingly uneven distribution of costs and benefits that accompanies this centralization.”

Read the full AI Now report embedded below.

Algorithmic Control by MuckRock Foundation is licensed under a Creative Commons Attribution 4.0 International License.
Based on a work at https://www.muckrock.com/project/algorithmic-control-automated-decisionmaking-in-americas-cities-84/.

Image via the Computer History Museum