The Algorithms Among Us

The future of decision making is coming to a town near you. Do you know how?

Edited by Miranda Spivack

Drivers in suburban Virginia were shocked late last year when they were hit with $40 rush hour tolls from a new surge pricing system on a major commuter route into Washington. Boston parents were outraged when the public schools abruptly announced bell schedule changes that threatened to wreak havoc on carefully timed family and work schedules. And in New York City, a departing City Council member recently won approval for a unique task force to study the freshest, and perhaps least understood, tool in government policymaking: algorithms like those that sparked sudden changes in commuter tolls and school schedules.

Increasingly, governments - federal, state and local - are turning to automated decision making systems to try to fine-tune their operations, save money, and increase efficiency. The public, however, has little understanding of, or access to, information about how governments are using data, much of it collected quietly, to feed the algorithms that make decisions about everyday life. And, in an uncomfortable twist, the government agencies themselves often do not fully understand how algorithms influence their decisions. Officials in many jurisdictions are beginning to realize that automated decision making doesn't necessarily deliver better outcomes or more equitable access to services.

“[W]hen [algorithms] are making critical decisions about our lives,” said Vera Eidelman, a staff attorney with the ACLU’s Speech, Privacy, and Technology Project, “whether that’s whether or not we are suspects in a criminal trial, whether or not our community gets policed more, whether or not we get fired or lose medical benefits - we need to know how those decisions are made.”

This piece is the first in a joint series between MuckRock and Rutgers Institute for Information Policy and Law. You can find all of the requests and related articles on the project page, and register for email alerts for each piece here.

Private companies developing the new technologies often don’t want to make public details about how algorithms are designed, claiming the information is proprietary and exempt from public disclosure laws. The government agencies themselves are sometimes barred from obtaining details about how the technology works. That means the tax-paying public is also often left in the dark. Without greater public disclosure, it is difficult for the governments and the public to analyze the impact algorithms have. That means they cannot measure the potential for improving government functions and saving taxpayer dollars. Nor can they assess if there are unintended side effects, such as perpetuating biases against people of color, women, poor people, immigrants, and others whose treatment by governments historically has been inequitable or illegal, or both.

When algorithms go bad — without anyone watching

“Algorithms aren’t inherently good or bad. They’re tools that are created by human beings and used by human beings, and so it’s possible that they could do a better job,” said Eidelman. “It’s also possible they could do a worse job or the same job as human decision makers, and so that’s really why it’s incredibly important to have public access and the ability to adversarially test these various programs.”

A 2016 federal gun possession case in the Southern District of New York highlights the limits of algorithms. In that case, defense counsel requested through discovery the source code for New York’s Forensic Statistical Tool (FST). The since-discontinued software had been used by prosecutors to test DNA samples in hundreds of cases. But when the details were disclosed, an expert for the defense determined that the tool, when confronted with samples of mixed DNA, was omitting important swaths of the data from its calculations.

Subsequently, ProPublica compelled public release of the code as part of its Machine Bias series, arguing that the public had a right to know the basis for FST's influence on the criminal justice system. The news organization also examined the ways that risk assessment tools used by the court system are often wrong in estimating the odds that a defendant will be a repeat offender, reinforcing racial bias. The stories helped spur the creation of the New York City task force to study and recommend transparency measures. "There are a number of potential ways to ensure algorithmic accountability," said Councilman James Vacca (D), seeking support for his measure establishing the task force. "…[T]o my knowledge we are the first city and the first legislative body of any size in this country to take this issue on."

Finding bias when it’s built in

Despite the challenges, algorithms can be used to solve common problems, such as planning bus routes and easing traffic jams. But all decision making, human and computer alike, can be flawed. Bias is inherent in every dataset, said Andrew Burt, chief privacy officer at Immuta, a company that sells data management software.

“What data is doing and what models are doing, is they’re approximating reality,” he said. “They’re purely just finding correlations of data and really mimicking intelligence.”

Bias is a problem these systems often fail to account for, he said. "Bias is, by definition, built into all data; it's not going to be representative, so then what do we do about it," he said.

Still, the tech-driven approach to governance is growing, despite the glitches. Industry, academia, and governments are all actively pushing algorithmic integration across most aspects of existence, including public policy. And policy makers are realizing they can learn from algorithmic mistakes.

“Nobody fully understands failure. And so it’s very important when [governments are] doing something with advanced technology, not just to be focused on success but to be focused on what failure looks like, so they can identify it and respond to it once it actually occurs,” Burt said.

The proposed changes to the Boston school schedule, for example, grew out of a widespread recognition of the impact of sleep deprivation on student success. Public pushback forced the school system to scrap the plan and contributed to the resignation of the superintendent at the end of the 2017-2018 school year. Even so, dozens of other school districts have since contacted the MIT team that created the algorithm, looking for guidance.

Nor is Virginia's traffic surge pricing an isolated concern. Los Angeles, Georgia, and soon Tampa Bay and much of Florida will be using similar algorithm-driven surge pricing systems to try to clear congestion during peak travel hours. And the criminal justice system, which already relies widely on algorithm-based tools, is likely to remain one of the most popular venues for them, as police departments and courts expand their use to decide patrol routes, identify likely victims and potential perpetrators, and estimate future threats to public safety.

Smart city initiatives are taking root worldwide, with hundreds of municipalities competing against one another for private funding and public grants to develop their data collection and technological capabilities, each hoping for economic growth and increased efficiency. For advocates and academics studying this evolution in public policy making, the truly smart move would be to ensure clear and fair ways for the public to understand and benefit from these advances, Vacca and others tracking the issue have said.

“If we’re going to be governed by machines and algorithms and data,” Vacca said during the unanimous Council vote, “well, they better be transparent.”

Want to help us continue to explore this important issue? Submit an algorithm to investigate below, and subscribe to receive updates as new articles are published.

Algorithmic Control is part of a joint research and reporting project from MuckRock and the Rutgers Institute for Information Policy and Law. Support for this project is provided by Rutgers, but the reporting is editorially independent. View the full database of requests, learn more about the project, or get in touch with the reporter.

Images licensed under public domain via NASA and US Census

Creative Commons License
Algorithmic Control by MuckRock Foundation is licensed under a Creative Commons Attribution 4.0 International License.