Algorithmic Control: Automated Decision-Making in America’s Cities
Governing bodies throughout the United States are turning to automated decision-making systems in an attempt to make their operations more efficient, their services more equitable, and their economies more robust. These technologies, though, aren’t free from the biases and bad calculations that also plague human decision-making, and they’ll need their own accountability measures and guarantees of transparency to protect the populace against institutionalizing poor choices.
MuckRock and the Rutgers Institute for Information Policy & Law (RIIPL) are collaborating on a new reporting and research project about local government use of big data, artificial intelligence, and algorithms.
According to Rutgers Law Professor Ellen P. Goodman, who will be partnering on the project, “Algorithms are playing an ever larger part in who goes to jail, who gets dibs on the best education, how we move through cities, and every other part of public life – we need to know more about them.”
Through interviews with leading experts and public records requests filed across the country, MuckRock Projects Editor/Senior Reporter Beryl Lipton will investigate city contracts, requests for proposals, and in-house development of these systems of governance to build an open, searchable database of how these technologies are being used.
We’ll be looking at the data going into these algorithms, the models they use, the outcomes they produce, and the policies dictating how these tools are being integrated into our current systems.
Have a suggestion or know of an algorithmic development near you? Send it to email@example.com or submit it via the form below.
The AI Now Institute is calling for checks on the datasets used by predictive policing systems because of concerns that the technology can perpetuate, rather than address, “dirty” policing practices.
Shifting from Tasers to AI, Axon wants to use terabytes of data to automate police records and redactions
Axon, one of the most prominent suppliers of tech tools to law enforcement, is shifting from selling its signature Tasers to embedding artificial intelligence in police departments around the country.
Teams of scholars at the Massachusetts Institute of Technology tackling bias in facial recognition technology have two recommendations for its developers: more external oversight and more representative training data.