The U.S. needs to “get AI right” — and fast — says government group

National Security Commission on Artificial Intelligence warns of foreign and domestic challenges

Written by
Edited by Michael Morisy

China, private industry, and adversaries abroad could all prosper at America’s expense if the federal government does not move to support and lead the direction of artificial intelligence, the National Security Commission on Artificial Intelligence said in a report released earlier this month.

More than 30 countries in the last five years have publicly committed to some form of nationwide strategic investment in AI capabilities. In February, President Donald Trump signed an executive order to prioritize the use and development of AI-based technologies across the government.

With a flurry of recent AI advances, there is growing pressure to set rules on how to proceed — along with raised stakes.

“We’ve gone through a few of these cycles now, and there’s at least some reason to believe that this time it might be the real deal,” said Ryan Budish, assistant research director at the Berkman Klein Center for Internet & Society. “There have been advances in large-scale datasets that enable more advanced AI. There’ve been breakthroughs in building AI systems. There have been advances in processing powers and GPU or custom silicon to enable more AI developments. I think that there is reason to think this time AI will have substantial transformative impacts. We’re already seeing that in some places.”

The NSCAI, created last year as part of the National Defense Authorization Act, has been studying AI’s potential to both advance and threaten U.S. national security interests. The group is considering how current options, like trade tariffs and other economic controls, can be used to mitigate unjust threats to intellectual property, support American enterprise interests, and direct development in an economically and ethically favorable way. The release is its most recent step toward recommendations for federal action on AI, which will be included in a final report due October 2020.

Though the group said that weighing AI’s potential is “like Americans in the late 19th century pondering the impact of electricity on war and society,” it is guided by a belief that global AI leadership is an urgent matter of national security. Achieving that leadership will require cooperation across the academic-government-industry triangle to cultivate the necessary workforce and to protect American ethical, economic, and legal values.

The report, like nearly all U.S. conversations on AI, places a heavy emphasis on China: its stated aim to be the world leader in AI by 2030, its major investments in U.S. AI companies, a promised $30 billion in Chinese research and development, and the human rights violations AI is enabling there. At the NSCAI conference held in early November, U.S. Secretary of Defense Mark Esper spoke of China’s sale of autonomous drones in the Middle East and Russia’s use of combat AI.

“We will harness the power of AI to create a force fit for our time. We believe there’s a tremendous opportunity to enhance a wide range of the department’s capabilities, from the back office to the front line,” Esper told the room, “and we will do this while being recognized as the world leader in military ethics by developing principles for using AI in a lawful and ethical manner.”

The multinational nature of AI’s development has complicated efforts to enact restrictions unilaterally.

To begin to address these issues, the NSCAI is focusing on five AI action areas: research funding, integration of AI into national security missions, cultivation and global recruitment of a talented AI workforce, protection of existing American AI products, and the creation of a global consensus on AI standards.

Right now, each of these areas is facing serious obstacles. Federal funding for AI research hasn’t kept pace with its potential or industry investments. Commercial products aren’t being adopted efficiently enough to boost productivity and save tax dollars. Universities don’t have enough instructors to teach the next generation, and their researchers are being drawn away to private industry.

Beyond these, the military has run into the same issue that confronts most organizations trying to use data to improve processes: the data is too incomplete and unreliable. While data management and analysis have seen vast improvements and the technology’s potential feels palpable, the practical issue is that dirty data just won’t produce results that are accurate, useful, or fair. Lt. General Jack Shanahan, who ran the government’s Project Maven and now leads the Joint Artificial Intelligence Center, said in a recent interview with Breaking Defense, “I can’t think of anything that is really truly AI-ready.”

This reality has spurred a backlash against local and state use of AI and motivated proactive limits on certain automated systems. Predictive policing systems and criminal risk assessment tools have been found to be inaccurate and unfair. Cities like San Francisco and Somerville have enacted bans on facial recognition software, considered one of the most concerning AI-powered technologies because of its impact on privacy, the potentially disastrous consequences of misattributed identifications, and its widespread and largely unregulated use. Federally, though, legal limitations have been slower to develop.

“Regulating computing in general is difficult,” Michael O’Hanlon of the Brookings Institution wrote. “In this sense, AI should be viewed as but one manifestation of advanced computational methods (and potentially linked with advanced robotics). I don’t think you try to answer your question in the abstract, except in terms of ethical limitations on killer robotics.”

“If and when there are technological capabilities that you want to constrain/prohibit, you can at that point look to see if there is a realistic way to do it. It will be a daunting effort to control verifiably this type of technology in any meaningful way.”

Autonomous weapons systems, often called “killer robots,” are among the most frightening AI use cases that could realistically come to pass, and they help illustrate the extreme consequences of these new technologies.

“[T]here was enough alarming content in the interim report that we can see where it is going,” wrote Clare Conboy of the Campaign to Stop Killer Robots, which is expecting to meet with members of the Commission in early December.

The NSCAI said that it is actively welcoming feedback from the public as it continues its work.

“There’s an enormous glut right now of high level principles of which this is just one more. They’re not all that dissimilar. They all talk about different issues that ultimately touch on things like human rights and human agency over the system and what the role of humans should be in the system,” Budish said.

“The real question now is how do we move from these specific principles to actual actionable steps that organizations, regardless of whether they’re in the public or private sector, can follow. There’s a big gap between a high level principle about respecting human rights to actually making difficult tradeoffs when designing a system.”


Header photo by Ars Electronica licensed under CC BY-NC-ND 2.0

Algorithmic Control is part of a joint research and reporting project from MuckRock and the Rutgers Institute for Information Policy and Law. Support for this project is provided by Rutgers, but the reporting is editorially independent. View the full database of requests, learn more about the project, or get in touch with the reporter.

Algorithmic Control by MuckRock Foundation is licensed under a Creative Commons Attribution 4.0 International License.
Based on a work at https://www.muckrock.com/project/algorithmic-control-automated-decisionmaking-in-americas-cities-84/.