Report Raises Risks of Artificial Intelligence Playing Out in Political Reality
Artificial intelligence holds incredible promise to make the world a better place. But the technology is also raising growing concern about its potential to do harm.
AU School of Public Affairs Assistant Professor Thomas Zeitzoff contributed to a new report that outlines the dangers that artificial intelligence and machine learning capabilities pose to digital, physical, and political security.
Zeitzoff was one of several authors of "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation," a 99-page report sponsored by the Future of Humanity Institute, Centre for the Study of Existential Risk, Center for a New American Security, and the Electronic Frontier Foundation.
The analysis focuses on how the growth of artificial intelligence and machine learning will influence future security threats. In response to the changing threat landscape, the report includes recommendations for increased collaboration, security, and sharing of best practices.
Zeitzoff, whose research focuses on social media, lent his expertise to the section on politics. "The idea is to understand the rise of artificial intelligence and the implications for the military and global politics," he said. The report examines how governments might use artificial intelligence in the future as it relates to protests, propaganda, and repression. The report's findings are particularly relevant as the United States recently brought a federal indictment against 13 Russian nationals for a plot to influence the 2016 presidential election through social media propaganda.
"Previously it was difficult to flood a zone with propaganda, and it was easy to spot," said Zeitzoff. "But as artificial intelligence has gotten a lot better, governments can use fake, paid trolls that look like real users to disrupt communication networks." Thus, the types of attacks and influence operations that Russia carried out against the backdrop of the 2016 U.S. election are going to become even more sophisticated — and cheaper.
For instance, China uses an army of trolls to counter "problematic" news and distribute misinformation. The Chinese government currently maintains one of the largest automated facial recognition databases and is already rolling out a system that gives people citizenship scores measuring their "trustworthiness."
As these systems become more sophisticated, Zeitzoff said, the developments will influence politics and the ways people interact. "The big question is: will challengers and dissidents be able to evade the government more easily? Or will the government have the advantage in tracking them more easily?"
Zeitzoff said the forward-thinking report highlights the need to consider the political and social implications of big tech companies, such as Google and Facebook, acquiring mountains of data. "It's important to consider what it means when it becomes very difficult for individuals to figure out if they are interacting with an actual human or a bot or some automated intelligence," said Zeitzoff. "What does it mean when governments can more quickly, cheaply, and easily push out their message using these methods? It's something people need to think about. It's already here."