There may be no technological innovation as fraught with concern as the field of artificial intelligence, or AI. From Isaac Asimov’s “Robot” novels to the “Terminator” movies, popular culture and human imagination have spurred a deep distrust of the idea that computers can teach themselves and make decisions that affect people.
At the same time, citizens in the United States are constantly tugged in different directions over the appropriate role of government and how much it should regulate emerging technologies.
Politico addressed that dilemma recently when it convened a panel on “The New Age of Innovation: Government’s Role in AI.”
Rep. John Delaney (D-Md.) jumped right in. “The toothpaste is already out of the tube in terms of how AI is affecting our lives,” he said. “I don’t think it’s incompatible to [combine] a national strategy with a hands-off approach” by the government.
“China has an AI strategy, the EU has an AI strategy, and I think we should have one,” he continued. “It should include the things we need to do to protect the privacy of our citizens and the steps we want to take as a society … [The strategy] should have a set of common goals, agreed to by the public sector and the private sector.”
Rep. Will Hurd (R-Texas) said that the best-case scenario is that the U.S. is currently tied with China in AI.
“Their investment in AI research, their ability to force their private sector and government to work together, is unparalleled. Because it’s an authoritarian government, privacy isn’t an issue,” Hurd said.
He said he gets frustrated with people’s concern over AI, “because I think people look at AI as the destination, but it’s a tool.”
Dean Garfield, president and CEO of the Information Technology Industry Council, agreed that federal agencies should encourage AI development and support self-regulation of the field.
In the wake of Cambridge Analytica, “is the call for self-regulation around AI and other topics difficult to make right now? I don’t think so,” Garfield said. “Self-regulation doesn’t mean no government involvement.” He pointed to the National Institute of Standards and Technology’s work on data and data privacy as an example of setting a requirements framework without prescribing specific solutions.
Walter Copan, under secretary of Commerce and, concurrently, NIST’s director, responded, noting that the agency is working on a more “trustworthy framework” and observing there are “a series of roles” to be filled.
“Engagement between government, industry and academia, for standards that are open and transparent,” Copan said. The government “can sponsor research in strategically important areas for our nation, also to seek synergy … We’re looking at the workforce implications. AI is indeed with us [and] we’re recognizing its power.”
Rashida Richardson, director of policy research at the AI Now Institute at New York University, was at least a little more cautious about companies self-regulating in the field.
“I think it’s not just a technical issue,” she said. “We need a national [AI] agenda because there’s a lot of priority alignments. Right now the emphasis is on speed rather than accuracy.”
Richardson suggested that government involvement in setting standards doesn’t preclude industry self-regulation, but only the government can guarantee that many voices are at the table as the agenda is set.
Garfield compared AI to some of the most fundamental, revolutionary changes in all of human history, from learning to walk upright to the development of agriculture.
“It’s important to recognize that this technology, which will be some of the most transformative [ever seen], requires all of society to be involved,” he said. “The obstacles to AI will not be technical, they will be societal.”