The introduction of any disruptive technology creates all manner of questions, and it’s no different as federal agencies begin to adopt artificial intelligence (AI). Common misconceptions about AI include the ideas that it will replace all human jobs, that it will create a dystopian world run by robots, and that it is unbiased.
A report commissioned by the Administrative Conference of the United States explains that 45 percent of agencies have implemented AI and machine learning solutions. Most of these efforts are in the initial stages of planning and piloting, much like the U.S. Air Force’s RPA Training Next, an AI-infused pilot training program. The 33 percent of agencies with fully deployed strategies have seen myriad benefits in areas such as public health and safety and environmental protection. However, the key to reaping these benefits starts with not only knowing how to use or implement AI but also understanding the common misconceptions that surround it.
According to Michael Kanaan, Director of Operations for the U.S. Air Force/MIT AI Accelerator, AI has been, and continues to be, conflated with consciousness.
“We really don’t know any consciousness that occurs outside of animal life,” explained Kanaan. “As humans, we intuitively develop an understanding that consciousness comes only with heartbeats and blood. Tellingly to that point, it’s rare that a child doesn’t impute some sense of consciousness into a stuffed animal. On the other hand, it’s unusual to see a child talking to a nondescript round piece of plastic, but if other pieces of plastic are pushed into it, ones that look like eyes and ears, then consciousness becomes something that a child will eagerly assume.”
Part of this misconception stems from human-like design features glamorized in Hollywood through the likes of Westworld, I, Robot, and 2001: A Space Odyssey. Hollywood also feeds the idea that this “conscious” artificial intelligence is rooted in evil intent and that humans will ultimately be overwhelmed, leading to dystopian possibilities. In reality, intelligence doesn’t always equate to consciousness, and that gap is where this misconception takes hold.
Another outgrowth of the consciousness misconception is the idea that AI will, at some point, replace all human-performed jobs. In reality, the positive impact of AI and automation, such as increased efficiency, rings truer than the demise of manual labor.
But perhaps the most consequential of these misconceptions is the idea that AI is 100 percent objective or completely unbiased. AI algorithms learn from data that reflects human biases, and once an agency puts that data to work, those biases become ingrained in its processes, functions, and mission.
“AI in its ethical, legal, and moral use is of concern to our nation and the federal government. And why is that? Because of bias. Machine learning applications are designed to analyze data and formulate predictions without any overall guidance. That doesn’t mean, however, that machine learning is safe from the influences of human biases – it’s not,” explained Kanaan during DataRobot’s AI Experience Worldwide virtual conference. “Just because an algorithm’s analysis is based on data doesn’t mean its output will be neutral or objectively fair.”
To mitigate biased AI, there are best practices that agencies can implement. First, agencies must understand the problem: data reflects human biases, and for now, at least, AI can’t decipher those biases on its own.
“Discovering human patterns is often the very purpose of a learning algorithm. The problem, though, is this: it’s very difficult for AI to determine if our patterns and behaviors are based on fair, desirable attributes or if they result from unfair prejudices,” explained Kanaan.
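To make that concrete, here is a minimal sketch of how an analyst might check whether a trained model’s decisions skew across a demographic group. The DataFrame columns (group, approved) and the 0.8 review threshold are illustrative assumptions for this sketch, not part of any agency’s actual pipeline.

```python
import pandas as pd

# Hypothetical model outputs: one row per applicant, with a demographic
# group label and the model's binary decision (names are assumptions).
predictions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   1,   0],
})

# Selection rate per group: the share of each group the model approves.
rates = predictions.groupby("group")["approved"].mean()

# Disparate-impact ratio: lowest selection rate divided by the highest.
# A common rule of thumb (assumed here) flags ratios below 0.8 for review.
ratio = rates.min() / rates.max()
print(rates)
print(f"Disparate-impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Outcomes differ notably across groups; audit the training data.")
```

A check like this doesn’t prove a model is fair, but it surfaces the kind of pattern Kanaan describes: the model may simply be reproducing a prejudice that was already present in its data.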
The next best practice is to have the right data. This starts with setting the right metrics-based goals so that predictive models can produce insights that ultimately advance an agency’s mission. Before setting those goals, and as another best practice, agencies must be rooted in a culture of data literacy. That requires hiring the right employees, ones who ensure not only that data is an agency priority but also that what’s collected is as unbiased as possible.
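Such an audit can happen before any model is trained. The sketch below, again with hypothetical file and column names, shows two quick checks a data-literate team might run on historical data: whether each group is adequately represented, and whether past outcomes already encode a skew that a model would faithfully learn.

```python
import pandas as pd

# Illustrative historical data; the file and column names are assumptions.
df = pd.read_csv("historical_decisions.csv")  # columns: group, outcome

# 1. Representation: does each group appear often enough to learn from?
print(df["group"].value_counts(normalize=True))

# 2. Label skew: do past outcomes already differ by group? Large gaps here
#    will be reproduced by any model trained on this data.
print(df.groupby("group")["outcome"].mean())
```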
“[The] best leadership takes its cues from the rest of its ranks or workforce. Likewise, democracy takes direction from the voice of its people,” explained Kanaan.
AI won’t replace all manual labor or create a dystopian world, and despite the common misconception, it isn’t unbiased. It is critical for agencies to recognize this. From there, they can take the steps needed to minimize bias as they integrate AI into processes and projects. Ultimately, with the right data in place, predictive analytics models can produce meaningful, impactful, and mission-oriented insights.