In my previous post on Government Technology Insider, I shared the findings of a recently released Thought Piece from the global government consultancy Booz Allen Hamilton, which delved into the role that Artificial Intelligence (AI) and Machine Learning (ML) can play in the analysis of intelligence data.
Ultimately, the Thought Piece concluded that AI and ML could help alleviate some of the more repetitive and tedious tasks that normally fall on the plate of the analyst community – tasks such as reviewing countless hours of ISR video, watching for the slightest changes and discrepancies that could be of interest to the mission and national security. This could shift the role of the analyst from searching for red flags to analyzing those red flags for pertinence to the mission.
However, the Thought Piece also laid out a problem with the adoption of AI and ML in the intelligence community and military. That problem was effectively fear – fear that the machines couldn’t do an extremely important job as effectively as humans, and fear that jobs could be eliminated if the machines did the task too well.
Just because there is reticence among the analyst community doesn’t mean that embracing AI and ML is off the table. In fact, the authors at Booz Allen Hamilton also had some ideas on how those analyst fears could be put to rest. It’s an approach they’re referring to as “Analyst 2.0.”
It’s all about communication
AI and ML are, at their core, complex systems of algorithms that sort through data to find patterns or disparities. Once these patterns, disparities, or other outliers are identified, they trigger a response. For AI and ML to work, data scientists have to be behind the scenes, building and training the models and algorithms necessary to do the job.
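To make that concrete, here is a minimal sketch of the basic pattern – a hypothetical illustration, not anything drawn from the Thought Piece. A simple statistical model scores incoming readings, and anything that falls far outside the expected range triggers a response; in this case, flagging the item for an analyst rather than acting on it automatically.

```python
import statistics

def find_outliers(readings, threshold=2.0):
    """Flag readings that deviate sharply from the rest of the stream.

    A deliberately simple stand-in for the far more complex models
    real systems use: an "outlier" here is anything more than
    `threshold` standard deviations from the mean.
    """
    mean = statistics.mean(readings)
    stdev = statistics.stdev(readings)
    return [
        (i, value)
        for i, value in enumerate(readings)
        if stdev and abs(value - mean) / stdev > threshold
    ]

# Simulated sensor stream: mostly routine values, one sharp spike.
stream = [10.1, 9.8, 10.3, 9.9, 10.0, 42.0, 10.2, 9.7]

for index, value in find_outliers(stream):
    # The "response": route the flagged item to an analyst
    # for review instead of acting on it automatically.
    print(f"Reading {index} ({value}) flagged for analyst review")
```

The interesting work in a real system lives inside the model, but the division of labor is the same: the machine does the tedious scanning, and the human decides what the flag means.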
This is where some of the fear comes in for the analyst community. Analysts aren’t data scientists – nor do they want to be. Data scientists aren’t analysts, and they most likely don’t want to be either. If the machine is going to find what’s essential for analysts to do their jobs, it can’t fall to the data scientists alone to develop the AI and tools that the analysts will use. The analysts need input into that process.

The full Thought Piece, “Analyst 2.0: Redefining the Analysis Tradecraft,” is available as a complimentary download.
To quote Jeff Goldblum’s character in Jurassic Park, “Your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.” Scientists – even data scientists – are often more enamored with pushing the limits of what’s possible with new technologies such as AI and ML than with building what’s actually needed for the task at hand.
By fostering an environment and culture of collaboration and communication between the data scientists and the analysts, the agency can ensure that the data scientists are building what the analysts need to do their jobs – and not just trying to push the limits of what’s possible with AI.
The end result of increased collaboration and communication between these parties could be the creation of a “killer app” that puts everything an analyst needs to do their job at their fingertips, all in one place.
This app would – in theory – make the job of the analyst easier by presenting them with everything they need to make informed decisions and analyses, and doing so in an interface that is familiar and comfortable for them. This would go a long way toward eliminating the fear that AI is there to take their place, since the AI would clearly be in a supporting role – enabling analysts to do their jobs better.
So what about the fear of failure?
Trust, but verify
You probably remember that old Russian proverb thanks in large part to Ronald Reagan, who used it when talking about nuclear disarmament. Ultimately, the phrase has come to mean, “trust that someone/something will do what they promise to do, but check in on them just to be sure.”
This applies to new technologies the same as it applies to countries that have promised to dismantle their weapons of mass destruction.
When Google unveiled the spam filter in its revolutionary Gmail application, the company knew there would be people who didn’t trust the AI to separate spam from legitimate mail. So, the global tech giant built the Spam folder so that people could double-check and make sure that only what was truly spam was being filtered out of their inboxes. Users were trusting the spam filter to do its job, but verifying by checking the Spam folder on occasion just to make sure.
People can feel better about embracing a new technology if they have a “window” into how that technology is performing so that they can put their fears to rest. When the data scientists and technologists are designing their “killer app” for the intelligence community, it’s essential that they build a similar window for the analysts – something that allows them to verify that the AI is operating as intended.
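What might that window look like in practice? Here is a rough sketch – hypothetical names, scores, and thresholds, purely to illustrate the pattern – in which the system never silently discards anything: every call the model makes lands somewhere a human can audit it, just as Gmail’s Spam folder does.

```python
from dataclasses import dataclass, field

@dataclass
class TriageQueue:
    """Route items by model confidence while keeping every decision auditable."""
    flagged: list = field(default_factory=list)  # acted on automatically
    review: list = field(default_factory=list)   # the analyst's "window"

    def triage(self, item: str, score: float, cutoff: float = 0.9) -> None:
        # High-confidence calls are handled automatically...
        if score >= cutoff:
            self.flagged.append((item, score))
        # ...but lower-confidence calls are never thrown away: they land
        # in a queue the analyst can check, so trust can be verified.
        else:
            self.review.append((item, score))

# Hypothetical usage: in a real system the scores would come from a classifier.
queue = TriageQueue()
queue.triage("vehicle appears at site B", score=0.97)
queue.triage("possible change at site A", score=0.62)

print("Auto-flagged:", queue.flagged)
print("For analyst review:", queue.review)
```

The design choice that matters is the second branch: the model’s uncertain calls are surfaced rather than hidden, which is exactly the “trust, but verify” mechanic the Spam folder provides.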
AI could be the answer to helping the analyst community sort through and analyze the mountains of data that today’s ISR sensors and platforms are generating on behalf of the military and intelligence community. However, it can only be effective if analysts embrace it, and they’ll only embrace it if an “Analyst 2.0” approach to AI is adopted within the organization. That means perpetual communication and collaboration between data scientists and analysts, and a window into the actions of the AI so that analysts can trust the technology…but verify.