If there’s one thing that today’s military is exceptional at, it’s generating data. The military has deployed advanced sensors and new intelligence, surveillance, and reconnaissance (ISR) solutions across each of its domains. Combined, these sensors and solutions generate terabytes of intelligence every single day.
However, simply generating data and intelligence isn’t useful on its own. That data – whether real-time streaming video or other forms of intelligence – needs to be analyzed and interpreted to deliver actionable information to the military’s senior leaders and decision makers.

If military leaders are going to receive any value from the intelligence they’re generating, that data needs to be sorted through. Hours of video need to be watched. Discrepancies need to be documented. Notable changes need to be analyzed for their significance to the mission and to national security. And this has all traditionally been the job of analysts in the intelligence community.
But keeping pace with the sheer deluge of data coming in from today’s ISR platforms is becoming nearly impossible for analysts.
However, a recent Thought Piece entitled “Analyst 2.0: Redefining the Analysis Tradecraft,” developed by Booz Allen Hamilton, the world’s largest government consultancy, indicates that there could be hope for taking this burden off of analysts. That hope comes in the form of today’s advanced Artificial Intelligence (AI) and Machine Learning (ML) technologies. But for these technologies to be useful and effective, the analyst community needs to embrace them.
Why AI is an analyst’s best friend
Sitting and watching hours of redundant, real-time video content isn’t nearly as entertaining as watching a Hollywood movie. In many cases, the analyst watching that video is observing a scene in which nothing changes for hours or days at a time, waiting for something to change or a discrepancy to arise.
This is repetitive, tedious work that keeps analysts from performing high-value tasks. It’s also work that the Thought Piece correctly states could be accomplished by today’s advanced AI and ML technologies.
With AI and ML technologies in place, computers would scan intelligence data – such as video intelligence – and flag the changes and activities that could indicate a threat.
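To make that idea concrete, the sketch below shows one very simple way a machine could flag “something changed” moments in otherwise static footage: frame differencing with OpenCV. This is purely illustrative – the video path, thresholds, and technique are assumptions for the sake of the example, not anything described in the Thought Piece, and an operational system would rely on far more sophisticated models.

```python
# Minimal sketch of automated change flagging in largely static surveillance video.
# Illustrative only: simple frame differencing with OpenCV, not the approach
# described in the Thought Piece. The file path and thresholds are assumptions.
import cv2

VIDEO_PATH = "feed.mp4"        # hypothetical recorded ISR video file
PIXEL_DELTA = 25               # per-pixel intensity change treated as "movement"
CHANGED_PIXELS_TO_FLAG = 5000  # how many changed pixels count as a notable event

cap = cv2.VideoCapture(VIDEO_PATH)
prev_gray = None
frame_index = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break  # end of video

    # Grayscale plus blur to suppress sensor noise before comparing frames.
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)

    if prev_gray is not None:
        # Count pixels whose brightness changed by more than PIXEL_DELTA.
        diff = cv2.absdiff(prev_gray, gray)
        _, mask = cv2.threshold(diff, PIXEL_DELTA, 255, cv2.THRESH_BINARY)
        if cv2.countNonZero(mask) > CHANGED_PIXELS_TO_FLAG:
            # In a real workflow, this timestamp would be queued for analyst review.
            print(f"Change flagged at frame {frame_index}")

    prev_gray = gray
    frame_index += 1

cap.release()
```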
In this environment, the analyst’s role would shift to reviewing the intelligence the AI has flagged to determine whether it is truly relevant to the mission and to national security. This would effectively free the analyst community to perform higher-value work – instead of identifying suspicious activity, they’d be analyzing those activities to determine their potential meaning.
Fear of change and competition
So, if AI and ML could make analysts more efficient and more effective, while transitioning them to higher-value, more mission-critical tasks, why isn’t the intelligence community racing to adopt them?

Much of the hesitation comes down to a fear of change and a fear among analysts for their jobs.
If you polled many Americans about their openness to riding in an autonomous vehicle, they would probably be hesitant, fearing that the machine behind the wheel wouldn’t be as capable of identifying and responding to changing situations as a human driver. Yet these same people will hop into the back seat of an Uber with a stranger behind the wheel, no questions asked.
Humans remain hesitant to embrace the idea that an intelligent machine can perform a job as effectively as a human can. This is just as true for analysts. But even if AI were given the reins to analyze intelligence and passed the test with flying colors, there would still be concern about setting it loose on the nation’s intelligence data – but for a different reason.
If AI and ML prove to be an effective alternative to human analysts, those analysts are understandably concerned that their positions within the nation’s intelligence community could be at risk. After all, there’s no reason to pay an analyst’s salary when a machine that doesn’t need a lunch break or paid sick leave can do the job just as effectively.
The Thought Piece concludes that, together, these fears – the fear of change and the fear of being replaced – are creating a culture that is toxic for the adoption of AI and ML technologies in the intelligence and analyst communities. But there are ways that the senior leadership at the nation’s intelligence agencies and military intelligence organizations can put these fears to rest and drive the increased adoption of AI and ML.
In my next post on the Government Technology Insider, I’ll look at how the Booz Allen Hamilton Thought Piece says senior leaders can make AI and ML palatable to analysts.
To read the Thought Piece in its entirety, download a complimentary copy of “Analyst 2.0: Redefining the Analysis Tradecraft” by clicking HERE.