Agencies face many challenges when it comes to data sharing. For example, there is often no solid process for validating information as it is searched and shared, which makes it difficult for agencies to verify what data was used to build a solution. This is especially important when using external Artificial Intelligence (AI) models: knowing what data was used to train them helps assure agencies that no bad actors or corrupted data compromised the model.
Concepts such as federated AI offer a possible solution for validating data. Federated AI lets agencies share insights by sending algorithms to where each agency's data resides, rather than pooling the data in a central location. Most importantly, the raw data never leaves the agency's security perimeter, which allows agencies to trust that the data they are working with is still valid.
These were the key themes in the third part of our podcast series, The Evolution of Artificial Intelligence in Government, hosted on Government Technology Insider, where Kal Voruganti, Senior Fellow and Vice President at Equinix, and Scott Andersen, Distinguished Solution Architect at Verizon, discussed the topic further.
“In the case of data or any AI models that you’re buying from external sources, it’s challenging to know really who built this model, what data was used, etc. The second big issue I see is lack of good templates for governance for sharing of data between consultants,” said Voruganti. “The last challenge is technology. There is a need for solutions that help people keep their raw data with themselves, and only exchange or trade insights. For example, you send the algorithm to the location and then let the algorithm work on that data within your agency’s four walls, then you share the insights.”
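To make that pattern concrete, here is a minimal sketch in Python (not from the podcast) of the idea Voruganti describes: each agency runs the training algorithm locally on data that never leaves its perimeter, and only the resulting model parameters, the "insights," are sent to a coordinator for aggregation (federated averaging). The agency names, the toy linear model, and all function names are hypothetical illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_train(features, labels, lr=0.1, epochs=100):
    """Train a simple linear model locally; raw data stays inside the agency."""
    weights = np.zeros(features.shape[1])
    for _ in range(epochs):
        # Gradient descent on mean-squared error over the local data only.
        grad = features.T @ (features @ weights - labels) / len(labels)
        weights -= lr * grad
    return weights  # only the learned parameters cross the security perimeter

# Simulated private datasets for two agencies (never pooled together).
true_weights = np.array([1.0, 2.0, -1.0])
agency_a_x = rng.normal(size=(100, 3))
agency_a_y = agency_a_x @ true_weights + rng.normal(scale=0.1, size=100)
agency_b_x = rng.normal(size=(100, 3))
agency_b_y = agency_b_x @ true_weights + rng.normal(scale=0.1, size=100)

# The coordinator aggregates only the shared insights, never the raw records.
shared_insights = [
    local_train(agency_a_x, agency_a_y),
    local_train(agency_b_x, agency_b_y),
]
global_model = np.mean(shared_insights, axis=0)
print("Aggregated model weights:", global_model)
```

In a real deployment the "insights" would typically be model updates exchanged over a secure channel rather than plain arrays, but the design choice is the same: the algorithm travels to the data, and only derived parameters leave each agency's four walls.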
Listen to the full podcast below: