These days, many government agencies need to make large data sets available to diverse groups within their organizations in order to accomplish mission-critical goals. The size of these data sets alone presents a challenge, but the situation becomes even more complicated when you factor in that, despite working for the same agency, the researchers, analysts, and others using the data are located not only across the United States but around the world, often in remote locations. And there's one more complicating factor: because of past procurement policies, these research centers, labs, and offices are separated not only by geography but by technology as well. While the move to bulk purchase agreements is harmonizing technology within and across agencies, each location often runs a mix of legacy systems that may not appear anywhere else.
How, then, can agencies overcome these obstacles to ensure that the business of government is accomplished in a timely and cost-efficient manner?
One key consideration is to ensure that database applications can both scale to meet demand and function across platforms. It's an added bonus if those applications are also easy to use and administer.
Is this all possible? A recent Oracle case study of Genoscope, the French National Genomic Sequencing Center, explores not only how these database application goals can be met in theory, but also how they can be put into practice. Click here to access the report.