With the amount of data that agencies must parse, process, manage, and store in the era of interconnected government, it’s no wonder that they’re looking for solutions that make this critical function less overwhelming. As part of his commitment to IT modernization, President Obama authorized the National Strategic Computing Initiative (NSCI) to drive U.S. global competitiveness. The NSCI “is a whole-of-government effort designed to create a cohesive, multi-agency strategic vision and Federal investment strategy, executed in collaboration with industry and academia, to maximize the benefits of HPC for the United States.”
While agencies like the Department of Energy have long embraced supercomputing to deliver on their missions, many other agencies have been reluctant to buy into High Performance Computing, largely because of cost. However, with the era of big data truly upon us, and with the drive toward open data initiatives, the cost of not being able to manage petabytes of data quickly and easily is now seen as more significant than the cost of infrastructure investment.
In a recent interview with GovDataDownload, Gabriel Broner, vice president and general manager of High Performance Computing at SGI, discussed NASA’s commitment to, and investment in, HPC to meet its mission of understanding our universe more fully. Broner also shared that the National Center for Atmospheric Research (NCAR) has recently made a significant investment in supercomputing to analyze long-term weather patterns, hurricane paths, precipitation, and water levels. That analysis helps determine how best to protect human lives and prevent property loss during storms, and assists farmers in planning crops to optimize yields based on yearlong weather predictions.
For agency CIOs considering an investment in HPC, Broner has some advice to share. Beyond ensuring that the technical specifications meet the requirements, CIOs should also weigh the people who will support them. Not only should CIOs understand how to grow their supercomputer without massive service interruptions, but their vendor should also be able to help them tune the system to deliver the application performance and extend capabilities to meet interim and long-term needs. He went on to share that “like scientific research, building a sophisticated HPC infrastructure is a team effort. The relationship you want to see is all contributors working side by side, to not only meet current goals, but to innovate and push the boundaries of High Performance Computing to benefit the customer and society.”
Interested in learning more about HPC? You can read the full article here…