Friday 19 August 2011

What happened to High Productivity Computing?

How do we make HPC more effective? Value for money is often difficult to demonstrate for high-impact strategic research facilities like HPC. Not so long ago, this concern meant that the familiar HPC acronym was hijacked to mean "High Productivity Computing", to emphasize that it is not only the raw compute performance at your disposal that counts but, more importantly, how well you are able to make use of that performance. In other words: how productive is it?

High Productivity Computing means balanced investment in application development, programming skills, user education, and software tools, as well as the full ecosystem of hardware (processor, interconnect, memory, storage, etc.). It means understanding why the HPC facility is essential - what business objectives it drives, or how it will enable new research - and ensuring the infrastructure and services are properly resourced to meet those goals. Usability. Programmability. Support. Access. Business processes. And so on. Rather than, as so often happens, procuring the fastest possible cluster first and thinking about the rest afterwards.

The focus on Productivity in "HPC" was about making sure the value of HPC was recognized and the optimum benefit was extracted for the budget. This is clearly still a valid and important topic. So why has the use of "High Productivity Computing" faded over the last few years, so that HPC now almost always means "High Performance Computing" again?

The truth is that HPC ("performance") always did include productivity, for those who understood it. Some good issues were raised while the term High Productivity Computing was popular, but perhaps the need for a special term has diminished.

However, the message remains the same - hardware alone is not HPC. Real, effective, value-for-money HPC needs hardware + software tools + application software + skilled people + ...
