Thursday 2 August 2012

What is the point of supercomputers?

Maybe it seems an odd question to ask on a blog dedicated to High Performance Computing (HPC). But it is good to question why we do things – hopefully leading us to a clearer justification for investing money, time and effort. Ideally, this would also enable better delivery – the “how” supporting the “why” – focusing on the best processes, technologies, etc. to achieve the goals identified in the justification.

So, again, why supercomputing? Perhaps you think the answer is obvious – supercomputing enables modelling and simulation to be done faster than with normal computers, or enables bigger problems to be solved.

First, an aside: for anyone (even seasoned HPC people) trying to explain and evangelize HPC or supercomputing to other scientists, potential new users, funding bodies, politicians – well, anyone really – I can firmly recommend John West’s talk at the HPCC 2012 conference as an excellent use of your time.

Gary Johnson wrote a great article in HPC Wire recently, discussing the case for continually investing in the biggest supercomputers in the world, and raising the possibility that the world’s largest supercomputers are becoming too detached from the actual needs of the scientists and engineers who are used to justify these large procurements.

However, I was recently prompted to think of the reason for supercomputing as comprising two related but different opportunities.

The first opportunity is about enabling faster, bigger, better simulation and modelling – supporting scientists and engineers to be more productive (faster results, searching more of the parameter space, etc.), and to exploit more accurate computer predictions (bigger, more complex models) in their research, design and testing processes. The business need or scientific case for bigger, better, faster computational modelling is usually readily made and, to many, that alone is enough to answer "why supercomputing?". Gary's HPC Wire article discusses this case, and the resulting different ways to deliver such a supercomputing resource.

The second type of opportunity is about enabling new or game-changing capabilities. At this year’s NCSA Private Sector Program annual meeting, I was joined by Gerry Labedz on a panel moderated by Bill Gropp, on the topic of modern software for supercomputing – see here for my summary of the panel and a link to the video. Among other things, in this panel discussion, we spoke about the idea that supercomputing is about enabling dreams.

This concept of radical capability is a critical factor when considering why supercomputing is important to a business, to the economy, to science, etc. It does not necessarily mean using the most powerful supercomputers in the world. For a researcher or business previously using a desktop computer for modelling/simulation, being able to exploit the power of a few hundred cores in a small cluster could be game-changing.

This happens when the new computational power opens up not just faster results but new ways of approaching a problem. In desperate search of an analogy, consider air travel. For many users, it is a faster and better way to get from A to B than driving. For other users, it enables them to do something they just couldn't do by driving. This might be reaching an overseas destination. Or the speed of air travel might make a specific vacation possible, because they don't have enough holiday entitlement to take the time to reach the destination by slower means of transport (car/ship). Thus, the same platform can deliver very beneficial improvements in operational performance to some users, whilst enabling entirely new experiences to other users.

And so to supercomputing. It is not the size of the supercomputer that distinguishes between the "bigger, better, faster" and "new capabilities" benefits, but what it means to the business processes or research methods of a given user. There is obviously a grey area between these two types of supercomputing opportunity but, in my view, the difference is real - and is measured in terms of the impact on the business or research agenda, not the scale of computing resources used.

1 comment:

Josh said...

I think the biggest plug for supercomputing recently was "Watson" on "Jeopardy." It showed the ordinary computer user the "what" and even a little of the "why" of supercomputing.