
Friday 15 June 2012

Supercomputers are for dreams

I was invited to the 2012 NCSA Annual Private Sector Program (PSP) meeting in May. In my few years of attending, this has always been a great meeting (attendance by invitation only), with an unusually high concentration of real HPC users and managers from industry.

NCSA have recently released streaming video recordings of the main sessions - the videos can be found as links on the Annual PSP Meeting agenda page.

Bill Gropp chaired a panel session on "Modern Software Implementation", with Gerry Labedz and me as panellists.

The full video (~1 hour) is here, but I have also prepared a breakdown of the panel discussion below in this blog post.


Friday 19 August 2011

What happened to High Productivity Computing?

How do we make HPC more effective? Demonstrating value for money from high-impact strategic research facilities like HPC is often difficult. Not so long ago, this concern meant that the familiar HPC acronym was hijacked to mean "High Productivity Computing", to emphasize that what counts is not only the raw compute performance at your disposal but, more importantly, how well you are able to make use of that performance. In other words: how productive is it?

Monday 13 September 2010

Do you want ice with your supercomputer?

[Originally posted on The NAG Blog]

“Would you like ice with your drink?” It’s a common question of course. One that divides people – few will think “I don’t mind” – most have a firm preference one way or the other. There are people who hate ice with their drink and those who freak if there is none. National stereotypes have a role to play – in the USA the question is not always asked – it’s assumed you want ice with everything. In the UK, you often have to ask specifically to get ice.



Yet the role of ice in making our drinks chilled is misleading. I once had a discussion with a leading American member of the international HPC community about this. “No ice”, he was complaining as we headed out of a European country, “they had no ice for the drink”.



“I don’t get this obsession with ice”, I chipped in. “What?!” He looked at me as if I were mad. “Why do you like your coke warm?”



“Ah, but that’s just it”, I replied. “I hate warm drinks – I really like my coke chilled. But surely, in this modern world over a century after the invention of the refrigerator, it’s not unreasonable to expect the fluid to be chilled – without the need to drop lumps of solid water into it?”



“Ah, fair point”, he conceded.



What has this got to do with supercomputing? Perhaps the common thread is that usually we just accept the habitual choices of ways to do things – and don’t often step back to think – “are those the only choices?”



Maybe we should step back a little more often and ask ourselves what we are trying to achieve with HPC – and are the usual choices the only ways forward? Or are there different ways to approach the problem that will deliver simpler, better or cheaper performance?



Perhaps your business/research goals mean you need to conduct more complex modelling, or you need faster performance. Maybe the drive of computing technology towards many-core processors rather than faster processors is limiting your ability to achieve this. (I have had several conversations recently where companies are buying older technology because their software won’t run on multicore.)



The “ice or no ice” question might be whether or not to upgrade your HPC with the latest multicore processors. But what about the “just chill the fluid” option? Well, how about upgrading the software instead, or as well?



NAG has plenty of case studies to show where enhancements to software have achieved huge gains in performance or capability (e.g., www.hector.ac.uk/cse/reports).
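
The details are in those reports, but as a deliberately tiny illustration of the general idea (my own sketch, not an example taken from the case studies), consider a matrix multiply written as a hand-rolled triple loop versus the same operation handed to a tuned BLAS routine. Both compute the same result; on most systems the library call is many times faster, with no change to the hardware at all.

    #include <cblas.h>   /* link against any BLAS implementation, e.g. -lcblas */

    /* Hand-written C = A * B for n x n row-major matrices. Correct, but the
       compiler alone rarely gets close to peak performance from this. */
    void matmul_naive(int n, const double *a, const double *b, double *c)
    {
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j) {
                double sum = 0.0;
                for (int k = 0; k < n; ++k)
                    sum += a[i * n + k] * b[k * n + j];
                c[i * n + j] = sum;
            }
    }

    /* The same C = A * B delegated to dgemm, a routine that has been tuned
       for the cache hierarchy and vector units of the machine it runs on. */
    void matmul_blas(int n, const double *a, const double *b, double *c)
    {
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n, 1.0, a, n, b, n, 0.0, c, n);
    }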



Sometimes buying more compute power is the right answer. Sometimes extracting more efficient performance from what you have is the answer. Bringing the two together, with a balance of hardware upgrades and software innovations, might well give you the best chance of optimising cost efficiency, performance and sustainability of performance.

Monday 30 August 2010

Me on HPC 2

Things I have said (or have been quoted as saying - not always the same thing!) - some older interviews with me in various publications about HPC, multicore, etc ...


Successful Deployment at Extreme Scale: More than Just the Iron
The Exascale Report
August 2010, by John West

[the full article requires a subscription; the extracts here are not complete and have been modified slightly to respect that]

"cost of science, not just the cost of supercomputer ownership"

"lead time, and funding, to get the user community ready"

"spend a year or more selecting a machine and then deploy it as quickly as possible, makes it very difficult to build a community and get codes ready ahead of time"

"software must be viewed as part of the scientific instrument, in this case a supercomputer, that needs its own investment. High performance computing is really about the software; whatever hardware you are using is just an accelerator system."

"a machine is deployed and then obsolete within three years. And the users often have no idea what architecture is coming next. There is no real chance for planning, or a return on software development investment."

Thursday 18 February 2010

Exascale or personal HPC?

[Originally posted on The NAG Blog]

Which is more interesting for HPC watchers - the ambition of exaflops or personal supercomputing? Anyone who answers "personal supercomputing" is probably not being honest (I welcome challenges!). How many people find watching cars on the local road more interesting than F1 racing? Or think local delivery vans more fascinating than the space shuttle? Of course, everyday cars and local delivery vans are more important for most people than F1 and the space shuttle. And so personal supercomputing is more important than exaflops for most people.

High performance computing at an individual or small group scale directly impacts a far broader set of researchers and business users than exaflops will (at least for the next decade or two). Of course, in the same way that F1 and the shuttle pioneer technologies that improve cars and other everyday products, so the exaflops ambition (and the petaflops race before it) will pioneer technologies that make individual scale HPC better.

One potential benefit to widespread technical computing that some are hoping for is an evolution in programming. It is almost certain that the software challenges of an exaflops supercomputer, with a complex distributed processing and memory hierarchy demanding billion-way concurrency, will be the critical factor in its success, and so tools and language evolutions will be developed to help with the task.

Languages might be extended (more likely than new languages) to help express parallelism better. Better may mean easier or with assured correctness rather than higher performance. Language implementations might evolve to better support robustness in the face of potential errors. Successful exascale applications might expect to make much greater use of solver and utility libraries optimized for specific supercomputers. Indeed one outlying idea is that libraries might evolve to become part of the computer system rather than part of the application. Developments like these should also help to make the task of programming personal scale high performance computing much easier, reducing the expertise required to get acceptable performance from a system using tens of cores or GPUs.
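
To make the language-extension point concrete, here is my own minimal sketch using a directive-based extension that already exists today (OpenMP), rather than any specific exascale proposal; the function and numbers are purely illustrative. The loop itself is unchanged serial C - the pragma tells the compiler that the iterations are independent and may be spread across however many cores are available.

    #include <stdio.h>
    #include <stddef.h>

    /* Scale a vector by a constant. Build with OpenMP enabled, e.g.
       gcc -fopenmp scale.c; without it the pragma is simply ignored and
       the code runs serially, which is part of the appeal of extending an
       existing language rather than replacing it. */
    static void scale(double *x, size_t n, double a)
    {
        #pragma omp parallel for
        for (size_t i = 0; i < n; ++i)
            x[i] *= a;
    }

    int main(void)
    {
        enum { N = 1000000 };
        static double x[N];
        for (size_t i = 0; i < N; ++i)
            x[i] = (double)i;
        scale(x, N, 2.0);
        printf("x[10] = %f\n", x[10]);
        return 0;
    }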

Of course, while we wait for the exascale benefits to trickle down, getting applications to achieve reasonable performance across many cores still requires specialist skills.

Thursday 28 January 2010

Are we taking supercomputing code seriously?

[Article by me on ZDNet UK, 28 January, 2010]

The supercomputing programs behind so much science and research are written by people who are not software pros ...

http://www.zdnet.co.uk/news/it-strategy/2010/01/28/are-we-taking-supercomputing-code-seriously-40004192/

Monday 10 August 2009

Personal supercomputing anyone?

[Article by me on ZDNet UK, 10 August, 2009]

Personal supercomputing may sound like a contradiction in terms, but it definitely exists ...

http://www.zdnet.co.uk/news/it-strategy/2009/08/10/personal-supercomputing-anyone-39710087/

Thursday 18 December 2008

Santa's HPC Woes

[Article by me for HPCwire, December 18, 2008]

In a break with centuries of reticence, perhaps the most widely recognised distributor of festive spirit and products, Santa Claus, has revealed some details of the HPC underpinning his time-critical global operations.

http://www.hpcwire.com/features/Santas-HPC-Woes-36399314.html

Friday 31 October 2008

Is supercomputing just about performance?

[Article by me on ZDNet UK, 31 October, 2008]

You may think you know what 'HPC' stands for — but conflicting views on what that 'P' really stands for reflect important changes taking place within the field of supercomputing ...

http://www.zdnet.co.uk/news/servers/2008/10/30/is-supercomputing-just-about-performance-39534285/

Thursday 14 August 2008

NAG Embarks on a New Business Venture

[Interview with me in HPCwire, August 14, 2008]

by John E. West, for HPCwire

... responding to changes in computing at both ends of the spectrum, [NAG] is positioning itself as the place to go, not just for shrink-wrapped libraries, but also for education and expertise in how to program in parallel, and even for expert advice on how to buy, build and run your own supercomputer. HPCwire talked to Andrew Jones, vice-president of HPC business at NAG, on what he has in mind for this new business and how he sees the future of HPC and parallel programming shaping up ...

http://www.hpcwire.com/features/NAG_Embarks_on_a_New_Business_Venture.html?viewAll=y