Wednesday 24 February 2010

Events guide: What's on in supercomputing

[Article by me on ZDNet UK, 24 February 2010]

The key events in the supercomputing calendar can provide real insights and a chance to network ...

http://www.zdnet.co.uk/news/it-strategy/2010/02/24/events-guide-whats-on-in-supercomputing-40041925/

Thursday 18 February 2010

Exascale or personal HPC?

[Originally posted on The NAG Blog]

Which is more interesting for HPC watchers - the ambition of exaflops or personal supercomputing? Anyone who answers "personal supercomputing" is probably not being honest (I welcome challenges!). How many people find watching cars on the local road more interesting than F1 racing? Or think local delivery vans more fascinating than the space shuttle? Of course, everyday cars and local delivery vans are more important for most people than F1 and the space shuttle. And so personal supercomputing is more important than exaflops for most people.

High performance computing at an individual or small group scale directly impacts a far broader set of researchers and business users than exaflops will (at least for the next decade or two). Of course, in the same way that F1 and the shuttle pioneer technologies that improve cars and other everyday products, so the exaflops ambition (and the petaflops race before it) will pioneer technologies that make individual scale HPC better.

One potential benefit for widespread technical computing that some are hoping for is an evolution in programming. It is almost certain that the software challenge of an exaflops supercomputer - a complex distributed processing and memory hierarchy demanding billion-way concurrency - will be the critical factor in success, and so tools and language evolutions will be developed to help with the task.

Languages might be extended (more likely than replaced by new languages) to help express parallelism better. "Better" may mean easier, or with assured correctness, rather than higher performance. Language implementations might evolve to better support robustness in the face of potential errors. Successful exascale applications might be expected to make much greater use of solver and utility libraries optimized for specific supercomputers. Indeed, one outlying idea is that libraries might evolve to become part of the computer system rather than part of the application. Developments like these should also make the task of programming personal-scale high performance computing much easier, reducing the expertise required to get acceptable performance from a system with tens of cores or GPUs.
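
As a concrete illustration of that directive-based direction - a sketch of my own, not from the original post - here is a serial C loop parallelised with an OpenMP pragma. The directive is exactly the kind of language extension described above: it declares the loop a parallel reduction and leaves the threading and the final combine to the compiler and runtime, while the numerical code itself is unchanged.

    /* Minimal sketch: expressing parallelism via a language extension
       (OpenMP directives on plain C). Build with e.g.: gcc -fopenmp sum.c */
    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
        const int n = 10000000;
        double sum = 0.0;

        /* The pragma declares the iterations independent and marks sum
           as a reduction variable; thread creation, work partitioning
           and the final combine are handled by the runtime. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; i++)
            sum += 1.0 / ((double)i + 1.0);

        printf("sum = %f (max threads: %d)\n", sum, omp_get_max_threads());
        return 0;
    }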

Of course, while we wait for the exascale benefits to trickle down, getting applications to achieve reasonable performance across many cores still requires specialist skills.
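
As an illustration of the sort of trap that still catches non-specialists - again a sketch of my own, using OpenMP in C - the loop below is perfectly correct, yet each thread repeatedly updates an array element that shares a cache line with its neighbours. This "false sharing" means the cores fight over the line, and the parallel version can run slower than the serial one; padding the per-thread data, or using a reduction clause as in the earlier sketch, avoids it.

    /* Sketch of a classic multicore pitfall: false sharing.
       Each thread writes only its own partial[] slot, so the result is
       correct, but adjacent slots share a cache line and the repeated
       writes cause heavy cache-coherency traffic between cores. */
    #include <stdio.h>
    #include <omp.h>

    #define MAX_THREADS 64   /* assumption: at most 64 threads are used */

    int main(void)
    {
        const int n = 10000000;
        double partial[MAX_THREADS] = {0.0};  /* adjacent doubles: shared cache lines */
        double sum = 0.0;

        #pragma omp parallel
        {
            int t = omp_get_thread_num();
            #pragma omp for
            for (int i = 0; i < n; i++)
                partial[t] += 1.0 / ((double)i + 1.0);  /* contended cache line */
        }

        for (int t = 0; t < MAX_THREADS; t++)   /* unused slots stay 0.0 */
            sum += partial[t];

        printf("sum = %f\n", sum);
        return 0;
    }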

Thursday 4 February 2010

Don't call it High Performance Computing?

[Originally posted on The NAG Blog]

Having just signed up for Twitter (HPCnotes), I've realised that the space I previously had to get my point across was nothing short of luxurious (e.g. my ZDNet columns). It's like the traditional challenge of the elevator pitch - can you make your point about High Performance Computing (HPC) within the 140-character limit of a tweet? It might even be a challenge to state what HPC is in 140 characters. Can we sum up our profession that simply? To a non-HPC person?

The inspired John West of InsideHPC fame wrote about the need to explain HPC some time ago in HPCwire. It's not an abstract problem. As multicore processors (whether CPUs or GPUs) become the default for scientific computing, the parallel programming technologies and methods of HPC are becoming important for all numerical computing users - even if they don't identify themselves as HPC users. In turn, of course, HPC benefits in sustainability and usability from the mass-market use of parallel programming skills and technologies.

I'll try to put it in 140 characters (less space for a link): Multicore CPUs promise extra performance but software must be optimised to take advantage. HPC methods can help.

It's not good - can you say it better? Add a comment to this blog post to try ...

For those of you finding this blog post from the short catch line above, hoping to find the answer to how HPC methods can help - well, that's what my future posts and those of my colleagues here will address.

Thursday 28 January 2010

Are we taking supercomputing code seriously?

[Article by me on ZDNet UK, 28 January 2010]

The supercomputing programs behind so much science and research are written by people who are not software pros ...

http://www.zdnet.co.uk/news/it-strategy/2010/01/28/are-we-taking-supercomputing-code-seriously-40004192/

Tuesday 15 December 2009

2009-2019: A Look Back on a Decade of Supercomputing

[Article by me for HPCwire, 15 December 2009]

As we turn the decade into the 2020s, we take a nostalgic look back at the last ten years of supercomputing. It's amazing to think how much has changed in that time.

http://www.hpcwire.com/features/2009-2019-A-Look-Back-on-a-Decade-of-Supercomputing-79351812.html?viewAll=y

Thursday 12 November 2009

Tough choices for supercomputing's legacy apps

[Article by me on ZDNet UK, 12 November 2009]

The prospect of hundreds of petaflops and exascale computing raises tricky issues for legacy apps ...

http://www.zdnet.co.uk/news/it-strategy/2009/11/12/tough-choices-for-supercomputings-legacy-apps-39869521/

Monday 10 August 2009

Personal supercomputing anyone?

[Article by me on ZDNet UK, 10 August 2009]

Personal supercomputing may sound like a contradiction in terms, but it definitely exists ...

http://www.zdnet.co.uk/news/it-strategy/2009/08/10/personal-supercomputing-anyone-39710087/

Thursday 18 June 2009

When supercomputing benchmarks fail to add up

[Article by me on ZDNet UK, 18 June 2009]

Using benchmarks to choose a supercomputer is more complex than just picking the fastest system ...

http://www.zdnet.co.uk/news/it-strategy/2009/06/18/when-supercomputing-benchmarks-fail-to-add-up-39664193/

Friday 20 March 2009

Are supercomputers just better liars?

[Article by me on ZDNet UK, 20 March 2009]

Supercomputers may be far more powerful than ordinary machines, but that does not make their predictions infallible ...

http://www.zdnet.co.uk/news/it-strategy/2009/03/20/are-supercomputers-just-better-liars-39629474/

Thursday 5 February 2009

What to do if your supercomputing supplier fails

[Article by me on ZDNet UK, 5 February 2009]

High-performance computing providers often live on the edge — technologically and financially. But if your supplier fails, it need not be a disaster ...

http://www.zdnet.co.uk/news/it-strategy/2009/02/05/what-to-do-if-your-supercomputing-supplier-fails-39610056/