Monday, 9 November 2015

SC15 Preview

SC15 - the biggest get-together of the High Performance Computing (HPC) world - takes place next week in Austin, TX. Around 10,000 buyers, users, programmers, managers, business development people, funders, researchers, media, etc. will be there.

With a large technical program, an even larger exhibition, and plenty of associated workshops, product launches, user groups, etc., SC15 will dominate the world of HPC for a week, plus most of this week leading up to it. It is one of the best ways for HPC practitioners to share experiences, learn about the latest advances, and build collaborations and business relationships.

So, to whet your appetite, here is the @hpcnotes preview of SC15 - what I think might be the key topics, things to look out for, what not to miss, etc.

New supercomputers

It's always one of the aspects of SC that grabs the media and attendee attention the most. Which new biggest supercomputers will be announced? Will there be a new occupier of the No.1 spot on the Top500 list? Usually I have some idea of what new supercomputers are coming up before they are public, but this year I have no idea. My guess? No new No.1. A few new Top20 machines. So which one will win the news coverage?

New products

Despite the community repeatedly acknowledging that the whole system is important - memory, interconnect, I/O, software, architecture, packaging, etc. - judging by media attention and informal conversations, we still seem to get most excited by the processors.

Monday, 5 October 2015

HPC Bingo

A big part of SC (Austin in 2015) is actually getting there. Most attendees will have to navigate the joys of long-distance air travel. If you travel enough, or play the game wisely, you can secure frequent-flyer elite status, which helps make the air travel more bearable. Here is a version of elite status bingo for HPC. I have listed some categories and the "achievements" required for each. Can you claim elite HPC status?

HPC System User category

There have been lots of systems in HPC over the years, but we should stick to options that even a recent recruit to HPC might be able to claim. You can award yourself this category if you have used (logged into and run or compiled code) each of these systems:
  • IBM Power system
  • Cray XT, XE, or XC
  • SGI shared memory system - Origin, Altix or UV
  • x86 cluster
  • A system with any one of SPARC, vector, ARM, GPU, Phi, or FPGA

HPC Programmer category

Award yourself this category if you have written programs to run on an HPC system in each of these:
  • Fortran 77
  • Fortran 90 or later
  • C
  • MPI
  • OpenMP
  • Any one of CUDA, OpenACC, OpenCL, Python, R, Matlab

HPC Talker/Buzzword category

Buzzwords seem to be an integral part of HPC. To be awarded this category, you must have used each of these in talks (PowerPoint etc.) since SC14:
  • Big Data
  • Any of green computing, energy efficient computing, or power aware computing
  • One of my HPC analogies?
  • "it's all about the science" (but then just talked about the HPC technology like everyone else!)
  • Any reference to "FLOPS are free, data movement is hard" or similar
  • Exascale

Previous SC content ...

I'll write some new content for SC15 Austin soon but while you are waiting, here are two of my previous writings on SC:

Essential Analogies for the HPC Advocate

This is an update of a two-part article I wrote for HPC Wire in 2013: Part 1 and Part 2.

An important ability for anyone involved in High Performance Computing (HPC or supercomputing or big data processing, etc.) is to be able to explain just what HPC is to others.

"Others" include politicians, Joe Public, graduates possibly interested in HPC, industry managers trying to see how HPC fits into their IT or R&D programs, or family asking for the umpteenth time "what exactly do you do?"

One of the easiest ways to explain HPC is to use analogies that relate the concepts to things that the listener is more familiar with. So here is a run-through of some useful analogies for explaining HPC or one of its concepts:

The simple yet powerful: A spade

Need to dig a hole? Use the right tool for the job – a spade. Need to dig a bigger hole, or a hole through tougher material like concrete? Use a more powerful tool – a mechanical digger.

Now instead of digging a hole, consider modeling and simulation. If the model/simulation is too big or too complex – use the more powerful tool: i.e. HPC. It’s nice and simple – HPC is a more powerful tool that can tackle more complex or bigger models/simulations than ordinary computers.

There are some great derived analogies too. You can hand a spade to almost anyone and they will be able to dig a hole without much further instruction. But hand a novice the keys to a mechanical digger, and it is unlikely they will be able to operate the machine effectively without either training or a lot of on-the-job learning. Likewise, HPC requires training to be able to use the more powerful tool effectively. Buying a mechanical digger also requires expertise that buying a spade doesn't. And so on.

It neatly focuses on the purpose and benefit of HPC rather than the technology itself. If you’ve heard any of my talks recently you will know this is an HPC analogy that I use myself frequently.

The moral high ground: A science/engineering instrument

I’ve occasionally accused the HPC community of being riddled with hypocrites – we make a show of “the science is what matters” and then proceed to focus the rest of the discussion on the hardware (and, if feeling pious or guilty, we mention “but software really matters”).

However, there is a critical truth to this – the scientific (or engineering) capability is what matters when considering HPC. I regularly use this perspective, often very firmly, myself: a supercomputer is NOT a computer – it is a major scientific instrument that just happens to be built using computer technology. Just because it is built from most of the same components as commodity servers does not mean that modes of usage, operating skills, user expectations, etc. should be the same. This helps to put HPC into the right context in the listener's mind – compare it to a major telescope, a wind tunnel, or even the LHC at CERN.

The derived analogies are effective too – expertise in the technology itself is required, not just the science using the instrument. Sure, the skills overlap but they are distinct and equally important.

This analogy focuses on the purpose and benefit of HPC, but also includes a reference to it being based on a big computer.

Thursday, 27 August 2015

The price of open-source software - a joint response

This viewpoint is published jointly on (personal blog), (personal blog) under a CC-BY licence. It was written by Neil Chue Hong (Software Sustainability Institute), Simon Hettrick (Software Sustainability Institute), Andrew Jones (@hpcnotes & NAG), and Daniel S. Katz (University of Chicago & Argonne National Laboratory).

In their recent paper, Krylov et al. [1] state that the goal of the research community is to advance “what is good for scientific discovery.” We wholeheartedly agree. We also welcome the debate on the role of open source in research, begun by Gezelter [2], in which Krylov was participating. However, we have several concerns with Krylov’s arguments and reasoning on the best way to advance scientific discovery with respect to research software.

Gezelter raises the question of whether it should be standard practice for software developed by publicly funded researchers to be released under an open-source licence. Krylov responds that research software should be developed by professional software developers and sold to researchers.

We advocate that software developed with public funds should be released as open-source by default (supporting Gezelter’s position). However, we also support Krylov’s call for the involvement of professional software developers where appropriate, and support Krylov’s argument that researchers should be encouraged to use existing software where possible. We acknowledge many of Krylov’s arguments of the benefits of professionally written and supported software.

Our first major concern with Krylov’s paper is its focus on arguing against an open-source mandate on software developed by publicly funded researchers. To the knowledge of the authors, no such mandate exists. It appears that Krylov is pre-emptively arguing against the establishment of such a mandate, or even against it becoming “standard practice” in academia. There is a significant difference between a recommendation of releasing as open-source by default (which we firmly support) and a mandate that all research software must be open source (which we don’t support, because it hinders the flexibility that scientific discovery needs).

Our second major concern is Krylov’s assumption that the research community could rely entirely on software purchased from professional software developers. We agree with this approach whenever it is feasible. However, by concentrating on large-scale quantum chemistry software, Krylov overlooks the diversity of software used in research. A significant amount of research software is at a smaller scale: from few-line scripts to short programs. Although it is of fundamental importance to research, this small-scale software is typically used by only a handful of researchers. There are many benefits in employing professionals to develop research software but, since so much research software is not commercially viable, the vast majority of it will continue to be developed by researchers for their own use. We do advocate researchers engaging with professional software developers as far as appropriate when developing their own software.

Our desire is to maximise the benefit of software by making it open—allowing researchers other than the developers to read, understand, modify, and use it in their own research—by default. This does not preclude commercial licensing where it is both feasible and the best way of maximising the software's benefit. We believe this is also the central message of Gezelter.

In addition to these two fundamental issues with Krylov's paper, we would like to respond to some of the individual points raised.

Tuesday, 17 June 2014

Secrets of the Supercomputers

These are revelations from inside the strange world of supercomputing centers. Nobody is pretending these are real stories. They couldn’t possibly be. Could they?

On one of my many long-haul airplane journeys this year, I caught myself thinking about the strange things that go on inside supercomputer centers - and other parts of the HPC world. I thought it might be fun to poke at and mock such activities while trying to make some serious points.

Since the flight was a long one, I started writing ... and so "Secrets of the Supercomputers" was born.

You can find Episode 1 at HPC Wire today, touching on the topic of HPC procurement.

No offence intended to anyone. Gentle mocking, maybe. Serious lessons, definitely.

Take a look here for some serious comments on HPC procurement at the NAG blog.

Tuesday, 10 June 2014

Silence ...

Really, October 2013? That long since I wrote a blog? Not even anything for SC13? Oops. Still, busy is good. Be nice to get some more blog posts again though. Maybe a preview of ISC14 in the next few days ...

Friday, 18 October 2013

Essential guide to HPC on twitter

Who are the best HPC people on twitter?

A good question posed by Suhaib Khan (@suhaibkhan) - which he made tougher by saying "pick your top 5". A short debate followed on twitter but I thought the content was useful enough to record in a blog post for community reference. I also strongly urge anyone to provide further input to this topic and I'll update this post.

Some rules (mine not Suhaib's):
  1. What are the minimum set of accounts you can follow and still expect to catch most of the HPC news, gossip, opinion pieces, analysis and key technical content?
  2. How to avoid too much marketing?
  3. How to access comment and debate beyond the news headlines?
  4. Which HPC people are not only active but also interactive on twitter?
I cheated a little on the "top 5" by suggesting 4 themes:

Thursday, 10 October 2013

Supercomputing - the reality behind the vision

My opinion piece "Supercomputing - the reality behind the vision" was published today in Scientific Computing World, where I:
  • liken a supercomputer to a "pile of silicon, copper, optical fibre, pipework, and other heavy hardware [...] an imposing monument that politicians can cut ribbons in front of";
  • describe system architecture as "the art of balancing the desires of capacity, performance and resilience against the frustrations of power, cooling, dollars, space, and so on";
  • introduce software as magic, as infrastructure, and as a virtual knowledge engine;
  • and note that "delivering science insight or engineering results from [supercomputing] requires users";
  • and propose that we need a roadmap for people just as much as for the hardware technology.

Read the full article here:

Friday, 30 August 2013

All software needs to be parallel

I often use this slide to show why all software has to be aware of parallel processing now.

In short, if your software does not exploit parallel processing techniques, then your code is limited to less than 2% of the potential performance of the processor. The arithmetic behind that claim is simple: a modern processor delivers its performance through many cores, each with wide vector (SIMD) units, so serial scalar code uses just one lane of one core. And this is just for a single processor - it is even more critical if the code has to run on a cluster or a supercomputer.
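As a rough sketch of that arithmetic (the core count, vector width, and FMA factor below are illustrative assumptions for a generic modern server chip, not measurements of any specific processor):

```python
# Illustrative (hypothetical) modern server CPU - not any specific chip.
cores = 16          # physical cores on the chip
vector_width = 8    # double-precision SIMD lanes per core
fma = 2             # a fused multiply-add counts as two ops per lane

peak_ops_per_cycle = cores * vector_width * fma   # fully parallel, vectorised code
serial_ops_per_cycle = 1                          # serial, scalar, no FMA

fraction = serial_ops_per_cycle / peak_ops_per_cycle
print(f"Serial scalar code reaches about {fraction:.1%} of peak")  # about 0.4%
```

The exact numbers vary by chip, but the conclusion is robust: with anything like these parameters, serial scalar code lands well under the 2% figure, and the gap only widens across the nodes of a cluster.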

Thursday, 18 July 2013

An early blog about SC13 Denver - just for fun ...

As SC13 registration opens this week, it occurs to me both how far away SC13 is (a whole summer and several months after that) and how close it is (only a summer and a month or two). It got me thinking about how far ahead people plan for SC. I have heard of people who book hotels for the next SC as soon as they get home from the previous one (to secure the best deal/hotel/etc.). I have also heard stories of those who still have not booked flights only days before SC.

So, just for fun - how far ahead do you plan your travel for SC? Are you the kind of HPC person who books SC13 as soon as SC12 has ended? Or do you leave SC13 travel booking until a week or two before SC13? Of course, it may not be up to you - many attendees need to get travel authority etc. and this is often hard to get a long time in advance.

Please complete the survey here -

Once I have enough responses, I will write another blog post revealing the results.


[PS - this survey is not on behalf of, or affiliated with, either the SC13 organisers or anyone else - it's just a curiosity and to share in a blog later.]