Monday 5 October 2015

HPC Bingo

A big part of SC (Austin in 2015) is actually getting there. Most attendees will have to navigate the joys of long-distance air travel. If you travel enough, or play the game wisely, you can secure frequent flyer elite status, which helps make the air travel more bearable. Here is a version of elite status bingo for HPC: I've listed some categories and the "achievements" required for each. Can you claim elite HPC status?

HPC System User category


There have been lots of systems in HPC over the years, but we should stick to options that even a recent recruit to HPC might be able to claim. You can award yourself this category if you have used (logged into and run or compiled code on) each of these systems:
  • IBM Power system
  • Cray XT, XE, or XC
  • SGI shared memory system - Origin, Altix or UV
  • x86 cluster
  • A system with any one of SPARC, vector, ARM, GPU, Phi, or FPGA

HPC Programmer category


Award yourself this category if you have written programs to run on an HPC system in each of these (a single hybrid code, like the sketch after this list, can tick several boxes at once):
  • Fortran 77
  • Fortran 90 or later
  • C
  • MPI
  • OpenMP
  • Any one of CUDA, OpenACC, OpenCL, Python, R, Matlab
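
For anyone still missing a square, the classic quick win is a hybrid hello world that covers C, MPI, and OpenMP in one go. Here is a minimal sketch, assuming a typical MPI installation with an mpicc wrapper and a compiler that supports OpenMP (the file name hello.c is just for illustration):

  #include <mpi.h>
  #include <omp.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      /* Initialise MPI; the launcher (e.g. mpirun) decides how many ranks run. */
      MPI_Init(&argc, &argv);

      int rank, size;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      /* Each MPI rank then spawns its own OpenMP thread team. */
      #pragma omp parallel
      {
          printf("Hello from thread %d of %d on rank %d of %d\n",
                 omp_get_thread_num(), omp_get_num_threads(), rank, size);
      }

      MPI_Finalize();
      return 0;
  }

Something like "mpicc -fopenmp hello.c -o hello" followed by "mpirun -np 2 ./hello" should build and run it, though the exact wrapper and flags depend on your MPI library and compiler.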

HPC Talker/Buzzword category


Buzzwords seem to be an integral part of HPC. To be awarded this category, you must have used each of these in talks (PowerPoint, etc.) since SC14:
  • Big Data
  • Any of green computing, energy efficient computing, or power aware computing
  • One of my HPC analogies?
  • "it's all about the science" (but then just talked about the HPC like everyone else!!)
  • Any reference to "FLOPS are free, data movement is hard" or similar
  • Exascale

Previous SC content ...

I'll write some new content for SC15 Austin soon but while you are waiting, here are two of my previous writings on SC:
Enjoy!

Essential Analogies for the HPC Advocate

This is an update of a two-part article I wrote for HPC Wire in 2013: Part 1 and Part 2.

An important ability for anyone involved in High Performance Computing (HPC, supercomputing, big data processing, etc.) is being able to explain just what HPC is to others.

"Others” include politicians, Joe Public, graduates possibly interested in HPC, industry managers trying to see how HPC fits into their IT or R&D programs, or family asking for the umpteenth time “what exactly do you do?

One of the easiest ways to explain HPC is to use analogies that relate the concepts to things that the listener is more familiar with. So here is a run-through of some useful analogies for explaining HPC or one of its concepts:

The simple yet powerful: A spade


Need to dig a hole? Use the right tool for the job – a spade. Need to dig a bigger hole, or a hole through tougher material like concrete? Use a more powerful tool – a mechanical digger.

Now instead of digging a hole, consider modeling and simulation. If the model/simulation is too big or too complex – use the more powerful tool: i.e. HPC. It’s nice and simple – HPC is a more powerful tool that can tackle more complex or bigger models/simulations than ordinary computers.

There are some great derived analogies too. You should be able to give a spade to almost anyone and they should be able to dig a hole without too much further instruction. But hand a novice the keys to a mechanical digger, and it is unlikely they will be able to effectively operate the machine without either training or a lot of on-the-job learning. Likewise, HPC requires training to be able to use the more powerful tool effectively. Buying mechanical diggers also requires expertise that buying a spade doesn’t. And so on.

It neatly focuses on the purpose and benefit of HPC rather than the technology itself. If you’ve heard any of my talks recently you will know this is an HPC analogy that I use myself frequently.

The moral high ground: A science/engineering instrument


I’ve occasionally accused the HPC community of being riddled with hypocrites – we make a show of “the science is what matters” and then proceed to focus the rest of the discussion on the hardware (and, if feeling pious or guilty, we mention “but software really matters”).

However, there is a critical truth to this – the scientific (or engineering) capability is what matters when considering HPC. I regularly use this perspective, often very firmly, myself: a supercomputer is NOT a computer – it is a major scientific instrument that just happens to be built using computer technology. Just because it is built from most of the same components as commodity servers does not mean that modes of usage, operating skills, user expectations, etc. should be the same. This helps to put HPC into the right context in the listener’s mind – compare it to a major telescope, a wind tunnel, or even LHC@CERN.

The derived analogies are effective too – expertise in the technology itself is required, not just the science using the instrument. Sure, the skills overlap but they are distinct and equally important.

This analogy focuses on the purpose and benefit of HPC, but also includes a reference to it being based on a big computer.