Thursday 18 July 2013

An early blog about SC13 Denver - just for fun ...

As SC13 registration opens this week, it strikes me both how far away SC13 is (a whole summer and several months after that) and how close it is (only a summer and a month or two). It got me thinking about how far ahead people plan for SC. I have heard of people who book hotels for the next SC as soon as they get home from the previous SC (to secure the best deal/hotel/etc.). I have also heard stories of those who still have not booked flights only days before SC.

So, just for fun - how far ahead do you plan your travel for SC? Are you the kind of HPC person who books SC13 as soon as SC12 has ended? Or do you leave SC13 travel booking until a week or two before SC13? Of course, it may not be up to you - many attendees need to get travel authority etc. and this is often hard to get a long time in advance.

Please complete the survey here - http://www.surveymonkey.com/s/3MRSYYH

Once I have enough responses, I will write another blog revealing the results.

Enjoy!

[PS - this survey is not on behalf of, or affiliated with, either the SC13 organisers or anyone else - it's just a curiosity and to share in a blog later.]

Monday 10 June 2013

China supercomputer to be world's fastest (again) - Tianhe-2

It seems that China's Tianhe-2 supercomputer will be confirmed as the world's fastest supercomputer in the next Top500 list, to be revealed at the ISC'13 conference next week.

I was going to write about the Chinese Tianhe-2 supercomputer and how it matters to the USA and Europe - then I found these old blog posts of mine:


Sunday 2 June 2013

Supercomputing goes to Leipzig - a preview of ISC13

I have written my preview of ISC13 over at the NAG Blog ... a new location, Tianhe-2, MIC vs. GPU, industry, exascale, big data and ecosystems. Not quite HPC keyword bingo but close :-)

See you there!

Tuesday 12 March 2013

Name that supercomputer (Quiz)

Instead of a sensible HPC blog post, how about some fun? Can you name these supercomputers?

I'm looking for actual machine names (e.g. 'Sequoia') and the host site (e.g. LLNL). Bonus points for the funding agency (e.g. DOE NNSA) and the machine type (e.g. IBM BlueGene/Q).

Submit your guesses or knowledgeable answers either through the comments field below, or to me on twitter (@hpcnotes).

For the photos, if you are stuck, you might need to use clues from my twitter stream as to where I have been recently.

Answers will be revealed once there have been enough guesses to amuse me. Have fun!


  1. Which supercomputer are we looking underneath?

  2. Acceptance of this leading system became an HPC news topic recently

  3. NAG provides the Computational Science & Engineering Support Service for this one

  4. One letter is all that’s needed to describe this supercomputer

  5. Racing cattle powered by Greek letters

  6. Spock was one of these

  7. Which supercomputer does this photo show the inner rows of?

  8. Memory with a deerstalker & pipe

  9. Put an end to Ming (or did he)?

  10. This plant/leaf is normally silver when used as the national symbol of this one’s host country


Friday 11 January 2013

Predictions for 2013 in HPC

As we stumble into the first weeks of 2013, it is the season for predictions about what the coming year will bring. In my case, following my recent review of HPC in 2012, I get to make some predictions for the world of HPC in 2013.


Buzzwords

First up, this year’s buzzword for HPC marketing and technology talks. Last year was very much the year of “Big Data” as a buzzword. As that starts to become old hat (and real work), a new buzzword will be required. Cynical? My prediction is that Big Data will still be present in HPC discussions and real usage this year, but its use as a buzzword will diminish. 2013 will probably spawn two buzzwords.

The first buzzword will be “energy-efficient computing”. We saw a little use of this last year, but I think it will become the dominant buzzword this year. Most technical talks will include some reference to energy-efficient computing (or the energy cost of the solution, etc.). All marketing departments will swing into action to brand their HPC products and services as energy-efficient computing – much as they did with Big Data and, before that, Cloud Computing, and so on. Yes, I’m being a tad cynical about the whole thing. I’m not suggesting that energy efficiency is not important – in fact it is essential to meet our ambitions in HPC. I’m merely noting its impending over-use as a theme. And of course, energy-efficient computing is not the same as Green Computing – after all, that buzzword is several years old now.

Energy efficiency will be driven by the need to find lower power solutions for exascale-era supercomputers (not just exascale systems but the small department petascale systems that will be expected at that time – not to mention consumer scale devices). It is worth noting that optimizing for power and energy may not be the same thing. The technology will also drive the debate – especially the anticipated contest between GPUs and Xeon Phi. And politically, energy efficient computing sounds better for attracting investment rather than “HPC technology research”.

Thursday 20 December 2012

A review of 2012 in supercomputing - Part 2

This is Part 2 of my review of the year 2012 in supercomputing and related matters.

In Part 1 of the review I re-visited the predictions I made at the start of 2012 and considered how they became real or not over the course of the year. This included cloud computing, Big Data (mandatory capitalization!), GPU, MIC, and ARM - and software innovation. You can find Part 1 here: http://www.hpcnotes.com/2012/12/a-review-of-2012-in-supercomputing-part.html.

Part 2 of the review looks at the themes and events that emerged during the year. As in Part 1, this is all thoroughly biased, of course, towards things that interested me throughout the year.

The themes that stick out in my mind from HPC/supercomputing in 2012 are:
  • The exascale race stalls
  • Petaflops become "ordinary"
  • HPC seeks to engage a broader user community
  • Assault on the Top500

The exascale race stalls

The global race towards exascale supercomputing has been a feature of the last few years. I chipped in myself at the start of 2012 with a debate on the "co-design" mantra.

Confidently tracking the Top500 trend lines, the HPC community had pinned 2018 as the inevitable arrival date of the first supercomputer with a peak performance in excess of 1 exaflops. [Note the limiting definition of the target - loosely coupled computing complexes with aggregate capacity greater than exascale will probably turn up before the HPC machines - and peak performance in FLOPS is the metric here - not application performance or any assumptions of balanced systems.]

Some more cautious folk hedged a delay into their arrival dates and talked about 2020. However, it became apparent throughout 2012 that the US government did not have the appetite (or political support) to commit to being the first to deploy an exascale supercomputer. Other regions of the world have - like the USA government - stated their ambitions to be among the leaders in exascale computing. But no government has yet stood up and committed to a timetable nor to being the first to get there. Critically, neither has anyone committed the required R&D funding needed now to develop the technologies [hardware and software] that will make exascale supercomputing viable.

The consensus at the end of 2012 seems to be towards a date of 2022 for the first exascale supercomputer - and there is no real consensus on which country will win the race to have the first exascale computer.

Perhaps we need to re-visit our communication of the benefits of more powerful supercomputers to the wider economy and society (what is the point of supercomputers?). Communicating the value to society and describing the long-term investment requirements is always a fundamental need for any specialist technology, but it becomes crucial during the testing fiscal conditions (and thus political pressures) that governments face right now.


Tuesday 18 December 2012

A review of 2012 in supercomputing - Part 1

It's that time of year when doing a review of the last twelve months seems like a good idea for a blog topic. (To be followed soon after by a blog of predictions for the next year.)
So, here goes - my review of the year 2012 in supercomputing and related matters. Thoroughly biased, of course, towards things that interested me throughout the year.


Predictions for 2012

Towards the end of 2011 and in early 2012 I made various predictions about HPC in 2012. Here are the ones I can find or recall:
  • The use of "cloud computing" as the preferred marketing buzzword used for large swathes of the HPC product space would come to an end.
  • There would be an onslaught of "Big Data" (note the compulsory capital letters) as the marketing buzzword of choice for 2012 - to be applied to as many HPC products as possible, even if the relevance was only tenuous (just like cloud computing before it - and green computing before that - and so on ...)
  • There would be a vigorous ongoing debate over the relative merits and likely success of GPUs (especially from NVIDIA) vs. Intel's MIC (now called Xeon Phi).
  • ARM would become a common part of the architecture debate alongside x86 and accelerators.
  • There would be a growth in the recognition that software and people matter just as much as the hardware.

Thursday 8 November 2012

HPC notes at SC12

I'll be at SC12 next week.

I have a mostly full schedule in advance but I always leave a little time to explore the show floor, and to meet new people or old friends.

If you are at SC12 too, you might be able to find me via the NAG booth (#2431) - or walking the streets between meetings - or at one of the networking receptions.


If you are a twitter person - you can find me at @hpcnotes (but be warned I won't be tweeting most of the HPC news during the show - @HPC_Guru is much better for that).

Hope to see some of you there. And remember my tribute from last year to those not attending SC.

Tuesday 6 November 2012

HPC fun for SC12

I've previously written some light-hearted but partly serious pieces for the main supercomputing events.

I'm working on one for SC12 too - again to be published in HPC Wire - but in the meantime, here are pointers for the SC11 and ISC11 articles:

Friday 12 October 2012

The making of “1000x” – unbalanced supercomputing

I have posted a new article on the NAG blog: The making of "1000x" – unbalanced supercomputing.

This goes behind my article in HPCwire ("Chasing 1000x: The future of supercomputing is unbalanced"), where I explain the pun in the title and dip into some of the technology issues affecting the next 1000x in performance.

Tuesday 2 October 2012

The first mention of SC12

It's that time of year again. SC has started to drift into my inbox and phone conversations with increasing regularity - here comes Supercomputing 2012 in Salt Lake City. Last year, in the run up to SC11 in Seattle, I wrote the SC11 diary - blogging every few days on my preparations and thoughts for the biggest annual event of the supercomputing world.

I'm not sure I'll do such a diary again this year (unless by popular demand - not likely!). However, I will be writing some articles for some publications (HPC Wire and others - see my previous articles) in the coming weeks which will set the scene for SC from my point of view - burning issues I hope will be debated in the community, key technology areas I will be watching, and so on.

In the meantime, if you crave SC reading material, you might amuse yourself by reading my previous fun at SC time (e.g. The top ten myths of SC - in HPC Wire for SC11) or you might even want to translate my fun from ISC (Are you an ISC veteran?) to new meanings at SC.

If you want more serious content, then browse on this blog site (e.g. tagged "events") or on the NAG Blog (e.g. tagged "HPC").

If you find nothing you like - drop me a comment below or via twitter and I'll see what I can do to address the topic you are interested in!