Thursday 20 December 2012

A review of 2012 in supercomputing - Part 2

This is Part 2 of my review of the year 2012 in supercomputing and related matters.

In Part 1 of the review I re-visited the predictions I made at the start of 2012 and considered how they became real or not over the course of the year. This included cloud computing, Big Data (mandatory capitalization!), GPU, MIC, and ARM - and software innovation. You can find Part 1 here: http://www.hpcnotes.com/2012/12/a-review-of-2012-in-supercomputing-part.html.

Part 2 of the review looks at the themes and events that emerged during the year. As in Part 1, this is all thoroughly biased, of course, towards things that interested me throughout the year.

The themes that stick out in my mind from HPC/supercomputing in 2012 are:
  • The exascale race stalls
  • Petaflops become "ordinary"
  • HPC seeks to engage a broader user community
  • Assault on the Top500

The exascale race stalls

The global race towards exascale supercomputing has been a feature of the last few years. I chipped in myself at the start of 2012 with a debate on the "co-design" mantra.

Confidently tracking the Top500 trend lines, the HPC community had pinned 2018 as the inevitable arrival date of the first supercomputer with a peak performance in excess of 1 exaflops. [Note the limiting definition of the target - loosely coupled computing complexes with aggregate capacity greater than exascale will probably turn up before the HPC machines - and peak performance in FLOPS is the metric here - not application performance or any assumptions of balanced systems.]

Some more cautious folk hedged a delay into their arrival dates and talked about 2020. However, it became apparent throughout 2012 that the US government did not have the appetite (or political support) to commit to being the first to deploy an exascale supercomputer. Other regions of the world have - like the USA government - stated their ambitions to be among the leaders in exascale computing. But no government has yet stood up and committed to a timetable nor to being the first to get there. Critically, neither has anyone committed the required R&D funding needed now to develop the technologies [hardware and software] that will make exascale supercomputing viable.

The consensus at the end of 2012 seems to be towards a date of 2022 for the first exascale supercomputer - and there is no real consensus on which country will win the race to have the first exascale computer.

Perhaps we need to re-visit how we communicate the benefits of more powerful supercomputers to the wider economy and society (what is the point of supercomputers?). Communicating the value to society and describing the long-term investment requirements is always a fundamental need for any specialist technology, but it becomes crucial during the testing fiscal conditions (and thus political pressures) that governments face right now.


Tuesday 18 December 2012

A review of 2012 in supercomputing - Part 1

It's that time of year when doing a review of the last twelve months seems like a good idea for a blog topic. (To be followed soon after by a blog of predictions for the next year.)
So, here goes - my review of the year 2012 in supercomputing and related matters. Thoroughly biased, of course, towards things that interested me throughout the year.


Predictions for 2012

Towards the end of 2011 and in early 2012 I made various predictions about HPC in 2012. Here are the ones I can find or recall:
  • The use of "cloud computing" as the preferred marketing buzzword used for large swathes of the HPC product space would come to an end.
  • There would be an onslaught of "Big Data" (note the compulsory capital letters) as the marketing buzzword of choice for 2012 - to be applied to as many HPC products as possible, even where the relevance was only tenuous (just like cloud computing before it - and green computing before that - and so on ...)
  • There would be a vigorous ongoing debate over the relative merits and likely success of GPUs (especially from NVIDIA) vs. Intel's MIC (now called Xeon Phi).
  • ARM would become a common part of the architecture debate alongside x86 and accelerators.
  • There would be a growth in the recognition that software and people matter just as much as the hardware.

Thursday 8 November 2012

HPC notes at SC12

I'll be at SC12 next week.

I have a mostly full schedule in advance but I always leave a little time to explore the show floor, and to meet new people or old friends.

If you are at SC12 too, you might be able to find me via the NAG booth (#2431) - or walking the streets between meetings - or at one of the networking receptions.


If you are a twitter person - you can find me at @hpcnotes (but be warned I won't be tweeting most of the HPC news during the show - @HPC_Guru is much better for that).

Hope to see some of you there. And remember my tribute from last year to those not attending SC.

Tuesday 6 November 2012

HPC fun for SC12

I've previously written some light-hearted but partly serious pieces for the main supercomputing events.

I'm working on one for SC12 too - again to be published in HPC Wire - but in the meantime, here are pointers for the SC11 and ISC11 articles:

Friday 12 October 2012

The making of “1000x” – unbalanced supercomputing

I have posted a new article on the NAG blog: The making of "1000x" – unbalanced supercomputing.

This goes behind my article in HPCwire ("Chasing 1000x: The future of supercomputing is unbalanced"), where I explain the pun in the title and dip into some of the technology issues affecting the next 1000x in performance.

Tuesday 2 October 2012

The first mention of SC12

It's that time of year again. SC has started to drift into my inbox and phone conversations with increasing regularity - here comes Supercomputing 2012 in Salt Lake City. Last year, in the run up to SC11 in Seattle, I wrote the SC11 diary - blogging every few days on my preparations and thoughts for the biggest annual event of the supercomputing world.

I'm not sure I'll do such a diary again this year (unless by popular demand - not likely!). However, I will be writing some articles for some publications (HPC Wire and others - see my previous articles) in the coming weeks which will set the scene for SC from my point of view - burning issues I hope will be debated in the community, key technology areas I will be watching, and so on.

In the meantime, if you crave SC reading material, you might amuse yourself by reading my previous fun at SC time (e.g. The top ten myths of SC - in HPC Wire for SC11) or you might even want to translate my fun from ISC (Are you an ISC veteran?) to new meanings at SC.

If you want more serious content, then browse on this blog site (e.g. tagged "events") or on the NAG Blog (e.g. tagged "HPC").

If you find nothing you like - drop me a comment below or via twitter and I'll see what I can do to address the topic you are interested in!

Thursday 2 August 2012

What is the point of supercomputers?

Maybe it seems an odd question to ask on a blog dedicated to High Performance Computing (HPC). But it is good to question why we do things – hopefully leading us to a clearer justification for investing money, time and effort. Ideally, this would also enable better delivery – the “how” supporting the “why” – focusing on the best processes, technologies, etc. to achieve the goals identified in the justification.

So, again, why supercomputing? Perhaps you think the answer is obvious – supercomputing enables modelling and simulation to be done faster than with normal computers, or enables bigger problems to be solved.

Friday 15 June 2012

Supercomputers are for dreams

I was invited to the 2012 NCSA Annual Private Sector Program (PSP) meeting in May. In my few years of attending, this has always been a great meeting (attendance by invitation only), with an unusually high concentration of real HPC users and managers from industry.

NCSA have recently released streaming video recordings of the main sessions - the videos can be found as links on the Annual PSP Meeting agenda page.

Bill Gropp chaired a panel session on "Modern Software Implementation" with myself and Gerry Labedz as panellists.

The full video (~1 hour) is here but I have also prepared a breakdown of the panel discussion in this blog post below.


Wednesday 6 June 2012

Some fun for ISC12

I have written a guest blog post for the ISC'12 website - "Are you an ISC veteran?". The article is intended to raise a few serious observations amongst the fun.

I also wrote an earlier guest blog post for the ISC'12 website - "Is co-design for exascale computing a false hope?"

I've added these two links to my page on this site "Interviews, Quotes, Articles" (which lists my various articles, interviews, etc. in other locations around the internet).


Wednesday 30 May 2012

The power of supercomputers - energy, exascale and elevators

Paul Henning has written on his blog (HPC Ruminations) about the growing issue of power requirements for large scale computing. Paul's blog post - "Familiarity Breeds Complacency" - is partly in response to my article at HPCwire - "Exascale: power is not the problem" and my follow-up discussion on here - "Supercomputers and other large science facilities".

Paul makes several good points and his post is well worth reading. He ends with an observation that I've noted before (in my own words):

One of supercomputing's biggest strengths - its ability to help almost all areas of science and engineering - is also one of its greatest weaknesses, because there is a portfolio of cases rather than a single compelling champion to drive attention and investment.

PS - I've added Paul's new blog to my list of HPC blogs and news sites.

Friday 25 May 2012

Looking ahead to ISC'12

I have posted my preview of ISC'12 Hamburg - the summer's big international conference for the world of supercomputing - over on the NAG blog. I will be attending ISC'12, along with several of my NAG colleagues. My blog post discusses these five key topics:
  • GPU vs MIC vs Other
  • What is happening with Exascale?
  • Top500, Top 10, tens of PetaFLOPS
  • Finding the advantage in software
  • Big Data and HPC
Read more on the NAG blog ...

Thursday 29 March 2012

Co-design for exascale

I wrote a blog for the ISC website on co-design for exascale.

This has also been mentioned on InsideHPC here.

I made similar comments at the panel hosted by Thomas Sterling at the HPCC conference in Newport, RI earlier this week.

The video of this panel should be posted at InsideHPC soon.

Thursday 9 February 2012

HPC Insiders - The Newport Gathering

The warm up for the annual HPCC meeting in Newport RI (March 26-28) has started - Are You an HPC Industry Insider?

"The National High Performance Computing and Communications Conference (NHPCC) will highlight several exciting changes this year. Also known as the Newport Conference, the elite gathering that started 26 years ago as a one-day event to bring vendors together with government agency personnel has expanded its focus this year to include a more global perspective."

"Another significant change this year is the emphasis on manufacturing and competitiveness."

I have a page on this blog site listing the main HPC events of the year. Many people have rightly remarked that the HPC community really is that - a community - and that there is still a relatively high degree of connection between the various practitioners. In other words, despite its growing size and global reach, it feels like a small community. People know each other. Consequently, networking, whether technical or commercial, goes a long way to helping your business. Whatever your scale of technical computing, from multicore workstations to multi-thousand-node supercomputers, getting involved with the active HPC community can help you with your parallel computing goals. Online resources can help, but by far the most effective way of benefiting from the wider HPC community is by participating at the right events.

I enjoy this Newport event - I think it is one of the best annual events for the HPC community - and am looking forward to great discussions and meeting the many friends in the international HPC community. See you there!

Thursday 19 January 2012

Cloud computing or HPC? Finding trends.

I posted "Cloud computing or HPC? Finding trends." on the NAG blog today. Some extracts ...
Enable innovation and efficiency in product design and manufacture by using more powerful simulations. Apply more complex models to better understand and predict the behaviour of the world around us. Process datasets faster and with more advanced analyses to extract more reliable and previously hidden insights and opportunities.
... and ...
High performance computing (HPC), supercomputing, computational science and engineering, technical computing, advanced computer modelling, advanced research computing, etc. The range of names/labels and the diversity of the audience involved mean that what is a common everyday term for many (e.g. HPC) is an unrecognised meaningless acronym to others - even though they are doing "HPC".
... and then I use some Google Trends plots to explore some ideas ...

Read the full article ...