Friday 11 January 2013
As we stumble into the first weeks of 2013, it is the season for predictions about what the coming year will bring. In my case, following my recent review of HPC in 2012, I get to make some predictions for the world of HPC in 2013.
Buzzwords
First up, this year’s buzzword for HPC marketing and technology talks. Last year was very much the year of “Big Data”. As Big Data starts to become old hat (and real work), a new buzzword will be required. Cynical? Perhaps. My prediction is that Big Data will still be present in HPC discussions and in real usage, but it will diminish as a buzzword, and 2013 will probably spawn two new buzzwords.
The first buzzword will be “energy-efficient computing”. We saw a little use of this last year, but I think it will become the dominant buzzword this year. Most technical talks will include some reference to energy-efficient computing (or to the energy cost of the solution, and so on). Marketing departments will swing into action to brand their HPC products and services as energy-efficient computing – much as they did with Big Data and, before that, Cloud Computing. Yes, I’m being a tad cynical about the whole thing. I’m not suggesting that energy efficiency is unimportant – in fact it is essential to meeting our ambitions in HPC. I’m merely noting its impending over-use as a theme. And of course, energy-efficient computing is not the same as Green Computing – after all, that buzzword is several years old now.
Energy efficiency will be driven by the need to find lower-power solutions for exascale-era supercomputers (not just the exascale systems themselves, but the departmental petascale systems that will be expected at that time – not to mention consumer-scale devices). It is worth noting that optimizing for power and optimizing for energy may not be the same thing. The technology will also drive the debate – especially the anticipated contest between GPUs and Xeon Phi. And politically, “energy-efficient computing” sounds better for attracting investment than “HPC technology research”.
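As a toy illustration of that power-versus-energy distinction (the figures below are invented purely for illustration, not measurements from any real system), consider two runs of the same job: the lower-power configuration is not automatically the lower-energy one.

```python
# Toy illustration: lower power does not automatically mean lower energy.
# All figures are invented for illustration only.

def energy_kwh(power_kw: float, runtime_hours: float) -> float:
    """Energy consumed = average power x runtime."""
    return power_kw * runtime_hours

# Version A: throttled clocks, lower power draw, but the job runs longer.
energy_a = energy_kwh(power_kw=400.0, runtime_hours=10.0)   # 4000 kWh

# Version B: full clocks, higher power draw, but the job finishes sooner.
energy_b = energy_kwh(power_kw=600.0, runtime_hours=6.0)    # 3600 kWh

print(f"Low-power run:  {energy_a:.0f} kWh")
print(f"High-power run: {energy_b:.0f} kWh")
# The "power-hungry" configuration uses less energy overall, because energy
# is power integrated over time - optimizing one is not the same as
# optimizing the other.
```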
Thursday 20 December 2012
A review of 2012 in supercomputing - Part 2
This is Part 2 of my review of the year 2012 in supercomputing and related matters.
In Part 1 of the review I re-visited the predictions I made at the start of 2012 and considered how they became real or not over the course of the year. This included cloud computing, Big Data (mandatory capitalization!), GPU, MIC, and ARM - and software innovation. You can find Part 1 here: http://www.hpcnotes.com/2012/12/a-review-of-2012-in-supercomputing-part.html.
Part 2 of the review looks at the themes and events that emerged during the year. As in Part 1, this is all thoroughly biased, of course, towards things that interested me throughout the year.
The themes that stick out in my mind from HPC/supercomputing in 2012 are:
- The exascale race stalls
- Petaflops become "ordinary"
- HPC seeks to engage a broader user community
- Assault on the Top500
The exascale race stalls
The global race towards exascale supercomputing has been a feature of the last few years. I chipped in myself at the start of 2012 with a debate on the "co-design" mantra.
Confidently tracking the Top500 trend lines, the HPC community had pinned 2018 as the inevitable arrival date of the first supercomputer with a peak performance in excess of 1 exaflops. [Note the limiting definition of the target - loosely coupled computing complexes with aggregate capacity greater than exascale will probably turn up before the HPC machines - and peak performance in FLOPS is the metric here - not application performance or any assumptions of balanced systems.]
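To make that extrapolation concrete, here is a minimal sketch of the kind of trend-line arithmetic involved (the 2012 baseline and the growth rate are my own rough assumptions, not official Top500 figures):

```python
# A minimal sketch of a Top500-style extrapolation (illustrative numbers,
# not an official Top500 analysis).
import math

peak_2012_pflops = 17.0     # assumed peak of the 2012 #1 system, in petaflops
annual_growth = 1.9         # assumed year-on-year growth of the #1 system
target_pflops = 1000.0      # 1 exaflops = 1000 petaflops

years_needed = math.log(target_pflops / peak_2012_pflops) / math.log(annual_growth)
print(f"~{years_needed:.1f} years -> first exaflops system around {2012 + years_needed:.0f}")
# With these assumptions the trend line lands around 2018 - which is exactly
# why any slowdown in the growth rate pushes the date out to 2020 or beyond.
```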
Some more cautious folk hedged a delay into their arrival dates and talked about 2020. However, it became apparent throughout 2012 that the US government did not have the appetite (or political support) to commit to being the first to deploy an exascale supercomputer. Other regions of the world have - like the US government - stated their ambitions to be among the leaders in exascale computing. But no government has yet stood up and committed to a timetable, nor to being the first to get there. Critically, neither has anyone committed the R&D funding needed now to develop the technologies [hardware and software] that will make exascale supercomputing viable.
The consensus at the end of 2012 seems to be settling on 2022 for the first exascale supercomputer - and there is no real consensus on which country will win the race to get there.
Perhaps we need to re-visit how we communicate the benefits of more powerful supercomputers to the wider economy and society (what is the point of supercomputers?). Communicating the value to society and describing the long-term investment requirements is always a fundamental need for any specialist technology, but it becomes essential during the testing fiscal conditions (and thus political pressures) that governments face right now.
Labels: blue waters, data, exascale, hpc, people, petaflops, supercomputing, top500
Tuesday 18 December 2012
A review of 2012 in supercomputing - Part 1
It's that time of year when doing a review of the last twelve months seems like a good idea for a blog topic. (To be followed soon after by a blog of predictions for the next year.)
So, here goes - my review of the year 2012 in supercomputing and related matters. Thoroughly biased, of course, towards things that interested me throughout the year.
Predictions for 2012
Towards the end of 2011 and in early 2012 I made various predictions about HPC in 2012. Here are the ones I can find or recall:
- The use of "cloud computing" as the preferred marketing buzzword for large swathes of the HPC product space would come to an end.
- There would be an onslaught of "Big Data" (note the compulsory capital letters) as the marketing buzzword of choice for 2012 - to be applied to as many HPC products as possible, even if the relevance was only tenuous (just like cloud computing before it - and green computing before that - and so on ...).
- There would be a vigorous ongoing debate over the relative merits and likely success of GPUs (especially from NVIDIA) vs. Intel's MIC (now called Xeon Phi).
- ARM would become a common part of the architecture debate alongside x86 and accelerators.
- There would be a growth in the recognition that software and people matter just as much as the hardware.
Labels: cloud, data, gpu, HECToR, hpc, leadership, MIC, supercomputing
Thursday 8 November 2012
HPC notes at SC12
I'll be at SC12 next week.
I have a mostly full schedule in advance but I always leave a little time to explore the show floor, and to meet new people or old friends.
If you are at SC12 too, you might be able to find me via the NAG booth (#2431) - or walking the streets between meetings - or at one of the networking receptions.
If you are a twitter person - you can find me at @hpcnotes (but be warned I won't be tweeting most of the HPC news during the show - @HPC_Guru is much better for that).
Hope to see some of you there. And remember my tribute from last year to those not attending SC.
Labels: SC12, supercomputing
Tuesday 6 November 2012
HPC fun for SC12
I've previously written some light-hearted but partly serious pieces for the main supercomputing events.
I'm working on one for SC12 too - again to be published in HPC Wire - but in the meantime, here are pointers for the SC11 and ISC11 articles:
Labels: fun, HPCwire, isc, SC12, supercomputing
Friday 12 October 2012
The making of “1000x” – unbalanced supercomputing
I have posted a new article on the NAG blog: The making of "1000x" – unbalanced supercomputing.
This goes behind my article in HPCwire ("Chasing 1000x: The future of supercomputing is unbalanced"), where I explain the pun in the title and dip into some of the technology issues affecting the next 1000x in performance.
Labels: explain hpc, hpc, HPCwire, performance, software, supercomputing
Tuesday 2 October 2012
The first mention of SC12
It's that time of year again. SC has started to drift into my inbox and phone conversations with increasing regularity - here comes Supercomputing 2012 in Salt Lake City. Last year, in the run up to SC11 in Seattle, I wrote the SC11 diary - blogging every few days on my preparations and thoughts for the biggest annual event of the supercomputing world.
I'm not sure I'll do such a diary again this year (unless by popular demand - not likely!). However, I will be writing some articles for some publications (HPC Wire and others - see my previous articles) in the coming weeks which will set the scene for SC from my point of view - burning issues I hope will be debated in the community, key technology areas I will be watching, and so on.
In the meantime, if you crave SC reading material, you might amuse yourself by reading my previous fun at SC time (e.g. The top ten myths of SC - in HPC Wire for SC11) or you might even want to translate my fun from ISC (Are you an ISC veteran?) to new meanings at SC.
If you want more serious content, then browse on this blog site (e.g. tagged "events") or on the NAG Blog (e.g. tagged "HPC").
If you find nothing you like - drop me a comment below or via twitter and I'll see what I can do to address the topic you are interested in!
Thursday 2 August 2012
What is the point of supercomputers?
Maybe it seems an odd question to ask on a blog dedicated to High Performance Computing (HPC). But it is good to question why we do things – hopefully leading us to a clearer justification for investing money, time and effort. Ideally, this would also enable better delivery – the “how” supporting the “why” – focusing on the best processes, technologies, etc. to achieve the goals identified in the justification.
So, again, why supercomputing? Perhaps you think the answer is obvious – supercomputing enables modelling and simulation to be done faster than with normal computers, or enables bigger problems to be solved.
Labels: explain hpc, hpc, HPCwire, John West, supercomputing
Friday 15 June 2012
Supercomputers are for dreams
I was invited to the 2012 NCSA Annual Private Sector Program (PSP) meeting in May. In my few years of attending, this has always been a great meeting (attendance by invitation only), with an unusually high concentration of real HPC users and managers from industry.
NCSA have recently released streaming video recordings of the main sessions - the videos can be found as links on the Annual PSP Meeting agenda page.
Bill Gropp chaired a panel session on "Modern Software Implementation" with myself and Gerry Labedz as panellists.
The full video (~1 hour) is here but I have also prepared a breakdown of the panel discussion in this blog post below.
Labels: blue waters, events, hpc, ncsa, parallel programming, people, performance, productivity, software, strategy, supercomputing
Wednesday 6 June 2012
Some fun for ISC12
I have written a guest blog post for the ISC'12 website - "Are you an ISC veteran?". The article is intended to raise a few serious observations amongst the fun.
I also wrote an earlier guest blog post for the ISC'12 website - "Is co-design for exascale computing a false hope?"
I've added these two links to my page on this site "Interviews, Quotes, Articles" (which lists my various articles, interviews, etc. in other locations around the internet).
Labels: events, isc, spoof, supercomputing
Wednesday 30 May 2012
The power of supercomputers - energy, exascale and elevators
Paul Henning has written on his blog (HPC Ruminations) about the growing issue of power requirements for large-scale computing. Paul's blog post - "Familiarity Breeds Complacency" - is partly in response to my article at HPCwire - "Exascale: power is not the problem" - and my follow-up discussion here - "Supercomputers and other large science facilities".
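To give a feel for why power dominates these discussions, here is a back-of-envelope sketch (the system size and electricity price are my own illustrative assumptions, not figures from Paul's post or my articles): a machine drawing tens of megawatts around the clock turns the electricity price directly into millions of dollars per year.

```python
# Back-of-envelope electricity bill for a large supercomputer.
# All figures are illustrative assumptions, not quoted from any real facility.

power_mw = 20.0            # assumed sustained draw of a hypothetical exascale-era system
price_per_kwh = 0.10       # assumed electricity price in $/kWh
hours_per_year = 24 * 365

annual_kwh = power_mw * 1000 * hours_per_year
annual_cost = annual_kwh * price_per_kwh

print(f"{annual_kwh / 1e6:.0f} GWh/year -> ${annual_cost / 1e6:.1f}M/year")
# 20 MW at $0.10/kWh comes to roughly $17.5M per year in electricity alone,
# close to the often-quoted rule of thumb of ~$1M per MW per year.
```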
Paul makes several good points and his post is well worth reading. He ends with an observation that I've noted before (in my own words):
One of supercomputing's biggest strengths - its ability to help almost all areas of science and engineering - is also one of its greatest weaknesses, because there is a portfolio of cases rather than a single compelling champion to drive attention and investment.
PS - I've added Paul's new blog to my list of HPC blogs and news sites.
Labels: exascale, leadership, power, supercomputing