
Thursday 20 December 2012

A review of 2012 in supercomputing - Part 2

This is Part 2 of my review of the year 2012 in supercomputing and related matters.

In Part 1 of the review I re-visited the predictions I made at the start of 2012 and considered how they became real or not over the course of the year. This included cloud computing, Big Data (mandatory capitalization!), GPU, MIC, and ARM - and software innovation. You can find Part 1 here: http://www.hpcnotes.com/2012/12/a-review-of-2012-in-supercomputing-part.html.

Part 2 of the review looks at the themes and events that emerged during the year. As in Part 1, this is all thoroughly biased, of course, towards things that interested me throughout the year.

The themes that stick out in my mind from HPC/supercomputing in 2012 are:
  • The exascale race stalls
  • Petaflops become "ordinary"
  • HPC seeks to engage a broader user community
  • Assault on the Top500

The exascale race stalls

The global race towards exascale supercomputing has been a feature of the last few years. I chipped in myself at the start of 2012 with a debate on the "co-design" mantra.

Confidently tracking the Top500 trend lines, the HPC community had pinned 2018 as the inevitable arrival date of the first supercomputer with a peak performance in excess of 1 exaflops. [Note the limiting definition of the target - loosely coupled computing complexes with aggregate capacity greater than exascale will probably turn up before the HPC machines - and peak performance in FLOPS is the metric here - not application performance or any assumptions of balanced systems.]
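
For a sense of where that 2018 figure came from, here is a minimal sketch (in Python) of the kind of trend-line extrapolation involved: fit a straight line to the log of the #1 system's peak performance at a few milestone years and solve for when it crosses 10^18 FLOPS. The milestone numbers are rough approximations chosen for illustration, not official Top500 figures.

    import math

    # Approximate #1 system peak performance (FLOPS) at a few milestone years.
    # Illustrative approximations only - not official Top500 figures.
    milestones = {
        1997: 1.3e12,   # ~1 teraflops era
        2008: 1.4e15,   # ~1 petaflops era
        2012: 2.7e16,   # ~tens of petaflops era
    }

    years = sorted(milestones)
    logs = [math.log10(milestones[y]) for y in years]

    # Least-squares fit of log10(performance) vs year: a straight line on a log plot.
    n = len(years)
    sx, sy = sum(years), sum(logs)
    sxx = sum(y * y for y in years)
    sxy = sum(y * v for y, v in zip(years, logs))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n

    # Solve slope * year + intercept = 18, i.e. 10^18 FLOPS = 1 exaflops.
    exa_year = (18 - intercept) / slope
    print(f"Naive trend extrapolation reaches 1 exaflops around {exa_year:.0f}")

Running that with these numbers lands at roughly 2018, which is exactly why the funding question matters: the trend line only continues if the investment behind it does.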

Some more cautious folk hedged a delay into their arrival dates and talked about 2020. However, it became apparent throughout 2012 that the US government did not have the appetite (or political support) to commit to being the first to deploy an exascale supercomputer. Other regions of the world have - like the US government - stated their ambitions to be among the leaders in exascale computing. But no government has yet stood up and committed either to a timetable or to being the first to get there. Critically, nor has anyone committed the R&D funding needed now to develop the technologies [hardware and software] that will make exascale supercomputing viable.

The consensus at the end of 2012 seems to be towards a date of 2022 for the first exascale supercomputer - and there is no real consensus on which country will win the race to have the first exascale computer.

Perhaps we need to re-visit how we communicate the benefits of more powerful supercomputers to the wider economy and society (what is the point of supercomputers?). Communicating the value to society and describing the long-term investment requirements are always fundamental needs for any specialist technology, but they become critical under the testing fiscal conditions (and thus political pressures) that governments face right now.


Tuesday 18 December 2012

A review of 2012 in supercomputing - Part 1

It's that time of year when doing a review of the last twelve months seems like a good idea for a blog topic. (To be followed soon after by a blog of predictions for the next year.)
So, here goes - my review of the year 2012 in supercomputing and related matters. Thoroughly biased, of course, towards things that interested me throughout the year.


Predictions for 2012

Towards the end of 2011 and in early 2012 I made various predictions about HPC in 2012. Here are the ones I can find or recall:
  • The use of "cloud computing" as the preferred marketing buzzword for large swathes of the HPC product space would come to an end.
  • There would be an onslaught of "Big Data" (note the compulsory capital letters) as the marketing buzzword of choice for 2012 - applied to as many HPC products as possible, even where the relevance was only tenuous (just like cloud computing before it - and green computing before that - and so on ...)
  • There would be a vigorous ongoing debate over the relative merits and likely success of GPUs (especially from NVidia) vs. Intel's MIC (now called Xeon Phi).
  • ARM would become a common part of the architecture debate alongside x86 and accelerators.
  • There would be a growth in the recognition that software and people matter just as much as the hardware.

Friday 4 November 2011

My SC11 diary 10

It seems I have been blogging about SC11 for a long time - but it has only been two weeks since the first SC11 diary post, and this is only the 10th SC11 diary entry. However, this will also be the final SC11 diary blog post.

I will write again before SC11 in HPC Wire (to be published around or just before the start of SC11).

And then maybe an SC11-related blog post after SC11 has finished.

So, what thoughts for the final pre-SC11 diary entry? I'm sure you have noticed that the pre-show press coverage is now arriving in volume. Perhaps my preview of the SC11 battleground - what to look out for, what might emerge, ...


Thursday 11 August 2011

Big Data and Supercomputing for Science

It is interesting to note the increasing attention “big data” seems to be getting from the supercomputing community.

Data explosion


We talk about the challenges of the exponential increase in data, or even an "explosion of data". This is caused by our ever-growing ability to generate data. More powerful computational resources deliver finer resolutions, wider parameter studies, etc. The emergence of individual-scale HPC (GPUs etc.) that is both cost-viable and effort-viable gives increased data-creation capability to the many scientists not using high-end supercomputers. And instrumental sources continue to improve in resolution and speed.
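
To make the scale of that growth concrete, here is a back-of-the-envelope sketch (in Python, with entirely made-up but plausible numbers) of how refining the grid of a 3D simulation multiplies the volume of output data: halving the grid spacing multiplies the cell count by eight, and finer grids typically force more output steps too.

    # Illustrative numbers only - a rough sketch of output volume vs resolution.
    bytes_per_cell = 8 * 5            # 5 double-precision variables per cell
    base_cells = 1000 ** 3            # a 1000^3 grid
    snapshots = 100                   # output snapshots written per run

    for refinement in (1, 2, 4):      # 1x, 2x, 4x finer grid spacing
        cells = base_cells * refinement ** 3
        steps = snapshots * refinement          # finer grids also need more output steps
        volume_tb = cells * bytes_per_cell * steps / 1e12
        print(f"{refinement}x finer grid -> ~{volume_tb:,.0f} TB per run")

A modest 4x improvement in resolution takes this hypothetical run from a few terabytes of output to over a petabyte.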

So, we are collecting more data than ever before. We are also increasing our use of multiple data sources – fusing data from various sensors and computer models to form predictions or study scientific phenomena.

It is also common to hear questions such as: are we drowning in the volume of data? Is this growth in data overwhelming our ability to extract useful information or insight? Is the potential value of the increased data lost through our inability to manage and comprehend it? Does having more data mean more information – or less, due to analysis overload? Does the diversity of formats, quality, and sources further hinder data use?