
Monday 9 November 2015

SC15 Preview

SC15 - the biggest get-together of the High Performance Computing (HPC) world - takes place next week in Austin, TX. Around 10,000 buyers, users, programmers, managers, business development people, funders, researchers, media, etc. will be there.

With a large technical program, an even larger exhibition, and plenty of associated workshops, product launches, user groups, etc., SC15 will dominate the world of HPC for a week, plus most of this week leading up to it. It is one of the best ways for HPC practitioners to share experiences, learn about the latest advances, and build collaborations and business relationships.

So, to whet your appetites, here is the @hpcnotes preview of SC15 - what I think might be the key topics, things to look out for, what not to miss, etc.

New supercomputers

It's always one of the aspects of SC that grabs media and attendee attention the most. Which new biggest supercomputers will be announced? Will there be a new occupier of the No.1 spot on the Top500 list? Usually I have some idea of which new supercomputers are coming before they are made public, but this year I have no idea. My guess? No new No.1, but a few new Top20 machines. So which one will win the news coverage?

New products

In spite of the community repeatedly acknowledging that the whole system is important - memory, interconnect, I/O, software, architecture, packaging, etc. - judging by the media attention and informal conversations, we still seem to get most excited by the processors.

Thursday 2 August 2012

What is the point of supercomputers?

Maybe it seems an odd question to ask on a blog dedicated to High Performance Computing (HPC). But it is good to question why we do things – hopefully leading us to a clearer justification for investing money, time and effort. Ideally, this would also enable better delivery – the “how” supporting the “why” – focusing on the best processes, technologies, etc. to achieve the goals identified in the justification.

So, again, why supercomputing? Perhaps you think the answer is obvious – supercomputing enables modelling and simulation to be done faster than with normal computers, or enables bigger problems to be solved.

Monday 30 August 2010

Me on HPC 2

Things I have said (or have been quoted as saying - not always the same thing!) - some older interviews with me in various publications about HPC, multicore, etc.


Successful Deployment at Extreme Scale: More than Just the Iron
The Exascale Report
August 2010, by John West

[the full article requires a subscription; the extracts here are incomplete and slightly modified to respect that]

"cost of science, not just the cost of supercomputer ownership"

"lead time, and funding, to get the user community ready"

"spend a year or more selecting a machine and then deploy it as quickly as possible, makes it very difficult to build a community and get codes ready ahead of time"

"software must be viewed as part of the scientific instrument, in this case a supercomputer, that needs its own investment. High performance computing is really about the software; whatever hardware you are using is just an accelerator system."

"a machine is deployed and then obsolete within three years. And the users often have no idea what architecture is coming next. There is no real chance for planning, or a return on software development investment."

Tuesday 23 March 2010

What’s the next revolution in technical computing?

[Originally posted on The NAG Blog]

It’s a question that absorbs the attention of the technical computing community, especially those working at the leading edge of technology and performance (high performance computing, HPC). What is the next disruptive technology? In other words, what is the next technology that will replace a currently dominant technology? Usually a disruptive technology presents a step-change in performance, cost or ease of use (or a combination of these) compared to the established technology. The new technology may or may not be disruptive in the sense of a discontinuous change in user experience.

Why is identifying disruptive technology so important? First, those who spot the right change early enough and deploy it effectively can attain a significant advantage over competitors as a result of a substantial improvement in technical computing capability or reduction in cost. Second, identifying the right technology change in time can help ensure that future investments (whether software engineering, procurement planning, or HPC product development) are optimally spent.

However, in a field as fast-moving as technical computing, spotting the next disruptive technologies of specific relevance to your individual needs can easily become a full-time activity (which is why NAG helps to do this for others).

One very credible candidate for disruptive change in HPC right now is GPU computing (or related products that might be in development). However, at the Newport conference recently, the discussion turned to what the next disruptive technology to hit HPC would be (after the possible GPU disruption). One suggestion, made by John West (of InsideHPC fame), was that the next disruptive technology could be in software, especially programming tools and interfaces. This builds on the fact that parallel computing is no longer a specialist activity unique to the HPC crowd – parallel processors are becoming pervasive across all areas of computing, from embedded to personal to workgroup technical computing. Parallel programming is thus heading towards being a mass-market activity – and the mass market is unlikely to view what we currently have in HPC (Fortran plus MPI and/or OpenMP, limited tools, etc.) with much favour. I’m not knocking any of these, but they are not mass-market interfaces to parallel computing. So perhaps the mass market – through the sheer volume of people in need, and companies driven by economics – will come up with a “better” solution for interfacing with supercomputers.
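
To make that concrete, here is a minimal sketch (in C; my illustration, not anything presented at Newport) of the specialist interface in question. Even a trivial MPI program makes the programmer manage processes, ranks and explicit communication:

    /* Illustrative sketch: even trivial MPI code makes the programmer
       manage processes and communication explicitly.
       Compile with: mpicc sketch.c   Run with: mpirun -np 4 ./a.out */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, nprocs;
        double local, total;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        local = (double)rank;   /* stand-in for real local work */

        /* Explicit collective operation to combine per-process results */
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM,
                   0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("total from %d processes = %f\n", nprocs, total);

        MPI_Finalize();
        return 0;
    }

None of this is difficult for an HPC specialist, but it is exactly the kind of boilerplate a mass-market programming interface would be expected to hide.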

As an HPC community we lost control of much of our hardware to the commodity market some years ago. Maybe we now face losing control of our software to the commodity community too.

Thursday 4 February 2010

Don't call it High Performance Computing?

[Originally posted on The NAG Blog]

Having just signed up for Twitter (HPCnotes), I've realised that the space I previously had to get my point across was nothing short of luxurious (e.g. my ZDNet columns). It's like the traditional challenge of the elevator pitch - can you make your point about High Performance Computing (HPC) within the 140-character limit of a tweet? It might even be a challenge to state what HPC is in 140 characters. Can we sum up our profession that simply? To a non-HPC person?

The inspired John West of InsideHPC fame wrote about the need to explain HPC some time ago in HPCwire. It's not an abstract problem. As multicore processors (whether CPUs or GPUs) become the default for scientific computing, the parallel programming technologies and methods of HPC are becoming important for all numerical computing users - even if they don't identify themselves as HPC users. In turn, of course, HPC benefits in sustainability and usability from the mass-market use of parallel programming skills and technologies.
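
To illustrate the point (a sketch of mine, not something from John's article): a plain loop in C uses only one core however many the processor has, while HPC methods such as OpenMP let the same loop use them all with a single pragma:

    /* Illustrative sketch: one OpenMP pragma shares the loop
       iterations across all available cores.
       Compile with e.g.: gcc -fopenmp -O2 sketch.c */
    #include <stdio.h>

    #define N 100000000

    int main(void)
    {
        double sum = 0.0;

        /* Serial by default; the pragma makes it multicore.
           The reduction clause safely combines per-core partial sums. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++)
            sum += (double)i * 0.5;

        printf("sum = %f\n", sum);
        return 0;
    }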

I'll try to put it in 140 characters (less space for a link): Multicore CPUs promise extra performance but software must be optimised to take advantage. HPC methods can help.

It's not good - can you say it better? Add a comment to this blog post to try ...

For those of you finding this blog post from the short catch line above, hoping to find the answer to how HPC methods can help - well, that's what my future posts, and those of my colleagues here, will address.

Thursday 14 August 2008

NAG Embarks on a New Business Venture

[Interview with me in HPCwire, August 14, 2008]

by John E. West, for HPCwire

... responding to changes in computing at both ends of the spectrum, [NAG] is positioning itself as the place to go, not just for shrink-wrapped libraries, but also for education and expertise in how to program in parallel, and even for expert advice on how to buy, build and run your own supercomputer. HPCwire talked to Andrew Jones, vice-president of HPC business at NAG, on what he has in mind for this new business and how he sees the future of HPC and parallel programming shaping up ...

http://www.hpcwire.com/features/NAG_Embarks_on_a_New_Business_Venture.html?viewAll=y