tag:blogger.com,1999:blog-53763738520498445682024-03-13T17:05:56.205+00:00HPC - High Performance Computing, Supercomputing & CloudThe hpcnotes HPC blog - supercomputing, HPC, high performance computing, cloud, e-infrastructure, scientific computing, exascale, parallel programming services, software, big data, multicore, manycore, Phi, GPU, HPC events, opinion, ...Andrewhttp://www.blogger.com/profile/05974964640620611504noreply@blogger.comBlogger111125tag:blogger.com,1999:blog-5376373852049844568.post-80614949739058979732020-11-13T14:26:00.003+00:002020-11-13T14:27:34.464+00:00Name that supercomputer 2 (Quiz)It's a long time since I did an HPC quiz, so here is one to keep some fun in these odd times. Can you name these supercomputers?<br />
<br />
I'm looking for actual machine names (e.g. 'Fugaku') and the host site (e.g. RIKEN CCS). Bonus points for the machine details (e.g. Fujitsu A64FX).<br />
<br />
Submit your guesses or knowledgeable answers either through the comments field below, or to me on twitter (<a href="http://twitter.com/hpcnotes">@hpcnotes</a>).<br />
<br />
Answers will be revealed once there have been enough guesses to amuse me. Have fun!<br />
<br />
<br />
<ol>
<li>Maybe it's Italian style, but this oily system has a purely descriptive name, a bit like the name of a robot with a short circuit.</li>
<br />
<li>In spite of the name, this one is a step away from the very top.</li>
<br />
<li>The seven daughters of Atlas.</li>
<br />
<li>Arising from a beautiful reef, this top supercomputer is named after one of my co-presenters at my SC19 tutorial (or so we think).</li>
<br />
<li>This border system's owner often tells how it was renamed in planning due to a bigger, newer super that took its original name.</li>
<br />
<li>It has no name, at least not publicly, and the operator has not been open with full details, but with 10,000 GPUs it can do a lot of AI.</li>
<br />
<li>On the road to exascale, but not there yet, this system will be housed next year in a chilly northern European location, and shares similar architecture with two of the first exascale systems.</li>
<br />
<li>A chicken with green-ish / brown-ish eyes. Or is it a type of nut?</li>
<br />
<li>In a rare move, this number 9 is named after a living scientist, actually one of its users.</li>
<br />
<li>Sing a song for this one, because it is named to be hit hard.</li>
<br />
</ol>
.Andrewhttp://www.blogger.com/profile/05974964640620611504noreply@blogger.com0tag:blogger.com,1999:blog-5376373852049844568.post-12174169639521999142020-11-09T16:15:00.006+00:002020-11-09T16:16:27.892+00:00Snackable videos<h3 style="text-align: left;"> Snackable videos</h3>
<p>A few rash moments on my part on twitter and now I'm committed to recording a series of mini-videos about HPC and related topics.</p>
<p>Watch the <a href="https://youtu.be/9TrEMmF-Jnw">first episode</a> on the <a href="https://www.youtube.com/channel/UC6-oxuDPYMLSNhVUaVOJfBg/">hpcnotes YouTube channel</a> to find out more!</p><p><br /></p>Andrewhttp://www.blogger.com/profile/05974964640620611504noreply@blogger.com0tag:blogger.com,1999:blog-5376373852049844568.post-90230600525332548822020-05-22T14:21:00.005+01:002020-05-22T14:54:59.676+01:00 What makes a Supercomputer Centre a Supercomputer Centre?<div><h3 style="text-align: left;">
When is a Supercomputer Center not a Supercomputer Center?</h3></div><div></div><div><i>The world of HPC has always been a place of rapid change in technology with slower change in business models and skill profiles, but what actually makes a supercomputer center a supercomputer center?</i></div><div><i><br /></i></div><div></div><div><h3 style="text-align: left;"></h3><h3 style="text-align: left;">Tin (or Silicon maybe)<br /></h3></div><div>Is it having a big HPC system? How big counts? Does it matter what type of "big" system you have?<br /></div><div><br /></div><div><div>Does it matter if there is not one big supercomputer but instead a handful of medium sized ones of different types? <br /></div><div><br /></div><div>Does it count if the supercomputers are across the street, or in a
self-owned/operated datacentre on the other side of town? What if the
supercomputers are located hundreds of miles away from the HPC center (e.g. to
get cheap power & cooling)?</div><div><br /></div><div></div><h3 style="text-align: left;">Who and How<br /></h3></div><div>Or is it having a team of HPC experts able to help users? How many experts? What level of expertise counts? How many have to be RSE (Research Software Engineer) types?<br /></div><div><br /></div><div>Is it having the vision and processes to recognise they are primarily a service provider to their users ("customers") rather than thinking of themselves mainly as a buyer of HPC kit?</div><br /><div><div>What if you mainly have AI workloads rather than "traditional" HPC?
What if you only run many small simulation jobs and no simulations that
span thousands of cores? What if users only ever submit jobs via web
portals and never log in to the supercomputers directly?</div><h3 style="text-align: left;"></h3></div><div></div><div>Is it essential to have a .edu, .gov, .ac.uk etc. address? Or can .com be a supercomputer center too?</div><div><br /></div><span></span><div></div><div><h3 style="text-align: left;">This but not that?</h3></div><div></div><div>If you have no supercomputers of your own, but have 50 top class HPC experts who work with users on other supercomputers and also research future technologies - is that a supercomputer center?</div><div><br /></div><div>If you have a very large HPC system but only the bare minimum of HPC staff and no technology R&D efforts - is that a supercomputer center?</div><div><br /></div><div>Which of the last two adds more value to your users?</div><div><br /></div><div></div><h3 style="text-align: left;">Declare or Earn?</h3><div>Is it merely a matter of declaration - "we are a supercomputer center"? Or is it a matter of other supercomputer centers accepting you as a peer? But then who counts as other supercomputer centers to accept you? What if some do and some don't?</div><div><br /></div><div>Is there a difference between a supercomput<i>er</i> center and a supercomput<i>ing</i> center?</div><div><br /></div><div><i><b>What do you think?</b> And does your answer depend on whether you are a user, or work at a "traditional" supercomputer center, or a new type of supercomputing center, or an HPC vendor, or from outside the HPC field?</i></div><div><br /></div><div></div>Andrewhttp://www.blogger.com/profile/05974964640620611504noreply@blogger.com0tag:blogger.com,1999:blog-5376373852049844568.post-2657779448439658412020-02-21T17:28:00.000+00:002020-02-21T17:36:03.091+00:00Why cloud computing is like air travelSome fun observations comparing the worlds of cloud computing and air travel ...<br />
<br />
<h3>
Why cloud computing is like air travel</h3>
<ul>
<li>The price depends on how far in advance you commit/buy.</li>
<li>The marketing focuses on the desirability of the posher seats / more powerful VMs, but advertises the costs of the cheapest seats / VMs.</li>
<li>Just like there are three main alliances (Oneworld, Star Alliance, SkyTeam) plus various independent airlines, there are three main cloud providers (Microsoft Azure, Google, Amazon) plus various specialist cloud providers.</li>
</ul>
<a name='more'></a><ul>
<li>You can buy an airline ticket to almost anywhere and someone else figures out the details of providing you that flight. Likewise, you can select almost any type of hardware in the cloud and someone else has figured out the details of making that available to you.</li>
<li>The characters/culture of the various airlines and cloud providers can be quite different.</li>
<li>Changing which flight you take or which VM type you use is usually possible (may incur extra costs in both cases!).</li>
<li>Both cloud computing and air travel (global collaboration) can be very powerful in advancing the science and innovation we need to address the challenges of today and tomorrow.</li>
<li>There are wildly varying ideas of what makes a “reasonable” carry-on, just like there are wildly varying ideas of what is needed for a reasonable performance in the cloud. (<i>I’m not sure I own enough clothes to fill the size of carry-ons some people try to heave on board.</i>)</li>
<li>You can’t make one aircraft setup perfect for all passengers (<i>what is the obsession with shutting out the outside world with closed blinds on daytime flights???</i>) and you can’t make one cloud architecture perfect for all potential users.</li>
<li>Seasoned flyers understand the intricacies of each airport, airline, frequent flyer program, security checkpoints, multiple connections, etc. and don’t see why occasional flyers get stressed by it all. In the same way, seasoned cloud architects don’t see why their world looks so complicated to potential new users.</li>
<li>Both cloud computing and global air travel are amazing feats of human achievement spanning vision, planning, engineering, execution, and more.</li>
</ul>
<br /><ul>
</ul>
<h3>
And why cloud computing is <i>not </i>like air travel</h3>
<ul>
<li>On an aircraft you are very conscious of the person in seat next to you. In the cloud, you shouldn’t be aware of another VM using the same physical node as you (the cloud providers work hard to ensure you won’t see impact of naughty behaviour by other VMs, although this isn’t foolproof).</li>
<li>The relative total cost of cloud computing vs on-premises is a complex analysis and the answer varies for each customer situation. But there are very few situations where you can deploy your own aircraft, pilot, licences, etc. more cheaply than the price of an airline ticket!
</li>
</ul>
<br />
<br />
<b>Any other similarities or differences between cloud computing and air travel that you can think of? Please share in the comments below!</b><br />
<br />
<br />
<br />Andrewhttp://www.blogger.com/profile/05974964640620611504noreply@blogger.com0tag:blogger.com,1999:blog-5376373852049844568.post-9808107834018409722020-01-13T15:29:00.000+00:002020-01-13T15:29:35.868+00:00A step into the future: HPC and cloud<b>I am delighted to announce that at the start of February, I will be joining the Microsoft Azure HPC engineering & product team.</b><br />
<br />
The HPC world has experienced several big changes in technology or business model over the last few decades. Cloud computing is probably the next big change facing HPC, on both business model and technology fronts.<br />
<br />
I have been privileged to have earned a reputation with a wide range of HPC buyers and technology vendors as an impartial and knowledgeable voice on both the business and technical aspects of HPC (including cloud) over the last few years. A major trend that I observed was the pace at which I had to keep updating my independent assessment of the readiness and value of cloud. Today, on-premises HPC is still a great option to deliver impact and value to users. However, I have watched the amazing journey of cloud towards a genuine option delivering new or better value to HPC users and buyers.<br />
<br />
In particular, I have been impressed with the approach taken by Microsoft Azure towards the HPC space. This includes strong technology and product offerings, a sector-leading people strategy, and much more. Of course, the journey towards leadership of cloud for HPC is still in progress and I am excited to help drive that adventure by joining the Azure HPC team.<br />
<br />
More details of our vision, and my own role, will be shared over the coming days and months. Follow me on Twitter (<a href="https://twitter.com/hpcnotes">@hpcnotes</a>) and LinkedIn (<a href="https://www.linkedin.com/in/andrewjones">www.linkedin.com/in/andrewjones</a>) to learn more.
<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhRKc_Eriw91oDjnli59qWprXxtF_HWGyPOO6tvTWt2BqZ3vtqq_qAhctytagneRx8Kz2kNjmpTmfMr7fizz17toSGGrw7EEfjbspFAcy9bEUhphUQxzHw0Lo6ye_Mqb5yPfvVY0FEqhIk/s1600/IMG_4609.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1200" data-original-width="1600" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhRKc_Eriw91oDjnli59qWprXxtF_HWGyPOO6tvTWt2BqZ3vtqq_qAhctytagneRx8Kz2kNjmpTmfMr7fizz17toSGGrw7EEfjbspFAcy9bEUhphUQxzHw0Lo6ye_Mqb5yPfvVY0FEqhIk/s400/IMG_4609.JPG" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<br />Andrewhttp://www.blogger.com/profile/05974964640620611504noreply@blogger.com0tag:blogger.com,1999:blog-5376373852049844568.post-17585983774078947522020-01-10T14:33:00.001+00:002021-01-05T13:14:04.585+00:00Over a decade of HPC consulting success at NAG<h3>
From small beginnings ...</h3>
<br />
It is almost 12 years since I joined NAG to build and lead the HPC consulting and services business. Over that time, we have built a consulting business from a tiny start to its current thriving status. We have helped a wide range of customers around the world of High-Performance Computing (HPC) and related areas such as cloud computing and machine learning by providing training and tutorials, multi-year professional services contracts, benchmarking services, focused consulting projects, impartial procurement expertise, strategic and technical advice, and more.<br />
<br />
Protecting our customers' confidentiality and competitive advantages has been a strong theme of our success, which is why we have rarely been able to name our customers. We have helped many of the big oil & gas companies, plus several smaller ones, aerospace companies, manufacturing companies, automotive companies, public supercomputer centres, universities, government organisations, sports entities, HPC and cloud vendors, entertainment industry, and others.<br />
<br />
The trusted position we have earned in the HPC community is arguably unique and will be difficult to replicate. There are very few other organisations worldwide who can genuinely offer the expertise, experience, impartiality and integrity that NAG delivers.<br />
<br />
<h3>
HPC requires expertise - technical <i>and </i>business</h3>
<br />
HPC, whether traditional simulation, or using on-premises supercomputers, or combined machine learning and simulation, or in the cloud, is hard. Creating a robust and compelling business case for investment is not easy. Reducing the risk of decisions in strategic direction, technology selection, staffing, software development, is not easy. Finding skilled HPC programmers is not easy. Delivering cost-effective and high-impact HPC services (rather than just standing up a machine) is not easy.<br />
<br />
The current era of technology diversity in the HPC world is good for innovation and competitiveness. HPC buyers and users clearly benefit from this with better capabilities and pricing, but they must also manage the uncertainty and risk that the increased decision spaces create. Which CPU? On-premises vs cloud? Which cloud solution? Which system architecture? Which business model?<br />
<br />
Over the last decade, we have helped customers and friends solve these challenges. The range of issues and number of customers impacted continue to grow.<br />
<br />
I hope NAG has a bright future ahead with the new CEO, the healthy market opportunities, and the vision developing within the Executive Team. I expect NAG will continue to be a rare source of proven expertise in technical computing.<br />
<br />
<a name='more'></a><br />
<br />
<h3>
New adventures</h3>
<br />
However, I can announce today that I was recently inspired by an exciting new role elsewhere in the HPC world and so I have made the difficult decision to leave NAG at the end of this month.<br />
<br />NAG has been an excellent place to work, with many great colleagues, and a plethora of wonderful customers and friends around the world that have made the last eleven and a half years such an enjoyable experience.<br />
<br />
<b>Please do check back here over the next few days to hear where my new adventure takes me.</b><br />
<b><br /></b>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh1kMw2XAT0qLPR-qmVyCtFyEhTcvcz8Ja3mVMa55QKhlx_WyED8PQRnghObGhNKVx5FiUtWu1pdnaB365Ww_k4zWOr3C5vvXRu1gqriCVe_ecwtxmnhRr2FE9Jaw0Uoadf3Hlbz3UJ_7M/s1600/IMG_4266.JPG" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1200" data-original-width="1600" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh1kMw2XAT0qLPR-qmVyCtFyEhTcvcz8Ja3mVMa55QKhlx_WyED8PQRnghObGhNKVx5FiUtWu1pdnaB365Ww_k4zWOr3C5vvXRu1gqriCVe_ecwtxmnhRr2FE9Jaw0Uoadf3Hlbz3UJ_7M/s400/IMG_4266.JPG" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<b>Edited to add announcement:</b><br />
<br />
The new adventure is announced here: <a href="https://www.hpcnotes.com/2020/01/a-step-into-future-hpc-and-cloud.html">https://www.hpcnotes.com/2020/01/a-step-into-future-hpc-and-cloud.html</a><br />
<br />
<br />Andrewhttp://www.blogger.com/profile/05974964640620611504noreply@blogger.com0tag:blogger.com,1999:blog-5376373852049844568.post-4528954292565185182019-11-12T17:55:00.000+00:002019-11-12T17:55:45.596+00:00hpcnotes at SC19I will be in Denver next week for SC19.<br />
<br />
As usual, NAG will have a booth (#932) in the exhibition. The booth will have information and handouts on our HPC consulting services (I believe the marketing strapline is "<i>The leading independent and international center-of-excellence in the business and technical aspects of HPC</i>"). The NAG staff on the booth will be happy to discuss our consulting work or software tools.<br />
<br />
However, I expect to spend no more than a few minutes near the NAG booth during the whole week. So, where can you find me and what will I be doing?<br />
<br />
Each year I have to balance four personalities at SC:<br />
<br />
<a name='more'></a><br /><br />
<ul>
<li><b>Consultant</b>. Meeting with existing and prospective customers to understand how our rare consulting capabilities in HPC (and cloud) can help customers' HPC deliver the best impact at the best cost.</li>
<li><b>Pseudo-customer</b>. Whilst NAG doesn't buy supercomputers for ourselves, our consulting team advises numerous customers around the world with planning for investments, technology evaluations, helping with procurement, etc. Due to confidentiality, our role is rarely made public, but we have been involved in over 40 HPC planning/procurement projects, with a cumulative procurement budget probably well over $1bn. Thus, like any of the other major HPC buyers, we spend the week of SC doing the rounds of the private NDA briefings in the hotels surrounding the convention center and in other meetings with vendors to understand their roadmaps, plans, risks, hopes, etc.</li>
<li><b>Educator</b>. Helping to share our experience and lessons, and grow the next generation of practitioners in HPC. For example, our tutorials on the business aspects of HPC (TCO, value metrics, procurement, strategic considerations, etc) have been a strongly attended feature of the official SC program for several years now. (<a href="https://www.hpcnotes.com/2019/11/sc19-tutorials.html">https://www.hpcnotes.com/2019/11/sc19-tutorials.html</a>)</li>
<li><b>Community member</b>. A huge part of the SC conference (or any HPC conference) is meeting people - from old friends to new contacts - on the show floor, in the corridors, or at the many networking receptions.</li>
</ul>
<br />
I’m looking forward to an intense and valuable week in Denver. If you see me around SC (or on the journey there), please feel free to say hello, trade gossip (I won’t break confidences though), share travel stories, or ask for opinions on any aspect of HPC.
<br />
<br />
Hope you have a productive SC19!
Andrewhttp://www.blogger.com/profile/05974964640620611504noreply@blogger.com0tag:blogger.com,1999:blog-5376373852049844568.post-17537320453677136172019-11-09T16:05:00.000+00:002019-11-09T16:27:42.569+00:00Supercomputers and Jet EnginesWe all know that supercomputers are used to design jet engines.
Designing and understanding the performance characteristics of a jet engine would likely take millions of core-hours on large HPC systems.<br />
<br />
But, a random fact thrown out over lunch at May's industry HPC leaders group meeting sparked an interesting conversation.<br />
<br />
If a jet engine takes many megawatt-hours of supercomputing to design - how many MW does a jet engine create, and thus how many petaflops could a jet engine support if it were the power source?<br />
<br />
The HPC leaders of GE, Boeing, ExxonMobil and others drew together their shared knowledge of aerospace and HPC - and some use of Google search - to build a fun picture.<br />
<a name='more'></a><br />
<br />
The most powerful commercial jet engine on the market is the General Electric GE90, as often used on the Boeing 777 aircraft. This comes in many variants, so I’ve fudged the detail of the numbers here.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjN0CAQCEFDPEu__V5N63-l4HoaYhyphenhyphenMRKKwQNK8MFpacTj_cwIZn7FhLjeZYqlk31GyDmTdI3K77DluJVyrCePN9xt3Js5yRazomftDdBXNZxVD_YXLnIAz1MpnEPczE7w-TF1XQpsElh4/s1600/ge90.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1200" data-original-width="1600" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjN0CAQCEFDPEu__V5N63-l4HoaYhyphenhyphenMRKKwQNK8MFpacTj_cwIZn7FhLjeZYqlk31GyDmTdI3K77DluJVyrCePN9xt3Js5yRazomftDdBXNZxVD_YXLnIAz1MpnEPczE7w-TF1XQpsElh4/s400/ge90.jpg" width="400" /></a></div>
<br />
<br />
Each GE90 delivers about 120kW of electrical power to the aircraft.
120kW doesn’t get you a lot of supercomputer. Assuming a CPU based system ("boring but most useful", to steal a line from Dan Stanzione) it would probably be just under one petaflops of computing capacity, depending on the details of hardware choices.<br />
<br />
But that only accounts for the designed electricity feed directly supplied from the GE90.
The GE90 produces O(100,000) lb of thrust. The lunch geeks’ calculations concluded that a GE90 is capable of producing around 80MW of power if a very efficient generator were attached.
Even allowing for a good bit of conversion inefficiency, that means a 50MW electrical power output should be no trouble for the GE90.
As a check, the LM9000 power generation turbine (based on GE90) produces 65MW.<br />
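As a sanity check, the lunch-table arithmetic can be sketched in a few lines of Python. Note the cruise speed, conversion efficiency, and flops-per-watt figures are my own illustrative assumptions, chosen to be plausible and to match the rough numbers above, not GE or vendor specifications:

```python
# Back-of-envelope: could a GE90 power the supercomputer that designed it?
# All figures below are rough assumptions for illustration only.

LBF_TO_NEWTONS = 4.448

thrust_lbf = 100_000        # O(100,000) lbf of thrust
cruise_speed_m_s = 180      # assumed cruise speed, roughly Mach 0.6 at altitude

# Propulsive power = thrust x speed
mech_power_mw = thrust_lbf * LBF_TO_NEWTONS * cruise_speed_m_s / 1e6
print(f"Mechanical power: ~{mech_power_mw:.0f} MW")            # ~80 MW

# Allow for generator / conversion losses (assumed ~60% efficient)
elec_power_mw = mech_power_mw * 0.6
print(f"Usable electrical power: ~{elec_power_mw:.0f} MW")     # ~48 MW

# How much CPU-based supercomputer does the 120 kW aircraft feed buy?
flops_per_watt = 8e9        # assumed ~8 gigaflops/watt for a CPU-based system
petaflops = 120e3 * flops_per_watt / 1e15
print(f"From the 120 kW feed: ~{petaflops:.2f} petaflops")     # just under 1 PF
```

With those assumptions the numbers line up with the lunch conversation: ~80MW mechanical, comfortably 50MW electrical after losses, and just under a petaflops from the 120kW feed alone.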
<br />
So, a GE90 could easily power the HPC system that designed itself (I think it's a fair guess that GE isn't currently running over 50MW of HPC kit). More interestingly, a GE90 would also have enough oomph to comfortably power the exascale supercomputers that will design its successors.<br />
<br />
Of course, running a jet engine at full tilt 24x7 near the data centre would be a noisy, expensive, and very environmentally unfriendly way to generate electricity.<br />
<br />
Plus, as was pointed out at our lunch discussion, who wants their supercomputers to smell of JetA fuel?
<br />
<br />
<br />
<br />
<br />Andrewhttp://www.blogger.com/profile/05974964640620611504noreply@blogger.com1tag:blogger.com,1999:blog-5376373852049844568.post-52768719405115480452019-11-08T17:51:00.002+00:002019-11-08T17:51:25.367+00:00SC19 TutorialsAt SC19, I will be again be leading a full day of tutorial on the business aspects of HPC: "<a href="https://sc19.supercomputing.org/presentation/?id=tut118&sess=sess187">Delivering HPC: Procurement, Cost Models, Metrics, Value, and More</a>".<br />
<br />
<div>
</div>
<br />
<br />
My co-presenters for SC19 will be Ingrid Barcena Roig, Branden Moore, Dairsie Latimer, and Sierra Koehler.<br />
<br />
The tutorial is on Monday 18th November, in room 210-212 of the Denver convention center, 8:30am - 5:00pm.<br />
<br />
<div>
<br /></div>
<div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEghyphenhyphenJO88YewRF6kS-Dnt2o2cdZNmExHPk03vAt8YBXl9wSIQ6wm_DCa5NhmpqH9WILVdx8OCXEofnvByVa7xZjlLNjFN8BjbsglWg-xCnXicPN63JbyaVWKKVxw61luFLyKpzPgrWiTURw/s1600/sc19_tutorial_agenda.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="720" data-original-width="1280" height="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEghyphenhyphenJO88YewRF6kS-Dnt2o2cdZNmExHPk03vAt8YBXl9wSIQ6wm_DCa5NhmpqH9WILVdx8OCXEofnvByVa7xZjlLNjFN8BjbsglWg-xCnXicPN63JbyaVWKKVxw61luFLyKpzPgrWiTURw/s640/sc19_tutorial_agenda.jpg" width="640" /></a></div>
<br />
<br />
<br />
Please join us to learn more about how to get investment in HPC, how to spend it wisely, and how to measure the impact.<br />
<br />
<br />
<br /></div>
Andrewhttp://www.blogger.com/profile/05974964640620611504noreply@blogger.com0tag:blogger.com,1999:blog-5376373852049844568.post-15126179615612942712019-11-08T12:43:00.003+00:002019-11-09T16:06:11.320+00:00Guide to announcements for SC19<br />
(Originally published on my LinkedIn profile: <a href="https://www.linkedin.com/feed/update/urn:li:activity:6597188398224556032/">post link</a>)<br />
<br />
It's that time of year - yes, the annual fest of press releases and social media deluges in the run up to 'SC' - the primary annual supercomputing conference, held this year in Denver.<br />
<br />
<br />
Here is a handy guide for <b>vendor PR teams</b> ...<br />
<br />
[company] will be at #SC19!<br />
<i>Yes, along with almost everyone else in #HPC world</i><br />
<br />
[company] will be highlighting products at SC19!<br />
<i>As above</i><br />
<br />
[company] will launch new version of our current product in a slightly different shade of grey at SC19!<br />
<i>We had no actual news</i><br />
<br />
<br />
However, the <b>HPC centers are just as bad</b> with "news" for the big annual #Supercomputing conference:<br />
<br />
<a name='more'></a><br /><br />
[center] our users did some science!<br />
<i>We very much hope so</i><br />
<br />
[center] system is top in niche category defined such that our system is top at the time of SC19!<br />
<i>Well done</i><br />
<br />
[center] will be at #SC19!<br />
<i>Some vendors might be there too; see above</i><br />
<br />
[center] staff to give talks and participate in BoFs at SC19!<br />
<i>The official program PDF has 937 pages filled with talks ...</i><br />
<br />
<br />
And, of course, <b>I’m no better</b>: my repetitive SC PR ...<br />
<br />
"Andrew Jones & friends* to deliver tutorial on business of #HPC at #SC19!"<br />
<i>err, yup, just like SC18, SC17, SC16, SC15, SC13, so no news there!</i><br />
<br />
<br />
* the friends this year are:
Andrew Jones (aka @hpcnotes), Ingrid Barcena Roig, Branden Moore, Dairsie Latimer, Sierra Koehler.<br />
<br />
<br />
See you all in Denver! :-)<br />
<br />Andrewhttp://www.blogger.com/profile/05974964640620611504noreply@blogger.com0tag:blogger.com,1999:blog-5376373852049844568.post-65506085569885262862018-11-07T11:42:00.000+00:002018-11-07T11:42:30.945+00:00SC18 previewI've written my customary preview of SC, which is now published at HPC Wire: <a href="https://www.hpcwire.com/2018/11/06/sc18-preview-big-in-dallas/">https://www.hpcwire.com/2018/11/06/sc18-preview-big-in-dallas/</a>.<br />
<br />
Over 10,000 members of the global HPC community will gather in Dallas for the SC18 conference. Even a decent-sized team will struggle to attend everything the official program has to offer. On top of this, there will be a plethora of public and private meetings outside the official program, many of which are more valuable than the official program. Plus, there will be the usual flood of press releases, social media blasts, etc.<br />
<br />
Out of all of this, what will emerge as the key themes? What are some essential things to do/attend? Read the <a href="http://twitter.com/hpcnotes">@hpcnotes</a> SC18 preview to find out!<br />
<br />
<br />Andrewhttp://www.blogger.com/profile/05974964640620611504noreply@blogger.com0tag:blogger.com,1999:blog-5376373852049844568.post-74853421489238127182018-11-06T11:32:00.000+00:002018-11-07T18:01:47.641+00:00SC18 Networking Receptions<h3>
Networking Receptions at SC18 Dallas [<i>updated regularly until SC starts</i>]</h3>
<br />
A huge part of the SC conference (or any HPC conference) is meeting people - from old friends to new contacts. Here is a curated list of networking opportunities (receptions) crowd-sourced from this twitter thread <a href="https://twitter.com/hpcnotes/status/1059437643837161474">https://twitter.com/hpcnotes/status/1059437643837161474</a> and other sources:<br />
<br />
Sunday 11th<br />
<ul>
<li>Spectrum Scale UG and happy hour, 12-4pm</li>
<li>LLVM & Flang social, 6pm-9pm, Aloft, <a href="http://lists.llvm.org/pipermail/llvm-dev/2018-October/127303.html">http://lists.llvm.org/pipermail/llvm-dev/2018-October/127303.html</a></li>
</ul>
Monday 12th<br />
<ul>
<li>Intersect360, 2pm-4pm, Biergarten Restaurant, <a href="https://www.eventbrite.com/e/intersect360-research-sc18-networking-reception-tickets-51318644447">https://www.eventbrite.com/e/intersect360-research-sc18-networking-reception-tickets-51318644447</a></li>
<li>DDN user group reception, 5pm, Old Red Museum of Dallas County History & Culture, following user group meeting <a href="https://www.ddn.com/company/events/user-group-sc/">https://www.ddn.com/company/events/user-group-sc/</a></li>
<li>SC official opening gala, 7pm-9pm, <a href="https://sc18.supercomputing.org/presentation/?id=pec122&sess=sess284">https://sc18.supercomputing.org/presentation/?id=pec122&sess=sess284</a></li>
<li>Beowulf Bash, 9pm, Eddie Dean's Ranch, <a href="https://beowulfbash.com/">https://beowulfbash.com</a>/</li>
</ul>
Tuesday 13th<br />
<ul>
<li>Women in HPC, 6.30pm-9pm, Cafe Herrera on Lamar (at the Omni Dallas), <a href="https://www.eventbrite.co.uk/e/whpcsc18-networking-and-careers-reception-tickets-51886141847">https://www.eventbrite.co.uk/e/whpcsc18-networking-and-careers-reception-tickets-51886141847</a></li>
<li>IBM</li>
<li>Cray</li>
<li>DDN, 7pm, Reunion Tower, tickets via DDN booth #3213</li>
<li>Nimbix, 6pm-10pm, d.e.c. on dragon st, <a href="https://www.eventbrite.com/e/nimbix-sc18-lounge-party-registration-50540352555">https://www.eventbrite.com/e/nimbix-sc18-lounge-party-registration-50540352555</a></li>
<li>OpenACC, 7pm-10pm, Eddie Dean's Ranch, <a href="https://www.eventbrite.com/e/9th-openacc-user-group-meeting-sc2018-tickets-50250658071">https://www.eventbrite.com/e/9th-openacc-user-group-meeting-sc2018-tickets-50250658071</a></li>
</ul>
<div>
<br /></div>
Wednesday 14th<br />
<ul>
<li>Mellanox, 6.30pm, Sheraton, <a href="http://www.mellanox.com/sc18/event.php?ls=social-tw&lsd=11.5.18">http://www.mellanox.com/sc18/event.php?ls=social-tw&lsd=11.5.18</a></li>
<li>Boston Limited, 7.30pm Bob's Steak & Chop House, <a href="https://www.boston.co.uk/events/2018/sc18.aspx">https://www.boston.co.uk/events/2018/sc18.aspx</a> [pre-booking required via booth #3255]</li>
</ul>
<br />
Tweet me <a href="http://twitter.com/hpcnotes">@hpcnotes</a> using hashtag #SC18 to add your reception to this list!<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />Andrewhttp://www.blogger.com/profile/05974964640620611504noreply@blogger.com0tag:blogger.com,1999:blog-5376373852049844568.post-32148892257881356872018-11-05T20:28:00.001+00:002018-11-07T14:36:02.519+00:00SC18 TutorialsAt SC18, I will be leading two tutorials, along with my long-time co-presenter Owen Thomas and new co-presenter for SC18, Ingrid Barcena Roig.<br />
<br />
<ul>
<li>8:30am - noon : "<a href="https://sc18.supercomputing.org/?post_type=page&p=3479&id=tut112&sess=sess234">The Business of HPC: TCO, value, metrics, and more ...</a>"</li>
<li>1.30pm - 5.00pm : "<a href="https://sc18.supercomputing.org/?post_type=page&p=3479&id=tut113&sess=sess235">Procurement and Commissioning of HPC Systems</a>"</li>
</ul>
<br />
<div>
Both tutorials are on Monday 12th November, in room C140 of the Dallas convention center.</div>
<div>
<br /></div>
<div>
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsIOcwIeQZ_IQ3l7PuaQ49BeBjGveyVa1B3Zf4CnloiRS2nthYBaTXRBScwcasIo1tfrf4_veMW9G0RWMtnqJKuPcLOoBr0j208VFrK63z5CaL1fX5ZeCpQMppsp3WuxBu-EcBDV7FFjY/s1600/SC18_Tutorial_Business_of_HPC_agenda.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="720" data-original-width="1280" height="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsIOcwIeQZ_IQ3l7PuaQ49BeBjGveyVa1B3Zf4CnloiRS2nthYBaTXRBScwcasIo1tfrf4_veMW9G0RWMtnqJKuPcLOoBr0j208VFrK63z5CaL1fX5ZeCpQMppsp3WuxBu-EcBDV7FFjY/s640/SC18_Tutorial_Business_of_HPC_agenda.jpg" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhob1_zRPN6k9M4VE7heQbdk7OqcR9hOSlcOdblinpqK9DPrGHxmOn-8oJOHhT2Cd6w_ZCx6ueX9Z3L2js4C0nW1UxB69f3zYj6lXoowet6zFNvAZaqBpfwkadDvuq0z2yW6VlaNpiclF4/s1600/SC18_Tutorial_Procurement_and_Commissioning_agenda.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="720" data-original-width="1280" height="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhob1_zRPN6k9M4VE7heQbdk7OqcR9hOSlcOdblinpqK9DPrGHxmOn-8oJOHhT2Cd6w_ZCx6ueX9Z3L2js4C0nW1UxB69f3zYj6lXoowet6zFNvAZaqBpfwkadDvuq0z2yW6VlaNpiclF4/s640/SC18_Tutorial_Procurement_and_Commissioning_agenda.jpg" width="640" /></a></div>
<br />
<br />
<br />
Please join us to learn more about how to get investment in HPC, how to spend it wisely, and how to measure the impact.<br />
<br />
<br />
<br /></div>
Andrewhttp://www.blogger.com/profile/05974964640620611504noreply@blogger.com0tag:blogger.com,1999:blog-5376373852049844568.post-3190251543429878662018-06-23T10:54:00.000+01:002018-06-23T12:53:47.476+01:00A useful reading list for travelling to ISC18Travelling to Frankfurt for ISC? Need to feed your HPC thirst while on planes, trains, or in hotel rooms? Here is my pick of things to download and read so that you are fully informed when you start ISC:<br />
<br />
<ul>
<li>(Obviously!) <a href="https://www.hpcwire.com/2018/06/21/isc-2018-preview-from-hpcnotes/">The @hpcnotes ISC18 preview</a> at HPC Wire</li>
<li>The <a href="https://insidehpc.com/2018/06/welcome-isc-high-performance-2018/">official welcome to ISC18 by the organizers</a>, via InsideHPC</li>
<li>An article on the race between USA and China (and others) to get to the top of the Top500 at WiredUK "<a href="http://www.wired.co.uk/article/ibm-summit-supercomputer-china-taihulight-fastest-supercomputer"><i>Why the US and China's brutal supercomputer war matters</i></a>" and my follow up thoughts: <a href="https://www.hpcnotes.com/2018/06/does-it-matter-whether-usa-china-eu-or.html">https://www.hpcnotes.com/2018/06/does-it-matter-whether-usa-china-eu-or.html</a></li>
<li><a href="https://www.top500.org/news/sandia-to-install-first-petascale-supercomputer-powered-by-arm-processors/">Sandia to deploy 2.3 petaflops Cavium TX2 ARM supercomputer</a> at Top500.org</li>
<li><a href="https://www.nextplatform.com/2018/06/21/details-emerge-on-post-k-exascale-system-with-first-prototype/">Details on Japan's post-K exascale supercomputer</a> from The Next Platform</li>
<li>[<i>added Sat afternoon</i>] <a href="https://www.top500.org/news/thomas-sterling-talks-exascale-chinese-hpc-machine-learning-and-non-von-neumann-architectures/">Top500 interviews Thomas Sterling on Exascale, Chinese HPC, ML and non-von-Neumann</a></li>
<li>[<i>more to follow during the weekend ...</i>]</li>
</ul>
<div>
See you in Frankfurt!</div>
<div>
<br /></div>
<div>
Andrew / <a href="http://twitter.com/hpcnotes">@hpcnotes</a></div>
<div>
<br /></div>
Andrewhttp://www.blogger.com/profile/05974964640620611504noreply@blogger.com0tag:blogger.com,1999:blog-5376373852049844568.post-6849358433331743062018-06-20T17:57:00.004+01:002018-06-20T17:58:12.036+01:00NAG-TACC HPC Leadership Institute 2018Just taken over a HPC management or leadership role? Or hoping to soon? Or know someone who could grow into those roles? Or been a HPC director for years but value ongoing personal development?<br />
<br />
The HPC Leadership Institute is a partnership between Numerical Algorithms Group (NAG) and Texas Advanced Computing Center (TACC) to deliver training on the business aspects of High Performance Computing. The training covers strategy, total cost of ownership (TCO), cloud vs on-site, supercomputer procurement, governance, user services, and much more.<br />
<br />
The 2018 course will be held in Austin TX September 11-13. Learn more and register now at:<br />
<br />
<a href="https://www.tacc.utexas.edu/education/institutes/hpc-leadership-institute">https://www.tacc.utexas.edu/education/institutes/hpc-leadership-institute</a><br />
<br />
<br />Andrewhttp://www.blogger.com/profile/05974964640620611504noreply@blogger.com0tag:blogger.com,1999:blog-5376373852049844568.post-65082666069773472552018-06-20T12:39:00.004+01:002018-06-20T12:42:42.574+01:00Does it matter whether USA, China, EU, or someone else has the biggest supercomputer?Much fuss will be made over ORNL's new Summit supercomputer at the ISC18 event next week - in particular the fact that it means the USA replaces China as the home of the world's fastest supercomputer according to <a href="http://www.top500.org/">www.top500.org</a>. This brings the usual question as to whether it really matters which country has the biggest supercomputer.<br />
<br />
Having a supercomputer 20%, or even 2x, faster than a competitor isn’t critical on its own, because a 20% or 2x gap in actual competitive capability can be made up through better software, better people, or better service delivery practices.<br />
<br />
However, a 10x faster supercomputer would be an issue, because that would typically reflect a political commitment to High Performance Computing (HPC) involving hardware and software and people - and so could mean potential capability dominance.<br />
<br />
Of course, if you had the 2x slower supercomputer <i>without </i>investing in people/software/practices to make up the difference, then that would be a meaningful competitive gap and <i>would </i>matter.<br />
<br />
Read more in this article at WiredUK: "<a href="https://www.wired.co.uk/article/ibm-summit-supercomputer-china-taihulight-fastest-supercomputer">Why the US and China's brutal supercomputer war matters</a>"<br />
<br />
<br />Andrewhttp://www.blogger.com/profile/05974964640620611504noreply@blogger.com0tag:blogger.com,1999:blog-5376373852049844568.post-28870535905561494682017-11-22T16:45:00.000+00:002017-11-22T16:46:25.781+00:00Benchmarking HPC systemsAt SC17, we celebrated the 50th edition of the Top500 list. With nearly 25,000 list positions published over 25 years, the Top500 is an incredibly rich database of consistently measured performance data with associated system configurations, sites, vendors, etc. Each SC and ISC, the Top500 feeds community gossip, serious debate, the HPC media, and ambitious imaginations of HPC marketing departments. Central to the Top500 list is the infamous HPL benchmark.<br />
<br />
Benchmarks are used to answer questions such as (naively posed): “<i>How fast is this supercomputer?</i>”, “<i>How fast is my code?</i>”, “<i>How does my code scale?</i>”, “<i>Which system/processor is faster?</i>”.<br />
<br />
In the context of HPC, benchmarking means the collection of quantifiable data on the speed, time, scalability, efficiency, or similar characteristics of a specific combination of hardware, software, configuration, and dataset. In practice, this means running well-understood test case(s) on various HPC platforms/configurations under specified conditions or rules (for consistency) and recording appropriate data (e.g., time to completion).<br />
<br />
These test cases may be full application codes, or subsets of those codes with representative performance behaviour, or standard benchmarks. HPL falls into the latter category, although for some applications it could fall into the second category too. In fact, this is the heart of the debate over the continued relevance of the HPL benchmark for building the Top500 list: how many real-world applications does it provide a meaningful performance guide for? But, even moving away from HPL to “user codes”, selecting a set of benchmark codes is as much a political choice (e.g., reflecting stakeholders) as it is a technical choice.<br />
<br />
<a name='more'></a><br /><br />
Once the performance data has been collected, an analysis phase usually follows. This seeks to explore and explain the observed performance behaviour with respect to architecture or other features of either the hardware or the software.<br />
<br />
Thus, while benchmarks are only measured for specific scenarios, they are most often used to extrapolate or infer more general behaviour. This might include predicting the performance of a potential hardware upgrade, or of a new algorithm, or identifying a performance bottleneck. This is also an easy area for "cheating" or optimistic assumptions to creep in, and care is needed when making decisions based on extrapolated benchmark data. Comparing results across different systems or scales requires a treatment of architectural and extrapolation issues, which will be complex and depend on in-depth knowledge of the hardware, software or both.<br />
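For example, one common but easily abused extrapolation is projecting scalability from a couple of measured runs using Amdahl's law. The sketch below is illustrative only, with invented timings; it is one simple model, not a substitute for the in-depth architectural knowledge mentioned above.

```python
# Illustrative only: extrapolating scaling via Amdahl's law,
# t(n) = t1 * ((1 - p) + p / n), using invented timings.

def parallel_fraction(t1, tn, n):
    """Estimate the parallel fraction p from measured times on 1 and n cores."""
    return (1 - tn / t1) / (1 - 1 / n)

def projected_time(t1, p, n):
    """Project the runtime on n cores under the same fixed-size model."""
    return t1 * ((1 - p) + p / n)

t1, t16 = 100.0, 15.0                # hypothetical: 100 s serial, 15 s on 16 cores
p = parallel_fraction(t1, t16, 16)   # estimated parallel fraction
t256 = projected_time(t1, p, 256)    # projected runtime on 256 cores
print(f"p = {p:.3f}, projected t(256) = {t256:.2f} s")
```

Note the optimism trap: the model assumes a fixed problem size and zero communication overhead, so real runs at 256 cores may be far slower than the projection.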
<br />
When undertaking benchmarking, it is important to collect a full set of metadata. The more metadata recorded the better, and a typical list might be: which machine, system details, time, other users/codes on the system, runtime flags, code build details, library versions, input deck, core count, node population, topology used, etc.<br />
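A minimal sketch of that record-keeping, with field names invented for illustration (real lists will be longer and site-specific), might look like:

```python
import json
import platform
import time

def run_benchmark(workload, metadata):
    """Time one benchmark case and return the wall time together with
    the metadata needed to interpret the result later. The field names
    here are illustrative, not a definitive list."""
    record = dict(metadata)                      # copy the caller-supplied fields
    record["hostname"] = platform.node()
    record["timestamp"] = time.strftime("%Y-%m-%dT%H:%M:%S")
    start = time.perf_counter()
    workload()                                   # the benchmark itself
    record["wall_time_s"] = time.perf_counter() - start
    return record

# Hypothetical usage: a toy workload stands in for a real benchmark code.
result = run_benchmark(
    lambda: sum(range(1_000_000)),
    {"code_version": "v1.2", "compiler_flags": "-O3",
     "core_count": 1, "input_deck": "case_small"},
)
print(json.dumps(result, indent=2))
```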
<br />
Finally, in addition to measuring the performance achieved, it might be appropriate to measure the effort needed to achieve that performance (e.g., code porting and tuning).<br />
<br />
Good benchmarking requires specific skills and experience, plus a persistent, methodical, enquiring attitude. It can be complicated, definitely frustrating, hopefully insightful, and even fun!
<br />
<br />
<i>A version of this article was originally published in the print edition of Top500 News at SC17.</i><br />
<div>
<br /></div>
Andrewhttp://www.blogger.com/profile/05974964640620611504noreply@blogger.com0tag:blogger.com,1999:blog-5376373852049844568.post-48579007862135816852017-09-29T17:38:00.000+01:002017-11-22T16:46:40.127+00:00Finding a Competitive Advantage with High Performance ComputingHigh Performance Computing (HPC), or supercomputing, is a critical enabling capability for many industries, including energy, aerospace, automotive, manufacturing, and more. However, one of the most important aspects of HPC is that HPC is not only an <i>enabler</i>, it is often also a <i>differentiator </i>– a fundamental means of gaining a competitive advantage.<br />
<br />
<h3>
Differentiating with HPC</h3>
<br />
Differentiating (gaining a competitive advantage) through HPC can include:<br />
<ul>
<li><b>faster </b>- complete calculations in a shorter time;</li>
<li><b>more </b>- complete more computations in a given amount of time;</li>
<li><b>better </b>- undertake more complex computations;</li>
<li><b>cheaper </b>- deliver computations at a lower cost;</li>
<li><b>confidence </b>- increase the confidence in the results of the computations; and </li>
<li><b>impact </b>- effectively exploiting the results of the computations in the business.</li>
</ul>
These are all powerful business benefits, enabling quicker and better decision making, reducing the cost of business operations, better understanding risk, supporting safety, etc.<br />
<br />
Strategic delivery choices are the broad decisions about <i>how </i>to do/use HPC within an organization. This might include:<br />
<ul>
<li>choosing between cloud computing and traditional in-house HPC systems (or points on a spectrum between these two extremes);</li>
<li>selecting between a cost-driven hardware philosophy and a capability-driven hardware philosophy;</li>
<li>deciding on a balance of internal capability and externally acquired capability;</li>
<li>choices on the balance of investment across hardware, software, people and processes.</li>
</ul>
The answers to these strategic choices will depend on the environment (market landscape, other players, etc.), how and where you want to navigate that environment, and why. This is an area where our consulting customers benefit from our expertise and experience. If I were to extract a core piece of advice from those many consulting projects, it would be: "<b>explicitly make a decision rather than drift into one, and document the reasons, risk accepted, and stakeholder buy-in</b>".<br />
<br />
<h3>
Which HPC technology?</h3>
<br />
A key means of differentiating with HPC, and one of the most visible, is through the choice of hardware technologies used and at what scale. The HPC market is currently enjoying (or is it suffering?) a broader range of credible hardware technology options than the previous few years.<br />
<br />
<a name='more'></a><br /><br />
Whilst systems based on Intel Xeon (Skylake is the current iteration) are still the dominant choice, there are several viable alternatives. Other processor options include AMD's EPYC, IBM's (Open)Power, and ARM family (e.g., Cavium's ThunderX series).<br />
<br />
The use of Graphics Processing Units (GPUs) for computations has become mature over recent years, and these are now a realistic candidate for use in production environments. The GPU-for-HPC space is dominated by NVidia in terms of presence, product and ecosystem maturity. AMD has a competing GPU product that has some possible performance advantages, with arguably a less rich ecosystem, although it is showing promise for maturing rapidly.<br />
<br />
Intel’s manycore processor, the Xeon Phi (often referred to by the codename of the current generation, Knights Landing, or KNL), is an alternative that is attracting attention. The Xeon Phi promises higher performance than traditional Xeon processors, without the coding discontinuity of the GPU solutions.<br />
<br />
The interaction of these hardware technology choices with the other aspects of the competitive game - i.e., software, people and processes - must also be considered.<br />
<br />
Our team does a lot of benchmarking and other work to understand the relative performance, benefits, risks and futures of each technology. I'm not going to pass comment on the pros or cons of any of these options here, although I will note that there is no clear winner - it will depend on your business situation, application needs, other strategic choices, etc. - we can help you navigate and de-risk that decision space.<br />
<br />
<h3>
Software, People, Delivery</h3>
<br />
Perhaps the largest element of inertia in any substantial computational science effort is the <b>application software</b>. More than other components of the HPC ecosystem, application software (especially in-house software), persists for years, is expensive to evolve, and is subject to inertia from users, developers and funders. Yet, done right, application software can be one of the most capable elements in terms of creating a differentiation. Software must be part of the competitive landscape for HPC, along with how software interacts with the hardware and people choices.<br />
<br />
The relative investments in <b>people </b>versus hardware, in which types of people and how they are engaged, can make substantial differences to the competitive landscape.<br />
<br />
Finally, often forgotten when discussing HPC services, trends or competitive advantages, is the <b>business aspects of HPC</b> – how effectively HPC is delivered, measured, and used within the business. We are seeing growing demand for our training in this area (e.g., tutorials at SC), which is encouraging.<br />
<br />
<h3>
Finding the best HPC solution</h3>
<br />
Together, these elements provide a large multidimensional parameter space over which to seek meaningful differentiation in the use of HPC with respect to competitors.<br />
<br />
<b>HPC, especially in the context of competing, is best thought of not as IT – but as a powerful business/research tool that just happens to be built using IT components.</b><br />
<br />
This philosophy underlines why HPC must take different approaches compared with normal IT across technology planning, acquisition, user service delivery, operational practices, cost-value calculations, etc. It also explains why <i>HPC is never a commodity in a competitive sector – even if it employs mostly commodity components</i>.<br />
<br />
Whether the differentiation is in the HPC itself, or in the use of HPC-enabled results, or both, the optimum solution for one organization may be far from the optimum solution for another organization. Understanding the competitive possibilities, and creating the right edge over competitors through differentiated HPC is thus a critical part of any discussion of HPC planning, delivery and use.<br />
<br />
<i>To discuss this further contact me at <a href="http://twitter.com/hpcnotes">@hpcnotes</a> or <a href="http://www.linkedin.com/in/andrewjones">www.linkedin.com/in/andrewjones</a>, or use the comments box below.</i><br />
<i><br /></i>
Andrewhttp://www.blogger.com/profile/05974964640620611504noreply@blogger.com0tag:blogger.com,1999:blog-5376373852049844568.post-59969322382747668572017-07-31T20:04:00.000+01:002017-11-22T16:47:06.025+00:00HPC Getting More Choices - Technology Diversity<h2>
HPC has been easy for a while ...</h2>
<br />
When buying new workstations or personal computers, it is easy to adopt the simple mantra that a newer processor or higher clock frequency means your application will run faster. It is not totally true, but it works well enough. However, with High Performance Computing, HPC, it is more complicated.<br />
<br />
HPC works by using parallel computing – the use of many computing elements together. The nature of these computing elements, how they are combined, the hardware and software ecosystems around them, and the challenges for the programmer and user vary significantly – between products and across time. Since HPC works by bringing together many technology elements, the interaction between those elements becomes as important as the elements themselves.<br />
<br />
Whilst there has always been a variety of HPC technology solutions, there has been a strong degree of technical similarity of the majority of HPC systems in the last decade or so. This has meant that (i) code portability between platforms has been relatively easy to achieve and (ii) attention to on-node memory bandwidth (including cache optimization) and inter-node scaling aspects would get you a long way towards a single code base that performs well on many platforms.<br />
<br />
<h2>
Increase in HPC technology diversity</h2>
<br />
However, there is a marked trend of an increase in diversity of technology options over the last few years, with all signs that this is set to continue for the next few years. This includes breaking the near-ubiquity of Intel Xeon processors, the use of many-core processors for the compute elements, increasing complexity (and choice) of the data storage (memory) and movement (interconnect) hierarchies of HPC systems, new choices in software layers, new processor architectures, etc.<br />
<br />
This means that unless your code is adjusted to effectively exploit the architecture of your HPC system, your code may not run faster at all on the newer system.<br />
<br />
It also means HPC clusters proving themselves where custom supercomputers might have previously been the only option, and custom supercomputers delivering value where commodity clusters might have previously been the default.<br />
<br />
<a name='more'></a><br /><br />
Many-core processors, often referred to as accelerators, are processors that use a high degree of parallel processing within the chip – such as GPUs and Xeon Phi – and require different programming techniques to achieve the best performance. In the case of GPUs, a different language might be required (e.g., CUDA or OpenCL). It is likely that code written for one accelerator will be non-trivial to port and achieve good performance on another accelerator (maybe even when both accelerators are from the same family).<br />
<br />
Even away from accelerators, there are strong signs that credible competitors to Intel’s Xeon CPU family are back in play. One or more of AMD’s EPYC x86 processors, ARM architecture candidates, or IBM’s OpenPower could take market share from Intel. This adds a further portability and performance tuning challenge to programmers.<br />
<br />
<h2>
Not just CPUs and GPUs</h2>
<br />
The data storage hierarchy that an executing HPC code must consider now spans registers, multiple levels of on-chip cache (some shared, some dedicated), off-chip cache or local high speed memory, on-node memory, memory on remote nodes (maybe split into topologically near and far nodes!), fast storage layers (e.g., cache buffers), and disk. Between each level of this hierarchy is a substantial sharp step in capacity, bandwidth and latency (and cost). A good compiler might manage the registers and lower cache levels, whilst some parallelisation strategies might minimize the performance effects of the off-node memories. Use of libraries or software frameworks might help manage the hierarchies.<br />
<br />
However, the combination of processor diversity and data hierarchy means that software developers, and application buyers, must pay attention to the hardware details in a way that has not mattered this strongly for several years.<br />
<br />
The rules of this new game are often reduced to simple sounding ideals: expose more parallelism; avoid data movement; extra compute is cheaper than data movement; etc. The reality of effective implementation turns out to be somewhat harder in practice.<br />
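One way to see why "avoid data movement" dominates is a crude roofline-style lower bound: a kernel cannot run faster than the slower of its compute and its memory traffic. The peak figures below are invented round numbers for illustration, not any real machine.

```python
# Crude roofline-style bound with invented hardware figures.
PEAK_FLOPS = 2e12    # hypothetical 2 TFLOP/s node
PEAK_BW = 2e11       # hypothetical 200 GB/s memory bandwidth

def min_time(flops, bytes_moved):
    """Lower bound on runtime: limited by the slower of compute and data movement."""
    return max(flops / PEAK_FLOPS, bytes_moved / PEAK_BW)

# 1 flop per 8-byte word loaded: bandwidth bound (0.04 s moving data
# vs only 0.0005 s of compute).
print(min_time(1e9, 8e9))
# 100 flops per word over the same data: now compute bound (0.05 s).
print(min_time(1e11, 8e9))
```

Trading extra flops for fewer bytes moved pays off whenever it shifts a kernel off the bandwidth limit, which is the practical content of "extra compute is cheaper than data movement".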
<br />
<h2>
What increased HPC technology diversity means</h2>
<br />
The diversity is good, because it brings competition, which helps reduce prices, and - perhaps more importantly - drives continued innovation among the suppliers. The diversity is hard, because it adds decision risk in determining the optimum technology path for buyers, and adds complexity for software developers seeking portability and performance.<br />
<br />
Ultimately, technology diversity keeps HPC alive and fun - and it keeps our consulting business going, as we provide impartial expert advice on the technology choices and mitigating decision risk!<br />
<br />
Are you ready to seize the performance or cost opportunities? How are you managing the decision risk associated with finding the optimum technology path for you? Are you comfortable with the potential lost performance of not exploring the technology diversity?<br />
<br />
<i>To discuss this further contact me at <a href="http://twitter.com/hpcnotes">@hpcnotes</a> or <a href="http://www.linkedin.com/in/andrewjones">www.linkedin.com/in/andrewjones</a>, or use the comments box below.</i><br />
<i><br /></i>
Andrewhttp://www.blogger.com/profile/05974964640620611504noreply@blogger.com0tag:blogger.com,1999:blog-5376373852049844568.post-72825553630339055502017-07-11T15:08:00.000+01:002017-07-11T15:09:39.670+01:00SC17 Tutorials - HPC cost models, investment cases and acquisitionsFollowing our successful HPC tutorials at SC16 and <a href="https://2017oilgashpcconference.sched.com/list/descriptions/">OGHPC17</a>, I'm delighted to report that we've had three tutorials accepted for <a href="http://sc17.supercomputing.org/">SC17</a> in Denver this November, all continuing our mission to provide HPC training opportunities for HPC people other than just programmers.<br />
<br />
At SC17, we will be delivering these three tutorials:<br />
<ul>
<li>[Sun 12th, am] "<i>Essential HPC Finance: Total Cost of Ownership (TCO), Internal Funding, and Cost-Recovery Models</i>"</li>
<li>[Sun 12th, pm] "<i>Extracting Value from HPC: Business Cases, Planning, and Investment</i>"</li>
<li>[Mon 13th, am] "<i>HPC Acquisition and Commissioning</i>"</li>
</ul>
In a last minute bit of co-ordination, Sharan Kalwani will be following these with his related tutorial "<i>Data Center Design</i>" on Mon 13th pm.<br />
<br />
<h3>
Are these tutorials any good?</h3>
<br />
The HPC procurement tutorial was successfully presented at SC13 (>100 attendees) and SC16 (~60 attendees). Feedback from the SC16 attendees was very positive: scored 4.6/5 overall and<b> scored 2.9/3 for “<i>recommend to a colleague</i>”</b>.<br />
<br />
The HPC finance tutorial was successfully presented at SC16 (~60 attendees) and at the Rice Oil & Gas HPC conference 2017 (~30 attendees). Feedback from the SC16 attendees was very positive: scored 4.3/5 overall and <b>scored 2.7/3 for “<i>recommend to a colleague</i>”</b>.<br />
<br />
The HPC business case tutorial is new for SC17.<br />
<br />
<h3>
What is the goal of the tutorials?</h3>
<br />
The tutorials provide an impartial, practical, non-sales focused guide to the business aspects of HPC facilities and services (including cloud), such as total cost of ownership, funding models, showing value and securing investing in HPC, and the process of purchasing and deploying a HPC system. All tutorials include exploration of the main issues, pros and cons of differing approaches, practical tips, hard-earned experience and potential pitfalls.<br />
<br />
<h3>
What is in the tutorials?</h3>
<br />
<i>Essential HPC Finance Practice: Total Cost of Ownership (TCO), Internal Funding, and Cost-Recovery Models</i><br />
<ul>
<li>Calculating and using TCO models</li>
<li>Pros and cons of different internal cost recovery and funding models</li>
<li>Updated from the SC16 base, with increased consideration of cloud vs in-house HPC</li>
</ul>
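To give a flavour of what a TCO model involves, here is a deliberately oversimplified sketch with invented numbers; real models of the kind the tutorial covers include many more cost lines (facilities, networking, licences, refresh cycles, risk, etc.):

```python
# Deliberately oversimplified TCO sketch with invented numbers.

def cost_per_core_hour(capex, lifetime_years, power_kw, price_per_kwh,
                       staff_cost_per_year, cores, utilization):
    """Total cost of ownership divided by delivered core-hours."""
    hours = lifetime_years * 365 * 24
    total_cost = (capex
                  + power_kw * hours * price_per_kwh
                  + staff_cost_per_year * lifetime_years)
    delivered_core_hours = cores * hours * utilization
    return total_cost / delivered_core_hours

rate = cost_per_core_hour(capex=2_000_000, lifetime_years=4,
                          power_kw=100, price_per_kwh=0.12,
                          staff_cost_per_year=300_000,
                          cores=10_000, utilization=0.8)
print(f"~${rate:.4f} per delivered core-hour")
```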
<i>Extracting Value from HPC: Business Cases, Planning, and Investment</i><br />
<ul>
<li>Applicable to either a first investment or an upgrade of existing capability</li>
<li>Most relevant to organizations with a clear purpose (e.g., industry) or those with a clear service mission (e.g., academic HPC facilities)</li>
<li>Identifying the value, building a business case, engaging stakeholders, securing funding, requirements capture, market survey, strategic choices, and more</li>
</ul>
<i>HPC Acquisition and Commissioning</i><br />
<ul>
<li>Procurement process including RFP</li>
<li>Specify what you want, yet enable the suppliers to provide innovative solutions beyond the specification, in both technology and price</li>
<li>Bid evaluation, benchmarks, clarification processes</li>
<li>Demonstrate to stakeholders that the solution selected is best value for money</li>
<li>Contracting, project management, commissioning, acceptance testing</li>
</ul>
<br />
<h3>
Who are the tutors?</h3>
<br />
Me (<a href="http://www.nag.com/executive-management-team">Andrew Jones</a>, <a href="http://twitter.com/hpcnotes">@hpcnotes</a>), Owen Thomas (Red Oak Consulting), and Terry Hewitt. We have been involved in numerous major HPC procurements and other strategic HPC projects since 1990, as service managers, bidders to funding agencies, as customers and as impartial advisors. We are all from the UK but have worked around the world and the tutorials will be applicable to HPC projects and procurements anywhere. The tutorials are based on experiences across a diverse set of real world cases in various countries, in private and public sectors.<br />
<br />
<h3>
What if you need even more depth?</h3>
<br />
These SC17 tutorials will deliver a lot of content in each half day. However, if you need more depth, or a fuller range of topics, or are looking for a CV step towards becoming a future HPC manager, then our joint <a href="http://twitter.com/tacc">TACC</a>-<a href="http://twitter.com/nagtalk">NAG</a> summer training institute is the right thing for you: "<i><a href="https://www.top500.org/news/where-will-future-hpc-leaders-come-from/">Where will future HPC leaders come from?</a></i>"<br />
<br />
<br />
<br />
<b><i>Hope to see you at one (or more!) of our tutorials at SC17 this November in Denver.</i></b><br />
<b><i>@hpcnotes</i></b><br />
<br />
<br />Andrewhttp://www.blogger.com/profile/05974964640620611504noreply@blogger.com0tag:blogger.com,1999:blog-5376373852049844568.post-53109973170224554592017-06-28T16:37:00.002+01:002017-06-28T16:37:33.140+01:00Is cloud inevitable for HPC?In 2009, I wrote this article for HPC Wire: "<i><a href="https://www.hpcwire.com/2009/12/15/2009-2019_a_look_back_on_a_decade_of_supercomputing/">2009-2019: A Look Back on a Decade of Supercomputing</a></i>", pretending to look back on supercomputing between 2009 and 2019 from the perspective of beyond 2020.<br />
<br />
The article opens with the idea that owning your own supercomputer was a thing of the past:<br />
<blockquote class="tr_bq">
"<i>As we turn the decade into the 2020s, we take a nostalgic look back at the last ten years of supercomputing. It’s amazing to think how much has changed in that time. Many of our older readers will recall how things were before the official Planetary Supercomputing Facilities at Shanghai, Oak Ridge and Saclay were established. </i><i>Strange as it may seem now, each country — in fact, each university or company — had its own supercomputer!</i>"</blockquote>
I got this bit wrong:<br />
<blockquote class="tr_bq">
"<i>And then the critical step — businesses and researchers finally understood that their competitive asset was the capabilities of their modelling software and user expertise — not the hardware itself. Successful businesses rushed to establish a lead over their competitors by investing in their modelling capability — especially robustness (getting trustable predictions/analysis), scalability (being able to process much larger datasets than before) and performance (driving down time to solutions).</i>"</blockquote>
Hardware still matters - in some cases - as a means of gaining a competitive advantage in performance or cost [We help advise if that is true for our <a href="http://www.nag.com/content/high-performance-computing-consulting-and-services">HPC consulting customers</a>, and how to ensure the operational and strategic advantage is measured and optimized].<br />
<br />
And, of course, my predicted rush to invest in software and people hasn't quite happened yet.<br />
<br />
Towards the end, I predicted three major computing providers, from which most people got their HPC needs:<br />
<blockquote class="tr_bq">
"<i>We have now left the housing and daily care of the hardware to the specialists. The volume of public and private demand has set the scene for strong HPC provision into the future. We have the three official global providers to ensure consumer choice, with its competitive benefits, but few enough providers to underpin their business cases for the most capable possible HPC infrastructure.</i>"</blockquote>
Whilst my predictions were a little off in timing, some could be argued to have come true, e.g., the rise to the top of Chinese supercomputing, the increasing likelihood of using someone else's supercomputer rather than buying your own (even if we still call it cloud), etc.<br />
<br />
With the ongoing debate around cloud vs in-house HPC (where I am desperately trying to inject some impartial debate to balance the relentless and brash cloud marketing), re-visiting this article made an interesting trip down memory lane for me. I hope you might enjoy it too.<br />
<br />
As I recently posted on LinkedIn:<br />
<blockquote class="tr_bq">
"<i>Cloud will never be the right solution for everyone/every use case. Cloud is rightly the default now for corporate IT, hosted applications, etc. But, this cloud-for-everything is unfortunately, wrongly, extrapolated to specialist computing (e.g., high performance computing, HPC), where cloud won't be the default for a long time.</i></blockquote>
<blockquote class="tr_bq">
<i>For many HPC users, cloud is becoming a viable path to HPC, and very soon perhaps even the default option for many use cases. <b>But, cloud is not yet, and probably never will be, the right solution for everyone.</b> There will always be those who can legitimately justify a specialized capability (e.g., a dedicated HPC facility) rather than a commodity solution (i.e., cloud, even "HPC cloud"). The reasons for this might include better performance, specific operational constraints, lower TCO, etc. that only specialized facilities can deliver. </i></blockquote>
<blockquote class="tr_bq">
<i><b>The trick is to get an unbiased view for your specific situation</b>, and you should be aware that most of the commentators on cloud are trying to sell cloud solutions or related services, so are not giving you impartial advice!</i>"</blockquote>
[We provide that impartial advice on cloud, measuring performance, TCO, and related topics to our <a href="http://www.nag.com/content/high-performance-computing-consulting-and-services">HPC consulting customers</a>]<br />
<br />
<br />
<a href="http://twitter.com/hpcnotes">@hpcnotes</a><br />
<br />Andrewhttp://www.blogger.com/profile/05974964640620611504noreply@blogger.com0tag:blogger.com,1999:blog-5376373852049844568.post-90997783212282953782017-06-21T17:37:00.001+01:002017-06-21T17:42:27.622+01:00Deeply learning about HPC - ISC17 day 3 summary - Wednesday eveningFor most of the HPC people gathered in Frankfurt for <a href="http://twitter.com/ischpc">ISC17</a>, Wednesday evening marks the end of the hard work, the start of the journey home for some, already home for others. A few hardy souls will hang on until Thursday for the workshops. So, as you relax with a drink in Frankfurt, trudge through airports on the way home, or catch up on the week's emails, here's my final daily summary of ISC17, as seen through the lens of twitter, private conversations, and the HPC media.<br />
<div>
<br /></div>
<div>
This follows my highlights blogs from Monday "<i><a href="http://www.hpcnotes.com/2017/06/cutting-through-clutter-of-isc17-monday.html">Cutting through the ISC17 clutter</a></i>" (~20k views so far) and Tuesday "<i><a href="http://www.hpcnotes.com/2017/06/isc17-information-overload-tuesday.html">ISC17 information overload</a></i>" (~4k views so far).</div>
<div>
<br /></div>
<div>
So what sticks out from the last day, and what sticks out from the week overall?<br />
<div>
<br /></div>
<h2>
Deep Learning</h2>
<div>
Wednesday was touted by ISC as "deep learning day". If we follow the current convention (inaccurate but seemingly pervasive) of using deep learning, machine learning, AI (nobody actually spells out artificial intelligence), big data, data analytics, etc. as totally interchangeable terms (why let facts get in the way of good marketing?), then Wednesday was indeed deep learning day, judging by tweet references to one or more of the above. However, I struggle to nail down exactly what I am supposed to have learnt about HPC and deep learning from today's content. Perhaps you had to be there in person (there is a reason why attending conferences is better than watching via twitter).</div>
</div>
<div>
<br /></div>
<div>
I think my main observations are:</div>
<div>
<ul>
<li>DL/ML/AI/BigData/analytics/... is a real and growing part of the HPC world - both in terms of "traditional" HPC users looking at these topics, and new users from these backgrounds peering into the HPC community to seek performance advantages.</li>
<li>A huge proportion of the HPC community doesn't really know what DL/ML/... actually means in practice (which software, use case, workflow, skills, performance characteristics, ...).</li>
<li>It is hard to find the reality behind the marketing of DL/ML/... products, technologies, and "success stories" of the various vendors. But, hey, what's new? - I was driven to deal with this issue for GPUs and cloud in my recent webinar "<i><a href="http://www.nag.com/content/webinar-dissecting-myths-cloud-gpus-hpc">Dissecting the myths of Cloud and GPUs for HPC</a></i>".</li>
<li>Between all of the above, I still feel there is a huge opportunity being missed: for users in either community and for the technology/product providers. I don't have the answers though.</li>
</ul>
</div>
<div>
<br /></div>
<h2>
Snippets</h2>
<div>
Barcelona (BSC) has joined other HPC centers (e.g., Bristol <a href="http://gw4.ac.uk/isambard/">Isambard</a>, Cambridge <a href="http://hpc-sig.org.uk/wp-content/blogs.dir/sites/63/2017/02/2017_02_09_tier2_Peta-5.pdf">Peta5</a>, ...) in buying a bit of everything to explore the technology diversity for future HPC systems: "<a href="https://www.top500.org/news/new-marenostrum-supercomputer-reflects-processor-choices-confronting-hpc-users/"><i>New MareNostrum Supercomputer Reflects Processor Choices Confronting HPC Users</i></a>".</div>
<div>
<br /></div>
<div>
Exascale is now a world-wide game: China, European countries, USA, Japan are all close enough to start talking about how they might get to exascale, rather than merely visions of wanting to get there.</div>
<div>
<br /></div>
<div>
People are on the agenda: growing the future HPC talent, e.g., the <a href="https://www.top500.org/news/isc-launches-first-stem-student-day-aims-to-bridge-skills-gap/">ISC STEM Student Day & Gala</a>, the <a href="http://www.studentclustercomp.com/">Student Cluster Competition</a>, gender diversity (<a href="https://www.womeninhpc.org/">Women-in-HPC</a> activities), and more.</div>
<div>
<br /></div>
<h2>
Wrapping up</h2>
<div>
There are some parts of ISC that have been repeated over the years due to demand. Thomas Sterling's annual "<i><a href="http://www.isc-hpc.com/isc17_ap/presentationdetails.htm?t=presentation&o=1026&a=select&ra=sessiondetails">HPC Achievement & Impact</a></i>" keynote that traditionally closes ISC (presenting as I write this) is an excellent session and goes a long way towards justifying the technical program registration fee.</div>
<div>
<br /></div>
<div>
2017 sees the welcome return of <a href="http://twitter.com/addisonsnell">Addison Snell</a>'s "<i><a href="http://www.isc-hpc.com/isc17_ap/sessiondetails.htm?t=session&o=567&a=select&ra=index">Analyst Crossfire</a></i>". With a great selection of questions, fast pace, and well chosen panel members, this is always a good event. Of course, I am biased towards the <a href="https://insidehpc.com/2011/06/video-isc11-analyst-crossfire/">ISC11 Analyst Crossfire</a> being the best one!</div>
<div>
<br /></div>
<div>
I'll join Addison's fun with my "one up, one down" for ISC17. Up is <a href="http://twitter.com/cscsch">CSCS</a>, not merely for Piz Daint knocking the USA out of the top 3 of the <a href="http://www.top500.org/">Top500</a>, but for a sustained program of supercomputing over many years, culminating in this leadership position. Down is Intel - it brought a decent CPU to market in Skylake but got backlash over pricing, faces uncertainty over the CORAL Aurora project, and, in spite of a typically high-profile presence at the show, saw re-emerging rival AMD take a good share of the twitter & press limelight with EPYC.</div>
<div>
<br /></div>
<div>
<br /></div>
<h2>
Until next time</h2>
<div>
That's all from me for ISC17. I'll be back with more blogs over the next few weeks, based on my recent conference talks (e.g., "<i>Six Trends in HPC for Engineers</i>" and "<i>Measuring the Business Impact of HPC</i>").</div>
<div>
<br /></div>
<div>
You can catch up with me in person at the SEG Annual Meeting, EAGE HPC Workshop (I'm presenting), the <a href="https://www.tacc.utexas.edu/education/institutes/hpc-concepts-for-managers">TACC-NAG Training Institute for Managers</a>, and SC17 (I can reveal we will be delivering tutorials again, including a new one - more details soon!).</div>
<div>
<br /></div>
<div>
In the meantime, interact with me on twitter <a href="http://twitter.com/hpcnotes">@hpcnotes</a>, where I provide pointers to key HPC content, plus my comments and opinions on HPC matters (with a bit of F1 and travel geekery thrown in for fun).</div>
<div>
<br /></div>
<div>
Safe travels,</div>
<div>
<a href="http://twitter.com/hpcnotes">@hpcnotes</a>.</div>
<div>
<br /></div>
Andrewhttp://www.blogger.com/profile/05974964640620611504noreply@blogger.com0tag:blogger.com,1999:blog-5376373852049844568.post-43618300652352060792017-06-20T15:05:00.000+01:002017-06-21T12:59:20.764+01:00ISC17 information overload - Tuesday afternoon summaryI hope you've been enjoying a productive <a href="http://twitter.com/ischpc">ISC17</a> if you are in Frankfurt, or if not have been able to <a href="http://www.hpcnotes.com/2017/06/how-to-keep-up-with-hpc-news-from-isc17.html">keep up with the ISC17 news flow</a> from afar.<br />
<br />
My ISC17 highlights blog post from yesterday ("<i><a href="http://www.hpcnotes.com/2017/06/cutting-through-clutter-of-isc17-monday.html">Cutting through the clutter of ISC17: Monday lunchtime summary</a></i>") seems to have collected over 11,000 page-views so far. Since this <a href="http://www.hpcnotes.com/">hpcnotes blog </a>normally only manages several hundred to a few thousand page views per post, I'm assuming a bot somewhere is inflating the stats. However, there are probably enough real readers to make me write another one. So here goes - my highlights of ISC17 news flow as of Tuesday mid-afternoon.<br />
<br />
<a name='more'></a><br /><br />
<h3>
Processor Competition</h3>
I understand that one active conversation topic at ISC17 is the increasing competition in the HPC processor space. Intel has enjoyed a dominance of market share with a very capable Xeon line recently (and has priced accordingly), but alternative processors that challenge Xeon on performance, price/performance, etc. are here now or arriving soon. The list includes AMD EPYC, IBM POWER, Cavium ThunderX2 (ARM based), plus GPUs from NVIDIA (Pascal or Volta) and AMD (Vega), plus Intel's own Xeon Phi (KNL). Tim Prickett-Morgan of The Next Platform looks at the CPU and GPU competition coming from AMD here: "<i><a href="https://www.nextplatform.com/2017/06/19/amd-winds-one-two-compute-punch-servers/">AMD Winds Up One-Two Compute Punch For Servers</a></i>".<br />
<br />
I'm hearing that customers are considering non-Xeon based HPC deployments more seriously than at any time in the past few years, including a willingness to invest in the HPC software engineering effort often required to adopt new technologies. There has certainly been notably increased demand for <a href="http://www.nag.com/content/high-performance-computing-consulting-and-services">NAG's HPC software engineering services</a> to help customers test and port to KNL and GPUs over recent months.<br />
<br />
<h3>
Who has been noisy, who has been quiet?</h3>
It's only Tuesday, so there is more to come yet, and impressions on the floor will be different from my view from afar.<br />
<br />
But, from what I can see from public activity at ISC17 and some private feedback, <a href="http://twitter.com/hpe">HPE</a> has been making a strong claim to be seen as a supercomputing company (again?) (i.e., not just a server company). This is helped by the SGI pedigree now within the HPE walls, including the always inspiring <a href="http://twitter.com/englimgoh">Eng Lim Goh</a> - "<i><a href="https://insidehpc.com/2017/06/dr-eng-lim-goh-hpes-recent-pathforward-award-exascale-computing/">Dr. Eng Lim Goh on HPE’s Recent PathForward Award for Exascale Computing</a></i>" - video interview with <a href="http://twitter.com/insidehpc">InsideHPC</a>. We shouldn't pretend HPE has been missing from HPC for years - e.g., HPE has several large HPC systems in aerospace and oil & gas - but it is fair to say HPE has not been a loud voice in the high end of the HPC community until recently.<br />
<br />
The usual suspects are also being noisy: <a href="http://twitter.com/nvidiadc">NVIDIA</a> (especially around their new fav topic of deep learning and, in my view, perhaps not enough noise about recent HPC successes); <a href="http://twitter.com/intelhpc">Intel</a> (SKL, KNL, OmniPath, cloud, ... the machine rolls on); <a href="http://twitter.com/cray_inc">Cray</a> (<a href="https://www.top500.org/news/new-zealand-to-join-petaflop-club/">new petascale supercomputers in New Zealand</a>, a <a href="https://www.markleygroup.com/events/2017-06/markley-attend-isc-high-performance-frankfurt-germany">Supercomputer-as-a-Service with Markley</a>, and <a href="https://www.top500.org/news/cray-brings-graph-analytics-deep-learning-capabilities-to-xc-supercomputers/">big data software for XC</a>); and <a href="http://twitter.com/atos">Atos</a> (<a href="https://www.hpcwire.com/off-the-wire/genci-boost-industrial-innovation-new-petascale-supercomputer/">9 petaflops supercomputer at GENCI</a>, <a href="https://www.top500.org/news/atos-reveals-first-commercial-arm-based-supercomputer/">new ARM based Sequana product</a>).<br />
<br />
The HPC market's own two analyst firms, Hyperion Research (formerly IDC) and InterSect360, have held (separate!) briefing sessions at ISC17 this week, updating their forecasts and analysis for the roughly $40bn HPC market and related sectors such as deep learning, hyperscale, etc. See "<i><a href="https://www.hpcwire.com/2017/06/20/hyperion-deep-learning-ai-helping-drive-healthy-hpc-industry-growth/">Hyperion: Deep Learning, AI Helping Drive Healthy HPC Industry Growth</a></i>" for HPC Wire's coverage of the Hyperion forecasts, and <a href="http://www.intersect360.com/industry/reports.php?id=148">here for InterSect360's forecasts</a>.<br />
<br />
Who's been quiet? Well, either the media haven't got to publishing the stories yet, or the twitter accounts are muted, or there isn't much to say - but the usual raft of HPC center stories have been scarce so far this year. Clearly <a href="http://twitter.com/cscsch">CSCS</a> is getting attention for the number 3 spot on the Top500 list with the upgraded Piz Daint, <a href="https://insidehpc.com/2017/06/tacc-hosts-hpc-managers-institute-austin/">TACC has announced the HPC Training Institute for Managers jointly with NAG</a>, and a few others have issued stories, but I feel it has been quiet so far.<br />
<br />
<h3>
Datacenters</h3>
One thing that caught my eye, and that has inspired me to look out for similar stories, was this announcement: "<a href="https://verneglobal.com/news/news-verne-global-sets-strategic-roadmap-to-manage-advanced-computing-requirements">Verne Global Sets Strategic Roadmap to Manage Advanced Computing Requirements</a>". Together with the Cray-Markley offering, this made me wonder if we will see growth of offerings in this "middle ground" segment between in-house HPC and HPC-in-the-cloud. One to watch, I think.<br />
<br />
<h3>
Finally, ...</h3>
I'm not attending ISC17 myself, but several of my <a href="http://twitter.com/nagtalk">NAG</a> colleagues are - please do stop by our booth J-616 and assure them I am thinking of them and their sore feet :-)<br />
<br />
More tomorrow, until then, enjoy ISC17 and tonight's networking receptions.<br />
<br />
<a href="http://twitter.com/hpcnotes">@hpcnotes</a>.<br />
<br />
<br />Andrewhttp://www.blogger.com/profile/05974964640620611504noreply@blogger.com0tag:blogger.com,1999:blog-5376373852049844568.post-85511035031888973172017-06-19T13:26:00.003+01:002017-06-21T19:24:33.383+01:00How to keep up with the HPC news from ISC17Overwhelmed by the HPC information pouring out of <a href="http://isc-hpc.com/">ISC17</a>? Twitter, press releases, media stories, exhibitors, presentations, etc.? How to keep up?<br />
<br />
<h2>
Twitter</h2>
<ul>
<li><a href="http://twitter.com/ischpc">@ischpc</a> - the official ISC stream</li>
<li><a href="http://twitter.com/hpc_guru">@HPC_Guru</a> - the anonymous tweeting wonder that feeds the HPC community's appetite for news, and adds targeted comments</li>
<li><a href="http://twitter.com/hpcnotes">@hpcnotes</a> (me) - a subset of @hpc_guru's stream, plus my own extra snippets and opinion</li>
</ul>
The above three will get you most of what you need (in my opinion!), but you can gain useful additional information by exploring whom they interact with throughout ISC17.<br />
<br />
If you are a glutton, then follow <a href="http://twitter.com/search?=#isc17">#ISC17</a>.<br />
<br />
I'll update the above list throughout ISC17 if other tweeters become key commentators, but you might also find this (mildly out of date) <a href="http://www.hpcnotes.com/2013/10/essential-guide-to-hpc-on-twitter.html">list of HPC twitter accounts </a>handy.<br />
<br />
<h2>
HPC Notes</h2>
Of course, I would say the most essential method is reading my ISC17 summary blogs!<br />
<ul>
<li>Mon Jun 19th - "<i><a href="http://www.hpcnotes.com/2017/06/cutting-through-clutter-of-isc17-monday.html">Cutting through the HPC clutter of ISC17: Monday lunchtime summary</a></i>"</li>
<li>Tue Jun 20th - "<a href="http://www.hpcnotes.com/2017/06/isc17-information-overload-tuesday.html"><i>ISC17 information overload - Tuesday afternoon summary</i></a>"</li>
<li>Wed Jun 21st - "<i><a href="http://www.hpcnotes.com/2017/06/deeply-learning-about-hpc-isc17-day-3.html">Deeply learning about HPC - ISC17 day 3 summary - Wednesday evening</a></i>"</li>
</ul>
<div>
<br /></div>
<h2>
Media</h2>
<div>
If you prefer commentary and press releases from the main HPC media, then here are the main options:</div>
<div>
<br /></div>
<ul>
<li><a href="http://www.top500.org/">Top500.org</a> - your first port of call for the main announcements and editor Michael Feldman's analysis</li>
<li><a href="http://www.nextplatform.com/">The Next Platform</a> - in depth analysis of the stories behind the press releases from Nicole Hemsoth and Timothy Prickett-Morgan</li>
<li><a href="http://www.insidehpc.com/">InsideHPC</a> - a selection of announcements, plus audio/video news and interviews from the show floor, by Rich Brueckner</li>
<li><a href="http://www.hpcwire.com/">HPC Wire</a> - the most comprehensive list of HPC press releases, with other articles by Tiffany Trader and Doug Black</li>
</ul>
Happy reading!<br />
<br />
<br />
<br />
<br />Andrewhttp://www.blogger.com/profile/05974964640620611504noreply@blogger.com0tag:blogger.com,1999:blog-5376373852049844568.post-48151389876887932302017-06-19T12:52:00.002+01:002017-06-21T12:59:34.237+01:00Cutting through the clutter of ISC17: Monday lunchtime summary<a href="http://isc-hpc.com/">ISC</a>, the HPC community's 2nd biggest annual gathering, in fully underway in Frankfurt now. ISC week is characterized by a vibrant twitter flood (<a href="https://twitter.com/search?q=%23isc17">#ISC17</a>), topped up with a deluge of press releases (a small subset of which are actually news), plus a plethora of news and analysis pieces in the HPC media. And, of course, anyone physically present at ISC, has presentations, meetings, and exhibitors further demanding their attention.<br />
<br />
I go to ISC almost every year. It is a valuable use of time for anyone in the HPC community, or anyone who uses or has an interest in HPC, even if they don't see themselves as part of the HPC community. However, I have decided not to attend ISC this year, due to other commitments. Instead, <b>I will keep an eye on the "news" throughout the week and post a handful of summary blogs (like this one), which might be a useful catch-up on "news" so far, whether you are attending ISC or watching from afar</b>.<br />
<br />
<a name='more'></a><br /><br />
<h3>
The 500 fastest supercomputers (with many caveats ...)</h3>
The main news on day 1 (Monday) is always the release of the latest <a href="http://www.top500.org/">Top500</a> list of supercomputers. Or, at least, it seems to generate the most twitter comments, and most media articles on the first day. Sometimes the Top500 list has some big interest stories (e.g., a new Number 1 on the list). What about this year's ISC Top500 release?<br />
<br />
<a href="https://twitter.com/cscsch">CSCS</a> in Switzerland got a truck full of GPUs, lifting their <a href="http://twitter.com/cray_inc">Cray </a>supercomputer Piz Daint to the No. 3 spot on the list. In other Top500 news, every company or HPC center is now leading the Top500 (providing they are allowed to define their own metric or subset such that they "win").<br />
<br />
Please read various summaries and analysis of the Top500 news here:<br />
<ul>
<li>"<a href="https://www.top500.org/news/top500-list-refreshed-us-edged-out-of-third-place/"><i>TOP500 List Refreshed, US Edged Out of Third Place</i></a>" - Top500.org</li>
<li>"<a href="https://www.nextplatform.com/2017/06/19/hpc-poised-big-changes-top-bottom/"><i>HPC Poised For Big Changes, Top To Bottom</i></a>" - The Next Platform</li>
<li>"<a href="https://insidehpc.com/2017/06/radio-free-hpc-runs-latest-top500/"><i>Radio Free HPC Runs Down the Latest TOP500</i></a>" - InsideHPC (audio)</li>
<li>"<a href="https://www.hpcwire.com/2017/06/19/49th-top500-list-announced-isc/"><i>49th Top500 List Announced at ISC</i></a>" - HPC Wire</li>
</ul>
<div>
ISC17 also saw the announcement of many other x500 lists (Green500, HPCG500, Graph500, Green Graph500, IO500, ...) - I wonder if this is getting a bit silly. But in the spirit of joining in, I propose the HPCpeople500 (ranked by how many HPC staff a site has), the MW500 (ranked by power consumed by HPC systems at a site: more MW = higher on list), and the Codes500 (ranked by how many supported codes are installed on the system).</div>
<div>
<br /></div>
<h3>
Other news (apart from the Top500)</h3>
<div>
ISC itself continues to set records - <a href="https://insidehpc.com/2017/06/isc-2017-announces-record-attendance/">3200 attendees from 60 countries</a>.</div>
<div>
<br /></div>
<div>
A splurge of product announcements have been made (press releases). See <a href="https://www.hpcwire.com/">HPC Wire "off the wire"</a> for the best list of most of the press releases, <a href="http://www.insidehpc.com/">InsideHPC "Recent News"</a> for a selection of the press releases and other show news, and <a href="http://www.top500.org/">Top500 "News"</a> for a selection of the announcements. I'll present my own selection of the most important ISC announcements, along with my comments, in a later blog today.</div>
<div>
<br /></div>
<div>
One likely discussion point at ISC will be around which processor option is best. Intel offers Skylake (SKL) and Knights Landing (KNL), AMD suggests EPYC and Vega (GPU), NVidia insists on P100 or maybe even V100 GPUs, IBM whispers Power, ARM promises various options (e.g., Cavium ThunderX2).<br />
<br />
Some analysis of the KNL vs SKL options has been published by <a href="https://www.nextplatform.com/2017/06/19/knights-landing-can-stand-alone-often-wont/">The Next Platform: "<i>Knights Landing Can Stand Alone—But Often Won’t</i>"</a>.<br />
<br />
I raised some cautions on this debate in my recent webinar <a href="http://www.nag.com/content/webinar-dissecting-myths-cloud-gpus-hpc">"<i>Dissecting the myths of Cloud and GPUs for HPC</i>" (register here for a recorded video)</a>.</div>
<div>
<br /></div>
<div>
Enjoy the afternoon at ISC - I'll be back with more summary and comment later. In the meantime, you can follow my thoughts on <a href="http://twitter.com/hpcnotes">twitter: @hpcnotes</a>.<br />
<div>
<br /></div>
<b>>> see the day 2 summary here: "<i><a href="http://www.hpcnotes.com/2017/06/isc17-information-overload-tuesday.html">ISC17 information overload - Tuesday afternoon summary</a></i>".</b><br />
<br />
<br /></div>
<div>
<br /></div>
Andrewhttp://www.blogger.com/profile/05974964640620611504noreply@blogger.com0