
Friday 13 November 2020

Name that supercomputer 2 (Quiz)

It's been a long time since I did an HPC quiz, so here is one to keep some fun in these odd times. Can you name these supercomputers?

I'm looking for actual machine names (e.g. 'Fugaku') and the host site (e.g. RIKEN CCS). Bonus points for the machine details (e.g. Fujitsu A64FX).

Submit your guesses or knowledgeable answers either through the comments field below, or to me on Twitter (@hpcnotes).

Answers will be revealed once there have been enough guesses to amuse me. Have fun!


  1. Maybe it's Italian style, but this oily system has a purely descriptive name, a bit like the name of a robot with a short circuit.

  2. In spite of the name, this one is a step away from the very top.

  3. The seven daughters of Atlas.

  4. Arising from a beautiful reef, this top supercomputer is named after one of my co-presenters at my SC19 tutorial (or so we think).

  5. This border system's owner often tells how it was renamed in planning due to a bigger, newer super that took its original name.

  6. It has no name, at least not publicly, and the operator has not been open with full details, but with 10,000 GPUs it can do a lot of AI.

  7. On the road to exascale, but not there yet, this system will be housed next year in a chilly northern European location, and shares a similar architecture with two of the first exascale systems.

  8. A chicken with green-ish / brown-ish eyes. Or is it a type of nut?

  9. In a rare move, this number 9 is named after a living scientist, actually one of its users.

  10. Sing a song for this one, because it is named to be hit hard.


Wednesday 22 November 2017

Benchmarking HPC systems

At SC17, we celebrated the 50th edition of the Top500 list. With nearly 25,000 list positions published over 25 years, the Top500 is an incredibly rich database of consistently measured performance data with associated system configurations, sites, vendors, etc. Each SC and ISC, the Top500 feeds community gossip, serious debate, the HPC media, and ambitious imaginations of HPC marketing departments. Central to the Top500 list is the infamous HPL benchmark.

Benchmarks are used to answer questions such as (naively posed): “How fast is this supercomputer?”, “How fast is my code?”, “How does my code scale?”, “Which system/processor is faster?”.

In the context of HPC, benchmarking means the collection of quantifiable data on the speed, time, scalability, efficiency, or similar characteristics of a specific combination of hardware, software, configuration, and dataset. In practice, this means running well-understood test case(s) on various HPC platforms/configurations under specified conditions or rules (for consistency) and recording appropriate data (e.g., time to completion).
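
To make the "in practice" part concrete, here is a minimal sketch (in Python) of the kind of timing harness this implies. The command line, process count, and input file below are purely illustrative placeholders, not a real application or system configuration; the point is simply to fix the conditions, repeat the run, and record time to completion.

    #!/usr/bin/env python3
    # Minimal benchmarking harness sketch: run one well-understood test case
    # several times under fixed conditions and record time to completion.
    import statistics
    import subprocess
    import time

    # Hypothetical test case: an MPI launch of an application with a fixed
    # input deck. Substitute your own command, process count, and input.
    COMMAND = ["mpirun", "-np", "128", "./my_app", "input_case_A.dat"]
    REPEATS = 5  # repeat to expose run-to-run variability

    timings = []
    for run in range(REPEATS):
        start = time.perf_counter()
        subprocess.run(COMMAND, check=True)  # wait for the run to complete
        elapsed = time.perf_counter() - start
        timings.append(elapsed)
        print(f"run {run + 1}: {elapsed:.2f} s")

    print(f"best: {min(timings):.2f} s, median: {statistics.median(timings):.2f} s")

Reporting both the best and the median time is a simple way to separate what the system can do from run-to-run noise.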

These test cases may be full application codes, or subsets of those codes with representative performance behaviour, or standard benchmarks. HPL falls into the latter category, although for some applications it could fall into the second category too. In fact, this is the heart of the debate over the continued relevance of the HPL benchmark for building the Top500 list: how many real-world applications does it provide a meaningful performance guide for? But, even moving away from HPL to “user codes”, selecting a set of benchmark codes is as much a political choice (e.g., reflecting stakeholders) as it is a technical choice.

Tuesday 20 June 2017

ISC17 information overload - Tuesday afternoon summary

I hope you've been enjoying a productive ISC17 if you are in Frankfurt, or if not have been able to keep up with the ISC17 news flow from afar.

My ISC17 highlights blog post from yesterday ("Cutting through the clutter of ISC17: Monday lunchtime summary") seems to have collected over 11,000 page-views so far. Since this hpcnotes blog normally only manages several hundred to a few thousand page views per post, I'm assuming a bot somewhere is inflating the stats. However, there are probably enough real readers to make me write another one. So here goes - my highlights of ISC17 news flow as of Tuesday mid-afternoon.

Monday 19 June 2017

Cutting through the clutter of ISC17: Monday lunchtime summary

ISC, the HPC community's 2nd biggest annual gathering, is now fully underway in Frankfurt. ISC week is characterized by a vibrant Twitter flood (#ISC17), topped up with a deluge of press releases (a small subset of which are actually news), plus a plethora of news and analysis pieces in the HPC media. And, of course, anyone physically present at ISC has presentations, meetings, and exhibitors further demanding their attention.

I go to ISC almost every year. It is a valuable use of time for anyone in the HPC community, and for anyone who uses or has an interest in HPC even if they don't see themselves as part of that community. However, I have decided not to attend ISC this year due to other commitments. I will still keep an eye on the "news" throughout the week and post a handful of summary blogs (like this one), which might be a useful catch-up on the "news" so far, whether you are attending ISC or watching from afar.

Monday 9 November 2015

SC15 Preview

SC15 - the biggest get-together of the High Performance Computing (HPC) world - takes place next week in Austin, TX. Around 10,000 buyers, users, programmers, managers, business development people, funders, researchers, media, etc. will be there.

With a large technical program, an even larger exhibition, and plenty of associated workshops, product launches, user groups, etc., SC15 will dominate the world of HPC for a week, plus most of this week leading up to it. It is one of the best ways for HPC practitioners to share experiences, learn about the latest advances, and build collaborations and business relationships.

So, to whet your appetites, here is the @hpcnotes preview of SC15 - what I think might be the key topics, things to look out for, what not to miss, etc.

New supercomputers

It's always one of the aspects of SC that grabs the media and attendee attention the most. Which big new supercomputers will be announced? Will there be a new occupier of the No.1 spot on the Top500 list? Usually I have some idea of what new supercomputers are coming up before they are public, but this year I have no idea. My guess? No new No.1. A few new Top20 machines. So which one will win the news coverage?

New products

In spite of the community repeatedly acknowledging that the whole system is important - memory, interconnect, I/O, software, architecture, packaging, etc. - judging by the media attention and informal conversations, we still seem to get most excited by the processors.

Monday 10 June 2013

China supercomputer to be world's fastest (again) - Tianhe-2

It seems that China's Tianhe-2 supercomputer will be confirmed as the world's fastest supercomputer in the next Top500 list, to be revealed at the ISC'13 conference next week.

I was going to write about the Chinese Tianhe-2 supercomputer and how it matters to the USA and Europe - then I found these old blog posts of mine:


Thursday 20 December 2012

A review of 2012 in supercomputing - Part 2

This is Part 2 of my review of the year 2012 in supercomputing and related matters.

In Part 1 of the review I re-visited the predictions I made at the start of 2012 and considered how they became real or not over the course of the year. This included cloud computing, Big Data (mandatory capitalization!), GPU, MIC, and ARM - and software innovation. You can find Part 1 here: http://www.hpcnotes.com/2012/12/a-review-of-2012-in-supercomputing-part.html.

Part 2 of the review looks at the themes and events that emerged during the year. As in Part 1, this is all thoroughly biased, of course, towards things that interested me throughout the year.

The themes that stick out in my mind from HPC/supercomputing in 2012 are:
  • The exascale race stalls
  • Petaflops become "ordinary"
  • HPC seeks to engage a broader user community
  • Assault on the Top500

The exascale race stalls

The global race towards exascale supercomputing has been a feature of the last few years. I chipped in myself at the start of 2012 with a debate on the "co-design" mantra.

Confidently tracking the Top500 trend lines, the HPC community had pinned 2018 as the inevitable arrival date of the first supercomputer with a peak performance in excess of 1 exaflops. [Note the limiting definition of the target - loosely coupled computing complexes with aggregate capacity greater than exascale will probably turn up before the HPC machines - and peak performance in FLOPS is the metric here - not application performance or any assumptions of balanced systems.]

Some more cautious folk hedged a delay into their arrival dates and talked about 2020. However, it became apparent throughout 2012 that the US government did not have the appetite (or political support) to commit to being the first to deploy an exascale supercomputer. Other regions of the world have - like the US government - stated their ambitions to be among the leaders in exascale computing. But no government has yet stood up and committed to a timetable, nor to being the first to get there. Critically, nor has anyone committed the R&D funding needed now to develop the technologies [hardware and software] that will make exascale supercomputing viable.

The consensus at the end of 2012 seems to be towards a date of 2022 for the first exascale supercomputer - and there is no real consensus on which country will win the race to have the first exascale computer.

Perhaps we need to re-visit our communication of the benefits of more powerful supercomputers to the wider economy and society (what is the point of supercomputers?). Communicating the value to society and describing the long-term investment requirements is always a fundamental need of any specialist technology, but it becomes crucial during the testing fiscal conditions (and thus political pressures) that governments face right now.