Last month, HPC experts from around the globe – Europe, Japan, Australia, and the Americas – gathered at the Computer History Museum in Mountain View, California, for the 2015 PBS User Group. It’d be impossible to capture all the presentations and discussions, or give you a real sense of the venue, in a single blog post, so I’m going to highlight a few of my favorites. For full videos of the presentations, check out InsideHPC’s coverage of the event.
In his keynote address, Professor Satoshi Matsuoka makes a compelling case for how HPC improves people’s lives: from improving medical care by simulating blood flow, to keeping cities cool and safe by simulating airflow and heat around skyscrapers, to reducing water waste by… well, you’ll have to watch the video if you want to know what application needs a supercomputer to reduce water use (about 16:33 in the InsideHPC video). Professor Matsuoka also covers the evolution of the Tsubame supercomputer, from being the first system with GPUs on the TOP500 list to the plans for Tsubame 3, touching on the team’s pioneering work in scheduling, virtualization, and power management.
The sponsor panel – thanks again SGI, Intel, HP, and Cray – focused on the recently announced NSCI executive order. For obvious reasons, all of the participants were excited by the long-term ramifications: not just delivering an exascale machine, but also the broader focus on improving software, educating the workforce, accelerating standards, and enabling data discovery.
“In industry [HPC] has a very high ROI; for every dollar you spend, you get $500 back within two years.” Thomas Leung (GE Global Research) quotes IDC, and then points out that while you don’t necessarily see industrial HPC on the TOP500, there are a lot of big HPC systems in industry; because HPC confers a competitive advantage, companies keep them a secret. Leung then dives into the differences between industrial HPC and research HPC, from scheduling policies to how systems are funded over time.
Michael Thompson (Wayne State University) reminded everyone that the goal is not to have the newest, shiniest, most powerful hardware, but to deliver real HPC services to real users. Thompson’s analysis of whole-lifecycle costs pits buying new hardware against the cloud. At first blush, the bursty Wayne State workload looks like a great fit for the cloud, but Thompson goes on to do something I personally have not seen before – he adds buying used hardware to the equation, and drops the annual cost per core by more than 5x while still delivering what his users need.
During the Product Manager panel, I asked the audience, “What about HPC on Windows?” One hand went up. When I asked this question a little more than a year ago (at the last PBS User Group), a few hands went up. My take: Windows is neither growing much nor shrinking much – it remains important, and it remains a small percentage of the overall HPC workload.
On a personal level, it was a real pleasure to be at an event with such positive energy and a near-total absence of complaints. From me, a personal “thank you” to the whole PBS Works engineering team for taking such good care of our users (and making my job easier).