BID '20: Proceedings of the Workshop on Benchmarking in the Datacenter


Benchmarking in the datacenter (BID) 2020: workshop summary

The workshop, held in beautiful San Diego, consisted of two submitted papers and two invited talks.

The first paper presentation, "Power Modeling for Phytium FT-2000+ Multi-core Architecture" by Zhixin Ou, a PhD student at the National University of Defense Technology, was streamed from Hunan, China, due to travel difficulties. Her talk described the first software-based power model for the new Phytium FT-2000+/64 ARM platform, evaluated using the HPCC benchmarks. The model's predictions were compared with real power measurements to demonstrate the accuracy of the approach.

The first invited talk, "Benchmarking Deep Learning Workloads on Large-Scale HPC Systems" by Ammar Awan, a PhD student at Ohio State University, covered machine learning and deep learning benchmarks. He described the parallel communication used in distributed deep learning for training image recognition networks and how it differs from typical high-performance-computing communication patterns. He identified reproducibility and benchmarks for new applications of deep learning as important directions for future work.

The second invited presentation was given by Mahidhar Tatineni, the user support lead at the San Diego Supercomputer Center (SDSC), and addressed the "Evolution of Benchmarking on SDSC Systems". It was an interesting talk that gave an overview of platforms and benchmarks at the supercomputing center closest to the conference venue, and the speaker captured all aspects of this evolution over his 15 years of devoted work at SDSC.

The final paper presentation, "The ESIF-HPC-2 Benchmark Suite", was given by Christopher Chang, who works in the HPC Applications group at the National Renewable Energy Laboratory (NREL) in Denver. He described the benchmark suite he and his team developed for the procurement of the most recent NREL supercomputer. He presented a set of dimensions that are useful for classifying benchmarks and for systematically assessing their coverage of performance measures. The suite is released as open-source software on GitHub.

The program committee for the workshop was composed of:

• David Bailey (Lawrence Berkeley National Laboratory and University of California, Davis)

• Valeria Bartsch (Fraunhofer ITWM)

• Ben Blamey (Uppsala University)

• Rodrigo N. Calheiros (Western Sydney University)

• Anando Chatterjee (Indian Institute of Technology Kanpur)

• Juan Chen (National University of Defense Technology)

• Paweł Czarnul (Gdansk University of Technology)

• Denis Demidov (Kazan Federal University and Russian Academy of Sciences)

• Joel Guerrero (University of Genoa and Wolf Dynamics)

• Khaled Ibrahim (Lawrence Berkeley National Laboratory)

• Kate Isaacs (University of Arizona)

• Beau Johnston (Australian National University and University of New England)

• Maged Korga

• Michael Lehn (University of Ulm)

• Guo Liang (Open Data Center Committee and China Academy of Information and Communications Technology)

• Xiaoyi Lu (Ohio State University)

• Amitava Majumdar (San Diego Supercomputer Center)

• Jorji Nonaka (RIKEN)

• Peter Pirkelbauer (Lawrence Livermore National Laboratory)

• Harald Servat (Intel)

• Ashwin Siddarth (University of Texas at Arlington)

• Manodeep Sinha (Swinburne University of Technology)

• Gábor Szárnyas (Budapest University of Technology and Economics)

• Mahidhar Tatineni (San Diego Supercomputer Center)

• Jianfeng Zhan (Chinese Academy of Sciences)

We thank the program committee and the subreviewers for their careful review of the submitted papers; each paper received at least four reviews. We thank the authors for their patience with the publication process.

The ESIF-HPC-2 benchmark suite

We describe the development of the ESIF-HPC-2 benchmark suite, a collection of kernel and application benchmark codes for measuring computational and I/O performance from single nodes to full HPC systems, which was used for acceptance testing in our recent HPC procurement. The configurations of the benchmarks used for our system are presented. We also describe a set of "dimensions" that can be used to classify benchmarks and to assess the coverage of a suite systematically. The collection is offered free of charge as a GitHub repository for general use and further development.
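As an illustration of how such classification dimensions might be used to check a suite's coverage, the sketch below tags a few well-known benchmarks along hypothetical dimensions and counts how many dimension combinations a suite exercises. The dimension names, benchmark tags, and coverage metric are assumptions made for illustration only, not the taxonomy defined in the paper.

```python
# Illustrative sketch: classify benchmarks along hypothetical "dimensions"
# and measure how much of the dimension space a suite covers. The dimension
# names and tags below are invented examples, not the ESIF-HPC-2 taxonomy.

DIMENSIONS = {
    "scale":    ["single-node", "multi-node", "full-system"],
    "resource": ["compute", "memory", "interconnect", "io"],
    "kind":     ["kernel", "application"],
}

# Example classification of a few widely used benchmarks.
SUITE = {
    "HPL":    {"scale": "full-system", "resource": "compute",      "kind": "kernel"},
    "STREAM": {"scale": "single-node", "resource": "memory",       "kind": "kernel"},
    "OSU":    {"scale": "multi-node",  "resource": "interconnect", "kind": "kernel"},
    "IOR":    {"scale": "full-system", "resource": "io",           "kind": "kernel"},
}

def coverage(suite, dimensions):
    """Return (covered, total): distinct dimension-value combinations hit
    by the suite versus the size of the full dimension space."""
    covered = {tuple(tags[d] for d in dimensions) for tags in suite.values()}
    total = 1
    for values in dimensions.values():
        total *= len(values)
    return len(covered), total

if __name__ == "__main__":
    hit, total = coverage(SUITE, DIMENSIONS)
    print(f"suite covers {hit} of {total} dimension combinations")
    # Uncovered combinations (e.g., no application-class or multi-node
    # memory benchmark here) point to performance measures the suite
    # does not yet exercise.
```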

Power modeling for Phytium FT-2000+/64 multi-core architecture

Power and energy consumption are first-class constraints on high-performance-computing (HPC) systems. Understanding the power consumption of such systems is crucial for software optimization and hardware architecture design. Unfortunately, the subtle interactions between the CPU and the memory subsystem make precise power modeling highly challenging on emerging multi-core architectures. This paper presents the first software-based power model for Phytium FT-2000+/64, an ARM-based HPC multi-core architecture. It shows that, by carefully choosing and modeling a set of system-wide metrics, one can build an accurate power model for the multi-core CPU and the DRAM memory subsystem, the two major energy consumers in an HPC compute node. We evaluate our approach by applying it to the HPCC benchmarks and comparing our results against real power measurements. Experimental results show that our approach is highly accurate in modeling the power consumption of FT-2000+/64. The average error rate for CPU power modeling across all scales (8, 16, 32, and 64 processes) is 2%, and the average error rate for memory power modeling is about 7.5%.
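The abstract does not reproduce the model itself. A common form for such counter-based power models, and one plausible reading of "modeling a set of system-wide metrics", is a regression from per-run metrics to measured power. The sketch below fits a linear model with NumPy and reports the relative error against measured power; all metric names and numbers are invented for illustration and are not the paper's data or model.

```python
# Minimal sketch of a software-based power model of the kind described:
# fit measured power against system-wide metrics, then report model error.
# The metrics (IPC, LLC misses/s, DRAM accesses/s) and all numbers below
# are hypothetical; the paper's actual model and data are not reproduced.
import numpy as np

# One row per benchmark run: [IPC, LLC misses/s, DRAM accesses/s].
X = np.array([
    [1.8, 2.1e7, 4.0e8],
    [0.9, 8.5e7, 9.2e8],
    [1.2, 5.0e7, 6.1e8],
    [2.0, 1.0e7, 3.3e8],
    [1.5, 3.8e7, 5.0e8],
    [0.7, 9.9e7, 1.1e9],
])
p_measured = np.array([58.0, 91.0, 73.0, 55.0, 66.0, 91.0])  # watts, from a meter

# Linear model with an intercept term for static power:
#   p ≈ c0 + c1*IPC + c2*misses + c3*accesses
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, p_measured, rcond=None)

p_model = A @ coef
rel_err = np.abs(p_model - p_measured) / p_measured
print(f"mean relative error: {rel_err.mean():.1%}")
```

In practice the metrics would come from hardware performance counters sampled during each benchmark run, and the model would be validated on runs held out from the fit, as the paper does by comparing against real power measurements.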