

SCC Winners

SCC22 (Dallas, Texas)
- Overall: National Tsing Hua University
- Linpack: University of California, San Diego (114.26 Teraflops)
- HPCG: Friedrich-Alexander-Universität (1.97 Teraflops)
- io500: University of Texas at Austin (Score: 42.486)

SCC22 (Virtual)
- Overall: Sun Yat-sen University

SCC21 (Virtual)
- Overall: Tsinghua University
- Linpack: Southern University of Science and Technology (280 Teraflops)

SCC20 (Virtual)
- Overall: Tsinghua University
- Linpack: Tsinghua University (299.99 Teraflops)
- HPCG: Tsinghua University (15.81 Teraflops)
- io500: Tsinghua University (Score: 143.73)

SCC19 (Denver, Colorado)
- Overall: Tsinghua University
- Linpack: Nanyang Technological University (51.74 Teraflops)
- HPCG: Nanyang Technological University (1.86 Teraflops)
- io500: Tsinghua University (Score: 30.56)

SCC18 (Dallas, Texas)
- Overall: Tsinghua University
- Linpack: Nanyang Technological University (56.51 Teraflops)
- HPCG: Tsinghua University (1.94 Teraflops)

SCC17 (Denver, Colorado)
- Overall: Nanyang Technological University
- Linpack: Nanyang Technological University (51.77 Teraflops)
- HPCG: Nanyang Technological University (2.01 Teraflops)

SCC16 (Salt Lake City, Utah)
- Overall: University of Science and Technology of China
- Linpack: University of Science and Technology of China (31.15 Teraflops)
- HPCG: University of Science and Technology of China (820.66 Gigaflops)

SCC15 (Austin, Texas)
- Overall: Tsinghua University
- Linpack: Technische Universität München (7.134 Teraflops)
- HPCG: Tsinghua University (207.4 Gigaflops)

SCC14 (New Orleans, Louisiana)
- Overall: University of Texas at Austin
- Linpack: National Tsing Hua University (10.07 Teraflops)

SCC13 (Denver, Colorado)
- Overall: University of Texas at Austin
- Linpack: National University of Defense Technology (8.224 Teraflops)
- Commodity Track: Bentley University and Northeastern University

SCC12 (Salt Lake City, Utah)
- Overall: University of Texas at Austin
- Linpack: National University of Defense Technology (3.014 Teraflops)
- LittleFe Track: University of Utah

SCC11 (Seattle, Washington)
- Overall: National Tsing Hua University
- Linpack: State University of Nizhny Novgorod (1.93 Teraflops)

SCC10 (New Orleans, Louisiana)
- Overall: National Tsing Hua University
- Linpack: University of Texas at Austin (1.07 Teraflops)

SCC09 (Portland, Oregon)
- Overall: Stony Brook University
- Linpack: Colorado University (692 Gigaflops)

SCC08 (Austin, Texas)
- Overall: Indiana University
- Linpack: National Tsing Hua University (703 Gigaflops)

SCC07 (Reno, Nevada)
- Overall: University of Alberta
- Linpack: National Tsing Hua University (420 Gigaflops)
|
Once again, the SC Conference series is pleased to host the Student Cluster Competition (SCC), now in its eighteenth year, at SC23. The SCC is an opportunity for students to showcase their expertise in a friendly, yet spirited, competition. Held as part of the Students@SC program, the SCC is designed to introduce the next generation of students to the high-performance computing community. The competition draws teams of undergraduate students from around the world.
The Student Cluster Competition is a multi-disciplinary HPC experience integrated within the HPC community's biggest gathering, the Supercomputing Conference. The competition is a microcosm of a modern HPC center that teaches and inspires students to pursue careers in the field. It demonstrates the breadth of skills, technologies, and science it takes to build, maintain, and utilize supercomputers.
Competition teams are composed of six students, an advisor, and vendor partners. The students provide their skills and enthusiasm, the advisor provides guidance, and the vendor partners provide resources (e.g., software, expertise, and travel funding).
2023 Competition Details
Colorado Convention Center, Denver
The SC23 Student Cluster Competition will be an in-person event in Denver, November 13-15, 2023. The competition will be chaired by Jenett Tillotson, National Center for Atmospheric Research (NCAR).
The SCC23 Committee conducted two webinars, on April 25 and 27, to describe the competition and answer questions. They described several rule changes, including new per-node power limits and a power budget for networking equipment. They also described how to form a successful team, including new rules requiring that half of the team members must be new to the SCC. The webinar slides are here, and recordings of the webinars are here and here.
Benchmarks:
- High-Performance Linpack (HPL)
The HPL benchmark solves a random dense linear system in double precision arithmetic. It is often used to measure the peak performance of a computer or a high-performance computing (HPC) cluster. The ranking of the world's top 500 supercomputers is determined by their performance on the HPL benchmark.
Read more: https://netlib.org/benchmark/hpl/
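As a rough illustration of what HPL measures, the sketch below solves a random dense system with NumPy on a single node and converts the wall time into GFLOP/s using HPL's nominal operation count. This is only a toy stand-in: the actual benchmark is C code that distributes the matrix across the cluster with MPI and a tuned BLAS, and the problem size used here is arbitrary.

```python
# Toy single-node stand-in for what HPL measures: solve a random dense
# system Ax = b in double precision and report achieved GFLOP/s.
# The real HPL distributes the matrix over MPI ranks and a tuned BLAS.
import time
import numpy as np

n = 4000                                   # problem size (HPL's "N"); arbitrary here
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

t0 = time.perf_counter()
x = np.linalg.solve(A, b)                  # LU factorization + triangular solves
elapsed = time.perf_counter() - t0

flops = (2.0 / 3.0) * n**3 + 2.0 * n**2    # HPL's nominal operation count
print(f"N={n}  time={elapsed:.2f} s  perf={flops / elapsed / 1e9:.1f} GFLOP/s")

# Residual check, loosely analogous to HPL's correctness test.
print("scaled residual:",
      np.linalg.norm(A @ x - b) / (np.linalg.norm(A) * np.linalg.norm(x)))
```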
- HPC Conjugate Gradient (HPCG)
The HPCG benchmark uses a preconditioned conjugate gradient (PCG) algorithm to measure the performance of HPC platforms with respect to frequently observed but challenging patterns of computation, communication, and memory access. While HPL provides an optimistic performance target for applications, HPCG can be considered a lower bound on performance. Many of the top 500 supercomputers also report their HPCG performance as a reference.
Read more: https://www.hpcg-benchmark.org/
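The SciPy sketch below gives a feel for the sparse, memory-bound pattern HPCG exercises: a conjugate gradient solve on a 3D Laplacian. It is not the HPCG reference code, which uses a 27-point stencil, a multigrid preconditioner with symmetric Gauss-Seidel smoothing, and MPI halo exchanges; the grid size and unpreconditioned solver here are simplifying assumptions.

```python
# Sketch of the computational pattern HPCG stresses: a conjugate
# gradient solve on a sparse 3D Laplacian (unpreconditioned here, for
# brevity). Sparse matrix-vector products dominate, so memory bandwidth
# rather than peak FLOP rate limits performance, which is exactly the
# behavior HPCG is designed to expose.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

nx = 32                                     # grid points per axis (arbitrary)
lap1d = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(nx, nx))
eye = sp.identity(nx)
# 7-point 3D Laplacian assembled via Kronecker sums.
A = (sp.kron(sp.kron(lap1d, eye), eye)
     + sp.kron(sp.kron(eye, lap1d), eye)
     + sp.kron(sp.kron(eye, eye), lap1d)).tocsr()
b = np.ones(A.shape[0])

x, info = cg(A, b)                          # info == 0 means converged
print("converged:", info == 0, " residual norm:", np.linalg.norm(b - A @ x))
```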
- MLPerf Inference
Machine learning (ML) is increasingly used in many scientific domains to make groundbreaking innovations. MLPerf Inference is a benchmark suite for measuring how fast systems can run models in a variety of deployment scenarios. The key motivation behind this benchmark is to measure ML-system performance in an architecture-neutral, representative, and reproducible manner.
Read more: https://mlcommons.org/en/inference-datacenter-30/
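The toy Python sketch below shows the kind of measurement the suite formalizes: issue queries to a model, record per-query latency, and report throughput and a tail-latency percentile. The real benchmark drives standardized models through the LoadGen harness under fixed scenario rules (e.g., SingleStream, Server, Offline); the stand-in model, query count, and percentile below are illustrative assumptions only.

```python
# Toy latency/throughput measurement in the spirit of MLPerf Inference.
# The "model" is a stand-in; the real suite uses LoadGen, standardized
# models, and strict scenario rules for issuing and timing queries.
import time
import numpy as np

def model(batch: np.ndarray) -> np.ndarray:
    # Stand-in for a real network's forward pass.
    return np.tanh(batch @ np.ones((batch.shape[1], 10)))

queries = [np.random.rand(1, 512) for _ in range(200)]
latencies = []
start = time.perf_counter()
for q in queries:
    t0 = time.perf_counter()
    model(q)
    latencies.append(time.perf_counter() - t0)
total = time.perf_counter() - start

lat_ms = np.array(latencies) * 1e3
print(f"throughput: {len(queries) / total:.1f} queries/s")
print(f"p90 latency: {np.percentile(lat_ms, 90):.3f} ms")
```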
Applications:
- MPAS (Atmosphere Core)
The Model for Prediction Across Scales—Atmosphere (MPAS-A) is an atmospheric simulation model for use in climate, regional climate, and weather research. MPAS-A supports global and limited-area domains with horizontal resolution from O(100) km down to O(1) km or less, and it employs unstructured meshes known as centroidal Voronoi tessellations (CVTs). The model consists of a dynamical core, which handles the resolved-scale equations of motion, as well as parameterizations of additional physical processes. MPAS-A is developed by the National Center for Atmospheric Research (NCAR), and it shares software infrastructure that was co-developed with the Los Alamos National Laboratory.
Key software characteristics of MPAS-A:
- Runs on hardware as limited as a Raspberry Pi or as powerful as the largest systems on the Top500 list
- Primarily Fortran 2008 code, with some C
- Parallelization with MPI and OpenMP by horizontal domain decomposition (a simplified sketch of this pattern follows the links below)
- Support (in a separate code branch) for executing parts of the model on GPUs via OpenACC
Homepage in NCAR's MMM Lab: https://www.mmm.ucar.edu/models/mpas
Source code repository: https://github.com/MPAS-Dev/MPAS-Model
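As a much-simplified picture of that horizontal domain decomposition, the mpi4py sketch below splits a 1D periodic field across ranks and exchanges one halo cell with each neighbor so a stencil can be applied locally. MPAS-A itself is Fortran, partitions an unstructured Voronoi mesh rather than a regular strip, and has its own exchange infrastructure; mpi4py, the toy field, and the script name are assumptions for illustration only.

```python
# Simplified halo-exchange illustration of domain-decomposition
# parallelism (MPAS-A does this in Fortran on an unstructured mesh).
# Requires mpi4py; run with e.g.:  mpirun -np 4 python halo_sketch.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank owns a strip of the domain plus one halo cell on each side.
n_local = 16
field = np.full(n_local + 2, float(rank))   # interior cells + 2 halos

left = (rank - 1) % size                    # periodic neighbors
right = (rank + 1) % size

# Fill halo cells from the neighboring ranks' boundary cells.
comm.Sendrecv(sendbuf=field[1:2], dest=left,
              recvbuf=field[-1:], source=right)
comm.Sendrecv(sendbuf=field[-2:-1], dest=right,
              recvbuf=field[0:1], source=left)

# With halos filled, a stencil can be applied to the interior cells
# without any further communication.
laplacian = field[:-2] - 2.0 * field[1:-1] + field[2:]
print(f"rank {rank}: left halo={field[0]}, right halo={field[-1]}, "
      f"edge stencil values: {laplacian[0]}, {laplacian[-1]}")
```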
- A numerical simulation, written in Fortran with MPI, that studies the descent of cold, dense plumes in a stratified layer. Such simulations are important for understanding the dynamics of plume development with respect to the thermal and magnetic forces inside stars.
- Reproducibility Challenge
The Reproducibility Challenge is based on the SC22 paper "Symmetric Block-Cyclic Distribution: Fewer Communications Leads to Faster Dense Cholesky Factorization". In this paper, the authors are interested in the Cholesky factorization of large dense matrices performed in parallel in a distributed manner. Inspired by recent progress on asymptotic lower bounds on the total number of communications required to perform this operation, they present an original data distribution, Symmetric Block Cyclic (SBC), as an alternative to the standard 2D Block Cyclic (2DBC) distribution implemented in ScaLAPACK. It is designed to take advantage of the symmetry of the matrix to reduce inter-process communication. SBC is implemented within the paradigm of task-based runtime systems, using the dense linear algebra library Chameleon together with the StarPU runtime system. Experiments were carried out on the experimental platform PlaFRIM using homogeneous CPU-only nodes. The factorization of several synthetic test-case matrices demonstrates that the SBC distribution reduces the total volume of inter-process communication by a factor of sqrt(2) compared to the standard 2DBC distribution, as predicted by the theoretical analysis. The results clearly show that SBC delivers better performance and scalability than the 2DBC distribution in all tested configurations.
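For orientation, the sketch below prints the owner map of the standard 2DBC distribution, in which tile (i, j) of the blocked matrix belongs to process (i mod P, j mod Q) on a P x Q grid, restricted to the lower triangle that a Cholesky factorization of a symmetric matrix actually touches. The SBC distribution reassigns this ownership to exploit symmetry; its exact mapping is defined in the paper and not reproduced here, and the grid and tile counts below are arbitrary.

```python
# Owner map for the standard 2D block-cyclic (2DBC) distribution used
# by ScaLAPACK: tile (i, j) lives on process (i mod P, j mod Q).
# The SBC distribution from the paper changes this mapping to exploit
# matrix symmetry; see the paper for its exact definition.
P, Q = 2, 3            # process grid dimensions (arbitrary example)
n_tiles = 6            # matrix blocked into n_tiles x n_tiles tiles

def owner_2dbc(i: int, j: int) -> tuple[int, int]:
    """Process grid coordinates that own tile (i, j) under 2DBC."""
    return (i % P, j % Q)

# Print ownership of the lower triangle, the part Cholesky factorizes.
for i in range(n_tiles):
    row = ["({},{})".format(*owner_2dbc(i, j)) if j <= i else "  .  "
           for j in range(n_tiles)]
    print(" ".join(row))
```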