Welcome

We are a motivated Student Cluster Competition team at Nanchang University that has participated in the ASC Student Supercomputer Challenge for nine years. As the host of the finals in 2018, our school welcomed outstanding supercomputing talents from all over the world. Our best result in ASC is a First Prize. Now we are pushing toward the broader boundaries of supercomputing and seeking to build connections with more universities active in supercomputing.

Nanchang University is the only Project 211 university in Jiangxi Province and devotes significant effort to computer science research, especially high-performance computing (HPC). HPC underpins today's boom in artificial intelligence and data-intensive applications, and can therefore be regarded as a foundation of cutting-edge computer science and technology. Nanchang University has built the largest distributed high-performance cluster for educational purposes in central and western China, with a peak performance of 100 TFLOPS on a carefully designed heterogeneous architecture.

2024

2024.02

We participated in ASC24 and won a Second Prize. Congratulations to Jiarong, Fuxiang, Weihan, Yuhao, and Pingye!

2023.12

We are going to participate in ASC24. If you are a self-motivated student at Nanchang University, you are welcome to join us!

2023

We participated in the SC23 Student Cluster Competition (SCC) in Denver, USA, under the leadership of Ke Chen. Although we had no prior experience in this competition, we firmly believed that we could do our best. Because we could not find vendor support, we took part in IndySCC, a track of SCC open to schools worldwide and aimed mainly at teams with less vendor backing or experience.

In the end, we completed all the challenges, although we did not place in the top 3; you can read about our experience on the IndySCC website.

We were invited to observe the ASC22 Final, which was a pleasant journey for everyone.

2022

We won two Second Prizes in ASC22 (both within the top 8%) under the leadership of Ruobing Yao and Yuxuan Li.

During this competition, we tackled the Yuan Large Language Model Challenge and the DeePMD Challenge.

YUAN LARGE LANGUAGE MODEL CHALLENGE

Training a large-scale language model like Yuan is difficult because it requires not only massive computing resources but also sophisticated training methods to handle a huge number of parameters efficiently. We completed the Yuan Large Language Model Challenge in 67.75 hours using four 32 GB Tesla V100 (DGXS) GPUs, the ZeRO parallel strategy, and various training acceleration techniques.
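As a rough illustration of what the ZeRO strategy looks like in practice, here is a minimal DeepSpeed sketch. It is not our competition code: the model, batch sizes, and hyperparameters below are placeholders, and the real challenge used the Yuan model itself.

```python
# A minimal, illustrative sketch of ZeRO data parallelism with DeepSpeed
# (not our actual Yuan training setup). Launch under the DeepSpeed launcher,
# e.g.  deepspeed --num_gpus=4 zero_sketch.py
import torch
import torch.nn as nn
import deepspeed

ds_config = {
    "train_batch_size": 16,                      # 4 GPUs x micro-batch 4
    "fp16": {"enabled": True},                   # mixed precision to fit 32 GB V100s
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    "zero_optimization": {
        "stage": 2,                              # partition optimizer states and gradients
        "offload_optimizer": {"device": "cpu"},  # optional CPU offload to save GPU memory
    },
}

# Toy stand-in model; the real challenge trains the Yuan large language model.
model = nn.Sequential(nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 1024))

engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)

for step in range(10):
    x = torch.randn(4, 1024, device=engine.device, dtype=torch.half)
    loss = engine(x).float().pow(2).mean()       # dummy loss for illustration
    engine.backward(loss)                        # ZeRO handles gradient partitioning
    engine.step()
```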

Our solution can be seen here.

DeePMD CHALLENGE

Traditional models used in molecular dynamics (MD) are usually density functional theory (DFT) models and empirical force field (EFF)-based models. The former offers quantum-mechanical precision but cannot handle large systems, while the latter is efficient but limited by the transferability of the model. In recent years, machine-learning-based MD (MLMD) methods have tackled this dilemma. As one such model, Deep Potential strikes a balance between accuracy and efficiency.

DeePMD-kit is a realization of Deep Potential: a deep learning package, written in Python/C++, for many-body potential energy representation and molecular dynamics. In this challenge, we were required to improve the training procedure. By optimizing custom operators and using compressed training, we achieved speedups of 7.946x (water), 3.121x (mgalcu), and 1.728x (copper) over the baseline results.
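As context for the compressed-training part of our optimization, the sketch below shows the usual DeePMD-kit train/freeze/compress workflow driven from Python. The file names are placeholders, the exact flags may differ between DeePMD-kit versions, and our custom-operator optimizations are not shown here.

```python
# Illustrative DeePMD-kit workflow: train a Deep Potential model, freeze it,
# then compress it (tabulated embedding net) for faster training/inference.
# "input.json" and the model file names are placeholders; flags may vary
# across DeePMD-kit versions.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["dp", "train", "input.json"])                 # train with the given training script
run(["dp", "freeze", "-o", "frozen_model.pb"])     # freeze the trained graph
run(["dp", "compress",                             # compress the frozen model
     "-i", "frozen_model.pb",
     "-o", "compressed_model.pb"])
```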

Our solution can be seen here.

2020-2021

We won two Second Prizes in ASC20-21 (both within the top 8%) under the leadership of Haichuan Hu and Sheng Yi.

During this competition, we tackled the Language Exam Challenge, the QuEST Challenge, and the PRESTO Challenge.