Google AI has launched GSPMD (General and Scalable Parallelization for ML Computation Graphs) to address the challenge of scaling deep learning models. GSPMD is capable of scaling most deep learning network architectures and has already been applied to many deep learning models, including GShard-M4, BigSSL, LaMDA, ViT, and MetNet-2. GSPMD has also been integrated into multiple ML frameworks, including TensorFlow and JAX, which use XLA as a shared compiler.
The solution separates the task of programming an ML model from the challenge of parallelization. It allows model developers to write programs as if they were running on a single device with very high memory and computation capacity. The user only needs to add a few lines of annotation code to a subset of critical tensors in the model code to indicate how they should be partitioned. With GSPMD, developers may employ different parallelism algorithms for different use cases without the need to reimplement the model.
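To make the annotation workflow concrete, here is a minimal sketch using JAX's pjit interface, which lowers sharding annotations to XLA/GSPMD. The toy model, mesh axis name, and tensor shapes are illustrative assumptions, and the exact module paths and argument names differ across JAX versions:

```python
import numpy as np
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, PartitionSpec as P
from jax.experimental.pjit import pjit

# A single logical mesh axis "data" spanning all available devices.
mesh = Mesh(np.array(jax.devices()), axis_names=("data",))

def predict(params, x):
    # Ordinary single-device code; no explicit communication anywhere.
    return jnp.tanh(x @ params["w"] + params["b"])

# Only the inputs and outputs are annotated; the compiler propagates the
# sharding to every intermediate tensor and inserts the needed communication.
p_predict = pjit(
    predict,
    in_shardings=(P(), P("data", None)),  # replicate params, shard the batch
    out_shardings=P("data", None),
)

with mesh:
    params = {"w": jnp.ones((128, 64)), "b": jnp.zeros((64,))}
    x = jnp.ones((1024, 128))             # batch dimension split across "data"
    y = p_predict(params, x)
```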
The separation of model programming and parallelism allows developers to minimize code duplication. GSPMD is designed to support a large variety of parallelism algorithms with a uniform abstraction and implementation, and it also supports nested patterns of parallelism. The solution facilitates innovation on parallelism algorithms by allowing performance experts to focus on algorithms that best utilize the hardware, rather than on implementations that involve a lot of cross-device communication.
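As a rough illustration of such nesting, the following sketch combines data parallelism with Megatron-style tensor model parallelism on a two-dimensional device mesh, again expressed through JAX's pjit; the 8-device mesh shape, the feed-forward layer, and the tensor sizes are assumptions for demonstration only:

```python
import numpy as np
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, PartitionSpec as P
from jax.experimental.pjit import pjit

# A 2D logical mesh: data parallelism along "data", tensor model
# parallelism along "model" (assumes 8 devices; adjust the reshape otherwise).
devices = np.array(jax.devices()).reshape(4, 2)
mesh = Mesh(devices, axis_names=("data", "model"))

def ffn(x, w_in, w_out):
    # A plain feed-forward block; GSPMD inserts the collectives needed to
    # combine the partial results of the sharded matrix multiplications.
    h = jax.nn.relu(x @ w_in)
    return h @ w_out

p_ffn = pjit(
    ffn,
    in_shardings=(
        P("data", None),   # activations: batch split across "data"
        P(None, "model"),  # first weight: columns split across "model"
        P("model", None),  # second weight: rows split across "model"
    ),
    out_shardings=P("data", None),
)

with mesh:
    y = p_ffn(jnp.ones((512, 1024)), jnp.ones((1024, 4096)), jnp.ones((4096, 1024)))
```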
In the recent MLPerf set of performance benchmarks, a BERT-like encoder-only model with roughly 500 billion parameters, which the team parallelized with GSPMD across 2048 TPU-V4 chips, yielded highly competitive results, utilizing up to 63% of the peak FLOPS that the TPU-V4 chips offer. As a shared, robust mechanism for different parallelism modes, GSPMD allows users to conveniently switch between modes in different parts of a model. This is especially valuable for models whose components have distinct performance characteristics, such as multimodal models that handle both images and audio.
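One hedged sketch of how such per-component mode switching can be expressed: inside a compiled function, re-annotating intermediate tensors changes the partitioning used by each part of the model. The sketch below uses jax.lax.with_sharding_constraint (found under jax.experimental.pjit in older JAX releases); the two-tower model and all shapes are hypothetical:

```python
import numpy as np
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, PartitionSpec as P
from jax.experimental.pjit import pjit

# Any device count works; the "model" axis collapses to size 1 on one device.
mesh = Mesh(np.array(jax.devices()).reshape(-1, 1), axis_names=("data", "model"))

def forward(image_feats, audio_feats, w_img, w_aud):
    # Image tower: activations sharded along both axes (data + model parallel).
    img = jax.lax.with_sharding_constraint(image_feats, P("data", "model"))
    img = img @ w_img
    # Audio tower: purely data parallel; the feature dimension stays replicated.
    aud = jax.lax.with_sharding_constraint(audio_feats, P("data", None))
    aud = aud @ w_aud
    return jnp.concatenate([img, aud], axis=-1)

p_forward = pjit(forward, out_shardings=P("data", None))

with mesh:
    out = p_forward(jnp.ones((64, 256)), jnp.ones((64, 256)),
                    jnp.ones((256, 128)), jnp.ones((256, 128)))
```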
“As this often requires building larger and even more complex models, we are pleased to share the GSPMD paper and the corresponding open-source library to the broader research community, and we hope it is useful for efficient training of large-scale deep neural networks,” wrote Yuanzhong Xu and Yanping Huang, Software Engineers, Google Research, Brain Team, in the blog post.