
Google Launches A Tool That Can Scale and Parallelize Neural Networks

GSPMD separates programming an ML model from parallelization and is capable of scaling most deep learning network architectures

Google AI has launched GSPMD (General and Scalable Parallelization for ML Computation Graphs) to address the challenge of scaling machine learning models. GSPMD is capable of scaling most deep learning network architectures and has already been applied to many deep learning models, including GShard-M4, BigSSL, LaMDA, ViT, and MetNet-2. It has also been integrated into multiple ML frameworks, including TensorFlow and JAX, which use XLA as a shared compiler.

The solution separates the task of programming an ML model from the challenge of parallelization. It allows model developers to write programs as if they were running on a single device with very large memory and compute capacity. The user only needs to add a few lines of annotation code to a subset of critical tensors in the model code to indicate how they should be partitioned. With GSPMD, developers can employ different parallelism algorithms for different use cases without having to reimplement the model.
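
In JAX, for example, such annotations can be expressed as sharding constraints on individual tensors, which XLA's SPMD partitioner (GSPMD) then propagates through the rest of the computation. The snippet below is a minimal sketch, assuming a recent JAX (0.4+) with the jax.sharding API; the mesh shape, axis names, and layer function are illustrative rather than taken from the GSPMD paper.

```python
# Minimal sketch: annotating tensors with shardings in JAX; XLA's SPMD
# partitioner (GSPMD) uses these hints to partition the whole program.
# Assumes a recent JAX (0.4+); shapes and axis names are illustrative.
import jax
import jax.numpy as jnp
from jax.experimental import mesh_utils
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# Build a logical device mesh over all local devices.
mesh = Mesh(mesh_utils.create_device_mesh((jax.device_count(), 1)),
            axis_names=("data", "model"))

def layer(x, w):
    # One annotation on a critical tensor: keep activations split by batch
    # over the "data" axis; GSPMD propagates shardings to other tensors.
    x = jax.lax.with_sharding_constraint(x, NamedSharding(mesh, P("data", None)))
    return jnp.dot(x, w)

# The model code is written as if for one large device; jit + XLA turn it
# into a partitioned, multi-device program with the needed communication.
layer_jit = jax.jit(
    layer,
    in_shardings=(NamedSharding(mesh, P("data", None)),    # inputs: data-parallel
                  NamedSharding(mesh, P(None, "model"))),   # weights: model-parallel
    out_shardings=NamedSharding(mesh, P("data", "model")),
)

x = jnp.ones((32, 128))
w = jnp.ones((128, 256))
y = layer_jit(x, w)
```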

The separation of model programming and parallelism allows developers to minimize code duplication. GSPMD is designed to support a large variety of parallelism algorithms under a uniform abstraction and implementation, and it also supports nested patterns of parallelism. This separation facilitates innovation in parallelism algorithms by letting performance experts focus on algorithms that best utilize the hardware, rather than on implementations that involve large amounts of cross-device communication.
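
As a hedged illustration of that point, the same unchanged layer can be re-partitioned under different strategies, including a nested combination of data and model parallelism, purely by changing the annotations. The sketch below makes the same assumptions as the one above (recent JAX lowering through XLA's SPMD partitioner); the helper and axis names are hypothetical.

```python
# Sketch: the same layer, written once, partitioned three different ways
# purely through sharding annotations. Assumes a recent JAX (0.4+).
import jax
import jax.numpy as jnp
from jax.experimental import mesh_utils
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

mesh = Mesh(mesh_utils.create_device_mesh((jax.device_count(), 1)),
            axis_names=("data", "model"))

def layer(x, w):                      # model code: written once, never rewritten
    return jnp.dot(x, w)

def partitioned(x_spec, w_spec, out_spec):
    # Hypothetical helper: wrap the same layer with a chosen sharding strategy.
    return jax.jit(layer,
                   in_shardings=(NamedSharding(mesh, x_spec),
                                 NamedSharding(mesh, w_spec)),
                   out_shardings=NamedSharding(mesh, out_spec))

data_parallel  = partitioned(P("data", None), P(None, None),    P("data", None))
model_parallel = partitioned(P(None, None),   P(None, "model"), P(None, "model"))
# Nested data + model parallelism: batch split over "data" while the output
# feature dimension is split over "model" at the same time.
nested         = partitioned(P("data", None), P(None, "model"), P("data", "model"))

x, w = jnp.ones((32, 128)), jnp.ones((128, 256))
y = nested(x, w)
```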

In the recent MLPerf set of performance benchmarks, a BERT-like encoder-only model with roughly 500 billion parameters, which the team parallelized with GSPMD across 2,048 TPU-v4 chips, yielded highly competitive results, utilizing up to 63% of the peak FLOPS that the TPU-v4s offer. As a shared, robust mechanism for different parallelism modes, GSPMD allows users to conveniently switch between modes in different parts of a model. This is especially valuable for models whose components have distinct performance characteristics, such as multimodal models that handle both images and audio.

“As this often requires building larger and even more complex models, we are pleased to share the GSPMD paper and the corresponding open-source library to the broader research community, and we hope it is useful for efficient training of large-scale deep neural networks,” wrote Yuanzhong Xu and Yanping Huang, software engineers at Google Research, Brain Team, in the blog post.

Meeta Ramnani
