Abstract
Deep learning training is an expensive process that makes extensive use of GPUs, but not all model training saturates modern, powerful GPUs. To establish guidelines for such cases,
this paper examines the performance of the different collocation methods available on NVIDIA GPUs: naïvely submitting multiple processes to the same GPU using multiple streams,
utilizing the Multi-Process Service (MPS), and enabling Multi-Instance GPU (MIG). Our results demonstrate that collocating multiple model training runs yields significant benefits, increasing training throughput by up to a factor of three despite longer epoch times. On the other hand, the aggregate memory footprint and compute needs of the models trained in parallel must fit within the available memory and compute resources of the GPU. MIG can be beneficial thanks to its interference-free partitioning but can suffer from sub-optimal GPU utilization with dynamic or mixed workloads. In general, we recommend MPS as the best-performing and most flexible form of collocation for a single user submitting training jobs.
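For context, the three collocation mechanisms compared in the paper are enabled through standard NVIDIA tooling. The following is an illustrative sketch only (the device index `0` and the MIG profile IDs are assumptions; profile IDs vary by GPU model):

```shell
# 1. Naïve collocation: simply launch several training processes on the
#    same GPU; the driver time-slices them across streams.
# python train_model_a.py & python train_model_b.py &

# 2. MPS: start the Multi-Process Service daemon so concurrent processes
#    share the GPU through a single CUDA context.
nvidia-cuda-mps-control -d
# To stop it later:
# echo quit | nvidia-cuda-mps-control

# 3. MIG: enable MIG mode on device 0 (requires a MIG-capable GPU and
#    admin rights), then create GPU instances with chosen profiles.
nvidia-smi -i 0 -mig 1
# Example: create two instances and their compute instances (the profile
# ID shown here, 9 = 3g.20gb on an A100, is hardware-specific).
nvidia-smi mig -cgi 9,9 -C
```

Each training process can then be pinned to a MIG instance via `CUDA_VISIBLE_DEVICES` set to the instance's MIG UUID.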
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the 4th Workshop on Machine Learning and Systems, EuroMLSys 2024, Athens, Greece, 22 April 2024 |
| Number of pages | 10 |
| Publisher | Association for Computing Machinery |
| Publication date | 2024 |
| Pages | 81-90 |
| DOIs | |
| Publication status | Published - 2024 |
Keywords
- Deep learning training
- GPU utilization
- Collocation methods
- NVIDIA GPUs
- Multi-Process Service (MPS)
- Multi-Instance GPU (MIG)
- Training throughput
- Model training
- Memory footprint
- Compute resources