An Analysis of Collocation on GPUs for Deep Learning Training

Titus Theodorus (Ties) Robroek, Ehsan Yousefzadeh-Asl-Miandoab, Pinar Tözün

Research output: Conference Article in Proceeding or Book/Report chapter › Article in proceedings › Research › peer-review

Abstract

Deep learning training is an expensive process that extensively uses GPUs. However, not all model training saturates modern powerful GPUs. To create guidelines for such cases, this paper examines the performance of the different collocation methods available on NVIDIA GPUs: naïvely submitting multiple processes to the same GPU using multiple streams, utilizing the Multi-Process Service (MPS), and enabling the Multi-Instance GPU (MIG). Our results demonstrate that collocating multiple model training runs yields significant benefits, providing up to three times the training throughput despite increased epoch time. On the other hand, the aggregate memory footprint and compute needs of the models trained in parallel must fit within the available memory and compute resources of the GPU. MIG can be beneficial thanks to its interference-free partitioning but can suffer from sub-optimal GPU utilization with dynamic or mixed workloads. In general, we recommend MPS as the best-performing and most flexible form of collocation for a single user submitting training jobs.
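As context for the collocation methods examined in the paper, the following is a minimal illustrative sketch (not taken from the paper) of how two training jobs might be submitted to the same GPU under MPS. The script name `train.py`, the model names, and the 50% per-client thread quota are hypothetical placeholders, and an MPS control daemon is assumed to already be running on the node (e.g., started with `nvidia-cuda-mps-control -d`).

```python
# Illustrative sketch: collocating two training runs on one GPU under NVIDIA MPS.
# Assumes an MPS daemon is already running and that `train.py` is a placeholder
# for any single-GPU training script.
import os
import subprocess

def launch(script_args, thread_pct=None):
    env = os.environ.copy()
    env["CUDA_VISIBLE_DEVICES"] = "0"  # both jobs target the same physical GPU
    if thread_pct is not None:
        # Optional MPS knob limiting the fraction of SMs each client may use.
        env["CUDA_MPS_ACTIVE_THREAD_PERCENTAGE"] = str(thread_pct)
    return subprocess.Popen(["python", "train.py", *script_args], env=env)

# Two model trainings submitted concurrently; MPS merges their kernel launches
# into a single GPU context instead of time-slicing separate contexts.
jobs = [
    launch(["--model", "resnet50", "--epochs", "1"], thread_pct=50),
    launch(["--model", "mobilenet", "--epochs", "1"], thread_pct=50),
]
for job in jobs:
    job.wait()
```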
Original language: English
Title of host publication: Proceedings of the 4th Workshop on Machine Learning and Systems, EuroMLSys 2024, Athens, Greece, 22 April 2024
Number of pages: 10
Publisher: Association for Computing Machinery
Publication date: 2024
Pages: 81-90
Publication status: Published - 2024

Keywords

  • Deep learning training
  • GPU utilization
  • Collocation methods
  • NVIDIA GPUs
  • Multi-Process Service (MPS)
  • Multi-Instance GPU (MIG)
  • Training throughput
  • Model training
  • Memory footprint
  • Compute resources
