Abstract
A framework is proposed for the design and analysis of network-oblivious algorithms, namely algorithms that can run unchanged, yet efficiently, on a variety of machines characterized by different degrees of parallelism and communication capabilities. The framework prescribes that a network-oblivious algorithm be specified on a parallel model of computation where the only parameter is the problem’s input size, and then evaluated on a model with two parameters, capturing parallelism granularity and communication latency. It is shown that for a wide class of network-oblivious algorithms, optimality in the latter model implies optimality in the decomposable bulk synchronous parallel model, which is known to effectively describe a wide and significant class of parallel platforms. The proposed framework can be regarded as an attempt to port the notion of obliviousness, well established in the context of cache hierarchies, to the realm of parallel computation. Its effectiveness is illustrated by providing optimal network-oblivious algorithms for a number of key problems. Some limitations of the oblivious approach are also discussed.
| Original language | English |
| --- | --- |
| Article number | 3 |
| Journal | Journal of the ACM |
| Volume | 63 |
| Issue number | 1 |
| Number of pages | 36 |
| ISSN | 0004-5411 |
| DOIs | |
| Publication status | Published - Mar 2016 |
Keywords
- communication
- models of computation
- network locality
- oblivious algorithms
- parallel algorithms