In the above, *f* represents the fraction of the code that cannot be parallelized; the remaining fraction, *1 – f*, is parallelizable. In the best case, if the parallelized portion scales linearly with the number of workers *N*, its runtime shrinks to *(1 – f)/N* of the original, and hence

T_N = [f + (1 - f)/N] * T_1
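As a quick numeric sketch of the formula above (the values *T_1 = 100 s*, *f = 0.1*, and *N = 10* here are illustrative, not taken from the text):

```python
# Quick numeric check of T_N = [f + (1 - f)/N] * T_1.
# Illustrative values, not from the text.
t1 = 100.0   # serial (single-worker) runtime in seconds
f = 0.1      # non-parallelizable fraction of the code
n = 10       # number of workers

t_n = (f + (1.0 - f) / n) * t1
print(t_n)   # 19.0 seconds, not 100/10 = 10, because of the serial fraction
```

Note that even with 10 workers the runtime is 19 s rather than the ideal 10 s; the serial fraction dominates as *N* grows.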

Speedup ratio, *S*, and parallel efficiency, *E*, may be used:

- to estimate how much a code would speed up if it were parallelized. For example, if *f = 0.1*, the speedup bound above predicts at most a 10-fold speedup in the limit of many workers. On the other hand, a code that is only 50% parallelizable will at best see a factor-of-2 speedup. In the latter case, a potential speedup of only a factor of two may not be compelling enough to justify a parallelization effort, especially if parallelizing the code takes substantial work.
- to generate a plot of runtime vs. number of workers to understand the behavior of the parallelized code.
- to see where the parallel efficiency reaches the point of diminishing returns. With this information, you can determine, for a fixed problem size, the optimal number of workers to use.
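The quantities above can be tabulated directly. The sketch below assumes the standard definitions *S = T_1 / T_N* and *E = S / N*, with *T_N* given by the equation earlier in this section:

```python
# Amdahl's-law speedup and parallel efficiency for a fixed serial fraction f.
# Assumes S = T_1 / T_N and E = S / N, with T_N = [f + (1 - f)/N] * T_1.

def speedup(f, n):
    """Predicted speedup on n workers for serial fraction f."""
    return 1.0 / (f + (1.0 - f) / n)

def efficiency(f, n):
    """Parallel efficiency: speedup divided by worker count."""
    return speedup(f, n) / n

for n in (1, 2, 4, 8, 16, 64, 256):
    print(f"N={n:4d}  S={speedup(0.1, n):6.2f}  E={efficiency(0.1, n):.3f}")
```

Running this for *f = 0.1* shows the speedup saturating toward the 10-fold bound while the efficiency falls steadily, which makes the point of diminishing returns easy to read off for a given worker budget.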
