Integration is one of many numerical computations that are highly suitable for parallel processing. No communication among the processors is required while the partial sums are being computed, so high parallel efficiency is achievable, and the method scales well to many workers. Only at the end of the computation is a many-to-one, collective, communication required to collect the partial sums from all the processors and compute the final integral.

In this example, we integrate cosine over the range [a, b]. For simplicity, the range along the abscissa is divided into p uniform partitions, and each partition is in turn divided into n uniform intervals. Within each interval, the mid-point rule treats the integrand as constant, so the integral over that interval is simply the area of a rectangle under the cosine curve: the height (cosine evaluated at the interval's mid-point) times h, the width of the interval.
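In symbols, with p·n intervals of width h = (b − a)/(p·n) in total, the approximation described above is:

```latex
\int_a^b \cos x \, dx \;\approx\; \sum_{k=1}^{p\,n} h \,\cos\!\Big(a + \big(k - \tfrac{1}{2}\big)h\Big)
```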

When multiple processors are available for parallel processing, the computation of each partition is assigned to a processor, which MATLAB calls a worker. Note that for p=1, we have the special case of serial processing.

The kernel of this example is the section of integration that is performed by a worker over its assigned partition. The kernel may be computed more efficiently with the vector form than with a for loop.

In the following, we demonstrate the integration in multiple ways (you can see the m-file containing all 7 integrations here):

  1. Serial integration (with for loop)
  2. Serial integration (with vector form)
  3. Parallel integration with spmd
  4. Parallel integration with parfor 1
  5. Parallel integration with parfor 2
  6. Parallel integration with drange 1
  7. Parallel integration with drange 2
  1. Serial integration with kernel computed by a for loop

    To start, we demonstrate the usage of Integral in a serial integration. There is only 1 partition, and the number of intervals, m, is 10000.
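    A minimal sketch of this serial for-loop version follows. The range [0, pi/2] and the variable names (intSum, x) are illustrative assumptions, not taken from the author's m-file; only a, b, and m follow the text.

    ```matlab
    a = 0; b = pi/2;        % integration range (assumed for illustration)
    m = 10000;              % number of intervals
    h = (b - a)/m;          % width of each interval
    intSum = 0;
    for i = 1:m
        x = a + (i - 0.5)*h;          % mid-point of interval i
        intSum = intSum + cos(x)*h;   % area of one rectangular strip
    end
    fprintf('Integral = %f\n', intSum)   % close to sin(b) - sin(a)
    ```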

  2. Serial integration with kernel computed by vector form
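    The same kernel in vector form replaces the loop with a single vectorized sum over all mid-points; again a sketch with an assumed range:

    ```matlab
    a = 0; b = pi/2;
    m = 10000;
    h = (b - a)/m;
    x = a + h*((1:m) - 0.5);   % all mid-points at once
    intSum = sum(cos(x))*h;    % one vectorized reduction
    ```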

  3. Parallel integration with spmd

    For parallel integration, we start with the Single Program Multiple Data, or spmd, paradigm. "Single Program" refers to the fact that the same program is run on all processors concurrently, while "Multiple Data" points to the fact that different data may be used on different processors. The two enabling utilities are:

    • numlabs
      The number of labs (workers) assigned to the spmd, . . . , end code region. This is typically the number of workers requested via matlabpool (the default), but it can be overridden to use, say, 2 workers, as in spmd 2, . . . , end.
    • labindex
      A worker’s index; this ranges from 1 to numlabs.
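    Using numlabs and labindex, each worker can locate and integrate its own partition. The following is a sketch, assuming a, b, and m are defined before the spmd block (they are broadcast to the workers); locSum is an illustrative name:

    ```matlab
    a = 0; b = pi/2;
    m = 10000;                       % intervals per partition
    spmd
        p  = numlabs;                % one partition per worker
        d  = (b - a)/p;              % partition width
        ai = a + (labindex - 1)*d;   % this worker's partition start
        h  = d/m;
        x  = ai + h*((1:m) - 0.5);   % mid-points in this partition
        locSum = sum(cos(x))*h;      % partial sum on this worker
    end
    intSum = sum([locSum{:}]);       % many-to-one collection of the Composite
    ```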

  4. Parallel integration with parfor 1
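    One plausible parfor version distributes the p partitions across the loop iterations, with the kernel in vector form inside each iteration. The names partSum and the assumed p = 4 are illustrative:

    ```matlab
    a = 0; b = pi/2;
    p = 4; m = 10000;
    d = (b - a)/p;
    partSum = zeros(1, p);           % sliced output variable
    parfor j = 1:p
        aj = a + (j - 1)*d;          % start of partition j
        h  = d/m;
        x  = aj + h*((1:m) - 0.5);
        partSum(j) = sum(cos(x))*h;  % vectorized kernel per partition
    end
    intSum = sum(partSum);
    ```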

  5. Parallel integration with parfor 2
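    A second parfor variant dispenses with explicit partitions and loops over every interval, accumulating into a parfor reduction variable. This sketch assumes n total intervals:

    ```matlab
    a = 0; b = pi/2;
    n = 40000;                 % total number of intervals
    h = (b - a)/n;
    intSum = 0;
    parfor i = 1:n
        x = a + (i - 0.5)*h;          % mid-point of interval i
        intSum = intSum + cos(x)*h;   % reduction variable
    end
    ```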

  6. Parallel integration with drange 1
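    With drange, the loop runs inside an spmd block over a codistributed array, and each worker executes only the iterations it owns. A sketch, with one partition per worker (p = numlabs) and illustrative names:

    ```matlab
    a = 0; b = pi/2;
    m = 10000;
    spmd
        p = numlabs;
        d = (b - a)/p;
        partSum = zeros(1, p, codistributor());  % one slot per partition
        for j = drange(1:p)                      % each worker does its own j
            aj = a + (j - 1)*d;
            h  = d/m;
            x  = aj + h*((1:m) - 0.5);
            partSum(j) = sum(cos(x))*h;
        end
        locTotal = gather(sum(partSum));         % reduce across workers
    end
    intSum = locTotal{1};
    ```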

  7. Parallel integration with drange 2
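    A second drange variant loops over every interval rather than over partitions, storing one strip per element of a codistributed array. Again a sketch under the same assumed range:

    ```matlab
    a = 0; b = pi/2;
    n = 40000;
    h = (b - a)/n;
    spmd
        strip = zeros(1, n, codistributor());
        for i = drange(1:n)          % iterations split by ownership
            x = a + (i - 0.5)*h;
            strip(i) = cos(x)*h;     % area of one strip
        end
        locTotal = gather(sum(strip));  % collective reduction
    end
    intSum = locTotal{1};
    ```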

Timings For Integration Example

The timings for all of the parallel paradigms used above are poor compared with the vector form of the serial integration. This is due to the overhead associated with parallel processing. If the amount of work per worker were more significant, the timings for the parallel methods would improve.
