Invoking parpool submits a batch job to start a parallel environment.

>> parpool(n)

where n is the number of labs (workers). If spmd is then invoked, this parallel environment is almost equivalent to pmode, but without the interactive GUI layout. Inside spmd, labindex and numlabs are available, just as they are in pmode. If spmd is not used (as in a parfor application), labindex is neither needed nor available. However, there are times when you need to know the number of workers; in that situation, query the current pool:


>> p = gcp;
>> n = p.NumWorkers
n = 4
  • parfor can be used with parpool (but not inside spmd). If no pool is open, parfor reverts to an ordinary for loop.
  • P>> parfor i=1:4, disp(['myid is ' num2str(labindex) '; i = ' num2str(i)]),end
    myid is 1; i = 4
    myid is 1; i = 3
    myid is 1; i = 2
    myid is 1; i = 1
    P>> for j=1:4, disp(['myid is ' num2str(labindex) '; j = ' num2str(j)]),end
    myid is 1; j = 1
    myid is 1; j = 2
    myid is 1; j = 3
    myid is 1; j = 4
    P>> x = 0; parfor i=1:10, x = x + i; end   % reduction operation
    P>> y = []; parfor i=1:10, y = [y, i]; end   % concatenation
    P>> z = []; parfor i=1:10, z(i) = i; end    % slicing
    P>> f = zeros(1,50); f(1) =  1; f(2) = 2;
    P>> parfor n=3:50
    f(n) = f(n-1) + f(n-2);
    end % Fibonacci numbers; not parallelizable (each f(n) depends on f(n-1) and f(n-2))
    % Next is a reduction, but "-" is not associative, so the operation fails;
    % a workaround is sketched below
    P>> u = 0; parfor i=1:10, u = u - i; end
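    A workaround, as a minimal sketch: recast the subtraction as an associative "+" reduction, then negate the result once outside the loop.
    P>> u = 0; parfor i=1:10, u = u + i; end
    P>> u = -u   % same value as 0 - 1 - 2 - ... - 10, i.e., -55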
  • spmd
    >> spmd, x = labindex; disp(['for myid = ' num2str(labindex) '; x = ' num2str(x)]), end
    1    for myid = 1; x = 1
    2    for myid = 2; x = 2
    3    for myid = 3; x = 3
    4    for myid = 4; x = 4

    The first column of the output above is the lab rank, printed automatically by spmd. Without spmd, you have no access to labindex (or numlabs).

  • >> delete(gcp) % to close all pre-existing or dangling parpool jobs
  • A nondistributed array resides, in its entirety, in the workspace of every lab. Each lab holds its own copy, so a change made on one lab is not automatically seen by the others.
  • A replicated array (nondistributed) has the same value on every lab; an array created on the MATLAB client is replicated to the labs when referenced inside spmd.
  • A variant array resides, in its entirety, in each individual lab’s workspace, with a potentially different value on each lab. On the client it is accessed as a composite array. A composite array can pass data between the MATLAB client and the labs directly, and can be manipulated
    • from the client. In this case, the {labnumber} must always be used. Example
      >> C{2} = C{4};
    • from within “spmd”. Example
      >> C = C + 4 % I haven’t figured out how to do C{2} = C{4} this way; labSend/labReceive should work (see the sketch below)
    >> A = Composite();
    >> for i = 1:4, A{i} = magic(4); end   % same value on every lab: effectively a replicated array
    >> A{2} = ones(1,4);   % change A on lab 2 directly from the MATLAB client
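    One way to do the C{2} = C{4} copy from within spmd is explicit message passing with labSend/labReceive (a minimal sketch, assuming a pool of at least 4 labs):
    >> spmd
    if labindex == 4
        labSend(C, 2);        % lab 4 sends its value of C to lab 2
    elseif labindex == 2
        C = labReceive(4);    % lab 2 replaces its C with the value from lab 4
    end
    end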
  • Codistributed arrays
    A codistributed array is a single array partitioned into segments (e.g., the columns of a 2-D array), each segment residing in the workspace of a different lab. This saves memory, which is ideal for large arrays.

    >> spmd
    A = [11:18; 21:28; 31:38; 41:48]
    D = codistributed(A, 'convert')
    end
    >> spmd
    D
    size(D)    % this will report size of the global array
    L = localPart(D)    % L is the same as D, locally
    size(L)    % this gives the size of the local array
    end
  • Non-distributed arrays
    Arrays created on the MATLAB client, before or after parpool, are nondistributed arrays (replicated and variant arrays); the entire array is stored on every lab.
  • Communication among labs (a sketch follows this list)
    • mpiInit
    • labSend
    • labReceive
    • labSendReceive
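    A minimal sketch of lab-to-lab communication with labSendReceive, assuming a pool of 4 labs: each lab passes its variant value to the neighbor on its right and receives from the neighbor on its left.
    >> spmd
    x = labindex;                           % variant array: a different value on each lab
    right = mod(labindex, numlabs) + 1;     % right neighbor, wrapping around the ring
    left  = mod(labindex - 2, numlabs) + 1; % left neighbor, wrapping around the ring
    y = labSendReceive(right, left, x);     % send x to "right", receive y from "left"
    end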

Codistributed arrays

  • Why create a codistributed array?
    For more efficient parallel computation and memory usage.
  • How to create a codistributed array?
    • Partitioning a larger array
      >> parpool(4)
      >> spmd
      A = [11:18;21:28;31:38;41:48];   % replicate array
      D = codistributed(A, 'convert')  % codistributed
      end
      1: localPart(D)| 2: localPart(D)| 3: localPart(D)| 4: localPart(D)
          11    12   |     13    14   |     15    16   |     17    18
          21    22   |     23    24   |     25    26   |     27    28
          31    32   |     33    34   |     35    36   |     37    38
          41    42   |     43    44   |     45    46   |     47    48
    • Building from smaller arrays
      >> parpool(4)
      >> spmd
      A = [11:13; 21:23; 31:33; 41:43] + (labindex-1)*3;  % variant array
      D = codistributed(A, 'convert');
      end
    • Using MATLAB constructor functions
      >> D = zeros(1000, codistributor());
    • codistributed is normally used in a parallel environment, such as spmd or pmode.
    • codistributed is similar to the scatter function in MPI.
    • Change the distribution of a codistributed array with redistribute (a combined sketch follows this list)
      X = redistribute(D, codistributor('1d', 1))
    • gather is the opposite of codistributed: it collects the distributed segments back into one whole array.
    • numel is not supported for codistributed arrays; it always returns one (1).
    • Use codcolon to find the first, step, and last element indices of a codistributed array.
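    • A combined sketch of redistribute and gather, assuming the same 4-lab pool and the 'convert' syntax used above:
      >> spmd
      A = [11:18; 21:28; 31:38; 41:48];
      D = codistributed(A, 'convert');             % distributed across the labs
      X = redistribute(D, codistributor('1d', 1)); % now distributed along dimension 1 (rows)
      B = gather(D);                               % B is a whole, replicated copy on every lab
      end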
  • Indexing into Codistributed Array
    >> spmd
    A = [11:18; 21:28; 31:38; 41:48];   % A is a replicated array
    D = codistributed(A, 'convert');    % D is A distributed on labs; saves memory
    L = localPart(D);                   % L is a local copy of D
    L(3,:)   % prints row 3 of local array
    n = size(L,2);  % column size of L
    L(3,1:n) % same as above
    D(3,:)   % PCT treats D as if localPart(D)
    D(3,1:end) % same as above
    s = distributionPartition(codistributor(D)); % size of local partitions
    last = sum(s(1:labindex));       % global ending element index for local worker
    first = last - s(labindex) + 1;  % global beginning element index
    D(3,first:end) % first:end is global indexing; D is hence global
    D(3,1:n)       % explicit indexing causes D to be treated as global;
                   % fails on labs 2 to 4
    end
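    In later PCT releases, globalIndices can replace the manual first/last bookkeeping above (a sketch; check that your toolbox version provides it):
    >> spmd
    cols = globalIndices(D, 2);  % global column indices owned by this lab
    D(3, cols)                   % row 3 of this lab's portion, via global indices
    end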