This page is intended for users who want to run MATLAB Parallel Computing Toolbox (PCT) jobs with more than 12 workers using MATLAB R2012b.

MATLAB's predefined local configuration should be used for interactive or batch applications that require up to 12 workers. This configuration is suitable for PCs as well as for any single Katana node. For applications requiring more than 12 workers, a user-defined configuration must be created. For your convenience, SCV has prepared a configuration named SGE, which selects Katana's batch scheduler, the Sun Grid Engine, for all MATLAB PCT applications. To use SGE with R2012b, import it through the MATLAB window as follows.

  1. Log in to Katana in a graphics-enabled window (e.g., X-Win32) and start MATLAB:
     katana % /usr/local/apps/matlab-2012b/bin/matlab
    

  2. In the Home tab, click Parallel and select Manage Cluster Profiles from the drop-down menu.

  3. At present, local is the default configuration; you can change the default via Parallel > Set Default. This means that if you start matlabpool with:
     >> matlabpool open % use default settings
     then the current default configuration, local, is used with the maximum worker count available. The upper limit for local is 12 workers (a PCT software limit); the actual maximum depends on your hardware configuration.
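
     To verify how many workers the pool actually acquired, you can query it with matlabpool('size'); a minimal sketch:
     >> matlabpool open                % open a pool with the default configuration
     >> nw = matlabpool('size')        % number of workers in the open pool (0 if none is open)
     >> matlabpool close               % release the workers when done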

  4. Now we are ready to import the SCV-prepared SGE configuration file. Click the Import button. In the popup window's File Name: box, enter the file name along with its absolute path:
     /usr/local/apps/matlab-2012b/SGE_Base.settings. Then click the Open button.
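
     Alternatively, the same import can be done from the MATLAB command line with parallel.importProfile (available in R2012b); this sketch should have the same effect as the Import button:
     >> parallel.importProfile('/usr/local/apps/matlab-2012b/SGE_Base.settings');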

  5. A new configuration labelled SGE appears under local. However, local is still the default. You can make any of your configurations the default by clicking Parallel and selecting Set Default from the drop-down menu.
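
     The default can also be changed programmatically with parallel.defaultClusterProfile (available in R2012b):
     >> parallel.defaultClusterProfile('SGE');   % make SGE the default configuration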

     If you have multiple configurations, it is advisable to specify the intended configuration explicitly to avoid unintended consequences, as shown below. Except in rare situations, omitting the worker count causes the maximum available to be used.
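
     For example, to explicitly request 16 workers through the SGE configuration (16 is an arbitrary illustrative count):
     >> matlabpool open SGE 16   % open 16 workers via the SGE configuration
     >> matlabpool close         % release the workers when done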

     Please be aware that if you use a configuration other than local, you are most likely using multiple nodes. Generally, communication within a node is faster than communication across nodes. For communication-bound applications, PCT throughput may therefore be less efficient, with longer wallclock time per processor. A further detriment to the communication time is that the nodes used for PCT computation communicate over Ethernet (100 Mbit) rather than InfiniBand (2 Gbit).