The Research Computing Services (RCS) group at Boston University maintains the Shared Computing Cluster (SCC) to provide high-performance computing to the BU research community. On occasion, a research project may have special needs that are not adequately served by the SCC. For those projects, the external resources provided through the XSEDE program may be appropriate.


  • Overview

    XSEDE (eXtreme Science and Engineering Discovery Environment) is a virtual system that provides compute resources to scientists and researchers from all over the country. Its mission is to facilitate research collaboration among institutions, enhance research productivity, provide remote data transfer, and enable remote instrumentation. XSEDE is funded by the National Science Foundation (NSF).

    Here is the Getting Started Guide for XSEDE.

    Training and workshops on various topics are available throughout the year.

  • What compute resources are available?

    XSEDE compute resources are administered by several universities and institutions across the country. They include a variety of powerful computer clusters for high-performance computing (HPC), high-throughput computing (HTC), and cloud computing. In particular, XSEDE provides the following kinds of resources that are not available on the BU SCC (please see the technical summary of the SCC).

    • An extremely large number of compute nodes and cores.
      Currently the maximum resource for a single MPI job on the SCC is 16 compute nodes (256 cores). On many XSEDE clusters (such as Stampede, Comet, and SuperMIC), however, you can run an MPI job on hundreds of nodes (thousands of cores). If you have an MPI program that scales well with the number of CPU cores, it is a good idea to migrate it to an XSEDE cluster (see the sample batch script after this list).
    • Giant compute nodes.
      Most compute nodes on the SCC have 16 CPU cores and some have 20. The XSEDE cluster Greenfield provides several giant compute nodes with around one hundred cores each. If you want to run a shared-memory (such as OpenMP) program that scales well with the number of CPU cores, Greenfield is a good option.
    • Large-memory nodes.
      The maximum memory per node on the SCC is 256 GB, while some XSEDE clusters (such as Bridges and Stampede) provide several terabytes, or even more than 10 TB, of memory per node. If you need to run a program that consumes more than 256 GB of memory, XSEDE resources are recommended instead.
    • A large number of advanced GPUs.
      There are 160 GPUs on the SCC. Many XSEDE clusters provide far more GPUs with newer architectures (such as the K40 and K80). The cluster XStream is designed particularly for GPU computation. If you want to use more GPUs or to explore new GPU architectures, you might consider using XSEDE resources.
    • Xeon Phi (MIC) coprocessors/processors.
      There are currently no Xeon Phi coprocessors on the SCC. Several XSEDE clusters (such as Stampede and SuperMIC) provide many first-generation Xeon Phi coprocessors (Knights Corner). The second-generation Xeon Phi processors (Knights Landing) on Stampede are also available to XSEDE users. If you want to use Xeon Phi coprocessors/processors to accelerate your program, XSEDE resources are a good option.
    • Advanced storage system.
      Parallel I/O is not currently supported on the SCC, while most XSEDE clusters support parallel I/O with advanced storage systems (such as Lustre). Some XSEDE clusters (such as Gordon) provide solid-state drives (SSDs) particularly for I/O-bound applications. If your program requires parallel I/O or very high I/O speed, XSEDE provides appropriate resources.
    • High-throughput computing system.
      The Open Science Grid supports high-throughput computing and is available to XSEDE users.
    • Cloud computing system and virtual machine.
      A new XSEDE system, Jetstream, provides cloud computing services. A portion of the cluster Comet is also configured for virtual-machine workloads.
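
    To make the comparison concrete, below is a minimal sketch of a Slurm batch script for a large MPI job, in the style used on TACC's Stampede. The node and task counts, queue name, and program name (my_mpi_program) are placeholders, and the launch command differs by site (TACC clusters use ibrun, while many others use mpirun or srun); check the target cluster's user guide for the exact conventions.

      #!/bin/bash
      #SBATCH -J mpi_scaling        # job name
      #SBATCH -N 128                # number of nodes (far beyond the SCC's 16-node limit)
      #SBATCH -n 2048               # total number of MPI tasks (16 per node here)
      #SBATCH -p normal             # queue/partition name (site-specific)
      #SBATCH -t 02:00:00           # wall-clock limit of 2 hours

      # launch the MPI executable; ibrun is TACC's MPI launcher wrapper
      ibrun ./my_mpi_program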
  • How do I get an XSEDE User Portal account?

    In order to use XSEDE resources, you must have an XSEDE User Portal (XUP) account. You can create an XUP account at the XSEDE user portal home page.

  • How do I apply for XSEDE allocations?

    Here is an overview of XSEDE allocations. There are three primary considerations:

    • Who qualifies?
      The Principal Investigator (PI) of an XSEDE project must be a faculty member, staff member, or post-doc at a US-based educational institution, or a member of a commercial organization. Members of the project may be any researchers, including graduate students, post-docs, and visiting scholars. For details, see XSEDE PI qualifications.
    • Which type of computer systems should I apply for?
      To determine the system that best matches your hardware and software requirements, please see available resources.
    • Which type of allocation (CPU time and disk storage allotments) should I request?
      There are four types of allocations (BU RCS, startup, education, and research) available to BU XSEDE users. A summary is given below.

      1. BU RCS allocations

        This is the simplest and quickest way to “sample” resources.

        If you are not sure which resources are appropriate for your computing needs, this is the right type of allocation to start with. BU, as an XSEDE member institution, has been given allocations by a few of the resource providers (institutions) across the country through XSEDE. RCS staff can add you as a member under this project, subject to approval by XSEDE.

        PROS: no need to write a proposal; time to approval is two business days.
        CONS: you can only run relatively short-duration jobs and are limited to the resources available to Boston University.

        | Machine Name | Resource Provider | Best Types of Computation | Resource Highlights |
        | Bridges | Pittsburgh Supercomputing Center (PSC) | Good for MPI, OpenMP, or GPU jobs; especially good for large-memory jobs | Large-memory (3 TB) and extremely-large-memory (12 TB) nodes |
        | Comet | San Diego Supercomputer Center (SDSC) | Good for MPI, OpenMP, or GPU jobs; supports virtual-machine jobs too | Intel Haswell processors; GPU nodes; virtual-machine repository |
        | Gordon | San Diego Supercomputer Center (SDSC) | Good for I/O-bound applications | SSD (flash) storage |
        | Greenfield | Pittsburgh Supercomputing Center (PSC) | Good for shared-memory (such as OpenMP) jobs | Giant compute nodes with around one hundred cores and around 10 TB of memory each |
        | Open Science Grid | Over 100 individual sites spanning the United States | Good for distributed high-throughput computing (DHTC) | Virtual cluster environment |
        | Jetstream | Indiana University (IU) and Texas Advanced Computing Center (TACC) | Particularly for cloud computing | User-friendly cloud environment |
        | Maverick | Texas Advanced Computing Center (TACC) | Particularly for visualization jobs | VNC server; GPU and large-memory nodes for visualization |
        | Stampede | Texas Advanced Computing Center (TACC) | The largest cluster among all XSEDE resources; good for massive MPI or OpenMP jobs | Thousands of compute nodes with 1 or 2 Intel Xeon Phi (MIC) coprocessors each; GPU nodes; large-memory nodes |
        | SuperMIC | Louisiana State University (LSU) | Good for MPI or OpenMP jobs | Hundreds of compute nodes with 2 Intel Xeon Phi (MIC) coprocessors each; large-memory nodes |
        | XStream | Stanford Research Computing Center (SRCC) | Particularly for GPU jobs | Tens of compute nodes with 8 NVIDIA K80 GPU cards (24 GB each); many machine/deep-learning platforms |
        Once you have an XUP account, complete the application form for a BU RCS allocation. This enables you to access one or more of the systems listed above within a few days, subject to XSEDE approval. The purpose of the BU RCS allocation is to provide the simplest, quickest way for BU researchers to explore the wide-ranging XSEDE facilities and identify, through on-site testing, the most appropriate resource for their startup allocation applications. While no resource-usage limit is enforced, we strongly recommend that each user adhere to a quota of 2000 SUs (CPU hours) per facility. Thank you for your cooperation.
      2. Startup, education and research allocations

        The startup allocation is suitable for researchers who need a moderate amount of resources, or as a first step toward a more significant research allocation. You may apply at any time; the request requires a one-page abstract, and the approval process takes about two weeks. The allocation expires after one year, after which a startup-allocation PI is encouraged to proceed with a follow-up research allocation request. However, renewal of a startup allocation is permitted with appropriate justification, subject to XSEDE reviewer approval.

        The education allocation enables the PI to provide classroom instruction or training activities. The policy for an education allocation is similar to that for a startup allocation.

        The research allocation enables the PI to continue research begun with a startup allocation. However, a startup allocation is not a prerequisite for requesting a research allocation. A successful research allocation request requires a detailed justification of resource usage (a 10- to 15-page proposal). Requests are reviewed quarterly by the XSEDE Resource Allocations Committee. Since research allocations are awarded competitively, PIs need to prepare a strong proposal. It is recommended to watch the online training Writing and Submitting a Successful XSEDE Allocation Proposal.

        For details on up-to-date policies, please consult the XSEDE allocation policy page.

        Once you have an XUP account, you can access the XSEDE Resource Allocation System (XRAS) to submit applications for these three types of allocations.

  • How do I login?

    There are several ways to log in to an XSEDE system on which you have an allocation. The easiest way is to log in via the Single Sign On (SSO) login hub. Alternatively, there is the GSI-SSHTerm method for logging in from your personal computer. If you prefer to use direct ssh, you will probably need to apply for a userid and password for the specific system on which you have an allocation.

    • Single Sign On (SSO) Login Hub
      The Single Sign On login hub (login.xsede.org) is a single point of entry to the XSEDE resources. When you log in to the hub with your XUP userid and password, a 12-hour proxy certificate is generated automatically, allowing you to access XSEDE resources for the duration of the proxy. You may then gsissh to any XSEDE system without a system-specific userid and password (see the sample session after this list).
      PROS: one universal userid/password works for all the XSEDE systems you have access to.
    • GSI-SSHTerm
      First, download and install GSI-SSHTerm. With this software, you can connect to an XSEDE host system from a PC (running Windows XP or Windows 7), Mac, or Linux machine using your XUP userid and password.
    • Direct ssh
      Most sites, e.g., LSU SuperMIC and TACC Stampede and Maverick, require a separate userid and password for access via direct ssh. You can submit a ticket to the XSEDE helpdesk (see the last section of this page) requesting direct-ssh access.
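
    A typical SSO session looks like the following minimal sketch. The username is a placeholder, and the gsissh target hostname (Comet's is assumed here) should be taken from the resource's user guide.

      # log in to the SSO hub with your XUP userid and password
      ssh your_xup_username@login.xsede.org

      # from the hub, hop to any resource you hold an allocation on;
      # the 12-hour proxy certificate handles the authentication
      gsissh comet.sdsc.xsede.org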
  • How do I transfer files?

    You may need to transfer files between an XSEDE system and your personal computer or between two XSEDE systems. There are two ways to transfer files: scp/sftp or Globus. Please refer to XSEDE Data Transfers & Management.

    • scp/sftp: All of the XSEDE resources are Linux/Unix clusters, so you can use scp or sftp to transfer files (a short example follows this list). Please refer to the SCC user guide for scp and sftp.
    • Globus: Globus is based on the GridFTP protocol. If you want to transfer a large amount of data (e.g., on the scale of terabytes), it is better to use Globus to obtain a faster transfer speed. For usage of Globus, please refer to the Globus User Guide for XSEDE users.
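
    For modest transfers, a single scp command from your personal computer is often enough, as in this minimal sketch (the username, hostname, and file names are placeholders):

      # push a local archive to your home directory on a remote XSEDE system
      scp results.tar.gz your_username@comet.sdsc.edu:~/

      # pull a result file from the remote system into the current local directory
      scp your_username@comet.sdsc.edu:~/output.log .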
  • Other information?

    • Since many jobs are continually running on XSEDE resources, your job may have to wait in the queue. You can use the wait-time prediction tool to estimate how soon your job will start and optimize your job submissions accordingly.
    • Anton, a supercomputer built specifically for molecular dynamics (MD) simulations, is open to academic institutions, including BU.
  • Need help?

    • Shaohao Chen (shaohao@bu.edu, 617-353-8294) is the XSEDE liaison at Boston University. Contact him with general questions about allocations, programming, and HPC issues. Contact the XSEDE helpdesk (see below) for help with system-specific issues.
    • If you want help from a specific XSEDE site, you can submit a ticket to the XSEDE helpdesk or send email to help@xsede.org. There is also a 24/7 phone service (1-866-907-2383).