The Research Computing Services (RCS) group at Boston University maintains computing resources that provide high performance computing, free of charge, to our research community. On occasion, a research project may have special needs that are not adequately served by our local systems. For those projects, the external resources provided through the XSEDE program may be appropriate.
What is XSEDE?
XSEDE (eXtreme Science and Engineering Discovery Environment) is the NSF initiative that succeeded TeraGrid, whose primary goal was to provide free compute cycles to US researchers. XSEDE expands that mission to facilitate research collaboration among institutions, enhance research productivity, provide remote data transfer, and enable remote instrumentation, among other goals.
What resources are available?
How do I apply for XSEDE allocations?
- Who qualifies?
The Principal Investigator (PI) must be a faculty member, staff member, or postdoctoral researcher at a US-based educational institution, or a member of a commercial organization. Project members may be any researchers, including graduate students and visiting scholars. For details, see XSEDE PI qualifications.
- What types of computer systems?
To determine the systems that best match your hardware and software requirements, please see the available resources.
- Which types of allocations (CPU time and disk storage allotments)?
There are four types of allocations; a summary is given below. (For details, please consult the XSEDE allocation policy page.) Many researchers find it useful to start by experimenting with the existing RCS allocations on a few machines, then request a startup allocation on a specific system for more in-depth analyses, which may eventually lead to a research allocation proposal; research proposals require strong justification to win approval. However, if your computing resource requirements are fully understood, proceed directly with a startup or research application:
Using the existing RCS allocations is the simplest and quickest way to “sample” resources. If you are not sure which resources are appropriate for your computing needs, this is the right place to start.
As an XSEDE member institution, we have been given allocations by a few of the resource providers (institutions) across the country through XSEDE. We can add you as a member under this project, subject to XSEDE approval.
PROS: no need to write a proposal; time to approval is two business days.
CONS: you can only run relatively short-duration jobs and are limited to the resources available to Boston University.
The machines available under these allocations are summarized below (machine, resource provider, login, key features, status):

Blacklight, Pittsburgh Supercomputing Center (PSC)
Login: ssh USER@blacklight.psc.xsede.org (USER is assigned by PSC and may differ from your XSEDE portal USERID)
Key features: SMP with large memory (16/32 TB). For 1440 cores, walltime <= 48 hrs; for 256 cores, walltime <= 96 hrs. Well suited to OpenMP jobs with many cores (see the login and job-submission sketch after this list).
Status: through the 1st quarter of 2015

Gordon, San Diego Supercomputer Center (SDSC)
Login: ssh USER@gordon.sdsc.xsede.org
Key features: cluster. Good for I/O-bound applications, with 4.8 TB of SSD (flash) storage.
Status: through the 1st quarter of 2015

Kraken, National Institute for Computational Sciences (NICS)
Key features: CPU + GPU system with 36 nodes, each with 16 cores and 6 GPUs. Max walltime is 12 hours; no limit on the number of nodes requested.
Status: decommissioned May 2014

Maverick, Texas Advanced Computing Center (TACC)
Login: ssh USER@maverick.tacc.xsede.org
Key features: cluster for visualization, with 512 GPUs and 14.5 TB aggregate memory.
Status: until 2017

Stampede, TACC
Login: ssh USER@stampede.tacc.xsede.org
Key features: cluster with Intel Xeon Phi coprocessors (MIC architecture).
Status: until 2017

Trestles, SDSC
Login: ssh USER@trestles.sdsc.edu
Key features: cluster. For 1024 cores, walltime <= 48 hrs (336 hrs by arrangement).
Status: through 2014
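To make the login commands and walltime limits above concrete, here is a minimal sketch of logging in to Blacklight and submitting an OpenMP batch job. It assumes the PBS/Torque batch system and its qsub/qstat commands; the script name (openmp_job.sh), the directives, and the executable (my_openmp_app) are hypothetical, so consult PSC's Blacklight user guide for the exact syntax.

    # Log in with the USER name assigned by PSC (it may differ from your XSEDE portal USERID)
    ssh USER@blacklight.psc.xsede.org

    # Contents of a hypothetical job script, openmp_job.sh:
    #!/bin/bash
    #PBS -l ncpus=256            # at 256 cores, walltime may be up to 96 hrs (see above)
    #PBS -l walltime=96:00:00
    #PBS -j oe                   # merge stdout and stderr into one output file
    cd $PBS_O_WORKDIR            # run from the directory the job was submitted from
    export OMP_NUM_THREADS=256   # one OpenMP thread per requested core
    ./my_openmp_app              # hypothetical shared-memory executable

    # Submit the script and check its status:
    qsub openmp_job.sh
    qstat -u USER

The same pattern applies on the other machines, with the login host, batch directives, and limits adjusted to each resource provider's documentation.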