{"id":137984,"date":"2021-12-03T15:45:36","date_gmt":"2021-12-03T20:45:36","guid":{"rendered":"http:\/\/www.bu.edu\/tech\/?page_id=137984"},"modified":"2025-11-05T12:10:24","modified_gmt":"2025-11-05T17:10:24","slug":"batch-script-examples","status":"publish","type":"page","link":"https:\/\/www.bu.edu\/tech\/support\/research\/system-usage\/running-jobs\/batch-script-examples\/","title":{"rendered":"Batch Script Examples"},"content":{"rendered":"<h2>Content<\/h2>\n<ul>\n<li><a href=\"#BASIC\">Basic Batch Script<\/a><\/li>\n<li><a href=\"#FREQUENT\">Batch Script With Frequently Used Options<\/a><\/li>\n<li><a href=\"#MEMORY\">Large Memory Jobs<\/a><\/li>\n<li><a href=\"#ARRAY\">Array Job Script<\/a><\/li>\n<li><a href=\"#OMP\">Basic Parallel Job (Single Node)<\/a><\/li>\n<li><a href=\"#MPI\">MPI Job Script<\/a><\/li>\n<li><a href=\"#GPU\">GPU Job Script<\/a><\/li>\n<li><a href=\"#BUYIN\">Use Own Buy-in Compute Nodes<\/a><\/li>\n<li><a href=\"https:\/\/www.bu.edu\/tech\/support\/research\/system-usage\/transferring-files\/cloud-applications\/#DTN\">Using the Data Transfer Node to transfer files to the SCC (separate web page)<\/a><\/li>\n<\/ul>\n<h2 style=\"margin-bottom: 1em; margin-top: 2.5em;\">Basic Batch Script<a id=\"BASIC\" name=\"BASIC\" href=\"#BASIC\" style=\"text-decoration:none;\">&#x1F517;<\/a><\/h2>\n<p>Here is an example of a basic script for the Shared Computing Cluster (SCC). The first line of the script specifies the interpreter, i.e. the shell. Lines that start with the #$ symbol specify the Sun Grid Engine (SGE) options used by the <code><span class=\"command\">qsub<\/span><\/code> command. All other lines that start with the # symbol are comments that provide details for each option. The program and its optional input arguments are at the end of the script, preceded by a module statement if needed. 
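<\/p>
<p>Once the script is saved to a file (called <code>my_script.sh<\/code> here purely for illustration), it is submitted to the batch system with the <code><span class=\"command\">qsub<\/span><\/code> command:<\/p>
<pre><code class=\"code-block\" style=\"padding: 0em 1em 0em 1em;\">qsub my_script.sh<\/code><\/pre>
<p>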
If the <code><span class=\"command\">module<\/span><\/code> command is used in the script, the first line should contain the &#8220;-l&#8221; option to ensure proper command interpretation by the system. See <a href=\"https:\/\/www.bu.edu\/tech\/support\/research\/system-usage\/running-jobs\/submitting-jobs\/#job-options\">General job submission directives<\/a> for a list of other SGE options.<\/p>\n<pre><code class=\"code-block\" style=\"padding: 0em 1em 0em 1em;\"><span style=\"color: #1a6d20; font-weight: 800;\">#!\/bin\/bash -l<\/span>\r\n\r\n# Set SCC project\r\n<span style=\"color: #1a206d; font-weight: 800;\">#$ -P my_project<\/span>\r\n\r\n# Specify hard time limit for the job. \r\n#   The job will be aborted if it runs longer than this time.\r\n#   The default time, also selected here, is 12 hours.  You can increase this up to 720:00:00 for single-processor jobs, but your job will take longer to start.\r\n<span style=\"color: #1a206d; font-weight: 800;\">#$ -l h_rt=12:00:00<\/span>\r\n\r\n<span style=\"color: #222; font-weight: 800;\">module load python3\/3.13.8\r\npython -V<\/span><\/code><\/pre>\n<h2 style=\"margin-bottom: 1em; margin-top: 2.5em;\">Batch Script With Frequently Used Options<a id=\"FREQUENT\" name=\"FREQUENT\" href=\"#FREQUENT\" style=\"text-decoration:none;\">&#x1F517;<\/a><\/h2>\n<p>Here is an example of a script with frequently used options:<\/p>\n<pre><code class=\"code-block\" style=\"padding: 0em 1em 0em 1em;\"><span style=\"color: #1a6d20; font-weight: 800;\">#!\/bin\/bash -l<\/span>\r\n\r\n# Set SCC project\r\n<span style=\"color: #1a206d; font-weight: 800;\">#$ -P my_project<\/span>\r\n\r\n# Specify hard time limit for the job. 
\r\n#   The job will be aborted if it runs longer than this time.\r\n#   The default time is 12 hours.\r\n<span style=\"color: #1a206d; font-weight: 800;\">#$ -l h_rt=12:00:00<\/span>\r\n\r\n# Send an email when the job finishes or if it is aborted (by default no email is sent).\r\n<span style=\"color: #1a206d; font-weight: 800;\">#$ -m ea<\/span>\r\n\r\n# Give job a name\r\n<span style=\"color: #1a206d; font-weight: 800;\">#$ -N example<\/span>\r\n\r\n# Combine output and error files into a single file\r\n<span style=\"color: #1a206d; font-weight: 800;\">#$ -j y<\/span>\r\n\r\n# Keep track of information related to the current job\r\necho \"==========================================================\"\r\necho \"Start date : $(date)\"\r\necho \"Job name : $JOB_NAME\"\r\necho \"Job ID : $JOB_ID  $SGE_TASK_ID\"\r\necho \"==========================================================\"\r\n\r\n<span style=\"color: #222; font-weight: 800;\">module load python3\/3.13.8\r\npython -V<\/span><\/code><\/pre>\n<h2 style=\"margin-bottom: 1em; margin-top: 2.5em;\">Large Memory Jobs<a id=\"MEMORY\" name=\"MEMORY\" href=\"#MEMORY\" style=\"text-decoration:none;\">&#x1F517;<\/a><\/h2>\n<p>Jobs requiring <strong>more than 4 GB of memory<\/strong> should include qsub options requesting the amount of memory the job needs.<\/p>\n<p>The SCC has a variety of nodes, each with a different number of cores and amount of memory. Jobs that require up to 64 GB can share memory resources on the same node. Jobs that require more than 64 GB need to request a whole node. The <code><span class=\"command\">qsub<\/span><\/code> options in the table below will schedule your job to a node with enough resources to complete your job. The <a href=\"https:\/\/www.bu.edu\/tech\/support\/research\/computing-resources\/tech-summary\/\">Technical Summary<\/a> page describes the configuration of the different types of nodes on the SCC. 
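<\/p>
<p>The core counts and <code>mem_per_core<\/code> values in the table below follow a simple rule: the memory available to a job is roughly the number of cores requested multiplied by the memory per core, so any row can be checked the same way. As a sketch of the arithmetic for the 512 GB row (28 cores at 18 GB per core):<\/p>
<pre><code class=\"code-block\" style=\"padding: 0em 1em 0em 1em;\"># 28 cores at 18 GB per core reserves about 504 GB,\r\n# which places the job on a node with at least 512 GB of RAM\r\necho $((28 * 18))   # prints 504<\/code><\/pre>
<p>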
For more information about available processing and memory resources, visit our <a href=\"https:\/\/www.bu.edu\/tech\/support\/research\/system-usage\/running-jobs\/resources-jobs\/#memory\"> Resources Available for your Jobs<\/a> page.<\/p>\n<p>The table below gives suggestions for appropriate qsub options for different ranges of memory your job may need. See our <a href=\"https:\/\/www.bu.edu\/tech\/support\/research\/system-usage\/running-jobs\/allocating-memory-for-your-job\/\">Allocating Memory for your Job<\/a> webpage to estimate the amount of memory your job requires.<\/p>\n<table width=\"1220\">\n<tbody>\n<tr>\n<td colspan=\"4\" style=\"padding: 7px; background-color: #2c6696; color: #ffffff; text-align: center; font-size: 120%;\">Requesting Node Resources<\/td>\n<\/tr>\n<tr>\n<th colspan=\"3\" style=\"background-color: #c9d5d7; text-align: center; box-shadow: none; border: 1px solid #97a1a4;\" width=\"77%\">Job Resource Requirements<\/th>\n<th style=\"background-color: #c9d5d7; text-align: center; box-shadow: none; border: 1px solid #97a1a4;\" width=\"23%\"><code>qsub<\/code> Options<\/th>\n<\/tr>\n<\/tbody>\n<tbody>\n<tr>\n<td rowspan=\"3\" style=\"padding: 7px; background-color: #ffffff; text-align: center; border: 1px solid #97a1a4;\" width=\"10%\"><b>Partial Node<\/b><\/td>\n<td style=\"padding: 7px; background-color: #ffffff; text-align: center; border: 1px solid #97a1a4; white-space: nowrap;\" width=\"10%\"><strong>\u2264 16 GB<\/strong><\/td>\n<td style=\"padding: 7px; background-color: #ffffff; text-align: left; border: 1px solid #97a1a4; line-height: 120%;\">Request <strong>4<\/strong> <strong>cores.<\/strong><\/td>\n<td style=\"padding: 7px; background-color: #ffffff; text-align: left; border: 1px solid #97a1a4; line-height: 120%;\"><b>-pe omp<\/b> <em>4<\/em><\/td>\n<\/tr>\n<tr>\n<td style=\"padding: 7px; background-color: #ffffff; text-align: center; border: 1px solid #97a1a4; white-space: nowrap;\" width=\"10%\"><strong>\u2264 32 
GB<\/strong><\/td>\n<td style=\"padding: 7px; background-color: #ffffff; text-align: left; border: 1px solid #97a1a4; line-height: 120%;\">Request <strong>8<\/strong> <strong>cores.<\/strong><\/td>\n<td style=\"padding: 7px; background-color: #ffffff; text-align: left; border: 1px solid #97a1a4; line-height: 120%;\"><b>-pe omp<\/b> <em>8<\/em><\/td>\n<\/tr>\n<tr>\n<td style=\"padding: 7px; background-color: #ffffff; text-align: center; border: 1px solid #97a1a4; white-space: nowrap;\" width=\"10%\"><strong>\u2264 64 GB<\/strong><\/td>\n<td style=\"padding: 7px; background-color: #ffffff; text-align: left; border: 1px solid #97a1a4; line-height: 120%;\">Request <strong>8 cores<\/strong> on a machine with at least 128 GB of RAM.<\/td>\n<td style=\"padding: 7px; background-color: #ffffff; text-align: left; border: 1px solid #97a1a4; line-height: 120%;\"><b>-pe omp<\/b> <em>8<\/em><br \/>\n<b>-l mem_per_core<\/b>=<em>8G<\/em><\/td>\n<\/tr>\n<tr>\n<td rowspan=\"7\" style=\"padding: 7px; background-color: #ecf4f7; text-align: center; border: 1px solid #97a1a4;\"><b>Whole Node<\/b><\/td>\n<td style=\"padding: 7px; background-color: #ecf4f7; text-align: center; border: 1px solid #97a1a4; white-space: nowrap;\"><strong>\u2264 128 GB<\/strong><\/td>\n<td style=\"padding: 7px; background-color: #ecf4f7; text-align: left; border: 1px solid #97a1a4; line-height: 120%;\">Request a whole node with <strong>16 cores<\/strong> and at least <strong>128 GB<\/strong> of RAM.<\/td>\n<td style=\"padding: 7px; background-color: #ecf4f7; text-align: left; border: 1px solid #97a1a4; line-height: 120%;\"><b>-pe omp<\/b> <em>16<\/em><\/td>\n<\/tr>\n<tr>\n<td style=\"padding: 7px; background-color: #ecf4f7; text-align: center; border: 1px solid #97a1a4; white-space: nowrap;\"><strong>\u2264 192 GB<\/strong><\/td>\n<td style=\"padding: 7px; background-color: #ecf4f7; text-align: left; border: 1px solid #97a1a4; line-height: 120%;\">Request a whole node with <strong>28 cores<\/strong> and at 
least <strong>192 GB<\/strong> of RAM.<\/td>\n<td style=\"padding: 7px; background-color: #ecf4f7; text-align: left; border: 1px solid #97a1a4; line-height: 120%;\"><b>-pe omp<\/b> <em>28<\/em><\/td>\n<\/tr>\n<tr>\n<td rowspan=\"2\" style=\"padding: 7px; background-color: #ecf4f7; text-align: center; border: 1px solid #97a1a4; line-height: 120%; white-space: nowrap;\"><strong>\u2264 256 GB<\/strong><\/td>\n<td style=\"padding: 7px; background-color: #ecf4f7; text-align: left; border: 1px solid #97a1a4; line-height: 120%;\">Request a whole node with <strong>16 cores<\/strong> and at least <strong>256 GB<\/strong> of RAM.<\/td>\n<td style=\"padding: 7px; background-color: #ecf4f7; text-align: left; border: 1px solid #97a1a4; line-height: 120%;\"><b>-pe omp<\/b> <em>16<\/em><br \/>\n<b>-l mem_per_core<\/b>=<em>16G<\/em><\/td>\n<\/tr>\n<tr>\n<td style=\"padding: 7px; background-color: #ecf4f7; text-align: left; border: 1px solid #97a1a4; line-height: 120%;\">Request a whole node with <strong>28 cores<\/strong> and at least <strong>256 GB<\/strong> of RAM.<\/td>\n<td style=\"padding: 7px; background-color: #ecf4f7; text-align: left; border: 1px solid #97a1a4; line-height: 120%;\"><b>-pe omp<\/b> <em>28<\/em><br \/>\n<b>-l mem_per_core<\/b>=<em>9G<\/em><\/td>\n<\/tr>\n<tr>\n<td style=\"padding: 7px; background-color: #ecf4f7; text-align: center; border: 1px solid #97a1a4; white-space: nowrap;\"><strong>\u2264 384 GB<\/strong><\/td>\n<td style=\"padding: 7px; background-color: #ecf4f7; text-align: left; border: 1px solid #97a1a4; line-height: 120%;\">Request a whole node with<strong> 28 cores<\/strong> and at least<strong> 384 GB<\/strong> of RAM.<\/td>\n<td style=\"padding: 7px; background-color: #ecf4f7; text-align: left; border: 1px solid #97a1a4; line-height: 120%;\"><b>-pe omp<\/b> <em>28<\/em><br \/>\n<b>-l mem_per_core<\/b>=<em>13G<\/em><\/td>\n<\/tr>\n<tr>\n<td style=\"padding: 7px; background-color: #ecf4f7; text-align: center; border: 1px solid #97a1a4; 
white-space: nowrap;\"><strong>\u2264 512 GB<\/strong><\/td>\n<td style=\"padding: 7px; background-color: #ecf4f7; text-align: left; border: 1px solid #97a1a4; line-height: 120%;\">Request a whole node with <strong>28 cores<\/strong> and at least <strong>512 GB<\/strong> of RAM.<\/td>\n<td style=\"padding: 7px; background-color: #ecf4f7; text-align: left; border: 1px solid #97a1a4; line-height: 120%;\"><b>-pe omp<\/b> <em>28<\/em><br \/>\n<b>-l mem_per_core<\/b>=<em>18G<\/em><\/td>\n<\/tr>\n<tr>\n<td style=\"padding: 7px; background-color: #ecf4f7; text-align: center; border: 1px solid #97a1a4; white-space: nowrap;\"><strong>\u2264 1 TB<\/strong><\/td>\n<td style=\"padding: 7px; background-color: #ecf4f7; text-align: left; border: 1px solid #97a1a4; line-height: 120%;\">Request a whole node with <strong>36 cores<\/strong> and at least <strong>1 TB<\/strong> of RAM.<\/td>\n<td style=\"padding: 7px; background-color: #ecf4f7; text-align: left; border: 1px solid #97a1a4; line-height: 120%;\"><b>-pe omp<\/b> <em>36<\/em><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>To request large memory resources in OnDemand, add the appropriate <strong>qsub<\/strong> options from the summary table above to the <em>Extra qsub options<\/em> text field in the OnDemand form. 
Below is an example of requesting a node with at least 512 GB of memory:<\/p>\n<p><img loading=\"lazy\" src=\"\/tech\/files\/2021\/11\/ondemand_large_memory_job-1-636x93.png\" alt=\"OnDemand form with large-memory qsub options entered in the Extra qsub options field\" class=\"alignnone size-medium wp-image-137877\" width=\"636\" height=\"93\" srcset=\"https:\/\/www.bu.edu\/tech\/files\/2021\/11\/ondemand_large_memory_job-1-636x93.png 636w, https:\/\/www.bu.edu\/tech\/files\/2021\/11\/ondemand_large_memory_job-1.png 753w\" sizes=\"(max-width: 636px) 100vw, 636px\" \/><\/p>\n<p>An example batch script for a job that requires 500 GB of RAM:<\/p>\n<pre><code class=\"code-block\" style=\"padding: 0em 1em 0em 1em;\"><span style=\"color: #1a6d20; font-weight: 800;\">#!\/bin\/bash -l<\/span>\r\n\r\n# Set SCC project\r\n<span style=\"color: #1a206d; font-weight: 800;\">#$ -P my_project<\/span>\r\n\r\n# Request a whole 28-processor node with at least 512 GB of RAM\r\n<span class=\"highlight\" style=\"color: #1a206d; font-weight: 800;\">#$ -pe omp 28<\/span>\r\n<span class=\"highlight\" style=\"color: #1a206d; font-weight: 800;\">#$ -l mem_per_core=18G<\/span>\r\n\r\n<span style=\"color: #222; font-weight: 800;\">module load python3\/3.13.8\r\npython -V<\/span><\/code><\/pre>\n<h2 style=\"margin-bottom: 1em; margin-top: 2.5em;\">Array Job Script<a id=\"ARRAY\" name=\"ARRAY\" href=\"#ARRAY\" style=\"text-decoration:none;\">&#x1F517;<\/a><\/h2>\n<p>If you submit many largely identical jobs at the same time, you can submit them as an array job. 
Array jobs are easier to manage, faster to submit, and they greatly reduce the load on the scheduler.<\/p>\n<p>For example, if you have many different input files, but want to run the same program on all of them, you can use an array job with a single script:<\/p>\n<pre><code class=\"code-block\" style=\"padding: 0em 1em 0em 1em;\"><span style=\"color: #1a6d20; font-weight: 800;\">#!\/bin\/bash -l<\/span>\r\n\r\n# Set SCC project\r\n<span style=\"color: #1a206d; font-weight: 800;\">#$ -P my_project<\/span>\r\n\r\n# Submit an array job with 3 tasks \r\n<span class=\"highlight\"><span style=\"color: #1a206d; font-weight: 800;\">#$ -t 1-3<\/span><\/span>\r\n\r\n# Get all csv files in the current directory, select the one that corresponds to the task ID, and pass it to the program\r\n<span style=\"color: #222; font-weight: 800;\">inputs=(*.csv)\r\ntaskinput=${inputs[$(($SGE_TASK_ID-1))]}\r\n\r\n.\/my_program $taskinput  \r\n<\/span><\/code><\/pre>\n<p>Within your code, you can query the task ID using the appropriate function. Below are some common examples in Python, R, and MATLAB:<\/p>\n<pre class=\"code-block\"><code><span class=\"comment\">\r\n# Python<\/span><span class=\"command\">\r\nimport os\r\nid = int(os.getenv('SGE_TASK_ID'))\r\n<\/span>\r\n<span class=\"comment\">\r\n# R<\/span><span class=\"command\">\r\nid &lt;- as.numeric(Sys.getenv(\"SGE_TASK_ID\"))\r\n<\/span>\r\n<span class=\"comment\">\r\n% MATLAB<\/span><span class=\"command\">\r\nid = str2num(getenv('SGE_TASK_ID'));\r\n<\/span>\r\n<\/code><\/pre>\n<h2 style=\"margin-bottom: 1em; margin-top: 2.5em;\">Basic Parallel Job (Single Node)<a id=\"OMP\" name=\"OMP\" href=\"#OMP\" style=\"text-decoration:none;\">&#x1F517;<\/a><\/h2>\n<p>Below is a basic example of a script that requests a parallel environment for a multi-threaded or multi-processor job on a single node. You can request up to 36 cores for your parallel jobs. We recommend that you request 1, 2, 3, 4, 8, 16, 28, or 36 cores. 
Requesting other numbers of cores might result in a longer waiting time in the queue. (Note: some buy-in nodes have 20 or 32 cores.)<\/p>\n<pre><code class=\"code-block\" style=\"padding: 0em 1em 0em 1em;\"><span style=\"color: #1a6d20; font-weight: 800;\">#!\/bin\/bash -l<\/span>\r\n\r\n# Set SCC project\r\n<span style=\"color: #1a206d; font-weight: 800;\">#$ -P my_project<\/span>\r\n\r\n# Request a parallel environment with 8 cores \r\n<span class=\"highlight\"><span style=\"color: #1a206d; font-weight: 800;\">#$ -pe omp 8<\/span><\/span>\r\n\r\n# When a parallel environment is requested, the environment variable <span class=\"placeholder\" style=\"font-weight: 800;\">NSLOTS<\/span> is set to the number of cores requested. This variable can be used within a program to set up an appropriate number of threads or processors to use.\r\n# For example, some programs rely on the environment variable <span class=\"placeholder\" style=\"font-weight: 800;\">OMP_NUM_THREADS<\/span> for parallelization:\r\n<span style=\"color: #1a206d; font-weight: 800;\">export OMP_NUM_THREADS=$NSLOTS<\/span>\r\n\r\n<span style=\"color: #222; font-weight: 800;\">.\/my_program input_args<\/span><\/code><\/pre>\n<p>Within your code, you can query how many cores your script requested from the batch system. 
Below are some common examples in Python, R, and MATLAB:<\/p>\n<pre class=\"code-block\"><code><span class=\"comment\">\r\n# Python<\/span><span class=\"command\">\r\nimport os\r\nncores = int(os.getenv('NSLOTS'))\r\n<\/span>\r\n<span class=\"comment\">\r\n# R<\/span><span class=\"command\">\r\nncores &lt;- as.numeric(Sys.getenv(\"NSLOTS\"))\r\n<\/span>\r\n<span class=\"comment\">\r\n% MATLAB<\/span><span class=\"command\">\r\nncores = str2num(getenv('NSLOTS'));\r\n<\/span>\r\n<\/code><\/pre>\n<h2 style=\"margin-bottom: 1em; margin-top: 2.5em;\">MPI Job Script<a id=\"MPI\" name=\"MPI\" href=\"#MPI\" style=\"text-decoration:none;\">&#x1F517;<\/a><\/h2>\n<p>The SCC has 3 sets of nodes dedicated to running MPI jobs. One set has 16-core nodes with 128 GB each, connected by 56 Gb\/s FDR InfiniBand. The second set has 36 nodes with 256 GB each, connected by 100 Gb\/s EDR InfiniBand. The third set has 36 nodes with 28 CPU cores and 192 GB of memory each, connected by 100 Gb\/s EDR InfiniBand. You can request up to 256 cores on the 16-core nodes and up to 448 cores on the 28-core nodes. For multi-node MPI jobs, the time limit is 120 hours (5 days). Below is an example of a script running an MPI job on 28-core nodes:<\/p>\n<pre><code class=\"code-block\" style=\"padding: 0em 1em 0em 1em;\"><span style=\"color: #1a6d20; font-weight: 800;\">#!\/bin\/bash -l<\/span>\r\n\r\n# Set SCC project\r\n<span style=\"color: #1a206d; font-weight: 800;\">#$ -P my_project<\/span>\r\n\r\n# Request 8 nodes with 28 cores each\r\n<span class=\"highlight\"><span style=\"color: #1a206d; font-weight: 800;\">#$ -pe mpi_28_tasks_per_node 224<\/span><\/span>\r\n\r\n# When a parallel environment is requested, the environment variable <span class=\"placeholder\" style=\"font-weight: 800;\">NSLOTS<\/span> is set to the number of cores requested. 
This variable can then be used to set up the total number of processors used by the MPI job:\r\n<span style=\"color: #222; font-weight: 800;\">mpirun -np $NSLOTS .\/my_mpi_program input_args<\/span><\/code><\/pre>\n<h2 style=\"margin-bottom: 1em; margin-top: 2.5em;\">GPU Job Script<a id=\"GPU\" name=\"GPU\" href=\"#GPU\" style=\"text-decoration:none;\">&#x1F517;<\/a><\/h2>\n<p>There are several types of GPU cards available on the SCC. Each has its own compute capability and amount of memory. It is important to know the compute capability and memory requirements of your program and request appropriate GPUs. You can view a list of GPUs available on the SCC by executing the <code>qgpus<\/code> command in a terminal window. For a detailed list, run <code>qgpus -v<\/code>. For more information on GPU job options, see the <a href=\"https:\/\/www.bu.edu\/tech\/support\/research\/software-and-programming\/programming\/multiprocessor\/gpu-computing\/\">GPU computing<\/a> page.<\/p>\n<p>When you use the <code>-l gpu_c<\/code> flag to specify a compute capability, your job will be assigned a node with a GPU that has <em>at least<\/em> the capability that you requested. For example, for <code>-l gpu_c=6.0<\/code>, you may get a GPU that has 6.0, 7.0, or higher compute capability:<\/p>\n<pre><code class=\"code-block\" style=\"padding: 0em 1em 0em 1em;\"><span style=\"color: #1a6d20; font-weight: 800;\">#!\/bin\/bash -l<\/span>\r\n\r\n# Set SCC project\r\n<span style=\"color: #1a206d; font-weight: 800;\">#$ -P my_project<\/span>\r\n\r\n# Request 4 CPUs\r\n<span class=\"highlight\"><span style=\"color: #1a206d; font-weight: 800;\">#$ -pe omp 4<\/span><\/span>\r\n\r\n# Request 1 GPU \r\n<span class=\"highlight\"><span style=\"color: #1a206d; font-weight: 800;\">#$ -l gpus=1<\/span><\/span>\r\n\r\n# Specify the minimum GPU compute capability. 
\r\n<span class=\"highlight\"><span style=\"color: #1a206d; font-weight: 800;\">#$ -l gpu_c=7.0<\/span><\/span>\r\n\r\n# As an example, use the academic-ml module to get Python with machine learning packages.\r\n<span style=\"color: #222; font-weight: 800;\">module load miniconda\r\nmodule load academic-ml\/fall-2025\r\nconda activate fall-2025-pyt\r\npython my_pytorch_prog.py<\/span><\/code><\/pre>\n<h2 style=\"margin-bottom: 1em; margin-top: 2.5em;\">Use Own Buy-in Compute Nodes<a id=\"BUYIN\" name=\"BUYIN\" href=\"#BUYIN\" style=\"text-decoration:none;\">&#x1F517;<\/a><\/h2>\n<p>Members of projects with access to Buy-in Compute Group hardware can restrict their jobs to running on that hardware only. Doing this can significantly increase the wait time before your job starts, but guarantees that your job is not charged any <a href=\"https:\/\/www.bu.edu\/tech\/support\/research\/account-management\/manage-project\/#SUS\">SUs<\/a>. This option is only available if you submit the job under a project (<code>-P <span class=\"placeholder\">project<\/span><\/code>) that has access to <a href=\"https:\/\/www.bu.edu\/tech\/support\/research\/computing-resources\/service-models\/buy-in\/\">Buy-in<\/a> resources. 
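<\/p>
<p>SGE directives can also be given on the <code><span class=\"command\">qsub<\/span><\/code> command line instead of inside the script; for example, to add the buy-in restriction when submitting a script (the script name here is only a placeholder):<\/p>
<pre><code class=\"code-block\" style=\"padding: 0em 1em 0em 1em;\">qsub -P my_project -l buyin my_script.sh<\/code><\/pre>
<p>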
Your monthly report, received on the 3rd of each month, tells you whether the projects you belong to have access to any Buy-in Compute resources.<\/p>\n<p>An alternative way to do this, if you know the name of the <b>queue<\/b> associated with the Buy-in Compute Group that you want to use, is to use the &#8220;<code>-q <span class=\"placeholder\">queuename<\/span><\/code>&#8221; option to <code><span class=\"command\">qsub<\/span><\/code>, but the method below is simpler and will let you run on any appropriate Buy-in queue you have access to.<\/p>\n<p>If you try either of these methods under a project that does not have access to any Buy-in Compute Group hardware, you will get an error like &#8220;<code><span class=\"output\">Unable to run job: error: no suitable queues.<\/span><\/code>&#8221; and your job <b>will not be scheduled<\/b>.<\/p>\n<pre><code class=\"code-block\" style=\"padding: 0em 1em 0em 1em;\"><span style=\"color: #1a6d20; font-weight: 800;\">#!\/bin\/bash -l<\/span>\r\n\r\n# Set SCC project\r\n<span style=\"color: #1a206d; font-weight: 800;\">#$ -P my_project<\/span>\r\n\r\n# Request my job to run on Buy-in Compute Group hardware that my_project has access to\r\n<span class=\"highlight\"><span style=\"color: #1a206d; font-weight: 800;\">#$ -l buyin<\/span><\/span>\r\n\r\n# Specify hard time limit for the job. \r\n#   The job will be aborted if it runs longer than this time.\r\n#   The default time is 12 hours.\r\n<span style=\"color: #1a206d; font-weight: 800;\">#$ -l h_rt=12:00:00<\/span>\r\n\r\n# Actual commands to run.  
Change this appropriately for your codes.\r\n<span style=\"color: #222; font-weight: 800;\">module load python3\/3.13.8\r\npython -V<\/span><\/code><\/pre>\n","protected":false},"excerpt":{"rendered":"<p>Content Basic Batch Script Batch Script With Frequently Used Options Large Memory Jobs Array Job Script Basic Parallel Job (Single Node) MPI Job Script GPU Job Script Use Own Buy-in Compute Nodes Using the Data Transfer Node to transfer files to the SCC (separate web page) Basic Batch Script&#x1F517; Here is an example of a&#8230;<\/p>\n","protected":false},"author":1692,"featured_media":0,"parent":137962,"menu_order":12,"comment_status":"closed","ping_status":"closed","template":"","meta":[],"_links":{"self":[{"href":"https:\/\/www.bu.edu\/tech\/wp-json\/wp\/v2\/pages\/137984"}],"collection":[{"href":"https:\/\/www.bu.edu\/tech\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/www.bu.edu\/tech\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/www.bu.edu\/tech\/wp-json\/wp\/v2\/users\/1692"}],"replies":[{"embeddable":true,"href":"https:\/\/www.bu.edu\/tech\/wp-json\/wp\/v2\/comments?post=137984"}],"version-history":[{"count":14,"href":"https:\/\/www.bu.edu\/tech\/wp-json\/wp\/v2\/pages\/137984\/revisions"}],"predecessor-version":[{"id":160256,"href":"https:\/\/www.bu.edu\/tech\/wp-json\/wp\/v2\/pages\/137984\/revisions\/160256"}],"up":[{"embeddable":true,"href":"https:\/\/www.bu.edu\/tech\/wp-json\/wp\/v2\/pages\/137962"}],"wp:attachment":[{"href":"https:\/\/www.bu.edu\/tech\/wp-json\/wp\/v2\/media?parent=137984"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}