
Using the SSCC's High Performance Computing Cluster

The SSCC's High Performance Computing Cluster, Flash, is designed for parallel computing. It is ideal for jobs that can be broken into many pieces that execute simultaneously. As of this writing it has 456 cores. Most jobs run on Flash are written in Fortran, C, or C++ and use MPI to handle the parallelization.


If after reading this article you'd like to use Flash, please contact the cluster's administrator, Dan Bongert. He will set up an account for you on Flash and schedule a brief orientation.

Moving Files to Flash

For performance reasons, our HPC cluster does not have access to the shared SSCC file system; it uses local disks exclusively. You will therefore need to transfer any data or other files you need using SFTP. Start an SFTP program on your computer and connect to Flash; see Using SFTP for instructions.
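As a sketch, a transfer session with the command-line sftp client might look like the following. The user name, server name, and file names are placeholders, not Flash's actual values; see Using SFTP for the real server name.

```
> sftp <username>@<flash-server>
sftp> put mydata.csv        # upload a data file to your home directory on Flash
sftp> put myprog.f90        # upload source code the same way
sftp> exit
```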

Running Jobs

The general sequence for running an MPI job is as follows:

  1. Check the status of Flash to make sure no one else is using the nodes you plan to use.
  2. Compile your program, making sure you link it to the MPI libraries.
  3. Run your program.

The example Fortran program /etc/skel/pi.f90, which computes pi in parallel, is used to illustrate the steps for submitting a job. These steps assume that you are logged on to Flash and are in your home directory. We highly recommend running this example the first time you log on to Flash to verify that everything is working correctly.
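The contents of pi.f90 are not reproduced here, and the distributed version may differ, but a typical MPI pi program in Fortran has roughly this structure (a sketch, not the actual file): each process integrates 4/(1+x²) over its share of the subintervals, and MPI_REDUCE sums the partial results on rank 0.

```fortran
! Sketch of an MPI pi program; the actual /etc/skel/pi.f90 may differ.
program pi_mpi
   use mpi
   implicit none
   integer :: rank, nprocs, ierr, i
   integer, parameter :: n = 1000000          ! number of subintervals
   double precision :: h, x, partial, pi_est
   double precision, parameter :: pi_ref = 3.141592653589793d0

   call MPI_INIT(ierr)
   call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
   call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
   print *, 'Process', rank, 'of', nprocs, 'is alive'

   ! Midpoint rule: each rank takes every nprocs-th subinterval.
   h = 1.0d0 / n
   partial = 0.0d0
   do i = rank + 1, n, nprocs
      x = h * (dble(i) - 0.5d0)
      partial = partial + 4.0d0 / (1.0d0 + x*x)
   end do
   partial = partial * h

   ! Sum the partial results onto rank 0.
   call MPI_REDUCE(partial, pi_est, 1, MPI_DOUBLE_PRECISION, MPI_SUM, 0, &
                   MPI_COMM_WORLD, ierr)
   if (rank == 0) print *, 'pi is approximately:', pi_est, &
                           'Error is:', abs(pi_est - pi_ref)
   call MPI_FINALIZE(ierr)
end program pi_mpi
```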

1. Check the status of the cluster by opening a browser and going to Flash's status page. If the graph labeled Flash CPU last hour is not nearly all Idle CPU (i.e., gray), wait until the load goes down before submitting a new job. Only one person should have jobs running at a time on any given node; otherwise, system performance will be poor for everyone. You may, however, run jobs on idle nodes even if other nodes are in use.

2. Compile the program with mpif90, the MPI wrapper for the gfortran compiler:

> mpif90 pi.f90 -O3 -o pi.bin


-O3 enables aggressive optimization.

-o specifies the name of the output file.

For more information about the command-line options, type man mpif90 for MPI-specific options and man gfortran for Fortran-specific options.

To compile C programs use mpicc instead; for C++ use mpicxx.
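For example (the source file names here are hypothetical), compiling C and C++ programs follows the same pattern as the Fortran example above:

```
> mpicc myprog.c -O3 -o myprog.bin
> mpicxx myprog.cpp -O3 -o myprog.bin
```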

3. Run the program using mpirun:

> mpirun --hostfile hosts.txt -np 16 pi.bin

where -np sets the number of processes (slots) to use. The --hostfile option names a file that tells Open MPI which nodes to run your job on and how many slots are available on each node. If you want to run your job on specific nodes, copy hosts.txt to your home directory and modify it as needed; by using the default file, however, you'll be sure to always have the latest node list.
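An Open MPI hostfile lists one node per line along with the number of slots (processes) it provides. The node names below are placeholder examples, not Flash's actual nodes; the system's default hosts.txt has the real list.

```
# Each line: a node name and the number of MPI slots it provides.
# (Node names here are examples only.)
node01 slots=8
node02 slots=8
```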


Your output will be similar to:

Process 2 of 16 is alive
Process 0 of 16 is alive
Process 6 of 16 is alive
Process 5 of 16 is alive
. . .
pi is approximately: 3.1415926535898362 Error is: 0.0000000000000431


Further Information

SSCC staff cannot write or debug your programs for you, but if you need assistance submitting jobs, please contact the cluster's administrator, Dan Bongert.

Keywords: linux, high, performance, compute, computing, cluster, sscc
Doc ID: 96622
Owner: Dan B.
Group: Social Science Computing Cooperative
Created: 2019-12-13 11:40 CDT
Updated: 2020-11-05 15:26 CDT
Sites: Social Science Computing Cooperative