Platform R: Getting Started
- Before You Log In
- Connecting to Platform R
- Install Spack and Load Shared Environment
- Add Packages to Spack Environment
- Ingress/Egress of Data for Platform R
- Using Slurm for Job Management
- Logging Out
- See Also
Before You Log In
In order to access and use Platform R, you will need the following:
- Duo Multi-Factor Authentication (MFA).
- Credentials to sign in (your NetID, your password, and access to Platform R).
- Are you a PI looking for access? Users cannot proceed until their lab has gone through the onboarding process.
Connecting to Platform R
- Head over to Azure Virtual Desktop (AVD).
- Sign in using your NetID@wisc.edu email.
- Note: If you are signed out, it will redirect you to the UW-Madison SSO sign-in page to complete the sign-in process.
- You will see a Desktop labeled "Platform R PHI Desktop".
- Click "Connect".
- Enter your NetID credentials when prompted by AVD to "Sign In".
- Use PuTTY, MobaXterm, or VSCode to finish connecting to the Platform R cluster (an OpenSSH alternative is sketched after these steps).
- Using PuTTY as an example:
- Connect to host: slurm.platformr.wisc.edu, port: 22
- Save and cache the server's SSH key fingerprint by clicking "Accept".
- Use your NetID credentials to sign in when prompted.
- Note: Using PuTTY, you can save your username to auto-populate.
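If you prefer a command-line client over PuTTY, the same connection can be made with OpenSSH (assuming an ssh client is available in your session; replace NetID with your own):
$ ssh NetID@slurm.platformr.wisc.edu
Port 22 is OpenSSH's default, so it does not need to be specified. Accept the host key fingerprint on first connection and enter your NetID credentials when prompted.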
Install Spack and Load Shared Environment
Spack is the package manager used on Platform R to manage software installations and environments, whether shared or individual.
- Install Spack into user space, then restart your shell so Spack becomes available.
$ . /mnt/scratch/shared/software/spack/install_spack.sh
$ exec $SHELL
- Duplicate the shared environment (from .yaml file). **
$ spack env create env_name /mnt/scratch/shared/software/spack/environments/package/spack.yaml
- Using "R" as an example:
$ spack env create r /mnt/scratch/shared/software/spack/environments/r-4_4_0/spack.yaml
- Activate the environment:
$ spacktivate env_name
- Using "R" as an example:
$ spacktivate r
- Install the environment's packages:
$ spack install
- Duplicate the shared environment (from .lock file). **
$ spack env create env_name /mnt/scratch/shared/software/spack/environments/package/spack.lock
- Using "R" as an example:
$ spack env create r /mnt/scratch/shared/software/spack/environments/r-4_4_0/spack.lock
- Activate the environment:
$ spacktivate env_name
- Using "R" as an example:
$ spacktivate r
- Install the environment's packages:
$ spack install
- Options to learn about the current Spack environment:
$ spack config get
- Views the Spack config (including compilers and package preferences).
$ spack find
- Lists the installed packages.
$ spack spec
- Shows dependency resolution prior to installation.
- "Do Work".
- Using "R" as an example:
-
$ R
-
- Using "R" as an example:
** These are variations of the same step. Use the .yaml file if you need the environment to be modifiable (versions, packages, etc.). Use the .lock file when you need the exact same package versions reproduced in every copy of the environment.
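As a quick sanity check after setup, these standard Spack commands show which environments exist and which one is active (despacktivate, like spacktivate, is an alias provided by Spack's shell setup):
$ spack env list
- Lists every environment you have created.
$ spack env status
- Shows the currently active environment, if any.
$ despacktivate
- Deactivates the current environment.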
Looking for further documentation? Check out Spack's official page.
Add Packages to Spack Environment
- Ensure you are in an active environment.
$ spacktivate env_name
- To list every available package, or to search for a specific software package:
$ spack list
$ spack list software_name
- Add the software to your environment.
$ spack add listed_software_name
- Complete the addition to your environment.
$ spack install
Example:
Installing GenomicAlignments (a Bioconductor software package):
- Activate your environment:
$ spacktivate my_environment
- Search for the package:
$ spack list genomicalignments
- Since the package name is "r-genomicalignments", run the following to add the package to your environment:
$ spack add r-genomicalignments
- Finally, run the following to complete the addition:
$ spack install
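If you want more detail about a package before adding it (available versions, variants, and dependencies), Spack's spack info subcommand prints its metadata:
$ spack info r-genomicalignments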
Ingress/Egress of Data for Platform R
Currently, Research Drive and Restricted Drive are approved methods of data ingress for Platform R; however, Restricted Drive is the only approved method of data egress from Platform R. Transfers can only be done from the head node (pr-sett-001).
- List the contents of Restricted Drive (do this first; sometimes Restricted Drive will not be connected until it is accessed).
$ ls -ll /mnt/restricteddrive/NETID
- Copy the sample data folder and its contents to the group scratch location. By default, files in the destination folder will be overwritten upon transfer.
$ rsync -av --progress /mnt/restricteddrive/NETID/sampledata /mnt/scratch/group/NETID/
- Do your work.
- Copy your results back to the Processed folder within Restricted Drive.
$ rsync -av --progress /mnt/scratch/group/NETID/sampledata /mnt/restricteddrive/NETID/processed
Useful tips about "rsync":
- -a (or --archive) enables archive mode, which copies the data recursively while preserving symlinks, permissions, and timestamps.
- -u (or --update) will skip files that are newer in the destination location.
- --ignore-existing will skip files that already exist in the destination folder.
- A trailing slash (/) on the source path copies the directory's contents rather than the directory itself (see the example below).
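A quick illustration of the trailing-slash behavior, reusing the paths from the steps above, plus a dry run:
$ rsync -av /mnt/scratch/group/NETID/sampledata /mnt/restricteddrive/NETID/processed
- Copies the directory itself; results land in processed/sampledata/.
$ rsync -av /mnt/scratch/group/NETID/sampledata/ /mnt/restricteddrive/NETID/processed
- Copies only the contents; results land directly in processed/.
$ rsync -avn --progress /mnt/scratch/group/NETID/sampledata /mnt/restricteddrive/NETID/processed
- -n (or --dry-run) shows what would be transferred without copying anything.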
Using Slurm for Job Management
Slurm is the workload manager and job scheduling software used on Platform R. It is used for checking the availability of resources, queuing and running jobs, and checking the status of in-progress jobs, among many other features.
See the cluster's resources
- Check the state of cluster resources from the head node (pr-sett-001):
$ sinfo
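For a more detailed view, sinfo's standard node-oriented and long-format flags can be combined (one line per node, including state, CPU, and memory information):
$ sinfo -N -l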
Run a job
- Utilize the "srun" command to run your job with specific parameters.
$ srun --cpus-per-task=32 --mem=256G --gres=gpu:nvidia_h200:4 --pty bash
- Note: "--gres" (generic resources) is how Slurm allocates GPUs.
- Once your job is running, you can check on the reservation of the GPUs by using "nvidia-smi":
$ nvidia-smi
- Check your queued jobs by using "squeue":
$ squeue -u <NetID>
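For longer, non-interactive work, a batch script can be submitted with Slurm's standard sbatch command so the job runs without an open terminal. A minimal sketch (the job name, time limit, and analysis.R workload are hypothetical; the resource flags mirror the srun example above):

#!/bin/bash
# Resource requests mirror the interactive srun example above.
#SBATCH --job-name=example
#SBATCH --cpus-per-task=32
#SBATCH --mem=256G
#SBATCH --gres=gpu:nvidia_h200:4
#SBATCH --time=01:00:00
#SBATCH --output=%x_%j.out

# Hypothetical workload; replace with your own commands.
Rscript analysis.R

Save the script (for example, as job.sh) and submit it with:
$ sbatch job.sh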
Looking for further Slurm documentation? Check out Slurm's Manual Pages.
Logging Out
Return to the head node by typing exit at the command prompt and pressing Enter. Entering exit again will close your connection. All submitted Slurm jobs will continue to run and return output without you needing to stay connected.