Welcome to the Google Codelab for running a Slurm cluster on Google Cloud Platform! By the end of this codelab you should have a solid understanding of the ease of provisioning and operating an auto-scaling Slurm cluster.

Google Cloud teamed up with SchedMD to release a set of tools that make it easier to launch the Slurm workload manager on Compute Engine, and to expand your existing cluster dynamically when you need extra resources. This integration was built by the experts at SchedMD in accordance with Slurm best practices.

If you're planning on using the Slurm on Google Cloud Platform integrations, or if you have any questions, please consider joining our Google Cloud & Slurm Community Discussion Group!

About Slurm

Basic architectural diagram of a stand-alone Slurm Cluster in Google Cloud Platform.

Slurm is one of the leading workload managers for HPC clusters around the world. Slurm provides an open-source, fault-tolerant, and highly-scalable workload management and job scheduling system for small and large Linux clusters. Slurm requires no kernel modifications for its operation and is relatively self-contained. As a cluster workload manager, Slurm has three key functions:

1. It allocates exclusive or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work.

2. It provides a framework for starting, executing, and monitoring work (normally a parallel job) on the set of allocated nodes.

3. It arbitrates contention for resources by managing a queue of pending work.

What you'll learn

Prerequisites

Self-paced environment setup

Create a Project

If you don't already have a Google Account (Gmail or G Suite), you must create one. Sign in to the Google Cloud Platform Console (console.cloud.google.com) and open the Manage resources page:

Click Create Project.

Enter a project name and make a note of the project ID. The project ID must be a unique name across all Google Cloud projects. If your project name is not unique, Google Cloud will generate a random project ID based on the project name.

Next, you'll need to enable billing in the Developers Console in order to use Google Cloud resources.

Running through this codelab shouldn't cost you more than a few dollars, but it could be more if you decide to use more resources or if you leave them running (see "Conclusion" section at the end of this document). The Google Cloud Platform pricing calculator is available here.

New users of Google Cloud Platform are eligible for a $300 free trial.

Google Cloud Shell

While Google Cloud can be operated remotely from your laptop, in this codelab we will be using Google Cloud Shell, a command line environment running in the Cloud.

Launch Google Cloud Shell

From the GCP Console click the Cloud Shell icon on the top right toolbar:

Then click Start Cloud Shell:

It should only take a few moments to provision and connect to the environment:

This virtual machine is loaded with all the development tools you'll need. It offers a persistent 5GB home directory, and runs on the Google Cloud, greatly enhancing network performance and simplifying authentication. Much, if not all, of your work in this lab can be done with simply a web browser or a Google Chromebook.

Once connected to Cloud Shell, you should see that you are already authenticated and that the project is already set to your PROJECT_ID:

$ gcloud auth list


Command output:

Credentialed accounts:
 - <myaccount>@<mydomain>.com (active)
$ gcloud config list project


Command output:

[core]
project = <PROJECT_ID>


If the project ID is not set correctly you can set it with this command:

$ gcloud config set project <PROJECT_ID>

Command output:

Updated property [core/project].

Download the Slurm Deployment Configuration

In the Cloud Shell session, execute the following command to clone (download) the Git repository that contains the Slurm for Google Cloud Platform deployment-manager files:

git clone https://github.com/SchedMD/slurm-gcp.git

Switch to the Slurm deployment configuration directory by executing the following command:

cd slurm-gcp

Configure Slurm Deployment YAML

This YAML file details the configuration of the deployment, including the network, instances, and storage to deploy.

In the Cloud Shell session, open the deployment configuration YAML file slurm-cluster.yaml. You can either use your preferred command line editor (vi, nano, emacs, etc.) or use the Cloud Console Code Editor to view the file contents:

Review the contents of the YAML file, and uncomment (remove the "#" character) the "ompi_version" field.

# [START cluster_yaml]
imports:
- path: slurm.jinja

resources:
- name: slurm-cluster
  type: slurm.jinja
  properties:
    cluster_name            : g1

    zone                    : us-central1-b

  # Optional network configuration fields
  # READ slurm.jinja.schema for prerequisites
    # vpc_net                   : default
    # vpc_subnet                : default
    # shared_vpc_host_project   : < my-shared-vpc-project-name >

    controller_machine_type : n1-standard-2
    # controller_disk_type      : pd-standard
    # controller_disk_size_gb   : 50
    # controller_labels         :
    #   key1 : value1
    #   key2 : value2
    # controller_service_account: default
    # controller_scopes         :
    # - https://www.googleapis.com/auth/cloud-platform
    # cloudsql                  :
    #   server_ip: <cloudsql ip>
    #   user: slurm
    #   password: verysecure
    #   # Optional
    #   db_name: slurm_accounting

    login_machine_type        : n1-standard-2
    # login_disk_type           : pd-standard
    # login_disk_size_gb        : 10
    # login_labels              :
    #   key1 : value1
    #   key2 : value2
    # login_node_count          : 0
    # login_node_service_account: default
    # login_node_scopes         :
    # - https://www.googleapis.com/auth/devstorage.read_only
    # - https://www.googleapis.com/auth/logging.write

  # Optional network storage fields
  # network_storage is mounted on all instances
  # login_network_storage is mounted on controller and login instances
    # network_storage           :
    #   - server_ip: <storage host>
    #     remote_mount: /home
    #     local_mount: /home
    #     fs_type: nfs
    # login_network_storage     :
    #   - server_ip: <storage host>
    #     remote_mount: /net_storage
    #     local_mount: /shared
    #     fs_type: nfs

    compute_image_machine_type  : n1-standard-2
    # compute_image_disk_type   : pd-standard
    # compute_image_disk_size_gb: 10
    # compute_image_labels      :
    #   key1 : value1
    #   key2 : value2

  # Optional compute configuration fields
    # external_compute_ips      : False
    # private_google_access     : True

    # controller_secondary_disk         : True
    # controller_secondary_disk_type    : pd-standard
    # controller_secondary_disk_size_gb : 300

    # compute_node_service_account : default
    # compute_node_scopes          :
    #   -  https://www.googleapis.com/auth/devstorage.read_only
    #   -  https://www.googleapis.com/auth/logging.write

    # Optional timer fields
    # suspend_time              : 300

    # slurm_version             : 19.05-latest
    ompi_version              : v3.1.x

    partitions :
      - name              : debug
        machine_type      : n1-standard-2
        max_node_count    : 10
        zone              : us-central1-a

    # Optional compute configuration fields

        # cpu_platform           : Intel Skylake
        # preemptible_bursting   : False
        # compute_disk_type      : pd-standard
        # compute_disk_size_gb   : 10
        # compute_labels         :
        #   key1 : value1
        #   key2 : value2
        # compute_image_family   : custom-image

    # Optional network configuration fields
        # vpc_subnet                : default

    # Optional GPU configuration fields

        # gpu_type               : nvidia-tesla-v100
        # gpu_count              : 8


    # Additional partition

      # - name           : partition2
        # machine_type   : n1-standard-16
        # max_node_count : 20
        # zone           : us-central1-b

    # Optional compute configuration fields

        # cpu_platform           : Intel Skylake
        # preemptible_bursting   : False
        # compute_disk_type      : pd-standard
        # compute_disk_size_gb   : 10
        # compute_labels         :
        #   key1 : value1
        #   key2 : value2
        # compute_image_family   : custom-image
        # network_storage        :
        #   - server_ip: none
        #     remote_mount: <gcs bucket name>
        #     local_mount: /data
        #     fs_type: gcsfuse
        #     mount_options: file_mode=664,dir_mode=775,allow_other
        #

    # Optional network configuration fields
        # vpc_subnet                : my-subnet

    # Optional GPU configuration fields
        # gpu_type               : nvidia-tesla-v100
        # gpu_count              : 8

#  [END cluster_yaml]

Be sure to uncomment (remove the "#" character) the "ompi_version" field to install OpenMPI on the cluster. If you do not uncomment the "ompi_version" field, the "Run an MPI Job" section of this codelab will not work correctly.
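
If you prefer to make this edit from the command line instead of an editor, a sed one-liner along the lines of the following should work; this is a sketch that assumes the field is commented out as "# ompi_version" in the repository's copy of slurm-cluster.yaml:

# Uncomment the ompi_version line in place, preserving its indentation:
sed -i 's/^\( *\)# \(ompi_version\)/\1\2/' slurm-cluster.yaml

# Verify the change:
grep ompi_version slurm-cluster.yaml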

Within this YAML file there are several fields to configure, including the cluster name, the zone, the machine types for the controller, login, and compute nodes, the partition definitions, and optional network, storage, and service account settings.

Advanced Configuration

If desired you may choose to install additional packages and software as part of the cluster deployment process. You may do this either by adding the packages to a live VM, by adding their installation to the custom-install scripts, or by building and using an image with the desired software and configuration pre-installed. Currently Slurm uses the Google-provided CentOS 7 image by default.

To add additional packages via startup-script.py, add the yum-installable package names to the Python list "packages" in the "install_packages" Python function. For more complex installation procedures, add new Python functions in the locations marked with the comment "# Add any additional installation functions here".

To use your own image, build an image with your own configuration based on a CentOS image. Next, replace the reference to the centos-7 image in the slurm.jinja and resume.py files with your own image, and test the change. In the future we will support a YAML field to specify your own image.
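
As a rough sketch of that workflow (the image, disk, and family names below are illustrative, not part of the deployment), you might create your image and then locate the references to replace like this:

# Create a custom image from the boot disk of a stopped VM you have customized:
gcloud compute images create my-slurm-centos-7 \
    --source-disk=my-custom-vm \
    --source-disk-zone=us-central1-b \
    --family=my-slurm-image-family

# Find the places in the repository that reference the default CentOS 7 image:
grep -rn "centos-7" .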

Deploy the Configuration

In the Cloud Shell session, execute the following command from the slurm-gcp folder:

gcloud deployment-manager deployments create google1 --config slurm-cluster.yaml

This command creates a deployment named google1. The operation can take a few minutes to complete, so please be patient.
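
While you wait, you can optionally check on the deployment's progress from Cloud Shell:

# List the resources in the google1 deployment and their current state:
gcloud deployment-manager deployments describe google1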

Once the deployment has completed you will see output similar to:

Create operation operation-1515793351850-5629b244a3810-c9541e28-80863bfb completed successfully.
NAME                           TYPE                   STATE      ERRORS  INTENT
g1-all-internal-firewall-rule  compute.v1.firewall    COMPLETED  []
g1-allow-iap                   compute.v1.firewall    COMPLETED  []
g1-compute-0-image             compute.v1.instance    COMPLETED  []
g1-controller                  compute.v1.instance    COMPLETED  []
g1-login0                      compute.v1.instance    COMPLETED  []
g1-network                     compute.v1.network     COMPLETED  []
g1-us-central1                 compute.v1.subnetwork  COMPLETED  []
g1-us-central1-router          compute.v1.router      COMPLETED  []

Verify the Deployment

In the Cloud Console, open Deployment Manager and select the google1 deployment. Click Overview - google1. The Deployment properties pane displays the overall deployment configuration.

Click View on the Expanded Config property. The Config pane displays the contents of the deployment configuration YAML file modified earlier. Verify the contents are correct before proceeding. If you need to change a deployment configuration simply delete the deployment according to steps in "Clean Up the Deployment", and restart the deployment according to the steps in "Configure Slurm Deployment YAML".

With the deployment's configuration verified, you will now confirm that the cluster's instances are started.

Verify VM instance creation

Open the navigation menu and select Compute Engine > VM Instances. You should see the following VM instances listed:

Under VM instances review the three virtual machine instances that have been created by the deployment manager: g1-controller, g1-login0, and g1-compute-0-image.

The g1-compute-0-image instance is only online for a short time to create the compute image used by the partition's auto-scaling nodes, and is then shut down. If you'd like to update the image used by compute nodes, you can start this instance, make your changes, and create another image in the Slurm cluster's image family to update that partition's image.
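
A rough sketch of that update workflow from Cloud Shell is shown below; it assumes the image instance lives in us-central1-b and the new image name is illustrative. Look up the actual image family the cluster uses before creating the new image:

# Find the image and image family the cluster's compute nodes currently use:
gcloud compute images list --filter="name~g1" --format="table(name,family)"

# Start the image instance, make your changes over SSH, then stop it again:
gcloud compute instances start g1-compute-0-image --zone=us-central1-b
gcloud compute instances stop g1-compute-0-image --zone=us-central1-b

# Create a new image in the same family from the updated boot disk
# (replace <family> with the family reported above):
gcloud compute images create g1-compute-image-v2 \
    --source-disk=g1-compute-0-image \
    --source-disk-zone=us-central1-b \
    --family=<family>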

Access the Slurm Cluster

Return to your Code Editor/Cloud Shell tab. Run the following command to log in to your instance, replacing <ZONE> with the g1-login0 node's zone (it should be us-central1-b):

gcloud compute ssh g1-login0 --zone=<ZONE>

This command will log you into the g1-login0 virtual machine.

If this is the first time you have used gcloud compute ssh from Cloud Shell, you may see a message like the one below asking you to create an SSH key:

WARNING: The public SSH key file for gcloud does not exist.
WARNING: The private SSH key file for gcloud does not exist.
WARNING: You do not have an SSH key for gcloud.
WARNING: SSH keygen will be executed to generate a key.
This tool needs to create the directory [/home/user/.ssh] before being
 able to generate SSH keys.

Do you want to continue (Y/n)?

If so, enter Y. If requested to select a passphrase, leave it blank by pressing Enter twice.

If the following message appears upon login:

*** Slurm is currently being installed/configured in the background. ***
A terminal broadcast will announce when installation and configuration is
complete.

/home on the controller will be mounted over the existing /home.
Any changes in /home will be hidden. Please wait until the installation is
complete before making changes in your home directory.

Wait and do not proceed with the lab until you see this message (approx 5 mins):

*** Slurm login daemon installation complete ***

/home on the controller was mounted over the existing /home.
Either log out and log back in or cd into ~.

Once you see the above message, you will have to log out and log back in to g1-login0 to continue the lab. To do so, press CTRL + C to end the task.

Then execute the following command to log out of your instance:

exit

Now run the following command to log back in to your instance, replacing <ZONE> with the g1-login0 node's zone (it should be us-central1-b):

gcloud compute ssh g1-login0 --zone=<ZONE>

Tour of the Slurm CLI Tools

You're now logged in to your cluster's Slurm login node. This is the node that's dedicated to user/admin interaction, scheduling Slurm jobs, and administrative activity.

Let's run a couple commands to introduce you to the Slurm command line.

Execute the sinfo command to view the status of our cluster's resources:

sinfo

Sample output of sinfo appears below. sinfo reports the nodes available in the cluster, the state of those nodes, and other information like the partition, availability, and any time limitation imposed on those nodes.

PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
debug*       up   infinite     10  idle~ g1-compute-0-[0-9]

You can see our 10 nodes, dictated by the debug partition's "max_node_count" of 10, are marked as "idle~" (the "~" suffix means the node is idle and powered down in the cloud, ready to be spun up on demand).
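
If you'd like a per-node view with more detail, sinfo's node-oriented long listing is a handy optional check:

sinfo -N -l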

Next, execute the squeue command to view the status of our cluster's queue:

squeue

The expected output of squeue appears below. squeue reports the status of the queue for a cluster. This includes the job ID of each job scheduled on the cluster, the partition the job is assigned to, the name of the job, the user that launched the job, the state of the job, the wall-clock time the job has been running, and the nodes the job is allocated to. We don't have any jobs running, so the output of this command is empty aside from the header.

JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)

The Slurm commands "srun" and "sbatch" are used to run jobs that are put into the queue. "srun" runs parallel jobs, and can be used as a wrapper for mpirun. "sbatch" is used to submit a batch job to Slurm, and can call srun once or many times in different configurations. "sbatch" can take batch scripts, or can be used with the --wrap option to run the entire job from the command line.
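
For example, here are two minimal illustrations of these commands (optional to try; note that either one will trigger auto-scaling of compute nodes, just like the batch job in the next section):

# Run hostname on 2 nodes interactively, labeling each line with its task ID:
srun -N2 -l hostname

# Submit the same work as a batch job without writing a script file:
sbatch -N2 --wrap="srun hostname"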

Let's run a job so we can see Slurm in action and get a job in our queue!

Run a Slurm Job and Scale the Cluster

Now that we have our Slurm cluster running, let's run a job and scale our cluster up.

While logged in to g1-login0, use your preferred text editor to create a new file "hostname_batch":

vi hostname_batch

Type "i" to enter insert mode.

Copy and paste the following text into the file to create a simple sbatch script:

#!/bin/bash
#
#SBATCH --job-name=hostname_sleep_sample
#SBATCH --output=out_%j.txt
#
#SBATCH --nodes=2

srun hostname
sleep 60

Save and exit the code editor by pressing escape and typing ":wq" without quotes.

This script defines the Slurm batch execution environment and tasks. First, the execution environment is defined as bash. Next, the script defines the Slurm options with the "#SBATCH" lines. The job name is defined as "hostname_sleep_sample". The output file is set to "out_%j.txt", where %j is substituted with the Job ID according to the Slurm Filename Patterns.

This output file is written by each compute node to a local directory, in this case the directory the sbatch script is launched from. In our example this is the user's /home folder, which is an NFS-based shared file system. This allows compute nodes to share input and output data if desired. In a production environment, the working storage should be separate from the /home storage to avoid performance impacts on cluster operations.

Finally, the number of nodes this script should run on is defined as 2.

After the options are defined the executable commands are provided. This script will run the hostname command in a parallel manner through the srun command, and sleep for 60 seconds afterwards. You may also try modifying the script to execute a few other commands like date or whoami.
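
As a sketch of such a modification (to try after the initial run below), the command section of hostname_batch could be changed to the lines here; sleep is kept so the nodes stay allocated long enough to observe them in squeue:

srun hostname
srun date
srun whoami
sleep 60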

Execute the sbatch script using the sbatch command line:

sbatch hostname_batch

Running sbatch will return a Job ID for the scheduled job, for example:

Submitted batch job 2

We can use the Job ID returned by the sbatch command to track and manage the job execution and resources. Execute the following command to view the Slurm job queue:

squeue

You will likely see the job you executed listed similar to the output below:

JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
    2     debug hostname username  R       0:10      2 g1-compute-0-[0-1]
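
Beyond squeue, two other standard Slurm commands are useful once you have a Job ID (shown here for job 2; don't actually cancel the job if you want to follow the rest of the lab):

# Show the full scheduling details for the job:
scontrol show job 2

# Cancel the job if you need to:
scancel 2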

Since we didn't have any compute nodes provisioned, Slurm will automatically create compute instances according to the job requirements. The automatic nature of this process has two benefits. First, it eliminates the work typically required in an HPC cluster of manually provisioning nodes, configuring the software, integrating the node into the cluster, and then deploying the job. Second, it saves money because idle, unused nodes are scaled down until the minimum number of nodes is running.

You can execute the sinfo command to view the Slurm cluster spinning up:

sinfo

This will show the nodes listed in squeue in the "mix#" state (the "#" suffix means the nodes are powering up, i.e. being created):

PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
debug*       up   infinite      8  idle~ g1-compute-0-[2-9]
debug*       up   infinite      2  mix#  g1-compute-0-[0-1]

You can also check the VM instances section in the Google Cloud Console to view the newly provisioned nodes. It will take a few minutes to spin up the nodes and configure Slurm on them before the job starts on the newly provisioned nodes. Your VM instances list will soon resemble the following:

Once the nodes are ready, the instances will move to the "alloc" state, meaning the nodes are allocated to a job:

PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
debug*       up   infinite      8  idle~ g1-compute-0-[2-9]
debug*       up   infinite      2  alloc g1-compute-0-[0-1]

Once a job is complete, it will no longer be listed in squeue, and the "alloc" nodes in sinfo will return to the "idle" state. Run "squeue" periodically; the job should complete after a minute or two.
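
If you configured Slurm accounting (for example via the optional cloudsql fields in the YAML), you can also review completed jobs with sacct; this is optional and will not return data on a cluster without accounting storage:

# Show the state, elapsed time, and exit code of job 2 (substitute your Job ID):
sacct -j 2 --format=JobID,JobName,State,Elapsed,ExitCode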

The output file out_%j.txt will have been written to your NFS-shared /home folder and will contain the hostnames. Open or cat the output file (typically out_2.txt); it will contain:

g1-compute-0-0
g1-compute-0-1

Great work, you've run a job and scaled up your Slurm cluster!

Now let's run an MPI job across our nodes. While logged in to g1-login0, use wget to download an MPI program written in the C programming language:

wget https://raw.githubusercontent.com/open-mpi/ompi/master/examples/hello_c.c

We'll use the "mpicc" tool to compile the MPI C code. Execute the following command on g1-login0:

mpicc hello_c.c -o hello_c

This compiles our C code to machine code so that we can run the code across our cluster through Slurm.
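
Optionally, you can sanity-check the compiled binary directly on g1-login0 before submitting it through Slurm; note that this runs both ranks locally on the login node rather than across the cluster:

mpirun -np 2 ./hello_c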

Next, use your preferred text editor to create a new file "helloworld_batch":

vi helloworld_batch

Type i to enter the vi insert mode.

Copy and paste the following text into the file to create a simple sbatch script:

#!/bin/bash
#
#SBATCH --job-name=hello_world
#SBATCH --output=hello_world_%j.txt
#
#SBATCH --nodes=2

srun hello_c

Save and exit the code editor by pressing escape and typing ":wq" without quotes.

Then execute the sbatch script using the sbatch command line:

sbatch helloworld_batch

Running sbatch will return a Job ID for the scheduled job, for example:

Submitted batch job 3

This will run the hello_c executable across 2 nodes, with one task per node, and write the output to the hello_world_3.txt file.

Since we had 2 nodes already provisioned this job will run quickly.

Monitor squeue until the job has completed and is no longer listed:

squeue

Once completed, open or cat the latest output file (typically hello_world_3.txt) and confirm it ran on g1-compute-0-[0-1]:

Hello, world, I am 0 of 2, (Open MPI v3.1.7a1, package: Open MPI root@g1-controller Distribution, ident: 3.1.7a1, repo rev: v3.1.6-8-g6e2efd3, Unreleased developer copy, 141)
Hello, world, I am 1 of 2, (Open MPI v3.1.7a1, package: Open MPI root@g1-controller Distribution, ident: 3.1.7a1, repo rev: v3.1.6-8-g6e2efd3, Unreleased developer copy, 141)

After being idle for 5 minutes (configurable with the YAML's suspend_time field, or slurm.conf's SuspendTime field) the dynamically provisioned compute nodes will be de-allocated to release resources. You can validate this by running sinfo periodically and observing the cluster size fall back to 0:

PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
debug*       up   infinite     10  idle~ g1-compute-0-[0-9]

Try spinning up more instances, up to the quota allowed in the region where you deployed the cluster, and running different MPI applications.

Congratulations, you've created a Slurm cluster on Google Cloud Platform and used its latest features to auto-scale your cluster to meet workload demand! You can use this model to run any variety of jobs, and it scales to hundreds of instances in minutes by simply requesting the nodes in Slurm.

If you would like to continue learning to use Slurm on GCP, be sure to continue with the "Building Federated HPC Clusters with Slurm" codelab. This codelab will guide you through setting up two federated Slurm clusters in the cloud, to represent how you might achieve a multi-cluster federation, whether on-premise or in the cloud.

Are you building something cool using Slurm's new GCP-native functionality? Have questions? Have a feature suggestion? Reach out to the Google Cloud team today through Google Cloud's High Performance Computing Solutions website, or chat with us in the Google Cloud & Slurm Discussion Group!

Clean Up the Deployment

Log out of the Slurm login node:

exit

Let any auto-scaled nodes scale down before deleting the deployment. You can also delete these nodes manually by running "gcloud compute instances delete <Instance Name>" for each instance, or by using the Console GUI to select multiple nodes and clicking "Delete".
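
For example, from Cloud Shell you could check for and remove lingering compute nodes like this (the instance names and zone are illustrative; use the names reported by the list command):

# Check whether any auto-scaled compute nodes are still running:
gcloud compute instances list --filter="name~g1-compute"

# Delete lingering compute nodes manually if needed:
gcloud compute instances delete g1-compute-0-0 g1-compute-0-1 --zone=us-central1-a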

You can clean up the deployment by executing the following command from your Google Cloud Shell, after logging out of g1-login0:

gcloud deployment-manager deployments delete google1

When prompted, type Y to continue. This operation can take some time, please be patient.

Delete the Project

To clean up, simply delete the project.
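
If you prefer the command line to the Cloud Console, the project can be deleted from Cloud Shell; this is irreversible and removes every resource in the project:

gcloud projects delete <PROJECT_ID>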

What we've covered

Find Slurm Support

If you need support using these integrations in testing or production environments please contact SchedMD directly using their contact page here: https://www.schedmd.com/contact.php

You may also use SchedMD's Troubleshooting guide here: https://slurm.schedmd.com/troubleshoot.html

Finally you may also post your question to the Google Cloud & Slurm Discussion Group found here: https://groups.google.com/forum/#!forum/google-cloud-slurm-discuss

Learn More

Feedback

Please submit feedback about this codelab using this link. Feedback takes less than 5 minutes to complete. Thank you!