Taurus HPC Cluster

Active Compute Nodes: Always check this list to see which compute nodes are currently available on the cluster. The set of compute nodes changes regularly, so move your files to your shared folder before switching to a new compute node. (updated 2023-06-20)

  • leo1.local.net to leo8.local.net (newest): gcc, g++, OpenMP, MPI, CUDA 12.1, Python 3, MATLAB R2023a, xRDP
  • taurus1.local.net to taurus5.local.net (deprecated): gcc, g++, OpenMP, MPI, CUDA 12.1, Python 3, MATLAB R2022b, xRDP
Important note: When following the instructions on this page, make sure you replace the hostname taurus with the hostname of the compute node that you are using, as in the example below.
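For example, assuming your account name is jsmith (a placeholder) and you are working on the first leo node, a login over SSH would look like this:

  ssh jsmith@leo1.local.net    # replace jsmith with your username and leo1.local.net with your assigned node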

1. Introduction

This guide walks you through the steps of connecting to the Taurus High Performance Computing (HPC) Cluster, installing Visual Studio Code on your computer, and compiling and running programs in C/C++, OpenMP, CUDA, MPI, MATLAB and Python. Before connecting to the cluster, you will need a username and password. Contact the system administrator, Dr. Vincent Roberge (vincent.roberge(at)rmc.ca), to have an account created for you. If you intend to use MATLAB on the cluster, you will also need an OpenVPN configuration file, which will be supplied by the system administrator.

The Taurus HPC Cluster’s configuration is based on the Linux Containers (LXC) virtualization technology. The host Linux operating systems offer minimal services besides the LXC virtualization, and all compute nodes are implemented as LXC containers. Containers run in unprivileged mode to allow direct access to the GPUs. A preconfigured set of containers has been deployed by the system administrator to allow users to program in C/C++, OpenMP, CUDA, MPI, MATLAB and Python. However, if your research project requires your own container(s) so that you can install your own software and libraries, this can be arranged; discuss your requirements with the system administrator.
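Once logged in to a compute node, you can quickly confirm that the container sees the GPU and the expected toolchain. A minimal check, assuming the CUDA toolkit is on your PATH:

  nvidia-smi       # shows the NVIDIA GPU and the driver/CUDA version it supports
  nvcc --version   # reports the installed CUDA toolkit (12.1 on the current nodes)
  gcc --version    # confirms the C/C++ compiler is available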

2. Cluster Specifications

The Taurus HPC Cluster is composed of eight Linux compute nodes (taurus1.local.net to taurus8.local.net), one Windows compute node (orion9.local.net) and one Linux client workstation (desktop-usb.local.net). All hosts are connected with a 10 Gbps low-latency converged Ethernet switch. The Linux nodes are Dell Precision servers equipped with 128 GB of RAM and dual Intel Xeon Silver 4214R CPUs with 12 hyper-threaded cores each, for a total of 24 cores or 48 virtual cores per node. The Windows node has 512 GB of RAM and two Intel Xeon Gold 5218R CPUs with 20 hyper-threaded cores each, for a total of 80 virtual cores. All nodes are equipped with an NVIDIA RTX 3080 graphics processing unit (GPU) with 8,960 cores and 10 GB of GDDR6X RAM, supporting CUDA compute capability 8.6. The Linux nodes are configured with Ubuntu 22.04 and MATLAB Parallel Server R2022b. The Windows node is configured with Windows Server 2022 Datacenter Edition. The Taurus HPC Cluster provides a combined processing power of 1.23 TFLOPS on the CPUs and 119.2 TFLOPS on the GPUs.
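If you want to verify these figures on the node you are using, the following standard Linux commands report the CPU, memory and GPU details (a quick sketch; no cluster-specific tools are assumed):

  lscpu | grep -E 'Model name|Socket|Core|Thread'           # CPU model, sockets, cores and threads
  free -h                                                   # installed RAM
  nvidia-smi --query-gpu=name,memory.total --format=csv     # GPU model and memory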
