Decommissioned: General Purpose Cluster (GPC)

November 9, 2016 in Decommissioned Systems


The General Purpose Cluster (GPC) is our large “workhorse” cluster (ranked 16th in the world at its inception, and then the fastest in Canada) and is where most computations are done at SciNet; it has already performed more than 45,000,000 computations for Canadian researchers. It is an IBM iDataPlex cluster based on Intel’s Nehalem architecture, and was one of the first systems in the world to make use of those chips.

The GPC consists of 3,780 IBM iDataPlex DX360M2 nodes, each with two quad-core 2.53GHz Intel Xeon E5540 processors, for a total of 30,240 cores. Each node has 16GB of RAM (2GB per core), with some larger-memory nodes offering up to 32GB. The nodes run Linux. Approximately one quarter of the cluster is connected with non-blocking DDR InfiniBand, while the rest of the nodes are connected with 5:1 blocked QDR InfiniBand. The compute nodes are accessed through a queuing system that allows jobs with a maximum wall time of 48 hours and a minimum time, in the batch queue, of 15 minutes.
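As an illustration of those limits, here is a minimal sketch, in Python, of checking a hypothetical job request against the batch-queue window described above; the function name and the check itself are ours for illustration, not part of SciNet's actual scheduler interface.

    MIN_WALLTIME_S = 15 * 60          # 15-minute minimum in the batch queue
    MAX_WALLTIME_S = 48 * 60 * 60     # 48-hour maximum wall time
    CORES_PER_NODE = 8                # two quad-core Xeon E5540s per node

    def check_gpc_request(nodes, walltime_s):
        """Hypothetical check of a job request against the GPC queue limits."""
        if not MIN_WALLTIME_S <= walltime_s <= MAX_WALLTIME_S:
            raise ValueError("wall time must be between 15 minutes and 48 hours")
        print(nodes * CORES_PER_NODE, "cores,", nodes * 16, "GB RAM total")

    check_gpc_request(nodes=4, walltime_s=6 * 3600)   # 32 cores, 64 GB RAM total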

A “quickstart” guide to using SciNet’s GPC can be found on our technical documentation wiki.

Decommissioned: BlueGene/Q (BGQ)

November 7, 2016 in Decommissioned Systems

The BGQ is a Southern Ontario Smart Computing Innovation Platform (SOSCIP) BlueGene/Q supercomputer located at the University of Toronto’s SciNet HPC facility. The SOSCIP multi-university/industry consortium is funded by the Ontario Government and the Federal Economic Development Agency for Southern Ontario [1]. In addition, a half-rack of BlueGene/Q (8,192 cores) was purchased by the Li Ka Shing Institute of Virology at the University of Alberta in late fall 2014 and integrated into the existing BGQ system.

The BGQ is an extremely dense and energy-efficient third-generation IBM Blue Gene supercomputer built around a system-on-a-chip compute node with a 16-core 1.6GHz PowerPC-based CPU (PowerPC A2) and 16GB of RAM. It consists of 4,096 nodes with a total of 65,536 cores and up to 262,144 hardware threads. The nodes run a lightweight Linux-like OS and are interconnected in a 5D torus. The compute nodes are accessed through a queuing system.
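The core and thread totals follow directly from the node count; a quick check of the arithmetic in Python:

    nodes = 4096
    cores_per_node = 16       # one 16-core PowerPC A2 chip per node
    threads_per_core = 4      # 4 hardware threads per A2 core

    cores = nodes * cores_per_node            # 65,536 cores
    threads = cores * threads_per_core        # 262,144 hardware threads
    print(cores, threads)                     # 65536 262144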

A quickstart guide to using the BGQ can be found on SciNet’s technical documentation wiki.

Decommissioned: Power 8 GPU Test System (P8)

June 9, 2016 in Decommissioned Systems

The P8 Test System consists of four IBM Power 822LC servers, each with two 8-core 3.25GHz Power8 CPUs and 512GB of RAM. Like the Power 7, the Power 8 uses Simultaneous MultiThreading (SMT), but extends the design to 8 threads per core, allowing the 16 physical cores to support up to 128 threads. Two of the nodes have two NVIDIA Tesla K80 GPUs with CUDA capability 3.7 (Kepler), each K80 consisting of two GK210 GPUs with 12GB of RAM, connected via PCIe; the other two nodes have four NVIDIA Tesla P100 GPUs with CUDA capability 6.0 (Pascal), each with 16GB of RAM, connected via NVLink.
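A quick sanity check of the SMT arithmetic: the operating system sees logical CPUs (hardware threads) rather than physical cores, so on such a node a call like Python's os.cpu_count() reports the full SMT thread count. A minimal sketch, with the node layout taken from the description above:

    import os

    sockets = 2
    cores_per_socket = 8
    smt = 8                                       # Power8 SMT-8: 8 threads per core

    physical_cores = sockets * cores_per_socket   # 16
    logical_cpus = physical_cores * smt           # 128

    # On a P8 node, os.cpu_count() counts logical CPUs (SMT threads),
    # so it would report 128 rather than 16.
    print(physical_cores, logical_cpus, os.cpu_count())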

A quickstart guide to running on SciNet’s P8 cluster is available on our technical documentation wiki.

Decommissioned: Tightly Coupled System (TCS)

March 1, 2012 in Decommissioned Systems

The Tightly Coupled System (TCS) was a specialized cluster of “fat” (high-memory, many-core) IBM Power 575 nodes with 4.7GHz Power 6 processors on a very fast InfiniBand interconnect. Each node had 32 cores (with hardware support for running 64 threads using Simultaneous MultiThreading (SMT)) and 128GB of RAM, and ran AIX; two nodes had 256GB of RAM each. The cluster had a relatively small number of cores (~3,000) and so was dedicated to jobs that required its large-memory, low-latency configuration. Jobs needed to use multiples of 32 cores (one node) and were submitted to a queuing system that allowed a maximum wall time of 48 hours per job.
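Because the TCS allocated whole 32-core nodes, a request for an arbitrary number of cores had to be rounded up to the next node boundary; a minimal sketch of that rounding in Python (the helper name is ours, not a SciNet tool):

    import math

    CORES_PER_NODE = 32   # one TCS node: 32 Power 6 cores (64 SMT threads)

    def tcs_nodes_needed(cores_requested):
        """Round a core request up to whole 32-core TCS nodes."""
        return math.ceil(cores_requested / CORES_PER_NODE)

    print(tcs_nodes_needed(100))   # 4 nodes, i.e. 128 cores allocated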

The TCS was decommissioned on Sept. 29, 2017.

Decommissioned: Power 7 Linux Cluster (P7)

February 27, 2012 in Decommissioned Systems

SciNet’s Power 7 (P7) cluster consisted of five IBM Power 755 servers, each with four 8-core 3.3GHz Power7 CPUs and 128GB of RAM. Similar to the Power 6, but running Linux, the Power 7 used Simultaneous MultiThreading (SMT), extending the design from 2 threads per core to 4. This allowed the 32 physical cores to support up to 128 threads, which in many cases could lead to significant speedups.

The P7 cluster was decommissioned in June 2019.