Our Systems

SciNet is Canada’s largest supercomputing centre. We run and make available a range of computing resources for Canadian researchers and innovators. Below is a description of each system, with links to find out more.

More technical information and the system status can be found on the SciNet documentation wiki.

Niagara Supercomputer

Niagara is a homogeneous cluster of 61,920 cores, owned by the University of Toronto and operated by SciNet, intended to enable large parallel jobs of 1,024 cores or more. It is the most powerful supercomputer in Canada available for academic research. Compute allocations are handled through Compute Canada’s annual resource allocation competition.
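To give a sense of the job scale Niagara targets, a large parallel job of 1,024 cores or more would typically be submitted through the scheduler as a batch script. The sketch below assumes a Slurm-based scheduler; the node count, module name, and executable are illustrative placeholders, not values from SciNet's documentation, so consult the wiki for the actual submission parameters.

```shell
#!/bin/bash
# Hypothetical Slurm batch script for a >1,024-core MPI job.
# All values below are placeholders for illustration only.
#SBATCH --nodes=26              # e.g. 26 nodes x 40 cores = 1,040 cores
#SBATCH --ntasks-per-node=40
#SBATCH --time=01:00:00
#SBATCH --job-name=mpi_example

module load openmpi             # placeholder module name
mpirun ./my_parallel_app        # placeholder executable
```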

High Performance Storage System (HPSS)

The High Performance Storage System (HPSS) is a tape-backed hierarchical storage system that provides a significant portion of the allocated storage space at SciNet. It is a repository for archiving data that is not being actively used. Data can be returned to the active filesystem on the compute clusters when it is needed.
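HPSS installations are commonly accessed with the standard `hsi` and `htar` command-line tools. The commands below are a sketch of the archive-and-retrieve cycle under that assumption; the archive path is a placeholder, and the exact workflow on SciNet (for example, whether transfers must go through the batch queue) should be checked against the documentation wiki.

```shell
# Bundle an inactive results directory into a tar archive stored in HPSS.
# The destination path is a placeholder, not SciNet's actual layout.
htar -cf /archive/$USER/results_2020.tar results_2020/

# List the archive's contents without retrieving any data.
htar -tf /archive/$USER/results_2020.tar

# Later, extract the archive back to the active filesystem when needed.
htar -xf /archive/$USER/results_2020.tar
```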


SOSCIP GPU Cluster (SGC)

The SOSCIP GPU Cluster (SGC) is a Southern Ontario Smart Computing Innovation Platform (SOSCIP) resource located at the University of Toronto’s SciNet HPC facility. The SOSCIP multi-university/industry consortium is funded by the Ontario Government and the Federal Economic Development Agency for Southern Ontario. The SOSCIP GPU Cluster consists of 14 IBM Power 822LC “Minsky” servers.

Teach Cluster

Teach is a cluster of 672 cores at SciNet that has been assembled from older, re-purposed compute hardware. Access to this small, homogeneous cluster is provided primarily for local teaching purposes. It is configured similarly to the production Niagara system. The cluster consists of 42 repurposed nodes, each with 16 cores (two 8-core Intel Xeon processors).

Power 8 GPU Test System (P8)

The P8 Test System consists of 4 IBM Power 822LC servers, each with two 8-core 3.25 GHz Power8 CPUs and 512 GB of RAM. Like the Power7 before it, the Power8 uses Simultaneous Multithreading (SMT), but extends the design to 8 threads per core, allowing the 16 physical cores to support up to 128 threads.
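The thread count above follows directly from the hardware layout: 2 sockets × 8 cores per socket × 8 SMT threads per core. On the node itself, `nproc` or `lscpu` would report the logical CPU count; the arithmetic alone can be sketched as:

```shell
# 2 sockets x 8 cores/socket = 16 physical cores; SMT-8 gives 8 threads/core.
sockets=2
cores_per_socket=8
threads_per_core=8
logical_cpus=$((sockets * cores_per_socket * threads_per_core))
echo "$logical_cpus"   # prints 128
```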