The development of SciNet at the University of Toronto stretches back to 1999, with an award from the Canada Foundation for Innovation (CFI) and the Province of Ontario for the founding of PSciNet, an acronym that then stood for the Physical Sciences computing NETwork. This funding was received in response to an application prepared by a group consisting of astrophysicists from the Canadian Institute for Theoretical Astrophysics, chemical physicists from the Department of Chemistry, and planetary physicists from the Department of Physics. The funds were used to acquire three distinct computer systems, each designed to serve the particular needs of one of the collaborating groups and operated by that group as a separate system.

A second proposal to CFI for further PSciNet development was funded in 2003, this time from a group consisting of high-energy experimental particle physicists from the Department of Physics, planetary physicists from the Department of Physics, and aerospace and mechanical engineers from the University of Toronto Institute for Aerospace Studies (UTIAS) and the Department of Mechanical and Industrial Engineering. This funding was used to acquire two new cluster systems as well as an upgrade to the vector system used by the planetary physics group.

The SciNet consortium, with the “P” for “Physical” dropped from the acronym, was established in 2005 through the continuing collaboration of all five of these groups, joined by colleagues in computational biology, genomics, and bioinformatics at both the University of Toronto and its ten affiliated research hospitals. SciNet participated in developing the Compute Canada response to the 2006 National Platform Fund (NPF) call for proposals from CFI and was allocated $15M of the total amount awarded to Compute Canada in December 2006. The CFI award was matched by the Ontario provincial government and supplemented by the University of Toronto.

Complications in administering the NPF award across 7 consortia and more than 15 institutions delayed the issue of the SciNet Request for Proposals (RFP) until January 2008. After all proposals had been reviewed, a final contract with IBM was signed in July 2008 for the construction of the datacentre (in an existing building) and the installation of two clusters and storage. Renovations began in late August 2008; the first cluster (the TCS) and the storage system were installed in November and opened to friendly users in December; the datacentre was fully completed in February 2009; and the installation of the largest cluster (the GPC) began in March with the arrival of the first IBM iDataPlex servers based on the brand-new Intel Nehalem CPU architecture. The friendly-user period for the GPC began in May, and both systems were fully opened to researchers from across Canada at the beginning of August 2009.

Several smaller test systems have been acquired and operated by SciNet since, but no major refresh was possible until CFI’s 2015 Cyberinfrastructure Initiative. This initiative awarded a grant through Compute Canada that refreshed Canadian computing resources in the form of a cloud system, Arbutus, and two general purpose clusters, Cedar and Graham, in 2017, located in British Columbia and Ontario respectively, and a “large parallel” supercomputer, Niagara, at SciNet, in 2018. (A third general purpose cluster was installed in Montreal in 2019.)

Decommissioned: General Purpose Cluster (GPC)

The General Purpose Cluster (GPC) was our extremely large “workhorse” cluster (ranked 16th in the world at its inception in 2009, then the fastest in Canada), and it was where most computations at SciNet were done until it was decommissioned and its successor, Niagara, went into production. It was an IBM iDataPlex cluster based on Intel’s Nehalem architecture.

Decommissioned: BlueGene/Q (BGQ)

The BGQ was a Southern Ontario Smart Computing Innovation Platform (SOSCIP) BlueGene/Q supercomputer located at the University of Toronto’s SciNet HPC facility. The SOSCIP multi-university/industry consortium is funded by the Ontario Government and the Federal Economic Development Agency for Southern Ontario [1]. A half-rack of BlueGene/Q (8,192 cores) was also purchased by the Li Ka

Decommissioned: SOSCIP GPU Cluster (SGC)

The SOSCIP GPU Cluster (SGC) was a Southern Ontario Smart Computing Innovation Platform (SOSCIP) resource located at the University of Toronto’s SciNet HPC facility. The SOSCIP multi-university/industry consortium is funded by the Ontario Government and the Federal Economic Development Agency for Southern Ontario. The SOSCIP GPU Cluster consisted of 14 IBM Power 822LC “Minsky” servers.

Decommissioned: Power 8 GPU Test System (P8)

The P8 Test System consisted of 4 IBM Power 822LC servers, each with 2× 8-core 3.25GHz Power8 CPUs and 512GB of RAM. Like the Power 7, the Power 8 used Simultaneous MultiThreading (SMT), but extended the design to 8 threads per core, allowing the 16 physical cores to support up to 128 threads. 2 nodes had

Decommissioned: Tightly Coupled System (TCS)

The Tightly Coupled System (TCS) was a specialized cluster of “fat” (high-memory, many-core) IBM Power 575 nodes with 4.7GHz Power 6 processors on a very fast Infiniband interconnect. The nodes had 32 cores (with hardware support for running 64 threads via Simultaneous MultiThreading (SMT)) and 128GB of RAM, and ran AIX; two nodes had 256GB of RAM.

Decommissioned: Power 7 Linux Cluster (P7)

SciNet’s Power 7 (P7) cluster consisted of 5 IBM Power 755 servers, each with 4× 8-core 3.3GHz Power7 CPUs and 128GB of RAM. Similar to the Power 6, but running Linux, the Power 7 used Simultaneous MultiThreading (SMT), extending the design from 2 threads per core to 4. This allowed the 32 physical cores to support up to 128 threads.
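The SMT arithmetic running through the TCS, P7, and P8 descriptions above can be sketched in a few lines. This is purely illustrative (not SciNet code); the core counts and SMT modes are taken from the system descriptions above (SMT-2 on Power 6, SMT-4 on Power 7, SMT-8 on Power 8):

```python
def logical_threads(physical_cores: int, smt: int) -> int:
    """Logical (hardware) threads visible to the OS under a given SMT mode."""
    return physical_cores * smt

# TCS node: 32 Power 6 cores at SMT-2 -> 64 hardware threads
assert logical_threads(32, 2) == 64

# P7 node: 4 sockets x 8 cores of Power 7 at SMT-4 -> 128 hardware threads
assert logical_threads(4 * 8, 4) == 128

# P8 node: 2 sockets x 8 cores of Power 8 at SMT-8 -> also 128 hardware threads
assert logical_threads(2 * 8, 8) == 128
```

Note how each POWER generation doubled the threads per core, so a P8 node matched a P7 node's 128 logical threads with half the physical cores.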