Apply by 9 March, decisions in early April
Expenses paid by program
Sponsored by PRACE, XSEDE, Riken, and Compute Canada
Compute Canada/Calcul Canada invites students and researchers at Canadian post-secondary institutions to apply for one of 10 spots allocated to Canada for the fifth International Summer School on HPC Challenges in Computational Sciences. This is a great opportunity for Canadian students and postdocs to attend an advanced summer school on high-performance computing, all expenses paid.
The workshop is aimed primarily at graduate students and postdocs; however, junior faculty and advanced undergraduates are also welcome to apply. Attendees will be expected to have some experience in HPC parallel programming (for instance, MPI, OpenMP, or CUDA/OpenCL), preferably on software used in successful research projects, and must be at least 18 years of age at the time of application. Attendees from all disciplines are invited to participate.
The summer school is sponsored by the European Union Seventh Framework Program’s Partnership for Advanced Computing in Europe Implementation Phase project (PRACE-3IP), U.S. National Science Foundation’s Extreme Science and Engineering Discovery Environment (XSEDE) project, RIKEN Advanced Institute for Computational Science (RIKEN AICS), and Compute Canada / Calcul Canada.
Leading American, Canadian, European and Japanese computational scientists and HPC technologists will offer instruction on a variety of topics, including:
Access to EU, Canadian, Japanese and U.S. HPC-infrastructures
HPC challenges by discipline (e.g., bioinformatics, computer science, chemistry, and physics)
HPC Programming Proficiencies
Performance analysis & profiling
Algorithmic approaches & numerical libraries
The expense-paid program will benefit advanced scholars from European, Canadian, Japanese and U.S. institutions who use HPC to conduct research. Interested students should apply by March 9, 2014.
Meals, housing, and travel from Canada, Japan and the U.S. will be covered for the selected participants. Applications from students in all science and engineering fields are welcome. Preference will be given to applicants with parallel programming experience, and a research plan that will benefit from the utilization of high performance computing systems.
For further information and to apply online, please see the summer school website.
The recent announcements of continued IT infrastructure building in Markham (and across the other southern York Region municipalities of Richmond Hill and Vaughan) reflect an established data centre cluster in the area, including recognizable names such as IBM, Rogers, Compugen, OnX, and HP. Particularly noteworthy is the Vaughan-based SciNet—Canada’s largest supercomputer data centre—a high-performance computing consortium of the University of Toronto and affiliated Ontario hospitals.
Any Canadian academic researcher may obtain a default allocation on any Compute Canada system, including those at SciNet, at any time by registering with the Compute Canada DataBase (CCDB) and requesting accounts at one or more consortia. The size of default allocations varies by system.
A Principal Investigator who requires more than the default allocation (be it computing time or storage space), and who is eligible to apply to national granting councils for funding, must submit a proposal to Compute Canada’s Resource Allocation Committee (RAC). The Call for Proposals is posted on the Compute Canada site each fall, with awarded allocations running 1 Jan to 31 Dec of every year.
The 2014 call for resource proposals is now open. Proposals, with details about their scientific and technical aspects, are to be submitted via Compute Canada’s CCDB site.
Note: The Resource Allocation deadline has been extended. Proposals must be submitted electronically to Compute Canada on or before October 21, 2013 at 3pm (Eastern). It is, however, necessary to have started the application process by October 16.
It’s been a great, busy year here at SciNet in 2012; here’s our take on a SciNet Year in Review as a countdown to what’s already shaping up to be an exciting 2013…
5! SciNet Training
Training and education have always been important to SciNet; it’s one thing to provide computer resources, but we enable research at scale by teaching researchers how to make use of the computers for their work. In 2012:
We held 100 hours of courses, seminars, and Tech Talks,
We launched our new and clearer SciNetHPC.ca website this year, which will be the one-stop shop for news, events, and features about the science being enabled here at SciNet; but don’t worry, our technical wiki, wiki.SciNetHPC.ca, will still be there for all your documentation and training material needs. Some website highlights:
Our new Careers page keeps track of Ontario jobs for researchers with HPC experience;
Our technical wiki served its 750,000th page view this year, and that doesn’t even count the downloads of PDFs of training materials or views of video-recorded educational sessions.
…3! Big Storage for SciNet Users
This was the year we made our large HPSS system for long-term storage available to our users. This sophisticated large-scale storage system offers our users reliable near-line storage for very large data sets. On tape we already have
And the top highlight of the year has to go to working with the great high school students at SATEC in Toronto, who built a supercomputer of their own, learned to program it with MPI and OpenMP, and demo’ed it to their local MPP. These students will be the supercomputing experts and data scientists of the future, and it was a pleasure to work with them.
So thanks for all your emails, tweets, and support through the year, have a wonderful holiday, and…
“SciNet was the natural choice to host, run, and support these supercomputers,” said Dr. Chris Loken, CTO of SciNet. “We’ve built a centre that has the concentration of expertise to support users looking to make use of this system for research and development; and we have one of the largest, most energy-efficient, research computing datacentres in Canada, and still have lots of room to grow.”
SciNet’s green datacentre, which makes use of Canada’s cold winters to help reduce the cost and energy needed to cool these behemoth computers, means that the computers’ rankings on the twice-annual Green 500 list, where the machines are tied for 6th and 24th in the world for energy efficiency, actually understate how efficiently they run. Because of judicious use of the “free cooling” provided by Ontario’s climate whenever possible, almost all the energy used by SciNet’s datacentre goes into compute equipment, not air-conditioning infrastructure. Partly as a result, SciNet uses less than half of the four megawatts of power to which the facility has access.
But although the systems only sip energy compared to similarly large-scale systems, they can tear through “big data” or raw number-crunching computational problems with ease. On the Graph 500 ranking of supercomputers, which ranks the world’s largest computing resources by how well they can handle the sort of big-data problems that arise in business analytics or the digital humanities, the new systems rank as the 13th- and 35th-fastest in the world. And for raw, brute-force number-crunching capability, the larger of the two systems ranks 68th.
While researchers eagerly look forward to using such computational engines for discovery and innovation, some need help scaling their research software up to effectively utilize such powerful computers. “And that’s where SciNet really shines,” says Dr. Daniel Gruner, CTO-Software of SciNet. “We’ve got an amazing team of expert analysts – second to none – who can help researchers and innovative companies retool in order to take advantage of the largest machines in the world, and realize their full potential.”
More information about the computers, SOSCIP, and SciNet’s role can be found at
SciNet is Canada’s largest supercomputer centre, providing Canadian researchers with computational resources and expertise necessary to perform their research on scales not previously possible in Canada. SciNet powers work from the biomedical sciences and aerospace engineering to astrophysics and climate science, and is funded by CFI, NSERC, the Ontario Government, the Federal Economic Development Agency for Southern Ontario, and the University of Toronto. SciNet runs computers for, and provides computational expertise to users of, Compute Canada’s National Platform, the Southern Ontario Water Consortium, and the Southern Ontario Smarter Computing and Innovation Platform.
SOSCIP BG/Qs operating at SciNet
SOSCIP’s BG/Q Supercomputers near final installation at SciNet’s data centre
SciNet CTO Chris Loken with SOSCIP’s BG/Q Supercomputers
An Earth-sized planet has been found around the star closest to the Sun, Alpha Centauri – and while the astronomers used a telescope, it was only with big computing that they could first “see” the planet.
Want to learn more about computation and the Higgs? This PC Advisor article has a very good overview of the massive data challenges that the world’s largest scientific experiment faces, and this blog post describes how the frontiers of computing and of science affect each other.
For the last three years, the coming of summer has meant one thing — the Ontario HPC Summer School. The three Ontario HPC consortia have worked together to teach a week-long intensive “boot camp” introducing HPC and parallel computing to an audience including attendees from academia, government labs, and industry. This year, to better reach our Ontario-wide target audience, multiple sections were taught: one in early June in London (Summer School West), one in late July in Ottawa (Summer School East), and one in Toronto that is just winding up today.
Students learned about working at the shell, the basics of HPC, OpenMP, MPI or CUDA, parallel debugging, and the current state of HPC best practices.
It was a tremendous experience, and we look forward to doing something similar next year!
Using Mellanox end-to-end InfiniBand solutions, SciNet has improved the reliability and stability of their file systems, greatly improving the performance of parallel user jobs and user efficiency. SciNet is seeing at least 15–20% increased performance out of their upgraded cluster and expects to be high on the TOP500 list when the update is issued at ISC.
Since the start of the year, on Thursdays after school, students at SATEC @ W. A. Porter Collegiate have met in their school’s ICT lab to build and program a supercomputer of their own.
As computing grows more important — for creating the next Facebook or Google, or for designing bicycles, studying black holes, or improving Canadians’ health — using a single computer just isn’t enough. More and more, firms and researchers turn to “cluster computing”, building a single supercomputer out of many individual computers.
Three of the students wrestling with an OpenMP problem
Working with staff from SciNet, Canada’s largest open supercomputing centre, the students at SATEC (many of whom have been part of the school’s ICT Specialist High Skills Major program), with the help of their teacher Sacha Noukhovitch and their ICT department, learned how to write parallel programs to tackle the biggest computations, faster. They learned to use OpenMP to make use of all the processors on modern motherboards, and MPI to communicate between different nodes within a cluster.
Some of the students are cabling up what will be the "Goliath" cluster, a cluster of 3 old single-core Pentium-4 desktops with 100Mb ethernet.
Learning the theory wasn’t enough for these students, however. They started building their own cluster, first using old PCs in one of the school’s computer labs, networking them together and installing BCCD Linux on the systems to start running them as one single compute resource.
Once this computer, “Goliath”, was up and running, the students started assembling a LittleFe cluster, “David”: a cluster with 12 Intel Atom processors and 6 NVIDIA CUDA-capable GPUs, with a top speed of 13.6 GFLOPS (13.6 billion mathematical operations per second — a power that would have made it one of the world’s fastest computers in 1997).
All the parts for a LittleFe gathered together. Doesn't look much like a supercomputer at this point.
The LittleFe cluster construction was one part computer engineering and one part carpentry, assembling the aluminum frame, running cables and wiring, mounting the six motherboards to plates which slide into the finished frame. Once the machine was physically assembled, which took about a month, the software side began, installing BCCD Linux on to the “head node” and then having the “client nodes” boot from the head node.
On May 9th, the David cluster powered up for the first time. Then the experiments began – how much faster was this mini-supercomputer than Goliath, or their desktops? What sorts of problems work well in parallel on this cluster?
The students presented their results on May 17th at University College at the University of Toronto, at a poster session for high school students working with University of Toronto researchers. They presented their scaling results and a summary of what they learned, and had both the Goliath and David clusters up and running. The project generated a huge amount of interest as visitors flocked to the booth to ask questions.
Lorenzo Berardinetti, MPP for Scarborough Southwest, came to visit the students and their cluster later at SATEC, and was so impressed that he gave a statement in the Legislature about the project:
Three of the students wrestling with an OpenMP problem
MPI is harder than OpenMP
Some of the students here are cabling up what will be the “Goliath” cluster, a cluster of 3 old Pentium-4 desktops with 100Mb ethernet.
Booting up BCCD Linux on the Goliath cluster head node.
Getting the cluster networking set up on the Goliath cluster; not easy with old hardware!
Watching programs run on the Goliath cluster
All the parts for a LittleFe gathered together.
Mounting the Jetway Intel Atom motherboards on the aluminum assemblies that will go into the frame
Testing the frame, making sure the mainboard assemblies fit
The aluminum frame completed
The final motherboard goes into the LittleFe frame
The team at their booth at University College
Visitor asking questions about the project
Four members of the team explaining the cluster to a visitor
Peering into the inner workings of a portable supercomputer
Many visitors listening to explanations about the project
A visitor asks questions about the “David” cluster
An interested visitor views the LittleFe cluster
There’s lots of interest at the booth!
Demoing the “David” cluster to visitors to the booth
Explaining the poster and clusters to visitors to the booth.
Demonstrating the cluster to interested students
The whole team in West Hall, University College
SATEC students are multi-talented! Muntashir plays the piano at University College West Hall.
SATEC students are multi-talented! David plays the piano at University College West Hall.