Computing and Planet-Finding

October 17, 2012 in blog, blog-general, blog-technical

An Earth-sized planet has been found around the star system closest to the Sun, Alpha Centauri – and while the astronomers used a telescope, it was only with big computing that they could first “see” the planet.

SciNet and the Discovery of the Higgs Boson

July 4, 2012 in blog, blog-general, frontpage, in_the_news

“SciNet is absolutely central to make anything out of what happens,” Teuscher [a University of Toronto ATLAS Researcher] said in this Toronto Star article.

SciNet, and the other Compute Canada centres, play a significant role in the work of the Large Hadron Collider and the physicists who use it.

Want to learn more about computation and the Higgs? This PC Advisor article has a very good overview of the massive data challenges that the world’s largest scientific experiment faces, and this blog post describes how the frontiers of computing and of science affect each other.

There are many excellent video descriptions of the physics, such as What is the Higgs boson? by theoretical physicist John Ellis, and this explanation of the Higgs mechanism by CMS (one of the CERN experiments) spokesperson Joe Incandela. And this week’s CERN Bulletin has a number of articles describing both the physics and the experimental details that went into this discovery.

For a University of Toronto perspective, the University of Toronto news has a good writeup.

The resulting science papers are starting to come out, and some are freely available:
Landmark Papers on the Higgs Boson Published and Freely Available in Elsevier’s Physics Letters B, and Observation of a new particle in the search for the Standard Model Higgs boson with the ATLAS detector at the LHC.

Ontario HPC Summer School 2012 (Central ON: Toronto)

June 28, 2012 in blog, blog-general

For the last three years, the coming of summer has meant one thing — the Ontario HPC Summer School. The three Ontario HPC Consortia have worked together to teach a week-long intensive “boot camp” introducing HPC and parallel computing to an audience including attendees from academia, government labs, and industry. This year, to better reach our Ontario-wide target audience, multiple sections are being taught; one was taught in early June in London (Summer School West), another will be taught in late July in Ottawa (Summer School East), and the one in Toronto is just winding up today.

Students learned about working at the shell, the basics of HPC, OpenMP, MPI or CUDA, parallel debugging, and the current state of HPC best practices.
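
For the curious, here is a minimal sketch of the kind of OpenMP exercise such a course covers (illustrative only, not the actual course material): all the cores in a node cooperate to estimate pi, with the reduction clause combining each thread’s partial sum.

```c
/* Illustrative sketch only, not actual course material: estimate pi with
 * the midpoint rule, splitting the loop across all available OpenMP threads. */
#include <stdio.h>
#include <omp.h>

int main(void) {
    const long n = 100000000;          /* number of rectangles */
    const double dx = 1.0 / n;
    double pi = 0.0;

    /* Each thread accumulates a private partial sum; the reduction clause
       combines them into a single total at the end of the loop. */
    #pragma omp parallel for reduction(+:pi)
    for (long i = 0; i < n; i++) {
        double x = (i + 0.5) * dx;
        pi += 4.0 / (1.0 + x * x) * dx;
    }

    printf("pi is approximately %.12f (using up to %d threads)\n",
           pi, omp_get_max_threads());
    return 0;
}
```

It compiles with any OpenMP-capable compiler, for example gcc -fopenmp pi.c -o pi, and the same code runs on one core or on all of them.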

It was a tremendous experience, and we look forward to doing something similar next year!

SciNet’s InfiniBand Upgrade on The HPC Rich Report

June 16, 2012 in blog, blog-general, in_the_news

Our CTO, Dr. Chris Loken, was on this week’s Rich Report HPC podcast with Gilad Shainer of Mellanox and the HPC Advisory Council, describing our recent InfiniBand upgrade and the improvements it brings for our user community; the podcast was also featured on insideHPC.

Using Mellanox end-to-end InfiniBand solutions, SciNet has improved the reliability and stability of their file systems, greatly improving the performance of parallel user jobs and user efficiency. SciNet is experiencing at least 15-20% increased performance out of their upgraded cluster and expects to place high on the TOP500 list when the updated list is issued at ISC.

SATEC Students Build Supercomputer with SciNet

May 18, 2012 in blog, blog-general, for_educators, frontpage

Since the start of the year, on Thursdays after school, students at SATEC @ W. A. Porter Collegiate have met in their school’s ICT lab to build and program a supercomputer of their own.

As computing grows more important — for creating the next Facebook or Google, or for designing bicycles, studying black holes, or improving Canadians’ health — using a single computer just isn’t enough.  More and more, firms and researchers turn to “cluster computing”, building a single supercomputer out of many individual computers.

Learning parallel programming

Three of the students wrestling with an OpenMP problem

Working with staff from SciNet, Canada’s largest open supercomputing centre, the SATEC students (many of whom have been part of the school’s ICT Specialist High Skills Major program), with the help of their teacher Sacha Noukhovitch and the school’s ICT department, learned how to write parallel programs to tackle the biggest computations, faster. They learned to use OpenMP to make use of all the processor cores on a modern motherboard, and MPI to communicate between different nodes within a cluster.
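
To give a flavour of what that looks like, here is a small MPI sketch (not the students’ actual code) of the kind of “hello, cluster” program used when first learning node-to-node communication: each process reports which node it is running on, and rank 0 collects a sum from all of them.

```c
/* Illustrative sketch only, not the students' actual code: each MPI process
 * reports which node it is running on, then rank 0 gathers a simple sum. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    char node[MPI_MAX_PROCESSOR_NAME];
    int len;
    MPI_Get_processor_name(node, &len);
    printf("Rank %d of %d running on node %s\n", rank, size, node);

    /* Every rank contributes its rank number; rank 0 receives the total. */
    int total = 0;
    MPI_Reduce(&rank, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("Sum of all %d ranks: %d\n", size, total);

    MPI_Finalize();
    return 0;
}
```

Launched with, say, mpirun -np 12 ./hello, every process in the cluster takes part in the same run.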

Assembling the "Goliath" Cluster

Some of the students cabling up what will be the “Goliath” cluster, built from three old single-core Pentium 4 desktops connected by 100 Mb/s Ethernet.

Learning the theory wasn’t enough for these students, however. They started building their own cluster, first using old PCs in one of the school’s computer labs, networking them together and installing BCCD Linux on the systems to start running them as a single compute resource.

Once this computer, “Goliath”, was up and running, the students started assembling a LittleFe cluster, “David”: a cluster with 12 Intel Atom cores and 6 NVIDIA CUDA-capable GPUs, with a top speed of 13.6 GFLOPS (13.6 billion mathematical operations per second, a speed that would have made it one of the world’s fastest computers in 1997).

All the parts for a LittleFe gathered together.   Doesn't look much like a supercomputer at this point.

The LittleFe cluster construction was one part computer engineering and one part carpentry: assembling the aluminum frame, running cables and wiring, and mounting the six motherboards to plates which slide into the finished frame. Once the machine was physically assembled, which took about a month, the software side began: installing BCCD Linux onto the “head node” and then having the “client nodes” boot from the head node.

It's Alive!

On May 9th, the David cluster powered up for the first time. Then the experiments began – how much faster was this mini-supercomputer than Goliath, or their desktops?   What sorts of problems work well in parallel on this cluster?

The students examined the scaling behaviour of three different computer programs on the cluster: Galaxsee, which simulates gravitational N-body dynamics such as early models of galaxy formation; Life, a parallel implementation of Conway’s game of life; and Monkey, software written by one of the SATEC students investigating the “infinite monkey theorem”, measuring the rate at which a sequence of randomly-generated characters matches text from Hamlet.
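
The students’ Monkey code itself isn’t reproduced here, but a rough sketch of the idea looks like this: generate random characters, count how often a short snippet from Hamlet appears by chance, and, since the trials are independent, spread them across all the cores with OpenMP.

```c
/* Rough sketch of the "infinite monkey" measurement; the student's own
 * program is not shown here, and the snippet and trial count are arbitrary. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <omp.h>

int main(void) {
    const char *alphabet = "abcdefghijklmnopqrstuvwxyz ";   /* 27 characters */
    const char *target = "to be";                           /* snippet from Hamlet */
    const size_t tlen = strlen(target);
    const size_t nalpha = strlen(alphabet);
    const long long trials = 100000000LL;
    long long hits = 0;

    /* Trials are independent, so they can be spread over all the cores. */
    #pragma omp parallel reduction(+:hits)
    {
        unsigned int seed = 12345u + 977u * omp_get_thread_num();
        #pragma omp for
        for (long long t = 0; t < trials; t++) {
            size_t i;
            for (i = 0; i < tlen; i++) {
                char c = alphabet[rand_r(&seed) % nalpha];
                if (c != target[i]) break;    /* wrong letter: this trial fails */
            }
            if (i == tlen) hits++;            /* the monkey typed the whole snippet */
        }
    }

    printf("Matched \"%s\" %lld times in %lld trials (rate %.3g)\n",
           target, hits, trials, (double)hits / trials);
    return 0;
}
```

For the 27-character alphabet used here, the measured rate can be compared against the theoretical probability of (1/27) raised to the power of the snippet length.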

The team at their booth at University College

The students presented their results on May 17th at University College at the University of Toronto, at a poster session for high school students working with University of Toronto researchers. They presented their scaling results and a summary of what they learned, and had both the Goliath and David clusters up and running. The project generated a huge amount of interest as visitors flocked to the booth to ask questions.

Lorenzo Berardinetti, MPP for Scarborough Southwest, came to visit the students and their cluster later at SATEC, and was so impressed that he gave a statement in the Legislature about the project.

Science Rendezvous 2012

May 12, 2012 in blog, blog-general, for_educators

We had a great time at Science Rendezvous this year!

This year, explorers of all ages at our booth found out how researchers use computers for discovery. They saw how even simple computer simulations that you can run in your web browser can teach important lessons about how complex systems behave.

Learning how forest fires spread at Science Rendezvous 2012

A Science Rendezvous explorer learns how computer simulations can teach us about how forest fires spread

Canada is a leader in forest fire research; we have huge stands of forest, and we must understand how fires behave if we’re to prevent them. Students explored the Forest Fire application by Shodor and saw how wind speed and forest density affect the outcome in a simplified model of how forest fires spread.
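
For a sense of how simple such a model can be, here is a minimal sketch of the same kind of cellular-automaton forest fire (not Shodor’s actual code, and with a hypothetical wind parameter): each cell is a tree with some probability, fire starts along one edge, and burning trees ignite their neighbours with a chance that the wind biases eastward.

```c
/* Minimal sketch of a cellular-automaton forest fire model (not Shodor's
 * actual code): trees are placed with probability `density`, fire starts on
 * the west edge, and a crude `wind` parameter biases spread to the east. */
#include <stdio.h>
#include <stdlib.h>

#define N 200
enum { EMPTY, TREE, BURNING, BURNT };
static int grid[N][N], next[N][N];

static void try_ignite(int i, int j, double p) {
    /* A burning neighbour ignites this cell with probability p, if it is a tree. */
    if (i >= 0 && i < N && j >= 0 && j < N && grid[i][j] == TREE &&
        (double)rand() / RAND_MAX < p)
        next[i][j] = BURNING;
}

int main(void) {
    const double density = 0.6, spread = 0.5, wind = 0.3;

    srand(42);
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            grid[i][j] = ((double)rand() / RAND_MAX < density) ? TREE : EMPTY;
    for (int i = 0; i < N; i++)                 /* fire starts on the west edge */
        if (grid[i][0] == TREE) grid[i][0] = BURNING;

    for (int burning = 1; burning; ) {
        burning = 0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                next[i][j] = grid[i][j];
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                if (grid[i][j] == BURNING) {
                    try_ignite(i - 1, j, spread);        /* north */
                    try_ignite(i + 1, j, spread);        /* south */
                    try_ignite(i, j - 1, spread);        /* west  */
                    try_ignite(i, j + 1, spread + wind); /* east, wind-aided */
                    next[i][j] = BURNT;
                }
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                if ((grid[i][j] = next[i][j]) == BURNING) burning = 1;
    }

    int trees = 0, burnt = 0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            if (grid[i][j] == TREE)  trees++;
            if (grid[i][j] == BURNT) burnt++;
        }
    printf("density %.2f, wind %.2f: %.1f%% of the forest burned\n",
           density, wind, 100.0 * burnt / (trees + burnt));
    return 0;
}
```

Varying density and wind changes how far the fire spreads, which is the kind of behaviour visitors explored with the browser version.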

Students also crashed galaxies together with Galcrash; processes that take billions of years unfolded as they watched, and they saw how different galaxy orientations — and even turning the effects of gravitational dark matter “on” and “off” — change the outcome of these enormous events.

Students collide galaxies at Science Rendezvous 2012

Students use computer simulations to explore how galaxy collisions - which unfold over billions of years - take place

But as researchers ask harder questions – what if we wanted to include topography in our forest fire simulation?  Or more realistic galaxies in our galaxy collision simulations? – they quickly outgrow what can be done on a single computer.  That’s where we come in!

Discussing Cluster Computing at Science Rendezvous 2012

SciNetter Danny Gruner explains cluster computing and our Little Fe cluster at Science Rendezvous 2012

To show how supercomputers work, we also debuted our LittleFe cluster, showing how networking ordinary computers together can, with carefully written software, harness the combined processing power and memory of many machines to tackle bigger problems than a single computer could.

Many thanks to the organizers who made this possible, and to everyone who turned out on a Saturday to discover science!

Success Story: Intel and Intel® MPI Library

February 29, 2012 in blog-general, for_industry, success_story

When clients have extremely large-scale computations to do, SciNet recommends that they use Intel® MPI Library, a highly tuned, massively scalable communications library that performs well where competitors have trouble even starting.   And when Intel was looking to further improve the performance of their flagship MPI library, they knew they could turn to SciNet and its massive cluster to serve as the ultimate testbed.

SciNet’s GPC cluster, with four thousand nodes connected by Ethernet, is one of the largest flat networks of its kind in the world, and provided an extremely valuable testing facility for tuning the MPI library’s performance at very large scales. “Over the years we have worked closely with SciNet to develop and tune our Intel MPI Library symbiotically with their high performance computing applications on their compute platform,” said Sanjiv Shah, Director, Technical Computing Software, SSG, at Intel.

“As SciNet’s computational infrastructure and demands from their research community have grown, we have worked closely to ensure that Intel software development tools continue to scale with their computational demands,” Shah added.

The Intel team used up to 30,000 processors at a time on SciNet’s GPC cluster to test and tune the latest version of their MPI library, released as Intel® MPI Library v4.0, and SciNet users used an early version of the library to run one of Canada’s largest simulations.
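
What does testing and tuning at that scale involve? Among other things, timing the library’s communication primitives as the process count grows. Here is a sketch of the kind of micro-benchmark one might run (not Intel’s actual test suite): timing a global MPI_Allreduce, a collective operation whose cost grows with the number of processes and so exposes scaling problems quickly.

```c
/* Illustrative sketch, not Intel's actual test suite: time a repeated
 * MPI_Allreduce across all processes to see how the collective scales. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int reps = 1000;
    double local = (double)rank, global = 0.0;

    MPI_Barrier(MPI_COMM_WORLD);          /* start everyone at the same time */
    double t0 = MPI_Wtime();
    for (int r = 0; r < reps; r++)
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    double elapsed = MPI_Wtime() - t0;

    if (rank == 0)
        printf("%d processes: %.2f microseconds per MPI_Allreduce\n",
               size, 1.0e6 * elapsed / reps);

    MPI_Finalize();
    return 0;
}
```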

But the demand for even more powerful computers and extreme scaling never ends, and so development, and the partnership, continues. “We’re delighted to continue our fruitful partnership with Intel, helping develop and test their massively scalable MPI library implementation,” said Dr. Daniel Gruner, SciNet’s CTO-Software. “Intel has become a very important software company, taking its cues from the needs of high-performance users, such as those using the SciNet systems, who are always pushing the envelope for computational science.”

Compute Canada Allocates Nearly $80 Million Worth of Powerful Computing Resources To Support Canadian Researchers

February 27, 2012 in blog-general, in_the_news

Supercomputers and data centres across Canada tasked with supporting national-scale research problems

[This release is available as a PDF here, and in French here. It also appeared at insideHPC.]

(February 27, 2012 – Ottawa) – Compute Canada, Canada’s national platform of High Performance Computing (HPC) resources and partners, today announced grants of nearly $80 million worth of state-of-the-art computing, storage, and support resources allocated to 159 leading-edge Canadian research projects across the country. Compute Canada’s distributed resources represent close to two petaflops of compute power, which is equal to two quadrillion calculations per second, and more than 20 petabytes of storage, equivalent to more than 400 million four-drawer filing cabinets filled with text. These competitively awarded grants will allocate nearly 725 million processor hours and eight petabytes of storage to the projects over the next year. Researchers will also have direct access to more than 40 Compute Canada programming and technical experts who are critical to enabling the efficient use of these state-of-the-art HPC systems.

“The scope and scale of today’s research investigations demand an incredible amount of computational power,” said Compute Canada Executive Director, Susan Baldwin. “Compute Canada responds to that need by delivering the essential tools and resources Canadian researchers need to respond to today’s big data challenges, propel ground-breaking discoveries, and develop new industrial applications or commercial opportunities.” Each year Compute Canada accepts requests from researchers across the country whose projects require cutting-edge computing resources, storage, and expertise. The projects — which range from aerospace design and climate modeling to medical imaging and nanotechnology — produce results and breakthroughs that in many cases simply wouldn’t be possible without Compute Canada’s resources.

“I’ve always been a champion of HPC because it enables us to perform the kind of complex, large-scale calculations that are essential for verifying our ideas and uncovering new findings,” says André Bandrauk, a University of Sherbrooke Professor of Theoretical Chemistry and Canada Research Chair in Computational Chemistry & Molecular Photonics. “These resources are critical for driving advancements in Canadian research as well as enabling Canadian researchers to compete on the international stage.”

The partner institutions and resource centres that comprise Compute Canada are hubs of interdisciplinary computational research, connected from coast to coast by the high-speed national CANARIE network and regional advanced networks. Together, these distributed computing facilities work collaboratively to provide the expertise and resources necessary to give Canada’s researchers and innovators access to these world-class technologies. Compute Canada’s resources are granted based on scientific merit and computational need. In addition to the competitively-allocated grants for above average computing requirements, all Canadian researchers have access to significant default allocations of computational resources and support expertise. For more information on Compute Canada, its regional consortia, and its distributed resources, visit the Compute Canada website: www.computecanada.org.

Media Contact:
Susan Baldwin
Executive Director, Compute Canada
susan.baldwin@computecanada.org

– 30 –

* Compute Canada can also arrange media interviews with project representatives from any of the 2012 Resource Allocation recipients. *

BACKGROUND

2012 Resource Allocations Recipients

A PDF version of the complete list of resource recipients for 2012 can be found here, or at this web page.

Compute Canada

Compute Canada is Canada’s national platform of supercomputing resources, bringing together computer and data facilities, computational expertise, and hundreds of academic researchers to tackle some of Canada’s biggest research challenges. Compute Canada has built a user community across Canada in disciplines ranging from the sciences and engineering to arts and humanities. Each year, Compute Canada’s Resource Allocation Committee awards resources to Canadian research projects, which are selected based on their scientific merit. For more information about Compute Canada or the 2012 resource allocations, please visit https://computecanada.org.

What is supercomputing?

Supercomputing, or High Performance Computing (HPC), uses the largest and most powerful computers available to tackle the biggest problems facing science, society, and industry. Supercomputers’ massive number of processors and specialized software capabilities enable them to tackle extremely complex and large-scale computational problems. For example, a calculation-intensive task that would take a single PC years to complete can be solved by a supercomputer in an hour. This does more than shorten the time to get an answer; it makes new types of analysis and understanding possible. From generating computer models of unprecedented fidelity in the medical, biological, and earth sciences, to analyzing vast amounts of data in fields such as space research in astronomy, text or musical archiving in the humanities, or complex financial projections in industry, supercomputers provide an extensive set of hardware to build Canada’s skills and capabilities in science, technology, and the economy.

10,000,000 Computations Served… and counting!

February 8, 2012 in blog-general, in_the_news

In the early hours of Sunday, Feb 5th, SciNet’s GPC supercomputer quietly performed its ten-millionth set of calculations for Canadian researchers, crossing the milestone with a simulation for an international particle physics experiment.

Like a virtual factory, the SciNet computing systems run twenty-four hours a day, seven days a week; each second they are running, they complete up to 300,000,000,000,000 mathematical operations. Researchers across the country use the internet to construct their simulation or data analysis tasks remotely; the supercomputer then assigns each task to a collection of its 40,000 processors as they become available. The systems tackle such tasks as biomedical research, including studying Alzheimer’s and brain function; aerospace research, such as finding cleaner-burning mixes of biofuels; and astronomy, such as finding signals in the very first light to travel through the Universe.

“We’re enormously pleased that our centre, its people, and its facilities have been in such high demand from researchers across Ontario and all of Canada,” said Dr. Chris Loken, Chief Technical Officer of SciNet. “To have built something that has proven so essential for so many scientists, engineers, and others that it’s been asked to provide ten million compute ‘jobs’ in just two and a half years is remarkable.”

The science behind the ten-millionth job involves some of the most fundamental physics possible: the search to understand the basic properties of matter and the forces that govern the universe. ATLAS, an international experiment based at the Large Hadron Collider in Geneva, Switzerland, is one of SciNet’s biggest users. “The SciNet facility provides the largest ATLAS Tier-2 Analysis Facility in Canada, and we now run about 8,000 jobs a day for ATLAS, about 2.2 million in the last 12 months,” said Dr. Leslie Groer, in charge of the ATLAS project at SciNet. “Simulation calculations like this one are vital to understand the ATLAS detector and to analyze the physics coming from the largest experiment in the world.”

This article was featured on InsideHPC.

About SciNet:

SciNet is Canada’s largest supercomputer centre, providing Canadian researchers with the computational resources and expertise necessary to perform their research on scales not previously possible in Canada, from the biomedical sciences and aerospace engineering to astrophysics and climate science. More information is available at http://www.scinet.utoronto.ca.

About Compute Canada

Compute Canada is a national platform of advanced computing resources across the country, bringing together computer and data resources, academic researchers, and computational expertise to tackle some of Canada’s biggest research questions. Compute Canada has built a user community across Canada in disciplines ranging from the sciences and engineering to arts and humanities. The Compute Canada Resource Allocation Committee annually awards supercomputing time to projects on the basis of scientific merit. For more information about Compute Canada or this year’s allocations, see https://computecanada.org.

The Daily Beast: Supercomputer Programmers Wanted

January 6, 2012 in blog, blog-general



Scientists refer to the talent shortage as the “missing middle,” meaning there are enough specialists to run the handful of world-beating supercomputers that cost a few hundred million dollars, and plenty of people who can manage ordinary personal computers and server computers—but there are not nearly enough people who know how to use the small and mid-sized high-performance machines that cost anywhere from $1 million to $10 million.

From The Daily Beast, 28 Dec.

Interested in learning about programming supercomputers?   SciNet offers regular courses on a variety of topics!