SciNet News February 2012

February 5, 2012 in for_researchers, for_users, newsletter



    To mitigate some of the file system problems, there will be a relatively short downtime of all SciNet systems on Thursday to perform a reconfiguration. The downtime is expected to last approximately two hours. Check the wiki for updates.

  • Wed Feb 8, 12:00 noon: SCINET USER GROUP (SNUG) MEETING

    The SciNet Users Group (SNUG) meetings are every month on the second Wednesday, and involve pizza, user discussion, feedback, and one or two short talks on topics or technologies of interest to the SciNet community.

    This time, we will have

    • TechTalk by Jonathan Dursi (SciNet) on

    “Tuning your MPI application without writing code: mpitune and otpo”

    MPI libraries are very complicated packages, with many tunable parameters that affect their behaviour. These parameters are set to reasonable defaults that should make sense for most applications, but sometimes modest adjustments to these settings can improve the performance of your code. We’ll discuss automated tools for the IntelMPI and OpenMPI libraries that allow testing large numbers of these parameters, and how they can help you improve the performance of your code.

    • User discussion
    • Pizza!
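
    To give a flavour of what such tuning looks like, individual MPI parameters can also be set by hand on the mpirun command line. The application name and parameter values below are hypothetical, chosen only for illustration; the talk will cover the automated tools that search over many such settings:

    ```shell
    # OpenMPI: override a tunable MCA parameter for a single run
    # (btl_tcp_eager_limit controls the eager/rendezvous message threshold)
    mpirun --mca btl_tcp_eager_limit 65536 -np 16 ./my_app

    # IntelMPI: select a specific allreduce algorithm via an environment variable
    mpirun -env I_MPI_ADJUST_ALLREDUCE 5 -np 16 ./my_app
    ```

    Tools such as mpitune (IntelMPI) and otpo (OpenMPI) automate the search over many such parameters, rather than trying values one at a time.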

    Sign up at


    Part III of SciNet’s Scientific Computing course. Each part can be taken as a “mini-course” or “modular course” by astrophysics and physics graduate students.

    More info:


  • Thu Feb 23 and Fri Feb 24: SciNet will be hosting and co-teaching a Software Carpentry scientific computing boot-camp during reading week.

    Since 1998, Software Carpentry has taught scientists and engineers the skills and tools they need to use computing more productively. Thanks to a grant from the Sloan Foundation, we are running two-day workshops at selected institutions, followed by 4-8 weeks of self-paced online learning. Each workshop will cover:

    • Using the Unix shell to get more done in less time
    • Using version control to manage and share information
    • Basic Python programming
    • How (and how much) to test programs
    • Working with relational databases

    The online follow-up will go deeper into these topics, and also touch on program design and construction, matrix programming, data management, and development life cycles for small research teams.

    Registration details to follow; keep an eye on

  • SciNet is a local seminar location for the Coast-to-Coast seminar series. Dates: Feb 21, Mar 6, Mar 20, and Apr 3, from 2:30 to 3:30 pm EST. More info at
  • Mar 14/Apr 11/May 9, at noon: FUTURE SNUG MEETINGS

    We are still looking for users (students, postdocs, staff, faculty; it does not matter) willing to give a short talk (20-30 minutes) about interesting work they did on SciNet clusters and how they did it! If you are up for it, email

    More info on future SNUGs and sign-up at (Mar) (Apr) (May)


  • GPC: Due to some changes we are making to the GigE nodes, if you run multinode ethernet MPI jobs, you will need to explicitly request the ethernet interface in your mpirun:

    For OpenMPI: mpirun --mca btl self,sm,tcp

    For IntelMPI: mpirun -env I_MPI_FABRICS shm:tcp

    There is no need to do this if you run on IB, or if you run single-node MPI jobs on the ethernet (GigE) nodes. Please check the ‘GPC MPI Versions’ wiki page for more details.
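
    As a sketch, a multinode ethernet job script might look as follows. The node count, walltime, job name, and application name ./my_app are placeholders; only the mpirun lines follow from the change described above:

    ```shell
    #!/bin/bash
    #PBS -l nodes=2:ppn=8,walltime=1:00:00
    #PBS -N eth-mpi-job
    cd $PBS_O_WORKDIR

    # OpenMPI: explicitly request the ethernet (tcp) transport
    mpirun --mca btl self,sm,tcp -np 16 ./my_app

    # IntelMPI equivalent (use instead if running with the IntelMPI module):
    # mpirun -env I_MPI_FABRICS shm:tcp -np 16 ./my_app
    ```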

  • The new Resource Allocations took effect on January 9 for groups that were awarded an allocation. This includes storage allocations.
  • Note that the ‘diskUsage’ command from the ‘extras’ module can be used to query your disk usage, your group’s disk usage, and the quotas (including number of files), for each of the file systems that you have access to.
  • For groups with storage allocations, we will start making backups of the project file system. This is possible now that most material resides on HPSS. Note that this backup system does not keep full snapshots of the past, but only a copy of the most recent version of each file. So, if any data accidentally gets deleted and you contact us quickly, it can be restored.
  • GPC: On January 30th, CentOS 5 was phased out.
  • GPC: A more recent module for valgrind/3.7.0 was installed which includes valkyrie, a visualization tool for memcheck.
  • GPC: A module for scalapack/2.0.1 was installed.
  • GPC: A newer version of R was installed as module R/2.14.1 (users have to explicitly request this version; 2.13.1 is still the default).
  • GPC: Newer versions of the GSL were installed as modules gsl/1.15-gsl and gsl/1.15-intel (these are also not the default yet).
  • A milestone was reached on Sunday February 5th when the 10,000,000th job ran on SciNet. It started at 4:31 am and ran for 2 hours 13 minutes and 31 seconds. It was a job from the ATLAS project (
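
A minimal sketch of how the module-related items above are used from the command line (module and version names are taken from the items above; the output of diskUsage varies per user and file system):

```shell
# Query your disk usage, your group's usage, and quotas
# (the diskUsage command comes from the 'extras' module)
module load extras
diskUsage

# The newer R and GSL modules are not yet the default and must be
# requested explicitly:
module load R/2.14.1
module load gsl/1.15-gsl    # or gsl/1.15-intel for the Intel build
```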


All new wiki content below is listed and linked on the main page:
  • The slides of the lectures of part II of the Scientific Computing Course on “Numerical Tools for Physical Scientists”.
  • The page on ‘GPC MPI Versions’ was updated.
  • Information about part III of the Scientific Computing Course on “High Performance Scientific Computing”.


  • Jan 9: “Intro to the Linux shell” session was given.
  • Jan 11: “Intro to SciNet” session was held.
  • Jan 11: SNUG meeting was held, with a TechTalk by Chris Neale on “Kinetics of Hydrophobic Gating and Energetics of Magnesium Permeation in the Bacterial Divalent Cation Transport System CorA”
  • Jan 13, 20, 27, Feb 3: Part II of SciNet’s Scientific Computing course on “Numerical Tools for Physical Scientists” was given.