Harvard NRG Tools

From Neuroinformatics Research Group



Since 2011, the Neuroinformatics Research Group has migrated the majority of its software from nexus-tools into a standard software repository. The goal of this project was to develop a more stable, maintainable, and tightly controlled software library for production applications.

Following project launch, we have committed numerous bug fixes, code clean-ups, features, and solutions to a wide variety of compatibility issues. We continue to work closely with users and developers from the Harvard and Martinos/MGH communities to identify and resolve issues that are affecting productivity.

The NRG processing environment and scripts are only supported under the bash shell, which is the default shell on most major GNU/Linux distributions. If you are at the Martinos Center, you can have your shell changed permanently by emailing help@nmr.mgh.harvard.edu.
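As a quick sanity check before sourcing anything, you can confirm you are actually in bash. This is a minimal sketch; the echoed messages are our own, not part of the NRG tools:

```shell
# The NRG scripts are bash-only, so confirm the current shell is bash.
# $BASH_VERSION is set only when bash itself is running this code.
if [ -n "$BASH_VERSION" ]; then
    echo "running bash $BASH_VERSION"
else
    echo "not bash: switch shells before using the NRG tools" >&2
fi
```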

How to use the library

Each release of this repository is located at a fixed path within both the Martinos/MGH and NCF environments.

NCF:

user@compute-node:~$ DIR=/ncf/tools

Martinos/MGH:

user@compute-node:~$ DIR=/cluster/nrg/tools

The most recent stable1 release as of this writing is 0.9.9; the most recent beta is 0.10.0b. To use a release, you must source the appropriate environment setup script, env_setup.sh.

NCF or Martinos/MGH

user@compute-node:~$ . ${DIR}/0.9.9/code/bin/env_setup.sh

Please refer to the Notes for users section for known issues and caveats, e.g., sourcing multiple setup scripts within the same login session. This setup script will not work for csh or tcsh users; for those users, we suggest switching to bash swiftly and quietly. If you intend to take full advantage of the more recent features, consider this a requirement. You may need to contact your systems administrator to permanently change your login shell. If you do decide to continue using csh or tcsh for whatever reason, we offer a very basic environment setup script for each site:


NCF:

user@compute-node:~% source ${DIR}/0.9.9/code/bin/ncf_setup.csh

Martinos/MGH:

user@compute-node:~% source ${DIR}/0.9.9/code/bin/mgh_setup.csh
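After sourcing a setup script, you can sanity-check that it took effect by looking for one of the variables it exports, such as $_HVD_SPM_DIR. This is only a sketch: the export below is a stand-in so the snippet runs anywhere, whereas on a real system env_setup.sh sets the value itself:

```shell
# Stand-in for the export that env_setup.sh performs on a real system.
export _HVD_SPM_DIR=/path/to/spm

# If the variable is set, the setup script has (apparently) been sourced.
if [ -n "${_HVD_SPM_DIR:-}" ]; then
    echo "NRG environment appears to be loaded"
else
    echo "env_setup.sh does not appear to have been sourced" >&2
fi
```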

1 While we do test backward compatibility to a degree, there is still plenty of abandoned code.

Notes for users

It is not recommended that you source multiple environment setup scripts from within the same login session. If you feel the need to do this, you must ensure that all changes made to your environment by any previously sourced scripts are reverted, which can be a non-trivial task. There is an experimental solution that works on occasion:

NCF or Martinos/MGH

user@compute-node:~$ . ${DIR}/0.9.9/code/bin/env_restore.sh

This script should not be used in a production setting. It will attempt to restore your environment to a state before any changes were made by env_setup.sh. Your safest bet is to switch your environment, log out, and log back in.
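If you want to see exactly what a setup script did to your environment, a crude but portable approach is to snapshot the environment before and after sourcing it and compare the two. This is a sketch: the single export below stands in for sourcing env_setup.sh.

```shell
# Snapshot the environment, make a change (env_setup.sh would make many),
# then list what is new or different in the second snapshot.
before=$(mktemp)
after=$(mktemp)
env | sort > "$before"
export _HVD_DEMO=1             # stand-in for sourcing env_setup.sh
env | sort > "$after"
comm -13 "$before" "$after"    # lines present only in the after snapshot
rm -f "$before" "$after"
```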

If you use one of the Harvard/NRG Tools releases, do not continue to source the older mgh_setup scripts from nexus-tools or the nmr-stable*-env scripts. Choose one or the other. The interactions between these scripts are completely unknown and probably not good.

If you are at MGH, do not use the software in /cluster/vc/buckner, /cluster/vc/staging, or /cluster/vc/unstable unless you're asked to do so. Similarly on the NCF, do not use the software in /ncf/tools/unstable or /ncf/tools/staging. These repositories are for development purposes only and may not behave reliably or in a backwards-compatible manner.

One of the environment variables set by env_setup.sh is $_HVD_SPM_DIR. If you want to load this version of SPM every time you start Matlab, place the following line into your ~/matlab/startup.m:

addpath(genpath(getenv('_HVD_SPM_DIR')));

This will prepend SPM and its subdirectories to your Matlab path. To append instead, pass the '-end' switch:

addpath(genpath(getenv('_HVD_SPM_DIR')), '-end');

Still completely lost?

If you feel like all hope is lost, and you don't know anything about login scripts, login shells, or your ~/.bashrc and ~/.cshrc files, then we need to roll all the way back to day one. Back up or remove any existing ~/.cshrc, ~/.profile, ~/.bashrc, and ~/.bash_profile files:

user@compute-node:~$ mv ~/.cshrc ~/.cshrc.backup
user@compute-node:~$ mv ~/.profile ~/.profile.backup
user@compute-node:~$ mv ~/.bashrc ~/.bashrc.backup
user@compute-node:~$ mv ~/.bash_profile ~/.bash_profile.backup

Create a new file ~/.bashrc with the following contents:

# ~/.bashrc: executed by bash(1) for non-login shells.
# see /usr/share/doc/bash/examples/startup-files (in the package bash-doc)
# for examples
## --- stylize command prompt
PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ '
eval "`dircolors -b`"
## --- source global settings
if [ -f /etc/bashrc ]; then
        . /etc/bashrc
fi
## --- not sure why /usr/local/bin is the only path not making it into my PATH through cron
export PATH=$PATH:/usr/sbin:/usr/local/bin
## --- source environment setup script (use the path for your site)
setup_script=/ncf/tools/0.9.9/code/bin/env_setup.sh
if [ -f $setup_script ]; then
        . $setup_script
fi
## --- configure how history works
export HISTCONTROL=ignoreboth
export HISTSIZE=10000
# check the window size after each command and, if necessary,
# update the values of LINES and COLUMNS.
shopt -s checkwinsize

Create a new ~/.bash_profile file with the following contents:

if [ -f ~/.bashrc ]; then
     . ~/.bashrc
fi

Now log out and log back in.
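You can also verify the chain without logging out: a login shell reads ~/.bash_profile, which should in turn source ~/.bashrc. This sketch uses a throwaway HOME so it does not touch your real dotfiles:

```shell
# Build disposable dotfiles under a temporary HOME and start a login shell.
demo=$(mktemp -d)
printf '. "$HOME/.bashrc"\n' > "$demo/.bash_profile"
printf 'export FROM_BASHRC=yes\n' > "$demo/.bashrc"

# A login shell (-l) reads .bash_profile, which sources .bashrc above.
HOME="$demo" bash -lc 'echo "FROM_BASHRC=$FROM_BASHRC"'
rm -rf "$demo"
```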

Development (beta)

We develop and maintain the codebase primarily on CentOS. We try to make sure that existing code functions and is generally reliable. Unfortunately, there is simply too much code and too few tests to make any guarantees; as it turns out, folks generally prefer to contribute code, but not tests. We have never tried any version of CentOS prior to v5.x.

We also maintain a VM that encapsulates this processing environment, including a current tag of the Harvard/NRG Tools repository, which anyone is free to request at any time.

If you decide to clone the repository yourself and you would like to compile various bits and pieces of code, you may find CentOS v5.x or greater to be more convenient. Trust us.


Here are some directions on getting things up and running within Eclipse.

  1. Download and Install Eclipse Classic
  2. Install the Eclipse Marketplace: Help → Install New Software → Switch to the Juno Repository → General Purpose Tools → Marketplace Client
  3. From Eclipse Marketplace, install MercurialEclipse (by MercurialEclipse Project), PyDev (by Appcelerator), and Terminal Plug-In (by Google) plugins
  4. Create a new project called hvd_nrg_tools: New → Project → General → Project

Now, using either the Terminal Plug-In or a Terminal application, navigate to your project directory, and clone the repository manually:

$ cd ${WORKSPACE}/hvd_nrg_tools/
$ mkdir code && cd code
$ hg clone ssh://username@entry.nmr.mgh.harvard.edu//cluster/vc/unstable/code/ .

After the clone is complete, you should be able to refresh your Eclipse project (F5) and see all of the code.

3rd Party Apps and Extras

There are two sets of artefacts that I didn't commit to the repository but that are pretty much requirements: 3rd party apps and extras. This is a major wart on the organization strategy, but what can you do?

3rd Party Apps

3rd party apps, e.g., freesurfer, fsl, afni, mricron, mongodb, php, python, etc., were never committed to the repository because, well, they're 3rd party. The original idea was that these apps are not ours and we're not the "maintainers". The problem is that this rests on at least one major assumption: if you download version A of some app today and download the same version again tomorrow, they will be the same. As it turns out, this is not always true, e.g., "latest afni version" vs. "latest compile date", or SPM "revisions". If not managed carefully, this can introduce reproducibility problems.

At any rate, the apps directory must sit alongside the clone top-level since there is a dependency on that specific location. It can be a symlink, but it must be present there. The layout of the apps directory tree must be consistent with our deployment strategy:

  arch
  |-- linux_x86_64
  |   |-- freesurfer
  |   |   |-- 4.5.0
  |   |   `-- <version>
  |   `-- <app>
  `-- <arch>

Everything is clearly separated to enable better portability between computer architectures. This idea also exists in the repository itself e.g., if you look in the lib folder.
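As a sketch of that layout on disk (mktemp stands in for wherever your clone actually lives, and the freesurfer version is illustrative):

```shell
# Build a minimal apps tree next to a stand-in clone top level ("code").
base=$(mktemp -d)
mkdir -p "$base/apps/arch/linux_x86_64/freesurfer/4.5.0"
mkdir -p "$base/code"   # the repository clone itself would live here

ls "$base"              # apps and code sit side by side
rm -rf "$base"
```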

Once everything is situated correctly, the loader scripts should work seamlessly:

user@compute-node:~$ . load_spm 8.5236


The main repository is around 5 GB, and it's somewhat reasonable to maintain, i.e., running hg status does not take just shy of a decade (especially over NFS). The extras folder is around 60 GB, and most of it contains large artefacts, e.g., surface templates, that I'm not sure anyone uses or will continue to use in a few years' time. Committing them would have unnecessarily exploded the size of the repository, so I keep these files outside of it and offer them up as a separate download.

Like the apps directory, the extras folder must be located within the repository top-level directory, i.e., under code. I suggest using a symlink.
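A symlinked layout can be sketched like this (every path here is a throwaway stand-in for the real locations):

```shell
# Link a shared extras download into the clone top level ("code").
code=$(mktemp -d)      # stands in for .../hvd_nrg_tools/code
store=$(mktemp -d)     # stands in for the separately downloaded extras
ln -s "$store" "$code/extras"

readlink "$code/extras"    # the link resolves to the extras download
rm -rf "$code" "$store"
```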
