1. Overview

The availability of high-resolution neuroimaging data and the massively parallel computation support of modern HPC clusters have fueled interest in Computational Neuroscience and triggered the development of algorithms to process neuroimaging data. However, most of these algorithms have been limited to a single core. To scale these algorithms up and out and bring HPC capabilities to the Neuroscience arena, we propose MPI-LiFE, a scalable and distributed tool for the statistical evaluation of brain connectomes.

MPI-LiFE is a parallel implementation of the LiFE method from the Encode Toolbox, built on the MVAPICH2 MPI library. The parallel implementation uses an MPI-based distributed algorithm for the sparse multiway matrix multiplication that dominates the optimization algorithm in LiFE, and it takes advantage of efficient communication primitives in MVAPICH2 to drastically improve performance.
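
For reference, LiFE evaluates a connectome by solving a nonnegative least-squares problem of the (simplified) form

    minimize || y - M w ||^2   subject to  w >= 0

where y holds the measured diffusion signal, each column of M holds the signal predicted by one fascicle, and w assigns a weight to each fascicle. Because M is encoded as a sparse multiway (tensor) structure, multiplying M and its transpose with a vector accounts for most of the work in each optimization iteration, which is why this operation is the focus of the distributed algorithm.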

2. System Requirements

The MPI-LiFE binary release requires the following system configuration and software:

  1. A CPU with at least 2 cores and at least 8 GB of RAM

  2. Docker (version 18.06 or later)

  3. Linux environment

    NOTE: MPI-LiFE can be run on any multi-core laptop, desktop, server, or cluster
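
You can verify these prerequisites from a terminal with standard commands, for example:

    docker --version   # should report version 18.06 or later
    nproc              # number of available CPU cores (at least 2)
    free -h            # installed RAM (at least 8 GB)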

3. Running MPI-LiFE Demo

The MPI-LiFE demo is configured to process an input dataset with just 50 iterations of the optimization algorithm. To run the demo, execute the following commands in a Linux terminal.

  1. Get the latest docker image

    docker pull neurohpc/neurohpc

  2. Run MPI-LiFE with the demo dataset

    docker run --pid=host --ipc=host --rm -it          \
               --cpuset-cpus=0-$(( `nproc` - 1 ))      \
               --privileged neurohpc/neurohpc
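
    NOTE: The --cpuset-cpus flag pins the container to all available cores: `nproc` reports the core count, so the range 0-$(( `nproc` - 1 )) covers every core index. On an 8-core machine, for example, this expands to --cpuset-cpus=0-7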

4. Running MPI-LiFE

To run MPI-LiFE, execute these commands in a Linux terminal.

  1. Get the latest docker image

    docker pull neurohpc/neurohpc:v2

  2. Create the output directory, then create the data and parameter JSON config files containing the paths to your input files (relative to the /input directory you will mount below). A quick way to validate the files is shown after this list.

    cat > data_config.json << CONF
    {
            "t1_aligned": "/input/sub-FP/anatomy/t1.nii.gz",
            "trilin_dwi": "/input/sub-FP/dwi/run01_fliprot_aligned_trilin.nii.gz",
            "trilin_bvecs": "/input/sub-FP/dwi/run01_fliprot_aligned_trilin.bvecs",
            "trilin_bvals": "/input/sub-FP/dwi/run01_fliprot_aligned_trilin.bvals",
            "track_tck": "/input/sub-FP/tractography/run01_fliprot_aligned_trilin_csd_lmax10_wm_SD_PROB-NUM01-500000.tck"
    }
    CONF
    

    cat > param_config.json << CONF
    {
            "life_discretization": 360,
            "num_iterations": 500
    }
    CONF
    

  3. Run MPI-LiFE with input dataset

    docker run --pid=host --ipc=host              \
               --cpuset-cpus=0-$(( `nproc` - 1 )) \
               --privileged --rm -it              \
               -v /path/to/life/data:/input       \
               -v `pwd`:/output -w /output        \
               neurohpc/neurohpc:v2               \
               data_config.json param_config.json /path/to/track_tck output_fe.mat output.json
    

    NOTE: Replace /path/to/life/data with the directory that contains your input files. Replace /path/to/track_tck with the path to your track.tck file, relative to /input. Replace `pwd` if you want the output written somewhere other than your current working directory; if you change it, be sure to move your json config files there as well. This container starts up with its current directory set to /output. A worked example with concrete paths follows below.
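
As noted in step 2, you can validate the config files before launching the run; a minimal check, assuming python3 is available on the host:

    python3 -m json.tool data_config.json
    python3 -m json.tool param_config.json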
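
As a worked example with hypothetical paths, suppose your input files live under /home/user/life_data (laid out as in data_config.json above) and you want the output in /home/user/life_out:

    # hypothetical paths; adjust to your own layout
    mkdir -p /home/user/life_out
    mv data_config.json param_config.json /home/user/life_out
    cd /home/user/life_out
    docker run --pid=host --ipc=host              \
               --cpuset-cpus=0-$(( `nproc` - 1 )) \
               --privileged --rm -it              \
               -v /home/user/life_data:/input     \
               -v `pwd`:/output -w /output        \
               neurohpc/neurohpc:v2               \
               data_config.json param_config.json sub-FP/tractography/run01_fliprot_aligned_trilin_csd_lmax10_wm_SD_PROB-NUM01-500000.tck output_fe.mat output.json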

5. Output

The main output will be a file called output_fe.mat. This file contains the following object:

fe = 
    name: 'temp'
    type: 'faseval'
    life: [1x1 struct]
      fg: [1x1 struct]
     roi: [1x1 struct]
    path: [1x1 struct]
     rep: []

output_fg.pdb contains all fascicles (fibers) within the fg object that have weights > 0.
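
After the run completes, you can confirm from the host output directory that the result files were written, for example:

    ls -lh output_fe.mat output_fg.pdb output.json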