Usage

The exact command to run PyNets® depends on several factors:

(1)

The installation method (i.e. pip, docker, singularity, git), along with the environment resources available for computing;

(2)

The types and modalities of available data inputs;

(3)

The execution objective (e.g. ensemble connectome sampling, unitary connectome sampling, plotting, graph-theory, embedding, optimization/benchmarking).

Required Inputs

Required

(A)

An alphanumeric subject identifier must be specified with the -id flag. It can be a pre-existing label or an arbitrarily selected one, but it will be used by PyNets to name output directories. In the case of BIDS data, this should be PARTICIPANT_SESSION_RUN from sub-PARTICIPANT, ses-SESSION, and run-RUN.
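For instance, for a hypothetical BIDS subject sub-0025427 with ses-1 and run-1, the identifier would be composed as 0025427_1_1. A minimal sketch (all file paths below are placeholders):

pynets '/outputs' \
-id '0025427_1_1' \
-func '/derivatives/sub-0025427/ses-1/func/preproc_bold_space-T1w.nii.gz' \
-anat '/derivatives/sub-0025427/ses-1/anat/preproc_T1w.nii.gz' \
-mod 'corr' -a 'atlas_aal'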

(B)

A supported connectivity model specified with the -mod flag. If PyNets is executed in multimodal mode (i.e. with both fMRI and dMRI inputs in the same command-line call), multiple modality-applicable connectivity models should be specified (minimally, at least one for each modality). PyNets will automatically parse which model is appropriate for which data.
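For example, a hypothetical multimodal call that supplies both -func and -dwi inputs might specify:

-mod 'partcorr' 'csd'

in which case PyNets would apply the partcorr model to the fMRI data and the csd model to the dMRI data.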

(C)

If a curated atlas is not specified by name with the -a flag, then a path to a custom parcellation file must instead be supplied with the same -a flag. The following curated list of atlases is currently supported:

Atlas Library
  • ‘atlas_harvard_oxford’

  • ‘atlas_aal’

  • ‘atlas_destrieux_2009’

  • ‘atlas_talairach_gyrus’

  • ‘atlas_talairach_ba’

  • ‘atlas_talairach_lobe’

  • ‘coords_power_2011’ (only valid when using the -spheres flag)

  • ‘coords_dosenbach_2010’ (only valid when using the -spheres flag)

  • ‘atlas_msdl’

  • ‘atlas_pauli_2017’

  • ‘destrieux2009_rois’

  • ‘BrainnetomeAtlasFan2016’

  • ‘VoxelwiseParcellationt0515kLeadDBS’

  • ‘Juelichgmthr252mmEickhoff2005’

  • ‘CorticalAreaParcellationfromRestingStateCorrelationsGordon2014’

  • ‘AICHAreorderedJoliot2015’

  • ‘HarvardOxfordThr252mmWholeBrainMakris2006’

  • ‘VoxelwiseParcellationt058kLeadDBS’

  • ‘MICCAI2012MultiAtlasLabelingWorkshopandChallengeNeuromorphometrics’

  • ‘Hammers_mithAtlasn30r83Hammers2003Gousias2008’

  • ‘AALTzourioMazoyer2002’

  • ‘DesikanKlein2012’

  • ‘AAL2zourioMazoyer2002’

  • ‘VoxelwiseParcellationt0435kLeadDBS’

  • ‘AICHAJoliot2015’

  • ‘whole_brain_cluster_labels_PCA100’

  • ‘whole_brain_cluster_labels_PCA200’

  • ‘RandomParcellationsc05meanalll43Craddock2011’

(D)

A set of brain image files. PyNets is a post-processing workflow, which means that input files should already be preprocessed. Minimally, all DWI, BOLD, and T1w image inputs should be motion-corrected (and ideally also susceptibility-corrected and denoised).

anat

The T1w can be preprocessed using any method, but should be in its native scanner anatomical space.

func

A BOLD/EPI series can be preprocessed using any method, but should be in the same scanner anatomical space as the T1w (i.e. coregistered to the T1w anat and not yet normalized to a standard-space template, since PyNets must perform this normalization itself in order to accurately map parcellations onto individual subject anatomy).

dwi

A DWI series should ideally be in its native diffusion MRI (dMRI) space (though can also be co-registered to the T1w image) and must contain at least one B0 for reference. If -dwi is specified, then -bvec and -bval must also be. Note that the choice of models specified with -mod also depends on the sampling scheme of your dwi data (e.g. CSD will likely overfit your data in the case of too few directional volumes).
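As a minimal sketch of a dMRI-only call (all file paths are placeholders), -dwi is always accompanied by -bval and -bvec:

pynets '/outputs' \
-id '002_1' \
-dwi '/data/sub-002/ses-1/dwi/dwi_preproc.nii.gz' \
-bval '/data/sub-002/ses-1/dwi/dwi.bval' \
-bvec '/data/sub-002/ses-1/dwi/dwi.bvec' \
-anat '/data/sub-002/ses-1/anat/t1w_preproc.nii.gz' \
-mod 'csa' -a 'DesikanKlein2012'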

Note

Native-space DWI images are preferred for several reasons. Even when applied rigidly, intermodal registration of the diffusion signal to T1-weighted space (which has considerably different white-matter/grey-matter signal contrast, and lower specificity for the former) will inevitably result in some degree of spatial misalignment and signal loss. Note that this is unlike the case of BOLD EPI, an inherently noisy, temporal (i.e. non-structural) modality, which benefits from being co-registered to T1w images of significantly higher spatial resolution, particularly in grey-matter tissue where BOLD signal is typically observed. To ensure minimal within-subject variance and maximal between-subject variance as a function of the numerous hyperparameters used to sample connectome ensembles with PyNets, input DWI data should ideally carry maximal SNR and have undergone the least amount of resampling necessary (e.g. minimally, eddy/motion correction).

-g

A path to a raw graph can alternatively be specified, in which case the initial stages of the pipeline will be skipped. In this case, the graph should be in .txt, .npy, .csv, .tsv, or .ssv format.

Note

Prior normalization of the anat, func, or dwi inputs to PyNets is not (yet) supported. This is because PyNets relies on the inverse transform from an MNI template to conform a template-resampled version of the specified atlas(es) (i.e. to define nodes) into native T1w anatomical space. PyNets uses the MNI152 template by default to accomplish this, but you can specify alternative templates in the advanced.yaml settings to override MNI152 (e.g. a pediatric template), following the naming specification of templateflow (See: <https://github.com/templateflow/templateflow>).

Note

If you preprocessed your BOLD data using fMRIPrep, then you will need to have specified either T1w or anat in the list of fMRIPrep --output-spaces.
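For reference, a hedged sketch of such an fMRIPrep call (directory paths are placeholders; consult the fMRIPrep documentation for its full interface):

fmriprep '/data/bids' '/data/derivatives' participant \
--participant-label 0025427 \
--output-spaces T1w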

Note

Input image orientation and voxel resolution are not relevant, as PyNets will create the necessary working copies, reoriented to standardized RAS+ and resliced to either 1mm or 2mm voxel resolution, depending on the advanced.yaml default or any resolution override supplied with the -vox flag.

Note

All file formats are assumed to be Nifti1Image (i.e. .nii or .nii.gz file suffix), and absolute file paths should always be supplied to the CLIs.

Note

Tissue segmentations are calculated automatically in PyNets using FAST, but if you are using the pynets_bids CLI on preprocessed BIDS derivatives containing existing segmentations, pynets will alternatively attempt to autodetect and use those.

Custom File Inputs

-m

(fMRI + dMRI) A binarized brain mask of the T1w image in its native anatomical space. Input images need not be skull-stripped. If brain masking has already been applied, PyNets will attempt to detect this; otherwise, it will attempt to extract the brain automatically using a deep-learning classifier. See deepbrain (<https://github.com/iitzco/deepbrain>) for more information.

-roi

(fMRI + dMRI) A binarized ROI mask used to constrain connectome node-making to restricted brain regions of the parcellation being used. ROI inputs should be in MNI space.

-a

(fMRI + dMRI) A parcellation/atlas image (in MNI space) used to define nodes of a connectome. Labels should be spatially distinct across hemispheres and ordered with consecutive integers with a value of 0 as the background label. This flag can uniquely be listed with multiple, space-separated file inputs.

-ref

(fMRI + dMRI) An atlas reference .txt file that indexes the intensities corresponding to the atlas labels of the parcellation specified with the -a flag. This label map is used only to delineate node labels manually. Otherwise, PyNets will attempt automated node labeling via AAL, falling back to sequential numeric labels.

-way

(dMRI) A binarized white-matter ROI mask (in MNI template space) used to constrain tractography in native diffusion space such that streamlines are retained only if they pass within the vicinity of the mask. Like with ROI inputs, waymasks should be in MNI space.

-cm

(fMRI) A binarized ROI mask used to spatially constrain clustering during parcellation-making. Note that if this flag is used, -k and -ct must also be included. Like with ROI inputs, clustering masks should be in MNI space.
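For example, a clustering specification (with a placeholder mask path) pairs all three flags:

-cm '/inputs/MyClusteringROI.nii.gz' -k 100 -ct 'ward'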

-conf

(fMRI) An additional noise confound regressor file for extracting a cleaner time-series.

Multimodal Workflow Variations

In the case of running pynets on a single subject, several combinations of input files can be used:

fMRI Connectometry

-func, -anat, (-conf), (-roi), (-m), (-cm)

dMRI Connectometry

-dwi, -bval, -bvec, -anat, (-roi), (-m), (-way)

dMRI + fMRI Multiplex Connectometry

All of the above required flags should be included simultaneously. Note that in this case, -anat only needs to be specified once.
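A minimal sketch of such a multimodal call (all file paths below are placeholders):

pynets '/outputs' \
-id '002_1' \
-func '/data/sub-002/ses-1/func/bold_preproc_space-T1w.nii.gz' \
-conf '/data/sub-002/ses-1/func/confounds.tsv' \
-dwi '/data/sub-002/ses-1/dwi/dwi_preproc.nii.gz' \
-bval '/data/sub-002/ses-1/dwi/dwi.bval' \
-bvec '/data/sub-002/ses-1/dwi/dwi.bvec' \
-anat '/data/sub-002/ses-1/anat/t1w_preproc.nii.gz' \
-mod 'partcorr' 'csd' \
-a 'DesikanKlein2012'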

Raw Graph Connectometry (i.e. for graph analysis/embedding only)

-g

Command-Line Arguments

PyNets: A Reproducible Workflow for Structural and Functional Connectome Ensemble Learning

usage: pynets [-h] -id A subject id or other unique identifier
              [A subject id or other unique identifier ...]
              [-func Path to input functional file (required for functional connectomes) [Path to input functional file (required for functional connectomes) ...]]
              [-dwi Path to diffusion-weighted imaging data file (required for dmri connectomes) [Path to diffusion-weighted imaging data file (required for dmri connectomes) ...]]
              [-bval Path to b-values file (required for dmri connectomes) [Path to b-values file (required for dmri connectomes) ...]]
              [-bvec Path to b-vectors file (required for dmri connectomes) [Path to b-vectors file (required for dmri connectomes) ...]]
              [-anat Path to a skull-stripped anatomical Nifti1Image [Path to a skull-stripped anatomical Nifti1Image ...]]
              [-m Path to a T1w brain mask image (if available) in native anatomical space [Path to a T1w brain mask image (if available) in native anatomical space ...]]
              [-conf Confound regressor file (.tsv/.csv format) [Confound regressor file (.tsv/.csv format) ...]]
              [-g Path to graph file input. [Path to graph file input. ...]]
              [-roi Path to binarized Region-of-Interest (ROI) Nifti1Image in template MNI space. [Path to binarized Region-of-Interest (ROI) Nifti1Image in template MNI space. ...]]
              [-ref Atlas reference file path]
              [-way Path to binarized Nifti1Image to constrain tractography [Path to binarized Nifti1Image to constrain tractography ...]]
              [-mod Connectivity estimation/reconstruction method [Connectivity estimation/reconstruction method ...]]
              [-a Atlas [Atlas ...]]
              [-ns Spherical centroid node size [Spherical centroid node size ...]]
              [-thr Graph threshold]
              [-min_thr Multi-thresholding minimum threshold]
              [-max_thr Multi-thresholding maximum threshold]
              [-step_thr Multi-thresholding step size]
              [-hp High-pass filter (Hz) [High-pass filter (Hz) ...]]
              [-es Node signal extraction strategy [Node signal extraction strategy ...]]
              [-k Number of k clusters [Number of k clusters ...]]
              [-ct Clustering type [Clustering type ...]]
              [-cm Cluster mask [Cluster mask ...]]
              [-sm Smoothing margin (mm FWHM) [Smoothing margin (mm FWHM) ...]]
              [-ml Minimum fiber length for tracking [Minimum fiber length for tracking ...]]
              [-dg Traversal strategy [Traversal strategy ...]]
              [-em Error margin (mm) [Error margin (mm) ...]]
              [-norm Normalization strategy for resulting graph(s)] [-bin]
              [-dt] [-mst] [-p Pruning Strategy] [-df]
              [-mplx Perform various levels of multiplex graph analysis (only if both structural and diffusion connectometry is run simultaneously)]
              [-embed] [-spheres]
              [-n Resting-state subnet [Resting-state subnet ...]]
              [-vox {1mm,2mm}] [-plt] [-pm Cores,memory]
              [-plug Scheduler type] [-v] [-noclean]
              [-config Advanced configuration file] [-work Working directory]
              [--version]
              output_dir

Positional Arguments

output_dir

The directory to store pynets derivatives.

Named Arguments

-id

A subject identifier OR list of subject identifiers, separated by space and of equivalent length to the list of input files indicated with the -func flag. This parameter must be an alphanumeric string and can be arbitrarily chosen. If functional and dmri connectomes are being generated simultaneously, then space-separated id's need to be repeated to match the total input file count.
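For example, a hypothetical run over two subjects (placeholder paths), where each id is paired, in order, with its own -func and -anat inputs:

pynets '/outputs' \
-id '0025427_1' '0025428_1' \
-func '/data/sub-0025427/ses-1/func/bold_preproc.nii.gz' '/data/sub-0025428/ses-1/func/bold_preproc.nii.gz' \
-anat '/data/sub-0025427/ses-1/anat/t1w_preproc.nii.gz' '/data/sub-0025428/ses-1/anat/t1w_preproc.nii.gz' \
-mod 'corr' -a 'atlas_aal'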

-func

Specify either a path to a preprocessed functional Nifti1Image in MNI152 space OR multiple space-separated paths to multiple preprocessed functional Nifti1Image files in MNI152 space and in .nii or .nii.gz format, OR the path to a text file containing a list of paths to subject files.

-dwi

Specify either a path to a preprocessed dmri diffusion Nifti1Image in native diffusion space and in .nii or .nii.gz format OR multiple space-separated paths to multiple preprocessed dmri diffusion Nifti1Image files in native diffusion space and in .nii or .nii.gz format.

-bval

Specify either a path to a b-values text file containing gradient shell values per diffusion direction OR multiple space-separated paths to multiple b-values text files in the order of accompanying b-vectors and dwi files.

-bvec

Specify either a path to a b-vectors text file containing gradient directions (x,y,z) per diffusion direction OR multiple space-separated paths to multiple b-vectors text files in the order of accompanying b-values and dwi files.

-anat

Required for dmri and/or functional connectomes. Multiple paths to multiple anatomical files should be separated by space, in the order of the accompanying functional and/or dmri files. If functional and dmri connectomes are both being generated simultaneously, then anatomical Nifti1Image file paths need to be repeated, but separated by comma.

-m

File path to a T1w brain mask Nifti image (if available) in native anatomical space OR multiple file paths to multiple T1w brain mask Nifti images in the case of running multiple participants, in which case paths should be separated by a space. If no brain mask is supplied, the template mask will be used (see advanced.yaml).

-conf

Optionally specify a path to a confound regressor file to reduce noise in the time-series estimation for the graph. This can also be a list of paths in the case of running multiple subjects, which requires separation by space and must be of equivalent length to the list of input files indicated with the -func flag.

-g

In either .txt, .npy, .graphml, .csv, .ssv, .tsv, or .gpickle format. This skips the fMRI and dMRI graph estimation workflows and begins at the thresholding and graph analysis stage. Multiple graph files corresponding to multiple subject ID's should be separated by space, and multiple graph files corresponding to the same subject ID should be separated by comma. If the -g flag is used, then the -id flag must also be used. Consider also including the -thr flag to activate thresholding only, or the -p and -norm flags if graph defragmentation or normalization is desired. The -mod flag can be used for additional provenance/file-naming.
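For example, a minimal sketch (placeholder path) that thresholds, prunes, and normalizes a single pre-computed graph:

pynets '/outputs' \
-id '002_1' \
-g '/data/graphs/sub-002_ses-1_graph.npy' \
-mod 'corr' \
-thr 0.3 -p 1 -norm 6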

-roi

Optionally specify a binarized ROI mask and retain only those nodes of a parcellation contained within that mask for connectome estimation.

-ref

Specify the path to the atlas reference .txt file that maps labels to intensities corresponding to the atlas parcellation file specified with the -a flag.

-way

Optionally specify a binarized ROI mask in MNI space to constrain tractography in the case of dmri connectome estimation.

-mod

Possible choices: corr, sps, cov, partcorr, QuicGraphicalLasso, QuicGraphicalLassoCV, QuicGraphicalLassoEBIC, AdaptiveQuicGraphicalLasso, csa, csd, sfm, mcsd

(hyperparameter): Specify the connectivity estimation model. For fMRI, possible models include: corr for correlation, cov for covariance, sps for precision covariance, and partcorr for partial correlation. If skggm is installed (https://github.com/skggm/skggm), then QuicGraphicalLasso, QuicGraphicalLassoCV, QuicGraphicalLassoEBIC, and AdaptiveQuicGraphicalLasso are also available. For dMRI, current models include csa, csd, sfm, and mcsd (for multi-shell data).

Default: “?”

-a

(hyperparameter): Specify an atlas name from the nilearn or local (pynets) library, and/or specify a path to a custom parcellation/atlas Nifti1Image file in MNI space. Labels should be spatially distinct across hemispheres and ordered with consecutive integers with a value of 0 as the background label. If specifying a list of paths to multiple parcellations, separate them by space. If you wish to iterate your pynets run over multiple atlases, separate them by space. Available nilearn atlases are:

atlas_aal atlas_talairach_gyrus atlas_talairach_ba atlas_talairach_lobe atlas_harvard_oxford atlas_destrieux_2009 atlas_msdl coords_dosenbach_2010 coords_power_2011 atlas_pauli_2017.

Available local atlases are:

destrieux2009_rois BrainnetomeAtlasFan2016 VoxelwiseParcellationt0515kLeadDBS Juelichgmthr252mmEickhoff2005 CorticalAreaParcellationfromRestingStateCorrelationsGordon2014 whole_brain_cluster_labels_PCA100 AICHAreorderedJoliot2015 HarvardOxfordThr252mmWholeBrainMakris2006 VoxelwiseParcellationt058kLeadDBS MICCAI2012MultiAtlasLabelingWorkshopandChallengeNeuromorphometrics Hammers_mithAtlasn30r83Hammers2003Gousias2008 AALTzourioMazoyer2002 DesikanKlein2012 AAL2zourioMazoyer2002 VoxelwiseParcellationt0435kLeadDBS AICHAJoliot2015 whole_brain_cluster_labels_PCA200 RandomParcellationsc05meanalll43Craddock2011

-ns

(hyperparameter): Optionally specify coordinate-based node radius size(s). Default is 4 mm for fMRI and 8mm for dMRI. If you wish to iterate the pipeline across multiple node sizes, separate the list by space (e.g. 2 4 6).

Default: 4
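For example, because coordinate-based atlases require spherical nodes (see the -spheres flag), an illustrative node-size specification that iterates over two radii might be:

-a 'coords_dosenbach_2010' -spheres -ns 2 4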

-thr

Optionally specify a threshold indicating a proportion of weights to preserve in the graph. Default is no thresholding. If the -mst, -dt, or -df flags are not included, then proportional thresholding will be performed.

Default: 1.0

-min_thr

(hyperparameter): Minimum threshold for multi-thresholding.

-max_thr

(hyperparameter): Maximum threshold for multi-thresholding.

-step_thr

(hyperparameter): Threshold step value for multi-thresholding. Default is 0.01.
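For example, to iterate over a range of target densities from 0.05 to 0.10 in steps of 0.01 (an illustrative sketch; see also the -dt flag below), the three flags are combined as:

-dt -min_thr 0.05 -max_thr 0.10 -step_thr 0.01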

-hp

(hyperparameter): Optionally specify high-pass filter values to apply to node-extracted time-series for fMRI. Default is None. If you wish to iterate the pipeline across multiple values, separate the list by space (e.g. 0 0.02 0.1). Safe range: [0-0.15] for resting-state data.

-es

Possible choices: sum, mean, median, minimum, maximum, variance, standard_deviation

Include this flag if you are running functional connectometry and wish to specify the name of a specific function (i.e. other than the mean) used to reduce the region's time-series. Options are: sum, mean, median, minimum, maximum, variance, standard_deviation.

Default: “mean”

-k

(hyperparameter): Specify a number of clusters to produce. If you wish to iterate the pipeline across multiple values of k, separate the list by space (e.g. 200 400 600 800).

-ct

Possible choices: ward, rena, kmeans, complete, average, single, ncut

(hyperparameter): Specify the type(s) of clustering to use. Recommended options are: ward, rena, kmeans, or ncut. Note that imposing spatial constraints with a mask consisting of disconnected components will lead to clustering instability in the case of complete, average, or single clustering. If specifying a list of clustering types, separate them by space.

Default: “ward”

-cm

(hyperparameter): Specify the path to a Nifti1Image mask file used to constrain functional clustering. If specifying a list of paths to multiple cluster masks, separate them by space.

-sm

(hyperparameter): The magnitude of smoothing (mm FWHM) to be applied to the node-extracted time-series. Default is 0 mm FWHM. Safe range for fMRI is [0-9]. If you wish to iterate the pipeline across multiple smoothing values, delimit the list by space (e.g. 2 4 6).

Default: 0

-ml

(hyperparameter): Include this flag to manually specify a minimum tract length (mm) for dmri connectome tracking. Default is 10. If you wish to iterate the pipeline across multiple minimums, separate the list by space (e.g. 10 30 50). Safe range: [0-150]. Depending on the tissue classifier used and the restrictiveness of the parcellation or any way-masking, values >60mm may fail.

Default: 10

-dg

Possible choices: det, prob, clos

(hyperparameter): Include this flag to manually specify the statistical approach to tracking for dmri connectome estimation. Options are: det (deterministic), clos (closest), and prob (probabilistic). Default is det. If you wish to iterate the pipeline across multiple traversal methods, delimit the list by space (e.g. 'det' 'prob' 'clos').

Default: “det”

-em

(hyperparameter): Intersection distance in mm of a node to an edge. Corresponds to the extent of parcel-streamline overlap in tractography: a streamline is counted as intersecting a node if any of its coordinates lies within this distance from the center of any voxel in the node. Default is 5 mm. Safe range for dMRI is [0-15]. If you wish to iterate the pipeline across multiple error margins, delimit the list by space (e.g. 2 4 6).

Default: 5

-norm

Possible choices: 0, 1, 2, 3, 4, 5, 6

Include this flag to normalize the resulting graph by (1) maximum edge weight; (2) using log10; (3) using pass-to-ranks for all non-zero edges; (4) using pass-to-ranks for all non-zero edges relative to the number of nodes; (5) using pass-to-ranks with zero-edge boost; and (6) which standardizes the matrix to values [0, 1]. Default is (6).

Default: 1

-bin

Include this flag to binarize the resulting graph such that edges are boolean and not weighted.

Default: False

-dt

Optionally use this flag if you wish to threshold to achieve a given density or densities indicated by the -thr and -min_thr, -max_thr, -step_thr flags, respectively.

Default: False

-mst

Optionally use this flag if you wish to apply local thresholding via the Minimum Spanning Tree approach. -thr values in this case correspond to a target density (if the -dt flag is also included), otherwise a target proportional threshold.

Default: False

-p

Possible choices: 0, 1, 2, 3

(Optional) Include this flag to (0) retain isolated nodes; (1) retain only connected components of a minimal size; (2) prune the graph of all but hubs, as defined by any of a variety of definitions (see advanced.yaml); or (3) retain only the largest connected component subgraph. Default is (1), which is equivalent to defragmenting only isolated nodes, unless the minimum threshold is >1 (see advanced.yaml).

Default: 1

-df

Optionally use this flag if you wish to apply local thresholding via the disparity filter approach. -thr values in this case correspond to α.

Default: False

-mplx

Possible choices: 0, 1, 2

Include this flag to perform multiplex graph analysis across structural-functional connectome modalities. Options include: (1) create multiplex graphs using mutual information and adaptive thresholding; (2) additionally perform multiplex graph embedding and analysis. Default is (0), which performs no multiplex analysis.

Default: 0

-embed

Optionally use this flag if you wish to embed the ensemble(s) produced into feature vector(s).

Default: False

-spheres

Include this flag to use spheres instead of parcels as nodes.

Default: False

-n

Possible choices: Vis, SomMot, DorsAttn, SalVentAttn, Limbic, Cont, Default, VisCent, VisPeri, SomMotA, SomMotB, DorsAttnA, DorsAttnB, SalVentAttnA, SalVentAttnB, LimbicOFC, LimbicTempPole, ContA, ContB, ContC, DefaultA, DefaultB, DefaultC, TempPar

Optionally specify the name of any of the 2017 Yeo-Schaefer RSNs (7-subnet or 17-subnet): Vis, SomMot, DorsAttn, SalVentAttn, Limbic, Cont, Default, VisCent, VisPeri, SomMotA, SomMotB, DorsAttnA, DorsAttnB, SalVentAttnA, SalVentAttnB, LimbicOFC, LimbicTempPole, ContA, ContB, ContC, DefaultA, DefaultB, DefaultC, TempPar. If listing multiple RSNs, separate them by space (e.g. -n 'Default' 'Cont' 'SalVentAttn').

-vox

Possible choices: 1mm, 2mm

Optionally use this flag if you wish to change the resolution of the images in the workflow. Default is 2mm.

Default: “2mm”

-plt

Optionally use this flag if you wish to activate plotting of adjacency matrices, connectomes, and time-series.

Default: False

-pm

Maximum number of cores and GB of memory, stated as two integers separated by a comma. Otherwise, default is auto, which uses all available resources detected on the compute node(s) used for execution.

Default: “auto”

-plug

Possible choices: Linear, MultiProc, SGE, PBS, SLURM, SGEgraph, SLURMgraph, LegacyMultiProc

Include this flag to specify a workflow plugin other than the default MultiProc.

Default: “MultiProc”

-v

Verbose print for debugging.

Default: False

-noclean

Disable post-workflow clean-up of temporary runtime metadata.

Default: False

-config

Optionally override advanced configuration parameters. Default is advanced.yaml.

Default: “advanced.yaml”

-work

Specify the path to a working directory for pynets to run. Default is /tmp/work.

Default: “/tmp/work”

--version

show program’s version number and exit

PyNets BIDS CLI: A Fully-Automated Workflow for Reproducible Ensemble Sampling of Functional and Structural Connectomes

usage: pynets_bids [-h]
                   [--participant_label PARTICIPANT_LABEL [PARTICIPANT_LABEL ...]]
                   [--session_label SESSION_LABEL [SESSION_LABEL ...]]
                   [--run_label RUN_LABEL [RUN_LABEL ...]]
                   [--push_location PUSH_LOCATION]
                   [-ua Path to parcellation file in MNI-space [Path to parcellation file in MNI-space ...]]
                   [-cm Cluster mask [Cluster mask ...]]
                   [-roi Path to binarized Region-of-Interest (ROI) Nifti1Image in template MNI space [Path to binarized Region-of-Interest (ROI) Nifti1Image in template MNI space ...]]
                   [-ref Atlas reference file path]
                   [-way Path to binarized Nifti1Image to constrain tractography [Path to binarized Nifti1Image to constrain tractography ...]]
                   [-config Optional path to a config.json file with runtime settings.]
                   [-pm Cores,memory] [-plug Scheduler type] [-v] [-clean]
                   [-work Working directory]
                   bids_dir output_dir {participant,group} {dwi,func}
                   [{dwi,func} ...]

Positional Arguments

bids_dir

The directory with the input dataset formatted according to the BIDS standard. To use data from s3, just pass s3://<bucket>/<dataset> as the input directory.

output_dir

The directory to store pynets derivatives locally.

analysis_level

Possible choices: participant, group

Whether to instantiate an individual or group workflow

modality

Possible choices: dwi, func

Specify data modality to process from bids directory. Options are dwi and func.

Named Arguments

--participant_label

The label(s) of the participant(s) that should be analyzed. The label corresponds to sub-<participant_label> from the BIDS spec (so it does not include "sub-"). If this parameter is not provided, all subjects found in bids_dir will be analyzed. Multiple participants can be specified with a space-separated list.

--session_label

The label(s) of the session(s) that should be analyzed. The label corresponds to ses-<session_label> from the BIDS spec (so it does not include "ses-"). If this parameter is not provided, all sessions will be analyzed. Multiple sessions can be specified with a space-separated list.

--run_label

The label(s) of the run(s), if any, within a given session that should be analyzed. The label corresponds to run-<run_label> from the BIDS spec (so it does not include "run-"). If this parameter is not provided, all runs will be analyzed. Specifying multiple runs is not yet supported.

--push_location

Name of the folder on s3 to push output data to. If the folder does not exist, it will be created. Format the location as s3://<bucket>/<path>.

-ua

Optionally specify a path to a parcellation/atlas Nifti1Image file in MNI152 space. Labels should be spatially distinct across hemispheres and ordered with consecutive integers with a value of 0 as the background label. If specifying a list of paths to multiple user atlases, separate them by space.

-cm

Optionally specify the path to a Nifti1Image mask file used to constrain functional clustering. If specifying a list of paths to multiple cluster masks, separate them by space.

-roi

Optionally specify a binarized ROI mask in template MNI space and retain only those nodes of a parcellation contained within that mask for connectome estimation.

-ref

Specify the path to the atlas reference .txt file that maps labels to intensities corresponding to the atlas parcellation file specified with the -ua flag.

-way

Optionally specify a binarized ROI mask in template MNI-space to constrain tractography in the case of dmri connectome estimation.

-config

Including this flag will override the bids_config.json template in the base directory of pynets. See the template and pynets -h for available settings.

-pm

Number of cores to use and number of GB of memory to use for a single-subject run, entered as two integers separated by a comma. Otherwise, default is auto, which uses all resources detected on the current compute node.

Default: “auto”

-plug

Possible choices: Linear, MultiProc, SGE, PBS, SLURM, SGEgraph, SLURMgraph, LegacyMultiProc

Include this flag to specify a workflow plugin other than the default MultiProc.

Default: “MultiProc”

-v

Verbose print for debugging.

Default: False

-clean

Clean up temporary runtime directory after workflow termination.

Default: False

-work

Specify the path to a working directory for pynets to run. Default is /tmp/work.

Default: “/tmp/work”

Quickstart

Execution on BIDS derivative datasets using the pynets_bids CLI

PyNets now includes an API for running single-subject and group workflows on BIDS derivatives (e.g. produced using popular BIDS apps like fmriprep/cpac and dmriprep/qsiprep). In this scenario, the input dataset should follow the derivatives specification of the BIDS (Brain Imaging Data Structure) format (<https://bids-specification.readthedocs.io/en/derivatives/05-derivatives/01-introduction.html>), which must include at least one subject's fMRI image or dMRI image (in T1w space), along with a T1w anatomical image.

The advanced.yaml file in the base directory includes parameter presets, but all file input options that are included with the pynets CLI are also exposed to the pynets_bids CLI.

The common parts of the command follow the BIDS-Apps definition. Example:

pynets_bids '/hnu/fMRIprep/fmriprep' '/Users/dPys/outputs/pynets' participant func --participant_label 0025427 0025428 --session_label 1 2 3 -config pynets/config/bids_config.json

A similar CLI, pynets_cloud, has also been made available using AWS Batch and S3, which requires AWS credentials and configuration of job queues and definitions using cloud_config.json:

pynets_cloud --bucket 'hnu' --dataset 'HNU' participant func --participant_label 0025427 --session_label 1 --push_location 's3://hnu/outputs' --jobdir '/Users/derekpisner/.pynets/jobs' -cm 's3://hnu/HNU/masks/MyClusteringROI.nii.gz' -pm '30,110'

Manual Execution Using the pynets CLI

You have a preprocessed EPI BOLD dataset from the first session for subject 002, and you wish to analyze a whole-brain network using 'sub-colin27_label-L2018_desc-scale1_atlas', thresholding the connectivity graph proportionally to retain the strongest 20% of connections, using partial correlation model estimation:

pynets -id '002_1' '/Users/dPys/outputs/pynets' \
-func '/Users/dPys/PyNets/tests/examples/sub-002/ses-1/func/BOLD_PREPROCESSED_IN_ANAT_NATIVE.nii.gz' \ # The fMRI BOLD image data.
-anat '/Users/dPys/PyNets/tests/examples/sub-002/ses-1/anat/ANAT_PREPROCESSED_NATIVE.nii.gz' \ # The T1w anatomical image.
-a 'sub-colin27_label-L2018_desc-scale1_atlas' \ # Lausanne parcellation at scale=1.
-mod 'partcorr' \ # The connectivity model.
-thr 0.20 # A single proportional threshold to apply post-hoc.

Building upon the previous example, let's say you now wish to analyze the Default network for this same subject's data, but based on the 95-node atlas parcellation scheme from Desikan-Klein 2012, called 'DesikanKlein2012', and the Brainnetome Atlas from Fan 2016, called 'BrainnetomeAtlasFan2016'. You wish to threshold the graph to achieve a target density of 0.3, to fit a sparse inverse covariance model in addition to partial correlation, and to plot the results:

pynets -id '002_1' '/Users/dPys/outputs/pynets' \
-func '/Users/dPys/PyNets/tests/examples/sub-002/ses-1/func/BOLD_PREPROCESSED_IN_ANAT_NATIVE.nii.gz' \ # The fMRI BOLD image data.
-anat '/Users/dPys/PyNets/tests/examples/sub-002/ses-1/anat/ANAT_PREPROCESSED_NATIVE.nii.gz' \ # The T1w anatomical image.
-a 'DesikanKlein2012' 'BrainnetomeAtlasFan2016' \ # Multiple atlases.
-mod 'partcorr' 'sps' \ # The connectivity models.
-dt -thr 0.3 \ # The thresholding settings.
-n 'Default' \ # The resting-state network definition to restrict node-making from each of the input atlases.
-plt # Activate plotting.

Building upon the previous examples, let's say you now wish to analyze the Default and Executive Control Networks for this subject, but this time based on a custom parcellation file (MyCustomAtlas.nii.gz), defining your nodes as parcels (as opposed to spheres). You wish to fit a partial correlation model, iterate the pipeline over a range of densities (i.e. 0.05-0.10 with 1% step), and prune disconnected nodes:

pynets -id '002_1' '/Users/dPys/outputs/pynets' \
-func '/Users/dPys/PyNets/tests/examples/sub-002/ses-1/func/BOLD_PREPROCESSED_IN_ANAT_NATIVE.nii.gz' \ # The fMRI BOLD image data.
-anat '/Users/dPys/PyNets/tests/examples/sub-002/ses-1/anat/ANAT_PREPROCESSED_NATIVE.nii.gz' \ # The T1w anatomical image.
-a '/Users/dPys/PyNets/pynets/atlases/MyCustomAtlas.nii.gz' \ # A user-supplied atlas parcellation.
-mod 'partcorr' \ # The connectivity model.
-dt -min_thr 0.05 -max_thr 0.10 -step_thr 0.01 -p 1 \ # The thresholding settings.
-n 'Default' 'Cont' # The resting-state network definitions to restrict node-making from the input atlas.

Note

In general, parcels are preferable to spheres as nodes because parcels more closely respect cortical topography.

Building upon the previous examples, let's say you now wish to create a subject-specific atlas based on the subject's unique spatial-temporal profile. In this case, you can specify the path to a binarized mask within which to perform spatially-constrained spectral clustering, and you want to try this at multiple resolutions of k clusters/nodes (i.e. k=50, 100, 150). You also wish to fit both a partial correlation and sparse inverse covariance model, iterate the pipeline over a range of densities (i.e. 0.05-0.10 with 1% step), prune disconnected nodes, and plot your results:

pynets -id '002_1' '/Users/dPys/outputs/pynets' \
-func '/Users/dPys/PyNets/tests/examples/sub-002/ses-1/func/BOLD_PREPROCESSED_IN_ANAT_NATIVE.nii.gz' \ # The fMRI BOLD image data.
-anat '/Users/dPys/PyNets/tests/examples/sub-002/ses-1/anat/ANAT_PREPROCESSED_NATIVE.nii.gz' \ # The T1w anatomical image.
-mod 'partcorr' 'sps' \ # The connectivity models.
-cm '/Users/dPys/PyNets/tests/examples/MyClusteringROI.nii.gz' -k 50 100 150 -ct 'ward' \ # Node-making specification with spatially-constrained clustering.
-dt -min_thr 0.05 -max_thr 0.10 -step_thr 0.01 -p 1 \ # The thresholding settings.
-plt # Activate plotting.

You wish to generate a structural connectome using both deterministic and probabilistic ensemble tractography, based on Constrained Spherical Deconvolution (CSD), Constant Solid Angle (CSA), and Sparse Fascicle Model (SFM) reconstructions. You wish to use atlas parcels as defined by both DesikanKlein2012 and AALTzourioMazoyer2002, explore only those nodes belonging to the Default Mode Network, iterate over a range of graph densities (i.e. 0.05-0.10 with 1% step), and prune disconnected nodes:

pynets -id '002_1' '/Users/dPys/outputs/pynets' \
-dwi '/Users/dPys/PyNets/tests/examples/sub-002/ses-1/dwi/DWI_PREPROCESSED_NATIVE.nii.gz' \ # The dMRI diffusion-weighted image data.
-bval '/Users/dPys/PyNets/tests/examples/sub-002/ses-1/dwi/BVAL.bval' \ # The b-values.
-bvec '/Users/dPys/PyNets/tests/examples/sub-002/ses-1/dwi/BVEC.bvec' \ # The b-vectors.
-anat '/Users/dPys/PyNets/tests/examples/sub-002/ses-1/anat/ANAT_PREPROCESSED_NATIVE.nii.gz' \ # The T1w anatomical image.
-a '/Users/dPys/.atlases/DesikanKlein2012.nii.gz' '/Users/dPys/.atlases/AALTzourioMazoyer2002.nii.gz' \ # The atlases.
-mod 'csd' 'csa' 'sfm' \ # The connectivity models.
-dg 'prob' 'det'  \ # The tractography settings.
-dt -min_thr 0.05 -max_thr 0.10 -step_thr 0.01 -p 1 \ # The thresholding settings.
-n 'Default' # The resting-state network definition to restrict node-making from each of the input atlases.

Note

Spherical nodes can be used by including the -spheres flag, and for some coordinate-based atlases, like coords_power_2011 or coords_dosenbach_2010, only spheres are possible; in general, however, parcel volumes should be used as the default.

Note

Iterable sampling parameters specified at runtime should always be space-delimited.

There are many more runtime options than these examples demonstrate. To explore all of the possible hyperparameter combinations that pynets has to offer, see pynets -h. A full set of tutorials and python notebooks is coming soon.

Docker and AWS

PyNets includes an API for running pynets_bids or pynets in a Docker container, as well as using AWS Batch. The latter assumes that a dataset with BIDS derivatives is stored in an S3 bucket. Docker example:

docker run -ti --rm --privileged -v '/home/dPys/.aws/credentials:/home/neuro/.aws/credentials' dpys/pynets:latest pynets_bids 's3://hnu/HNU' '/outputs' participant func --participant_label 0025427 --session_label 1 -plug 'MultiProc' -pm '8,12' -work '/working' -config pynets/config/bids_config.json

Running a Singularity Image

If the data to be preprocessed is also on an HPC server, you are ready to run pynets, either manually or as a BIDS application. For example, where PARTICIPANT is a subject identifier and SESSION is a given scan session, we could sample an ensemble of connectomes manually as follows:

singularity exec -w \
 '/scratch/04171/dPys/pynets_singularity_latest-2020-02-07-eccf145ea766.img' \
 pynets /outputs \
 -p 1 -mod 'partcorr' 'corr' -min_thr 0.20 -max_thr 1.00 -step_thr 0.10 -sm 0 2 4 -hp 0 0.028 0.080 \
 -ct 'ward' -k 100 200 -cm '/working/MyClusteringROI.nii.gz' \
 -pm '24,48' \
 -norm 6 \
 -anat '/inputs/sub-PARTICIPANT/ses-SESSION/anat/sub-PARTICIPANT_space-anat_desc-preproc_T1w_brain.nii.gz' \
 -func '/inputs/sub-PARTICIPANT/ses-SESSION/func/sub-PARTICIPANT_ses-SESSION_task-rest_space-anat_desc-smoothAROMAnonaggr_bold_masked.nii.gz' \
 -conf '/inputs/sub-PARTICIPANT/ses-SESSION/func/sub-PARTICIPANT_ses-SESSION_task-rest_desc-confounds_regressors.tsv' \
 -m '/inputs/sub-PARTICIPANT/ses-SESSION/func/sub-PARTICIPANT_ses-SESSION_task-rest_space-anat_desc-brain_mask.nii.gz' \
 -id 'PARTICIPANT_SESSION' -plug 'MultiProc' -work '/tmp'

Note

Singularity by default exposes all environment variables from the host inside the container. Because of this, your host libraries (such as nipype) could accidentally be used instead of the ones inside the container if they are included in PYTHONPATH. To avoid this situation, we recommend using the --cleanenv singularity flag in production. For example:

singularity exec --cleanenv --no-home '/scratch/04171/dPys/pynets_latest-2016-12-04-5b74ad9a4c4d.img' \
  pynets /outputs \
  -p 1 -mod 'partcorr' 'corr' -min_thr 0.20 -max_thr 1.00 -step_thr 0.10 -sm 0 2 4 -hp 0 0.028 0.080 \
  -ct 'ward' -k 100 200 -cm '/working/MyClusteringROI.nii.gz' \
  -norm 6 \
  -anat '/inputs/sub-PARTICIPANT/ses-SESSION/anat/sub-PARTICIPANT_space-anat_desc-preproc_T1w_brain.nii.gz' \
  -func '/inputs/sub-PARTICIPANT/ses-SESSION/func/sub-PARTICIPANT_ses-SESSION_task-rest_space-anat_desc-smoothAROMAnonaggr_bold_masked.nii.gz' \
  -conf '/inputs/sub-PARTICIPANT/ses-SESSION/func/sub-PARTICIPANT_ses-SESSION_task-rest_desc-confounds_regressors.tsv' \
  -m '/inputs/sub-PARTICIPANT/ses-SESSION/func/sub-PARTICIPANT_ses-SESSION_task-rest_space-anat_desc-brain_mask.nii.gz' \
  -id 'PARTICIPANT_SESSION' -plug 'MultiProc' -work '/tmp' -pm '24,48'

or, unset the PYTHONPATH variable before running:

unset PYTHONPATH; singularity exec /scratch/04171/dPys/pynets_latest-2016-12-04-5b74ad9a4c4d.img \
  pynets /outputs \
  -p 1 -mod 'partcorr' 'corr' -min_thr 0.20 -max_thr 1.00 -step_thr 0.10 -sm 0 2 4 -hp 0 0.028 0.080 \
  -ct 'ward' -cm '/working/MyClusteringROI.nii.gz' -k 100 200 \
  -norm 6 \
  -anat '/inputs/sub-PARTICIPANT/ses-SESSION/anat/sub-PARTICIPANT_space-anat_desc-preproc_T1w_brain.nii.gz' \
  -func '/inputs/sub-PARTICIPANT/ses-SESSION/func/sub-PARTICIPANT_ses-SESSION_task-rest_space-anat_desc-smoothAROMAnonaggr_bold_masked.nii.gz' \
  -conf '/inputs/sub-PARTICIPANT/ses-SESSION/func/sub-PARTICIPANT_ses-SESSION_task-rest_desc-confounds_regressors.tsv' \
  -m '/inputs/sub-PARTICIPANT/ses-SESSION/func/sub-PARTICIPANT_ses-SESSION_task-rest_space-anat_desc-brain_mask.nii.gz' \
  -id 'PARTICIPANT_SESSION' -plug 'MultiProc' -work '/tmp' -pm '24,48'

Note

Depending on how Singularity is configured on your cluster it might or might not automatically bind (mount or expose) host folders to the container. If this is not done automatically you will need to bind the necessary folders using the -B <host_folder>:<container_folder> Singularity argument. For example:

singularity exec -B /work:/work \
  -B '/scratch/04171/dPys/pynets_out:/inputs,/scratch/04171/dPys/masks/PARTICIPANT_triple_network_masks_SESSION:/outputs' \
  /scratch/04171/dPys/pynets_latest-2016-12-04-5b74ad9a4c4d.img \
  pynets /outputs \
  -p 1 -mod 'partcorr' 'corr' -min_thr 0.20 -max_thr 1.00 -step_thr 0.10 -sm 0 2 4 -hp 0 0.028 0.080 \
  -ct 'ward' -k 100 200 -cm '/working/MyClusteringROI.nii.gz' \
  -norm 6 \
  -anat '/inputs/sub-PARTICIPANT/ses-SESSION/anat/sub-PARTICIPANT_space-anat_desc-preproc_T1w_brain.nii.gz' \
  -func '/inputs/sub-PARTICIPANT/ses-SESSION/func/sub-PARTICIPANT_ses-SESSION_task-rest_space-anat_desc-smoothAROMAnonaggr_bold_masked.nii.gz' \
  -conf '/inputs/sub-PARTICIPANT/ses-SESSION/func/sub-PARTICIPANT_ses-SESSION_task-rest_desc-confounds_regressors.tsv' \
  -m '/inputs/sub-PARTICIPANT/ses-SESSION/func/sub-PARTICIPANT_ses-SESSION_task-rest_space-anat_desc-brain_mask.nii.gz' \
  -id 'PARTICIPANT_SESSION' -plug 'MultiProc' -work '/tmp'  -pm '24,48'

Debugging

Logs and crashfiles are written to the <working dir>/Wf_single_subject_<id> directory. To include verbose debugging and resource benchmarking, run pynets with the -v flag.

Support and communication

The documentation of this project is found here: http://pynets.readthedocs.org/en/latest/.

All bugs, concerns and enhancement requests for this software can be submitted here: https://github.com/dPys/PyNets/issues.

If you have a problem or would like to ask a question about how to use pynets, please submit a question to NeuroStars.org with a pynets tag. NeuroStars.org is a platform similar to StackOverflow but dedicated to neuroinformatics.

All previous pynets questions are available here: http://neurostars.org/tags/pynets/

To participate in the pynets development-related discussions please use the following mailing list: http://mail.python.org/mailman/listinfo/neuroimaging Please add [pynets] to the subject line when posting on the mailing list.

Not running on a local machine? - Data transfer

If you intend to run pynets on a remote system, you will need to make your data available within that system first.

This can be accomplished with standard tools such as scp or rsync. Alternatively, more comprehensive solutions such as Datalad will handle data transfers with the appropriate settings and commands. Datalad also performs version control over your data.

Group Analysis

COMING SOON