# `deeplearning-prepare-data` - Prepare input data for deep learning with PyTorch

This pipeline prepares images generated by Clinica for use with the PyTorch deep learning library [Paszke et al., 2019]. Three types of tensors can be extracted: 3D images, 3D patches or 2D slices.
Outputs from the `t1-linear` and `t1-extensive` pipelines can be processed (for the moment, the latter is only available from ClinicaDL). These pipelines are designed as a prerequisite for the deep learning classification algorithms presented in [Wen et al., 2020] and showcased in the AD-DL framework.
## Prerequisites
You need to have performed the `t1-linear` pipeline or the [`t1-extensive` pipeline](https://clinicadl.readthedocs.io/en/latest/Preprocessing/T1_Extensive/) on your T1-weighted MRI. There is also an option to convert custom NIfTI images into tensors: choose the `custom` modality on the command line and provide a known suffix, as explained below.
## Dependencies
If you installed the core of Clinica, this pipeline needs no further dependencies.
## Running the pipeline
The pipeline can be run with the following command line:
`clinica run deeplearning-prepare-data <caps_directory> <modality> <tensor_format>`

where:

- `caps_directory` is the folder containing the results of the `t1-linear` pipeline and the output of the present command, both in a CAPS hierarchy.
- `modality` is the name of the preprocessing performed on the original images. It can be `t1-linear` or `t1-extensive`. You can choose `custom` if you want to get a tensor from a custom filename.
- `tensor_format` is the format of the extracted tensors. You can choose between `image` to convert the whole 3D image to a PyTorch tensor, `patch` to extract 3D patches, and `slice` to extract 2D slices from the image.
By default, the features are extracted from the cropped image (see the documentation of the `t1-linear` pipeline). You can deactivate this behaviour with the `--use_uncropped_image` flag.
Pipeline options if you use `patch` extraction:

- `--patch_size`: patch size. Default value: `50`.
- `--stride_size`: stride size. Default value: `50`.
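As an illustration, 3D patch extraction with a given patch size and stride can be sketched with `torch.Tensor.unfold`. The image shape below is an arbitrary example, and this sketch is not the pipeline's actual implementation:

```python
import torch

# Hypothetical sketch of 3D patch extraction with the pipeline's
# default patch_size=50 and stride_size=50 (not Clinica's actual code).
patch_size, stride = 50, 50
image = torch.rand(1, 169, 208, 179)  # example (C, D, H, W) T1w image

# Unfold each spatial dimension into windows of size `patch_size`
# taken every `stride` voxels.
patches = (
    image.unfold(1, patch_size, stride)
         .unfold(2, patch_size, stride)
         .unfold(3, patch_size, stride)
)
# Flatten the grid of windows into a list of isotropic 3D patches.
patches = patches.reshape(-1, patch_size, patch_size, patch_size)
print(patches.shape)  # torch.Size([36, 50, 50, 50]) for this image shape
```

With equal patch size and stride, as in the defaults, the patches tile the volume without overlap; a smaller stride would produce overlapping patches.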
Pipeline options if you use `slice` extraction:

- `--slice_direction`: slice direction. You can choose between `0` (sagittal plane), `1` (coronal plane) or `2` (axial plane). Default value: `0`.
- `--slice_mode`: slice mode. You can choose between `rgb` (will save the slice in three identical channels) or `single` (will save the slice in a single channel). Default value: `rgb`.
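Conceptually, slice extraction selects a 2D plane along the chosen axis and, in `rgb` mode, repeats it in three identical channels. The function and image shape below are illustrative assumptions, not the pipeline's actual code:

```python
import torch

# Hypothetical sketch of 2D slice extraction, assuming the 3D image
# axes are ordered (sagittal, coronal, axial) as in the options above.
def extract_slice(image, index, direction=0, mode="rgb"):
    # Select the 2D slice along the requested anatomical axis.
    slice_2d = image.select(direction, index)
    if mode == "rgb":
        # Duplicate the slice into three identical channels.
        return slice_2d.unsqueeze(0).repeat(3, 1, 1)
    return slice_2d.unsqueeze(0)  # single channel

image = torch.rand(169, 208, 179)  # example T1w volume
print(extract_slice(image, 80, direction=0, mode="rgb").shape)      # (3, 208, 179)
print(extract_slice(image, 100, direction=2, mode="single").shape)  # (1, 169, 208)
```

The `rgb` mode is convenient for reusing 2D convolutional networks pretrained on three-channel natural images.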
Pipeline options if you use the `custom` modality:

- `--custom_suffix`: suffix of the filename that should be converted to the tensor format. The output will be saved into a folder named `custom` but the processed files will keep their original name. For example, you can convert the grey matter segmentations registered to the Ixi549Space. These images are obtained by running the `t1-volume` pipeline (with SPM under the hood). The suffix for these images is `graymatter_space-Ixi549Space_modulated-off_probability.nii.gz`.
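Selecting files by a known suffix can be sketched as a recursive filename match. The directory layout and filenames below are fabricated for illustration; this is not how Clinica itself locates files:

```python
import tempfile
from pathlib import Path

# Hypothetical sketch of selecting custom files by suffix, mirroring
# what the `custom` modality does conceptually (paths are examples).
suffix = "graymatter_space-Ixi549Space_modulated-off_probability.nii.gz"
caps = Path(tempfile.mkdtemp())
(caps / ("sub-01_ses-M00_" + suffix)).touch()   # matches the suffix
(caps / "sub-01_ses-M00_T1w.nii.gz").touch()    # does not match

matches = sorted(p.name for p in caps.rglob("*" + suffix))
print(matches)  # only the file ending with the custom suffix
```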
Regarding the default values
When using patch or slice extraction, default values were set according to [Wen et al., 2020].
Note
The arguments common to all Clinica pipelines are described in Interacting with clinica.
Tip
Do not hesitate to type `clinica run deeplearning-prepare-data --help` to see the full list of parameters.
## Outputs

In the following subsections, files with the `.pt` extension denote tensors in PyTorch format.
The full list of output files can be found in the ClinicA Processed Structure (CAPS) Specification.
### Image-based outputs
Results are stored in the following folder of the CAPS hierarchy: `subjects/<participant_id>/<session_id>/deeplearning_prepare_data/image_based/t1_linear`.

For the `t1-linear` modality, the main output files are:

- `<source_file>_space-MNI152NLin2009cSym[_desc-Crop]_res-1x1x1_T1w.pt`: tensor version of the 3D T1w image registered to the `MNI152NLin2009cSym` template and optionally cropped.
Corresponding folder and file names are obtained for files processed with the `t1-extensive` modality.

For files processed with the `custom` modality, files are stored in the following folder: `subjects/<participant_id>/<session_id>/deeplearning_prepare_data/image_based/custom`.
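Since the `.pt` outputs are tensors in PyTorch's serialization format, they can be read back with `torch.load`. The example below emulates one output file in a temporary directory; the filename and tensor shape follow the naming scheme above but are otherwise assumptions:

```python
import os
import tempfile

import torch

# Emulate one image-based output file (the real pipeline writes it).
out_dir = tempfile.mkdtemp()
fname = os.path.join(
    out_dir,
    "sub-01_ses-M00_space-MNI152NLin2009cSym_desc-Crop_res-1x1x1_T1w.pt",
)
torch.save(torch.rand(1, 169, 208, 179), fname)  # stand-in for the real image

# Load the tensor back for use in a PyTorch Dataset or model.
image = torch.load(fname)
print(image.shape)  # torch.Size([1, 169, 208, 179])
```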
### Patch-based outputs
Results are stored in the following folder of the CAPS hierarchy: `subjects/<participant_id>/<session_id>/deeplearning_prepare_data/patch_based/t1_linear`.

The main output files are:

- `<source_file>_space-MNI152NLin2009cSym[_desc-Crop]_res-1x1x1_patchsize-<N>_stride-<M>_patch-<i>_T1w.pt`: tensor version of the `<i>`-th 3D isotropic patch of size `<N>` with a stride of `<M>`. Each patch is extracted from the T1w image registered to the `MNI152NLin2009cSym` template and optionally cropped.
### Slice-based outputs
Results are stored in the following folder of the CAPS hierarchy: `subjects/<participant_id>/<session_id>/deeplearning_prepare_data/slice_based/t1_linear`.

The main output files are:

- `<source_file>_space-MNI152NLin2009cSym[_desc-Crop]_res-1x1x1_axis-{sag|cor|axi}_channel-{single|rgb}_T1w.pt`: tensor version of the `<i>`-th 2D slice in the sagittal (`sag`), coronal (`cor`) or axial (`axi`) plane, using three identical channels (`rgb`) or one channel (`single`). Each slice is extracted from the T1w image registered to the `MNI152NLin2009cSym` template and optionally cropped.
## Going further
- You can now perform classification based on deep learning using the AD-DL framework presented in [Wen et al., 2020].
## Describing this pipeline in your paper
Example of paragraph:

These results have been obtained using the `deeplearning-prepare-data` pipeline of Clinica [Routier et al.; Wen et al., 2020]. More precisely,

- 3D images
- 3D patches with a patch size of `<patch_size>` and a stride size of `<stride_size>`
- 2D slices in the {sagittal | coronal | axial} plane, saved in {three identical channels | a single channel}

were extracted and converted to PyTorch tensors [Paszke et al., 2019].
Tip
Easily access the papers cited on this page on Zotero.