From samuel.garcia at cnrs.fr Mon Oct 2 05:33:00 2017
From: samuel.garcia at cnrs.fr (Samuel Garcia)
Date: Mon, 2 Oct 2017 11:33:00 +0200
Subject: [Neuroimaging] ANN: neo 0.5.2 released.
Message-ID:

Dear list,

I am happy to announce the release of neo 0.5.2.

Neo is a package for representing electrophysiology data in Python, together with support for reading a wide range of neurophysiology file formats.

Changes:
  * Removed support for Python 2.6
  * Pickling AnalogSignal and SpikeTrain now preserves parent objects
  * Added NSDFIO, which reads and writes NSDF files
  * Fixes and improvements to PlexonIO, NixIO, BlackrockIO, NeuralynxIO, IgorIO, ElanIO, MicromedIO, TdtIO and others.

Note: neo will now enter a new cycle of development to refactor the IO API. The goal is to provide fast access to long/big files.

Best,

Samuel
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From vinit.k.srivastava at gmail.com Wed Oct 4 18:16:56 2017
From: vinit.k.srivastava at gmail.com (Vinit Srivastava)
Date: Wed, 4 Oct 2017 18:16:56 -0400
Subject: [Neuroimaging] voxel axes reorientation
Message-ID:

Hi,

I've been using nib.as_closest_canonical to reorient the voxel axes to RAS+.

Is there an option to reorient images to a chosen format other than canonical RAS format? For example, I'd like to reorient from LPS to LAS.

Thanks,

Vinny
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From christopher.cox-2 at manchester.ac.uk Thu Oct 12 09:15:37 2017
From: christopher.cox-2 at manchester.ac.uk (Christopher Cox)
Date: Thu, 12 Oct 2017 13:15:37 +0000
Subject: [Neuroimaging] Question about nibabel.processing
Message-ID:

Hello,

My first question should really be: how do you review the archive for this mailing list? I imagine this question has been answered, but I do not know where to look.

I am attempting something very simple: I want to resample a volume. The nifti file on disk is 4D, and contains two volumes. Nibabel.processing.resample_to_output() will not work with 4D data. Fortunately for me, I only care about the first volume in this dataset. So I should have no problem.

But... nothing I can think to try works. I apologize it is difficult to read, but I've condensed several of my attempts into an interactive python session, and copied all of my work and the errors I am getting as a postscript. I tried to color code, but that might not come through.

My first attempt was to pass the sliced data object (as in, data = img.get_data()), but that is a memory map and lacks the metadata the function requires. I tried passing the image (as in, img = nib.load(...)), but it cannot be sliced like the data can. I then tried simply loading only a single volume into memory (as in, vol0 = img.dataobj[...,0]), but that doesn't work either (vol0 is a numpy array, and img is still seen as 4D). I tried updating the shape metadata in img, but that's read only.

Thank you very much for helping me figure out the intended way to use this function.

Best,
Chris

Python 3.6.0 (v3.6.0:41df79263a11, Dec 23 2016, 08:06:12) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import nibabel as nib
>>> nib.__version__
'2.1.0'
>>> import nibabel.processing
>>> img = nib.load('Template_6.nii')
>>> img.shape
(121, 145, 121, 2)
>>> data = img.get_data()
>>> r = nib.processing.resample_to_output(data[...,0], [3,3,3])
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\mbmhscc4\AppData\Roaming\Python\Python36\site-packages\nibabel\processing.py", line 242, in resample_to_output
    out_vox_map = vox2out_vox((in_img.shape, in_img.affine), voxel_sizes)
AttributeError: 'memmap' object has no attribute 'affine'
>>> r = nib.processing.resample_to_output(img, [3,3,3])
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\mbmhscc4\AppData\Roaming\Python\Python36\site-packages\nibabel\processing.py", line 242, in resample_to_output
    out_vox_map = vox2out_vox((in_img.shape, in_img.affine), voxel_sizes)
  File "C:\Users\mbmhscc4\AppData\Roaming\Python\Python36\site-packages\nibabel\spaces.py", line 76, in vox2out_vox
    raise ValueError('This function can only deal with 3D images')
ValueError: This function can only deal with 3D images
>>> r = nib.processing.resample_to_output(img[...,0], [3,3,3])
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: __getitem__() takes 1 positional argument but 2 were given
>>> vol0 = img.dataobj[...,0]
>>> r = nib.processing.resample_to_output(vol0, [3,3,3])
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\mbmhscc4\AppData\Roaming\Python\Python36\site-packages\nibabel\processing.py", line 242, in resample_to_output
    out_vox_map = vox2out_vox((in_img.shape, in_img.affine), voxel_sizes)
AttributeError: 'numpy.ndarray' object has no attribute 'affine'

# This is after the previous attempt, loading only the first volume. img remains 4D
>>> r = nib.processing.resample_to_output(img, [3,3,3])
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\mbmhscc4\AppData\Roaming\Python\Python36\site-packages\nibabel\processing.py", line 242, in resample_to_output
    out_vox_map = vox2out_vox((in_img.shape, in_img.affine), voxel_sizes)
  File "C:\Users\mbmhscc4\AppData\Roaming\Python\Python36\site-packages\nibabel\spaces.py", line 76, in vox2out_vox
    raise ValueError('This function can only deal with 3D images')
ValueError: This function can only deal with 3D images

>>> img.shape = img.shape[0:3]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: can't set attribute

Christopher R. Cox, PhD
Neuroscience and Aphasia Research Unit (NARU)
University of Manchester, UK
christopher.cox-2 at manchester.ac.uk
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From zoltickb at mail.nih.gov Thu Oct 12 10:33:32 2017
From: zoltickb at mail.nih.gov (Zoltick, Brad (NIH/NIMH) [E])
Date: Thu, 12 Oct 2017 14:33:32 +0000
Subject: [Neuroimaging] Question about nibabel.processing
In-Reply-To:
References:
Message-ID:

Hello Chris,

I have used these two functions to convert a 4d nifti dataset to N, 3d volumes. Perhaps this will help you. You can easily store the first volume and then resample.
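For example, a minimal sketch of that idea applied directly to the file from the session above (untested here, and assuming the 'Template_6.nii' name) is to wrap the first volume in a new Nifti1Image and hand that to resample_to_output(); the full 4D-to-3D conversion script follows below.

import nibabel as nib
import nibabel.processing

# Load the 4D image and pull out volume 0 as a 3D numpy array
img4d = nib.load('Template_6.nii')            # shape (121, 145, 121, 2)
vol0_data = img4d.dataobj[..., 0]             # 3D array for the first volume

# Re-wrap it as a 3D image, reusing the original affine and header
vol0 = nib.Nifti1Image(vol0_data, img4d.affine, img4d.header)

# The input is now 3D, so resample_to_output() accepts it
resampled = nib.processing.resample_to_output(vol0, [3, 3, 3])
nib.save(resampled, 'Template_6_vol0_3mm.nii')  # output name is just an example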
____________________________________________________________
#!/usr/bin/env python3
#
# Use nibabel tools to convert a 4D (space+time) single file nifti
# file to a set of single volume pairs of (.img,.hdr) files

import os
import sys
import numpy as np
import nibabel as nib


def nii_4d_to_3d_pair(fname):
    """ convert a nifti1 4D image to N nifti1 3d pair (.img,.hdr) images """
    if not fname.endswith('nii'):
        sys.exit('filename must be a 4D nifti file: {}'.format(fname))
    img4d = nib.load(fname)
    basename = img4d.get_filename()[:-4]
    affine = img4d.affine       # save affine
    header = img4d.header       # save header
    numvols = img4d.shape[3]    # 4th dimension is time (number of volumes)
    data = img4d.get_data()     # 4d numpy array
    for i in range(numvols):
        img3d = nib.Nifti1Pair(data[:,:,:,i], affine, header)
        img3d_name = '{0}_{1:02d}.img'.format(basename, i)
        nib.save(img3d, img3d_name)


def nii_4d_to_3d(fname):
    """ convert a nifti1 4D image to N nifti1 3d single file images """
    if not fname.endswith('nii'):
        sys.exit('filename must be a 4D nifti file: {}'.format(fname))
    img4d = nib.load(fname)
    basename = img4d.get_filename()[:-4]
    affine = img4d.affine       # save affine
    header = img4d.header       # save header
    numvols = img4d.shape[3]    # 4th dimension is time (number of volumes)
    data = img4d.get_data()     # 4d numpy array
    for i in range(numvols):
        img3d = nib.Nifti1Image(data[:,:,:,i], affine, header)
        img3d_name = '{0}_{1:02d}.nii'.format(basename, i)
        nib.save(img3d, img3d_name)


if __name__ == '__main__':
    usage = 'conv_4dto3d.py file.nii'
    if len(sys.argv) < 2:
        sys.exit(usage)
    fname = sys.argv[1]
    nii_4d_to_3d_pair(fname)
__________________________________

Brad J Zoltick
Computer Engineer
NIH/NIMH
Building 10, Room 3C-210
Bethesda, MD 20892-1394
Tel (301)402-3232
Fax (301)480-7795

________________________________
From: Christopher Cox
Sent: Thursday, October 12, 2017 9:15 AM
To: neuroimaging at python.org
Subject: [Neuroimaging] Question about nibabel.processing

Hello,

My first question should really be: how do you review the archive for this mailing list? I imagine this question has been answered, but I do not know where to look.

I am attempting something very simple: I want to resample a volume. The nifti file on disk is 4D, and contains two volumes. Nibabel.processing.resample_to_output() will not work with 4D data. Fortunately for me, I only care about the first volume in this dataset. So I should have no problem.

But... nothing I can think to try works. I apologize it is difficult to read, but I've condensed several of my attempts into an interactive python session, and copied all of my work and the errors I am getting as a postscript. I tried to color code, but that might not come through.

My first attempt was to pass the sliced data object (as in, data = img.get_data()), but that is a memory map and lacks the metadata the function requires. I tried passing the image (as in, img = nib.load(...)), but it cannot be sliced like the data can. I then tried simply loading only a single volume into memory (as in, vol0 = img.dataobj[...,0]), but that doesn't work either (vol0 is a numpy array, and img is still seen as 4D). I tried updating the shape metadata in img, but that's read only.

Thank you very much for helping me figure out the intended way to use this function.

Best,
Chris

Python 3.6.0 (v3.6.0:41df79263a11, Dec 23 2016, 08:06:12) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import nibabel as nib >>> nib.__version__ '2.1.0' >>> import nibabel.processing >>> img = nib.load('Template_6.nii') >>> img.shape (121, 145, 121, 2) >>> data = img.get_data() >>> r = nib.processing.resample_to_output(data[...,0], [3,3,3]) Traceback (most recent call last): File "", line 1, in File "C:\Users\mbmhscc4\AppData\Roaming\Python\Python36\site-packages\nibabel\processing.py", line 242, in resample_to_output out_vox_map = vox2out_vox((in_img.shape, in_img.affine), voxel_sizes) AttributeError: 'memmap' object has no attribute 'affine' >>> r = nib.processing.resample_to_output(img, [3,3,3]) Traceback (most recent call last): File "", line 1, in File "C:\Users\mbmhscc4\AppData\Roaming\Python\Python36\site-packages\nibabel\processing.py", line 242, in resample_to_output out_vox_map = vox2out_vox((in_img.shape, in_img.affine), voxel_sizes) File "C:\Users\mbmhscc4\AppData\Roaming\Python\Python36\site-packages\nibabel\spaces.py", line 76, in vox2out_vox raise ValueError('This function can only deal with 3D images') ValueError: This function can only deal with 3D images >>> r = nib.processing.resample_to_output(img[...,0], [3,3,3]) Traceback (most recent call last): File "", line 1, in TypeError: __getitem__() takes 1 positional argument but 2 were given >>> vol0 = img.dataobj[...,0] >>> r = nib.processing.resample_to_output(vol0, [3,3,3]) Traceback (most recent call last): File "", line 1, in File "C:\Users\mbmhscc4\AppData\Roaming\Python\Python36\site-packages\nibabel\processing.py", line 242, in resample_to_output out_vox_map = vox2out_vox((in_img.shape, in_img.affine), voxel_sizes) AttributeError: 'numpy.ndarray' object has no attribute 'affine' # This is after the previous attempt, loading only the first volume. img remains 4D >>> r = nib.processing.resample_to_output(img, [3,3,3]) Traceback (most recent call last): File "", line 1, in File "C:\Users\mbmhscc4\AppData\Roaming\Python\Python36\site-packages\nibabel\processing.py", line 242, in resample_to_output out_vox_map = vox2out_vox((in_img.shape, in_img.affine), voxel_sizes) File "C:\Users\mbmhscc4\AppData\Roaming\Python\Python36\site-packages\nibabel\spaces.py", line 76, in vox2out_vox raise ValueError('This function can only deal with 3D images') ValueError: This function can only deal with 3D images >>> img.shape = img.shape[0:3] Traceback (most recent call last): File "", line 1, in AttributeError: can't set attribute Christopher R. Cox, PhD Neuroscience and Aphasia Research Unit (NARU) University of Manchester, UK christopher.cox-2 at manchester.ac.uk -------------- next part -------------- An HTML attachment was scrubbed... URL: From effigies at bu.edu Thu Oct 12 10:24:37 2017 From: effigies at bu.edu (Christopher Markiewicz) Date: Thu, 12 Oct 2017 10:24:37 -0400 Subject: [Neuroimaging] Question about nibabel.processing In-Reply-To: References: Message-ID: Hi, What you need to do here is to get a 3D image from your 4D image. There's a function four_to_three that will give you a list of 3D images. >>> imgs = nib.four_to_three(img) >>> r = nib.processing.resample_to_output(imgs[0], [3, 3, 3]) Chris On Thu, Oct 12, 2017 at 9:15 AM, Christopher Cox < christopher.cox-2 at manchester.ac.uk> wrote: > Hello, > > > > My first question should really be: how do you review the archive for this > mailing list? I imagine this question has been answered, but I do not know > where to look. > > > > I am attempting something very simple: I want to resample a volume. The > nifti file on disk is 4D, and contains two volumes. 
> Nibabel.processing.resample_to_output() will not work with 4D data. > Fortunately for me, I only care about the first volume in this dataset. So > I should have no problem. > > > > But... nothing I can think to try works. I apologize it is difficult to > read, but I?ve condensed several of my attempts into an interactive python > session, and copied all of my work and the errors I am getting as a post > script. I tried to color code, but that might not come through. > > > > My first attempt is to try and pass the sliced data object (as in, data = > img.get_data()), but that is a memory map as lacks the metadata the > function requires. I tried passing the image (as in, img = nib.load(?)), > but it cannot be sliced like the data can. I then tried simply loading only > a single volume into memory (as in, vol0 = img.datobj[?,0]), but that > doesn?t work either (vol0 is a numpy array, and img is still seen as 4D). > I tried updating the shape metadata in img, but that?s read only. > > > > Thank you very much for helping me figure out the intended way to use this > function. > > > > Best, > > Chris > > > > Python 3.6.0 (v3.6.0:41df79263a11, Dec 23 2016, 08:06:12) [MSC v.1900 64 > bit (AMD64)] on win32 > > Type "help", "copyright", "credits" or "license" for more information. > > >>> import nibabel as nib > > >>> nib.__version__ > > '2.1.0' > > > > >>> import nibabel.processing > > >>> img = nib.load('Template_6.nii') > > >>> img.shape > > (121, 145, 121, 2) > > > > >>> data = img.get_data() > > >>> r = nib.processing.resample_to_output(data[...,0], [3,3,3]) > > Traceback (most recent call last): > > File "", line 1, in > > File "C:\Users\mbmhscc4\AppData\Roaming\Python\Python36\site- > packages\nibabel\processing.py", line 242, in resample_to_output > > out_vox_map = vox2out_vox((in_img.shape, in_img.affine), voxel_sizes) > > *AttributeError: 'memmap' object has no attribute 'affine'* > > > > >>> r = nib.processing.resample_to_output(img, [3,3,3]) > > Traceback (most recent call last): > > File "", line 1, in > > File "C:\Users\mbmhscc4\AppData\Roaming\Python\Python36\site- > packages\nibabel\processing.py", line 242, in resample_to_output > > out_vox_map = vox2out_vox((in_img.shape, in_img.affine), voxel_sizes) > > File "C:\Users\mbmhscc4\AppData\Roaming\Python\Python36\site-packages\nibabel\spaces.py", > line 76, in vox2out_vox > > raise ValueError('This function can only deal with 3D images') > > *ValueError: This function can only deal with 3D images* > > > > >>> r = nib.processing.resample_to_output(img[...,0], [3,3,3]) > > Traceback (most recent call last): > > File "", line 1, in > > *TypeError: __getitem__() takes 1 positional argument but 2 were given* > > > > >>> vol0 = img.dataobj[...,0] > > >>> r = nib.processing.resample_to_output(vol0, [3,3,3]) > > Traceback (most recent call last): > > File "", line 1, in > > File "C:\Users\mbmhscc4\AppData\Roaming\Python\Python36\site- > packages\nibabel\processing.py", line 242, in resample_to_output > > out_vox_map = vox2out_vox((in_img.shape, in_img.affine), voxel_sizes) > > *AttributeError: 'numpy.ndarray' object has no attribute 'affine'* > > > > # This is after the previous attempt, loading only the first volume. 
img > remains 4D > > >>> r = nib.processing.resample_to_output(img, [3,3,3]) > > Traceback (most recent call last): > > File "", line 1, in > > File "C:\Users\mbmhscc4\AppData\Roaming\Python\Python36\site- > packages\nibabel\processing.py", line 242, in resample_to_output > > out_vox_map = vox2out_vox((in_img.shape, in_img.affine), voxel_sizes) > > File "C:\Users\mbmhscc4\AppData\Roaming\Python\Python36\site-packages\nibabel\spaces.py", > line 76, in vox2out_vox > > raise ValueError('This function can only deal with 3D images') > > *ValueError: This function can only deal with 3D images* > > > > > > >>> img.shape = img.shape[0:3] > > Traceback (most recent call last): > > File "", line 1, in > > *AttributeError: can't set attribute* > > > > > > Christopher R. Cox, PhD > > Neuroscience and Aphasia Research Unit (NARU) > > University of Manchester, UK > > christopher.cox-2 at manchester.ac.uk > > > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From effigies at bu.edu Thu Oct 12 10:45:45 2017 From: effigies at bu.edu (Christopher Markiewicz) Date: Thu, 12 Oct 2017 10:45:45 -0400 Subject: [Neuroimaging] voxel axes reorientation In-Reply-To: References: Message-ID: The basics are this: img = nib.load(fname) orig_ornt = nib.orientations.io_orientation(img.affine) targ_ornt = nib.orientations.axcodes2ornt('LAS') ornt_xfm = nib.orientations.ornt_transform(orig_ornt, targ_ornt) If you're using the latest master, you can simply do: img_LAS = img.as_reoriented(ornt_xfm) Otherwise, you'll need to transform the data and the affine: data = nib.orientations.apply_orientation(img.dataobj, ornt_xfm) affine = img.affine.dot(nib.orientations.inv_ornt_aff(ornt_xfm, img.shape)) img_LAS = img.__class__(data, affine, img.header) (You should obviously check my work.) Chris On Wed, Oct 4, 2017 at 6:16 PM, Vinit Srivastava < vinit.k.srivastava at gmail.com> wrote: > Hi, > > I've been using nib.as_closest_canonical to reorient the voxel axes to > RAS+. > > Is there an optio to reorient images to a chosen format other than > canonical RAS format? For example, I'd like to reorient from LPS to LAS. > > Thanks, > > Vinny > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From effigies at bu.edu Fri Oct 13 16:30:41 2017 From: effigies at bu.edu (Christopher Markiewicz) Date: Fri, 13 Oct 2017 16:30:41 -0400 Subject: [Neuroimaging] ANN: Nibabel release 2.2 Message-ID: Hi all, Nibabel 2.2 has been released, featuring some very long-awaited CIFTI support. Other particularly interesting new features are TCK streamlines and indexed_gzip support, thanks to Marc-Alexandre C?t? and Paul McCarthy, respectively. Thanks to all contributors and reviewers, and of course Matthew Brett for all of his maintenance work. Please cite using the Zenodo DOI: https://doi.org/10.5281/zenodo.1011207 The full Changelog follows: New feature release for the 2.2 series. Most work on NiBabel so far has been by Matthew Brett (MB), Michael Hanke (MH) Ben Cipollini (BC), Marc-Alexandre C?t? (MC), Chris Markiewicz (CM), Stephan Gerhard (SG) and Eric Larson (EL). References like "pr/298" refer to github pull request numbers. 
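As a quick, untested sketch of two of the additions listed below (the file names here are just placeholders):

import nibabel as nib

# The unified streamlines API now also reads MRtrix TCK files
tck = nib.streamlines.load('bundle.tck')      # placeholder file name
streamlines = tck.streamlines                 # ArraySequence of (N, 3) point arrays

# get_fdata() always returns scaled floating point data,
# independent of the on-disk dtype
img = nib.load('anat.nii.gz')                 # placeholder file name
data = img.get_fdata()                        # float64 array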
New features ------------ * CIFTI support (pr/249) (Satra Ghosh, Michiel Cottaar, BC, CM, Demian Wassermann, MB) * Support for MRtrix TCK streamlines file format (pr/486) (MC, reviewed by MB, Arnaud Bore, J-Donald Tournier, Jean-Christophe Houde) * Added ``get_fdata()`` as default method to retrieve scaled floating point data from ``DataobjImage``s (pr/551) (MB, reviewed by CM, Satra Ghosh) Enhancements ------------ * Support for alternative header field name variants in .PAR files (pr/507) (Gregory R. Lee) * Various enhancements to streamlines API by MC: support for reading TRK version 1 (pr/512); concatenation of tractograms using `+`/`+=` operators (pr/495); function to concatenate multiple ArraySequence objects (pr/494) * Support for numpy 1.12 (pr/500, pr/502) (MC, MB) * Allow dtype specifiers as fileslice input (pr/485) (MB) * Support "headerless" ArrayProxy specification, enabling memory-efficient ArrayProxy reshaping (pr/521) (CM) * Allow unknown NIfTI intent codes, add FSL codes (pr/528) (Paul McCarthy) * Improve error handling for ``img.__getitem__`` (pr/533) (Ariel Rokem) * Delegate reorientation to SpatialImage classes (pr/544) (Mark Hymers, CM, reviewed by MB) * Enable using ``indexed_gzip`` to reduce memory usage when reading from gzipped NIfTI and MGH files (pr/552) (Paul McCarthy, reviewed by MB, CM) Bug fixes --------- * Miscellaneous MINC reader fixes (pr/493) (Robert D. Vincent, reviewed by CM, MB) * Fix corner case in ``wrapstruct.get`` (pr/516) (Paul McCarthy, reviewed by CM, MB) Maintenance ----------- * Fix documentation errors (pr/517, pr/536) (Fernando Perez, Venky Reddy) * Documentation update (pr/514) (Ivan Gonzalez) * Update testing to use pre-release builds of dependencies (pr/509) (MB) * Better warnings when nibabel not on path (pr/503) (MB) API changes and deprecations ---------------------------- * ``header`` argument to ``ArrayProxy.__init__`` is renamed to ``spec`` * Deprecation of ``header`` property of ``ArrayProxy`` object, for removal in 3.0 * ``wrapstruct.get`` now returns entries evaluating ``False``, instead of ``None`` * ``DataobjImage.get_data`` to be deprecated April 2018, scheduled for removal April 2020 Enjoy, Chris Markiewicz -------------- next part -------------- An HTML attachment was scrubbed... URL: From bertrand.thirion at inria.fr Fri Oct 13 16:44:14 2017 From: bertrand.thirion at inria.fr (bthirion) Date: Fri, 13 Oct 2017 22:44:14 +0200 Subject: [Neuroimaging] ANN: Nibabel release 2.2 In-Reply-To: References: Message-ID: <4848da74-0f8f-39c5-a765-b253e4d089e3@inria.fr> Great, thx ! Bertrand On 13/10/2017 22:30, Christopher Markiewicz wrote: > Hi all, > > Nibabel 2.2 has been released, featuring some very long-awaited CIFTI > support. > > Other particularly interesting new features are TCK streamlines and > indexed_gzip support, thanks to Marc-Alexandre C?t? and Paul McCarthy, > respectively. > > Thanks to all contributors and reviewers, and of course Matthew Brett > for all of his maintenance work. > > Please cite using the Zenodo DOI: https://doi.org/10.5281/zenodo.1011207 > > > The full Changelog follows: > > New feature release for the 2.2 series. > > Most work on NiBabel so far has been by Matthew Brett (MB), Michael > Hanke (MH) > Ben Cipollini (BC), Marc-Alexandre C?t? (MC), Chris Markiewicz (CM), > Stephan > Gerhard (SG) and Eric Larson (EL). > > References like "pr/298" refer to github pull request numbers. > > New features > ------------ > > * CIFTI support (pr/249) (Satra Ghosh, Michiel Cottaar, BC, CM, Demian > ? 
Wassermann, MB) > * Support for MRtrix TCK streamlines file format (pr/486) (MC, reviewed by > ? MB, Arnaud Bore, J-Donald Tournier, Jean-Christophe Houde) > * Added ``get_fdata()`` as default method to retrieve scaled floating > point > ? data from ``DataobjImage``s (pr/551) (MB, reviewed by CM, Satra Ghosh) > > Enhancements > ------------ > > * Support for alternative header field name variants in .PAR files > ? (pr/507) (Gregory R. Lee) > * Various enhancements to streamlines API by MC: support for reading TRK > ? version 1 (pr/512); concatenation of tractograms using `+`/`+=` > operators > ? (pr/495); function to concatenate multiple ArraySequence objects > (pr/494) > * Support for numpy 1.12 (pr/500, pr/502) (MC, MB) > * Allow dtype specifiers as fileslice input (pr/485) (MB) > * Support "headerless" ArrayProxy specification, enabling memory-efficient > ? ArrayProxy reshaping (pr/521) (CM) > * Allow unknown NIfTI intent codes, add FSL codes (pr/528) (Paul McCarthy) > * Improve error handling for ``img.__getitem__`` (pr/533) (Ariel Rokem) > * Delegate reorientation to SpatialImage classes (pr/544) (Mark > Hymers, CM, > ? reviewed by MB) > * Enable using ``indexed_gzip`` to reduce memory usage when reading from > ? gzipped NIfTI and MGH files (pr/552) (Paul McCarthy, reviewed by MB, CM) > > Bug fixes > --------- > > * Miscellaneous MINC reader fixes (pr/493) (Robert D. Vincent, > reviewed by CM, > ? MB) > * Fix corner case in ``wrapstruct.get`` (pr/516) (Paul McCarthy, > reviewed by > ? CM, MB) > > Maintenance > ----------- > > * Fix documentation errors (pr/517, pr/536) (Fernando Perez, Venky Reddy) > * Documentation update (pr/514) (Ivan Gonzalez) > * Update testing to use pre-release builds of dependencies (pr/509) (MB) > * Better warnings when nibabel not on path (pr/503) (MB) > > API changes and deprecations > ---------------------------- > > * ``header`` argument to ``ArrayProxy.__init__`` is renamed to ``spec`` > * Deprecation of ``header`` property of ``ArrayProxy`` object, for > removal in > ? 3.0 > * ``wrapstruct.get`` now returns entries evaluating ``False``, instead > of ``None`` > * ``DataobjImage.get_data`` to be deprecated April 2018, scheduled for > removal > ? April 2020 > > > Enjoy, > > Chris Markiewicz > > > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.varoquaux at normalesup.org Fri Oct 13 17:11:45 2017 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Fri, 13 Oct 2017 23:11:45 +0200 Subject: [Neuroimaging] ANN: Nibabel release 2.2 In-Reply-To: References: Message-ID: <20171013211145.GF2693161@phare.normalesup.org> Very nice! Ga?l On Fri, Oct 13, 2017 at 04:30:41PM -0400, Christopher Markiewicz wrote: > Hi all, > Nibabel 2.2 has been released, featuring some very long-awaited CIFTI support. > Other particularly interesting new features are TCK streamlines and > indexed_gzip support, thanks to Marc-Alexandre C?t? and Paul McCarthy, > respectively. > Thanks to all contributors and reviewers, and of course Matthew Brett for all > of his maintenance work. > Please cite using the Zenodo DOI:?https://doi.org/10.5281/zenodo.1011207 > The full Changelog follows: > New feature release for the 2.2 series. > Most work on NiBabel so far has been by Matthew Brett (MB), Michael Hanke (MH) > Ben Cipollini (BC), Marc-Alexandre C?t? 
(MC), Chris Markiewicz (CM), Stephan > Gerhard (SG) and Eric Larson (EL). > References like "pr/298" refer to github pull request numbers. > New features > ------------ > * CIFTI support (pr/249) (Satra Ghosh, Michiel Cottaar, BC, CM, Demian > ? Wassermann, MB) > * Support for MRtrix TCK streamlines file format (pr/486) (MC, reviewed by > ? MB, Arnaud Bore, J-Donald Tournier, Jean-Christophe Houde) > * Added ``get_fdata()`` as default method to retrieve scaled floating point > ? data from ``DataobjImage``s (pr/551) (MB, reviewed by CM, Satra Ghosh) > Enhancements > ------------ > * Support for alternative header field name variants in .PAR files > ? (pr/507) (Gregory R. Lee) > * Various enhancements to streamlines API by MC: support for reading TRK > ? version 1 (pr/512); concatenation of tractograms using `+`/`+=` operators > ? (pr/495); function to concatenate multiple ArraySequence objects (pr/494) > * Support for numpy 1.12 (pr/500, pr/502) (MC, MB) > * Allow dtype specifiers as fileslice input (pr/485) (MB) > * Support "headerless" ArrayProxy specification, enabling memory-efficient > ? ArrayProxy reshaping (pr/521) (CM) > * Allow unknown NIfTI intent codes, add FSL codes (pr/528) (Paul McCarthy) > * Improve error handling for ``img.__getitem__`` (pr/533) (Ariel Rokem) > * Delegate reorientation to SpatialImage classes (pr/544) (Mark Hymers, CM, > ? reviewed by MB) > * Enable using ``indexed_gzip`` to reduce memory usage when reading from > ? gzipped NIfTI and MGH files (pr/552) (Paul McCarthy, reviewed by MB, CM) > Bug fixes > --------- > * Miscellaneous MINC reader fixes (pr/493) (Robert D. Vincent, reviewed by CM, > ? MB) > * Fix corner case in ``wrapstruct.get`` (pr/516) (Paul McCarthy, reviewed by > ? CM, MB) > Maintenance > ----------- > * Fix documentation errors (pr/517, pr/536) (Fernando Perez, Venky Reddy) > * Documentation update (pr/514) (Ivan Gonzalez) > * Update testing to use pre-release builds of dependencies (pr/509) (MB) > * Better warnings when nibabel not on path (pr/503) (MB) > API changes and deprecations > ---------------------------- > * ``header`` argument to ``ArrayProxy.__init__`` is renamed to ``spec`` > * Deprecation of ``header`` property of ``ArrayProxy`` object, for removal in > ? 3.0 > * ``wrapstruct.get`` now returns entries evaluating ``False``, instead of > ``None`` > * ``DataobjImage.get_data`` to be deprecated April 2018, scheduled for removal > ? April 2020 > Enjoy, > Chris Markiewicz > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging -- Gael Varoquaux Researcher, INRIA Parietal NeuroSpin/CEA Saclay , Bat 145, 91191 Gif-sur-Yvette France Phone: ++ 33-1-69-08-79-68 http://gael-varoquaux.info http://twitter.com/GaelVaroquaux From blaise.frederick at gmail.com Fri Oct 13 17:14:00 2017 From: blaise.frederick at gmail.com (Blaise Frederick) Date: Fri, 13 Oct 2017 17:14:00 -0400 Subject: [Neuroimaging] ANN: Nibabel release 2.2 In-Reply-To: <20171013211145.GF2693161@phare.normalesup.org> References: <20171013211145.GF2693161@phare.normalesup.org> Message-ID: Very cool! Can?t wait to try out the CIFTI stuff. Blaise > On Oct 13, 2017, at 5:11 PM, Gael Varoquaux wrote: > > Very nice! > > Ga?l > > On Fri, Oct 13, 2017 at 04:30:41PM -0400, Christopher Markiewicz wrote: >> Hi all, > >> Nibabel 2.2 has been released, featuring some very long-awaited CIFTI support. 
> >> Other particularly interesting new features are TCK streamlines and >> indexed_gzip support, thanks to Marc-Alexandre C?t? and Paul McCarthy, >> respectively. > >> Thanks to all contributors and reviewers, and of course Matthew Brett for all >> of his maintenance work. > >> Please cite using the Zenodo DOI: https://doi.org/10.5281/zenodo.1011207 > > >> The full Changelog follows: > >> New feature release for the 2.2 series. > >> Most work on NiBabel so far has been by Matthew Brett (MB), Michael Hanke (MH) >> Ben Cipollini (BC), Marc-Alexandre C?t? (MC), Chris Markiewicz (CM), Stephan >> Gerhard (SG) and Eric Larson (EL). > >> References like "pr/298" refer to github pull request numbers. > >> New features >> ------------ > >> * CIFTI support (pr/249) (Satra Ghosh, Michiel Cottaar, BC, CM, Demian >> Wassermann, MB) >> * Support for MRtrix TCK streamlines file format (pr/486) (MC, reviewed by >> MB, Arnaud Bore, J-Donald Tournier, Jean-Christophe Houde) >> * Added ``get_fdata()`` as default method to retrieve scaled floating point >> data from ``DataobjImage``s (pr/551) (MB, reviewed by CM, Satra Ghosh) > >> Enhancements >> ------------ > >> * Support for alternative header field name variants in .PAR files >> (pr/507) (Gregory R. Lee) >> * Various enhancements to streamlines API by MC: support for reading TRK >> version 1 (pr/512); concatenation of tractograms using `+`/`+=` operators >> (pr/495); function to concatenate multiple ArraySequence objects (pr/494) >> * Support for numpy 1.12 (pr/500, pr/502) (MC, MB) >> * Allow dtype specifiers as fileslice input (pr/485) (MB) >> * Support "headerless" ArrayProxy specification, enabling memory-efficient >> ArrayProxy reshaping (pr/521) (CM) >> * Allow unknown NIfTI intent codes, add FSL codes (pr/528) (Paul McCarthy) >> * Improve error handling for ``img.__getitem__`` (pr/533) (Ariel Rokem) >> * Delegate reorientation to SpatialImage classes (pr/544) (Mark Hymers, CM, >> reviewed by MB) >> * Enable using ``indexed_gzip`` to reduce memory usage when reading from >> gzipped NIfTI and MGH files (pr/552) (Paul McCarthy, reviewed by MB, CM) > >> Bug fixes >> --------- > >> * Miscellaneous MINC reader fixes (pr/493) (Robert D. 
Vincent, reviewed by CM, >> MB) >> * Fix corner case in ``wrapstruct.get`` (pr/516) (Paul McCarthy, reviewed by >> CM, MB) > >> Maintenance >> ----------- > >> * Fix documentation errors (pr/517, pr/536) (Fernando Perez, Venky Reddy) >> * Documentation update (pr/514) (Ivan Gonzalez) >> * Update testing to use pre-release builds of dependencies (pr/509) (MB) >> * Better warnings when nibabel not on path (pr/503) (MB) > >> API changes and deprecations >> ---------------------------- > >> * ``header`` argument to ``ArrayProxy.__init__`` is renamed to ``spec`` >> * Deprecation of ``header`` property of ``ArrayProxy`` object, for removal in >> 3.0 >> * ``wrapstruct.get`` now returns entries evaluating ``False``, instead of >> ``None`` >> * ``DataobjImage.get_data`` to be deprecated April 2018, scheduled for removal >> April 2020 > > >> Enjoy, > >> Chris Markiewicz > > >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging > > > -- > Gael Varoquaux > Researcher, INRIA Parietal > NeuroSpin/CEA Saclay , Bat 145, 91191 Gif-sur-Yvette France > Phone: ++ 33-1-69-08-79-68 > http://gael-varoquaux.info http://twitter.com/GaelVaroquaux > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging From vinit.k.srivastava at gmail.com Fri Oct 13 17:58:22 2017 From: vinit.k.srivastava at gmail.com (Vinit Srivastava) Date: Fri, 13 Oct 2017 17:58:22 -0400 Subject: [Neuroimaging] voxel axes reorientation In-Reply-To: References: Message-ID: Thanks Chris! On Thu, Oct 12, 2017 at 10:45 AM, Christopher Markiewicz wrote: > The basics are this: > > img = nib.load(fname) > orig_ornt = nib.orientations.io_orientation(img.affine) > targ_ornt = nib.orientations.axcodes2ornt('LAS') > ornt_xfm = nib.orientations.ornt_transform(orig_ornt, targ_ornt) > > If you're using the latest master, you can simply do: > > img_LAS = img.as_reoriented(ornt_xfm) > > Otherwise, you'll need to transform the data and the affine: > > data = nib.orientations.apply_orientation(img.dataobj, ornt_xfm) > affine = img.affine.dot(nib.orientations.inv_ornt_aff(ornt_xfm, > img.shape)) > img_LAS = img.__class__(data, affine, img.header) > > (You should obviously check my work.) > > Chris > > > On Wed, Oct 4, 2017 at 6:16 PM, Vinit Srivastava < > vinit.k.srivastava at gmail.com> wrote: > >> Hi, >> >> I've been using nib.as_closest_canonical to reorient the voxel axes to >> RAS+. >> >> Is there an optio to reorient images to a chosen format other than >> canonical RAS format? For example, I'd like to reorient from LPS to LAS. >> >> Thanks, >> >> Vinny >> >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> >> > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From matthew.brett at gmail.com Fri Oct 13 19:24:56 2017 From: matthew.brett at gmail.com (Matthew Brett) Date: Sat, 14 Oct 2017 00:24:56 +0100 Subject: [Neuroimaging] ANN: Nibabel release 2.2 In-Reply-To: References: Message-ID: Hi Chris, On Fri, Oct 13, 2017 at 9:30 PM, Christopher Markiewicz wrote: > Hi all, > > Nibabel 2.2 has been released, featuring some very long-awaited CIFTI > support. > > Other particularly interesting new features are TCK streamlines and > indexed_gzip support, thanks to Marc-Alexandre C?t? and Paul McCarthy, > respectively. > > Thanks to all contributors and reviewers, and of course Matthew Brett for > all of his maintenance work. > > Please cite using the Zenodo DOI: https://doi.org/10.5281/zenodo.1011207 A tip of my broadest hat, for doing the release, and for all the work leading up to it. You've been tireless with your reviews, as well as contributing a lot of code, and that's really made a big difference in keeping nibabel moving, and making it a cheerful project to work on, Thanks much, Matthew From pauldmccarthy at gmail.com Sat Oct 14 13:42:36 2017 From: pauldmccarthy at gmail.com (paul mccarthy) Date: Sat, 14 Oct 2017 18:42:36 +0100 Subject: [Neuroimaging] ANN: Nibabel release 2.2 In-Reply-To: References: Message-ID: Thanks guys - you've been great to work with, and CIFTI is a big step forward! Cheers, Paul On 14 October 2017 at 00:24, Matthew Brett wrote: > Hi Chris, > > On Fri, Oct 13, 2017 at 9:30 PM, Christopher Markiewicz > wrote: > > Hi all, > > > > Nibabel 2.2 has been released, featuring some very long-awaited CIFTI > > support. > > > > Other particularly interesting new features are TCK streamlines and > > indexed_gzip support, thanks to Marc-Alexandre C?t? and Paul McCarthy, > > respectively. > > > > Thanks to all contributors and reviewers, and of course Matthew Brett for > > all of his maintenance work. > > > > Please cite using the Zenodo DOI: https://doi.org/10.5281/zenodo.1011207 > > A tip of my broadest hat, for doing the release, and for all the work > leading up to it. You've been tireless with your reviews, as well as > contributing a lot of code, and that's really made a big difference in > keeping nibabel moving, and making it a cheerful project to work on, > > Thanks much, > > Matthew > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > -------------- next part -------------- An HTML attachment was scrubbed... URL: From keren.meron at mail.huji.ac.il Sat Oct 14 12:42:55 2017 From: keren.meron at mail.huji.ac.il (Keren Meron) Date: Sat, 14 Oct 2017 19:42:55 +0300 Subject: [Neuroimaging] nibabel question Message-ID: I have a Nifti object generated from a directory of dicom files. It seems that the Nifti should know how many frames it holds, but all I can find in the header info is the shape. The problem is, the shape is at times (num_images, x, y) and at times (x, y, num_images). The only nibabel functions I found relevant where from the Ecat library. I am not familiar with ecat format, but I want my method to work for any nii file. I am working with the nibabel library. Is there a way to retrieve the number of images in a Nifti file? Virus-free. www.avg.com <#DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From effigies at bu.edu Sat Oct 14 17:05:31 2017 From: effigies at bu.edu (Christopher Markiewicz) Date: Sat, 14 Oct 2017 17:05:31 -0400 Subject: [Neuroimaging] nibabel question In-Reply-To: References: Message-ID: I think there's a little bit of a DICOM-NIfTI vocabulary mismatch here. In my mind, a NIfTI file is an "image", and contains one or more "volumes" of a given size. The three spatial dimensions may be called x, y, and z, and the affine matrix encoded by the header translates voxel indices along these dimensions into coordinates in RAS space, which is millimeters to the right, anterior and superior of an origin (usually somewhere in the brain). x, y and z may correspond to phase-, frequency- and slice-encoding directions. Even if you don't have phase/frequency directions, usually you have slices. >From your email, I'm guessing you're using both "frame" and "image" to refer to what I'd call a slice, or a 2D matrix that, when stacked in series, produce a volume. So I'm interpreting your question as "How do I determine the number of slices in a NIfTI image." which really hinges upon determining the slice direction. Assuming you've loaded a file: >>> img = nib.load(fname) You can look at the shape (as you have): >>> img.shape (x, y, z) This might be enough, as often the slice dimension is different from the other two, which are the same size. You can also look at the zooms: >>> img.header.get_zooms() (1.0, 1.0, 1.0) Zooms are the width of the voxels in each direction. Again, often the slice dimension is the odd one out. You can also look at orientation information: >>> nib.aff2axcodes(img.afffine) # Will give some sequence of R/L, A/P, S/I, e.g. ('L', 'A', 'S') If you know your slice direction was superior/inferior, then the z-direction is what you want, and you can now go back to `img.shape` and see how many slices you had. Finally, there is a chance that the slice dimension is actually encoded in the NIfTI header: >>> freq_dim, phase_dim, slice_dim = img.header.get_dim_info() If slice_dim is not `None`, then you can get the number of slices with >>> img.shape[slice_dim] Hopefully the vocabulary mismatch didn't send me off in entirely the wrong direction, and this was somewhat helpful. -- Chris Markiewicz On Sat, Oct 14, 2017 at 12:42 PM, Keren Meron wrote: > I have a Nifti object generated from a directory of dicom files. It seems > that the Nifti should know how many frames it holds, but all I can find in > the header info is the shape. The problem is, the shape is at times > (num_images, x, y) and at times (x, y, num_images). > > The only nibabel functions I found relevant where from the Ecat library. I > am not familiar with ecat format, but I want my method to work for any nii > file. I am working with the nibabel library. > > Is there a way to retrieve the number of images in a Nifti file? > > > Virus-free. > www.avg.com > > <#m_8563802930650991843_DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2> > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From elef at indiana.edu Sun Oct 15 22:27:46 2017 From: elef at indiana.edu (Eleftherios Garyfallidis) Date: Mon, 16 Oct 2017 02:27:46 +0000 Subject: [Neuroimaging] [DIPY] Question about whole tracks *.trk registration using SLR In-Reply-To: References: <63C189EA-8788-4FB9-BDCA-5D3DAC191164@mgh.harvard.edu> Message-ID: Hi Rodrigo, Apologies for the delay. But hey I got a solution for you! I hope you are excited :) Look at this gist https://gist.github.com/Garyfallidis/51e34aab47de99eafa887b2b818384ea This code shows how to do step by step whole brain SLR and I made it work with the specific datasets that you gave me. The specific datasets don't need exactly a whole brain SLR because the one brain has much fewer structures than the other brain but still SLR is robust to incomplete datasets such as these. Give it a go and give us feedback! :) All the best, Eleftherios p.s. Notice that to load the streamlines I used a more recent API than what you originally used. Make sure you have a recent version of dipy and nibabel. On Mon, Sep 25, 2017 at 9:50 AM Perea Camargo, Rodrigo Dennis < RPEREACAMARGO at mgh.harvard.edu> wrote: > Hi Eleftherios, > I am included both *.trk files in here: > http://www.nmr.mgh.harvard.edu/~rdp20/drop/trk_slf/ > IIT2mean.trk was a template TRK downloaded from the tract_querier example ( > http://tract-querier.readthedocs.io/en/latest/ ) and > ADRC_TMP_wholeBrain.trk.gz was generated using dsi_studio ( > http://dsi-studio.labsolver.org/) > > Thanks again, > > Rodrigo > > > > On Sep 21, 2017, at 3:46 PM, Eleftherios Garyfallidis > wrote: > > Hi Rodrigo, > > Whole brain SLR is a great idea! Thank you for your question. :) > > Have in mind that nibabel.trackvis is the old API. Use > nibabel.streamlines.load instead. > > Can you please share with me (off the list) your trk files so that I can > correct your > script? > > A wild guess is that you are not loading the streamlines correctly. > > Did you use DIPY to create those trks or another software? > > Also, another suggestion is to run QuickBundles first to reduce the size > of your > datasets. I mean before starting the SLR. > > Here is a sample script (in one of my branches). > > https://github.com/Garyfallidis/dipy/blob/recobundles/dipy/workflows/align.py > > We are working on making something similar available asap in master. > > Best, > Eleftherios > > > > On Thu, Sep 21, 2017 at 1:36 AM Perea Camargo, Rodrigo Dennis < > RPEREACAMARGO at mgh.harvard.edu> wrote: > > Hi Eleftherios & DIPY community, >> I am trying to register two whole brain tracts as shown in your recent >> publication using the streamline-based linear registration. I am diving >> into Dipy now and I might have some problems loading the files or >> registering them. >> Following your example ( >> http://nipy.org/dipy/examples_built/bundle_registration.html#example-bundle-registration), >> there are 2 issues that may arise when I try it. >> >> 1) How to load *.trk files is not shown in your example (it looks like >> you added to cingulum bundles from your dataset using the dipy.data value?) >> So I follow this tutorial ( >> http://nipy.org/dipy/examples_built/streamline_formats.html ) to load >> the streamline and hdr but I am not sure if this is the format SLF() wants >> it in. >> >> 2) So then I try using the srr.optimize( ) function but I get the >> following problems (check below). >> >> I hope you can help me. 
>> Rodrigo >> >> >> Here is my Jupyter notebook with the errors I found (any help will be >> greatly appreciate it): >> >> >> >> >> >> >> The information in this e-mail is intended only for the person to whom it >> is >> addressed. If you believe this e-mail was sent to you in error and the >> e-mail >> contains patient information, please contact the Partners Compliance >> HelpLine at >> http://www.partners.org/complianceline . If the e-mail was sent to you >> in error >> but does not contain patient information, please contact the sender and >> properly >> dispose of the e-mail. >> > _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> > > _______________________________________________ > > > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > -------------- next part -------------- An HTML attachment was scrubbed... URL: From RPEREACAMARGO at mgh.harvard.edu Tue Oct 17 21:14:19 2017 From: RPEREACAMARGO at mgh.harvard.edu (Perea Camargo, Rodrigo Dennis) Date: Wed, 18 Oct 2017 01:14:19 +0000 Subject: [Neuroimaging] [DIPY] Question about whole tracks *.trk registration using SLR In-Reply-To: References: <63C189EA-8788-4FB9-BDCA-5D3DAC191164@mgh.harvard.edu> Message-ID: Hey Eleftherios, I tried the code and it works! ? (Time to digest/adapt some some python code now? ). Since I just started poking some python code (and for other new users in case needed), I had to install the vtk package (?conda install -c clinicalgraphics vtk?) within python 3 (anaconda dist). Also, as suggested I `pip --upgrade nibabel` and ` pip --upgrade dipy` Thank you! Rodrigo From: Neuroimaging [mailto:neuroimaging-bounces+rpereacamargo=mgh.harvard.edu at python.org] On Behalf Of Eleftherios Garyfallidis Sent: Sunday, October 15, 2017 10:28 PM To: Neuroimaging analysis in Python Subject: Re: [Neuroimaging] [DIPY] Question about whole tracks *.trk registration using SLR Hi Rodrigo, Apologies for the delay. But hey I got a solution for you! I hope you are excited :) Look at this gist https://gist.github.com/Garyfallidis/51e34aab47de99eafa887b2b818384ea This code shows how to do step by step whole brain SLR and I made it work with the specific datasets that you gave me. The specific datasets don't need exactly a whole brain SLR because the one brain has much fewer structures than the other brain but still SLR is robust to incomplete datasets such as these. Give it a go and give us feedback! :) All the best, Eleftherios p.s. Notice that to load the streamlines I used a more recent API than what you originally used. Make sure you have a recent version of dipy and nibabel. On Mon, Sep 25, 2017 at 9:50 AM Perea Camargo, Rodrigo Dennis > wrote: Hi Eleftherios, I am included both *.trk files in here: http://www.nmr.mgh.harvard.edu/~rdp20/drop/trk_slf/ IIT2mean.trk was a template TRK downloaded from the tract_querier example (http://tract-querier.readthedocs.io/en/latest/ ) and ADRC_TMP_wholeBrain.trk.gz was generated using dsi_studio (http://dsi-studio.labsolver.org/) Thanks again, Rodrigo On Sep 21, 2017, at 3:46 PM, Eleftherios Garyfallidis > wrote: Hi Rodrigo, Whole brain SLR is a great idea! Thank you for your question. :) Have in mind that nibabel.trackvis is the old API. 
Use nibabel.streamlines.load instead. Can you please share with me (off the list) your trk files so that I can correct your script? A wild guess is that you are not loading the streamlines correctly. Did you use DIPY to create those trks or another software? Also, another suggestion is to run QuickBundles first to reduce the size of your datasets. I mean before starting the SLR. Here is a sample script (in one of my branches). https://github.com/Garyfallidis/dipy/blob/recobundles/dipy/workflows/align.py We are working on making something similar available asap in master. Best, Eleftherios On Thu, Sep 21, 2017 at 1:36 AM Perea Camargo, Rodrigo Dennis > wrote: Hi Eleftherios & DIPY community, I am trying to register two whole brain tracts as shown in your recent publication using the streamline-based linear registration. I am diving into Dipy now and I might have some problems loading the files or registering them. Following your example ( http://nipy.org/dipy/examples_built/bundle_registration.html#example-bundle-registration), there are 2 issues that may arise when I try it. 1) How to load *.trk files is not shown in your example (it looks like you added to cingulum bundles from your dataset using the dipy.data value?) So I follow this tutorial (http://nipy.org/dipy/examples_built/streamline_formats.html ) to load the streamline and hdr but I am not sure if this is the format SLF() wants it in. 2) So then I try using the srr.optimize( ) function but I get the following problems (check below). I hope you can help me. Rodrigo Here is my Jupyter notebook with the errors I found (any help will be greatly appreciate it): The information in this e-mail is intended only for the person to whom it is addressed. If you believe this e-mail was sent to you in error and the e-mail contains patient information, please contact the Partners Compliance HelpLine at http://www.partners.org/complianceline . If the e-mail was sent to you in error but does not contain patient information, please contact the sender and properly dispose of the e-mail. _______________________________________________ Neuroimaging mailing list Neuroimaging at python.org https://mail.python.org/mailman/listinfo/neuroimaging _______________________________________________ Neuroimaging mailing list Neuroimaging at python.org https://mail.python.org/mailman/listinfo/neuroimaging _______________________________________________ Neuroimaging mailing list Neuroimaging at python.org https://mail.python.org/mailman/listinfo/neuroimaging -------------- next part -------------- An HTML attachment was scrubbed... URL: From tpengo at umn.edu Wed Oct 18 11:03:12 2017 From: tpengo at umn.edu (Thomas Pengo) Date: Wed, 18 Oct 2017 10:03:12 -0500 Subject: [Neuroimaging] [Job] Neuroimaging job at U of Minnesota Message-ID: Dear all the University of Minnesota Informatics Institute (UMII) seeks a skilled, staff scientist to serve the MR Neuroimaging research community at the University of Minnesota. The successful candidate is expected to: 1. carry out consultations with researchers across the University and serve in the role of a staff scientist who provides technical assistance across multiple research groups 2. provide technical and analytic support in designing and supporting neuroimaging analysis pipelines that will be run on platforms of the Minnesota Supercomputing Institute (MSI) 3. appropriately document work so that researchers are able to prepare grants, research papers and reports based on the analysis 4. 
simultaneously support different versions of analysis tools that might require different libraries and operating system versions
5. develop new computational environments, such as Docker, for major neuroimaging analysis platforms (AFNI, FSL, etc.) in the HPC environment at MSI; and
6. build computational workflows that scale and work across a broad range of file system storage.

The successful candidate will join a growing group of imaging analysts in the University of Minnesota Informatics Institute. The Imaging Informatics Manager at the University of Minnesota Informatics Institute will serve as the supervisor. On a daily basis, the analyst will work closely with MR neuroimaging researchers, and staff at the Minnesota Supercomputing Institute.

This position is both posted as a Researcher 5, Research Scientist (minimum required qualification: PhD) under Job Opening 319904 (https://z.umn.edu/umii-neuroimaging-job-5) and as a Researcher 3, Assistant Scientist (minimum required qualification: Bachelor's degree) under Job Opening 319899 (https://z.umn.edu/umii-neuroimaging-job-3).

--

Qualifications

Required:
- Background in scientific image processing, analysis or multi-dimensional signal processing
- Have proficiency in at least one programming language (Python, MATLAB, Java, C++, ...)
- Be familiar with the Linux operating system
- Have excellent interpersonal and organizational skills
- Be able to articulate processes verbally and in writing
- Be highly motivated to learn new technical skills

Additional requirements for the Researcher 5 position (Job Opening 319904):
- Doctoral degree in a science or engineering field
- Experience in MR neuroimaging image analysis (fMRI, DWI, or structural)
- Significant research experience and ability to work independently
- Experience working in a team environment with people with diverse expertise
- Publications, software, or grant applications demonstrating creativity and independence

Preferred:
- (for Researcher 3 position) Master's degree in a science or engineering field
- (for Researcher 3 position) Experience in MR neuroimaging image analysis (fMRI, DWI, structural)
- (for both positions) Experience in medical image analysis
- (for both positions) Experience with container technology, such as Docker
- (for both positions) Experience in HPC environments
- (for both positions) Experience with neuroimaging analysis software packages such as FSL, AFNI, SPM
- (for both positions) Experience working in a team environment with people with diverse expertise

________________________________________
Dr. Thomas Pengo
Imaging Informatics Manager
University of Minnesota Informatics Institute
University of Minnesota, Twin Cities Campus
Cancer and Cardiovascular Research Building
2231 6th St SE
Minneapolis, MN 55455
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From alexandre.gramfort at inria.fr Fri Oct 20 03:10:39 2017
From: alexandre.gramfort at inria.fr (Alexandre Gramfort)
Date: Fri, 20 Oct 2017 09:10:39 +0200
Subject: [Neuroimaging] [ANN] MNE-Python 0.15
Message-ID:

Hi,

We are very pleased to announce the new 0.15 release of MNE-Python. This release comes with new features, bug fixes, and many improvements to usability, visualization, and documentation.

A few highlights
============

- We reinvented our documentation. Our website now unifies tutorials, examples and background information into one coherent narrative structure while preserving context. Check it out!
- Add mne.decoding.cross_val_multiscore() to allow scoring of multiple tasks, typically used with the new mne.decoding.SlidingEstimator - Add mne.decoding.ReceptiveField module for modeling neural responses to continuous stimulation - Add mne.decoding.SPoC to fit and apply spatial filters based on continuous target variables - mne.io.Raw.plot() butterfly mode (toggled with ?b? key) - IO support for EGI MFF format - mne.fit_dipole() confidence intervals, number of free parameters, and ?? - Add mne.VectorSourceEstimate class which enables working with both source power and dipole orientations; use option pick_ori='vector' to mne.minimum_norm.apply_inverse() - New high-frequency somatosensory MEG dataset - Add unit-noise-gain beamformer and neural activity index (weight normalization) to LCMV beamformer with weight_norm parameter - Add filtering functions mne.Epochs.filter() and mne.Evoked.filter() , as well as pad argument to mne.io.Raw.filter() - Enable morphing between hemispheres with mne.compute_morph_matrix() - Add interactive time cursor and category/amplitude status message in window for evoked plot - We exposed a rank parameter in mne.viz.evoked.plot_evoked_white() that allows for correcting the scaling of the visualization on the spot in cases where the rank estimate of the covariance is not accurate (for certain SSS?d data) Notable API changes ================ - ICA channel names have now been reformatted to start from zero, e.g. "ICA000", to match indexing schemes in mne.preprocessing.ICA - Add skip_by_annotation to mne.io.Raw.filter() to process data concatenated with e.g. mne.concatenate_raws() separately - Add new filtering mode fir_design='firwin' (default in the next 0.16 release) that gets improved attenuation using fewer samples compared to fir_design='firwin2' (default in 0.15) - Add mne.beamformer.make_lcmv() and mne.beamformer.apply_lcmv() , mne.beamformer.apply_lcmv_epochs() , and mne.beamformer.apply_lcmv_raw() to enable the separate computation and application of LCMV beamformer weights - mne.set_eeg_reference() and related methods (e.g. mne.io.Raw.set_eeg_reference() ) have a new argument projection, which if set to False directly applies an average reference instead of adding an SSP projector - mne.find_events() mask_type parameter will change from 'not_and' to 'and' d - picks parameter in mne.beamformer.lcmv() , mne.beamformer.lcmv_epochs() , mne.beamformer.lcmv_raw() , mne.beamformer.tf_lcmv() and mne.beamformer.rap_music() is now deprecated - The keyword argument frequencies has been deprecated in favor of freqs in various time-frequency functions, e.g. mne.time_frequency.tfr_array_morlet() - Deprecate force_fixed and surf_ori in mne.read_forward_solution() - The behavior of 'mean_flip' label-flipping in mne.extract_label_time_course() and related functions has been changed such that the flip, instead of having arbitrary sign, maximally aligns in the positive direction of the normals of the label For a full list of improvements and API changes, see: http://martinos.org/mne/stable/whats_new.html#version-0-15 To install the latest release the following command should do the job: pip install --upgrade --user mne As usual we welcome your bug reports, feature requests, critiques, and contributions. 
Some links:

- https://github.com/mne-tools/mne-python (code + readme on how to install)
- http://martinos.org/mne/stable/ (full MNE documentation)

Follow us on Twitter: https://twitter.com/mne_news

Regards,
The MNE-Python developers

People who contributed to this release (in alphabetical order):

* akshay0724
* Alejandro Weinstein
* Alexander Rudiuk
* Alexandre Barachant
* Alexandre Gramfort
* Andrew Dykstra
* Britta Westner
* Chris Bailey
* Chris Holdgraf
* Christian Brodbeck
* Christopher Holdgraf
* Clemens Brunner
* Cristóbal Moënne-Loccoz
* Daniel McCloy
* Daniel Strohmeier
* Denis A. Engemann
* Emily P. Stephen
* Eric Larson
* Fede Raimondo
* Jaakko Leppakangas
* Jean-Baptiste Schiratti
* Jean-Remi King
* Jesper Duemose Nielsen
* Joan Massich
* Jon Houck
* Jona Sassenhagen
* Jussi Nurminen
* Laetitia Grabot
* Laura Gwilliams
* Luke Bloy
* Lukáš Hejtmánek
* Mainak Jas
* Marijn van Vliet
* Mathurin Massias
* Matt Boggess
* Mikolaj Magnuski
* Nicolas Barascud
* Nicole Proulx
* Phillip Alday
* Ramonapariciog Apariciogarcia
* Robin Tibor Schirrmeister
* Rodrigo Hübner
* S. M. Gutstein
* Simon Kern
* Teon Brooks
* Yousra Bekhti

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From satra at mit.edu Mon Oct 23 12:26:43 2017
From: satra at mit.edu (Satrajit Ghosh)
Date: Mon, 23 Oct 2017 12:26:43 -0400
Subject: [Neuroimaging] [ANN] MNE-Python 0.15
In-Reply-To:
References:
Message-ID:

hello

congratulations. the documentation looks really nice!

cheers,

satra

On Fri, Oct 20, 2017 at 3:10 AM, Alexandre Gramfort <alexandre.gramfort at inria.fr> wrote:

> Hi,
>
> We are very pleased to announce the new 0.15 release of MNE-Python.
> [...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From RPEREACAMARGO at mgh.harvard.edu Mon Oct 23 19:30:03 2017
From: RPEREACAMARGO at mgh.harvard.edu (Perea Camargo, Rodrigo Dennis)
Date: Mon, 23 Oct 2017 23:30:03 +0000
Subject: [Neuroimaging] [DIPY] Question about whole tracks *.trk registration using SLR
In-Reply-To:
References: <63C189EA-8788-4FB9-BDCA-5D3DAC191164@mgh.harvard.edu>
Message-ID:

Hi Eleftherios and Dipy Community,

I believe my registration is working properly (yay, thanks!). Now I have the following issues.

1. After looking at some documentation, I tried to save the moved streamlines using the following code:

```
from nibabel.streamlines import save
from nibabel.streamlines import Tractogram

# dname, centroids2 and cur_matrix come from earlier in the session
sname = 'move_trk.trk'
cname = dname + sname
save(Tractogram(centroids2, affine_to_rasmm=cur_matrix), cname)
```

I was able to save the moved streamlines, but when I load them up in dsi_studio they do not overlap with each other (as the window.show(ren) shows). I believe the problem might be with the 2nd argument when using save() [affine_to_rasmm=??]. So my question is, how can I save the streamlines so that they are in the same space as the ref_sstr? I tried different matrices provided by the initial vox_to_ras in the ref_sstr or the mov_sstr rather than an identity matrix, but this doesn't seem to solve my question :/

2. Given that this will work, I am now confused about how I could apply this transformation and extract diffusivity metrics (e.g. FA, AxD, RD, etc.) in the space registered to the template. Should I apply a matrix transformation to my nifti images?

Thanks in advance,
Rodrigo

From: Neuroimaging on behalf of Eleftherios Garyfallidis
Reply-To: Neuroimaging analysis in Python
Date: Sunday, October 15, 2017 at 10:29 PM
To: Neuroimaging analysis in Python
Subject: Re: [Neuroimaging] [DIPY] Question about whole tracks *.trk registration using SLR

Hi Rodrigo,

Apologies for the delay. But hey I got a solution for you!
I hope you are excited :)

Look at this gist
https://gist.github.com/Garyfallidis/51e34aab47de99eafa887b2b818384ea

This code shows how to do step-by-step whole brain SLR, and I made it work with the specific datasets that you gave me. The specific datasets don't need exactly a whole brain SLR because the one brain has much fewer structures than the other brain, but still SLR is robust to incomplete datasets such as these.

Give it a go and give us feedback! :)

All the best,
Eleftherios

p.s. Notice that to load the streamlines I used a more recent API than what you originally used. Make sure you have a recent version of dipy and nibabel.

On Mon, Sep 25, 2017 at 9:50 AM Perea Camargo, Rodrigo Dennis wrote:

Hi Eleftherios,

I have included both *.trk files here: http://www.nmr.mgh.harvard.edu/~rdp20/drop/trk_slf/

IIT2mean.trk was a template TRK downloaded from the tract_querier example (http://tract-querier.readthedocs.io/en/latest/) and ADRC_TMP_wholeBrain.trk.gz was generated using dsi_studio (http://dsi-studio.labsolver.org/).

Thanks again,
Rodrigo

On Sep 21, 2017, at 3:46 PM, Eleftherios Garyfallidis wrote:

Hi Rodrigo,

Whole brain SLR is a great idea! Thank you for your question. :)

Have in mind that nibabel.trackvis is the old API. Use nibabel.streamlines.load instead.

Can you please share with me (off the list) your trk files so that I can correct your script? A wild guess is that you are not loading the streamlines correctly. Did you use DIPY to create those trks or another software?

Also, another suggestion is to run QuickBundles first to reduce the size of your datasets. I mean before starting the SLR. Here is a sample script (in one of my branches):
https://github.com/Garyfallidis/dipy/blob/recobundles/dipy/workflows/align.py

We are working on making something similar available asap in master.

Best,
Eleftherios

On Thu, Sep 21, 2017 at 1:36 AM Perea Camargo, Rodrigo Dennis wrote:

Hi Eleftherios & DIPY community,

I am trying to register two whole brain tracts as shown in your recent publication using the streamline-based linear registration. I am diving into Dipy now and I might have some problems loading the files or registering them. Following your example (http://nipy.org/dipy/examples_built/bundle_registration.html#example-bundle-registration), there are 2 issues that may arise when I try it.

1) How to load *.trk files is not shown in your example (it looks like you added two cingulum bundles from your dataset using the dipy.data value?). So I followed this tutorial (http://nipy.org/dipy/examples_built/streamline_formats.html) to load the streamlines and header, but I am not sure if this is the format SLR wants them in.

2) So then I try using the srr.optimize() function but I get the following problems (check below). I hope you can help me.

Rodrigo

Here is my Jupyter notebook with the errors I found (any help will be greatly appreciated):

The information in this e-mail is intended only for the person to whom it is addressed. If you believe this e-mail was sent to you in error and the e-mail contains patient information, please contact the Partners Compliance HelpLine at http://www.partners.org/complianceline . If the e-mail was sent to you in error but does not contain patient information, please contact the sender and properly dispose of the e-mail.
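For readers following along, here is a rough sketch of the "QuickBundles first, then SLR" suggestion above, using the newer nibabel.streamlines API rather than nibabel.trackvis. This is not the gist linked above: the file names are simply the ones mentioned in this thread, and the point count and clustering threshold are arbitrary placeholders.

```
import nibabel as nib
from dipy.segment.clustering import QuickBundles
from dipy.tracking.streamline import set_number_of_points, transform_streamlines
from dipy.align.streamlinear import StreamlineLinearRegistration

# Newer streamlines API (file names taken from this thread; adjust to your data)
static = list(nib.streamlines.load('IIT2mean.trk').streamlines)
moving = list(nib.streamlines.load('ADRC_TMP_wholeBrain.trk').streamlines)

# Resample so every streamline has the same number of points, then reduce each
# tractogram to cluster centroids to keep the registration fast.
static_20 = set_number_of_points(static, 20)
moving_20 = set_number_of_points(moving, 20)
qb = QuickBundles(threshold=15.)
static_centroids = qb.cluster(static_20).centroids
moving_centroids = qb.cluster(moving_20).centroids

# Streamline-based linear registration on the centroids ...
slr = StreamlineLinearRegistration()
srm = slr.optimize(static=static_centroids, moving=moving_centroids)

# ... then apply the estimated transform to the full moving tractogram.
moved = transform_streamlines(moving, srm.matrix)
```

Clustering to centroids before optimize() keeps whole-brain SLR tractable, which is exactly the suggestion made earlier in the thread; the linked gist remains the reference for the full recipe.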
_______________________________________________
Neuroimaging mailing list
Neuroimaging at python.org
https://mail.python.org/mailman/listinfo/neuroimaging

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From krzysztof.gorgolewski at gmail.com Mon Oct 23 20:52:11 2017
From: krzysztof.gorgolewski at gmail.com (Chris Gorgolewski)
Date: Mon, 23 Oct 2017 17:52:11 -0700
Subject: [Neuroimaging] [job] Program Manager for the Brain Imaging Data Structure project
Message-ID:

Do you have great people skills and enjoy organizing events? Are you interested in helping neuroscientists exchange data in a more useful way? The Brain Imaging Data Structure project is looking for a full-time Program Manager - this might be a perfect job for you!

No neuroimaging experience is required, but some will come in handy. This position might be especially attractive if you appreciate working in an academic environment, but prefer to avoid the stress of "publish or perish" and enjoy a competitive salary. The position is based in the legendarily sunny San Francisco Bay Area. The successful candidate will receive relocation support.

More details at https://stanford.taleo.net/careersection/2/jobdetail.ftl?job=76713&lang=en#.WeUP-AdO9j4.email. We will start reviewing applications on the 6th of November.

Best regards,
Chris Gorgolewski

PS Apologies for cross posting - please forward this ad if you know someone who might be interested.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From vinit.k.srivastava at gmail.com Tue Oct 24 21:36:51 2017
From: vinit.k.srivastava at gmail.com (Vinit Srivastava)
Date: Tue, 24 Oct 2017 21:36:51 -0400
Subject: [Neuroimaging] Affine 3D registration in Dipy
Message-ID:

Hi,

I am attempting to coregister a moving T1 image to a static reference b0 volume by performing affine registration in 3D (I'm following the Dipy tutorial but using my own data). Everything goes well until running 'resampled = affine.map.transform(moving)'. The following error message appears:

File "C:\Users\Vinit\Anaconda2\lib\site-packages\dipy\align\imaffine.py", line 267, in _apply_transform
    transformed = _transform_method[(dim, interp)](image, shape, comp)
KeyError: (4, 'linear')

Any help would be really appreciated.

Thanks,
Vinny

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From satra at mit.edu Wed Oct 25 18:24:15 2017
From: satra at mit.edu (Satrajit Ghosh)
Date: Wed, 25 Oct 2017 18:24:15 -0400
Subject: [Neuroimaging] Fwd: Neuroimaging/neuroinformatics Position at Harvard
Message-ID:

fyi

---------- Forwarded message ----------
From: Bouix, Sylvain, Ph.D.
Date: Wed, Oct 25, 2017 at 4:20 PM
Subject: Neuroimaging/neuroinformatics Position at Harvard
To: Satrajit Ghosh

Dear Satra,

I am Associate Director of the Psychiatry Neuroimaging Laboratory at BWH, where I lead a number of efforts in designing image analysis algorithms and processing pipelines for various neuroimaging projects, including one of the HCP disease U01s. I am in need of a research engineer interested in designing and maintaining robust pipelines. I was wondering if you'd be willing to share in your circles.
Here is the job posting:
https://partners.taleo.net/careersection/jobdetail.ftl?job=3050462&lang=en#.WfDstC-W33w.email

Thank you,
Sylvain

Sylvain Bouix, Ph.D.
Associate Director, Psychiatry Neuroimaging Laboratory,
Assistant Professor, Department of Psychiatry,
Brigham and Women's Hospital, Harvard Medical School,
Boston, MA.
Tel: 617-525-6233
Fax: 617-525-6170

The information in this e-mail is intended only for the person to whom it is addressed. If you believe this e-mail was sent to you in error and the e-mail contains patient information, please contact the Partners Compliance HelpLine at http://www.partners.org/complianceline . If the e-mail was sent to you in error but does not contain patient information, please contact the sender and properly dispose of the e-mail.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From vinit.k.srivastava at gmail.com Thu Oct 26 08:05:10 2017
From: vinit.k.srivastava at gmail.com (Vinit Srivastava)
Date: Thu, 26 Oct 2017 08:05:10 -0400
Subject: [Neuroimaging] Reorienting bvecs in Dipy
Message-ID:

Hi all,

I used FSL-FLIRT to coregister the diffusion-weighted imaging series to the anatomical T1 template, which also output an affine transformation matrix (as a .mat file). I'd like to reorient the bvecs/b-matrix table to compensate for any rotations and translations performed during the coregistration process, for downstream tractography applications. I understand that Dipy offers such a function to reorient bvecs to compensate for motion. However, I do not know how to use the output affine matrix from FSL as an input for the affines argument in the function reorient_bvecs(gtab, affines). I get the following error:

'Number of affine transformations must match number of non-zero gradients'

I'm guessing that the argument affines requires 'n' number of affine matrices? If so, do I need to modify the FSL output 4x4 affine matrix in some way?

Thanks for your help,
Vinny

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From effigies at bu.edu Tue Oct 31 21:13:55 2017
From: effigies at bu.edu (Christopher Markiewicz)
Date: Tue, 31 Oct 2017 21:13:55 -0400
Subject: [Neuroimaging] RFC: Changes to nibabel API for .mgh/.mgz files
Message-ID:

Hi all,

I'd like to solicit input for the Python interface to .mgh/.mgz files in nibabel[0]. I've recently had cause to dig into this interface (MGHImage[1]), and found that the naming of header fields[2] (the low-level interface to the raw binary data) is inconsistent with all of my experience with how FreeSurfer refers to these fields in the code (MRI_IMAGE[3]) as well as in the outputs of many programs, such as mri_info. (In fact, the current names seem to reflect the intermediate variables used in load/save_mgh.m and the description of the affine transforms in the FS Coordinates powerpoints[4].)

I'm proposing (https://github.com/nipy/nibabel/pull/569) an API change in nibabel, with field names[5] that more closely reflect what I deem to be common FreeSurfer usage (although it does not adhere precisely to the C structure fields). Given that, it was felt that the Python-using FreeSurfer community more broadly should have some say in the final API.

To put a few specific questions:

Do you depend on the current MGHHeader field names?
Would you be averse to updating the field names?
Are there alternatives to my proposal you would find preferable?
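(For anyone unsure whether their code touches the raw fields at all, a minimal sketch; the string keys below are the current names as read from [2], so double-check them against your installed nibabel, and 'T1.mgz' is a placeholder for any FreeSurfer volume.)

```
import nibabel as nib

# Placeholder path -- any FreeSurfer .mgh/.mgz volume will do.
img = nib.load('T1.mgz')          # loads as an MGHImage

# High-level API: unaffected by any renaming of the raw header fields.
print(img.shape)
print(img.affine)

# Low-level raw header access: these string keys are the names under
# discussion (taken from [2]; verify against your installed version).
hdr = img.header
print(hdr['dims'], hdr['delta'], hdr['goodRASFlag'])
```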
While I would be willing to discuss in more detail on this list, I would prefer (if reasonably convenient) discussion to remain on the Github pull request as much as possible, so participants can follow easily.

Thanks,
Chris Markiewicz

[0] http://nipy.org/nibabel/
[1] https://github.com/nipy/nibabel/blob/master/nibabel/freesurfer/mghformat.py
[2] https://github.com/nipy/nibabel/blob/2139ce0d24e65a83295bb6b3eaaf005eaeaebb5f/nibabel/freesurfer/mghformat.py#L28-L35
[3] https://github.com/freesurfer/freesurfer/blob/master/include/mri.h#L157-L252
[4] https://surfer.nmr.mgh.harvard.edu/fswiki/CoordinateSystems
[5] https://github.com/effigies/nibabel/blob/55c9bf905ec8785617755f900635fc31bae43232/nibabel/freesurfer/mghformat.py#L30-L49

-------------- next part --------------
An HTML attachment was scrubbed...
URL: