From markiewicz at stanford.edu Sat Nov 16 12:39:56 2019
From: markiewicz at stanford.edu (Christopher Markiewicz)
Date: Sat, 16 Nov 2019 17:39:56 +0000
Subject: [Neuroimaging] ANN: Nibabel 3.0 Release Candidate (Please test!)
Message-ID:

Hi all,

The first release candidate for Nibabel 3.0 is out. As a major release, there are API changes and a greater than usual opportunity for pain. Therefore, I'm setting a minimum one-month window (as proposed in https://github.com/nipy/nibabel/issues/734) to help us find bugs and allow downstream tools to make any necessary adjustments. The only pull requests that will be accepted during this period will be bug fixes or documentation and testing improvements. This window can be extended if needed, so please let me know if you need more time.

I would ask all downstream projects to add pre-release testing if they do not do so already. Pre-release testing requires specifically requesting pre-release packages from PyPI, so if you have not set this up in your continuous integration configuration, it is very likely that you will not install the correct package. To do this, use the `--pre` flag for pip when installing nibabel.

Please report any issues to https://github.com/nipy/nibabel/issues.

The most consequential changes in this release are the removal of Python 2 support and the deprecation of the img.get_data() accessor method for retrieving the image data block. The supported APIs for accessing data are img.get_fdata(), which always casts to float, and img.dataobj, which affords more control over the interpretation of the data object.

Additionally, GIFTI images have a new agg_data() method that simplifies the retrieval of DataArrays from GIFTI files into usable numpy arrays. Most, if not all, filenames can now be passed as pathlib.Path objects. And there are significant updates to the streamlines package. Exercising these functionalities will be a valuable contribution during this release candidate phase.
Many thanks to everybody who took the time to investigate and report bugs, propose fixes, review pull requests and review the documentation, including first-time contributors Cameron Riddell, Hao-Ting Wang, Oscar Esteban, Dorota Jarecka, and Chris Gorgolewski. And thanks in advance for your help in making this a smooth upgrade for users.

Full changelog follows.

----

Most work on NiBabel so far has been by Matthew Brett (MB), Chris Markiewicz (CM), Michael Hanke (MH), Marc-Alexandre Côté (MC), Ben Cipollini (BC), Paul McCarthy (PM), Chris Cheng (CC), Yaroslav Halchenko (YOH), Satra Ghosh (SG), Eric Larson (EL), Demian Wassermann, and Stephan Gerhard.

References like "pr/298" refer to GitHub pull request numbers.

# 3.0.0rc1 (Saturday 16 November 2019)

Release candidate for NiBabel 3.0, initiating a minimum one-month testing window.

Downstream projects are requested to test against the release candidate by installing with ``pip install --pre nibabel``.

New features
------------

* ArrayProxy method ``get_scaled()`` scales data with a dtype of a specified precision, promoting as necessary to avoid overflow. This is used in ``img.get_fdata()`` to control memory usage. (pr/833) (CM, reviewed by Ross Markello)
* GiftiImage method ``agg_data()`` to return usable data arrays (pr/793) (Hao-Ting Wang, reviewed by CM)
* Accept ``os.PathLike`` objects in place of filenames (pr/610) (Cameron Riddell, reviewed by MB, CM)
* Function to calculate obliquity of affines (pr/815) (Oscar Esteban, reviewed by MB)

Enhancements
------------

* ``get_fdata(dtype=np.float32)`` will attempt to avoid casting data to ``np.float64`` when scaling parameters would otherwise promote the data type unnecessarily. (pr/833) (CM, reviewed by Ross Markello)
* ``ArraySequence`` now supports a large set of Python operators to combine or update in-place.
(pr/811) (MC, reviewed by Serge Koudoro, Philippe Poulin, CM, MB)
* Warn, rather than fail, on DICOMs with unreadable Siemens CSA tags (pr/818) (Henry Braun, reviewed by CM)
* Improve clarity of coordinate system tutorial (pr/823) (Egor Panfilov, reviewed by MB)

Bug fixes
---------

* Sliced ``Tractogram``s no longer ``apply_affine`` to the original ``Tractogram``'s streamlines. (pr/811) (MC, reviewed by Serge Koudoro, Philippe Poulin, CM, MB)
* Re-import externals/netcdf.py from scipy to resolve numpy deprecation (pr/821) (CM)

Maintenance
-----------

* Support Python >=3.5.1, including Python 3.8.0 (pr/787) (CM)
* Manage versioning with slightly customized Versioneer (pr/786) (CM)
* Reference Nipy Community Code and Nibabel Developer Guidelines in GitHub community documents (pr/778) (CM, reviewed by MB)

API changes and deprecations
----------------------------

* Deprecate ``ArraySequence.data`` in favor of ``ArraySequence.get_data()``, which will return a copy. ``ArraySequence.data`` now returns a read-only view. (pr/811) (MC, reviewed by Serge Koudoro, Philippe Poulin, CM, MB)
* Deprecate ``DataobjImage.get_data()`` API, to be removed in nibabel 5.0 (pr/794, pr/809) (CM, reviewed by MB)

--

Chris Markiewicz
Center for Reproducible Neuroscience
Stanford University

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From bertrand.thirion at inria.fr Sun Nov 17 04:49:55 2019
From: bertrand.thirion at inria.fr (bthirion)
Date: Sun, 17 Nov 2019 10:49:55 +0100
Subject: [Neuroimaging] ANN: Nibabel 3.0 Release Candidate (Please test!)
Message-ID:

Congratulations to all contributors and to you in particular !

Bertrand

On 16/11/2019 18:39, Christopher Markiewicz wrote:
> [...]

From jbpoline at gmail.com Sun Nov 17 09:53:12 2019
From: jbpoline at gmail.com (JB Poline)
Date: Sun, 17 Nov 2019 09:53:12 -0500
Subject: [Neuroimaging] ANN: Nibabel 3.0 Release Candidate (Please test!)
Message-ID:

Indeed - thanks so much

JB

On Sun, Nov 17, 2019 at 4:50 AM bthirion wrote:
> [...]

From seralouk at hotmail.com Mon Nov 18 06:59:09 2019
From: seralouk at hotmail.com (serafim loukas)
Date: Mon, 18 Nov 2019 11:59:09 +0000
Subject: [Neuroimaging] Combined surface plot using plotting.view_surf function
Message-ID: <82694185-53F3-470D-A158-C9BC98BDC8DC@hotmail.com>

Dear Nilearn community,

First of all thanks for this beautiful python module. I am trying to use the 'plotting.view_surf' function, but instead of plotting only one hemisphere, I want to plot both hemispheres combined. To do so, I load the individual surfaces (left and right) and I combine them in order to plot them.
However, the surface plot only shows one hemisphere. Any tip? My code is:

from nilearn import datasets, plotting
from nilearn.surface import load_surf_data
import numpy as np

fsaverage = datasets.fetch_surf_fsaverage()
left = load_surf_data(fsaverage['pial_left'])
right = load_surf_data(fsaverage['pial_right'])

combined_vertices = np.concatenate([left[0], right[0]])
assert left[0].shape[0] + right[0].shape[0] == combined_vertices.shape[0]

combined_faces = np.concatenate([left[1], right[1]])
assert left[1].shape[0] + right[1].shape[0] == combined_faces.shape[0]

combined_surface = [combined_vertices, combined_faces]
view = plotting.view_surf(combined_surface)
view.open_in_browser()

Bests,
Makis

From michiel.cottaar at ndcn.ox.ac.uk Mon Nov 18 07:08:00 2019
From: michiel.cottaar at ndcn.ox.ac.uk (Michiel Cottaar)
Date: Mon, 18 Nov 2019 12:08:00 +0000
Subject: [Neuroimaging] Combined surface plot using plotting.view_surf function
In-Reply-To: <82694185-53F3-470D-A158-C9BC98BDC8DC@hotmail.com>
References: <82694185-53F3-470D-A158-C9BC98BDC8DC@hotmail.com>
Message-ID: <06BC7A31-51F8-4644-8CEE-2F915C1E269D@ndcn.ox.ac.uk>

Hi Makis,

Note that the faces contain the indices of the vertices, so when you change the order of the vertices, you also have to update those indices accordingly. In this case, if the left surface has 3,000 vertices, then the vertex with index 10 of the right surface will have index 3,010 in the combined surface. So when we create the combined_faces array, we have to offset the indices of the right hemisphere:

combined_faces = np.concatenate([left[1], right[1] + left[0].shape[0]])

Best wishes,
Michiel

On 18 Nov 2019, at 11:59, serafim loukas wrote:
> [...]

From seralouk at hotmail.com Mon Nov 18 10:05:20 2019
From: seralouk at hotmail.com (serafim loukas)
Date: Mon, 18 Nov 2019 15:05:20 +0000
Subject: [Neuroimaging] Combined surface plot using plotting.view_surf function
In-Reply-To: <06BC7A31-51F8-4644-8CEE-2F915C1E269D@ndcn.ox.ac.uk>
References: <82694185-53F3-470D-A158-C9BC98BDC8DC@hotmail.com> <06BC7A31-51F8-4644-8CEE-2F915C1E269D@ndcn.ox.ac.uk>
Message-ID:

Right! I missed that.

Thanks,
Makis

On 18 Nov 2019, at 13:08, Michiel Cottaar wrote:
> [...]

From samfmri at gmail.com Wed Nov 20 17:25:08 2019
From: samfmri at gmail.com (Sam W)
Date: Wed, 20 Nov 2019 23:25:08 +0100
Subject: [Neuroimaging] select all voxels in one hemisphere
Message-ID:

Hello!
I've been trying unsuccessfully to select all voxels in one hemisphere using nilearn/nibabel. I tried selecting all voxels that are to the left of the center of the image, something along these lines:

im = load_img('my_im.nii')
mid = int(np.ceil(im.shape[0]/2))  # centre of image
data = im.get_data()
data[mid:,:,:]  # this should select the left side of the image

This won't work, however, because mid might not be exactly the middle of the brain... Is there a better way to do this?

Thanks!
Sam

From amitvakula at gmail.com Wed Nov 20 17:40:21 2019
From: amitvakula at gmail.com (Amit Akula)
Date: Wed, 20 Nov 2019 14:40:21 -0800
Subject: [Neuroimaging] select all voxels in one hemisphere
Message-ID:

Hi Sam!

Hope you are doing well. There are many segmentation algorithms that can be used to do this in a more robust way.

The Roland Henry Laboratory at UCSF (where I work) generally uses Freesurfer's recon_all pipeline. It can be used to generate a brain mask, and further pipelines can be used to generate hemisphere masks. It may be a bit overkill for your work, as it does many more subcortical segmentations.

There is also a way to manually segment/correct ROIs using nilearn if you want to do more sophisticated segmentations/corrections. Here is a starting guide on that using nilearn.

Let me know if you want to take this offline and I can walk you through it.

Sincerely,
Amit

p.s. I can look into and send you more stuff like hemisplit, etc. if you would like a more focused tool than recon_all. Lemme know!

On Wed, Nov 20, 2019 at 2:25 PM Sam W wrote:
> [...]

From samfmri at gmail.com Thu Nov 21 13:52:07 2019
From: samfmri at gmail.com (Sam W)
Date: Thu, 21 Nov 2019 19:52:07 +0100
Subject: [Neuroimaging] select all voxels in one hemisphere
Message-ID:

Hi Amit!

Thanks for your suggestions. I just need to assign a number to all left hemisphere voxels (and leave right hemisphere voxels unchanged), so using freesurfer for this purpose might really be overkill. I was hoping I could get away with combining numpy with nilearn/nibabel; is this not feasible?

Best regards,
Sam

On Wed, Nov 20, 2019 at 11:41 PM Amit Akula wrote:
> [...]

From luke.bloy at gmail.com Thu Nov 21 15:02:16 2019
From: luke.bloy at gmail.com (Luke Bloy)
Date: Thu, 21 Nov 2019 15:02:16 -0500
Subject: [Neuroimaging] select all voxels in one hemisphere
Message-ID:

Not sure how accurate you need this to be, but a rough approach would be:

1) Manually make R/L labels on an atlas/template brain, MNI152 from FSL for instance.
2) For each new subject, coregister your subject to the atlas. Affine registration after a rough brain extraction would probably be sufficient.
3) Transform your R/L labels into subject space.

This would be a relatively simple pipeline to put together using SimpleITK or maybe one of the pure python neuroimaging tools (Dipy maybe?)
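A minimal numpy sketch of the masking step behind the approaches above (an editorial example, not from the thread; it assumes the image is already aligned so that world x = 0 is the interhemispheric midline, and the small shape and affine here are made-up stand-ins for a real image's data shape and img.affine):

```python
import numpy as np

# Made-up stand-ins for a real image: a tiny grid and a RAS+ affine
# (2 mm voxels, world x origin at -3). In RAS+, world x < 0 is the
# subject's left.
shape = (4, 4, 4)
affine = np.array([[2., 0., 0., -3.],
                   [0., 2., 0., -3.],
                   [0., 0., 2., -3.],
                   [0., 0., 0., 1.]])

# Voxel indices -> homogeneous coordinates -> world coordinates
i, j, k = np.meshgrid(*map(np.arange, shape), indexing='ij')
vox = np.stack([i, j, k, np.ones(shape)], axis=-1)
world = vox @ affine.T

# Boolean mask of left-hemisphere voxels (world x < 0)
left_mask = world[..., 0] < 0
```

With a real file one could obtain the affine and data via nibabel's img.affine and img.get_fdata(), then use left_mask to assign a value to left-hemisphere voxels only.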
On Thu, Nov 21, 2019 at 1:52 PM Sam W wrote: > Hi Amit! > > Thanks for your suggestions. I just need to assign a number to all left > hemisphere voxels (and leave right hemisphere voxels unchanged), so using > freesurfer for this purpose might really be an overkill. I was hoping I > could get away with combining numpy with nilearn/nibabel, is this not > feasible? > > Best regards, > Sam > > On Wed, Nov 20, 2019 at 11:41 PM Amit Akula wrote: > >> Hi Sam! >> >> Hope you are doing well. >> >> There are many segmentation algorithms that can be used to do this in a >> more robust approach. >> >> The Roland Henry Laboratory at UCSF (where I work) generally uses >> Freesurfer's recon_all >> pipeline. It can >> be used go generate a brain mask and further pipelines >> can >> be use to generate hemisphere masks. It may be a bit overkill for your work >> as it does much more subcortical segmentations. >> >> There is also a way to manually segment/correct ROIs using nilearn if you >> want to manually do more sophisticated segmentations/corrections. Here >> is >> a starting guide on that using nilearn. >> >> Let me know if you want to take this offline and I can walk you through >> it. >> >> Sincerely, >> Amit >> >> p.s. I can look into and send you more stuff like hemisplit >> , >> etc. if you would like a more focused tool other than recon_all. Lemme know! >> >> On Wed, Nov 20, 2019 at 2:25 PM Sam W wrote: >> >>> Hello! >>> I've been trying unsuccessfully to select all voxels in one hemisphere >>> using nilearn/nibabel >>> I tried selecting all voxels that are to the left of the center of the >>> image, something along these lines: >>> >>> im = load_img('my_im.nii') >>> mid = int(np.ceil(im.shape[0]/2)) #centre of image >>> data = im.get_data() >>> data[mid:,:,:] #this should select the left side of image >>> >>> This wont work however because mid might not be exactly the middle of >>> the brain... >>> Is there a better way to do this? >>> Thanks! 
>>> Sam
>>> _______________________________________________
>>> Neuroimaging mailing list
>>> Neuroimaging at python.org
>>> https://mail.python.org/mailman/listinfo/neuroimaging

From angus.j.g.campbell at gmail.com Sun Nov 24 21:09:13 2019
From: angus.j.g.campbell at gmail.com (Angus Campbell)
Date: Sun, 24 Nov 2019 19:09:13 -0700
Subject: [Neuroimaging] Extracting a network from a nifti with 6 others
Message-ID: 

I have a nifti file which contains 7 networks. I want to extract each one
as a separate image so I can pull the z-score of each MNI coordinate to
make a binary mask.

Is there any information in the header I have missed that would let me do
this? How can I pull the data I want out as an array?

header = Buckner7net_img.header
print(header)

object, endian='<'
sizeof_hdr : 348
data_type : b' '
db_name : b' '
extents : 0
session_error : 0
regular : b' '
dim_info : 32
dim : [ 4 256 256 256 1 1 1 1]
intent_p1 : 0.0
intent_p2 : 0.0
intent_p3 : 0.0
intent_code : none
datatype : float32
bitpix : 32
slice_start : 0
pixdim : [-1. 1. 1. 1. 0. 0. 0. 0.]
vox_offset : 0.0
scl_slope : nan
scl_inter : nan
slice_end : 0
slice_code : unknown
xyzt_units : 18
cal_max : 7.0
cal_min : 0.0
slice_duration : 0.0
toffset : 0.0
glmax : 0
glmin : 0
descrip : b'FreeSurfer matlab '
aux_file : b' '
qform_code : scanner
sform_code : scanner
quatern_b : 0.0
quatern_c : 0.70710677
quatern_d : -0.70710677
qoffset_x : 127.0
qoffset_y : -145.0
qoffset_z : 147.0
srow_x : [ -1. 0. 0. 127.]
srow_y : [ 0. 0. 1. -145.]
srow_z : [ 0. -1. 0. 147.]
intent_name : b'huh? '
magic : b'n+1'

From nawazktk99 at gmail.com Mon Nov 25 01:55:05 2019
From: nawazktk99 at gmail.com (Ali khattak)
Date: Mon, 25 Nov 2019 11:55:05 +0500
Subject: [Neuroimaging] Error to load all nii files from directory using Nibabel
Message-ID: 

Hi,

I am working on Alzheimer's disease detection and classification using a
CNN, which needs to handle class imbalance and multi-modality data in our
network. I am getting an error when loading the data from my directory. I
am using PyTorch libraries for further processing, and I am stuck in the
data-loading module. I have tried many dataloaders and modules, as well as
the PyTorch data-loading example code and tutorials, but I have not been
able to get them to run smoothly. Could someone please point me to code
and an implementation that would get me unstuck? I have been trying for
the last month; please help me in this regard.

Ali Nawaz
Software Engineer
Cell: 03358043653
Address: University of Engineering and Technology Taxila.

From seralouk at gmail.com Mon Nov 25 04:40:42 2019
From: seralouk at gmail.com (Serafeim Loukas)
Date: Mon, 25 Nov 2019 10:40:42 +0100
Subject: [Neuroimaging] Extracting a network from a nifti with 6 others
In-Reply-To: 
References: 
Message-ID: 

Why don't you just open the file and mask it (e.g. put 1s where voxel
values are 5) for each network, then save it, and then repeat for the
other networks? It should be easy.

> On 25 Nov 2019, at 03:09, Angus Campbell wrote:
>
> I have a nifti file which contains 7 networks. I want to extract each
> one as a separate image so I can pull the z-score of each MNI coordinate
> to make a binary mask.
>
> Is there any information in the header I have missed that would let me
> do this? How can I pull the data I want out as an array?

From seralouk at hotmail.com Mon Nov 25 04:45:24 2019
From: seralouk at hotmail.com (serafim loukas)
Date: Mon, 25 Nov 2019 09:45:24 +0000
Subject: [Neuroimaging] Error to load all nii files from directory using Nibabel
In-Reply-To: 
References: 
Message-ID: 

Hi,

Are you asking for code that just loads .nii files from a directory?
Like this:

```
import os

import nibabel as nib

myfiles = [f for f in os.listdir(".") if f.endswith(".nii")]
loaded_images = [nib.load(os.path.abspath(f)) for f in myfiles]
```

> On 25 Nov 2019, at 07:55, Ali khattak wrote:
>
> Hi,
>
> I am working on Alzheimer's disease detection and classification using a
> CNN, which needs to handle class imbalance and multi-modality data in
> our network. I am getting an error when loading the data from my
> directory. I am using PyTorch libraries for further processing, and I am
> stuck in the data-loading module. I have tried many dataloaders and
> modules, as well as the PyTorch data-loading example code and tutorials,
> but I have not been able to get them to run smoothly. Could someone
> please point me to code and an implementation that would get me unstuck?
> I have been trying for the last month; please help me in this regard.
>
> Ali Nawaz
> Software Engineer
> Cell: 03358043653
> Address: University of Engineering and Technology Taxila.

From SyedQasim.Abbas at latrobe.edu.au Mon Nov 25 04:59:25 2019
From: SyedQasim.Abbas at latrobe.edu.au (Syed Qasim Abbas)
Date: Mon, 25 Nov 2019 09:59:25 +0000
Subject: [Neuroimaging] ACPC alignment through BRAINS Constellation Detector
Message-ID: 

Hi,

I am trying to correctly align my MRI scans through ACPC alignment, and
for this I am trying to use the BRAINS Constellation Detector. I am unable
to implement it properly, as there are no detailed tutorials available.
Does anybody know how to use this detector to correct ACPC alignment over
some 20 images?

Thanks