From markiewicz at stanford.edu Sun Aug 4 09:30:58 2019 From: markiewicz at stanford.edu (Christopher Markiewicz) Date: Sun, 4 Aug 2019 13:30:58 +0000 Subject: [Neuroimaging] Announcing Nibabel 2.5.0 Message-ID: Hi all, Nibabel 2.5.0 is out, and with it, we'll be beginning our transition away from Python 2. The 2.5.x series will have extended bug-fix-only support for Python 2.7 (and probably 3.4, in passing) through the end of 2020, while Nibabel 3+ will require Python 3.5+. This release also sees a number of API changes that have been threatened in FutureWarnings and DeprecationWarnings for a while, so if you've been ignoring them, expect some minor breakage, and hopefully some other warnings will be getting louder. Nibabel 3 will be seeing the first round of significant removals, so there will be a 1-month minimum release candidate period there. The most interesting new feature is to_bytes/from_bytes methods for some single-image formats, which should make passing images around networked applications much less fiddly. Moving forward, I'm strongly considering replacing the current versioning scheme with Versioneer, which will basically normalize the non-release versions to PEP-440 compatible version strings and drop our custom githash code. I recently wrote my overall strategy up as a gist. If you have concerns with this move, please email me or open an issue (or wait on the PR). The full changelog follows.

----

2.5.0 (Sunday 4 August 2019)
============================

The 2.5.x series is the last with support for either Python 2 or Python 3.4. Extended support for the 2.5.x series will last through December 2020. Thanks for the test ECAT file and fix provided by Andrew Crabb.
Enhancements
------------
* Add SerializableImage class with to/from_bytes methods (pr/644) (CM, reviewed by MB)
* Check CIFTI-2 data shape matches shape described by header (pr/774) (Michiel Cottaar, reviewed by CM)

Bug fixes
---------
* Handle stricter numpy casting rules in tests (pr/768) (CM, reviewed by PM)
* TRK header fields flipped in files written on big-endian systems (pr/782) (CM, reviewed by YOH, MB)
* Load multiframe ECAT images with Python 3 (CM and Andrew Crabb)

Maintenance
-----------
* Fix CodeCov paths on Appveyor for more accurate coverage (pr/769) (CM)
* Move to setuptools and reduce use of ``nisext`` functions (pr/764) (CM, reviewed by YOH)
* Better handle test setup/teardown (pr/785) (CM, reviewed by YOH)

API changes and deprecations
----------------------------
* Effect threatened warnings and set some deprecation timelines (pr/755) (CM)

  * Trackvis methods now default to v2 formats
  * ``nibabel.trackvis`` scheduled for removal in nibabel 4.0
  * ``nibabel.minc`` and ``nibabel.MincImage`` will be removed in nibabel 3.0

-- Chris Markiewicz Center for Reproducible Neuroscience Stanford University -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Sun Aug 4 14:05:55 2019 From: matthew.brett at gmail.com (Matthew Brett) Date: Sun, 4 Aug 2019 19:05:55 +0100 Subject: [Neuroimaging] Announcing Nibabel 2.5.0 In-Reply-To: References: Message-ID: Hi Chris, On Sun, Aug 4, 2019 at 3:06 PM Christopher Markiewicz wrote: > > Hi all, > > Nibabel 2.5.0 is out, and with it, we'll be beginning our transition away from Python 2. The 2.5.x series will have extended bug-fix-only support for Python 2.7 (and probably 3.4, in passing) through the end of 2020, while Nibabel 3+ will require Python 3.5+.
> > This release also sees a number of API changes that have been threatened in FutureWarnings and DeprecationWarnings for a while, so if you've been ignoring them, expect some minor breakage, and hopefully some other warnings will be getting louder. Nibabel 3 will be seeing the first round of significant removals, so there will be a 1-month minimum release candidate period there. > > The most interesting new feature is to_bytes/from_bytes methods for some single-image formats, which should make passing images around networked applications much less fiddly. > > Moving forward, I'm strongly considering replacing the current versioning scheme with Versioneer, which will basically normalize the non-release versions to PEP-440 compatible version strings and drop our custom githash code. I recently wrote my overall strategy up as a gist. If you have concerns with this move, please email me or open an issue (or wait on the PR). Many thanks for doing the release, Cheers, Matthew From jbpoline at gmail.com Sun Aug 4 18:09:19 2019 From: jbpoline at gmail.com (JB Poline) Date: Sun, 4 Aug 2019 18:09:19 -0400 Subject: [Neuroimaging] Announcing Nibabel 2.5.0 In-Reply-To: References: Message-ID: Thanks so much: Nibabel is really core to our work! On Sun, Aug 4, 2019 at 2:07 PM Matthew Brett wrote: > Hi Chris, > > On Sun, Aug 4, 2019 at 3:06 PM Christopher Markiewicz > wrote: > > > > Hi all, > > > > Nibabel 2.5.0 is out, and with it, we'll be beginning our transition > away from Python 2. The 2.5.x series will have extended bug-fix-only > support for Python 2.7 (and probably 3.4, in passing) through the end of > 2020, while Nibabel 3+ will require Python 3.5+. > > > > This release also sees a number of API changes that have been threatened > in FutureWarnings and DeprecationWarnings for a while, so if you've been > ignoring them, expect some minor breakage, and hopefully some other > warnings will be getting louder.
Nibabel 3 will be seeing the first round > of significant removals, so there will be a 1-month minimum release > candidate period there. > > > > The most interesting new feature is to_bytes/from_bytes methods for some > single-image formats, which should make passing images around networked > applications much less fiddly. > > > > Moving forward, I'm strongly considering replacing the current > versioning scheme with Versioneer, which will basically normalize the > non-release versions to PEP-440 compatible version strings and drop our > custom githash code. I recently wrote my overall strategy up as a gist. If > you have concerns with this move, please email me or open an issue (or wait > on the PR). > > Many thanks for doing the release, > > Cheers, > > Matthew > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jcohen at polymtl.ca Sun Aug 4 21:40:54 2019 From: jcohen at polymtl.ca (Julien Cohen-Adad) Date: Sun, 4 Aug 2019 21:40:54 -0400 Subject: [Neuroimaging] Announcing SCT 4.0.0 Message-ID: Dear neuroimaging community, We are happy to announce the official release of the Spinal Cord Toolbox (SCT) 4.0.0: https://github.com/neuropoly/spinalcordtoolbox/releases/tag/4.0.0. For a list of changes to the release, go here: https://github.com/neuropoly/spinalcordtoolbox/blob/release/CHANGES.md For installation instructions, go here: https://github.com/neuropoly/spinalcordtoolbox/blob/master/README.md#installation If you have any question or feature request, please post in the SCT forum: http://forum.spinalcordmri.org/c/sct Happy processing! The Spinal Cord Toolbox Team p.s. ...and congratulations to Nibabel 2.5.0! 
-- Julien Cohen-Adad, PhD Associate Professor, Polytechnique Montreal Associate Director, Functional Neuroimaging Unit, University of Montreal Canada Research Chair in Quantitative Magnetic Resonance Imaging Phone: 514 340 5121 (office: 2264); Skype: jcohenadad Web: www.neuro.polymtl.ca -------------- next part -------------- An HTML attachment was scrubbed... URL: From russo.silviapaola at gmail.com Tue Aug 6 13:55:27 2019 From: russo.silviapaola at gmail.com (Silvia Russo) Date: Tue, 6 Aug 2019 10:55:27 -0700 Subject: [Neuroimaging] Obtaining power spectrum for each component Message-ID: Hello! I am relatively new to fMRI analysis and trying to use open-source python-based methods to do my work. I am supposed to run single-subject ICA analysis on 25 subjects, identify 5 resting state networks and look at the averaged low frequency across networks in each subject. I am using preprocessed images from the human connectome project and running them through nilearn's module DictLearn. I asked for 10 components (as the images are already preprocessed). I had three questions regarding this:

1. Is it correct to create a 'for' loop that runs DictLearn on each of the 25 subjects and call this a single-subject analysis? If not, what would be a good way to do single-subject ICA without having to use FSL Melodic?
2. How can I automatically identify the specific network each component may represent? E.g. component 1 is 90% likely to represent the default mode network, component 2 is 92% likely to represent the salience network, etc.
3. FSL automatically creates power spectra for each component; how can you do the same in python?

Thank you so much for your help! Silvia -------------- next part -------------- An HTML attachment was scrubbed...
URL: From bertrand.thirion at inria.fr Tue Aug 6 16:13:59 2019 From: bertrand.thirion at inria.fr (bthirion) Date: Tue, 6 Aug 2019 22:13:59 +0200 Subject: [Neuroimaging] Obtaining power spectrum for each component In-Reply-To: References: Message-ID: On 06/08/2019 19:55, Silvia Russo wrote:
> Hello!
>
> I am relatively new to fMRI analysis and trying to use open-source
> python-based methods to do my work.
> I am supposed to run single-subject ICA analysis on 25 subjects,
> identify 5 resting state networks and look at the averaged low
> frequency across networks in each subject.
>
> I am using preprocessed images from the human connectome project and
> running them through nilearn's module DictLearn.
> I asked for 10 components (as the images are already preprocessed).
>
> I had three questions regarding this:
>
> 1. Is it correct to create a 'for' loop that runs DictLearn on each
> of the 25 subjects and call this a single-subject analysis? If
> not, what would be a good way to do single-subject ICA without
> having to use FSL Melodic?
>
That sounds right.
> 2. How can I automatically identify the specific network each
> component may represent? E.g. component 1 is 90% likely to
> represent the default mode network, component 2 is 92% likely to
> represent the salience network, etc.
>
I'm not aware of such an automated labelling in the Python ecosystem. You probably want to do this manually.
> 3. FSL automatically creates power spectra for each component; how
> can you do the same in python?
>
You would have to create a small utility for that using scipy, e.g. https://docs.scipy.org/doc/scipy-0.19.0/reference/generated/scipy.signal.periodogram.html Best, Bertrand Thirion PS: I would advise asking such questions through the Neurostars interface. -------------- next part -------------- An HTML attachment was scrubbed...
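To make the scipy suggestion concrete, here is a minimal sketch of a per-component power spectrum; the TR, run length, and signal below are assumptions (a synthetic stand-in for one component time course, not actual HCP parameters):

```python
import numpy as np
from scipy.signal import periodogram

tr = 0.72              # repetition time in seconds (assumed value)
fs = 1.0 / tr          # sampling frequency in Hz
n_tp = 1200            # number of timepoints (assumed run length)

# Synthetic component time series: a 0.05 Hz oscillation plus noise,
# standing in for one DictLearn/ICA component's time course
t = np.arange(n_tp) * tr
rng = np.random.default_rng(0)
ts = np.sin(2 * np.pi * 0.05 * t) + 0.5 * rng.standard_normal(n_tp)

# One-sided power spectral density estimate, analogous in spirit to the
# per-component spectra FSL Melodic plots
freqs, pxx = periodogram(ts, fs=fs)
peak_freq = freqs[np.argmax(pxx)]  # should land near 0.05 Hz here

# A crude low-frequency summary over the 0.01-0.1 Hz band; this is just
# the mean PSD in that band, not FSL's exact metric
low_band = (freqs >= 0.01) & (freqs <= 0.1)
low_power = pxx[low_band].mean()
```

In practice one would replace `ts` with each column of the components' time-series matrix and loop over components.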
URL: From garyfallidis at gmail.com Fri Aug 16 16:46:31 2019 From: garyfallidis at gmail.com (Eleftherios Garyfallidis) Date: Fri, 16 Aug 2019 16:46:31 -0400 Subject: [Neuroimaging] ANN: DIPY 1.0.0 - a historic release Message-ID: We are excited to announce a new major and historic release of Diffusion Imaging in Python (DIPY). DIPY 1.0.0 is out! Please cite using the following DOI: 10.3389/fninf.2014.00008 DIPY 1.0.0 (Monday, 5 August 2019) This release received contributions from 17 developers (the full release notes are at: https://dipy.org/documentation/1.0.0./release_notes/release1.0/). Thank you all for your contributions and feedback! A new DIPY era is starting: this release is compatible with Python 3.5+ and breaks backward compatibility with 0.x.x. Please click here to check API changes or look at the end of this email. The 0.16.x series will have extended bug-fix-only support for Python 2.7 until June 2020.

Highlights of this release include:

- Critical API changes
- New awesome website
- Large refactoring of tracking API
- New denoising algorithm: MP-PCA
- New Gibbs ringing removal
- New interpolation module: dipy.core.interpolation
- New reconstruction models: Mean Signal DKI, MTMS-CSD
- Increased coordinate systems consistency
- New object to safely manage tractography data: StatefulTractogram
- New command line interface for downloading datasets: FetchFlow
- Horizon updated, medical visualization interface powered by QuickBundlesX
- Removed all deprecated functions and parameters
- Removed compatibility with Python 2.7
- Updated minimum dependencies version (Numpy, Scipy)
- All tutorials updated to API changes and 3 new added
- Large documentation update
- Closed 289 issues and merged 98 pull requests

Note: DIPY 0.16.x is the last series to support Python 2; DIPY 1.0 supports Python 3 only.
To upgrade or install DIPY, run the following command in your terminal:

    pip install --upgrade dipy

or

    conda install -c conda-forge dipy

This version of DIPY depends on nibabel (2.4.0+). For visualization you need FURY (0.3.0+). Questions or suggestions? For any questions go to http://dipy.org, or send an e-mail to dipy at python.org We also have an instant messaging service and chat room available at https://gitter.im/nipy/dipy On behalf of the DIPY developers, Eleftherios Garyfallidis, Ariel Rokem, Serge Koudoro https://dipy.org/contributors

API Changes

Some of the changes introduced in the 1.0 release will break backwards compatibility with previous versions. This release is compatible with Python 3.5+.

Reconstruction

The spherical harmonics bases mrtrix and fibernav have been renamed to tournier07 and descoteaux07 after the deprecation cycle started in the 0.15 release. We changed dipy.data.default_sphere from symmetric724 to repulsion724, which is more evenly distributed.

Segmentation

The API of dipy.segment.mask.median_otsu has changed in the following ways: if you are providing a 4D volume, vol_idx is now a required argument. The order of parameters has also changed.

Tractogram loading and saving

The API of dipy.io.streamlines.load_tractogram and dipy.io.streamlines.save_tractogram has changed in the following ways: when loading files (trk, tck, vtk, fib, or dpy), a reference nifti file is needed to guarantee proper spatial transformation handling.

Spatial transformation handling

Functions from dipy.tracking.streamlines were modified to enforce the affine parameter and uniform docstrings: deform_streamlines, select_by_rois, orient_by_rois, _extract_vals and values_from_volume were all modified. Functions from dipy.tracking.utils were modified to enforce the affine parameter and uniform docstrings: density_map, connectivity_matrix, seeds_from_mask, random_seeds_from_mask, target, target_line_based, near_roi, length and path_length were all modified.
The functions affine_for_trackvis, move_streamlines, flexi_tvis_affine and get_flexi_tvis_affine were deleted. Functions from dipy.tracking.life were modified to enforce the affine parameter and uniform docstrings: voxel2streamline, setup and fit from class FiberModel were all modified. afq_profile from dipy.stats.analysis was modified in a similar way.

Simulations

- dipy.sims.voxel.SingleTensor has been replaced by dipy.sims.voxel.single_tensor
- dipy.sims.voxel.MultiTensor has been replaced by dipy.sims.voxel.multi_tensor
- dipy.sims.voxel.SticksAndBall has been replaced by dipy.sims.voxel.sticks_and_ball

Interpolation

All interpolation functions have been moved to a new module named dipy.core.interpolation.

Tracking

The voxel_size parameter has been removed from the following functions:

- dipy.tracking.utils.connectivity_matrix
- dipy.tracking.utils.density_map
- dipy.tracking.utils.streamline_mapping
- dipy.tracking._util._mapping_to_voxel

The dipy.reconst.peak_direction_getter.PeaksAndMetricsDirectionGetter has been renamed dipy.reconst.peak_direction_getter.EuDXDirectionGetter. The LocalTracking and ParticleFilteringTracking functions were moved from dipy.tracking.local.localtracking to dipy.tracking.local_tracking. They now need to be imported from dipy.tracking.local_tracking.

- The functions' argument tissue_classifier was renamed stopping_criterion

The TissueClassifier classes were renamed StoppingCriterion classes and moved from dipy.tracking.local.tissue_classifier to dipy.tracking.stopping_criterion. They now need to be imported from dipy.tracking.stopping_criterion.

- TissueClassifier -> StoppingCriterion
- BinaryTissueClassifier -> BinaryStoppingCriterion
- ThresholdTissueClassifier -> ThresholdStoppingCriterion
- ConstrainedTissueClassifier -> AnatomicalStoppingCriterion
- ActTissueClassifier -> ActStoppingCriterion
- CmcTissueClassifier -> CmcStoppingCriterion

The dipy.tracking.local.tissue_classifier.TissueClass was renamed dipy.tracking.stopping_criterion.StreamlineStatus.
The EuDX tracking function has been removed. EuDX tractography can be performed with dipy.tracking.local_tracking using dipy.reconst.peak_direction_getter.EuDXDirectionGetter.

Streamlines

dipy.io.trackvis has been removed. Use dipy.io.streamline instead.

Other

- dipy.external package has been removed.
- dipy.fixes package has been removed.
- dipy.segment.quickbundles module has been removed.
- dipy.reconst.peaks module has been removed.
- Compatibility with Python 2.7 has been removed.

-------------- next part -------------- An HTML attachment was scrubbed... URL: From ffein at stanford.edu Mon Aug 19 14:41:09 2019 From: ffein at stanford.edu (Franklin Feingold) Date: Mon, 19 Aug 2019 18:41:09 +0000 Subject: [Neuroimaging] BIDS Steering Group nominations Message-ID: <1B92B06D-B995-4B5B-8241-E6E390B55D49@stanford.edu> Dear all, We are announcing the nomination period for the BIDS Steering Group; nominations of fellow colleagues and self-nominations are both welcome! The BIDS Steering Group is proposed within our governance document. This group is charged with preserving the longevity and sustainability of the BIDS standard. This will be a 5-person group that represents different modalities and perspectives. To nominate, please fill out this form. We will be running the nominations until August 28 at 11:59pm PST. Thank you! Franklin -------------- next part -------------- An HTML attachment was scrubbed...
URL: From AiWern.Chung at childrens.harvard.edu Mon Aug 26 15:25:33 2019 From: AiWern.Chung at childrens.harvard.edu (Chung, Ai Wern) Date: Mon, 26 Aug 2019 19:25:33 +0000 Subject: [Neuroimaging] FINAL CALL for challengers: MICCAI 2019 Connectomics in NeuroImaging Transfer-Learning Challenge (CNI-TLC) In-Reply-To: <1564076654342.50102@childrens.harvard.edu> Message-ID: <1566847533767.10312@childrens.harvard.edu> Apologies for cross posting **Submission deadline extended to Sun 1st Sept 2019** This is a final call for challengers to our Connectomics in NeuroImaging Transfer-Learning Challenge 2019 held in parallel with the 22nd International Conference on Medical Image Computing and Computer-assisted Intervention (MICCAI 2019) in Shenzhen, China. CNI 2019 will be taking place on October 13th, 2019.
*** CNI Call for Challengers ***

Addressing the issues of generalizability and clinical relevance for functional connectomes, you can leverage a unique resting-state fMRI (rsfMRI) dataset of attention deficit hyperactivity disorder (ADHD) and neurotypical controls (NC) to design a classification framework that can predict subject diagnosis (ADHD vs. NC) based on brain connectivity data. In a surprise twist, we will also evaluate the classification performance on a related clinical population with an ADHD comorbidity. This challenge will allow us to assess (1) whether the method is extracting functional connectivity patterns related to ADHD symptomatology, and (2) how much of this information "transfers" between clinical populations. Training and validation data are both available: http://www.brainconnectivity.net/challenge_data.html

*** Why submit to the CNI Challenge? ***

- Two great keynote speakers: Prof Yong He (Beijing Normal University, China) and Dr. Fan Zhang (Harvard Medical School, USA);
- Oral presentations and poster sessions to provide you with ample opportunity for exchanges and discussions;
- Sponsored prizes for Challenge winners.

*** Important dates for CNI Challenge ***

- Submission deadline: Sept 1st, 2019, 23:59 EST
- Submission website: https://cmt3.research.microsoft.com/CNIChallenge2019

For more information, visit http://www.brainconnectivity.net/challenge.html If you have any questions, do not hesitate to contact us http://www.brainconnectivity.net/contact.html We look forward to your participation! CNI 2019 Chairs CNI website: http://www.brainconnectivity.net/index.html -------------- next part -------------- An HTML attachment was scrubbed... URL: