From mmwoodman at gmail.com  Fri Nov 4 10:50:45 2016
From: mmwoodman at gmail.com (Marmaduke Woodman)
Date: Fri, 4 Nov 2016 15:50:45 +0100
Subject: [Neuroimaging] Interest in modeling library for NiPy

hi

I'm writing to poll for interest in a community-oriented, NiPy-brand modeling library [1], for generic dynamical systems models such as those in DCM, but also more realistic models, e.g. neural mass models, DWI-connectome-based networks thereof, and forward models for fMRI/EEG/MEG. Methods therein would include both forward simulation schemes (Euler-Maruyama etc.) and model inversion using e.g. HMC or variational schemes in PyMC3.

Most of these elements are already implemented, across various NiPy libs and TVB, but the hope would be to make their cross-section, i.e. nonlinear dynamical network modeling, a more accessible tool in the neuroimaging toolbox, by forming a library bringing all those elements together.

Any questions, suggestions, criticisms are welcome.

Marmaduke Woodman

[1] I'm aware that many libs implement e.g. MVAR methods, which are also model-based, but here I mean nonlinear, continuous-time models, such as the Jansen-Rit model of visual evoked potentials.
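To make the forward-simulation side of this proposal concrete, a minimal Euler-Maruyama integrator for a generic stochastic network model might look like the sketch below; the drift function and coupling matrix here are illustrative placeholders, not part of any existing NiPy or TVB API.

    import numpy as np

    def euler_maruyama(f, x0, w, n_steps, dt=0.1, sigma=0.01, seed=42):
        """Integrate dx = f(x, w) dt + sigma dW for an n-node network.

        f  : drift function, f(x, w) -> dx/dt, returning shape (n_nodes,)
        x0 : initial state, shape (n_nodes,)
        w  : (n_nodes, n_nodes) coupling matrix, e.g. a DWI connectome
        """
        rng = np.random.RandomState(seed)
        x = np.empty((n_steps, x0.size))
        x[0] = x0
        for t in range(1, n_steps):
            # Deterministic drift step plus a scaled Wiener increment.
            x[t] = (x[t - 1] + dt * f(x[t - 1], w)
                    + sigma * np.sqrt(dt) * rng.standard_normal(x0.size))
        return x

    def linear_drift(x, w, tau=1.0, k=0.1):
        # Toy linear node dynamics with diffusive coupling; a neural mass
        # model (e.g. Jansen-Rit) would replace this function.
        return (-x + k * w.dot(x)) / tau

    w = np.abs(np.random.RandomState(0).randn(4, 4))  # placeholder connectome
    traj = euler_maruyama(linear_drift, np.zeros(4), w, n_steps=1000)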
From j.davidgriffiths at gmail.com  Fri Nov 4 12:40:08 2016
From: j.davidgriffiths at gmail.com (John Griffiths)
Date: Fri, 4 Nov 2016 12:40:08 -0400
Subject: [Neuroimaging] Interest in modeling library for NiPy

+1 !!

On 4 November 2016 at 10:50, Marmaduke Woodman wrote:
> I'm writing to poll for interest in a community-oriented, NiPy-brand
> modeling library [1], for generic dynamical systems models [...]

--
Dr. John Griffiths
Post-Doctoral Research Fellow
Rotman Research Institute, Baycrest
Toronto, Canada
and
Honorary Associate
School of Physics
University of Sydney

From elef at indiana.edu  Fri Nov 4 12:44:45 2016
From: elef at indiana.edu (Eleftherios Garyfallidis)
Date: Fri, 04 Nov 2016 16:44:45 +0000
Subject: [Neuroimaging] Interest in modeling library for NiPy

+1

On Fri, Nov 4, 2016 at 12:40 PM John Griffiths wrote:
> +1 !!

From matthew.brett at gmail.com  Fri Nov 4 12:48:22 2016
From: matthew.brett at gmail.com (Matthew Brett)
Date: Fri, 4 Nov 2016 09:48:22 -0700
Subject: [Neuroimaging] Interest in modeling library for NiPy

Hi,

On Fri, Nov 4, 2016 at 7:50 AM, Marmaduke Woodman wrote:
> I'm writing to poll for interest in a community-oriented, NiPy-brand
> modeling library [...]
>
> Any questions, suggestions, criticisms are welcome.

That would be excellent!

How can we help?

Best,

Matthew

From demian.wassermann at inria.fr  Fri Nov 4 13:09:35 2016
From: demian.wassermann at inria.fr (Demian Wassermann)
Date: Fri, 4 Nov 2016 18:09:35 +0100 (CET)
Subject: [Neuroimaging] Interest in modeling library for NiPy

+1

--
Demian Wassermann, PhD
demian.wassermann at inria.fr
Associate Research Professor (CR1)
Athena Project Team
INRIA Sophia Antipolis - Méditerranée
2004 route des lucioles - FR-06902

> On 4 Nov 2016, at 17:49, Matthew Brett wrote:
>
> That would be excellent!
>
> How can we help?

From mmwoodman at gmail.com  Fri Nov 4 13:47:01 2016
From: mmwoodman at gmail.com (Marmaduke Woodman)
Date: Fri, 4 Nov 2016 18:47:01 +0100
Subject: [Neuroimaging] Interest in modeling library for NiPy

hi

> How can we help?

It would be really, really helpful to get feedback on the scope: what sort of functionality would you want from such a library? If you've ever done modeling work, what tripped you up or made life difficult?

I see a few main dimensions of scope & functionality:

First, in the projects I've seen using TVB and DCM, I see network models used both parametrically and non-parametrically. By parametrically, I mean, for example, that we do a parameter sweep over coupling strength, compare empirical and simulated FC, and then look at, for example, a group-wise difference in the best coupling strength. By non-parametrically, I have in mind things like DCM's estimates of effective connectivity, where many parameters are being estimated and they are interpreted as an ensemble, not individually. This is naturally a slippery slope, and some modeling questions lie between the two extremes.

The second dimension of scope is in terms of the methods and numerics implemented. Simple time-stepping schemes for differential equations are easy to implement, but making them high-performance is less so (think CUDA/OpenCL). Bayesian inversion is really neat, but requires computing gradients or using packages like PyMC3 or Stan.

Finally, I would assume we're mainly interested in human or primate neuroimaging, so modalities like fMRI & MEG, and maybe invasive clinical modalities too. As more of a methods library, this would be a detail, I guess, and I would expect to delegate I/O, formatting, etc. to the respective libraries.

Again, consider this an RFC and let me know what you think.

Marmaduke
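A sketch of the parametric use case Woodman describes: sweeping a global coupling strength and scoring each simulation against empirical functional connectivity. The `simulate` callable is assumed to be something like the Euler-Maruyama integrator sketched earlier; nothing here is an existing library API.

    import numpy as np

    def fc_of(ts):
        # Functional connectivity as the correlation matrix of node time
        # series, with ts of shape (n_times, n_nodes).
        return np.corrcoef(ts.T)

    def sweep_coupling(simulate, empirical_fc, couplings):
        """Score each coupling strength by the correlation between simulated
        and empirical FC (upper triangle only) and return the best one.

        simulate : callable, simulate(k) -> (n_times, n_nodes) array
        """
        iu = np.triu_indices_from(empirical_fc, k=1)
        scores = [np.corrcoef(fc_of(simulate(k))[iu], empirical_fc[iu])[0, 1]
                  for k in couplings]
        best = couplings[int(np.argmax(scores))]
        return best, np.asarray(scores)

    # e.g., over a coarse grid, reusing the integrator sketched earlier:
    # best_k, scores = sweep_coupling(
    #     lambda k: euler_maruyama(lambda x, w: -x + k * w.dot(x), x0, w, 1000),
    #     empirical_fc, np.linspace(0.0, 0.5, 26))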
From jbpoline at gmail.com  Fri Nov 4 13:53:44 2016
From: jbpoline at gmail.com (JB Poline)
Date: Fri, 4 Nov 2016 10:53:44 -0700
Subject: [Neuroimaging] Interest in modeling library for NiPy

Yes, this would be great!
JB

On 4 November 2016 at 10:09, Demian Wassermann wrote:
> +1

From j.davidgriffiths at gmail.com  Fri Nov 4 14:18:07 2016
From: j.davidgriffiths at gmail.com (John Griffiths)
Date: Fri, 4 Nov 2016 14:18:07 -0400
Subject: [Neuroimaging] Interest in modeling library for NiPy

On 4 November 2016 at 13:47, Marmaduke Woodman wrote:
> First, in the projects I've seen using TVB and DCM, I see network models
> used both parametrically and non-parametrically. [...]

I think the distinction you're pointing to here is between inference on parameters vs. inference on models (parametric/non-parametric has separate meanings); and not DCM's estimates of effective connectivity parameters per se, but rather model evidence/fit/free-energy metrics and comparisons thereof. Certainly it is essential to support both.

> The second dimension of scope is in terms of the methods and numerics
> implemented. [...] Bayesian inversion is really neat, but requires
> computing gradients or using packages like PyMC3 or Stan.

Definitely agree that model inversion/fitting should be a priority design consideration from the very start. PyMC3 does look like the way to go.

> Finally, I would assume we're mainly interested in human or primate
> neuroimaging [...]

Incidentally: I understand that there is a new MNE-NEURON project on the go (PIs Matti Hamalainen & Stephanie Jones) that will be looking to fit lower-level (compartmental) neuron models to MEG signals from humans and animals. Could well be a lot of overlap on model fitting problems.

> Again, consider this an RFC and let me know what you think.

--
Dr. John Griffiths
Post-Doctoral Research Fellow
Rotman Research Institute, Baycrest
Toronto, Canada
and
Honorary Associate
School of Physics
University of Sydney
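As a rough illustration of the PyMC3 route discussed above, the following toy inversion fits one rate parameter of a deliberately simple forward model (a scalar exponential decay standing in for a real neural mass model). This is a sketch only; keyword spellings such as `sd` follow the PyMC3 releases of this period and may differ in later versions.

    import numpy as np
    import pymc3 as pm

    # Synthetic observations of an exponential decay, standing in for a
    # (much simpler) dynamical forward model with one rate parameter.
    t = np.linspace(0.0, 5.0, 50)
    y_obs = np.exp(-0.7 * t) + 0.05 * np.random.RandomState(1).randn(t.size)

    with pm.Model():
        lam = pm.HalfNormal('lam', sd=1.0)      # prior on the decay rate
        sigma = pm.HalfNormal('sigma', sd=0.5)  # prior on observation noise
        mu = pm.math.exp(-lam * t)              # deterministic forward model
        pm.Normal('y', mu=mu, sd=sigma, observed=y_obs)
        trace = pm.sample(2000)                 # NUTS (an HMC variant) by default

A variational scheme (e.g. ADVI) could be swapped in for `pm.sample` when HMC does not scale; the entry point for that depends on the PyMC3 version.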
From boris.burle at univ-amu.fr  Fri Nov 4 14:54:55 2016
From: boris.burle at univ-amu.fr (Boris BURLE)
Date: Fri, 4 Nov 2016 19:54:55 +0100
Subject: [Neuroimaging] Interest in modeling library for NiPy

On 04/11/2016 15:50, Marmaduke Woodman wrote:
> I'm writing to poll for interest in a community-oriented, NiPy-brand
> modeling library [...]

I would be MUCH interested !!

From delacyn at u.washington.edu  Fri Nov 4 14:18:28 2016
From: delacyn at u.washington.edu (Nina de Lacy)
Date: Fri, 4 Nov 2016 11:18:28 -0700 (PDT)
Subject: [Neuroimaging] Interest in modeling library for NiPy

+ another 1

On Fri, 4 Nov 2016, JB Poline wrote:
> Yes, this would be great!

From jbpoline at gmail.com  Sat Nov 5 13:19:57 2016
From: jbpoline at gmail.com (JB Poline)
Date: Sat, 5 Nov 2016 10:19:57 -0700
Subject: [Neuroimaging] Fwd: Urgent - Job offer - Engineer in Medical Image Computing

FYI

---------- Forwarded message ----------
From: Olivier Colliot
Date: 5 November 2016 at 08:57
Subject: Urgent - Job offer - Engineer in Medical Image Computing

Dear colleagues,

We have a position open for an engineer in medical image computing, to be recruited as soon as possible. Could you please help us spread the word?

Best regards
Olivier

--
Olivier Colliot
ARAMIS Lab
Web: http://www.aramislab.fr
Twitter: @AramisLabParis

(Attachment: engineer_software_developper.pdf, application/pdf, 412346 bytes)

From mmwoodman at gmail.com  Mon Nov 7 05:11:45 2016
From: mmwoodman at gmail.com (Marmaduke Woodman)
Date: Mon, 7 Nov 2016 11:11:45 +0100
Subject: [Neuroimaging] Interest in modeling library for NiPy

On Fri, Nov 4, 2016 at 7:18 PM, John Griffiths wrote:
> the distinction is between inference on parameters vs. inference on models
> [...] Certainly it is essential to support both.

I would focus first on the former: an API would allow specification of a dataset, a generative model and an inference scheme; the results would be inference diagnostics and posteriors.

One could build on that to specify multiple models, or a model space and comparison criteria.

Anyone with experience in DCM's API might be able to suggest how to make that user-friendly?

> PyMC3 does look like the way to go.

Edward (http://edwardlib.org) is a new one also worth looking at, because it builds mainly on TensorFlow. I'm not sure even HMC will scale to full-size neuroimaging data (though networks with several or tens of nodes would work), so it's important to keep the variational schemes available.
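A hypothetical shape for the API Woodman outlines (dataset + generative model + inference scheme in, diagnostics + posteriors out). Every name below is invented for discussion; no such package exists.

    # Hypothetical usage sketch -- all of these classes and filenames are
    # made up to illustrate the proposed API surface.
    from nipy_models import Dataset, JansenRit, HMCInference  # hypothetical

    data = Dataset.from_nifti('sub01_bold.nii.gz', tr=2.0)    # hypothetical file
    model = JansenRit(connectome='sub01_connectome.npy')      # generative model
    inference = HMCInference(model, data, draws=1000)         # inference scheme

    result = inference.run()
    print(result.diagnostics)       # convergence diagnostics
    posteriors = result.posterior   # posterior samples over model parameters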
From j.davidgriffiths at gmail.com  Mon Nov 7 08:09:23 2016
From: j.davidgriffiths at gmail.com (John Griffiths)
Date: Mon, 7 Nov 2016 08:09:23 -0500
Subject: [Neuroimaging] Interest in modeling library for NiPy

OK, well, a few navigational points first:

DCM, and SPM in general, are obviously well-developed code bases, but leave a lot to be desired in terms of explicit documentation. The manual doesn't even begin to touch on half of the stuff squirreled away inside the 'toolbox' folder

https://github.com/neurodebian/spm12/tree/master/toolbox

A general browse through that folder is well worth doing.

More specifically: many of the main DCM workhorse functions are the ones in this folder with 'dcm' in the title:

https://github.com/neurodebian/spm12

I think that contains all of the main inversion routines, which don't necessarily have 'dcm' names, such as

https://github.com/neurodebian/spm12/blob/master/spm_nlsi.m

...as well as high-level model comparison functions like

https://github.com/neurodebian/spm12/blob/master/spm_dcm_compare.m

The DCM M/EEG toolbox

https://github.com/neurodebian/spm12/tree/master/toolbox/dcm_meeg

then has lots of model-specific things with the (relative to fMRI) more detailed and more developed M/EEG neurophysiological models.

Also worth looking at the DEM (dynamic expectation maximization) toolbox

https://github.com/neurodebian/spm12/tree/master/toolbox/DEM

and the Neural models toolbox

https://github.com/neurodebian/spm12/tree/master/toolbox/Neural_Models

General point: there are a number of 'demo' functions littered around, e.g.

https://github.com/neurodebian/spm12/blob/master/toolbox/dcm_meeg/spm_epileptor_demo.m

which seem to me to be often the best place to look for general documentation.

On 7 November 2016 at 05:11, Marmaduke Woodman wrote:
> I would focus first on the former: an API would allow specification of a
> dataset, a generative model and an inference scheme [...]
> Anyone with experience in DCM's API might be able to suggest how to make
> that user-friendly?

--
Dr. John Griffiths
Post-Doctoral Research Fellow
Rotman Research Institute, Baycrest
Toronto, Canada
and
Honorary Associate
School of Physics
University of Sydney

From matthew.brett at gmail.com  Tue Nov 8 14:00:18 2016
From: matthew.brett at gmail.com (Matthew Brett)
Date: Tue, 8 Nov 2016 11:00:18 -0800
Subject: [Neuroimaging] Good papers on bias field correction, skull stripping?

Hi,

Can anyone recommend a paper with a good summary of methods for correcting MRI images for lack of uniformity in the magnetic field (bias field correction)?

How about a good review of methods for skull stripping?

Cheers,

Matthew

From jhlegarreta at vicomtech.org  Wed Nov 9 06:25:26 2016
From: jhlegarreta at vicomtech.org (Jon Haitz Legarreta)
Date: Wed, 9 Nov 2016 12:25:26 +0100
Subject: [Neuroimaging] Good papers on bias field correction, skull stripping?

Hi Matthew,

Although maybe this is not the kind of review you were expecting, in a submission to the Insight Journal [1] the authors enumerate a number of methods that had been used until that time for skull stripping.

As for the bias field correction, I do not have any suggestions for now.

HTH,
JON HAITZ

[1] http://www.insight-journal.org/browse/publication/859

From satra at mit.edu  Wed Nov 9 07:21:52 2016
From: satra at mit.edu (Satrajit Ghosh)
Date: Wed, 9 Nov 2016 07:21:52 -0500
Subject: [Neuroimaging] Good papers on bias field correction, skull stripping?

hi matthew,

for brain extraction, the following paper compares a few methods and provides a manually labeled data set: https://gigascience.biomedcentral.com/articles/10.1186/s13742-016-0150-5
and also some here: https://www.ncbi.nlm.nih.gov/pubmed/21373993

regarding bias correction, this covers a bit of ground: https://www.ncbi.nlm.nih.gov/pubmed/20378467

but there is also the possibility of addressing this at the acquisition level using T1 mapping with multi-flip-angle methods, though that has its own caveats (this paper has some discussion of those things - https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3295910/).

cheers,
satra

From ken.sullivan at kohyoung.com  Thu Nov 10 13:38:42 2016
From: ken.sullivan at kohyoung.com (sullivan, ken)
Date: Thu, 10 Nov 2016 10:38:42 -0800
Subject: [Neuroimaging] How to write a color 3D NIfTI

I've had no problem writing out 3D grayscale .nii files with nibabel and opening them in NIfTI viewers (Mango, MRIcron). However, I haven't been able to write out 3D color, as each RGB plane is being interpreted as a different volume. Looking around a bit, it seems there is a "datatype" field in the header that needs to be set to 128 for 24-bit planar RGB images. One nice thing about nibabel is how it automatically sets up the header based on the numpy array fed to it. However, this is an ambiguous case, and if I print the header information I can see it setting the datatype to 2 (uint8), which presumably is why viewers are interpreting it as separate volumes, not RGB24. I don't see any official support in the API for setting the datatype, but the documentation does mention access to the raw fields for those with "great courage". I tried this:

    hdr = ni_img.header
    raw = hdr.structarr
    raw['datatype'] = 128

which, if I print the header, does now show "datatype : RGB", but when I call nib.save() I get:

    File "\lib\site-packages\nibabel\arraywriters.py", line 126, in scaling_needed
    raise WriterError('Cannot cast to or from non-numeric types')

which looks like it is caused by an inconsistency of internal types (arr_dtype != out_dtype), which I presume is because just changing the header like I did isn't enough. Is there a proper way to do this?

Code giving 3 separate 20 x 201 x 202 volumes:

    import nibabel as nib
    import numpy as np

    nifti_path = "/my/local/path"
    test_stack = (255.0 * np.random.rand(20, 201, 202, 3)).astype(np.uint8)
    ni_img = nib.Nifti1Image(test_stack, np.eye(4))
    nib.save(ni_img, nifti_path)

From matthew.brett at gmail.com  Thu Nov 10 14:01:06 2016
From: matthew.brett at gmail.com (Matthew Brett)
Date: Thu, 10 Nov 2016 11:01:06 -0800
Subject: [Neuroimaging] How to write a color 3D NIfTI

Hi,

On Thu, Nov 10, 2016 at 10:38 AM, sullivan, ken wrote:
> I've had no problem writing out 3D grayscale .nii files with nibabel and
> opening them in NIfTI viewers (Mango, MRIcron). However I haven't been able
> to write out 3D color [...] Is there a proper way to do this?

Yes, it's a bit difficult to set the datatype post hoc to the data array. In practice, you'll have to cast the array correctly before passing it to the image constructor. Something like:

    In [21]: shape_3d = (5, 6, 7)
    In [22]: rgb_arr = np.random.randint(0, 256, size=shape_3d + (3,)).astype('u1')
    In [23]: rgb_dtype = np.dtype([('R', 'u1'), ('G', 'u1'), ('B', 'u1')])
    In [24]: rgb_typed = rgb_arr.view(rgb_dtype).reshape(shape_3d)
    In [25]: import nibabel as nib
    In [26]: img = nib.Nifti1Image(rgb_typed, np.eye(4))
    In [27]: img.header['datatype']
    Out[27]: array(128, dtype=int16)

Cheers,

Matthew
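Putting the question and the answer together, a self-contained version of the recipe might look like this (a sketch following Brett's cast-before-construct approach; the output filename is arbitrary):

    import numpy as np
    import nibabel as nib

    shape_3d = (20, 201, 202)
    rng = np.random.RandomState(0)
    rgb_arr = rng.randint(0, 256, size=shape_3d + (3,)).astype('u1')

    # Viewing the trailing length-3 axis as one structured (R, G, B) element
    # per voxel makes nibabel emit NIfTI datatype 128 (RGB24) instead of a
    # 4D uint8 volume with three frames.
    rgb_dtype = np.dtype([('R', 'u1'), ('G', 'u1'), ('B', 'u1')])
    rgb_typed = rgb_arr.view(rgb_dtype).reshape(shape_3d)

    img = nib.Nifti1Image(rgb_typed, np.eye(4))
    assert int(img.header['datatype']) == 128  # RGB
    nib.save(img, 'color_test.nii')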
From ken.sullivan at kohyoung.com  Thu Nov 10 14:24:17 2016
From: ken.sullivan at kohyoung.com (sullivan, ken)
Date: Thu, 10 Nov 2016 11:24:17 -0800
Subject: [Neuroimaging] How to write a color 3D NIfTI

Worked perfect, thank you!

-Ken

On Thu, Nov 10, 2016 at 11:01 AM, Matthew Brett wrote:
> Yes, it's a bit difficult to set the datatype post hoc to the data
> array. In practice, you'll have to cast the array correctly before
> passing it to the image constructor. [...]

From alexandre.gramfort at telecom-paristech.fr  Sat Nov 12 08:49:41 2016
From: alexandre.gramfort at telecom-paristech.fr (Alexandre Gramfort)
Date: Sat, 12 Nov 2016 14:49:41 +0100
Subject: [Neuroimaging] [ANN] PySurfer 0.7

Hi everyone,

I just made a release of PySurfer:

https://pypi.python.org/pypi/pysurfer/

thanks everyone who helped.

enjoy !

Alex

From elef at indiana.edu  Sat Nov 12 16:38:52 2016
From: elef at indiana.edu (Eleftherios Garyfallidis)
Date: Sat, 12 Nov 2016 21:38:52 +0000
Subject: [Neuroimaging] [ANN] PySurfer 0.7

Congrats! :)

On Sat, Nov 12, 2016 at 8:50 AM Alexandre Gramfort wrote:
> I just made a release of PySurfer [...]

From yoh at onerussian.com  Sun Nov 13 12:02:25 2016
From: yoh at onerussian.com (Yaroslav Halchenko)
Date: Sun, 13 Nov 2016 09:02:25 -0800
Subject: [Neuroimaging] Who is at sfn - come to chat at DataLad booth 4113

Would be glad to display or distribute promotional materials on python open source projects relevant to our common endeavor ;-) . As always, at the booth, I will display some python materials, but primarily only digital versions, making visitors download their own copy of the PDF; see http://centerforopenneuroscience.org/engage . So if you would like to add or adjust your project within those trifolds, there is still time.

--
Sent from a phone which beats iPhone.

From jbpoline at gmail.com  Sun Nov 13 16:03:49 2016
From: jbpoline at gmail.com (JB Poline)
Date: Sun, 13 Nov 2016 13:03:49 -0800
Subject: [Neuroimaging] [ANN] PySurfer 0.7

Indeed !

On 12 November 2016 at 13:38, Eleftherios Garyfallidis wrote:
> Congrats! :)

From gael.varoquaux at normalesup.org  Mon Nov 14 08:52:04 2016
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Mon, 14 Nov 2016 14:52:04 +0100
Subject: [Neuroimaging] [ANN] PySurfer 0.7

+1

On Sun, Nov 13, 2016 at 01:03:49PM -0800, JB Poline wrote:
> Indeed !

--
Gael Varoquaux
Researcher, INRIA Parietal
NeuroSpin/CEA Saclay, Bat 145, 91191 Gif-sur-Yvette France
Phone: ++ 33-1-69-08-79-68
http://gael-varoquaux.info http://twitter.com/GaelVaroquaux

From Reid.Robert at mayo.edu  Mon Nov 14 12:42:11 2016
From: Reid.Robert at mayo.edu (Reid, Robert I. (Rob))
Date: Mon, 14 Nov 2016 17:42:11 +0000
Subject: [Neuroimaging] Interpretation of beta in the Sparse Fascicle Model

Hi,

I am trying to use a set of simulations to optimize the b values in a multishell acquisition for general use. My current choice for the objective (cost) function is the difference between the true input and the apparent recovered "total fiber vector"s, which I define as

    (f0 * d0, f1 * d1, f2 * d2),

where fi and di are the voxel fraction and direction of fiber i, so it is a 9-dimensional vector, and the error in each fiber's direction is weighted by its voxel fraction. My problem is getting the fiber fractions. I have mostly followed the sparse fascicle model tutorial in http://nipy.org/dipy/examples_built/sfm_reconst.html#example-sfm-reconst , and the beta values seem to be what I should use. I set the apparent fiber fraction of fiber i to sum(beta_j), for j in the part of the sphere closest to the true direction of fiber i. (That can misassign outliers, I know, but that's a different problem.)

It *almost* works, but sum(beta) is often a bit larger than 1, especially as b of the outer shell is raised from 2000 to 3000.

For example, with (f0, f1, f2) = (0.500, 0.250, 0.125),
with b_hi = 2000 I get [ 0.50418062, 0.21846355, 0.15918703]
with b_hi = 3000 I get [ 0.63809217, 0.36634215, 0.30759466]

When averaged over a large number of simulations and scenarios, the trend is that there is less angular error at b_hi = 3000, but the overall error function favors b_hi = 2000, because the fiber fraction estimates are so bad at b_hi = 3000. I am using the ExponentialIsotropicModel for the isotropic part.

Am I abusing beta in some way, or is it just overestimating the fiber fractions "naturally", and should I accept the indication that the fiber fraction estimation degrades when going from 2000 to 3000?

Note that beta should not (in my understanding) be normalized so that sum(beta) = 1. In the above example the sum of the fiber fractions is 0.875, and in general this is a quantity that I would like to estimate.

Thanks,

Rob

--
Robert I. Reid, Ph.D. | Sr. Analyst/Programmer, Information Technology
Aging and Dementia Imaging Research | Opus Center for Advanced Imaging Research
Mayo Clinic | 200 First Street SW | Rochester, MN 55905 | mayoclinic.org
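For reference, the bookkeeping Reid describes, attributing each sphere vertex's beta weight to the nearest true fiber direction, might be sketched as follows. The attribute names in the trailing comment follow the dipy tutorial he cites and should be checked against the installed dipy version.

    import numpy as np

    def apparent_fractions(beta, vertices, true_dirs):
        """Sum SFM beta weights over the sphere vertices closest to each
        true fiber direction. Directions are treated as axes, so the
        assignment is sign-invariant."""
        cos = np.abs(np.asarray(vertices).dot(np.asarray(true_dirs).T))
        nearest = np.argmax(cos, axis=1)  # assign each vertex to one fiber
        return np.array([beta[nearest == i].sum()
                         for i in range(len(true_dirs))])

    # With a fitted single-voxel model, something like:
    # fracs = apparent_fractions(sf_fit.beta, sf_model.sphere.vertices, true_dirs)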
From sdutwangkaisdut at gmail.com  Wed Nov 16 18:28:04 2016
From: sdutwangkaisdut at gmail.com (wang kai)
Date: Wed, 16 Nov 2016 16:28:04 -0700
Subject: [Neuroimaging] [PySurfer] UnicodeEncodeError when loading fMRI activations

Dear Experts,

I am a user of PySurfer (http://pysurfer.github.io/index.html).

When I run

    zstat = project_volume_data(volume_file, "lh", reg_file)

from the tutorial at http://pysurfer.github.io/examples/plot_fmri_activation_volume.html, the following error appeared:

    UnicodeEncodeError: 'ascii' codec can't encode character u'\u201c' in position 131: ordinal not in range(128)

Can anyone help?

Thanks,
Kai Wang, Ph.D.
Postdoctoral Researcher
Institute of Cognitive Science, University of Colorado Boulder
1777 Exposition Dr., Boulder, CO, 80301
kai.wang-1 at colorado.edu

From sdutwangkaisdut at gmail.com  Fri Nov 18 14:39:31 2016
From: sdutwangkaisdut at gmail.com (wang kai)
Date: Fri, 18 Nov 2016 12:39:31 -0700
Subject: [Neuroimaging] [PySurfer] Transparency control through brain.add_overlay

Dear experts,

To my understanding, to load multiple maps on the surface one has to use brain.add_overlay(), which does not allow transparency control. That's kind of awkward for the overlapping regions between the multiple maps. Could the author consider fixing that in the next version?

Thank you,
Kai Wang, Ph.D.
Postdoctoral Researcher
Institute of Cognitive Science, University of Colorado Boulder
1777 Exposition Dr., Boulder, CO, 80301
kai.wang-1 at colorado.edu

From berleant at stanford.edu  Fri Nov 18 20:16:04 2016
From: berleant at stanford.edu (Shoshana Berleant)
Date: Sat, 19 Nov 2016 01:16:04 +0000
Subject: [Neuroimaging] [PySurfer] Transparency control through brain.add_overlay

Are you talking about nilearn's plotting module? If so, try passing an alpha parameter, e.g. brain.add_overlay(..., alpha=0.5). It's a keyword parameter passed straight on to matplotlib.

On Fri, Nov 18, 2016 at 11:39 AM wang kai wrote:
> To my understanding, to load multiple maps on the surface one has to use
> brain.add_overlay(), which does not allow transparency control. [...]

From alexandre.gramfort at telecom-paristech.fr  Mon Nov 21 03:37:02 2016
From: alexandre.gramfort at telecom-paristech.fr (Alexandre Gramfort)
Date: Mon, 21 Nov 2016 09:37:02 +0100
Subject: [Neuroimaging] [ANN] MNE-Python 0.13

hi,

I just pushed a maintenance version 0.13.1 on PyPi:

https://pypi.python.org/pypi/mne/0.13.1

you're recommended to upgrade.

    pip install -U mne

Alex

On Wed, Sep 28, 2016 at 10:02 PM, Alexandre Gramfort wrote:
> Hi,
>
> We are pleased to announce the new 0.13 release of MNE-Python. As usual, this
> release comes with new features, many improvements to usability,
> visualization and documentation, and bug fixes.
>
> A couple of major API changes are being implemented, so we recommend that
> users read through the changes carefully.
>
> Support for Python 2.6 has been dropped, and the minimum supported
> dependencies are now NumPy 1.8, SciPy 0.12, and Matplotlib 1.3.
>
> A few highlights
> ============
>
> Our filtering functionality has been significantly improved:
>
> For FIR filters, the parameters filter_length, l_trans_bandwidth, and
> h_trans_bandwidth are now automatically determined. We also added a phase
> argument, e.g. in mne.io.Raw.filter(). This means that the new recommended
> defaults are l_trans_bandwidth='auto', h_trans_bandwidth='auto', and
> filter_length='auto'. This should generally reduce filter artifacts at the
> expense of a slight decrease in effective filter stop-band attenuation. For
> details see Defaults in MNE-Python.
>
> Improved phase='zero' zero-phase FIR filtering has been added.
>
> We added second-order sections (instead of (b, a) form) IIR filtering,
> which commonly has less numerical error.
>
> We added a generic array-filtering function mne.filter.filter_data() for
> numpy arrays.
>
> Constructing IIR filters in mne.filter.construct_iir_filter() will default
> to output='sos' in 0.14.
>
> We extended and tuned our visualization functionality:
>
> The ordering parameters 'selection' and 'position' were added to
> mne.viz.plot_raw() to allow plotting of specific regions of the sensor
> array.
>
> mne.viz.plot_trans() now also shows head position indicators.
>
> We have new plotting functions for independent component properties, similar
> to `pop_prop` in EEGLAB.
>
> There is a new function mne.viz.plot_compare_evokeds() to show multiple
> evoked time courses at a single location, or the mean over a ROI, or the
> GFP. This is achieved by automatically averaging and calculating a
> confidence interval if multiple subjects are given.
>
> We now have an interactive colormap option in our image plotting functions.
>
> Subsets of sensors can now be interactively selected by the so-called lasso
> selector. Check out mne.viz.plot_sensors() and mne.viz.plot_raw() when using
> order='selection' or order='position'.
>
> In viz.plot_bem(), brain surfaces can now be plotted.
>
> mne.preprocessing.ICA.plot_components() can now be used interactively.
>
> We refactored and extended our multivariate statistical analysis
> functionality and made it more compatible with scikit-learn:
>
> mne.decoding.TimeFrequency allows transforming signals in scikit-learn
> pipelines.
>
> mne.decoding.UnsupervisedSpatialFilter provides an interface for
> scikit-learn decomposition algorithms, such that they can be easily used with
> MNE data.
>
> We added support for multiclass decoding in mne.decoding.CSP.
>
> And as always, many more good things:
>
> There is now a --filterchpi option to mne browse_raw.
>
> mne.Evoked objects can now be decimated with mne.Evoked.decimate().
>
> Functional near-infrared spectroscopy (fNIRS) data can now be processed.
>
> MaxShield (IAS) data can now be read for evoked data (e.g., from the
> acquisition machine) in mne.read_evokeds().
>
> We added a single-trial container for time-frequency representations
> (mne.time_frequency.EpochsTFR), and an average parameter to
> mne.time_frequency.tfr_morlet() and mne.time_frequency.tfr_multitaper().
> This way time-frequency transforms can be easily computed on single-trial
> epochs without averaging.
>
> Notable API changes
> ================
>
> Components obtained from mne.preprocessing.ICA are now sorted by explained
> variance.
>
> Adding an EEG reference channel using mne.io.add_reference_channels() will
> now use its digitized location from the FIFF file, if present.
>
> The add_eeg_ref argument in core functions like mne.io.read_raw_fif() and
> mne.Epochs has been deprecated in favor of using mne.set_eeg_reference() and
> equivalent instance methods like raw.set_eeg_reference().
>
> When CTF gradient compensation is applied to raw data, it is no longer
> reverted on save by mne.io.Raw.save().
>
> Weighted addition and subtraction of Evoked as ev1 + ev2 and ev1 - ev2 have
> been deprecated; use explicit mne.combine_evoked(..., weights='nave')
> instead.
>
> Deprecated support for passing a list of filenames to the mne.io.Raw
> constructor; use mne.io.read_raw_fif() and mne.concatenate_raws() instead.
>
> Now channels with units of 'C', 'µS', 'uS', 'ARU' and 'S' will be turned to
> misc by default in mne.io.read_raw_brainvision().
>
> Added the mne.io.anonymize_info() function to anonymize measurements, and
> added methods to mne.io.Raw, mne.Epochs and mne.Evoked.
>
> Deprecated the baseline parameter in mne.Evoked. Use
> mne.Epochs.apply_baseline() instead.
>
> The default dataset location has been changed from examples/ in the
> MNE-Python root directory to ~/mne_data in the user's home directory.
>
> mne.decoding.EpochsVectorizer has been deprecated in favor of
> mne.decoding.Vectorizer.
>
> Deprecated mne.time_frequency.cwt_morlet() and
> mne.time_frequency.single_trial_power() in favour of
> mne.time_frequency.tfr_morlet() with parameter average=False.
>
> Extended Infomax is now the new default in mne.preprocessing.infomax()
> (extended=True).
>
> For a full list of improvements and API changes, see:
>
> http://martinos.org/mne/stable/whats_new.html#version-0-13
>
> To install the latest release, the following command should do the job:
>
> pip install --upgrade --user mne
>
> As usual, we welcome your bug reports, feature requests, critiques and
> contributions.
>
> Some links:
>
> - https://github.com/mne-tools/mne-python (code + readme on how to install)
> - http://martinos.org/mne/stable/ (full MNE documentation)
>
> Follow us on Twitter: https://twitter.com/mne_python
>
> Regards,
> The MNE-Python developers
>
> People who contributed to this release (in alphabetical order):
>
> * Alexander Rudiuk
> * Alexandre Barachant
> * Alexandre Gramfort
> * Asish Panda
> * Camilo Lamus
> * Chris Holdgraf
> * Christian Brodbeck
> * Christopher J. Bailey
> * Christopher Mullins
> * Clemens Brunner
> * Denis A. Engemann
> * Eric Larson
> * Federico Raimondo
> * Félix Raimundo
> * Guillaume Dumas
> * Jaakko Leppakangas
> * Jair Montoya
> * Jean-Remi King
> * Johannes Niediek
> * Jona Sassenhagen
> * Jussi Nurminen
> * Keith Doelling
> * Mainak Jas
> * Marijn van Vliet
> * Michael Krause
> * Mikolaj Magnuski
> * Nick Foti
> * Phillip Alday
> * Simon-Shlomo Poil
> * Teon Brooks
> * Yaroslav Halchenko
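As a small illustration of the new array-filtering entry point and the recommended 'auto'/'zero' settings listed above, a band-pass of a plain numpy array might look like this (a sketch based on the parameter names in the release notes, not a tested invocation):

    import numpy as np
    import mne

    sfreq = 1000.0
    data = np.random.RandomState(0).randn(2, int(10 * sfreq))  # 2 channels, 10 s

    # Band-pass 1-40 Hz; transition bandwidths and filter length are chosen
    # automatically, with zero-phase FIR filtering.
    filtered = mne.filter.filter_data(
        data, sfreq, l_freq=1.0, h_freq=40.0, filter_length='auto',
        l_trans_bandwidth='auto', h_trans_bandwidth='auto', phase='zero')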
Am I abusing beta in some way, or is it just overestimating the fiber fractions "naturally" and I should accept the indication that the fiber fraction estimation degrades when going from 2000 to 3000? Note that beta should not (in my understanding) be normalized so that sum(beta) = 1. In the above example the sum of the fiber fractions is 0.875, and in general this is a quantity that I would like to estimate. Thanks, Rob -- Robert I. Reid, Ph.D. | Sr. Analyst/Programmer, Information Technology Aging and Dementia Imaging Research | Opus Center for Advanced Imaging Research Mayo Clinic | 200 First Street SW | Rochester, MN 55905 | mayoclinic.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From florian.aspart at gmail.com Thu Nov 24 09:43:06 2016 From: florian.aspart at gmail.com (Florian Aspart) Date: Thu, 24 Nov 2016 15:43:06 +0100 Subject: [Neuroimaging] Linking hemisphere in "split" view Message-ID: Hi all, I'm pretty new to PySurfer but I already really like it. I was wondering if there is a possibility to link both hemisphere view when choosing the split view. By linking I mean, when I rotate one hemisphere, the other hemisphere gets rotated too. Thank you in advance for your answer! Best, Florian -------------- next part -------------- An HTML attachment was scrubbed... URL: From rlaplant at nmr.mgh.harvard.edu Thu Nov 24 09:46:55 2016 From: rlaplant at nmr.mgh.harvard.edu (Roan LaPlante) Date: Thu, 24 Nov 2016 09:46:55 -0500 Subject: [Neuroimaging] Linking hemisphere in "split" view In-Reply-To: References: Message-ID: Pysurfer does not do anything like this and it would be very complicated to code the camera listeners. The scenes are totally independent. best On Nov 24, 2016 9:43 AM, "Florian Aspart" wrote: > Hi all, > > I'm pretty new to PySurfer but I already really like it. > > I was wondering if there is a possibility to link both hemisphere view > when choosing the split view. By linking I mean, when I rotate one > hemisphere, the other hemisphere gets rotated too. > > Thank you in advance for your answer! > Best, > Florian > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > > The information in this e-mail is intended only for the person to whom it > is > addressed. If you believe this e-mail was sent to you in error and the > e-mail > contains patient information, please contact the Partners Compliance > HelpLine at > http://www.partners.org/complianceline . If the e-mail was sent to you in > error > but does not contain patient information, please contact the sender and > properly > dispose of the e-mail. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arokem at gmail.com Thu Nov 24 16:03:01 2016 From: arokem at gmail.com (Ariel Rokem) Date: Thu, 24 Nov 2016 13:03:01 -0800 Subject: [Neuroimaging] [Dipy] RE: Interpretation of beta in the Sparse Fascicle Model In-Reply-To: <021cdb$4svbck@ironport10.mayo.edu> References: <021cdb$4svbck@ironport10.mayo.edu> Message-ID: Hi Rob, Apologies for the delay in responding. This is not straightforward to do, and I believe that it would be an oversimplification to think of the SFM weights directly as indicating the volume fraction of nerve fibers in a particular direction. 
One reason for that is that the SFM does not separately model the restricted and hindered components of the signal (see this paper for some more details on this issue: https://www.ncbi.nlm.nih.gov/pubmed/15979342). Instead, an interpretation that I think is more appropriate (if less satisfying) is that the weights are roughly proportional to the reduction in the variance of the signal, relative to an isotropic model, that is explained by fibers in any given direction. As you noted, there is nothing that enforces that these weights sum to 1, or that they do not exceed 1.

As for ways to do what you want to do, one approach to estimating fiber density in any given direction is provided by the AFD framework, proposed by Raffelt and colleagues here:

https://www.ncbi.nlm.nih.gov/pubmed/22036682

Another approach, more closely related to the SFM, is provided by Dell'Acqua and colleagues in their HMOA measure:

https://www.ncbi.nlm.nih.gov/pubmed/22488973

Note the normalization procedure that they use when interpreting fODF weights (left column of page 2469). You would need to do something like that with SFM weights to increase their interpretability in this direction.
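Schematically, that normalization amounts to dividing the weights by the amplitude you would get for a known single-fiber voxel. A rough, untested sketch (beta here is the SFM weight vector for one voxel, and ref_beta_max stands in for the largest weight obtained by fitting the same model, with the same settings, to a simulated noise-free voxel containing a single fiber at f = 1 -- both names are placeholders, not Dipy API):

import numpy as np

def hmoa_like_weights(beta, ref_beta_max):
    # Scale the voxel's weights by the single-fiber reference amplitude,
    # in the spirit of Dell'Acqua et al.'s HMOA normalization, so that a
    # pure single-fiber voxel maps to ~1.
    return np.clip(np.asarray(beta) / ref_beta_max, 0.0, 1.0)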
Cheers,

Ariel

On Tue, Nov 22, 2016 at 2:32 PM, Reid, Robert I. (Rob) wrote:
> [snip]

From Reid.Robert at mayo.edu Sun Nov 27 16:36:31 2016
From: Reid.Robert at mayo.edu (Reid, Robert I. (Rob))
Date: Sun, 27 Nov 2016 21:36:31 +0000
Subject: [Neuroimaging] [Dipy] RE: Interpretation of beta in the Sparse Fascicle Model
In-Reply-To:
References: <021cdb$4svbck@ironport10.mayo.edu>
Message-ID: <021cdb$4uet9e@ironport10.mayo.edu>

> Instead, an interpretation that I think is more appropriate (if less satisfying) is that the weights are roughly proportional to the reduction in the variance of the signal

Ah, that's what I was missing. I was hoping to use a sparse formulation, since those appear more tolerant of lower b than deconvolution approaches (which are also solving for the bundle dispersion), but maybe I can apply a post-hoc compromise. I think for now, though, I will continue with a constrained deconvolution and HMOA approach, since it explicitly includes normalization to the bundle fraction ~ 1 case.

Thanks,

Rob

--
Robert I. Reid, Ph.D. | Sr. Analyst/Programmer, Information Technology
Aging and Dementia Imaging Research | Opus Center for Advanced Imaging Research
Mayo Clinic | 200 First Street SW | Rochester, MN 55905 | mayoclinic.org

From: Neuroimaging [mailto:neuroimaging-bounces+reid.robert=mayo.edu at python.org] On Behalf Of Ariel Rokem
Sent: Thursday, November 24, 2016 3:03 PM
To: Neuroimaging analysis in Python
Subject: Re: [Neuroimaging] [Dipy] RE: Interpretation of beta in the Sparse Fascicle Model

[snip: full quote of Ariel's reply above]
From arokem at gmail.com Sun Nov 27 22:13:31 2016
From: arokem at gmail.com (Ariel Rokem)
Date: Sun, 27 Nov 2016 19:13:31 -0800
Subject: [Neuroimaging] [Dipy] RE: Interpretation of beta in the Sparse Fascicle Model
In-Reply-To: <021cdb$4uet9e@ironport10.mayo.edu>
References: <021cdb$4svbck@ironport10.mayo.edu> <021cdb$4uet9e@ironport10.mayo.edu>
Message-ID:

Hi Rob,

On Sun, Nov 27, 2016 at 1:36 PM, Reid, Robert I. (Rob) wrote:
> Ah, that's what I was missing. I was hoping to use a sparse formulation, since those appear more tolerant of lower b than deconvolution approaches (which are also solving for the bundle dispersion), but maybe I can apply a post-hoc compromise. I think for now, though, I will continue with a constrained deconvolution and HMOA approach, since it explicitly includes normalization to the bundle fraction ~ 1 case.

If you end up writing code that calculates the HMOA from SFM weights, please consider sharing it. I'd be happy to have something like that integrated into Dipy, and I know of others who would find it useful.

Best,

Ariel

> [snip: rest of the quoted thread]
From florian.aspart at gmail.com Mon Nov 28 10:33:10 2016
From: florian.aspart at gmail.com (Florian Aspart)
Date: Mon, 28 Nov 2016 16:33:10 +0100
Subject: [Neuroimaging] add_data: Thresholding on absolute value
Message-ID:

Hi all,

I'm trying to display the ICA components of my reconstructed sources (from EEG signal) using PySurfer. I'd like to apply a threshold to the display, to show only the most relevant areas (i.e. those with the biggest impact). In this case, these correspond to the points with the biggest absolute value.

Is there a way to apply a threshold on the absolute value when using the function add_data? Using the threshold function only works for the lower threshold.

Digging a little bit into the code, I found out that the current thresholding is done using the mayavi filter mlab.pipeline.threshold. Should I design my own mayavi filter to do this? Do you know some nice introductory material on how I could go about it?

Best,
Florian

From panyiyuan at qq.com Tue Nov 29 10:07:39 2016
From: panyiyuan at qq.com
Date: Tue, 29 Nov 2016 23:07:39 +0800
Subject: [Neuroimaging] dipy problem
Message-ID:

Using the affine that comes with the HCP data release raises an error; we then used the original HCP affine, but the same error is still raised. Please help us solve this problem, thanks a lot:

ValueError: The affine provided seems to contain shearing, data must be acquired or interpolated on a regular grid to be used with 'LocalTracking'.

From arokem at gmail.com Tue Nov 29 13:26:20 2016
From: arokem at gmail.com (Ariel Rokem)
Date: Tue, 29 Nov 2016 10:26:20 -0800
Subject: [Neuroimaging] dipy problem
In-Reply-To:
References:
Message-ID:

Hi,

Thank you for your email.

On Tue, Nov 29, 2016 at 7:07 AM, panyiyuan at qq.com wrote:
> [snip]

This might be related to the issues discussed here: https://github.com/nipy/dipy/pull/1045.

Cheers,

Ariel

From gabriela.asturias at duke.edu Tue Nov 29 15:03:21 2016
From: gabriela.asturias at duke.edu (Gabriela Asturias)
Date: Tue, 29 Nov 2016 20:03:21 +0000
Subject: [Neuroimaging] Dipy Questions
Message-ID:

To whom it may concern,

I hope this email finds you all well.
I have a few questions concerning the dipy software.

I am currently working on my thesis on the effect of repetitive Transcranial Magnetic Stimulation on the structural connectome in patients with Major Depressive Disorder. When I perform the tractography step using dipy on multiple iterations of the same data, the results are not consistent: we ran 5-10 iterations for each of nine subjects, and only 91.78% of iterations had a perfect correlation. Could you help me understand the potential sources of variability in the dipy software that are producing these results?

Thank you very much for your time.

Best wishes,

Gabriela Asturias

Neuroscience & Pre-Med
Duke University, 2017
Tel: (919) 808-8103
gabriela.asturias at duke.edu

From szorowi1 at gmail.com Tue Nov 29 19:52:31 2016
From: szorowi1 at gmail.com (Sam Zorowitz)
Date: Tue, 29 Nov 2016 19:52:31 -0500
Subject: [Neuroimaging] Nibabel: Slowdown in traversing object using dataobj
Message-ID:

Hi all,

Hopefully a quick question: assuming a 4d volume image, can someone explain why it takes longer to load the 100th acquisition than the 1st acquisition?

For example, I have an image that is (110, 110, 63, 977). When I perform:

>> %timeit obj.dataobj[..., 0]

I get: 10 loops, best of 3: 37.8 ms per loop

>> %timeit obj.dataobj[..., 100]

I get: 1 loop, best of 3: 5.99 s per loop

Why is this? Can someone recommend an alternative?

Thanks!
-Sam

______________________________
Sam Zorowitz
Research Assistant
Massachusetts General Hospital
Division of Neurotherapeutics
Department of Psychiatry: Neurosciences

From matthew.brett at gmail.com Tue Nov 29 19:57:48 2016
From: matthew.brett at gmail.com (Matthew Brett)
Date: Tue, 29 Nov 2016 16:57:48 -0800
Subject: [Neuroimaging] Nibabel: Slowdown in traversing object using dataobj
In-Reply-To:
References:
Message-ID:

Hi,

On Tue, Nov 29, 2016 at 4:52 PM, Sam Zorowitz wrote:
> [snip]

I'm guessing this is a ``.nii.gz`` file? If so, then the difference is just because nibabel has to gunzip 100 volumes' worth of data in the latter case.

If you can get away with an uncompressed file, my prediction is that you'll find the difference will go away.

Cheers,

Matthew

From elef at indiana.edu Tue Nov 29 21:04:47 2016
From: elef at indiana.edu (Eleftherios Garyfallidis)
Date: Wed, 30 Nov 2016 02:04:47 +0000
Subject: [Neuroimaging] Dipy Questions
In-Reply-To:
References:
Message-ID:

Hi Gabriela,

It seems that you are probably generating random seeds every time you run the tracking algorithm, which means that the tracking starts at different positions on each run. One way to address this is to set the random state (also known as the seed) using numpy.

It would help a lot if you could share your code, so we can see exactly how you are running the tracking and suggest different solutions. But it is definitely something we have dealt with in the past, and I think we can fix it.
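For example, something along these lines usually makes runs repeatable (just a sketch -- seed_mask and affine stand in for whatever you are already passing to the tracking):

import numpy as np
from dipy.tracking import utils

# Fix numpy's global random state, so that any randomness in seed
# generation (and in probabilistic direction getters) is reproducible.
np.random.seed(12345)

# Or avoid randomness in the seed placement altogether, by putting the
# seeds on a regular grid inside the mask:
seeds = utils.seeds_from_mask(seed_mask, density=2, affine=affine)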
Best regards,
Eleftherios

On Tue, Nov 29, 2016 at 5:14 PM Gabriela Asturias <gabriela.asturias at duke.edu> wrote:
> [snip]

From pauldmccarthy at gmail.com Wed Nov 30 06:16:20 2016
From: pauldmccarthy at gmail.com (paul mccarthy)
Date: Wed, 30 Nov 2016 11:16:20 +0000
Subject: [Neuroimaging] Nibabel: Slowdown in traversing object using dataobj
In-Reply-To:
References:
Message-ID:

Howdy,

If you need to stick with ``.nii.gz``, you could use my indexed_gzip library:

https://github.com/pauldmccarthy/indexed_gzip

The first access will be slow, but subsequent accesses will be much faster.
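Hooking it up to nibabel looks roughly like this (a sketch from memory -- check the README for the exact class name and keyword arguments):

import nibabel as nib
import indexed_gzip as igzip

# Wrap the compressed file; indexed_gzip builds an index of seek points,
# so random access does not re-decompress from the start every time.
fobj = igzip.IndexedGzipFile('image.nii.gz')

# Hand the file object to nibabel through a file map.
fmap = nib.Nifti1Image.make_file_map()
fmap['image'].fileobj = fobj
img = nib.Nifti1Image.from_file_map(fmap)

vol = img.dataobj[..., 100]  # slow on first access, fast afterwards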
Cheers,

Paul

On 30 November 2016 at 00:57, Matthew Brett wrote:
> [snip]

From szorowi1 at gmail.com Wed Nov 30 09:10:48 2016
From: szorowi1 at gmail.com (Sam Zorowitz)
Date: Wed, 30 Nov 2016 09:10:48 -0500
Subject: [Neuroimaging] Nibabel: Slowdown in traversing object using dataobj
In-Reply-To:
References:
Message-ID:

Uncompressing absolutely did the trick. Thank you!

On Wed, Nov 30, 2016 at 6:16 AM, paul mccarthy wrote:
> [snip]

From dagutman at gmail.com Wed Nov 30 16:57:55 2016
From: dagutman at gmail.com (David Gutman)
Date: Wed, 30 Nov 2016 21:57:55 +0000
Subject: [Neuroimaging] [nipype] DataGrabber Question to get Directory
Message-ID:

I think I'm probably using the wrong module to do this... but I am trying to write a simple script to iterate through my raw DICOM data and convert it into NIfTI. I had modified some old code I had written before, but in this case I want to match a directory name, not return a file.

Basically I have a list of directories:

BaseSubjPath = '/some/where/data/lives'
DirsWithSubjectData = [Subject1, Subject2, Subject3, Subject4]

For now I am only going to convert the T1 image:
this basically helps me find the individual image files and data sets for an image session # ## a single image directory likely consists of DTI data, T2 images, T1 images, etc, etc datasource = pe.Node(interface=nio.DataGrabber(infields=['imageSessionName'], outfields=['t1Mprage_dir']), name='datasource') datasource.inputs.base_directory = StoutRawData datasource.inputs.template = '%s/t1_mprage_sag_*' datasource.inputs.sort_filelist = True datasource.inputs.template_args = dict( t1Mprage_dir=[['imageSessionName']]) ### eventually I'll add in templates for the DTI data, Functional data, etc... ## Now create a node for the dicom converter! dcmConvert = pe.Node(interface=Dcm2nii(),name='dcmConvert') dcmConvert.inputs.gzip_output = False dcmConvert.inputs.reorient = False dcmConvert.inputs.reorient_and_crop = False DCM2NII_wf.connect(imageSession_Id_InfoSrc, 'imageSessionName', datasource, 'imageSessionName') DCM2NII_wf.connect(datasource,'t1Mprage_dir', dcmConvert, 'source_dir' ) DCM2NII_wf.run() So the work flow as written failed because the datagrabber was/is trying to return files and not a directory.. As a hack, I just created a list of all the t1 directories and bypassed the datagrabber node, and connected it directly to the DCM Converter. However this isn't particularly elegant, and I'd like to know the Nipyponic method to achieve this so I can make this clean. T1ImageInputDirectories = glob( oj(RawData,'*/t1_mp*')) With the death of neurostars, I wasn't able to search in the archives in case this has already been addressed. -- David A Gutman MD PhD Assistant Professor of Neurology, Psychiatry & Biomedical Informatics Emory University School of Medicine Staff Physician, Mental Health Service Line Atlanta VA Medical Center -------------- next part -------------- An HTML attachment was scrubbed... URL: From arokem at gmail.com Wed Nov 30 21:58:48 2016 From: arokem at gmail.com (Ariel Rokem) Date: Wed, 30 Nov 2016 18:58:48 -0800 Subject: [Neuroimaging] Python 3 statement Message-ID: Hello everyone, I just learned about this statement this morning: http://www.python3statement.org/ What do folks here think about this? Should we sign on to this? Cheers, Ariel -------------- next part -------------- An HTML attachment was scrubbed... URL: From bcipolli at ucsd.edu Wed Nov 30 22:05:02 2016 From: bcipolli at ucsd.edu (Ben Cipollini) Date: Wed, 30 Nov 2016 19:05:02 -0800 Subject: [Neuroimaging] Python 3 statement In-Reply-To: References: Message-ID: Oh gosh, please let's! This whole 2.7 vs 3.x thing has been terrible. I wonder by then, however, if there will be a backwards-breaking Python 4 :-( On Wed, Nov 30, 2016 at 6:58 PM, Ariel Rokem wrote: > Hello everyone, > > I just learned about this statement this morning: > > http://www.python3statement.org/ > > What do folks here think about this? Should we sign on to this? > > Cheers, > > Ariel > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Wed Nov 30 22:20:05 2016 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 30 Nov 2016 19:20:05 -0800 Subject: [Neuroimaging] Python 3 statement In-Reply-To: References: Message-ID: Python 4 is going to be what they call Python 3.10 to avoid getting into two-digit version numbers. 
http://www.curiousefficiency.org/posts/2014/08/python-4000.html

On Wed, Nov 30, 2016 at 7:05 PM, Ben Cipollini wrote:
> [snip]

--
Nathaniel J. Smith -- https://vorpus.org

From effigies at bu.edu Wed Nov 30 22:15:34 2016
From: effigies at bu.edu (Christopher J Markiewicz)
Date: Wed, 30 Nov 2016 22:15:34 -0500
Subject: [Neuroimaging] Python 3 statement
In-Reply-To:
References:
Message-ID: <0cb98400-7b8d-ac8a-a5ed-a758039f1991@bu.edu>

I would comfortably, if not exactly enthusiastically, vote in favor of this. The number of times I've come across people still wanting 2.6 support (CentOS...) makes me dread the issues and email threads that are bound to ensue.

On 11/30/2016 10:05 PM, Ben Cipollini wrote:
> [snip]

--
Christopher J Markiewicz
Ph.D. Candidate, Quantitative Neuroscience Laboratory
Boston University

From satra at mit.edu Wed Nov 30 23:19:05 2016
From: satra at mit.edu (Satrajit Ghosh)
Date: Wed, 30 Nov 2016 23:19:05 -0500
Subject: [Neuroimaging] Python 3 statement
In-Reply-To:
References:
Message-ID:

hi ariel,

> I just learned about this statement this morning:
>
> http://www.python3statement.org/
>
> What do folks here think about this? Should we sign on to this?

we should definitely do this. the amount of time that people put in to support different platforms adds up, across python projects, to a significant chunk of human resources.

cheers,

satra

From matthew.brett at gmail.com Wed Nov 30 23:27:05 2016
From: matthew.brett at gmail.com (Matthew Brett)
Date: Wed, 30 Nov 2016 20:27:05 -0800
Subject: [Neuroimaging] Python 3 statement
In-Reply-To:
References:
Message-ID:

On Wed, Nov 30, 2016 at 6:58 PM, Ariel Rokem wrote:
> Hello everyone,
>
> I just learned about this statement this morning:
>
> http://www.python3statement.org/
>
> What do folks here think about this? Should we sign on to this?
Honestly, I suspect Chris is right: this is going to end in a world of pain for us if we drop 2.7 support before it has died a natural death. I doubt that we'd gain much either, because we'd have to maintain a separate bug-fix-only version for 2.7.

So I'd like to apply the same rule as we do for any other Python version: when the cost of the features we are missing outweighs the pain we're going to cause people by dropping the version, we drop the version.

Cheers,

Matthew