From arokem at gmail.com Tue Mar 1 10:50:23 2016 From: arokem at gmail.com (Ariel Rokem) Date: Tue, 1 Mar 2016 07:50:23 -0800 Subject: [Neuroimaging] [GSoC-general] GSOC 2016 - Develop a new DIPY website In-Reply-To: References: Message-ID: Hi Dileep, Welcome! Thanks for getting in touch. The best way to get acquainted with the project and its goals is by looking at the current website and the examples therein (http://nipy.org/dipy/examples_index.html). A good way to get acquainted with our development process is to read the developer documentation: http://nipy.org/dipy/devel/gitwash/index.html. Once you've read that (and the guidelines for contributing: https://github.com/nipy/dipy/blob/master/CONTRIBUTING.md), a good way to start getting involved is to choose one of the issues on our github issues page (https://github.com/nipy/dipy/issues) and try to address it through a pull request. There are quite a few labeled "beginner-friendly" (many to do with formatting of some of the older parts of our code-base), so these should be a good place to start. Let us know (here, or on the gitter channel: https://gitter.im/nipy/dipy) as questions come up for you. Cheers, Ariel On Mon, Feb 29, 2016 at 7:52 AM, dileep Chandima wrote: > Hi All, > > I'm Dileepa Chandima, a final-year undergraduate student of the Faculty of > Engineering, University of Peradeniya, Sri Lanka. > > I went through all the project ideas and all of them are interesting, so > I decided to work on "Develop a new DIPY website with more interactive > features" for GSOC 2016 because of the experience I gained from various > projects during my internship program. I have knowledge of Java, > Python, RESTful APIs, HTML, Bootstrap, and CSS, and I strongly > believe that this would also benefit my future career development. > > I also went through the following areas to get more familiar with the > DIPY website, so as to help me adapt to the project: 
> > - Subscribed to the 'GSOC general Community' dev mailing list and the > 'NIPY Community' dev mailing list. > - Cloned the DIPY git repository (https://github.com/nipy/dipy.git). > - Built a small website using 'Django'. > - Started going through the Django and Sphinx documentation. > > Please be kind enough to provide more details about the project; I > really appreciate your cooperation on this matter. > > Thank you. > > -- > Dileepa Chandima > Dept of Computer Engineering > University of Peradeniya > linkedIn > > _______________________________________________ > GSoC-general mailing list > GSoC-general at python.org > https://mail.python.org/mailman/listinfo/gsoc-general > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From satra at mit.edu Tue Mar 1 10:56:35 2016 From: satra at mit.edu (Satrajit Ghosh) Date: Tue, 1 Mar 2016 10:56:35 -0500 Subject: [Neuroimaging] Seeking postdoctoral candidates for datascience positions Message-ID: We are seeking two postdoctoral computational/data scientists with a PhD in computer science, electrical or biomedical engineering, neuroscience, statistics, or related fields to apply for positions at the McGovern Institute for Brain Research at the Massachusetts Institute of Technology. The projects cover a broad array of neuroinformatics topics: - Data mining of gene-behavior-anatomy relationships - Architecting next-generation dataflow systems - Reproducible research platforms and applications - Nonlinear image and other high-dimensional registration - Predictive analytics in mental health - Linked data platforms Working on these projects will involve collaboration with partners within and across regional and international institutions. Candidates are expected to develop algorithms and prototype ideas, contribute to open-source tools, and perform software engineering, testing, and validation. 
Candidates will have the opportunity to mentor undergraduate and graduate students, and contribute to data science at MIT. The ideal candidates will have strong computational skills, enjoy collaborating, and be able to adapt to and adopt a diverse set of technologies. A documented PhD in computer science, electrical or biomedical engineering, neuroscience, statistics, or a related field is required before starting this position. Positions are available for one year, with the possibility of yearly extension depending on performance and funding. For inquiries please contact Satrajit Ghosh (satra at mit.edu). -------------- next part -------------- An HTML attachment was scrubbed... URL: From code at oscaresteban.es Tue Mar 1 13:04:07 2016 From: code at oscaresteban.es (Oscar Esteban) Date: Tue, 1 Mar 2016 10:04:07 -0800 Subject: [Neuroimaging] [nipype] Developers roundup Message-ID: Hi all, It looks like there's good agreement to have this meeting on Tuesday, March 8th at 11.30am. Please feel free to join us at this hangout: https://hangouts.google.com/hangouts/_/oscaresteban.es/nipypedevs https://hangouts.google.com/hangouts/_/oscaresteban.es/nipypedevs (link repeated for convenience on small-screen devices) For some reason, you will need to request acceptance before you join. If you want to propose any topic or make any comments in advance, please post them at neurostars: https://neurostars.org/p/3733/ Thanks a lot! Oscar Esteban -------------- next part -------------- An HTML attachment was scrubbed... URL: From satra at mit.edu Tue Mar 1 14:46:18 2016 From: satra at mit.edu (Satrajit Ghosh) Date: Tue, 1 Mar 2016 14:46:18 -0500 Subject: [Neuroimaging] [nipype] Developers roundup In-Reply-To: References: Message-ID: just to clarify, this is 11.30am PST cheers, satra On Tue, Mar 1, 2016 at 1:04 PM, Oscar Esteban wrote: > Hi all, > > It looks like there's good agreement to have this meeting on Tuesday, > March 8th at 11.30am. 
> > Please feel free to join us at this hangout: > > https://hangouts.google.com/hangouts/_/oscaresteban.es/nipypedevs > https://hangouts.google.com/hangouts/_/oscaresteban.es/nipypedevs > (link repeated for convenience in small screen devices) > > For some reason, you will need to request for acceptance before you join. > > If you want to propose any topic or make any previous comment, please post > it at neurostars: https://neurostars.org/p/3733/ > > Thanks a lot! > > Oscar Esteban > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at onerussian.com Tue Mar 1 14:50:26 2016 From: lists at onerussian.com (Yaroslav Halchenko) Date: Tue, 1 Mar 2016 14:50:26 -0500 Subject: [Neuroimaging] [nipype] Developers roundup In-Reply-To: References: Message-ID: <20160301195026.GL7904@onerussian.com> > It looks like there's a good agreement to have this meeting on Tuesday, > March 8th at 11.30am. > Please feel free to join us at this hangout: > https://hangouts.google.com/hangouts/_/oscaresteban.es/nipypedevs > https://hangouts.google.com/hangouts/_/oscaresteban.es/nipypedevs what about creating a google calendar (e.g. nipype-dev), so we could all just add it in, and then for events add "video call" so URLs will be embedded with the event entry? -- Yaroslav O. 
Halchenko Center for Open Neuroscience http://centerforopenneuroscience.org Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 WWW: http://www.linkedin.com/in/yarik From stjeansam at gmail.com Wed Mar 2 06:10:43 2016 From: stjeansam at gmail.com (Samuel St-Jean) Date: Wed, 2 Mar 2016 12:10:43 +0100 Subject: [Neuroimaging] [nibabel] Loading data directly instead of using a memmap In-Reply-To: References: Message-ID: Well, after all it does seem to load the array into memory (and use it, of course), since it is not a memmap anymore.

In [1]: import nibabel as nib
In [2]: %load_ext memory_profiler
In [3]: %memit a=vol.get_data()
In [4]: vol = nib.load('data.nii')
In [5]: %memit a=vol.get_data()
peak memory: 101.39 MiB, increment: 0.00 MiB
In [6]: %memit a=np.array(vol.get_data())
peak memory: 8139.60 MiB, increment: 8038.21 MiB

It also seems to take twice the space in memory that it takes on disk, for some weird reason. Any idea why that is? 2016-01-14 12:24 GMT+01:00 Samuel St-Jean : > Oh, a simple fix after all, thanks! > > 2016-01-14 12:14 GMT+01:00 Nathaniel Smith : > >> On Thu, Jan 14, 2016 at 2:28 AM, Samuel St-Jean >> wrote: >> > Hello, >> > >> > While processing some hcp data, we decided to use directly nifti files >> > instead of using gzipped file as they use quite a lot of ram (there are >> some >> > PRs fixing this under the work in nibabel apparently). So when you load >> a >> > regular nifti file, it gets a memmap instead of a proper numpy array, >> which >> > does not support the same feature and sometimes ends up producing really >> > weird bugs down the line (https://github.com/numpy/numpy/issues/6750). >> > >> > So, we just ended up casting the memmap to a regular numpy array with >> > something like >> > >> > data = np.array(data) >> > >> > While this works, is it memory usage friendly (hcp data is ~4go after >> all) >> > or does it keep a reference in the background? 
Is there a better way to >> > achieve similar results, like for example forcing nibabel to load a >> numpy >> > array directly instead of memmap? >> >> It costs a few hundred bytes of memory, and otherwise will act >> identically except that you lose access to the special mmap methods. I >> wouldn't worry about it :-). >> >> -n >> >> -- >> Nathaniel J. Smith -- http://vorpus.org >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stjeansam at gmail.com Wed Mar 2 07:37:37 2016 From: stjeansam at gmail.com (Samuel St-Jean) Date: Wed, 2 Mar 2016 13:37:37 +0100 Subject: [Neuroimaging] [nibabel] Loading data directly instead of using a memmap In-Reply-To: References: Message-ID: Well, this is not directly loading the data, but nibabel keeps the array in cache for future access, so doing instead

vol = nib.load(args.input)
data = np.array(vol.get_data())
vol.uncache()  # Unload cached array from memory

will remove the double copy from memory. If anyone wants to suggest another way, please do; I found the info here: http://nipy.org/nibabel/images_and_memory.html 2016-03-02 12:10 GMT+01:00 Samuel St-Jean : > Well after all it seems to load the array in memory (and use it of > course), since it is not a memmap anymore. > > In [1]: import nibabel as nib > > In [2]: %load_ext memory_profiler > > In [3]: %memit a=vol.get_data() > > In [4]: vol = nib.load('data.nii') > > In [5]: %memit a=vol.get_data() > peak memory: 101.39 MiB, increment: 0.00 MiB > > In [6]: %memit a=np.array(vol.get_data()) > peak memory: 8139.60 MiB, increment: 8038.21 MiB > > And it also seems to take twice the space in memory than on disk for some > weird reason. > Any idea why that is? > > 2016-01-14 12:24 GMT+01:00 Samuel St-Jean : > >> Oh, a simple fix after all, thanks! 
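For what it's worth, the memmap-versus-ndarray behaviour described above can be checked in isolation with plain numpy (this sketch uses a raw binary file in place of a NIfTI; 'demo.raw' is just an illustrative name):

```python
import numpy as np

# Stand-in for what nibabel hands back on an uncompressed NIfTI: a np.memmap.
arr = np.arange(12, dtype=np.int16)
arr.tofile('demo.raw')

mm = np.memmap('demo.raw', dtype=np.int16, mode='r', shape=(12,))
data = np.array(mm)  # full in-memory copy, a plain ndarray

assert isinstance(mm, np.memmap)
assert not isinstance(data, np.memmap)
assert data.base is None  # the copy holds no reference back to the mapping
del mm                    # the mapping can be released; 'data' stays valid
print(data.sum())         # 66
```

While both `mm` and `data` are alive, the data exists twice (mapping plus copy), which is the "double copy" the uncache trick is addressing.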
>> >> 2016-01-14 12:14 GMT+01:00 Nathaniel Smith : >> >>> On Thu, Jan 14, 2016 at 2:28 AM, Samuel St-Jean >>> wrote: >>> > Hello, >>> > >>> > While processing some hcp data, we decided to use directly nifti files >>> > instead of using gzipped file as they use quite a lot of ram (there >>> are some >>> > PRs fixing this under the work in nibabel apparently). So when you >>> load a >>> > regular nifti file, it gets a memmap instead of a proper numpy array, >>> which >>> > does not support the same feature and sometimes ends up producing >>> really >>> > weird bugs down the line (https://github.com/numpy/numpy/issues/6750). >>> > >>> > So, we just ended up casting the memmap to a regular numpy array with >>> > something like >>> > >>> > data = np.array(data) >>> > >>> > While this works, is it memory usage friendly (hcp data is ~4go after >>> all) >>> > or does it keep a reference in the background? Is there a better way to >>> > achieve similar results, like for example forcing nibabel to load a >>> numpy >>> > array directly instead of memmap? >>> >>> It costs a few hundred bytes of memory, and otherwise will act >>> identically except that you lose access to the special mmap methods. I >>> wouldn't worry about it :-). >>> >>> -n >>> >>> -- >>> Nathaniel J. Smith -- http://vorpus.org >>> _______________________________________________ >>> Neuroimaging mailing list >>> Neuroimaging at python.org >>> https://mail.python.org/mailman/listinfo/neuroimaging >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stjeansam at gmail.com Thu Mar 3 05:25:16 2016 From: stjeansam at gmail.com (Samuel St-Jean) Date: Thu, 3 Mar 2016 11:25:16 +0100 Subject: [Neuroimaging] ExploreDTI workshop from July 4th to 6th 2015 Message-ID: Hello everyone, I am pleased to announce the venue of a workshop in Utrecht, The Netherlands, for anyone interested in learning the basics of processing diffusion MRI using ExploreDTI. 
This is a great opportunity to learn both the theory and the practical challenges behind diffusion MRI processing. Attendance is on a first come, first served basis, with a maximum of 20 attendees to ensure a high level of support from each lecturer. The workshop includes presentations from invited speakers and a hands-on tutorial covering the following topics:

1. Quality assessment
   - Residual maps
   - Estimation methods (OLLS, WLLS, NLS, REKINDLE)
2. Artifact correction
   - Signal drift
   - Gibbs ringing
   - Subject motion and eddy current-induced distortions
   - EPI deformations
3. Diffusion approaches
   - Diffusion tensor imaging (DTI)
   - Diffusion kurtosis imaging (DKI)
   - Spherical deconvolution (SD)
4. Fiber tractography
   - Virtual dissection
   - Fiber bundle segmentation
   - Along-tract analysis
5. Automated analyses
   - ROIs (native/atlas space)
   - Tract pathways
   - Connectomics
6. Visualization
   - ROIs
   - Tract pathways
   - Connectomes
   - Making animations

While the workshop will focus on ExploreDTI, it is also a perfect occasion for anyone wishing to learn the basics of processing diffusion datasets and to ask the organizers specific questions about diffusion and related topics. Feel free to visit http://www.exploredti.com/workshop/ for more information. Samuel St-Jean -------------- next part -------------- An HTML attachment was scrubbed... URL: From garyfallidis at gmail.com Thu Mar 3 10:28:36 2016 From: garyfallidis at gmail.com (Eleftherios Garyfallidis) Date: Thu, 3 Mar 2016 10:28:36 -0500 Subject: [Neuroimaging] [dipy] Fitting diffusion models in the absence of S0 signal In-Reply-To: References: Message-ID: Sorry, your suggestion is not exactly clear. Can you show us how the code would look with your proposal? Also, apart from DTI and DKI, which other models will be affected by these changes? Is this a change suggested only for DTI and DKI, or will it affect all or most reconstruction models? 
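As a concrete, purely hypothetical sketch of what the S0-as-a-property design could look like (the class and attribute names here are illustrative, not dipy's actual API):

```python
import numpy as np

class ReconstFit:
    """Sketch of a fit object whose S0 can be model-derived or user-supplied."""

    def __init__(self, model, data):
        self.model = model
        self.data = data
        self._S0 = None  # would be set by the model's fit() when derivable

    @property
    def S0(self):
        if self._S0 is None:
            raise ValueError("S0 was neither derived by the model nor set")
        return self._S0

    @S0.setter
    def S0(self, value):
        # user override, e.g. before predicting onto a new measurement
        self._S0 = np.asarray(value, dtype=float)

fit = ReconstFit(model=None, data=np.zeros(3))
fit.S0 = 100.0          # set from the data, or by hand
print(float(fit.S0))    # 100.0
```

Prediction code could then read `fit.S0` uniformly, whether it was derived during fitting or supplied afterwards.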
On Mon, Feb 29, 2016 at 11:53 AM, Ariel Rokem wrote: > Hi everyone, > > In Rafael's recent PR implementing free-water-eliminated DTI ( > https://github.com/nipy/dipy/pull/835), we had a little bit of a > discussion about the use of the non-diffusion weighted signal (S0). As > pointed out by Rafael, in the absence of an S0 in the measured data, for > some models, that can be derived from the model fit ( > https://github.com/nipy/dipy/pull/835#issuecomment-183060855). > > I think that we would like to support using data both with and without S0. > On the other hand, I don't think that we should treat the derived S0 as a > model parameter, because in some cases, we want to provide S0 as an input > (for example, when predicting back the signal for another measurement, with > a different ). In addition, it would be hard to incorporate that into the > model_params variable of the TensorFit object, while maintaining backwards > compatibility of the TensorModel/TensorFit and derived classes (e.g., DKI). > > My proposal is to have an S0 property for ReconstFit objects. When this is > calculated from the model (e.g. in DTI), it gets set by the `fit` method of > the ReconstModel object. When it isn't, it can be set from the data. Either > way, it can be over-ridden by the user (e.g., for the purpose of predicting > on a new data-set). This might change the behavior of the prediction code > slightly, but maybe that is something we can live with? > > Happy to hear what everyone thinks, before we move ahead with this. > > Cheers, > > Ariel > > > > > > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From garyfallidis at gmail.com Thu Mar 3 11:24:46 2016 From: garyfallidis at gmail.com (Eleftherios Garyfallidis) Date: Thu, 3 Mar 2016 11:24:46 -0500 Subject: [Neuroimaging] [DIPY] Setting up a platform for offline end-to-end quality assurance for DIPY Message-ID: Dear Matthew, Maxime, Ariel and all, Mr. Dumont and I have started creating some workflows which can be run from the command line. These are made to work with large, real datasets. I think it would be great if we could add a different type of testing to what we are using right now. Most of our current testing is fast testing of individual functions, and we should definitely continue having that. But I think we also need end-to-end offline testing, where we actually test with big whole-brain datasets and collect automatic quality assurance reports. That way we cover most unexpected issues. Now, the problem with having such a platform is that it needs computing power and some disk space. It may need a decent computer to run for 24 hours, for example, and let's say around 100 GBytes of free disk space. It will also need to send automated reports saying whether all is good or not. Ariel has suggested using the cloud and Docker, but I am afraid that will be too expensive for our pockets right now, unless someone can donate to the project. An alternative idea would be to go gradually and set up one of the computers in Sherbrooke, Berkeley, or Seattle to do the job. I think this QA should run once or twice a week rather than every day. There are other platforms that need to run relatively frequently: one is the examples for the documentation, and then there is Omar's validation framework, which actually needs a large cluster. We can deal with those at a later stage. The easiest way forward with the workflows, as I see it right now, is that Mr. Dumont adds a script in dipy/tools that runs all the workflows, just as make_examples.py runs all the examples. 
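A rough, hypothetical sketch of such a driver script (the glob pattern, report format, and function name are invented for illustration, not an existing dipy tool):

```python
import glob
import os
import subprocess
import sys
import tempfile
import time

def run_all_workflows(pattern, data_dir):
    """Run every workflow script matching `pattern` against `data_dir`
    and collect (script, exit code, wall time) tuples for a QA report."""
    report = []
    for script in sorted(glob.glob(pattern)):
        t0 = time.time()
        proc = subprocess.run([sys.executable, script, data_dir],
                              stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        report.append((os.path.basename(script), proc.returncode,
                       time.time() - t0))
    return report

# Demo: run it against a throwaway 'workflow' that just echoes its argument.
demo_dir = tempfile.mkdtemp()
with open(os.path.join(demo_dir, "wf_echo.py"), "w") as f:
    f.write("import sys\nprint('processing', sys.argv[1])\n")
report = run_all_workflows(os.path.join(demo_dir, "*.py"), "/data/qa")
print(report[0][0], report[0][1])  # wf_echo.py 0
```

The collected tuples could then be rendered into the automated pass/fail report mentioned above (HTML, PDF, or a plain email).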
We first try this platform in Sherbrooke and then we need to figure out a way to send automated reports to the core developers or to berkeley builders and so on. Maybe sending a PDF or HTML of the output screenshots would be also a good idea. Let me know what you think. Cheers, Eleftherios -------------- next part -------------- An HTML attachment was scrubbed... URL: From arokem at gmail.com Thu Mar 3 13:29:06 2016 From: arokem at gmail.com (Ariel Rokem) Date: Thu, 3 Mar 2016 10:29:06 -0800 Subject: [Neuroimaging] [DIPY] Setting up a platform for offline end-to-end quality assurance for DIPY In-Reply-To: References: Message-ID: Hi Eleftherios, I have resources to run this kind of thing on AWS, or some other cloud provider. I see many advantages to doing this on the cloud and using something like docker for deployment (e.g., portability and reproducibility in other people's hands, as well as relatively easy scaling in ours). Data can then also consistently be pulled from the HCP S3 buckets (see for example the beginning of the notebook here: https://github.com/arokem/end-to-end/blob/master/end-to-end.ipynb). Once we have automated all that, it will also be relatively easy to transfer these ideas to the other use-cases you mentioned. But we'd need to do some math to see how much this would actually cost. Do you have a sense of the requirements? For example, how often would you want to run the pipeline? Every time a PR happens? That's happening quite often these days ;-) I don't believe we need a really large machine to run persistently. We might want a small machine running persistently, monitoring github for us, and then waking up the big beast when there's a lot of work to do. That might reduce costs. Cheers, Ariel On Thu, Mar 3, 2016 at 8:24 AM, Eleftherios Garyfallidis < garyfallidis at gmail.com> wrote: > Dear Matthew, Maxime, Ariel and all, > > Mr. Dumont and I have started creating some workflows which can be run by > the command line. 
These are made to work with large real datasets. > > I think it would be great if we could use a different type of testing from > what we were using right now. Most of the testing we use is actually fast > testing of functions and we should definitely continue having that. > > But I think we need also an end-to-end offline testing where we actually > test with big whole brain datasets and then we can collect some automatic > quality assurance reports. In that way we cover most of unexpected issues. > > Now, the problem with having such a platform is that it needs computing > power and some disk space. It may need a descent computer to run for 24 > hours for example and let's say around 100 GBytes of free disk space. Then > it will also need to send some automated reports to say that is all good or > not. > > Ariel has suggested to use the cloud and docker but I am afraid that it > will be too expensive for our pockets right now except if someone can > donate to the project. > > An alternative idea would be to go gradually and setup one of the > computers in Sherbrooke or in Berkeley or in Seattle to do such a job. I > think this QA should run once/twice a week rather than every day. > > Now there are other platforms that need to run relatively frequently. One > is the examples for the documentation and then there is Omar's validation > framework which actually needs a large cluster. We can deal with those at a > later stage. > > The easiest way forward with the workflows that I see right now is that > Mr. Dumont adds a script in dipy/tools that will run all the workflows as > we do with make_examples.py that run all the examples. We first try this > platform in Sherbrooke and then we need to figure out a way to send > automated reports to the core developers or to berkeley builders and so on. > Maybe sending a PDF or HTML of the output screenshots would be also a good > idea. > > Let me know what you think. 
> > Cheers, > Eleftherios > > > > > > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From krzysztof.gorgolewski at gmail.com Thu Mar 3 13:33:35 2016 From: krzysztof.gorgolewski at gmail.com (Chris Filo Gorgolewski) Date: Thu, 3 Mar 2016 10:33:35 -0800 Subject: [Neuroimaging] [DIPY] Setting up a platform for offline end-to-end quality assurance for DIPY In-Reply-To: References: Message-ID: Have a look at what we are doing for Nipype on CircleCI (on the free open source tier): https://github.com/nipy/nipype/blob/master/circle.yml https://circleci.com/gh/nipy/nipype All of the workflows we run for tests take over 3h to finish. A similar setup is implemented in the nilearn project. Best, Chris On Thu, Mar 3, 2016 at 10:29 AM, Ariel Rokem wrote: > Hi Eleftherios, > > I have resources to run this kind of thing on AWS, or some other cloud > provider. I see many advantages to doing this on the cloud and using > something like docker for deployment (e.g., portability and reproducibility > in other people's hands, as well as relatively easy scaling in ours). Data > can then also consistently be pulled from the HCP S3 buckets (see for > example the beginning of the notebook here: > https://github.com/arokem/end-to-end/blob/master/end-to-end.ipynb). Once > we have automated all that, it will also be relatively easy to transfer > these ideas to the other use-cases you mentioned. > > But we'd need to do some math to see how much this would actually cost. Do > you have a sense of the requirements? For example, how often would you want > to run the pipeline? Every time a PR happens? That's happening quite often > these days ;-) I don't believe we need a really large machine to run > persistently. 
We might want a small machine running persistently, > monitoring github for us, and then waking up the big beast when there's a > lot of work to do. That might reduce costs. > > Cheers, > > Ariel > > On Thu, Mar 3, 2016 at 8:24 AM, Eleftherios Garyfallidis < > garyfallidis at gmail.com> wrote: > >> Dear Matthew, Maxime, Ariel and all, >> >> Mr. Dumont and I have started creating some workflows which can be run by >> the command line. These are made to work with large real datasets. >> >> I think it would be great if we could use a different type of testing >> from what we were using right now. Most of the testing we use is actually >> fast testing of functions and we should definitely continue having that. >> >> But I think we need also an end-to-end offline testing where we actually >> test with big whole brain datasets and then we can collect some automatic >> quality assurance reports. In that way we cover most of unexpected issues. >> >> Now, the problem with having such a platform is that it needs computing >> power and some disk space. It may need a descent computer to run for 24 >> hours for example and let's say around 100 GBytes of free disk space. Then >> it will also need to send some automated reports to say that is all good or >> not. >> >> Ariel has suggested to use the cloud and docker but I am afraid that it >> will be too expensive for our pockets right now except if someone can >> donate to the project. >> >> An alternative idea would be to go gradually and setup one of the >> computers in Sherbrooke or in Berkeley or in Seattle to do such a job. I >> think this QA should run once/twice a week rather than every day. >> >> Now there are other platforms that need to run relatively frequently. One >> is the examples for the documentation and then there is Omar's validation >> framework which actually needs a large cluster. We can deal with those at a >> later stage. >> >> The easiest way forward with the workflows that I see right now is that >> Mr. 
Dumont adds a script in dipy/tools that will run all the workflows as >> we do with make_examples.py that run all the examples. We first try this >> platform in Sherbrooke and then we need to figure out a way to send >> automated reports to the core developers or to berkeley builders and so on. >> Maybe sending a PDF or HTML of the output screenshots would be also a >> good idea. >> >> Let me know what you think. >> >> Cheers, >> Eleftherios >> >> >> >> >> >> >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> >> > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From krzysztof.gorgolewski at gmail.com Thu Mar 3 13:35:16 2016 From: krzysztof.gorgolewski at gmail.com (Chris Filo Gorgolewski) Date: Thu, 3 Mar 2016 10:35:16 -0800 Subject: [Neuroimaging] [DIPY] Setting up a platform for offline end-to-end quality assurance for DIPY In-Reply-To: References: Message-ID: Also if you want you can use Docker on CircleCI - I use it in NeuroVault: https://circleci.com/gh/NeuroVault/NeuroVault Best, Chris On Thu, Mar 3, 2016 at 10:33 AM, Chris Filo Gorgolewski < krzysztof.gorgolewski at gmail.com> wrote: > Have a look at waht we are doing for Nipype on CircleCI (on the free open > source tier): > > https://github.com/nipy/nipype/blob/master/circle.yml > https://circleci.com/gh/nipy/nipype > > All of the workflows we run for tests take over 3h to finish. Similar set > up is implemented in nilearn project. > > Best, > Chris > > On Thu, Mar 3, 2016 at 10:29 AM, Ariel Rokem wrote: > >> Hi Eleftherios, >> >> I have resources to run this kind of thing on AWS, or some other cloud >> provider. 
I see many advantages to doing this on the cloud and using >> something like docker for deployment (e.g., portability and reproducibility >> in other people's hands, as well as relatively easy scaling in ours). Data >> can then also consistently be pulled from the HCP S3 buckets (see for >> example the beginning of the notebook here: >> https://github.com/arokem/end-to-end/blob/master/end-to-end.ipynb). Once >> we have automated all that, it will also be relatively easy to transfer >> these ideas to the other use-cases you mentioned. >> >> But we'd need to do some math to see how much this would actually cost. >> Do you have a sense of the requirements? For example, how often would you >> want to run the pipeline? Every time a PR happens? That's happening quite >> often these days ;-) I don't believe we need a really large machine to run >> persistently. We might want a small machine running persistently, >> monitoring github for us, and then waking up the big beast when there's a >> lot of work to do. That might reduce costs. >> >> Cheers, >> >> Ariel >> >> On Thu, Mar 3, 2016 at 8:24 AM, Eleftherios Garyfallidis < >> garyfallidis at gmail.com> wrote: >> >>> Dear Matthew, Maxime, Ariel and all, >>> >>> Mr. Dumont and I have started creating some workflows which can be run >>> by the command line. These are made to work with large real datasets. >>> >>> I think it would be great if we could use a different type of testing >>> from what we were using right now. Most of the testing we use is actually >>> fast testing of functions and we should definitely continue having that. >>> >>> But I think we need also an end-to-end offline testing where we actually >>> test with big whole brain datasets and then we can collect some automatic >>> quality assurance reports. In that way we cover most of unexpected issues. >>> >>> Now, the problem with having such a platform is that it needs computing >>> power and some disk space. 
It may need a descent computer to run for 24 >>> hours for example and let's say around 100 GBytes of free disk space. Then >>> it will also need to send some automated reports to say that is all good or >>> not. >>> >>> Ariel has suggested to use the cloud and docker but I am afraid that it >>> will be too expensive for our pockets right now except if someone can >>> donate to the project. >>> >>> An alternative idea would be to go gradually and setup one of the >>> computers in Sherbrooke or in Berkeley or in Seattle to do such a job. I >>> think this QA should run once/twice a week rather than every day. >>> >>> Now there are other platforms that need to run relatively frequently. >>> One is the examples for the documentation and then there is Omar's >>> validation framework which actually needs a large cluster. We can deal with >>> those at a later stage. >>> >>> The easiest way forward with the workflows that I see right now is that >>> Mr. Dumont adds a script in dipy/tools that will run all the workflows as >>> we do with make_examples.py that run all the examples. We first try this >>> platform in Sherbrooke and then we need to figure out a way to send >>> automated reports to the core developers or to berkeley builders and so on. >>> Maybe sending a PDF or HTML of the output screenshots would be also a >>> good idea. >>> >>> Let me know what you think. >>> >>> Cheers, >>> Eleftherios >>> >>> >>> >>> >>> >>> >>> _______________________________________________ >>> Neuroimaging mailing list >>> Neuroimaging at python.org >>> https://mail.python.org/mailman/listinfo/neuroimaging >>> >>> >> >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lists at onerussian.com Thu Mar 3 13:46:01 2016 From: lists at onerussian.com (Yaroslav Halchenko) Date: Thu, 3 Mar 2016 13:46:01 -0500 Subject: [Neuroimaging] [DIPY] Setting up a platform for offline end-to-end quality assurance for DIPY In-Reply-To: References: Message-ID: <20160303184601.GC7904@onerussian.com> On Thu, 03 Mar 2016, Chris Filo Gorgolewski wrote: > Have a look at what we are doing for Nipype on CircleCI (on the free open > source tier): > https://github.com/nipy/nipype/blob/master/circle.yml > https://circleci.com/gh/nipy/nipype > All of the workflows we run for tests take over 3h to finish. A similar set > up is implemented in the nilearn project. Sorry for my ignorance -- I haven't used CircleCI yet. So is the crucial advantage of CircleCI (over Travis) that it allows for longer jobs and more caching space? -- Yaroslav O. Halchenko Center for Open Neuroscience http://centerforopenneuroscience.org Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 WWW: http://www.linkedin.com/in/yarik From krzysztof.gorgolewski at gmail.com Thu Mar 3 14:02:37 2016 From: krzysztof.gorgolewski at gmail.com (Chris Filo Gorgolewski) Date: Thu, 3 Mar 2016 11:02:37 -0800 Subject: [Neuroimaging] [DIPY] Setting up a platform for offline end-to-end quality assurance for DIPY In-Reply-To: <20160303184601.GC7904@onerussian.com> References: <20160303184601.GC7904@onerussian.com> Message-ID: Yes, longer jobs (AFAIK there is no limit as long as the job is printing something every 10 minutes) and more RAM (4GB) are the main advantages.
On Mar 3, 2016 10:46 AM, "Yaroslav Halchenko" wrote: > > On Thu, 03 Mar 2016, Chris Filo Gorgolewski wrote: > > > Have a look at waht we are doing for Nipype on CircleCI (on the free open > > source tier): > > > https://github.com/nipy/nipype/blob/master/circle.yml > > https://circleci.com/gh/nipy/nipype > > > All of the workflows we run for tests take over 3h to finish. Similar set > > up is implemented in nilearn project. > > sorry for ignorance -- haven't used circleci yet. So is that the > crucial advantage of circle ci (over travis) that it allows for > longer jobs and more caching space or what? > > -- > Yaroslav O. Halchenko > Center for Open Neuroscience http://centerforopenneuroscience.org > Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 > Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 > WWW: http://www.linkedin.com/in/yarik > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vsochat at stanford.edu Thu Mar 3 14:15:20 2016 From: vsochat at stanford.edu (vanessa sochat) Date: Thu, 3 Mar 2016 11:15:20 -0800 Subject: [Neuroimaging] [DIPY] Setting up a platform for offline end-to-end quality assurance for DIPY In-Reply-To: References: <20160303184601.GC7904@onerussian.com> Message-ID: The upper limit for each test is 2 hours (120 minutes). On Thu, Mar 3, 2016 at 11:02 AM, Chris Filo Gorgolewski < krzysztof.gorgolewski at gmail.com> wrote: > Yes, longer jobs (AFAIK there is no limit as long as the job is printing > something every 10 minutes) and more RAM (4GB) are the main advantages. 
> On Mar 3, 2016 10:46 AM, "Yaroslav Halchenko" > wrote: > >> >> On Thu, 03 Mar 2016, Chris Filo Gorgolewski wrote: >> >> > Have a look at waht we are doing for Nipype on CircleCI (on the free >> open >> > source tier): >> >> > https://github.com/nipy/nipype/blob/master/circle.yml >> > https://circleci.com/gh/nipy/nipype >> >> > All of the workflows we run for tests take over 3h to finish. Similar >> set >> > up is implemented in nilearn project. >> >> sorry for ignorance -- haven't used circleci yet. So is that the >> crucial advantage of circle ci (over travis) that it allows for >> longer jobs and more caching space or what? >> >> -- >> Yaroslav O. Halchenko >> Center for Open Neuroscience http://centerforopenneuroscience.org >> Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 >> Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 >> WWW: http://www.linkedin.com/in/yarik >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -- Vanessa Villamia Sochat Stanford University (603) 321-0676 -------------- next part -------------- An HTML attachment was scrubbed... URL: From krzysztof.gorgolewski at gmail.com Thu Mar 3 14:23:51 2016 From: krzysztof.gorgolewski at gmail.com (Chris Filo Gorgolewski) Date: Thu, 3 Mar 2016 11:23:51 -0800 Subject: [Neuroimaging] [DIPY] Setting up a platform for offline end-to-end quality assurance for DIPY In-Reply-To: References: <20160303184601.GC7904@onerussian.com> Message-ID: Ah yes that makes sense. You can however split your tests (for example one workflow per line) and easily run a battery of tests that takes more than 2h. On Mar 3, 2016 11:16 AM, "vanessa sochat" wrote: > The upper limit for each test is 2 hours (120 minutes). 
> > On Thu, Mar 3, 2016 at 11:02 AM, Chris Filo Gorgolewski < > krzysztof.gorgolewski at gmail.com> wrote: > >> Yes, longer jobs (AFAIK there is no limit as long as the job is printing >> something every 10 minutes) and more RAM (4GB) are the main advantages. >> On Mar 3, 2016 10:46 AM, "Yaroslav Halchenko" >> wrote: >> >>> >>> On Thu, 03 Mar 2016, Chris Filo Gorgolewski wrote: >>> >>> > Have a look at waht we are doing for Nipype on CircleCI (on the free >>> open >>> > source tier): >>> >>> > https://github.com/nipy/nipype/blob/master/circle.yml >>> > https://circleci.com/gh/nipy/nipype >>> >>> > All of the workflows we run for tests take over 3h to finish. Similar >>> set >>> > up is implemented in nilearn project. >>> >>> sorry for ignorance -- haven't used circleci yet. So is that the >>> crucial advantage of circle ci (over travis) that it allows for >>> longer jobs and more caching space or what? >>> >>> -- >>> Yaroslav O. Halchenko >>> Center for Open Neuroscience http://centerforopenneuroscience.org >>> Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 >>> Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 >>> WWW: http://www.linkedin.com/in/yarik >>> _______________________________________________ >>> Neuroimaging mailing list >>> Neuroimaging at python.org >>> https://mail.python.org/mailman/listinfo/neuroimaging >>> >> >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> >> > > > -- > Vanessa Villamia Sochat > Stanford University > (603) 321-0676 > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... 
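Chris's suggestion above (split the battery so that no single container exceeds the per-test time limit) boils down to a deterministic partition of the workflow list. A sketch, assuming the CIRCLE_NODE_INDEX / CIRCLE_NODE_TOTAL environment variables that CircleCI sets for parallel containers:

```python
import os

def my_share(workflows, node_index=None, node_total=None):
    """Round-robin the full workflow list across parallel CI containers.

    Every container computes the same assignment from its own index, so as
    long as each individual workflow fits inside the 2 h window, the whole
    battery can take much longer in aggregate.
    """
    if node_index is None:
        node_index = int(os.environ.get("CIRCLE_NODE_INDEX", 0))
    if node_total is None:
        node_total = int(os.environ.get("CIRCLE_NODE_TOTAL", 1))
    return [wf for i, wf in enumerate(workflows)
            if i % node_total == node_index]
```

In a circle.yml this is just each test line being guarded by the node index, which is the "one workflow per line" idea in code form.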
URL: From vsochat at stanford.edu Thu Mar 3 14:26:28 2016 From: vsochat at stanford.edu (vanessa sochat) Date: Thu, 3 Mar 2016 11:26:28 -0800 Subject: [Neuroimaging] [DIPY] Setting up a platform for offline end-to-end quality assurance for DIPY In-Reply-To: References: <20160303184601.GC7904@onerussian.com> Message-ID: +1 On Thu, Mar 3, 2016 at 11:23 AM, Chris Filo Gorgolewski < krzysztof.gorgolewski at gmail.com> wrote: > Ah yes that makes sense. You can however split your tests (for example one > workflow per line) and easily run a battery of tests that takes more than > 2h. > [...] > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging -- Vanessa Villamia Sochat Stanford University (603) 321-0676 -------------- next part -------------- An HTML attachment was scrubbed... URL: From garyfallidis at gmail.com Thu Mar 3 16:08:35 2016 From: garyfallidis at gmail.com (Eleftherios Garyfallidis) Date: Thu, 3 Mar 2016 16:08:35 -0500 Subject: [Neuroimaging] [DIPY] Setting up a platform for offline end-to-end quality assurance for DIPY In-Reply-To: References: <20160303184601.GC7904@onerussian.com> Message-ID: The problem with using CircleCI is the memory restriction. I think it is 4GB. Please correct me if wrong. Just some of the tractography files we load are of this size. Even for running our doc examples we need more than 8 GB. We need a machine that has 12-16 GB of RAM. Or we need to refactor some of the methods that require larger amounts of memory to use memmaps or similar. But even so, 4 GB will be a strong restriction. We are often dealing with big datasets.
Ariel, as I said in my previous e-mail, at this stage we need a machine that can process for 24 hours continuously and do that 1 to 2 times a week. But of course if there are important issues we will need to do that sooner. Also it would be good to have some multiprocessing involved; the machine used should have multiple cores if possible. So, what do you think the logistics for that will be? We also need a system which will allow us to report and send feedback after the execution of the workflows. In a way Matthew has already set up some of these things in the Berkeley bots. I want to hear his opinion on this too. I also like the idea of having a Docker image that can be moved anywhere. On Thu, Mar 3, 2016 at 2:26 PM, vanessa sochat wrote: > +1 > [...] > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging -------------- next part -------------- An HTML attachment was scrubbed... URL: From arokem at gmail.com Sat Mar 5 06:04:16 2016 From: arokem at gmail.com (Ariel Rokem) Date: Sat, 5 Mar 2016 03:04:16 -0800 Subject: [Neuroimaging] [dipy]Fitting diffusion models in the absence of S0 signal In-Reply-To: References: Message-ID: On Thu, Mar 3, 2016 at 7:28 AM, Eleftherios Garyfallidis < garyfallidis at gmail.com> wrote: > Sorry your suggestion is not exactly clear.
Can you show us how the > code will look with your proposal? Also, apart from DTI and DKI, what other > models will be affected by these changes? Is this a change suggested only > for DTI and DKI, or will it affect all or most reconstruction models? > > First of all, to answer your last question: this will certainly affect DTI and DKI, and there will be other models to follow. For example the FWDTI that Rafael is currently proposing in that PR. The idea would be to also more tightly integrate these three models (and future extensions!), so that we can remove some of the redundancies that currently exist. We could make this a part of the base.Reconst* methods - it might apply to other models as well (e.g. CSD, SFM, etc.). But that's part of what I would like to discuss here. As for code, for now, here's a sketch of what this would look like for the tensor model: https://gist.github.com/arokem/508dc1b22bdbd0bdd748 Note that though it changes the prediction API a bit, not much else would have to change. In particular, all the code that relies on there being 12 model parameters will still be intact, because S0 doesn't go into the model parameters. What do you think? Am I missing something big here? Or should I go ahead and start working on a PR implementing this? Thanks! Ariel > On Mon, Feb 29, 2016 at 11:53 AM, Ariel Rokem wrote: > >> Hi everyone, >> >> In Rafael's recent PR implementing free-water-eliminated DTI ( >> https://github.com/nipy/dipy/pull/835), we had a little bit of a >> discussion about the use of the non-diffusion weighted signal (S0). As >> pointed out by Rafael, in the absence of an S0 in the measured data, for >> some models, it can be derived from the model fit ( >> https://github.com/nipy/dipy/pull/835#issuecomment-183060855). >> >> I think that we would like to support using data both with and without >> S0.
On the other hand, I don't think that we should treat the derived S0 as >> a model parameter, because in some cases, we want to provide S0 as an input >> (for example, when predicting back the signal for another measurement, with >> a different ). In addition, it would be hard to incorporate that into the >> model_params variable of the TensorFit object, while maintaining backwards >> compatibility of the TensorModel/TensorFit and derived classes (e.g., DKI). >> >> My proposal is to have an S0 property for ReconstFit objects. When this >> is calculated from the model (e.g. in DTI), it gets set by the `fit` method >> of the ReconstModel object. When it isn't, it can be set from the data. >> Either way, it can be over-ridden by the user (e.g., for the purpose of >> predicting on a new data-set). This might change the behavior of the >> prediction code slightly, but maybe that is something we can live with? >> >> Happy to hear what everyone thinks, before we move ahead with this. >> >> Cheers, >> >> Ariel >> >> >> >> >> >> >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> >> > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... 
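A toy sketch of the property-based idea discussed above. These are not DIPY's actual classes (the real proposal is in the gist Ariel links); `ToyModel` and `derive_S0` are made up purely to show the mechanics of an overridable S0 that stays out of the model parameters:

```python
class ReconstFit:
    """Sketch of a fit object with S0 as an overridable property."""

    def __init__(self, model, data):
        self.model = model
        self.data = data
        self._S0 = None          # not a model parameter; stored separately

    @property
    def S0(self):
        if self._S0 is None:
            # Fall back to deriving S0: e.g. from the b=0 volumes of the
            # data, or from the model fit when the model supports that.
            self._S0 = self.model.derive_S0(self.data)
        return self._S0

    @S0.setter
    def S0(self, value):
        # The user can override S0, e.g. to predict a different acquisition.
        self._S0 = value


class ToyModel:
    """Stand-in for a ReconstModel; derives S0 as the mean of the data."""
    def derive_S0(self, data):
        return sum(data) / len(data)
```

Because S0 lives on the fit object rather than in `model_params`, code that indexes into the 12 tensor parameters is untouched, which is the backwards-compatibility point made in the thread.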
URL: From satra at mit.edu Tue Mar 8 14:28:05 2016 From: satra at mit.edu (Satrajit Ghosh) Date: Tue, 8 Mar 2016 14:28:05 -0500 Subject: [Neuroimaging] [nipype] Developers roundup In-Reply-To: <20160301195026.GL7904@onerussian.com> References: <20160301195026.GL7904@onerussian.com> Message-ID: just a reminder that this is today at 2.30 EST, 11.30PST agenda here: https://docs.google.com/document/d/1M_eaD1EoDdIc_HQUVIv9VmjDDouQ6enPWipGnSOLm9g/edit cheers, satra On Tue, Mar 1, 2016 at 2:50 PM, Yaroslav Halchenko wrote: > > It looks like there's a good agreement to have this meeting on > Tuesday, > > March 8th at 11.30am. > > Please feel free to join us at this hangout: > > https://hangouts.google.com/hangouts/_/oscaresteban.es/nipypedevs > > https://hangouts.google.com/hangouts/_/oscaresteban.es/nipypedevs > > what about creating a google calendar (e.g. nipype-dev), so we could all > just add it in, > and then for events add "video call" so URLs will be embedded with the > event entry? > > -- > Yaroslav O. Halchenko > Center for Open Neuroscience http://centerforopenneuroscience.org > Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 > Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 > WWW: http://www.linkedin.com/in/yarik > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > -------------- next part -------------- An HTML attachment was scrubbed... URL: From krzysztof.gorgolewski at gmail.com Thu Mar 10 19:32:41 2016 From: krzysztof.gorgolewski at gmail.com (Chris Filo Gorgolewski) Date: Thu, 10 Mar 2016 16:32:41 -0800 Subject: [Neuroimaging] indexed access to gziped files Message-ID: Hi, check this out: https://github.com/pauldmccarthy/indexed_gzip/ It could be incorporated in nibabel to provide memory mapped access to .nii.gz files! Best, Chris -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dagutman at gmail.com Thu Mar 10 19:37:05 2016 From: dagutman at gmail.com (David Gutman) Date: Fri, 11 Mar 2016 00:37:05 +0000 Subject: [Neuroimaging] indexed access to gziped files In-Reply-To: References: Message-ID: Chris that's a great find... I hate storing .NII files ... On Thu, Mar 10, 2016 at 7:33 PM Chris Filo Gorgolewski < krzysztof.gorgolewski at gmail.com> wrote: > Hi, > check this out: https://github.com/pauldmccarthy/indexed_gzip/ > > It could be incorporated in nibabel to provide memory mapped access to > .nii.gz files! > > Best, > Chris > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pellman.john at gmail.com Fri Mar 11 00:22:12 2016 From: pellman.john at gmail.com (John Pellman) Date: Fri, 11 Mar 2016 00:22:12 -0500 Subject: [Neuroimaging] indexed access to gziped files In-Reply-To: References: Message-ID: Only would work if you were using Python 3 though, no? 2016-03-10 19:32 GMT-05:00 Chris Filo Gorgolewski < krzysztof.gorgolewski at gmail.com>: > Hi, > check this out: https://github.com/pauldmccarthy/indexed_gzip/ > > It could be incorporated in nibabel to provide memory mapped access to > .nii.gz files! > > Best, > Chris > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.varoquaux at normalesup.org Fri Mar 11 01:33:12 2016 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Fri, 11 Mar 2016 07:33:12 +0100 Subject: [Neuroimaging] indexed access to gziped files In-Reply-To: References: Message-ID: <20160311063312.GF3792063@phare.normalesup.org> Indeed, Paul did that at the brain hack. It's a great initiative. Two caveats. 
First it needs compiled code (which means that it cannot just be copied in nibabel). Second, it's quite slow and cannot be made faster without changing the file format. One of the problems is that gzip isn't a good compression format at all for these purposes. The people behind the use of nii.gz didn't have such things in mind. It would be great to make sure that the next iteration is a bit more thought through. Maybe it's just a question of using a different compressor than gzip. Gaël On Thu, Mar 10, 2016 at 04:32:41PM -0800, Chris Filo Gorgolewski wrote: > check this out: https://github.com/pauldmccarthy/indexed_gzip/ > It could be incorporated in nibabel to provide memory mapped access to .nii.gz > files! > Best, > Chris > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging -- Gael Varoquaux Researcher, INRIA Parietal NeuroSpin/CEA Saclay, Bat 145, 91191 Gif-sur-Yvette France Phone: ++ 33-1-69-08-79-68 http://gael-varoquaux.info http://twitter.com/GaelVaroquaux From matthew.brett at gmail.com Fri Mar 11 01:59:15 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Thu, 10 Mar 2016 22:59:15 -0800 Subject: [Neuroimaging] indexed access to gziped files In-Reply-To: <20160311063312.GF3792063@phare.normalesup.org> References: <20160311063312.GF3792063@phare.normalesup.org> Message-ID: On Thu, Mar 10, 2016 at 10:33 PM, Gael Varoquaux wrote: > Indeed, Paul did that at the brain hack. It's a great initiative. > > Two caveats. First it needs compiled code (which means that it cannot > just be copied in nibabel). No, but it could be an optional package used for gzip files if importable.
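Matthew's "optional package if importable" pattern, sketched. The `IndexedGzipFile` name comes from Paul's package; whether its constructor accepts a plain path like this is an assumption, so treat the fast branch as illustrative:

```python
import gzip

try:
    import indexed_gzip

    def open_gz(path):
        # Fast random access when the optional compiled package is present.
        return indexed_gzip.IndexedGzipFile(path)
except ImportError:
    def open_gz(path):
        # Plain sequential gzip reader from the standard library.
        return gzip.GzipFile(path)
```

Callers get the same file-like interface either way; only seek performance differs, which is exactly what makes the dependency safe to keep optional.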
Cheers, Matthew From krzysztof.gorgolewski at gmail.com Fri Mar 11 03:11:09 2016 From: krzysztof.gorgolewski at gmail.com (Chris Filo Gorgolewski) Date: Fri, 11 Mar 2016 00:11:09 -0800 Subject: [Neuroimaging] indexed access to gziped files In-Reply-To: References: <20160311063312.GF3792063@phare.normalesup.org> Message-ID: On Mar 10, 2016 11:00 PM, "Matthew Brett" wrote: > > On Thu, Mar 10, 2016 at 10:33 PM, Gael Varoquaux > wrote: > > Indeed, Paul did that at the brain hack. It's a great initiative. > > > > Two caveats. First it needs compiled code (which means that it cannot > > just be copied in nibabel). > > No, but it could be an optional package used for gzip files if importable. +1! It would be a great optional feature. > > Cheers, > > Matthew > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging -------------- next part -------------- An HTML attachment was scrubbed... URL: From stjeansam at gmail.com Fri Mar 11 03:39:39 2016 From: stjeansam at gmail.com (Samuel St-Jean) Date: Fri, 11 Mar 2016 09:39:39 +0100 Subject: [Neuroimaging] indexed access to gziped files In-Reply-To: References: <20160311063312.GF3792063@phare.normalesup.org> Message-ID: If you ever go for all memmaped files, please also provide an easy way to return plain numpy arrays (like the unload arg for nibabel.load). Memmaps don't support all the kwargs in subfunctions, which can lead to weird broadcasting behavior. 2016-03-11 9:11 GMT+01:00 Chris Filo Gorgolewski < krzysztof.gorgolewski at gmail.com>: > > On Mar 10, 2016 11:00 PM, "Matthew Brett" wrote: > > > > On Thu, Mar 10, 2016 at 10:33 PM, Gael Varoquaux > > wrote: > > > Indeed, Paul did that at the brain hack. It's a great initiative. > > > > > > Two caveats. First it needs compiled code (which means that it cannot > > > just be copied in nibabel). 
> > > > No, but it could be an optional package used for gzip files if > importable. > > +1! It would be a great optional feature. > > > > Cheers, > > > > Matthew > > _______________________________________________ > > Neuroimaging mailing list > > Neuroimaging at python.org > > https://mail.python.org/mailman/listinfo/neuroimaging > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arman.eshaghi at yahoo.com Fri Mar 11 06:06:55 2016 From: arman.eshaghi at yahoo.com (Arman Eshaghi) Date: Fri, 11 Mar 2016 11:06:55 +0000 (UTC) Subject: [Neuroimaging] [nipype] Developers roundup In-Reply-To: References: Message-ID: <1570282878.8404947.1457694415612.JavaMail.yahoo@mail.yahoo.com> Is there any chance of recorded video? On Tuesday, March 8, 2016, 7:29 PM, Satrajit Ghosh wrote: just a reminder that this is today at 2.30 EST, 11.30 PST agenda here: https://docs.google.com/document/d/1M_eaD1EoDdIc_HQUVIv9VmjDDouQ6enPWipGnSOLm9g/edit cheers, satra On Tue, Mar 1, 2016 at 2:50 PM, Yaroslav Halchenko wrote: > It looks like there's a good agreement to have this meeting on Tuesday, > March 8th at 11.30am. > Please feel free to join us at this hangout: > https://hangouts.google.com/hangouts/_/oscaresteban.es/nipypedevs what about creating a google calendar (e.g. nipype-dev), so we could all just add it in, and then for events add "video call" so URLs will be embedded with the event entry? -- Yaroslav O. Halchenko Center for Open Neuroscience http://centerforopenneuroscience.org Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 WWW:
http://www.linkedin.com/in/yarik _______________________________________________ Neuroimaging mailing list Neuroimaging at python.org https://mail.python.org/mailman/listinfo/neuroimaging -------------- next part -------------- An HTML attachment was scrubbed... URL: From satra at mit.edu Fri Mar 11 09:17:49 2016 From: satra at mit.edu (Satrajit Ghosh) Date: Fri, 11 Mar 2016 09:17:49 -0500 Subject: [Neuroimaging] [nipype] Developers roundup In-Reply-To: <1570282878.8404947.1457694415612.JavaMail.yahoo@mail.yahoo.com> References: <1570282878.8404947.1457694415612.JavaMail.yahoo@mail.yahoo.com> Message-ID: we did not - we will do the hangouts on air as of the next meeting, such that it is recorded. cheers, satra On Fri, Mar 11, 2016 at 6:06 AM, Arman Eshaghi wrote: > Is there any chance of recorded video? > > On Tuesday, March 8, 2016, 7:29 PM, Satrajit Ghosh wrote: > > just a reminder that this is today at 2.30 EST, 11.30 PST > > agenda here: > > https://docs.google.com/document/d/1M_eaD1EoDdIc_HQUVIv9VmjDDouQ6enPWipGnSOLm9g/edit > > cheers, > > satra > > On Tue, Mar 1, 2016 at 2:50 PM, Yaroslav Halchenko > wrote: > >> > It looks like there's a good agreement to have this meeting on >> Tuesday, >> > March 8th at 11.30am. >> > Please feel free to join us at this hangout: >> > https://hangouts.google.com/hangouts/_/oscaresteban.es/nipypedevs >> >> what about creating a google calendar (e.g. nipype-dev), so we could all >> just add it in, >> and then for events add "video call" so URLs will be embedded with the >> event entry? >> >> -- >> Yaroslav O.
Halchenko >> Center for Open Neuroscience http://centerforopenneuroscience.org >> Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 >> Phone: +1 (603) 646-9834 >> Fax: +1 (603) 646-1419 >> WWW: http://www.linkedin.com/in/yarik >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From moloney at ohsu.edu Fri Mar 11 12:46:19 2016 From: moloney at ohsu.edu (Brendan Moloney) Date: Fri, 11 Mar 2016 17:46:19 +0000 Subject: [Neuroimaging] indexed access to gziped files In-Reply-To: References: <20160311063312.GF3792063@phare.normalesup.org> Message-ID: <5F6A858FD00E5F4A82E3206D2D854EF8A26E6C2E@EXMB10.ohsu.edu> I don't see any mention of memmaps on the GitHub page. It seems like the code is just storing extra bits of info for different "seek points" that allow you to access random parts of the file without decompressing everything before it. So I think this could help with doing partial loading of the dataobj even without memmaps. Unless I am missing something... - Brendan ________________________________ From: Neuroimaging [neuroimaging-bounces+moloney=ohsu.edu at python.org] on behalf of Samuel St-Jean [stjeansam at gmail.com] Sent: Friday, March 11, 2016 12:39 AM To: Neuroimaging analysis in Python Subject: Re: [Neuroimaging] indexed access to gziped files If you ever go for all memmaped files, please also provide an easy way to return plain numpy arrays (like the unload arg for nibabel.load). Memmaps don't support all the kwargs in subfunctions, which can lead to weird broadcasting behavior.
2016-03-11 9:11 GMT+01:00 Chris Filo Gorgolewski < krzysztof.gorgolewski at gmail.com>: > [...] _______________________________________________ Neuroimaging mailing list Neuroimaging at python.org https://mail.python.org/mailman/listinfo/neuroimaging -------------- next part -------------- An HTML attachment was scrubbed... URL: From frr28 at cam.ac.uk Fri Mar 11 11:12:22 2016 From: frr28 at cam.ac.uk (Franziska R Richter) Date: Fri, 11 Mar 2016 16:12:22 +0000 Subject: [Neuroimaging] [PySurfer] smoothing when displaying conjunctions? Message-ID: Hello all, I am fairly new to PySurfer (and Python in general), so I am still trying to find my way around. I have followed this example to create a conjunction of 3 contrasts: https://pysurfer.github.io/examples/plot_fmri_conjunction.html. So far so good - this worked. However, my activation maps are a lot more 'fuzzy' and not as nicely smoothed as in the example. Does anyone know how to smooth the activations? I assume the smoothing has to happen before the NIfTI files are loaded into PySurfer. I am using SPM for my analysis, in case this matters. Any hints as to how to achieve this, or links to examples, would be really appreciated. Thanks Franka -------------- next part -------------- An HTML attachment was scrubbed...
URL: From a.manutej at gmail.com Fri Mar 11 14:36:46 2016 From: a.manutej at gmail.com (Manu Tej Sharma) Date: Sat, 12 Mar 2016 01:06:46 +0530 Subject: [Neuroimaging] [dipy] CHARMED model Message-ID: Let S be the spin echo magnitude and S0 be the signal in the absence of the applied magnetic diffusion gradient. Is E, the net measured signal attenuation, equal to S/S0? -------------- next part -------------- An HTML attachment was scrubbed... URL: From pauldmccarthy at gmail.com Fri Mar 11 17:20:11 2016 From: pauldmccarthy at gmail.com (paul mccarthy) Date: Fri, 11 Mar 2016 22:20:11 +0000 Subject: [Neuroimaging] indexed access to gziped files In-Reply-To: <5F6A858FD00E5F4A82E3206D2D854EF8A26E6C2E@EXMB10.ohsu.edu> References: <20160311063312.GF3792063@phare.normalesup.org> <5F6A858FD00E5F4A82E3206D2D854EF8A26E6C2E@EXMB10.ohsu.edu> Message-ID: Hi all, Sorry for the delay in my joining the conversation. Brendan is correct - this is not a memmap solution. The approach that I've implemented (which I have to emphasise is not my idea - I've just got it working in Python) just improves random seek/read time of the uncompressed data stream, while keeping the compressed data on disk. This is achieved by building an index of mappings between locations in the compressed and uncompressed data streams. The index can be fully built when the file is initially opened, or can be built on-demand as the file handle is used. So once an index is built, the IndexedGzipFile class can be used to read in parts of the compressed data, without having to decompress the entire file every time you seek to a new location. This is what is typically required when reading GZIP files, and is a fundamental limitation in the GZIP format. As Gael (and others) pointed out, using a different compression format would remove the need for silly indexing techniques like the one that I have implemented. 
But I figured that having something like indexed_gzip would make life a bit easier for those of us who have to work with large amounts of existing .nii.gz files, at least until a new file format is adopted. Going back to the topic of memory-mapping - I'm pretty sure that it is completely impossible to achieve true memory-mapping of compressed data, unless you're working at the OS kernel level. This means that it is not possible to wrap compressed data with a numpy array, because numpy arrays require access to a raw chunk of memory (which itself could be memory mapped, but must provide access to the raw array data). Gael pointed this out to me during the Brainhack, and I discovered it myself about an hour later :) In order to use indexed_gzip in nibabel, the best that we would be able to achieve is an ArrayProxy-like wrapper. For my requirements (visualisation), this is perfectly acceptable. All I want to do is to pull out arbitrary 3D volumes, and/or to pull out the time courses from individual voxels, from arbitrarily large 4D data sets. But, while experimenting with patching nibabel to use my IndexedGzipFile class (instead of the GzipFile or nibabel.openers.BufferedGzipFile classes), I discovered that instances of the nibabel Nifti1Image class do not seem to keep file handles open once they have been created - they appear to re-open the file (and re-create an IndexedGzipFile instance) every time the image data is accessed through the ArrayProxy dataobj attribute. So some discussion would be needed regarding how we could go about allowing nibabel to use indexed_gzip. Do we modify nibabel? Or can we build some sort of an index cache which allows IndexedGzipFile instances to be created/destroyed, but existing index mappings (without having to re-create the index every time a new IndexedGzipFile is created)? Honestly, with the current state of indexed_gzip, we're probably still a way off before there's even any point in having such a discussion. 
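For the curious, the core trick Paul describes — recording seek points and inflating from the nearest preceding point instead of from byte 0 — can be sketched in pure-stdlib Python using raw deflate streams with full-flush boundaries. This is only an illustration of the idea, not the indexed_gzip implementation (which builds its index over ordinary, unmodified gzip files); the helper names are hypothetical:

```python
import zlib

def compress_with_index(data, spacing=1024):
    """Compress `data` as a raw deflate stream with a full flush every
    `spacing` bytes, recording (compressed offset, uncompressed offset)
    seek points at each flush boundary."""
    comp = zlib.compressobj(9, zlib.DEFLATED, -15)  # -15: raw deflate, no header
    out, index = bytearray(), [(0, 0)]              # stream start is a seek point
    for i in range(0, len(data), spacing):
        out += comp.compress(data[i:i + spacing])
        out += comp.flush(zlib.Z_FULL_FLUSH)        # byte-align output, reset state
        index.append((len(out), min(i + spacing, len(data))))
    out += comp.flush(zlib.Z_FINISH)
    return bytes(out), index

def read_range(blob, index, offset, length):
    """Read `length` uncompressed bytes at `offset`, inflating only from
    the nearest preceding seek point rather than from the start."""
    coff, uoff = max(p for p in index if p[1] <= offset)
    inf = zlib.decompressobj(-15)                   # fresh inflater at the seek point
    skip = offset - uoff
    chunk = inf.decompress(blob[coff:], skip + length)
    return chunk[skip:skip + length]

data = bytes(range(256)) * 64                       # 16 KiB of sample data
blob, index = compress_with_index(data, spacing=1000)
assert read_range(blob, index, 5000, 100) == data[5000:5100]
```

Because `Z_FULL_FLUSH` byte-aligns the output and resets the compressor's window, a fresh inflater can start at any recorded seek point without any earlier context.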
But I'm keen to pursue this if the Nibabel guys are, as it would make my life easier if I could keep using the nibabel interface, but get the speed improvements offered by indexed_gzip. As for Python 2 vs 3 support, I'm not an expert in writing Python extensions - this is the first non-trivial extension that I've written. So I'm not sure of what would be required to write an extension which would work under both Python 2 and 3. If anybody is willing to help out, I would really appreciate it! Thanks, and apologies for the rant-ish nature of this email! Paul On 11 March 2016 at 17:46, Brendan Moloney wrote: > I don't see any mention of memmaps on the github page. It seems like the > code is just storing extra bits of info for different "seek points" that > allow you to access random parts of the file without decompressing > everything before it. So I think this could help with doing partial > loading of the dataobj even without memmaps. Unless I am missing > something... > > - Brendan > > > ------------------------------ > *From:* Neuroimaging [neuroimaging-bounces+moloney=ohsu.edu at python.org] > on behalf of Samuel St-Jean [stjeansam at gmail.com] > *Sent:* Friday, March 11, 2016 12:39 AM > *To:* Neuroimaging analysis in Python > *Subject:* Re: [Neuroimaging] indexed access to gziped files > > If you ever go for all memmaped files, please also provide an easy way to > return plain numpy arrays (like the unload arg for nibabel.load). Memmaps > don't support all the kwargs in subfunctions, which can lead to weird > broadcasting behavior. > > 2016-03-11 9:11 GMT+01:00 Chris Filo Gorgolewski < > krzysztof.gorgolewski at gmail.com>: > >> >> On Mar 10, 2016 11:00 PM, "Matthew Brett" >> wrote: >> > >> > On Thu, Mar 10, 2016 at 10:33 PM, Gael Varoquaux >> > wrote: >> > > Indeed, Paul did that at the brain hack. It's a great initiative. >> > > >> > > Two caveats. First it needs compiled code (which means that it cannot >> > > just be copied in nibabel). 
>> > >> > No, but it could be an optional package used for gzip files if >> importable. >> +1! It would be a great optional feature. >> > >> > Cheers, >> > >> > Matthew >> > _______________________________________________ >> > Neuroimaging mailing list >> > Neuroimaging at python.org >> > https://mail.python.org/mailman/listinfo/neuroimaging >> >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> >> > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Fri Mar 11 17:30:25 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Fri, 11 Mar 2016 14:30:25 -0800 Subject: [Neuroimaging] indexed access to gziped files In-Reply-To: References: <20160311063312.GF3792063@phare.normalesup.org> <5F6A858FD00E5F4A82E3206D2D854EF8A26E6C2E@EXMB10.ohsu.edu> Message-ID: Hi, On Fri, Mar 11, 2016 at 2:20 PM, paul mccarthy wrote: > Hi all, > > Sorry for the delay in my joining the conversation. > > Brendan is correct - this is not a memmap solution. The approach that I've > implemented (which I have to emphasise is not my idea - I've just got it > working in Python) just improves random seek/read time of the uncompressed > data stream, while keeping the compressed data on disk. This is achieved by > building an index of mappings between locations in the compressed and > uncompressed data streams. The index can be fully built when the file is > initially opened, or can be built on-demand as the file handle is used. > > So once an index is built, the IndexedGzipFile class can be used to read in > parts of the compressed data, without having to decompress the entire file > every time you seek to a new location. 
This is what is typically required > when reading GZIP files, and is a fundamental limitation in the GZIP format. > > As Gael (and others) pointed out, using a different compression format would > remove the need for silly indexing techniques like the one that I have > implemented. But I figured that having something like indexed_gzip would > make life a bit easier for those of us who have to work with large amounts > of existing .nii.gz files, at least until a new file format is adopted. > > Going back to the topic of memory-mapping - I'm pretty sure that it is > completely impossible to achieve true memory-mapping of compressed data, > unless you're working at the OS kernel level. This means that it is not > possible to wrap compressed data with a numpy array, because numpy arrays > require access to a raw chunk of memory (which itself could be memory > mapped, but must provide access to the raw array data). Gael pointed this > out to me during the Brainhack, and I discovered it myself about an hour > later :) > > In order to use indexed_gzip in nibabel, the best that we would be able to > achieve is an ArrayProxy-like wrapper. For my requirements (visualisation), > this is perfectly acceptable. All I want to do is to pull out arbitrary 3D > volumes, and/or to pull out the time courses from individual voxels, from > arbitrarily large 4D data sets. > > But, while experimenting with patching nibabel to use my IndexedGzipFile > class (instead of the GzipFile or nibabel.openers.BufferedGzipFile classes), > I discovered that instances of the nibabel Nifti1Image class do not seem to > keep file handles open once they have been created - they appear to re-open > the file (and re-create an IndexedGzipFile instance) every time the image > data is accessed through the ArrayProxy dataobj attribute. > > So some discussion would be needed regarding how we could go about allowing > nibabel to use indexed_gzip. Do we modify nibabel? 
Or can we build some sort > of an index cache which allows IndexedGzipFile instances to be > created/destroyed, but existing index mappings (without having to re-create > the index every time a new IndexedGzipFile is created)? > > Honestly, with the current state of indexed_gzip, we're probably still a way > off before there's even any point in having such a discussion. But I'm keen > to pursue this if the Nibabel guys are, as it would make my life easier if I > could keep using the nibabel interface, but get the speed improvements > offered by indexed_gzip. > > As for Python 2 vs 3 support, I'm not an expert in writing Python extensions > - this is the first non-trivial extension that I've written. So I'm not sure > of what would be required to write an extension which would work under both > Python 2 and 3. If anybody is willing to help out, I would really appreciate > it! > > Thanks, and apologies for the rant-ish nature of this email! Please don't worry about rantishness, I didn't detect it myself :) Yes, nibabel drops the file handles. It could cache them, but it's fairly easy to hit a situation where you're opening hundreds or thousands of little image files, and that exhaust filehandles. In fact Gael hit this problem a few years ago, we had to add a test to make sure we were dropping them. This isn't so if you create an image via the fileobject itself. I can also imagine a non-default flag to the image loading routine to preserve the file objects, or a default that keeps compressed file objects while dropping uncompressed ones. Did you consider Cython for your bindings? It's very good for cross-Python compatibility, and readability, if the wrapping problem is reasonably simple. 
Cheers, Matthew From njs at pobox.com Fri Mar 11 19:55:32 2016 From: njs at pobox.com (Nathaniel Smith) Date: Fri, 11 Mar 2016 16:55:32 -0800 Subject: [Neuroimaging] indexed access to gziped files In-Reply-To: References: <20160311063312.GF3792063@phare.normalesup.org> <5F6A858FD00E5F4A82E3206D2D854EF8A26E6C2E@EXMB10.ohsu.edu> Message-ID: On Fri, Mar 11, 2016 at 2:20 PM, paul mccarthy wrote: > Hi all, > > Sorry for the delay in my joining the conversation. > > Brendan is correct - this is not a memmap solution. The approach that I've > implemented (which I have to emphasise is not my idea - I've just got it > working in Python) just improves random seek/read time of the uncompressed > data stream, while keeping the compressed data on disk. This is achieved by > building an index of mappings between locations in the compressed and > uncompressed data streams. The index can be fully built when the file is > initially opened, or can be built on-demand as the file handle is used. > > So once an index is built, the IndexedGzipFile class can be used to read in > parts of the compressed data, without having to decompress the entire file > every time you seek to a new location. This is what is typically required > when reading GZIP files, and is a fundamental limitation in the GZIP format. > > As Gael (and others) pointed out, using a different compression format would > remove the need for silly indexing techniques like the one that I have > implemented. But I figured that having something like indexed_gzip would > make life a bit easier for those of us who have to work with large amounts > of existing .nii.gz files, at least until a new file format is adopted. It's possible to create .gz files that allow seeking but are still compliant with all the usual standards (e.g. regular gunzip still works): http://blastedbio.blogspot.com/2011/11/bgzf-blocked-bigger-better-gzip.html It sounds like the biopython folks are on top of this... 
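The blocked-gzip idea behind BGZF can be demonstrated with nothing but the standard library: write each block as an independent gzip member, keep an offset table, and decompress single members on demand; the concatenation remains a valid multi-member .gz file that plain gunzip accepts. A hedged sketch (helper names are hypothetical, and real BGZF additionally stores block sizes inside the gzip headers):

```python
import gzip

def write_blocked_gzip(data, block=4096):
    """Compress `data` as a series of independent gzip members.
    The concatenated result is still a valid .gz file (gunzip and
    Python's gzip module both handle multi-member streams), but any
    single block can be decompressed without touching the others."""
    out, index, uoff = bytearray(), [], 0
    for i in range(0, len(data), block):
        chunk = data[i:i + block]
        member = gzip.compress(chunk)
        index.append((len(out), len(member), uoff))  # comp. offset, comp. length, uncomp. offset
        out += member
        uoff += len(chunk)
    return bytes(out), index

def read_block(blob, index, offset):
    """Decompress only the member containing uncompressed `offset`;
    returns (block bytes, uncompressed offset of the block start)."""
    coff, clen, uoff = max(e for e in index if e[2] <= offset)
    return gzip.decompress(blob[coff:coff + clen]), uoff

data = bytes(range(256)) * 64             # 16 KiB of sample data
blob, index = write_blocked_gzip(data)
assert gzip.decompress(blob) == data      # whole stream still decompresses normally
block, start = read_block(blob, index, 5000)
assert block == data[4096:8192] and start == 4096
```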
The excellent xz tool suite has similar features: http://blastedbio.blogspot.com/2013/04/random-access-to-blocked-xz-format-bxzf.html > Going back to the topic of memory-mapping - I'm pretty sure that it is > completely impossible to achieve true memory-mapping of compressed data, > unless you're working at the OS kernel level. 100% pedantic and impractical correction: technically it is totally possible; the Dato folks did it for their numpy/SArray wrappers. The solution is to implement your own VM mapping system by registering your page fault routine as a SIGSEGV handler, and have it call mmap to manipulate the page tables. (If the previous sentence doesn't mean anything to you, then that's probably a good thing ...there's a difference between whether you *can* do something and whether you *should* ;-).) (Also, the result is unlikely to be particularly fast, and you still need some way to actually do the fast random access to the compressed disk file.) -n -- Nathaniel J. Smith -- https://vorpus.org From pauldmccarthy at gmail.com Mon Mar 14 06:51:31 2016 From: pauldmccarthy at gmail.com (paul mccarthy) Date: Mon, 14 Mar 2016 10:51:31 +0000 Subject: [Neuroimaging] indexed access to gziped files In-Reply-To: References: <20160311063312.GF3792063@phare.normalesup.org> <5F6A858FD00E5F4A82E3206D2D854EF8A26E6C2E@EXMB10.ohsu.edu> Message-ID: Hi all, This isn't so if you create an image via the fileobject itself. Matthew, is this currently possible in nibabel? I had a quick play, and a poke through the code, but I couldn't get anything to work - it looks like there is no "from_fileobj" method defined in the Nifti1Image class (or any of its bases). If this is (or will be possible), then the problem is solved, isn't it? Users of nibabel can just create IndexedGzipFile instances themselves, and pass the handle to nibabel. No need for nibabel to be dependent upon indexed_gzip - the choice would be up to the caller. Or am I missing something here? 
It's possible to create .gz files that allow seeking but are still > compliant with all the usual standards (e.g. regular gunzip still > works): Nathaniel, this is definitely a possibility - I did read through those blog posts before going down the indexed_gzip route. But I wanted a solution for existing data, which is already in unseekable .gz format, and not burden the owners/users/researchers with having to re-encode all of their image data. Having said this, I think it would be a good thing if all of our code which writes out nifti files would use a better compression scheme, be it seekable gzip, xz, bz2, or whatever. The solution is to implement your own VM mapping system by registering > your page fault routine as a SIGSEGV handler, and have it call mmap to > manipulate the page tables. A valid point, but I think I'll leave this one to you! Cheers, Paul On 12 March 2016 at 00:55, Nathaniel Smith wrote: > On Fri, Mar 11, 2016 at 2:20 PM, paul mccarthy > wrote: > > Hi all, > > > > Sorry for the delay in my joining the conversation. > > > > Brendan is correct - this is not a memmap solution. The approach that > I've > > implemented (which I have to emphasise is not my idea - I've just got it > > working in Python) just improves random seek/read time of the > uncompressed > > data stream, while keeping the compressed data on disk. This is achieved > by > > building an index of mappings between locations in the compressed and > > uncompressed data streams. The index can be fully built when the file is > > initially opened, or can be built on-demand as the file handle is used. > > > > So once an index is built, the IndexedGzipFile class can be used to read > in > > parts of the compressed data, without having to decompress the entire > file > > every time you seek to a new location. This is what is typically required > > when reading GZIP files, and is a fundamental limitation in the GZIP > format. 
> > > > As Gael (and others) pointed out, using a different compression format > would > > remove the need for silly indexing techniques like the one that I have > > implemented. But I figured that having something like indexed_gzip would > > make life a bit easier for those of us who have to work with large > amounts > > of existing .nii.gz files, at least until a new file format is adopted. > > It's possible to create .gz files that allow seeking but are still > compliant with all the usual standards (e.g. regular gunzip still > works): > > > http://blastedbio.blogspot.com/2011/11/bgzf-blocked-bigger-better-gzip.html > > It sounds likes the biopython folks are on top of this... > > The excellent xz tool suite has similar features: > > > http://blastedbio.blogspot.com/2013/04/random-access-to-blocked-xz-format-bxzf.html > > > Going back to the topic of memory-mapping - I'm pretty sure that it is > > completely impossible to achieve true memory-mapping of compressed data, > > unless you're working at the OS kernel level. > > 100% pedantic and impractical correction: technically it is totally > possible; the Dato folks did it for their numpy/SArray wrappers. The > solution is to implement your own VM mapping system by registering > your page fault routine as a SIGSEGV handler, and have it call mmap to > manipulate the page tables. (If the previous sentence doesn't mean > anything to you, then that's probably a good thing ...there's a > difference between whether you *can* do something and whether you > *should* ;-).) > > (Also, the result is unlikely to be particularly fast, and you still > need some way to actually do the fast random access to the compressed > disk file.) > > -n > > -- > Nathaniel J. Smith -- https://vorpus.org > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From matthew.brett at gmail.com Mon Mar 14 13:50:10 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 14 Mar 2016 10:50:10 -0700 Subject: [Neuroimaging] indexed access to gziped files In-Reply-To: References: <20160311063312.GF3792063@phare.normalesup.org> <5F6A858FD00E5F4A82E3206D2D854EF8A26E6C2E@EXMB10.ohsu.edu> Message-ID: Hi, On Mon, Mar 14, 2016 at 3:51 AM, paul mccarthy wrote: > Hi all, > > >> This isn't so if you create an image via the fileobject itself. > > > Matthew, is this currently possible in nibabel? I had a quick play, and poke > through the code, but I couldn't get anything to work - it looks like there > is no "from_fileobj" method defined in the Nifti1Image class (or any of its > bases). There isn't a `from_fileobj` because some images need more than one file (like nifti .img / .hdr pairs). It might be worth adding `from_fileobj` to image types that do need only one file (like .nii files) - I can't think of any big problems with that offhand. At the moment, you have to do this dance: In [1]: import nibabel as nib In [2]: fobj = open('my_mri.nii', 'rb') In [3]: fm = nib.Nifti1Image.make_file_map() In [6]: fm['image'].fileobj = fobj In [7]: img = nib.Nifti1Image.from_file_map(fm) In [8]: img.shape Out[8]: (2, 3, 4, 4) > If this is (or will be possible), then the problem is solved, isn't > it? > Users of nibabel can just create IndexedGzipFile instances themselves, > and > pass the handle to nibabel. Or am I missing > something here? Sure - that could work, and be easier with a `from_fileobj` method. But it would involve the user having to use some boilerplate rather than having it happen automatically via `nib.load`. Did you have a chance to look into Cython for the wrapping problem? 
Cheers, Matthew From jrudascas at gmail.com Mon Mar 14 15:37:50 2016 From: jrudascas at gmail.com (Jorge Rudas) Date: Mon, 14 Mar 2016 14:37:50 -0500 Subject: [Neuroimaging] Convert Atlas from 1mm to 2mm Message-ID: Hi everybody, I have an atlas at 1mm spatial resolution, but I need this atlas at 2mm spatial resolution. How can I do this? bye *Jorge Rudas* -------------- next part -------------- An HTML attachment was scrubbed... URL: From garyfallidis at gmail.com Mon Mar 14 15:41:36 2016 From: garyfallidis at gmail.com (Eleftherios Garyfallidis) Date: Mon, 14 Mar 2016 15:41:36 -0400 Subject: [Neuroimaging] Convert Atlas from 1mm to 2mm In-Reply-To: References: Message-ID: Here is a way. http://nipy.org/dipy/examples_built/reslice_datasets.html#example-reslice-datasets On Mon, Mar 14, 2016 at 3:37 PM, Jorge Rudas wrote: > Hi everybody, > > I have an atlas at 1mm spatial resolution, but I need this atlas at 2mm > spatial resolution. How can I do this? > > bye > > *Jorge Rudas* > > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.varoquaux at normalesup.org Mon Mar 14 16:17:00 2016 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 14 Mar 2016 21:17:00 +0100 Subject: [Neuroimaging] Convert Atlas from 1mm to 2mm In-Reply-To: References: Message-ID: <20160314201700.GI2005696@phare.normalesup.org> On Mon, Mar 14, 2016 at 03:41:36PM -0400, Eleftherios Garyfallidis wrote: > Here is a way. > http://nipy.org/dipy/examples_built/reslice_datasets.html# > example-reslice-datasets Here's another: http://nilearn.github.io/modules/generated/nilearn.image.resample_img.html (note that you can put "target_affine=np.diag([2, 2, 2])" to target 2mm resolution, and let nilearn figure the rest out.) 
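For the simple 1 mm to 2 mm case, and especially for label images where interpolation must not mix label values, plain subsampling with an adjusted affine also works (a hypothetical NumPy-only helper; for general resampling use the dipy/nilearn routines above):

```python
import numpy as np

def downsample_by_two(data, affine):
    """Take every second voxel along each axis and double the voxel size
    in the affine (1 mm -> 2 mm).  Nearest-neighbour-style subsampling,
    which is what you want for label/atlas images, since interpolating
    would mix label values."""
    sub = data[::2, ::2, ::2]
    new_affine = np.array(affine, dtype=float)
    new_affine[:3, :3] *= 2.0   # voxel edges double; translation is unchanged
    return sub, new_affine

# mirrors the shapes of a 1 mm MNI-sized volume
affine = np.eye(4)
affine[:3, 3] = [-98.0, -134.0, -72.0]
data = np.zeros((197, 233, 189), dtype=np.int16)
sub, aff2 = downsample_by_two(data, affine)
assert sub.shape == (99, 117, 95)
assert aff2[0, 0] == 2.0 and aff2[0, 3] == -98.0
```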
G > On Mon, Mar 14, 2016 at 3:37 PM, Jorge Rudas wrote: > Hi everybody, > I have an atlas at 1mm spatial resolution, but I need this atlas at 2mm > spatial resolution. How can I do this? > bye > Jorge Rudas > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging -- Gael Varoquaux Researcher, INRIA Parietal NeuroSpin/CEA Saclay , Bat 145, 91191 Gif-sur-Yvette France Phone: ++ 33-1-69-08-79-68 http://gael-varoquaux.info http://twitter.com/GaelVaroquaux From stjeansam at gmail.com Mon Mar 14 16:21:07 2016 From: stjeansam at gmail.com (Samuel St-Jean) Date: Mon, 14 Mar 2016 21:21:07 +0100 Subject: [Neuroimaging] Convert Atlas from 1mm to 2mm In-Reply-To: <20160314201700.GI2005696@phare.normalesup.org> References: <20160314201700.GI2005696@phare.normalesup.org> Message-ID: <56E71D33.1090001@gmail.com> While we are at it, you can also consider upsampling your data to 1 mm to fit on the atlas as well, especially if this atlas contains labels for finer structures like deep gray matter, which are only a few voxels wide. In all cases, be sure they are registered/the affine can make both datasets correspond either at 1 mm or 2 mm. Le 2016-03-14 21:17, Gael Varoquaux a écrit : > On Mon, Mar 14, 2016 at 03:41:36PM -0400, Eleftherios Garyfallidis wrote: >> Here is a way. >> http://nipy.org/dipy/examples_built/reslice_datasets.html# >> example-reslice-datasets > Here's another: > http://nilearn.github.io/modules/generated/nilearn.image.resample_img.html > (note that you can put "target_affine=np.diag([2, 2, 2])" to target 2mm > resolution, and let nilearn figure the rest out.) 
> > G > > > >> On Mon, Mar 14, 2016 at 3:37 PM, Jorge Rudas wrote: >> Hi everbody >> I have a atlas in 1mm spatial resolution, but, i need this atlas in 2mm >> spatial resolution. How can i do this? >> bye >> Jorge Rudas > >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging > > > >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging > From arokem at gmail.com Mon Mar 14 16:35:59 2016 From: arokem at gmail.com (Ariel Rokem) Date: Mon, 14 Mar 2016 13:35:59 -0700 Subject: [Neuroimaging] Convert Atlas from 1mm to 2mm In-Reply-To: References: Message-ID: A few more ways to do this are tallied in this previous thread: https://mail.python.org/pipermail/neuroimaging/2015-December/000656.html On Mon, Mar 14, 2016 at 12:41 PM, Eleftherios Garyfallidis < garyfallidis at gmail.com> wrote: > Here is a way. > > > http://nipy.org/dipy/examples_built/reslice_datasets.html#example-reslice-datasets > > > > On Mon, Mar 14, 2016 at 3:37 PM, Jorge Rudas wrote: > >> Hi everbody >> >> I have a atlas in 1mm spatial resolution, but, i need this atlas in 2mm >> spatial resolution. How can i do this? >> >> bye >> >> *Jorge Rudas* >> >> >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> >> > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pauldmccarthy at gmail.com Mon Mar 14 17:12:21 2016 From: pauldmccarthy at gmail.com (paul mccarthy) Date: Mon, 14 Mar 2016 21:12:21 +0000 Subject: [Neuroimaging] indexed access to gziped files In-Reply-To: References: <20160311063312.GF3792063@phare.normalesup.org> <5F6A858FD00E5F4A82E3206D2D854EF8A26E6C2E@EXMB10.ohsu.edu> Message-ID: Hi Matthew, Thanks for clarifying the fileobj 'dance'! I had meant to ask you about Cython - it looks like a good option (and is recommended in the official docs - https://docs.python.org/3/howto/cporting.html), so I'll look into it. Perhaps the best way forward would be for me to drop the mailing list a line when I've got something in a more usable state. Cheers, Paul On 14 March 2016 at 17:50, Matthew Brett wrote: > Hi, > > On Mon, Mar 14, 2016 at 3:51 AM, paul mccarthy > wrote: > > Hi all, > > > > > >> This isn't so if you create an image via the fileobject itself. > > > > > > Matthew, is this currently possible in nibabel? I had a quick play, and > poke > > through the code, but I couldn't get anything to work - it looks like > there > > is no "from_fileobj" method defined in the Nifti1Image class (or any of > its > > bases). > > There isn't a `from_fileobj` because some images need more than one > file (like nifti .img / .hdr pairs). > > It might be worth adding `from_fileobj` to image types that do need > only one file (like .nii files) - I can't think of any big problems > with that offhand. > > At the moment, you have to do this dance: > > In [1]: import nibabel as nib > In [2]: fobj = open('my_mri.nii', 'rb') > In [3]: fm = nib.Nifti1Image.make_file_map() > In [6]: fm['image'].fileobj = fobj > In [7]: img = nib.Nifti1Image.from_file_map(fm) > In [8]: img.shape > Out[8]: (2, 3, 4, 4) > > > If this is (or will be possible), then the problem is solved, isn't > it? > > Users of nibabel can just create IndexedGzipFile instances themselves, > and > > pass the handle to nibabel. 
No need for nibabel to be dependent upon > > indexed_gzip - the choice would be up to the caller. Or am I missing > > something here? > > Sure - that could work, and be easier with a `from_fileobj` method. > But it would involve the user having to use some boilerplate rather > than having it happen automatically via `nib.load`. > > Did you have a chance to look into Cython for the wrapping problem? > > Cheers, > > Matthew > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Mon Mar 14 18:41:05 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 14 Mar 2016 15:41:05 -0700 Subject: [Neuroimaging] Convert Atlas from 1mm to 2mm In-Reply-To: References: Message-ID: On Mon, Mar 14, 2016 at 1:35 PM, Ariel Rokem wrote: > A few more ways to do this are tallied in this previous thread: > https://mail.python.org/pipermail/neuroimaging/2015-December/000656.html The way I would myself do this (because I usually have nipy installed) is to use nipy image slicing: In [1]: import nipy In [2]: img = nipy.load_image('/Users/mb312/data/mni_icbm152_nlin_asym_09a/mni_icbm152_t1_tal_nlin_asym_09a.nii') In [3]: img.shape Out[3]: (197, 233, 189) In [4]: img.affine Out[4]: array([[ 1., 0., 0., -98.], [ 0., 1., 0., -134.], [ 0., 0., 1., -72.], [ 0., 0., 0., 1.]]) In [5]: subsampled = img[::2, ::2, ::2] In [6]: subsampled.shape Out[6]: (99, 117, 95) In [7]: subsampled.affine Out[7]: array([[ 2., 0., 0., -98.], [ 0., 2., 0., -134.], [ 0., 0., 2., -72.], [ 0., 0., 0., 1.]]) In [8]: nipy.save_image(subsampled, 'smaller.nii') Cheers, Matthew From matthew.brett at gmail.com Mon Mar 14 21:58:46 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 14 Mar 2016 18:58:46 -0700 Subject: [Neuroimaging] indexed access to gziped files In-Reply-To: References: 
<20160311063312.GF3792063@phare.normalesup.org> <5F6A858FD00E5F4A82E3206D2D854EF8A26E6C2E@EXMB10.ohsu.edu> Message-ID: Hi, On Mon, Mar 14, 2016 at 2:12 PM, paul mccarthy wrote: > Hi Matthew, > > Thanks for clarifying the flieobj 'dance'! > > I had meant to ask you about cython - it looks like a good option (and is > recommended in the official docs - > https://docs.python.org/3/howto/cporting.html), so I'll look into it. > Perhaps the best way forward would be for me to drop the mailing list a line > when I've got something in a more useable state. That would be great. Please feel free to ask for help with Cython, we have a lot of collective experience here on the list. Cheers, Matthew From kw401 at cam.ac.uk Wed Mar 16 03:50:22 2016 From: kw401 at cam.ac.uk (Kirstie Whitaker) Date: Wed, 16 Mar 2016 07:50:22 +0000 Subject: [Neuroimaging] Postdoc job: Neuroimaging of mood disorders in adolescence at University of Cambridge Message-ID: Dear FSL, Freesurfer and Neuroimaging in Python list members (with apologies for cross posting), The closing date for this position is this SUNDAY 20th MARCH, so you don't have much time (although we only need a cover letter and CV type details for your application), but I've been asked to reach out to you because we'd really like to hire the very best postdoc for this role at the University of Cambridge. Please share this email widely with your personal and professional networks. The successful applicant will join a multidisciplinary team of scientists investigating the neural basis and cognitive correlates of behavioural phenotypes in large existing and still being collected longitudinal community and clinical cohorts of adolescents and young adults as part of the Neuroscience in Psychiatry Network (http://nspn.org.uk). 
Applicants will have a PhD and some postdoctoral experience, reflecting an expert level of knowledge in cognitive and behavioural neuroscience, together with a working knowledge of and interest in longitudinal data analysis and mental illness. The full job advert and instructions on how to apply are at http://www.jobs.cam.ac.uk/job/9696. The role comes with an invitation to interview for a fellowship at Peterhouse, the oldest college in Cambridge. The start date is (reasonably) flexible but projected for May 2016. The funds for this post are available until 31 March 2018 in the first instance. The Clinical School at the University of Cambridge is a proud holder of an Athena SWAN Silver award, and candidates from groups that are under-represented in senior STEMM positions (such as women and people of colour) are particularly encouraged to apply. If you have any informal questions about this vacancy, please contact Professor Ian Goodyer (ig104 at cam.ac.uk). For information on the application process for this vacancy, please contact Dominic Drane, HR Administrator, via email at hradminpsychiatry at medschl.cam.ac.uk. Please quote reference RN08523 on your application and in any correspondence about this vacancy. Thank you Kirstie Whitaker -- Kirstie Whitaker, PhD Research Associate Department of Psychiatry University of Cambridge *Mailing Address* Brain Mapping Unit Department of Psychiatry Sir William Hardy Building Downing Street Cambridge CB2 3EB *Phone: *+44 7583 535 307 *Website:* www.kirstiewhitaker.com -------------- next part -------------- An HTML attachment was scrubbed...
URL:

From jetzel at wustl.edu  Tue Mar 15 16:17:14 2016
From: jetzel at wustl.edu (Jo Etzel)
Date: Tue, 15 Mar 2016 15:17:14 -0500
Subject: [Neuroimaging] PRNI 2016: final call for papers (deadline extended to 24 March)
Message-ID: <56E86DCA.6020306@wustl.edu>

******* please accept our apologies for cross-posting *******
------------------------------------------------------------------------------
FINAL CALL FOR PAPERS: SUBMISSION NOW ENDING 24 MARCH

PRNI 2016
6th International Workshop on Pattern Recognition in Neuroimaging
22-24 June 2016
Fondazione Bruno Kessler (FBK), Trento, Italy
www.prni.org - @PRNI2016 - www.facebook.com/PRNI2016/
------------------------------------------------------------------------------
Paper submission deadline: 24 March 2016, 11:59 pm PST
Acceptance notification: 22 April 2016
Camera-ready paper deadline: 7 May 2016
Oral and poster sessions: 22-24 June 2016

Pattern recognition techniques have become an important tool for neuroimaging data analysis. These techniques are helping to elucidate normal and abnormal brain function, cognition and perception, anatomical and functional brain architecture, and biomarkers for diagnosis and personalized medicine, and they serve as a scientific tool to decipher neural mechanisms underlying human cognition.

The International Workshop on Pattern Recognition in Neuroimaging (PRNI) aims to: (1) foster dialogue between developers and users of cutting-edge analysis techniques in order to find matches between analysis techniques and neuroscientific questions; (2) showcase recent methodological advances in pattern recognition algorithms for neuroimaging analysis; and (3) identify challenging neuroscientific questions in need of new analysis approaches.
PRNI welcomes submissions on topics including, but not limited to:

* Learning from neuroimaging data
  - Algorithms for brain-state decoding or encoding
  - Optimization and regularization
  - Bayesian analysis of neuroimaging data
  - Causal inference and time delay techniques
  - Network and connectivity models (the connectome)
  - Dynamic and time-varying models
  - Dynamical systems and simulations
  - Empirical mode decomposition, multiscale decompositions
  - Combination of different data modalities
  - Efficient algorithms for large-scale data analysis
* Interpretability of models and results
  - High-dimensional data visualization
  - Multivariate and multiple hypothesis testing
  - Summarization and presentation of inference results
* Applications
  - Disease diagnosis and prognosis
  - Real-time decoding of brain states
  - Analysis of resting-state and task-based data
  - MEG, EEG, structural MRI, fMRI, diffusion MRI, ECoG, NIRS

Authors should prepare full papers with a maximum length of 4 pages (two-column IEEE style) for double-blind review. Manuscript submission is now open, and ends 24 March 2016, 11:59 pm PST. Accepted manuscripts will be assigned to either an oral or a poster session; all accepted manuscripts will be included in the workshop proceedings, which will be published by the IEEE.

From peled.noam at gmail.com  Sat Mar 19 20:08:32 2016
From: peled.noam at gmail.com (Noam Peled)
Date: Sun, 20 Mar 2016 00:08:32 +0000
Subject: [Neuroimaging] nibabel slice viewer - interactive
Message-ID:

Hey all,
My program opens the new nibabel slice viewer on its own thread, and I'm wondering how I can update the position after calling the show method. I tried to change the show behavior to be interactive, but without success.
Thanks,
Noam
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From arokem at gmail.com Sun Mar 20 12:04:53 2016 From: arokem at gmail.com (Ariel Rokem) Date: Sun, 20 Mar 2016 09:04:53 -0700 Subject: [Neuroimaging] [dipy]Fitting diffusion models in the absence of S0 signal In-Reply-To: References: Message-ID: Hi everyone, Thought I would re-raise this. Anyone have any thoughts here? Would a PR against the DTI and DKI modules be more helpful to clarify? Cheers, Ariel On Sat, Mar 5, 2016 at 3:04 AM, Ariel Rokem wrote: > > On Thu, Mar 3, 2016 at 7:28 AM, Eleftherios Garyfallidis < > garyfallidis at gmail.com> wrote: > >> Sorry your suggestion is not exactly clear. Can you give show us how the >> code will look with your proposal? Also, apart from DTI and DKI what other >> models will be affected from this changes. Is this a change suggested only >> for DTI and DKI or will affect all or most reconstruction models? >> >> > First of all, to answer your last question: this will certainly affect DTI > and DKI, and there will be other models to follow. For example the FWDTI > that Rafael is currently proposing in that PR. The idea would be to also > more tightly integrate these three models (and future extensions... !), so > that we can remove some of the redundancies that currently exist. We could > make this a part of the base.Reconst* methods - it might apply to other > models as well (e.g. CSD, SFM, etc). But that's part of what I would like > to discuss here. > > As for code, for now, here's a sketch of what this would look like for the > tensor model: > > https://gist.github.com/arokem/508dc1b22bdbd0bdd748 > > Note that though it changes the prediction API a bit, not much else would > have to change. In particular, all the code that relies on there being 12 > model parameters will still be intact, because S0 doesn't go into the model > parameters. > > What do you think? Am I missing something big here? Or should I go ahead > and start working on a PR implementing this? > > Thanks! 
> > Ariel > > > >> On Mon, Feb 29, 2016 at 11:53 AM, Ariel Rokem wrote: >> >>> Hi everyone, >>> >>> In Rafael's recent PR implementing free-water-eliminated DTI ( >>> https://github.com/nipy/dipy/pull/835), we had a little bit of a >>> discussion about the use of the non-diffusion weighted signal (S0). As >>> pointed out by Rafael, in the absence of an S0 in the measured data, for >>> some models, that can be derived from the model fit ( >>> https://github.com/nipy/dipy/pull/835#issuecomment-183060855). >>> >>> I think that we would like to support using data both with and without >>> S0. On the other hand, I don't think that we should treat the derived S0 as >>> a model parameter, because in some cases, we want to provide S0 as an input >>> (for example, when predicting back the signal for another measurement, with >>> a different ). In addition, it would be hard to incorporate that into the >>> model_params variable of the TensorFit object, while maintaining backwards >>> compatibility of the TensorModel/TensorFit and derived classes (e.g., DKI). >>> >>> My proposal is to have an S0 property for ReconstFit objects. When this >>> is calculated from the model (e.g. in DTI), it gets set by the `fit` method >>> of the ReconstModel object. When it isn't, it can be set from the data. >>> Either way, it can be over-ridden by the user (e.g., for the purpose of >>> predicting on a new data-set). This might change the behavior of the >>> prediction code slightly, but maybe that is something we can live with? >>> >>> Happy to hear what everyone thinks, before we move ahead with this. 
>>> >>> Cheers, >>> >>> Ariel >>> >>> >>> >>> >>> >>> >>> _______________________________________________ >>> Neuroimaging mailing list >>> Neuroimaging at python.org >>> https://mail.python.org/mailman/listinfo/neuroimaging >>> >>> >> >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From garyfallidis at gmail.com Sun Mar 20 15:45:43 2016 From: garyfallidis at gmail.com (Eleftherios Garyfallidis) Date: Sun, 20 Mar 2016 15:45:43 -0400 Subject: [Neuroimaging] [dipy]Fitting diffusion models in the absence of S0 signal In-Reply-To: References: Message-ID: Hi Ariel, Apologies for the delay in answering. What I understand is that fit_model is now doing the prediction for the S0. Am I correct? You recreate a predicted S0 inside fit_model, but fit_model is about fitting and not about predicting. I am not comfortable with changing fit_model to generate two parameters (params and S0). This command can be called inside the predict method: S0 = np.mean(np.exp(np.dot(dm, params))[..., gtab.b0s_mask]) So, for me there is no reason to change the init method of TensorFit. I hope I am not missing something. Let me know if this suggestion is helpful. Cheers, Eleftherios On Sun, Mar 20, 2016 at 12:04 PM, Ariel Rokem wrote: > Hi everyone, > > Thought I would re-raise this. Anyone have any thoughts here? Would a PR > against the DTI and DKI modules be more helpful to clarify? > > Cheers, > > Ariel > > On Sat, Mar 5, 2016 at 3:04 AM, Ariel Rokem wrote: > >> >> On Thu, Mar 3, 2016 at 7:28 AM, Eleftherios Garyfallidis < >> garyfallidis at gmail.com> wrote: >> >>> Sorry your suggestion is not exactly clear. Can you give show us how the >>> code will look with your proposal? Also, apart from DTI and DKI what other >>> models will be affected from this changes.
Is this a change suggested only >>> for DTI and DKI or will affect all or most reconstruction models? >>> >>> >> First of all, to answer your last question: this will certainly affect >> DTI and DKI, and there will be other models to follow. For example the >> FWDTI that Rafael is currently proposing in that PR. The idea would be to >> also more tightly integrate these three models (and future extensions... >> !), so that we can remove some of the redundancies that currently exist. We >> could make this a part of the base.Reconst* methods - it might apply to >> other models as well (e.g. CSD, SFM, etc). But that's part of what I would >> like to discuss here. >> >> As for code, for now, here's a sketch of what this would look like for >> the tensor model: >> >> https://gist.github.com/arokem/508dc1b22bdbd0bdd748 >> >> Note that though it changes the prediction API a bit, not much else would >> have to change. In particular, all the code that relies on there being 12 >> model parameters will still be intact, because S0 doesn't go into the model >> parameters. >> >> What do you think? Am I missing something big here? Or should I go ahead >> and start working on a PR implementing this? >> >> Thanks! >> >> Ariel >> >> >> >>> On Mon, Feb 29, 2016 at 11:53 AM, Ariel Rokem wrote: >>> >>>> Hi everyone, >>>> >>>> In Rafael's recent PR implementing free-water-eliminated DTI ( >>>> https://github.com/nipy/dipy/pull/835), we had a little bit of a >>>> discussion about the use of the non-diffusion weighted signal (S0). As >>>> pointed out by Rafael, in the absence of an S0 in the measured data, for >>>> some models, that can be derived from the model fit ( >>>> https://github.com/nipy/dipy/pull/835#issuecomment-183060855). >>>> >>>> I think that we would like to support using data both with and without >>>> S0. 
On the other hand, I don't think that we should treat the derived S0 as >>>> a model parameter, because in some cases, we want to provide S0 as an input >>>> (for example, when predicting back the signal for another measurement, with >>>> a different ). In addition, it would be hard to incorporate that into the >>>> model_params variable of the TensorFit object, while maintaining backwards >>>> compatibility of the TensorModel/TensorFit and derived classes (e.g., DKI). >>>> >>>> My proposal is to have an S0 property for ReconstFit objects. When this >>>> is calculated from the model (e.g. in DTI), it gets set by the `fit` method >>>> of the ReconstModel object. When it isn't, it can be set from the data. >>>> Either way, it can be over-ridden by the user (e.g., for the purpose of >>>> predicting on a new data-set). This might change the behavior of the >>>> prediction code slightly, but maybe that is something we can live with? >>>> >>>> Happy to hear what everyone thinks, before we move ahead with this. >>>> >>>> Cheers, >>>> >>>> Ariel >>>> >>>> >>>> >>>> >>>> >>>> >>>> _______________________________________________ >>>> Neuroimaging mailing list >>>> Neuroimaging at python.org >>>> https://mail.python.org/mailman/listinfo/neuroimaging >>>> >>>> >>> >>> _______________________________________________ >>> Neuroimaging mailing list >>> Neuroimaging at python.org >>> https://mail.python.org/mailman/listinfo/neuroimaging >>> >>> >> > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafaelnh21 at gmail.com Thu Mar 24 07:12:03 2016 From: rafaelnh21 at gmail.com (Rafael Henriques) Date: Thu, 24 Mar 2016 11:12:03 +0000 Subject: [Neuroimaging] [dipy]Fitting diffusion models in the absence of S0 signal Message-ID: Hi Eleftherios, What can we do if the data don't have b0s? 
In recent years, everyone has been including b0 data in their DWI acquisitions. However, some groups are now starting to acquire diffusion volumes with low b-values (e.g. 300 s/mm^2) instead of the b0 volumes. They do this to ensure that fitted diffusion models are not confounded by perfusion effects. So my question is: what can we do to generalize Dipy for these cases? My suggestion is to always include S0 as a model parameter, so that even if users do not have b0 data, the model can easily give the extrapolated, non-perfusion-affected S0 signal. Also, how can you recover the S0 information using the line that you suggested? If params only holds the diffusion tensor information, that line will always be equal to 1, right? Am I missing something here? Best, Rafael > Hi Ariel, > > Apologies for delaying to answer. > > What I understand is that now the fit_model is doing the prediction for the > S0. Am I correct? > You recreate a predicted S0 inside fit_model but fit_model is about fitting > and not about predicting. > > I am not comfortable to changing fit_model to generate two parameters > > (params and S0). > > This command can be called inside the predict method > S0 = np.mean(np.exp(np.dot(dm, params))[..., gtab.b0s_mask]) > > So, for me there is no reason of changing the init method of TensorFit. > > I hope I am not missing something. > Let me know if this suggestion is helpful. > > Cheers, > Eleftherios > > On Sun, Mar 20, 2016 at 12:04 PM, Ariel Rokem wrote: > >> Hi everyone, >> >> Thought I would re-raise this. Anyone have any thoughts here? Would a PR >> against the DTI and DKI modules be more helpful to clarify? >> >> Cheers, >> >> Ariel >> >> On Sat, Mar 5, 2016 at 3:04 AM, Ariel Rokem wrote: >> >>> >>> On Thu, Mar 3, 2016 at 7:28 AM, Eleftherios Garyfallidis < >>> garyfallidis at gmail.com> wrote: >>> >>>> Sorry your suggestion is not exactly clear.
Can you give show us how the >>>> code will look with your proposal? Also, apart from DTI and DKI what other >>>> models will be affected from this changes. Is this a change suggested only >>>> for DTI and DKI or will affect all or most reconstruction models? >>>> >>>> >>> First of all, to answer your last question: this will certainly affect >>> DTI and DKI, and there will be other models to follow. For example the >>> FWDTI that Rafael is currently proposing in that PR. The idea would be to >>> also more tightly integrate these three models (and future extensions... >>> !), so that we can remove some of the redundancies that currently exist. We >>> could make this a part of the base.Reconst* methods - it might apply to >>> other models as well (e.g. CSD, SFM, etc). But that's part of what I would >>> like to discuss here. >>> >>> As for code, for now, here's a sketch of what this would look like for >>> the tensor model: >>> >>> https://gist.github.com/arokem/508dc1b22bdbd0bdd748 >>> >>> Note that though it changes the prediction API a bit, not much else would >>> have to change. In particular, all the code that relies on there being 12 >>> model parameters will still be intact, because S0 doesn't go into the model >>> parameters. >>> >>> What do you think? Am I missing something big here? Or should I go ahead >>> and start working on a PR implementing this? >>> >>> Thanks! >>> >>> Ariel >>> >>> >>> >>>> On Mon, Feb 29, 2016 at 11:53 AM, Ariel Rokem wrote: >>>> >>>>> Hi everyone, >>>>> >>>>> In Rafael's recent PR implementing free-water-eliminated DTI ( >>>>> https://github.com/nipy/dipy/pull/835), we had a little bit of a >>>>> discussion about the use of the non-diffusion weighted signal (S0). As >>>>> pointed out by Rafael, in the absence of an S0 in the measured data, for >>>>> some models, that can be derived from the model fit ( >>>>> https://github.com/nipy/dipy/pull/835#issuecomment-183060855). 
>>>>> >>>>> I think that we would like to support using data both with and without >>>>> S0. On the other hand, I don't think that we should treat the derived S0 as >>>>> a model parameter, because in some cases, we want to provide S0 as an input >>>>> (for example, when predicting back the signal for another measurement, with >>>>> a different ). In addition, it would be hard to incorporate that into the >>>>> model_params variable of the TensorFit object, while maintaining backwards >>>>> compatibility of the TensorModel/TensorFit and derived classes (e.g., DKI). >>>>> >>>>> My proposal is to have an S0 property for ReconstFit objects. When this >>>>> is calculated from the model (e.g. in DTI), it gets set by the `fit` method >>>>> of the ReconstModel object. When it isn't, it can be set from the data. >>>>> Either way, it can be over-ridden by the user (e.g., for the purpose of >>>>> predicting on a new data-set). This might change the behavior of the >>>>> prediction code slightly, but maybe that is something we can live with? >>>>> >>>>> Happy to hear what everyone thinks, before we move ahead with this. 
>>>>> >>>>> Cheers, >>>>> >>>>> Ariel >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> _______________________________________________ >>>>> Neuroimaging mailing list >>>>> Neuroimaging at python.org >>>>> https://mail.python.org/mailman/listinfo/neuroimaging >>>>> >>>>> >>>>> >>>> _______________________________________________ >>>> Neuroimaging mailing list >>>> Neuroimaging at python.org >>>> https://mail.python.org/mailman/listinfo/neuroimaging >>>> >>>> >>> >> >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> >> From arokem at gmail.com Fri Mar 25 11:14:03 2016 From: arokem at gmail.com (Ariel Rokem) Date: Fri, 25 Mar 2016 08:14:03 -0700 Subject: [Neuroimaging] [dipy]Fitting diffusion models in the absence of S0 signal In-Reply-To: References: Message-ID: Hi Rafael, On Thu, Mar 24, 2016 at 4:12 AM, Rafael Henriques wrote: > Hi Eleftherios, > > What can we do if the data don't have b0s? > In the last years, everyone was including the b0 data in their DWI > acquisitions. However, nowadays some groups are starting to acquire > diffusion volume of images with low b-values (e.g. 300 s.mm-2) instead > of the b0 volumes. They are doing this to insure that when fitting > diffusion models they do not take into account Perfusion confounding > effects. So my question is - what can we do to generalize Dipy for > these cases? My suggestion is to include S0 always as model parameter, > so even if users do not have b0 data, the model can easily give the > extrapolated non-perfusion effected S0 signal. > My example code was not really that great to demonstrate this point. I have now updated the notebook so that it works with data that has a b=0 measurement, but also with data that doesn't (you'll need to change the commented out line in cell 3 to see both options). 
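
[To make the with/without-b0 handling concrete, here is a rough, self-contained sketch of the kind of logic under discussion. The `estimate_s0` helper and its arguments are illustrative only, not dipy's actual API: when the data contain b=0 volumes, S0 is their voxel-wise mean; when they do not, the helper returns None and the caller would fall back to a model-derived estimate, such as the np.mean line Eleftherios suggested.]

```python
import numpy as np

def estimate_s0(data, b0s_mask):
    """Return the mean signal over the b=0 volumes, or None if there are none.

    data     : ndarray whose last axis indexes the acquired volumes
    b0s_mask : boolean array marking which volumes were acquired at b=0
    (illustrative helper, not dipy's actual API)
    """
    b0s_mask = np.asarray(b0s_mask, dtype=bool)
    if not b0s_mask.any():
        # No measured S0: the caller falls back to a model-derived estimate.
        return None
    return data[..., b0s_mask].mean(axis=-1)

# With two b=0 volumes, S0 is their voxel-wise mean:
data = np.ones((4, 4, 4, 6))
data[..., :2] = 100.0  # pretend the first two volumes are b=0
mask = np.array([True, True, False, False, False, False])
print(estimate_s0(data, mask)[0, 0, 0])       # 100.0
print(estimate_s0(data, np.zeros(6, bool)))   # None
```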
I also have two alternative implementations, following Eleftherios' suggestions (I think): https://gist.github.com/arokem/508dc1b22bdbd0bdd748 In one implementation an estimate of S0 (`S0_hat`) is part of the TensorFit object (I think that's what Eleftherios is suggesting). In the other implementation, the estimate is part of the TensorModel.fit function (as you suggest). The main disadvantage of alternative 1 is that we would have to pass the data again into a method of the `TensorFit` object. The main disadvantage of alternative 2 is that it requires a change to the `TensorFit.__init__` API. My own tendency is to prefer this change to the `TensorFit.__init__` API, because I don't think that people are using that API on its own, but are typically getting their `TensorFit` objects from the `TensorModel.fit` function. I think that passing the data in again into the `TensorFit` object will not only be error-prone, but is also not as efficient. Importantly, this is not just a matter for people who use the prediction API to see that the model fits the data, but also an issue for fitting models that depend on the DTI model, such as the new FWE DTI model. Cheers, Ariel > Also, how can you recover the S0 information using the line that you > are suggested? If params only have the diffusion tensor information, > that line will always be equal to 1, right? Am I missing something > here? Best, > Rafael > > > > Hi Ariel, > > > > Apologies for delaying to answer. > > > > What I understand is that now the fit_model is doing the prediction for > the > > S0. Am I correct? > > You recreate a predicted S0 inside fit_model but fit_model is about > fitting > > and not about predicting. > > > > I am not comfortable to changing fit_model to generate two parameters > > (params and S0). 
> > > > This command can be called inside the predict method > > S0 = np.mean(np.exp(np.dot(dm, params))[..., gtab.b0s_mask]) > > > > So, for me there is no reason of changing the init method of TensorFit. > > > > I hope I am not missing something. > > Let me know if this suggestion is helpful. > > > > Cheers, > > Eleftherios > > > > On Sun, Mar 20, 2016 at 12:04 PM, Ariel Rokem > wrote: > > > >> Hi everyone, > >> > >> Thought I would re-raise this. Anyone have any thoughts here? Would a PR > >> against the DTI and DKI modules be more helpful to clarify? > >> > >> Cheers, > >> > >> Ariel > >> > >> On Sat, Mar 5, 2016 at 3:04 AM, Ariel Rokem > wrote: > >> > >>> > >>> On Thu, Mar 3, 2016 at 7:28 AM, Eleftherios Garyfallidis < > >>> garyfallidis at gmail.com> wrote: > >>> > >>>> Sorry your suggestion is not exactly clear. Can you give show us how > the > >>>> code will look with your proposal? Also, apart from DTI and DKI what > other > >>>> models will be affected from this changes. Is this a change suggested > only > >>>> for DTI and DKI or will affect all or most reconstruction models? > >>>> > >>>> > >>> First of all, to answer your last question: this will certainly affect > >>> DTI and DKI, and there will be other models to follow. For example the > >>> FWDTI that Rafael is currently proposing in that PR. The idea would be > to > >>> also more tightly integrate these three models (and future > extensions... > >>> !), so that we can remove some of the redundancies that currently > exist. We > >>> could make this a part of the base.Reconst* methods - it might apply to > >>> other models as well (e.g. CSD, SFM, etc). But that's part of what I > would > >>> like to discuss here. > >>> > >>> As for code, for now, here's a sketch of what this would look like for > >>> the tensor model: > >>> > >>> https://gist.github.com/arokem/508dc1b22bdbd0bdd748 > >>> > >>> Note that though it changes the prediction API a bit, not much else > would > >>> have to change. 
In particular, all the code that relies on there being > 12 > >>> model parameters will still be intact, because S0 doesn't go into the > model > >>> parameters. > >>> > >>> What do you think? Am I missing something big here? Or should I go > ahead > >>> and start working on a PR implementing this? > >>> > >>> Thanks! > >>> > >>> Ariel > >>> > >>> > >>> > >>>> On Mon, Feb 29, 2016 at 11:53 AM, Ariel Rokem > wrote: > >>>> > >>>>> Hi everyone, > >>>>> > >>>>> In Rafael's recent PR implementing free-water-eliminated DTI ( > >>>>> https://github.com/nipy/dipy/pull/835), we had a little bit of a > >>>>> discussion about the use of the non-diffusion weighted signal (S0). > As > >>>>> pointed out by Rafael, in the absence of an S0 in the measured data, > for > >>>>> some models, that can be derived from the model fit ( > >>>>> https://github.com/nipy/dipy/pull/835#issuecomment-183060855). > >>>>> > >>>>> I think that we would like to support using data both with and > without > >>>>> S0. On the other hand, I don't think that we should treat the > derived S0 as > >>>>> a model parameter, because in some cases, we want to provide S0 as > an input > >>>>> (for example, when predicting back the signal for another > measurement, with > >>>>> a different ). In addition, it would be hard to incorporate that > into the > >>>>> model_params variable of the TensorFit object, while maintaining > backwards > >>>>> compatibility of the TensorModel/TensorFit and derived classes > (e.g., DKI). > >>>>> > >>>>> My proposal is to have an S0 property for ReconstFit objects. When > this > >>>>> is calculated from the model (e.g. in DTI), it gets set by the `fit` > method > >>>>> of the ReconstModel object. When it isn't, it can be set from the > data. > >>>>> Either way, it can be over-ridden by the user (e.g., for the purpose > of > >>>>> predicting on a new data-set). This might change the behavior of the > >>>>> prediction code slightly, but maybe that is something we can live > with? 
> >>>>> > >>>>> Happy to hear what everyone thinks, before we move ahead with this. > >>>>> > >>>>> Cheers, > >>>>> > >>>>> Ariel > >>>>> > >>>>> > >>>>> > >>>>> > >>>>> > >>>>> > >>>>> _______________________________________________ > >>>>> Neuroimaging mailing list > >>>>> Neuroimaging at python.org > >>>>> https://mail.python.org/mailman/listinfo/neuroimaging > >>>>> > >>>>> > >>>>> > >>>> _______________________________________________ > >>>> Neuroimaging mailing list > >>>> Neuroimaging at python.org > >>>> https://mail.python.org/mailman/listinfo/neuroimaging > >>>> > >>>> > >>> > >> > >> _______________________________________________ > >> Neuroimaging mailing list > >> Neuroimaging at python.org > >> https://mail.python.org/mailman/listinfo/neuroimaging > >> > >> > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > -------------- next part -------------- An HTML attachment was scrubbed... URL: From carolyn.parkinson at gmail.com Fri Mar 25 22:14:19 2016 From: carolyn.parkinson at gmail.com (Carolyn Parkinson) Date: Fri, 25 Mar 2016 19:14:19 -0700 Subject: [Neuroimaging] Postdoctoral fellowship at UCLA Message-ID: Dear community, I'm writing to announce that the Computational Social Neuroscience Lab in the Department of Psychology at UCLA is seeking postdoctoral fellows to begin in Fall or Winter 2016 (start date is flexible). The successful candidates will have the opportunity to contribute to research projects that integrate neuroimaging, machine learning, social network analysis, and behavioral experimentation to investigate how the human brain represents and navigates the social world. They will also be encouraged to pursue independent research projects in social neuroscience and psychology. For more information on the lab's research, please visit our website (csnlab.org ). 
The position is designed for a productive researcher with a PhD in neuroscience, psychology, cognitive science, computer science or a related field. Candidates with previous experience designing and analyzing fMRI experiments, and who have strong backgrounds in statistics and programming, are preferred. Candidates who have prior experience with machine learning, network analysis or computational modeling are particularly encouraged to apply. To apply, please email your application to cparkinson at ucla.edu. Applications should include a cover letter summarizing research interests and experience, a curriculum vitae, and the names and contact information for 3 references. Please include "postdoctoral fellowship" in the subject line of any correspondence. Review of applications will begin immediately and will continue until the position is filled. The position is fully funded at NIH salary levels and also includes dedicated funds for functional neuroimaging and other research expenses. Initial appointment is for one year with potential for renewal pending satisfactory performance and funding availability. Please note that the candidate must complete all requirements for his or her PhD before being hired. The University of California is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, age or protected veteran status. For the complete University of California nondiscrimination and affirmative action policy see: UC Nondiscrimination and Affirmative Action Policy (http://policy.ucop.edu/doc/4000376/NondiscrimAffirmAct). Best, Carolyn -- Carolyn Parkinson, Ph.D.
Assistant Professor UCLA Department of Psychology 6451A Franz Hall Box 951563 Los Angeles, CA 90095-1563 Tel: (310) 206-8177 Email: cparkinson at ucla.edu http://www.psych.ucla.edu/faculty/page/cparkinson http://csnlab.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From j9988t at hotmail.com Wed Mar 30 04:49:35 2016 From: j9988t at hotmail.com (Chia-Ling Chang) Date: Wed, 30 Mar 2016 16:49:35 +0800 Subject: [Neuroimaging] Dipy question Message-ID: Hello, I want to ask a question about Dipy. There is a parameter "a_low" within the EuDX algorithm. "Towards an Accurate Brain Tractography" chapter 3 mentions that Athr defines the lowest possible peak value that allows tracking to continue. I set this threshold to 0.2 (typical for tensor), but some FA values in the FA distribution (within the track volume) are lower than 0.2. My question: is the threshold a "smooth" threshold in the algorithm, or not? Thanks for your help. Chia-Ling Chang, Master student Medical Imaging Processing Laboratory National Cheng Kung University -------------- next part -------------- An HTML attachment was scrubbed... URL: