From khamael at gmail.com Mon Feb 1 09:47:31 2016 From: khamael at gmail.com (paulo rodrigues) Date: Mon, 1 Feb 2016 15:47:31 +0100 Subject: [Neuroimaging] Neuroimaging Engineer Opportunity Message-ID: Hello all, Mint Labs is looking to fill an R&D position in neuroimaging processing. Mint Labs is a multi-disciplinary science and engineering start-up. The company's mission is to develop technologies to better understand the human brain and the diseases that affect it. Our primary focus is on neuroimaging methods, including MRI, that yield insights into the living brain. *Context* The challenge is to develop analysis systems for multidimensional medical datasets that include imaging. Mint Labs provides a SaaS platform that focuses on integrating diverse clinical, imaging, genomic, and other data to better evaluate disease progression and response to interventions. Do you thrive on analyzing data, developing pipelines to process imaging data, and integrating multiple data points into meaningful group conclusions? Are you experienced in volumetric, diffusion, and rs-fMRI studies, and not shy about Python, bash, R, or whatever tool fits the work? *Objectives of the position* The position hinges on the processing of very large databases. Important novelties of the project are: - Building predictive models to discriminate multiple pathologies in large inhomogeneous datasets. - Using and improving advanced connectomics and brain-parcellation techniques in dMRI / fMRI. - Supporting interaction with several external research partners and clients to develop neuroimaging analyses. *Desired profile* We are looking for a doctoral or post-doctoral fellow to hire at the beginning of 2016.
The ideal candidate would have some, but not all, of the following expertise and interests: - Experience in advanced processing of MRI, dMRI and fMRI - General knowledge of brain structure and function - Good communication skills to write high-impact neuroscience publications - Good computing skills, in particular with Python. Cloud computing experience is desired. *A great R&D environment* We're a growing, product-centric team located in Barcelona, Spain. We thrive on solving hard problems with creative solutions, and have fun during the process - after all, we are in Barcelona. As an engineer in our team, you will tackle the challenges that arise as we build a large-scale system for data processing and distribution. The scale at which our systems must operate requires high-performance algorithms and data structures. Solving complex distributed-systems problems and ensuring data security are essential. In addition, you will interact closely with top researchers worldwide who use our platform. We are looking for independent thinkers with a deep understanding of the problems we are solving who are not afraid to significantly influence how the system can be improved. We hire people, not roles -- if you don't see an interesting job description but believe that you can add value to Mint Labs, we would love to hear from you. *Contact*: team at mint-labs.com, paulo at mint-labs.com *Application*: Interested candidates should send a CV and a motivation letter. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jevillalonr at gmail.com Tue Feb 2 19:59:34 2016 From: jevillalonr at gmail.com (Julio Villalon) Date: Tue, 2 Feb 2016 16:59:34 -0800 Subject: [Neuroimaging] [Nipy-devel][Dipy] A new circle of Google Summer of Code starts - time for new proposals In-Reply-To: References: Message-ID: Hi Eleftherios + Nipy community, This is really great. GSoC 2015 was a very rewarding experience for me.
I am willing to be a co-mentor with Eleftherios, Ariel and Omar this year. I have some ideas which I would like to share with you. 1. Bias field/non-uniformity correction of T1-weighted images. There are many freely available tools that do this: SPM, FSL-FAST, N3 (MINC and Freesurfer), N4 (ANTS), BrainVoyager, etc. The idea is to implement the best one of these and include it as part of the processing pipeline of Dipy. Bias field correction of T1 images allows for better segmentation and consequently for better partial volume estimation of brain tissue types, which ultimately has a direct impact on novel tractography techniques such as Anatomically-Constrained Tractography (ACT). - Does anyone know if there is anything available in Python? - Which of the mentioned methods is better? 2. Recovery of local intra-voxel fiber structure for DTI-like sampling/acquisition schemes (6-40 samples, b<=1200 s/mm2). Most of the diffusion MRI (dMRI) data available nowadays was acquired for clinical/neuroscientific studies looking at the effects of many diseases on the brain (schizophrenia, bipolar disorder, HIV, autism, Alzheimer's Disease, etc). The vast majority of this data has been acquired with sampling schemes with less than 40 samples and a single shell of less than 1200 s/mm2. With recent advancements in dictionary learning and sparse recovery techniques (e.g. Merlet et al, 2013), the idea is to make these tools available to the general public, especially to those who have this type of data and can make better use of it. Please let me know what you think. Thanks Julio 2016-01-31 13:35 GMT-08:00 Eleftherios Garyfallidis : > Hello all, > > Taking part in Google Summer of Code (GSoC) is indeed rewarding for our > project, as it allows for new algorithms to be merged and at the same time > grows our development team with excellent contributors. > > After what I believe was a successful GSoC participation last year, a new cycle > starts for this year (2016).
> > Last year it was Ariel and me who did most of the mentoring. This year we > would like to hear others' ideas too. Therefore, we welcome other > developers/scientists who would like to mentor or propose new projects. For > those who want to mentor, we will happily act as co-mentors to help them with > the process and give extra feedback to the relevant students. > > In the following link we started adding projects that we think would be > interesting for this year's GSoC > > https://github.com/nipy/dipy/wiki/Google-Summer-of-Code-2016 > > Feel free to add your projects to the wiki or suggest ideas in this thread. > We also welcome the previous participants of GSoC (Julio and Rafael) to > take part as mentors this year. > > Finally, this year I am hoping to be able to get more than 2 projects > funded. Hopefully 4, but that is not certain. > > Waiting for your ideas/suggestions. What would you like to see in Dipy > that could be developed by a student during this summer? > > Best regards, > Eleftherios > > > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stjeansam at gmail.com Wed Feb 3 04:43:39 2016 From: stjeansam at gmail.com (Samuel St-Jean) Date: Wed, 3 Feb 2016 10:43:39 +0100 Subject: [Neuroimaging] [Nipy-devel][Dipy] A new circle of Google Summer of Code starts - time for new proposals In-Reply-To: References: Message-ID: For 2, it already exists as you mentioned, and there is also the recently released [1] that proposes another way to do it. It also has a Python port (by the same original author as the MATLAB version) using cvxopt under the hood, but I don't know where the discussion stands regarding proper inclusion in dipy. [1] Deslauriers-Gauthier, S., P. Marziliano, M. Paquette, and M. Descoteaux.
"The application of a new sampling theorem for non-bandlimited signals on the sphere: Improving the recovery of crossing fibers for low b-value acquisitions." Medical Image Analysis, 2016. http://scil.dinf.usherbrooke.ca/wp-content/papers/deslauriers-etal-media16.pdf 2016-02-03 1:59 GMT+01:00 Julio Villalon : > [quoted text snipped]
>> >> Best regards, >> Eleftherios >> >> >> >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> >> > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stjeansam at gmail.com Wed Feb 3 05:11:39 2016 From: stjeansam at gmail.com (Samuel St-Jean) Date: Wed, 3 Feb 2016 11:11:39 +0100 Subject: [Neuroimaging] [Nipy-devel][Dipy] A new circle of Google Summer of Code starts - time for new proposals In-Reply-To: References: Message-ID: At the same time, here is a sample project proposal that I could mentor, if anyone is interested or wants to add to it. Currently, there are two noise estimation methods [1] in dipy, each with its own strengths and limitations. Other methods that complement these could enhance the performance of the RESTORE DTI fitting or the nlmeans denoising, in addition to any other module that benefits from modelling noise uncertainty in its fitting process. Firstly, estimate_sigma works by predicting a single value for a whole volume, which is suboptimal for acquisitions that produce spatially varying noise. It is also only designed for Rician noise. Secondly, piesno circumvents this problem by predicting a per-slice value and works for both Rician and noncentral chi noise, but it still assumes that the background contains a single noise distribution, which can fail when the background was masked by the scanner. Moreover, there is currently no surefire way to estimate the degrees of freedom of the noise distribution, which is left to the user and is reconstruction dependent. A cool project would be to: 1. Implement a true 3D noise estimation function which works for Rician/noncentral chi noise. 2.
Implement a function to estimate the distribution/degrees of freedom producing the noise. 3. Enhance the examples by comparing the effect of these different estimation techniques on both RESTORE and nlmeans. For starters, [2] and [3] seem to address tasks 1 and 2. Of course, any other worthwhile algorithm is welcome. [1] https://github.com/nipy/dipy/blob/master/dipy/denoise/noise_estimate.py [2] Veraart, J., Fieremans, E., & Novikov, D. S. (2015). Diffusion MRI noise mapping using random matrix theory. Magnetic Resonance in Medicine, http://doi.org/10.1002/mrm.26059 [3] https://www.lpi.tel.uva.es/~santi/personal/docus/noise_survey_tec_report.pdf 2016-02-03 10:43 GMT+01:00 Samuel St-Jean : > [quoted text snipped] -------------- next part -------------- An HTML attachment was scrubbed... URL: From arokem at gmail.com Thu Feb 4 18:58:34 2016 From: arokem at gmail.com (Ariel Rokem) Date: Thu, 4 Feb 2016 15:58:34 -0800 Subject: [Neuroimaging] nlmeans in HCP data Message-ID: Hi everyone, does anyone use the Dipy nlmeans with HCP diffusion data? Is that a good idea? What do you use to estimate the sigma input? If you use dipy.denoise.noise_estimate.estimate_sigma, how do you set the `n` keyword argument for these data? Since the preprocessed data has gone through some heavy preprocessing, I am not sure whether assuming that 32 (the number of channels in these machines, if I understand correctly) is a good number is reasonable. Thanks!
Ariel -------------- next part -------------- An HTML attachment was scrubbed... URL: From s.ferraris at ucl.ac.uk Thu Feb 4 11:35:07 2016 From: s.ferraris at ucl.ac.uk (Ferraris, Sebastiano) Date: Thu, 4 Feb 2016 16:35:07 +0000 Subject: [Neuroimaging] Load, Modify and Save Nifti Message-ID: <4C93867B-7D96-48FA-9A10-0D9CD33887D6@ucl.ac.uk> Alternatively, to avoid the creation of a new matrix, you can do

    matrix[...] = np.zeros(matrix.shape)

instead of

    matrix = np.zeros(matrix.shape)

Or prepare the functions:

    def update_field(nibimg, new_data):
        data = nibimg.get_data()
        data[...] = new_data

    def update_affine(nibimg, new_affine):
        affine = nibimg.affine  # note: affine is an attribute, not a method
        affine[...] = new_affine

The point is that:

    matrix = img.get_data()          # matrix points to the data that are stored in the cache.
    matrix = np.zeros(matrix.shape)  # matrix now points to a new object just created, and the reference to the previous data is lost.
    nb.save(img, "output.nii.gz")    # img is saved without any modification of its data.

Cheers Sebastiano From stjeansam at gmail.com Fri Feb 5 11:13:47 2016 From: stjeansam at gmail.com (Samuel St-Jean) Date: Fri, 5 Feb 2016 17:13:47 +0100 Subject: [Neuroimaging] nlmeans in HCP data In-Reply-To: References: Message-ID: To partly answer the question, you should pick N=1, as the HCP data uses a SENSE1 reconstruction and thus always gives a Rician distribution [1]. As for using estimate_sigma, it tends to overblur stuff for higher b-value/spatially varying noise (it has a hard time on our Philips 3T data, for example: edges are overblurred and the center is untouched). Regarding these shortcomings, I linked to some ideas for addressing these caveats in the GSoC discussion thread. [1] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3657588/ 2016-02-05 0:58 GMT+01:00 Ariel Rokem : > Hi everyone, > > does anyone use the Dipy nlmeans with HCP diffusion data? Is that a good > idea? What do you use to estimate the sigma input?
If you use > dipy.denoise.noise_estimate.estimate_sigma, how do you set the `n` keyword > argument for these data? Since the preprocessed data has gone through some > heavy preprocessing, I am not sure whether assuming that 32 (the number of > channels in these machines, if I understand correctly) is a good number is > reasonable. > > Thanks! > > Ariel > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arokem at gmail.com Fri Feb 5 21:31:27 2016 From: arokem at gmail.com (Ariel Rokem) Date: Fri, 5 Feb 2016 18:31:27 -0800 Subject: [Neuroimaging] [Nipy-devel][Dipy] A new circle of Google Summer of Code starts - time for new proposals In-Reply-To: References: Message-ID: Hi Julio, Would be great to have a hand. Especially in reviewing code from new contributors. That's always our bottle-neck... On Tue, Feb 2, 2016 at 4:59 PM, Julio Villalon wrote: > Hi Eleftherios + Nipy community, > > This is really great. GSoC 2015 was a very rewarding experience for me. I > am willing to be a co-mentor with Eleftherios, Ariel and Omar this year. > > I have some ideas which I would like to share with you. > > 1. Bias field/non-uniformity correction of T1-weighted images. There are > many freely available tools that do this: SPM, FSL-FAST, N3 (MINC and > Freesurfer), N4 (ANTS), BrainVoyager, etc. The idea is to implement the > best one of these and include it as part of the processing pipeline of > Dipy. Bias Field correction of T1 images allows for better segmentation and > consequently for better partial volume estimation of brain tissue types, > which ultimately has a direct impact on novel tractography techniques such > as Anatomically-Constrained Tractography (ACT). > > - Does anyone know is there is anything available in Python? > - Which of the mentioned methods is better? 
> > I don't know the answer to either of these questions, but I think this is in principle a good idea, as long as it's well scoped. Do you want to take a shot at writing this up on the wiki ideas page? > > 2. Recovery of local intra-voxel fiber structure for DTI-like > sampling/acquisition schemes (6-40 samples, b<=1200 s/mm2). Most of the > acquired Diffusion MRI (DMRI) data available nowadays is data acquired for > clinical/neuroscientific studies looking at the effects of many diseases > on the brain (schizophrenia, bipolar disorder, HIV, autism, Alzheimer's > Disease, etc). The vast majority of this data has been acquired with > sampling schemes with less than 40 samples and a single shell of less than > 1200 s/mm2. With recent advancements in dictionary learning and sparse > recovery techniques (e.g. Merlet et al, 2013), the idea is to make these > tools available to the general public, especially to those who have this > type of data and can make a better use of it. > Arguably, we already have a method like this implemented in dipy in the Sparse Fascicle Model. See Figure 10 of our paper ( http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0123272), which shows the convergence in accuracy of the model as a function of the number of directions. It reaches fairly high accuracy already at 40 directions, particularly for low b-values. But having more methods would be great, of course. Could you please provide a link to the Merlet paper you mentioned? Cheers, Ariel > > Please let me know what you think. > > Thanks > > Julio > > > 2016-01-31 13:35 GMT-08:00 Eleftherios Garyfallidis < > garyfallidis at gmail.com>: > >> [quoted text snipped] -------------- next part -------------- An HTML attachment was scrubbed... URL: From arokem at gmail.com Fri Feb 5 21:44:34 2016 From: arokem at gmail.com (Ariel Rokem) Date: Fri, 5 Feb 2016 18:44:34 -0800 Subject: [Neuroimaging] nlmeans in HCP data In-Reply-To: References: Message-ID: Thanks for the answer. I actually hadn't read the GSoC thread before sending this question - just read that too. This might be a naive question: what do you think about estimating the noise in each voxel from the variance in the b0s image?
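Concretely, the per-voxel map I have in mind could be sketched like this in numpy (just a rough sketch: `b0_noise_map` is a made-up name for illustration, not an existing dipy or vistasoft function):

```python
import numpy as np
from math import exp, lgamma, sqrt

def b0_noise_map(b0s, axis=-1):
    """Per-voxel noise estimate from repeated b0 volumes.

    `b0s` stacks the repeated b0 measurements along `axis`. Returns the
    per-voxel sample standard deviation, divided by the c4(n) factor that
    corrects the small-sample bias of the std estimator.
    """
    n = b0s.shape[axis]
    if n < 2:
        raise ValueError("need at least two b0 volumes")
    s = np.std(b0s, axis=axis, ddof=1)
    # E[s] = c4(n) * sigma, with c4(n) = sqrt(2/(n-1)) * Gamma(n/2) / Gamma((n-1)/2)
    c4 = sqrt(2.0 / (n - 1)) * exp(lgamma(n / 2.0) - lgamma((n - 1) / 2.0))
    return s / c4

# One number for the whole volume, as in the vistasoft code:
# sigma = np.median(b0_noise_map(b0s))
```

The last line shows the reduction to a single scalar; keeping the full 3D map instead is exactly the per-voxel variant I am asking about.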
When we noticed that the GE scanner at Stanford was masking out the background, we switched the implementation of RESTORE on vistasoft to use the variance between multiple b0 images as an estimate of the noise, including a correction for bias due to the small sample size: https://github.com/vistalab/vistasoft/blob/master/mrDiffusion/utils/dtiComputeImageNoise.m#L58 In this case, we take a median to have one number for the entire volume, but we could also just keep the variance in each voxel. Do you see any obvious problems with that? From my point of view, it is rather straightforward to quantitatively evaluate whether a denoising method is improving your analysis. Either your model of the diffusion data fits the data better (in the cross-validation sense) following denoising, or it doesn't, in which case the method's probably no good. On Fri, Feb 5, 2016 at 8:13 AM, Samuel St-Jean wrote: > To partly answer the question, you should pick N=1 as the HCP data is > using a SENSE1 reconstruction, and thus always give a rician distribution > [1]. > As for using estimate sigma, it tends to overblur stuff for higher > b-value/spatially varying noise (it has a hard time on our philips 3T data > for example, edges are overblurred and center is untouched). > > Regarding these shortcomings, I linked to some ideas to solve some of > these caveats in the gsoc discussion thread though. > > [1] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3657588/ > > 2016-02-05 0:58 GMT+01:00 Ariel Rokem : > >> Hi everyone, >> >> does anyone use the Dipy nlmeans with HCP diffusion data? Is that a good >> idea? What do you use to estimate the sigma input? If you use >> dipy.denoise.noise_estimate.estimate_sigma, how do you set the `n` keyword >> argument for these data? Since the preprocessed data has gone through some >> heavy preprocessing, I am not sure whether assuming that 32 (the number of >> channels in these machines, if I understand correctly) is a good number is >> reasonable. >> >> Thanks!
>> >> Ariel >> >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> >> > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stjeansam at gmail.com Sat Feb 6 06:03:56 2016 From: stjeansam at gmail.com (Samuel St-Jean) Date: Sat, 6 Feb 2016 12:03:56 +0100 Subject: [Neuroimaging] nlmeans in HCP data In-Reply-To: References: Message-ID: <56B5D31C.7030102@gmail.com> For starters, if you have motion between the b0 volumes, or a few of them, you might have problems and induce a larger variance because of that, but I guess if it works, why not. As for a single-voxel estimate, it might be unstable due to the small number of samples, but taking a moving neighborhood could help. Actually, they use it for estimating motion and pulsation artefacts, if I recall correctly [1]. As for evaluation, predicting the signal or not is one of the aspects you can look at in my opinion, but with all the local model fitting and tractography happening afterward, looking at a squared error value is not very informative, especially if it is averaged over the whole volume. Since a large error in a crossing voxel could be much worse than small errors in single-fiber voxels, it depends on what you want to get at the end of the day. It can be useful to judge an optimization scheme, but beyond that I don't feel it reflects properties of the end goal. [1] https://www.ncbi.nlm.nih.gov/pubmed/21469191 Le 2016-02-06 03:44, Ariel Rokem a écrit : > Thanks for the answer. I actually hadn't read the GSoC thread before > sending this question - just read that too. > > This might be a naive question: what do you think about estimating the > noise in each voxel from the variance in the b0s image?
> > When we noticed that the GE scanner at Stanford was masking out the > background, we switched the implementation of RESTORE on vistasoft to > use the variance between multiple b0 images as an estimate of the > noise, including a correction for bias due to small sample: > > https://github.com/vistalab/vistasoft/blob/master/mrDiffusion/utils/dtiComputeImageNoise.m#L58 > > In this case, we take a median to have one number for the entire > volume, but we could also just keep the variance in each voxel. Do you > see any obvious problems with that? > > From my point of view, it is rather straightforward to quantitatively > evaluate whether a denoising method is improving your analysis. Either > your model of the diffusion data fits the data better (in the > cross-validation sense) following denoising, or it doesn't, in which > case the method's probably no good. > > > On Fri, Feb 5, 2016 at 8:13 AM, Samuel St-Jean > wrote: > > To partly answer the question, you should pick N=1 as the HCP data > is using a SENSE1 reconstruction, and thus always give a rician > distribution [1]. > As for using estimate sigma, it tends to overblur stuff for higher > b-value/spatially varying noise (it has a hard time on our philips > 3T data for example, edges are overblurred and center is untouched). > > Regarding these shortcomings, I linked to some ideas to solve some > of these caveats in the gsoc discussion thread though. > > [1] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3657588/ > > 2016-02-05 0:58 GMT+01:00 Ariel Rokem >: > > Hi everyone, > > does anyone use the Dipy nlmeans with HCP diffusion data? Is > that a good idea? What do you use to estimate the sigma input?
> Since the preprocessed data has gone through some heavy preprocessing,
> I am not sure whether assuming that 32 (the number of channels in
> these machines, if I understand correctly) is a good number is
> reasonable.
>
> Thanks!
>
> Ariel
>
> _______________________________________________
> Neuroimaging mailing list
> Neuroimaging at python.org
> https://mail.python.org/mailman/listinfo/neuroimaging

From arokem at gmail.com Sat Feb 6 13:57:35 2016
From: arokem at gmail.com (Ariel Rokem)
Date: Sat, 6 Feb 2016 10:57:35 -0800
Subject: [Neuroimaging] nlmeans in HCP data
In-Reply-To: <56B5D31C.7030102@gmail.com> References: <56B5D31C.7030102@gmail.com> Message-ID:

Thanks for your answer:

On Sat, Feb 6, 2016 at 3:03 AM, Samuel St-Jean wrote:
> For starters, if you have motion between the b0 volumes or a few of
> them, you might have problems and induce a larger variance because of
> that, but I guess if it works, why not. As for a single-voxel estimate,
> it might be unstable due to the small number of samples, but taking a
> moving neighborhood could help. Actually, they use it for estimating
> motion and pulsation artefacts, if I recall correctly [1]

I think that one practical thing would be to create the 3D map of the b0 noise, including the possibility of a correction for the small number of samples (see the Matlab code I referred to). I think that it would be up to the user to determine whether this map is useful, to smooth it spatially, or to take one number (e.g. the median) out of it, and whether to ignore certain parts of this image that are particularly susceptible to the motion issues (e.g. edges of the brain, the interface between white matter and ventricles). I can go ahead and make a PR with that, and we can continue the discussion there, but it might take me a few days to get that up.

From arokem at gmail.com Sat Feb 6 13:59:20 2016
From: arokem at gmail.com (Ariel Rokem)
Date: Sat, 6 Feb 2016 10:59:20 -0800
Subject: [Neuroimaging] nlmeans in HCP data
In-Reply-To: <56B5D31C.7030102@gmail.com> References: <56B5D31C.7030102@gmail.com> Message-ID:

As for this:

On Sat, Feb 6, 2016 at 3:03 AM, Samuel St-Jean wrote:
> As for evaluating, whether the signal is predicted well or not is one
> of the aspects you can look at, in my opinion, but with all the local
> model fitting and tractography happening afterward, looking at a
> squared error value is not very informative, especially if it is
> averaged over the whole volume. Since a large error in a crossing voxel
> could be much worse than small errors in single-fiber voxels, it
> depends on what you want to get at the end of the day. It can be useful
> to judge an optimization scheme, but beyond that I don't feel it
> reflects properties of the end goal.

I'd say that model accuracy is a necessary condition for useful inferences, though it might not always be sufficient. Wouldn't you agree?
From bobd at stanford.edu Sat Feb 6 17:41:17 2016
From: bobd at stanford.edu (Bob Dougherty)
Date: Sat, 6 Feb 2016 22:41:17 +0000
Subject: [Neuroimaging] nlmeans in HCP data
In-Reply-To: References: <56B5D31C.7030102@gmail.com>, Message-ID:

This is very relevant to what Chris Poetter, Charles Yang Zheng, and I are working on.
We are taking a bit more of a principled approach than what we did in mrVista, including developing an optimized smoothing kernel (or local linear regression) to produce a noise map based on the b0 image variance. Our plan is to 1. implement this in dipy and 2. write a paper describing the method.

From arokem at gmail.com Sat Feb 6 18:35:25 2016
From: arokem at gmail.com (Ariel Rokem)
Date: Sat, 6 Feb 2016 15:35:25 -0800
Subject: [Neuroimaging] nlmeans in HCP data
In-Reply-To: References: <56B5D31C.7030102@gmail.com> Message-ID:

On Sat, Feb 6, 2016 at 2:41 PM, Bob Dougherty wrote:
> This is very relevant to what Chris Poetter, Charles Yang Zheng, and I
> are working on. We are taking a bit more of a principled approach than
> what we did in mrVista, including developing an optimized smoothing kernel
> (or local linear regression) to produce a noise map based on the b0 image
> variance. Our plan is to 1. implement this in dipy and 2. write a paper
> describing the method.

That's great!
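The "optimized smoothing kernel (or local linear regression)" idea mentioned here can be illustrated with a minimal 1D weighted least-squares smoother. This is purely a sketch of the LOWESS-style technique, in 1D for brevity (a b0 noise map would be 3D); it is not code from dipy, vistasoft, or the lowess repository linked in this thread, and the function name is invented:

```python
import numpy as np

def local_linear_regression(x, y, x0, bandwidth):
    """Fit y ~ a + b*(x - c) with Gaussian weights centred at each c in x0.

    Returns the locally fitted value at each query point (the intercept
    of the local fit). This is the degree-1, 1D case of LOWESS-style
    smoothing; a noise map would apply the same idea in 3D."""
    x0 = np.atleast_1d(np.asarray(x0, dtype=float))
    out = np.empty_like(x0)
    for i, c in enumerate(x0):
        w = np.exp(-0.5 * ((x - c) / bandwidth) ** 2)   # Gaussian kernel weights
        X = np.column_stack([np.ones_like(x), x - c])   # local design matrix
        WX = X * w[:, None]
        # weighted least squares: solve (X^T W X) beta = (X^T W) y
        beta, *_ = np.linalg.lstsq(WX.T @ X, WX.T @ y, rcond=None)
        out[i] = beta[0]  # intercept = fitted value at c
    return out

# Toy check: recover a smooth trend from noisy samples of y = 2 + 3x.
rng = np.random.RandomState(1)
x = np.linspace(0, 1, 200)
y = 2.0 + 3.0 * x + rng.normal(0, 0.1, size=x.size)
fit = local_linear_regression(x, y, [0.25, 0.75], bandwidth=0.1)
print(fit)  # should be close to [2.75, 4.25]
```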
I like every part of this message, and can't wait to hear what you've been developing. BTW - you might already have something, but you might find use for the local linear regression code I wrote based on ideas from Kendrick Kay's localregression3d (and the LOWESS papers): https://github.com/arokem/lowess It's slow as molasses, but fairly general.

Cheers,

Ariel

From arokem at gmail.com Sun Feb 7 18:28:07 2016
From: arokem at gmail.com (Ariel Rokem)
Date: Sun, 7 Feb 2016 15:28:07 -0800
Subject: [Neuroimaging] Nitime 0.6
Message-ID:

Hi everyone,

I am happy to announce the release of nitime version 0.6, now available on PyPI.

This is a maintenance release, supporting newer versions of numpy and matplotlib.

Have fun!

Ariel

From lists at onerussian.com Sun Feb 7 19:52:04 2016
From: lists at onerussian.com (Yaroslav Halchenko)
Date: Sun, 7 Feb 2016 19:52:04 -0500
Subject: [Neuroimaging] Nitime 0.6
In-Reply-To: References: Message-ID: <20160208005204.GB7904@onerussian.com>

On Sun, 07 Feb 2016, Ariel Rokem wrote:
> Hi everyone,
> I am happy to announce the release of nitime version 0.6, now available on
> PyPI.
> This is a maintenance release, supporting newer versions of numpy and
> matplotlib.

Congrats. And now 0.6 was uploaded to (Neuro)Debian

Cheers!
-- Yaroslav O.
Halchenko
Center for Open Neuroscience http://centerforopenneuroscience.org
Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755
Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419
WWW: http://www.linkedin.com/in/yarik

From stjeansam at gmail.com Tue Feb 9 09:38:20 2016
From: stjeansam at gmail.com (Samuel St-Jean)
Date: Tue, 9 Feb 2016 15:38:20 +0100
Subject: [Neuroimaging] [dipy] Resampling in bundle registration with uneven stepsize
Message-ID:

While looking at the example over here [1], one important step is to resample everything to the same size. Unfortunately, the why is not mentioned in the example, so I was wondering if it is for performance reasons (QuickBundles needs that, and it is used internally, or at least it uses the same distance metric) or for theoretical reasons?

Would it still work well on bundles with uneven stepsize (fancy tracking/compressed fibers) as long as they have the same number of points, or would something break in the theory by doing that? Should one aim to resample to an even number of points and an even stepsize, or does only the number of points matter for this algorithm?

[1] http://nipy.org/dipy/examples_built/bundle_registration.html#example-bundle-registration

From garyfallidis at gmail.com Tue Feb 9 10:04:32 2016
From: garyfallidis at gmail.com (Eleftherios Garyfallidis)
Date: Tue, 9 Feb 2016 10:04:32 -0500
Subject: [Neuroimaging] [dipy] Resampling in bundle registration with uneven stepsize
In-Reply-To: References: Message-ID:

Hello,

Great question. Thanks for asking. Every streamline needs to have the same number of points. There is a theory behind this. This is needed to properly calculate the MDF (minimum direct flipped) distance, which is then used in the similarity metric that drives the registration, the BMD (bundle-based minimum distance). If you don't do that, then the theory breaks and the algorithm will not do well.
So, yes, you have to set the number of points in order to use the SLR for streamlines. If you don't do it, you will not have a good space in which to drive the registration and find the optimum. For more info you can look at the paper http://www.ncbi.nlm.nih.gov/pubmed/25987367

On Tue, Feb 9, 2016 at 9:38 AM, Samuel St-Jean wrote:
> Would it still work well on bundles with uneven stepsize (fancy
> tracking/compressed fibers) as long as they have the same number of points,
> or would something break in the theory by doing that? Should one aim to
> resample to an even number of points and an even stepsize, or does only the
> number of points matter for this algorithm?

Yes, as long as you have the same number of points, it should be good. It is also preferable to have equidistant points (all line segments of equal length, also known as a fixed-length representation). So, every time you use the SLR with any streamlines (compressed/uncompressed etc.), always use set_number_of_points first to make sure all is good. This function will make sure that all streamlines have the same number of points and equal segment lengths within each streamline. And then call StreamlineLinearRegistration.
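Conceptually, set_number_of_points resamples each streamline to K points spaced equally along its arc length. Below is a plain-numpy sketch of that resampling, for illustration only — in practice, use dipy's own set_number_of_points; the helper name here is made up:

```python
import numpy as np

def set_number_of_points_np(streamline, nb_points):
    """Resample a streamline (K x 3 array of points) to nb_points
    equidistant points along its arc length - a numpy-only sketch of
    the equidistant resampling done before the SLR."""
    seg = np.diff(streamline, axis=0)
    seg_len = np.sqrt((seg ** 2).sum(axis=1))          # length of each segment
    cum = np.concatenate([[0.0], np.cumsum(seg_len)])  # arc length at each point
    target = np.linspace(0.0, cum[-1], nb_points)      # equidistant arc lengths
    resampled = np.empty((nb_points, streamline.shape[1]))
    for d in range(streamline.shape[1]):
        # linear interpolation of each coordinate against arc length
        resampled[:, d] = np.interp(target, cum, streamline[:, d])
    return resampled

# A streamline with very uneven step sizes along a straight line:
sl = np.array([[0, 0, 0], [0.1, 0, 0], [5, 0, 0], [10, 0, 0]], dtype=float)
rs = set_number_of_points_np(sl, 5)
print(rs[:, 0])  # equidistant x coordinates: [0, 2.5, 5, 7.5, 10]
```

After resampling every streamline in both bundles this way, StreamlineLinearRegistration can be applied as in the example.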
This is shown in the example, but indeed there is no explanation for why (I should add a comment there): http://nipy.org/dipy/examples_built/bundle_registration.html#example-bundle-registration

Cheers,
Eleftherios

From vikasmishra95 at gmail.com Wed Feb 10 19:12:41 2016
From: vikasmishra95 at gmail.com (Vikas Mishra)
Date: Thu, 11 Feb 2016 05:42:41 +0530
Subject: [Neuroimaging] Fwd: Interested in the project "Continuous quality assurance (QA) in cloud computing environment" for GSoC 2016
In-Reply-To: References: Message-ID:

Hi everyone,

I'm an undergraduate student at BITS Pilani, Goa Campus, India and an active open source contributor. I went through all the published project ideas and liked the one titled "Continuous quality assurance (QA) in cloud computing environment", since it matches my interests and the skill set from my past experience. I participated in Google Summer of Code 2014, contributing to OSGeo, and have been an active contributor on Mozilla's automation and tools team for more than 2 years; I am a core contributor now. I've worked on several projects at Mozilla involving regression testing, so I already have experience in QA, specifically in regression and performance testing. I have very basic knowledge of diffusion MRI, but it would be great to learn more about it.

It would be great if Ariel and Eleftherios could take some time to discuss the project in detail, and I would be grateful if someone could point me to some easy-to-start issues to familiarize myself with the code base before the project begins.
Github: @mishravikas

Here's a query about me on Bugzilla which shows my contributions at Mozilla.

Thanks and Cheers!
Vikas Mishra
MSc(Hons) Economics + B.E. Electronics and Electrical Engineering
Birla Institute of Technology & Science, Pilani
KK Birla Goa Campus
8412898899 | vikasmishra95 at gmail.com

From vandiver.l.chaplin at vanderbilt.edu Mon Feb 15 14:34:48 2016
From: vandiver.l.chaplin at vanderbilt.edu (Chaplin, Vandiver)
Date: Mon, 15 Feb 2016 19:34:48 +0000
Subject: [Neuroimaging] API doc links broken?
Message-ID: <42D4925B5037D341BD1822B2143CC10CB87346B4@ITS-HCWNEM108.ds.vanderbilt.edu>

Hi,

It appears that links in the nibabel API documentation section are broken: http://nipy.org/nibabel/api.html

Thanks for a great set of Python routines!

_____________________________________
Vandiver L. Chaplin, PhD Candidate
Laboratory for Acoustic Therapy and Imaging
Vanderbilt University Institute of Imaging Science
(615) 322-8835

From matthew.brett at gmail.com Mon Feb 15 20:18:50 2016
From: matthew.brett at gmail.com (Matthew Brett)
Date: Mon, 15 Feb 2016 17:18:50 -0800
Subject: [Neuroimaging] API doc links broken?
In-Reply-To: <42D4925B5037D341BD1822B2143CC10CB87346B4@ITS-HCWNEM108.ds.vanderbilt.edu> References: <42D4925B5037D341BD1822B2143CC10CB87346B4@ITS-HCWNEM108.ds.vanderbilt.edu> Message-ID:

Hi,

On Mon, Feb 15, 2016 at 11:34 AM, Chaplin, Vandiver wrote:
> Hi,
>
> It appears that links in the nibabel API documentation section are broken.
>
> http://nipy.org/nibabel/api.html

Thanks very much for taking time to report that - I think it's fixed now, but please let me know if it still doesn't look right.
Cheers,
Matthew

From fepegar at gmail.com Tue Feb 16 11:05:41 2016
From: fepegar at gmail.com (Fernando Pérez-García)
Date: Tue, 16 Feb 2016 17:05:41 +0100
Subject: [Neuroimaging] RGB Nifti
Message-ID:

Dear Nibabel experts,

I'm trying to create an RGB nifti image from a PNG file. I've been messing around with a working RGB nifti, in order to mimic the datatype found in its header. So far I've managed to convert the image pixel-wise with for loops, which is very slow. Do you know how this could be done in a faster way?

My code:

import numpy as np
import Image
import nibabel as nib

p = '/home/fernando/test/nii_rgb/TC1.png'
im = Image.open(p)

data = np.array(im)
data = np.rot90(data)
rgb = np.zeros((data.shape[0], data.shape[1], 1, 1), [('R', 'u1'), ('G', 'u1'), ('B', 'u1')])

for i in range(data.shape[0]):
    for j in range(data.shape[1]):
        rgb[i, j] = tuple(data[i, j, :])

nii = nib.Nifti1Image(rgb, np.eye(4))
nib.save(nii, p.replace('png', 'nii'))

Thanks in advance,

Fernando

From khamael at gmail.com Tue Feb 16 11:32:31 2016
From: khamael at gmail.com (paulo rodrigues)
Date: Tue, 16 Feb 2016 17:32:31 +0100
Subject: [Neuroimaging] RGB Nifti
In-Reply-To: References: Message-ID:

Hi Fernando,

Did you have a look at ANTs? It has some tools for that kind of operation: check ConvertScalarImageToRGB.

Cheers,
Paulo

On Tue, Feb 16, 2016 at 5:05 PM, Fernando Pérez-García wrote:
> Dear Nibabel experts,
>
> I'm trying to create an RGB nifti image from a PNG file. I've been messing
> around with a working RGB nifti, in order to mimic the datatype found in
> its header. So far I've managed to convert the image pixel-wise with for
> loops, which is very slow. Do you know how this could be done in a faster
> way?
> > > My code: > > import numpy as np > import Image > import nibabel as nib > > p = '/home/fernando/test/nii_rgb/TC1.png' > im = Image.open(p) > > data = np.array(im) > data = np.rot90(data) > rgb = np.zeros((data.shape[0], data.shape[1], 1, 1), [('R', 'u1'), ('G', > 'u1'), ('B', 'u1')]) > > for i in range(data.shape[0]): > for j in range(data.shape[1]): > rgb[i, j] = tuple(data[i, j, :]) > > nii = nib.Nifti1Image(rgb, np.eye(4)) > nib.save(nii, p.replace('png', 'nii')) > > > > Thanks in advance, > > Fernando > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fepegar at gmail.com Tue Feb 16 11:44:18 2016 From: fepegar at gmail.com (=?UTF-8?B?RmVybmFuZG8gUMOpcmV6LUdhcmPDrWE=?=) Date: Tue, 16 Feb 2016 17:44:18 +0100 Subject: [Neuroimaging] RGB Nifti In-Reply-To: References: Message-ID: Hi Paulo, Thanks for your response. It seems that ConvertScalarImageToRGB reads a gray nifti and makes an RGB nifti (using ITK) with a certain colormap. The image I'm reading is a PNG file already RGB, as you can see in my code, so I think ANTS won't help here. Cheers, Fernando 2016-02-16 17:32 GMT+01:00 paulo rodrigues : > Hi Fernando, > > Did you have a look at ants? It has some tools for that kind of > operations: check ConvertScalarImageToRGB > > Cheers, > Paulo > > On Tue, Feb 16, 2016 at 5:05 PM, Fernando P?rez-Garc?a > wrote: > >> Dear Nibabel experts, >> >> I'm trying to create an RGB nifti image from a PNG file. I've been >> messing around with a working RGB nifti, in order to mimic the datatype >> found in its header. So far I've managed to convert the image pixel-wise >> with for loops, which is very slow. Do you know how this could be done in a >> faster way? 
>> >> >> My code: >> >> import numpy as np >> import Image >> import nibabel as nib >> >> p = '/home/fernando/test/nii_rgb/TC1.png' >> im = Image.open(p) >> >> data = np.array(im) >> data = np.rot90(data) >> rgb = np.zeros((data.shape[0], data.shape[1], 1, 1), [('R', 'u1'), ('G', >> 'u1'), ('B', 'u1')]) >> >> for i in range(data.shape[0]): >> for j in range(data.shape[1]): >> rgb[i, j] = tuple(data[i, j, :]) >> >> nii = nib.Nifti1Image(rgb, np.eye(4)) >> nib.save(nii, p.replace('png', 'nii')) >> >> >> >> Thanks in advance, >> >> Fernando >> >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> >> > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arokem at gmail.com Tue Feb 16 11:46:33 2016 From: arokem at gmail.com (Ariel Rokem) Date: Tue, 16 Feb 2016 08:46:33 -0800 Subject: [Neuroimaging] RGB Nifti In-Reply-To: References: Message-ID: Hi Fernando, Just trying to understand what you are trying to achieve: On Tue, Feb 16, 2016 at 8:44 AM, Fernando P?rez-Garc?a wrote: > Hi Paulo, > > Thanks for your response. It seems that ConvertScalarImageToRGB reads a > gray nifti and makes an RGB nifti (using ITK) with a certain colormap. The > image I'm reading is a PNG file already RGB, as you can see in my code, so > I think ANTS won't help here. > > > Cheers, > > Fernando > > 2016-02-16 17:32 GMT+01:00 paulo rodrigues : > >> Hi Fernando, >> >> Did you have a look at ants? It has some tools for that kind of >> operations: check ConvertScalarImageToRGB >> >> Cheers, >> Paulo >> >> On Tue, Feb 16, 2016 at 5:05 PM, Fernando P?rez-Garc?a > > wrote: >> >>> Dear Nibabel experts, >>> >>> I'm trying to create an RGB nifti image from a PNG file. 
I've been >>> messing around with a working RGB nifti, in order to mimic the datatype >>> found in its header. So far I've managed to convert the image pixel-wise >>> with for loops, which is very slow. Do you know how this could be done in a >>> faster way? >>> >>> >>> My code: >>> >>> import numpy as np >>> import Image >>> import nibabel as nib >>> >>> p = '/home/fernando/test/nii_rgb/TC1.png' >>> im = Image.open(p) >>> >>> data = np.array(im) >>> data = np.rot90(data) >>> >> What is `data.shape` at this point? Why wouldn't the following work? nii = nib.Nifti1Image(data, np.eye(4)) nib.save(nii, p.replace('png', 'nii')) Cheers, Ariel > rgb = np.zeros((data.shape[0], data.shape[1], 1, 1), [('R', 'u1'), ('G', >>> 'u1'), ('B', 'u1')]) >>> >>> for i in range(data.shape[0]): >>> for j in range(data.shape[1]): >>> rgb[i, j] = tuple(data[i, j, :]) >>> >>> nii = nib.Nifti1Image(rgb, np.eye(4)) >>> nib.save(nii, p.replace('png', 'nii')) >>> >>> >>> >>> Thanks in advance, >>> >>> Fernando >>> >>> _______________________________________________ >>> Neuroimaging mailing list >>> Neuroimaging at python.org >>> https://mail.python.org/mailman/listinfo/neuroimaging >>> >>> >> >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> >> > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fepegar at gmail.com Tue Feb 16 11:53:30 2016 From: fepegar at gmail.com (=?UTF-8?B?RmVybmFuZG8gUMOpcmV6LUdhcmPDrWE=?=) Date: Tue, 16 Feb 2016 17:53:30 +0100 Subject: [Neuroimaging] RGB Nifti In-Reply-To: References: Message-ID: Hi Ariel, data.shape is (5256, 3216, 3) at that point. 
If I do just nii = nib.Nifti1Image(data, np.eye(4)) nib.save(nii, p.replace('png', 'nii')), I'll get a 3D nifti image with three slices, with one value per pixel. I want a 2D nifti image with one slice, three values per pixel. I do accomplish what I want with my code, but it's not very efficient. Do you think I should ask in the NumPy or SciPy mailing list? Cheers, Fernando 2016-02-16 17:46 GMT+01:00 Ariel Rokem : > Hi Fernando, > > Just trying to understand what you are trying to achieve: > > On Tue, Feb 16, 2016 at 8:44 AM, Fernando P?rez-Garc?a > wrote: > >> Hi Paulo, >> >> Thanks for your response. It seems that ConvertScalarImageToRGB reads a >> gray nifti and makes an RGB nifti (using ITK) with a certain colormap. The >> image I'm reading is a PNG file already RGB, as you can see in my code, so >> I think ANTS won't help here. >> >> >> Cheers, >> >> Fernando >> >> 2016-02-16 17:32 GMT+01:00 paulo rodrigues : >> >>> Hi Fernando, >>> >>> Did you have a look at ants? It has some tools for that kind of >>> operations: check ConvertScalarImageToRGB >>> >>> Cheers, >>> Paulo >>> >>> On Tue, Feb 16, 2016 at 5:05 PM, Fernando P?rez-Garc?a < >>> fepegar at gmail.com> wrote: >>> >>>> Dear Nibabel experts, >>>> >>>> I'm trying to create an RGB nifti image from a PNG file. I've been >>>> messing around with a working RGB nifti, in order to mimic the datatype >>>> found in its header. So far I've managed to convert the image pixel-wise >>>> with for loops, which is very slow. Do you know how this could be done in a >>>> faster way? >>>> >>>> >>>> My code: >>>> >>>> import numpy as np >>>> import Image >>>> import nibabel as nib >>>> >>>> p = '/home/fernando/test/nii_rgb/TC1.png' >>>> im = Image.open(p) >>>> >>>> data = np.array(im) >>>> data = np.rot90(data) >>>> >>> > What is `data.shape` at this point? > > Why wouldn't the following work? 
> > nii = nib.Nifti1Image(data, np.eye(4)) > nib.save(nii, p.replace('png', 'nii')) > > Cheers, > > Ariel > > >> rgb = np.zeros((data.shape[0], data.shape[1], 1, 1), [('R', 'u1'), ('G', >>>> 'u1'), ('B', 'u1')]) >>>> >>>> for i in range(data.shape[0]): >>>> for j in range(data.shape[1]): >>>> rgb[i, j] = tuple(data[i, j, :]) >>>> >>>> nii = nib.Nifti1Image(rgb, np.eye(4)) >>>> nib.save(nii, p.replace('png', 'nii')) >>>> >>>> >>>> >>>> Thanks in advance, >>>> >>>> Fernando >>>> >>>> _______________________________________________ >>>> Neuroimaging mailing list >>>> Neuroimaging at python.org >>>> https://mail.python.org/mailman/listinfo/neuroimaging >>>> >>>> >>> >>> _______________________________________________ >>> Neuroimaging mailing list >>> Neuroimaging at python.org >>> https://mail.python.org/mailman/listinfo/neuroimaging >>> >>> >> >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> >> > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arokem at gmail.com Tue Feb 16 12:17:15 2016 From: arokem at gmail.com (Ariel Rokem) Date: Tue, 16 Feb 2016 09:17:15 -0800 Subject: [Neuroimaging] RGB Nifti In-Reply-To: References: Message-ID: Gotcha. On Tue, Feb 16, 2016 at 8:53 AM, Fernando P?rez-Garc?a wrote: > Hi Ariel, > > data.shape is (5256, 3216, 3) at that point. > > If I do just > nii = nib.Nifti1Image(data, np.eye(4)) > nib.save(nii, p.replace('png', 'nii')), > > I'll get a 3D nifti image with three slices, with one value per pixel. I > want a 2D nifti image with one slice, three values per pixel. I do > accomplish what I want with my code, but it's not very efficient. Do you > think I should ask in the NumPy or SciPy mailing list? 
> You can certainly ask on these lists as well -- lots of knowledgable people there. In the meanwhile, here's what I have managed to dig up on SO. Something along these lines might work: http://stackoverflow.com/a/10016379/3532933 But I don't have the full solution quite yet. Ariel > > Cheers, > > Fernando > > 2016-02-16 17:46 GMT+01:00 Ariel Rokem : > >> Hi Fernando, >> >> Just trying to understand what you are trying to achieve: >> >> On Tue, Feb 16, 2016 at 8:44 AM, Fernando P?rez-Garc?a > > wrote: >> >>> Hi Paulo, >>> >>> Thanks for your response. It seems that ConvertScalarImageToRGB reads a >>> gray nifti and makes an RGB nifti (using ITK) with a certain colormap. The >>> image I'm reading is a PNG file already RGB, as you can see in my code, so >>> I think ANTS won't help here. >>> >>> >>> Cheers, >>> >>> Fernando >>> >>> 2016-02-16 17:32 GMT+01:00 paulo rodrigues : >>> >>>> Hi Fernando, >>>> >>>> Did you have a look at ants? It has some tools for that kind of >>>> operations: check ConvertScalarImageToRGB >>>> >>>> Cheers, >>>> Paulo >>>> >>>> On Tue, Feb 16, 2016 at 5:05 PM, Fernando P?rez-Garc?a < >>>> fepegar at gmail.com> wrote: >>>> >>>>> Dear Nibabel experts, >>>>> >>>>> I'm trying to create an RGB nifti image from a PNG file. I've been >>>>> messing around with a working RGB nifti, in order to mimic the datatype >>>>> found in its header. So far I've managed to convert the image pixel-wise >>>>> with for loops, which is very slow. Do you know how this could be done in a >>>>> faster way? >>>>> >>>>> >>>>> My code: >>>>> >>>>> import numpy as np >>>>> import Image >>>>> import nibabel as nib >>>>> >>>>> p = '/home/fernando/test/nii_rgb/TC1.png' >>>>> im = Image.open(p) >>>>> >>>>> data = np.array(im) >>>>> data = np.rot90(data) >>>>> >>>> >> What is `data.shape` at this point? >> >> Why wouldn't the following work? 
>> >> nii = nib.Nifti1Image(data, np.eye(4)) >> nib.save(nii, p.replace('png', 'nii')) >> >> Cheers, >> >> Ariel >> >> >>> rgb = np.zeros((data.shape[0], data.shape[1], 1, 1), [('R', 'u1'), ('G', >>>>> 'u1'), ('B', 'u1')]) >>>>> >>>>> for i in range(data.shape[0]): >>>>> for j in range(data.shape[1]): >>>>> rgb[i, j] = tuple(data[i, j, :]) >>>>> >>>>> nii = nib.Nifti1Image(rgb, np.eye(4)) >>>>> nib.save(nii, p.replace('png', 'nii')) >>>>> >>>>> >>>>> >>>>> Thanks in advance, >>>>> >>>>> Fernando >>>>> >>>>> _______________________________________________ >>>>> Neuroimaging mailing list >>>>> Neuroimaging at python.org >>>>> https://mail.python.org/mailman/listinfo/neuroimaging >>>>> >>>>> >>>> >>>> _______________________________________________ >>>> Neuroimaging mailing list >>>> Neuroimaging at python.org >>>> https://mail.python.org/mailman/listinfo/neuroimaging >>>> >>>> >>> >>> _______________________________________________ >>> Neuroimaging mailing list >>> Neuroimaging at python.org >>> https://mail.python.org/mailman/listinfo/neuroimaging >>> >>> >> >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> >> > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Tue Feb 16 12:58:41 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Tue, 16 Feb 2016 09:58:41 -0800 Subject: [Neuroimaging] RGB Nifti In-Reply-To: References: Message-ID: Hi, On Tue, Feb 16, 2016 at 9:17 AM, Ariel Rokem wrote: > Gotcha. > > On Tue, Feb 16, 2016 at 8:53 AM, Fernando P?rez-Garc?a > wrote: >> >> Hi Ariel, >> >> data.shape is (5256, 3216, 3) at that point. 
>> >> If I do just >> nii = nib.Nifti1Image(data, np.eye(4)) >> nib.save(nii, p.replace('png', 'nii')), >> >> I'll get a 3D nifti image with three slices, with one value per pixel. I >> want a 2D nifti image with one slice, three values per pixel. I do >> accomplish what I want with my code, but it's not very efficient. Do you >> think I should ask in the NumPy or SciPy mailing list? > > > You can certainly ask on these lists as well -- lots of knowledgable people > there. > > In the meanwhile, here's what I have managed to dig up on SO. Something > along these lines might work: > > http://stackoverflow.com/a/10016379/3532933 > > But I don't have the full solution quite yet. How about: dt = np.dtype(zip('RGB', ('u1',) * 3)) rgb_array = data.view(dt) # You may need data.copy().view nii = nib.Nifti1Image(rgb_array, np.eye(4)) ? Cheers, Matthew From fepegar at gmail.com Wed Feb 17 04:41:20 2016 From: fepegar at gmail.com (=?UTF-8?B?RmVybmFuZG8gUMOpcmV6LUdhcmPDrWE=?=) Date: Wed, 17 Feb 2016 10:41:20 +0100 Subject: [Neuroimaging] RGB Nifti In-Reply-To: References: Message-ID: Hi Matthew, Your solution worked (using copy). It took about 1 second, instead of the 40 it took using loops. Thanks a lot for your help. Best, Fernando 2016-02-16 18:58 GMT+01:00 Matthew Brett : > Hi, > > On Tue, Feb 16, 2016 at 9:17 AM, Ariel Rokem wrote: > > Gotcha. > > > > On Tue, Feb 16, 2016 at 8:53 AM, Fernando P?rez-Garc?a < > fepegar at gmail.com> > > wrote: > >> > >> Hi Ariel, > >> > >> data.shape is (5256, 3216, 3) at that point. > >> > >> If I do just > >> nii = nib.Nifti1Image(data, np.eye(4)) > >> nib.save(nii, p.replace('png', 'nii')), > >> > >> I'll get a 3D nifti image with three slices, with one value per pixel. I > >> want a 2D nifti image with one slice, three values per pixel. I do > >> accomplish what I want with my code, but it's not very efficient. Do you > >> think I should ask in the NumPy or SciPy mailing list? 
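[Archive editor's note: Matthew's structured-dtype suggestion above can be spelled out as a short, self-contained sketch. The toy image, its shape, and the explicit `list(...)` around `zip` (needed on Python 3, where `zip` returns an iterator rather than a list) are editorial additions, not part of the original posts.]

```python
import numpy as np

# The vectorized trick from the thread: reinterpret an (H, W, 3) uint8 RGB
# array as an (H, W) array of ('R', 'G', 'B') records, with no per-pixel loop.
# np.dtype wants a concrete list of (name, format) pairs.
rgb_dtype = np.dtype(list(zip('RGB', ['u1'] * 3)))

# Small stand-in for `data = np.rot90(np.array(im))`; any uint8 RGB array works.
data = np.zeros((4, 5, 3), dtype=np.uint8)
data[..., 0] = 255  # a pure-red test image

# .view() needs contiguous memory; np.rot90 returns a non-contiguous view,
# which is why the thread reports needing data.copy() first.
rgb = data.copy().view(rgb_dtype)[..., 0]  # drop the length-1 trailing axis

# rgb.shape is now (4, 5): one 3-byte RGB record per pixel. Reshape to
# (H, W, 1, 1) if the 4D layout from the original post is wanted, then
# wrap as before: nib.Nifti1Image(rgb, np.eye(4)).
```

[The speed-up Fernando reports below (roughly 40 s down to 1 s) follows from `view` only reinterpreting the existing bytes: apart from the copy, no per-pixel work is done.]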
> > > > > > You can certainly ask on these lists as well -- lots of knowledgable > people > > there. > > > > In the meanwhile, here's what I have managed to dig up on SO. Something > > along these lines might work: > > > > http://stackoverflow.com/a/10016379/3532933 > > > > But I don't have the full solution quite yet. > > How about: > > dt = np.dtype(zip('RGB', ('u1',) * 3)) > rgb_array = data.view(dt) # You may need data.copy().view > nii = nib.Nifti1Image(rgb_array, np.eye(4)) > > ? > > Cheers, > > Matthew > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jetzel at wustl.edu Fri Feb 19 09:17:28 2016 From: jetzel at wustl.edu (Jo Etzel) Date: Fri, 19 Feb 2016 08:17:28 -0600 Subject: [Neuroimaging] call for papers: PRNI 2016 submission now open Message-ID: <56C723F8.6000507@wustl.edu> ******* please accept our apologies for cross-posting ******* ------------------------------------------------------------------------------ SECOND CALL FOR PAPERS: SUBMISSION NOW OPEN PRNI 2016 6th International Workshop on Pattern Recognition in Neuroimaging 22-24 June 2016 Fondazione Bruno Kessler (FBK), Trento, Italy www.prni.org - @PRNI2016 - www.facebook.com/PRNI2016/ ------------------------------------------------------------------------------ Paper submission opens: 18 February 2016 Paper submission deadline: 18 March 2016, 11:59 pm PST Acceptance notification: 22 April 2016 Camera-ready paper deadline: 7 May 2016 Oral and poster sessions: 22-24 June 2016 Pattern recognition techniques have become an important tool for neuroimaging data analysis. 
These techniques are helping to elucidate normal and abnormal brain function, cognition and perception, anatomical and functional brain architecture, biomarkers for diagnosis and personalized medicine, and as a scientific tool to decipher neural mechanisms underlying human cognition. The International Workshop on Pattern Recognition in Neuroimaging (PRNI) aims to: (1) foster dialogue between developers and users of cutting-edge analysis techniques in order to find matches between analysis techniques and neuroscientific questions; (2) showcase recent methodological advances in pattern recognition algorithms for neuroimaging analysis; and (3) identify challenging neuroscientific questions in need of new analysis approaches. PRNI welcomes submissions on topics including, but not limited to: * Learning from neuroimaging data - Algorithms for brain-state decoding or encoding - Optimization and regularization - Bayesian analysis of neuroimaging data - Causal inference and time delay techniques - Network and connectivity models (the connectome) - Dynamic and time-varying models - Dynamical systems and simulations - Empirical mode decomposition, multiscale decompositions - Combination of different data modalities - Efficient algorithms for large-scale data analysis * Interpretability of models and results - High-dimensional data visualization - Multivariate and multiple hypothesis testing - Summarization and presentation of inference results * Applications - Disease diagnosis and prognosis - Real-time decoding of brain states - Analysis of resting-state and task-based data - MEG, EEG, structural MRI, fMRI, diffusion MRI, ECoG, NIRS Authors should prepare full papers with a maximum length of 4 pages (two column IEEE style) for double-blind review. Manuscript submission is now open, and ends 18 March 2016. Accepted manuscripts will be assigned either to an oral or poster sessions; all accepted manuscripts will be included in the workshop proceedings. -- Joset A. Etzel, Ph.D. 
Research Analyst Cognitive Control & Psychopathology Lab Washington University in St. Louis http://mvpa.blogspot.com/ From matthew.brett at gmail.com Sat Feb 20 20:13:37 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Sat, 20 Feb 2016 17:13:37 -0800 Subject: [Neuroimaging] Back to nibabel Message-ID: Hi, Sorry to all you nibabel folks, I've been away doing other things for a while, and have been slow to get to PRs and so on. I have been working on binary packaging for numpy, scipy and so on, but it looks like that may be calming down now. I'm starting back today, with renewed nibabel energy - and I have a proposal for y'all - which is here : https://github.com/nipy/nibabel/issues/410 If you have a nibabel idea or question or issue or PR, please do book some of my time via that issue, and I will reply as soon as I can. Y'all with open PRs - I am working on those now... Hasta la victoria siempre (de Python en neuroimaging), Matthew From garyfallidis at gmail.com Sun Feb 21 16:05:24 2016 From: garyfallidis at gmail.com (Eleftherios Garyfallidis) Date: Sun, 21 Feb 2016 16:05:24 -0500 Subject: [Neuroimaging] DIPY 0.11.0 is now available for download Message-ID: Dear all, We are excited to announce a new public release of Diffusion Imaging in Python (DIPY). The 0.11 release follows closely on the heels of the 0.10 release, resolving issues that existed in that release on the Windows 64 bit platform. New features of the 0.11 and 0.10 release cycles include many additional fixes and new frameworks. Here are some of the highlights: DIPY 0.11.0 (Monday, 21 February 2016): - New framework for contextual enhancement of ODFs. - Compatibility with new version of numpy (1.11). - Compatibility with VTK 7.0 which supports Python 3.x. - Faster PIESNO for noise estimation. - Reorient gradient directions according to motion correction parameters. - Supporting Python 3.3+ but not 3.2. - Reduced memory usage in DTI prediction. - DSI now can use datasets with multiple b0s. 
- Fixed different issues with Windows 64bit and Python 3.5. DIPY 0.10.1 (Friday, 4 December 2015): - Compatibility with new versions of scipy (0.16) and numpy (1.10). - New cleaner visualization API, including compatibility with VTK 6, and functions to create your own interactive visualizations. - Diffusion Kurtosis Imaging (DKI): Google Summer of Code work by Rafael Henriques. - Mean Apparent Propagator (MAP) MRI for tissue microstructure estimation. - Anisotropic Power Maps from spherical harmonic coefficients. - A new framework for affine registration of images. Detailed release notes can be found here: http://dipy.org/release0.11.html http://dipy.org/release0.10.html To upgrade, run the following command in your terminal: pip install --upgrade dipy For the complete installation guide look here: http://dipy.org/installation.html For any questions go to http://dipy.org, or https://neurostars.org or send an e-mail to neuroimaging at python.org We also have a new instant messaging service and chat room available at https://gitter.im/nipy/dipy On behalf of the DIPY developers, Eleftherios Garyfallidis & Ariel Rokem http://dipy.org/developers.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From bcipolli at ucsd.edu Sun Feb 21 16:12:55 2016 From: bcipolli at ucsd.edu (Ben Cipollini) Date: Sun, 21 Feb 2016 13:12:55 -0800 Subject: [Neuroimaging] DIPY 0.11.0 is now available for download In-Reply-To: References: Message-ID: You guys rock as always! Very cool, and congrats! On Sun, Feb 21, 2016 at 1:05 PM, Eleftherios Garyfallidis < garyfallidis at gmail.com> wrote: > Dear all, > > > We are excited to announce a new public release of Diffusion Imaging in > Python (DIPY). > > The 0.11 release follows closely on the heels of the 0.10 release, > resolving issues that existed in that release on the Windows 64 bit > platform. New features of the 0.11 and 0.10 release cycles include many > additional fixes and new frameworks. 
Here are some of the highlights: > > DIPY 0.11.0 (Monday, 21 February 2016): > > - New framework for contextual enhancement of ODFs. > > - Compatibility with new version of numpy (1.11). > > - Compatibility with VTK 7.0 which supports Python 3.x. > > - Faster PIESNO for noise estimation. > > - Reorient gradient directions according to motion correction parameters. > > - Supporting Python 3.3+ but not 3.2. > > - Reduced memory usage in DTI prediction. > > - DSI now can use datasets with multiple b0s. > > - Fixed different issues with Windows 64bit and Python 3.5. > > DIPY 0.10.1 (Friday, 4 December 2015): > > - Compatibility with new versions of scipy (0.16) and numpy (1.10). > > - New cleaner visualization API, including compatibility with VTK 6, and > functions to create your own interactive visualizations. > > - Diffusion Kurtosis Imaging (DKI): Google Summer of Code work by Rafael > Henriques. > > - Mean Apparent Propagator (MAP) MRI for tissue microstructure estimation. > > - Anisotropic Power Maps from spherical harmonic coefficients. > > - A new framework for affine registration of images. 
> > Detailed release notes can be found here: > > http://dipy.org/release0.11.html > > http://dipy.org/release0.10.html > > To upgrade, run the following command in your terminal: > > > pip install --upgrade dipy > > For the complete installation guide look here: > > http://dipy.org/installation.html > > For any questions go to http://dipy.org, or https://neurostars.org or > send an e-mail to neuroimaging at python.org > > We also have a new instant messaging service and chat room available at > https://gitter.im/nipy/dipy > > On behalf of the DIPY developers, > > Eleftherios Garyfallidis & Ariel Rokem > > http://dipy.org/developers.html > > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From garyfallidis at gmail.com Sun Feb 21 16:21:20 2016 From: garyfallidis at gmail.com (Eleftherios Garyfallidis) Date: Sun, 21 Feb 2016 16:21:20 -0500 Subject: [Neuroimaging] DIPY 0.11.0 is now available for download In-Reply-To: References: Message-ID: Hi Yarik, This is the doclink for Neurodebian. https://dl.dropboxusercontent.com/u/2481924/dipy-0.11.0-doc-examples.tar.gz Yarik, is it possible to update the overview of the project? We have moved much beyond of what is written here http://neuro.debian.net/pkgs/python-dipy.html Which file do I need to edit? Or change to update the overview in Neurodebian? Cheers, Eleftherios On Sun, Feb 21, 2016 at 4:05 PM, Eleftherios Garyfallidis < garyfallidis at gmail.com> wrote: > Dear all, > > > We are excited to announce a new public release of Diffusion Imaging in > Python (DIPY). > > The 0.11 release follows closely on the heels of the 0.10 release, > resolving issues that existed in that release on the Windows 64 bit > platform. New features of the 0.11 and 0.10 release cycles include many > additional fixes and new frameworks. 
Here are some of the highlights: > > DIPY 0.11.0 (Monday, 21 February 2016): > > - New framework for contextual enhancement of ODFs. > > - Compatibility with new version of numpy (1.11). > > - Compatibility with VTK 7.0 which supports Python 3.x. > > - Faster PIESNO for noise estimation. > > - Reorient gradient directions according to motion correction parameters. > > - Supporting Python 3.3+ but not 3.2. > > - Reduced memory usage in DTI prediction. > > - DSI now can use datasets with multiple b0s. > > - Fixed different issues with Windows 64bit and Python 3.5. > > DIPY 0.10.1 (Friday, 4 December 2015): > > - Compatibility with new versions of scipy (0.16) and numpy (1.10). > > - New cleaner visualization API, including compatibility with VTK 6, and > functions to create your own interactive visualizations. > > - Diffusion Kurtosis Imaging (DKI): Google Summer of Code work by Rafael > Henriques. > > - Mean Apparent Propagator (MAP) MRI for tissue microstructure estimation. > > - Anisotropic Power Maps from spherical harmonic coefficients. > > - A new framework for affine registration of images. > > Detailed release notes can be found here: > > http://dipy.org/release0.11.html > > http://dipy.org/release0.10.html > > To upgrade, run the following command in your terminal: > > > pip install --upgrade dipy > > For the complete installation guide look here: > > http://dipy.org/installation.html > > For any questions go to http://dipy.org, or https://neurostars.org or > send an e-mail to neuroimaging at python.org > > We also have a new instant messaging service and chat room available at > https://gitter.im/nipy/dipy > > On behalf of the DIPY developers, > > Eleftherios Garyfallidis & Ariel Rokem > > http://dipy.org/developers.html > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From garyfallidis at gmail.com Sun Feb 21 16:22:43 2016 From: garyfallidis at gmail.com (Eleftherios Garyfallidis) Date: Sun, 21 Feb 2016 16:22:43 -0500 Subject: [Neuroimaging] DIPY 0.11.0 is now available for download In-Reply-To: References: Message-ID: On Sun, Feb 21, 2016 at 4:12 PM, Ben Cipollini wrote: > You guys rock as always! Very cool, and congrats! > > Thank you Ben. How nice of you. Have a great day. :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at onerussian.com Sun Feb 21 18:48:45 2016 From: lists at onerussian.com (Yaroslav Halchenko) Date: Sun, 21 Feb 2016 18:48:45 -0500 Subject: [Neuroimaging] DIPY 0.11.0 is now available for download In-Reply-To: References: Message-ID: <20160221234845.GX7904@onerussian.com> On Sun, 21 Feb 2016, Eleftherios Garyfallidis wrote: > Hi Yarik,A > This is the doclink for Neurodebian. > https://dl.dropboxusercontent.com/u/2481924/dipy-0.11.0-doc-examples.tar.gz awesome -- thanks! downloading now > Yarik, is it possible to update the overview of the project? We have moved > much beyond of what is written here > http://neuro.debian.net/pkgs/python-dipy.html > Which file do I need to edit? Or change to update the overview in > Neurodebian? this one: https://github.com/neurodebian/dipy/blob/debian/debian/control I will wait then with package update until a new version/PR THANKS and congrats! and cheers! btw -- are you reserving a gratis exhibit table for OHBM this year??? -- Yaroslav O. Halchenko Center for Open Neuroscience http://centerforopenneuroscience.org Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 WWW: http://www.linkedin.com/in/yarik From satra at mit.edu Sun Feb 21 19:23:48 2016 From: satra at mit.edu (Satrajit Ghosh) Date: Sun, 21 Feb 2016 19:23:48 -0500 Subject: [Neuroimaging] DIPY 0.11.0 is now available for download In-Reply-To: References: Message-ID: wonderful! 
looking forward to using this. cheers, satra On Sun, Feb 21, 2016 at 4:05 PM, Eleftherios Garyfallidis < garyfallidis at gmail.com> wrote: > Dear all, > > > We are excited to announce a new public release of Diffusion Imaging in > Python (DIPY). > > The 0.11 release follows closely on the heels of the 0.10 release, > resolving issues that existed in that release on the Windows 64 bit > platform. New features of the 0.11 and 0.10 release cycles include many > additional fixes and new frameworks. Here are some of the highlights: > > DIPY 0.11.0 (Monday, 21 February 2016): > > - New framework for contextual enhancement of ODFs. > > - Compatibility with new version of numpy (1.11). > > - Compatibility with VTK 7.0 which supports Python 3.x. > > - Faster PIESNO for noise estimation. > > - Reorient gradient directions according to motion correction parameters. > > - Supporting Python 3.3+ but not 3.2. > > - Reduced memory usage in DTI prediction. > > - DSI now can use datasets with multiple b0s. > > - Fixed different issues with Windows 64bit and Python 3.5. > > DIPY 0.10.1 (Friday, 4 December 2015): > > - Compatibility with new versions of scipy (0.16) and numpy (1.10). > > - New cleaner visualization API, including compatibility with VTK 6, and > functions to create your own interactive visualizations. > > - Diffusion Kurtosis Imaging (DKI): Google Summer of Code work by Rafael > Henriques. > > - Mean Apparent Propagator (MAP) MRI for tissue microstructure estimation. > > - Anisotropic Power Maps from spherical harmonic coefficients. > > - A new framework for affine registration of images. 
> > Detailed release notes can be found here: > > http://dipy.org/release0.11.html > > http://dipy.org/release0.10.html > > To upgrade, run the following command in your terminal: > > > pip install --upgrade dipy > > For the complete installation guide look here: > > http://dipy.org/installation.html > > For any questions go to http://dipy.org, or https://neurostars.org or > send an e-mail to neuroimaging at python.org > > We also have a new instant messaging service and chat room available at > https://gitter.im/nipy/dipy > > On behalf of the DIPY developers, > > Eleftherios Garyfallidis & Ariel Rokem > > http://dipy.org/developers.html > > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From krzysztof.gorgolewski at gmail.com Mon Feb 22 11:27:33 2016 From: krzysztof.gorgolewski at gmail.com (Chris Filo Gorgolewski) Date: Mon, 22 Feb 2016 08:27:33 -0800 Subject: [Neuroimaging] Nipype is looking for your feedback Message-ID: Dear all, We are trying to figure out which features of Nipype we should focus in the upcoming year. Please help us by filling in this very short wiki survey: http://www.allourideas.org/nipype Best, Chris -------------- next part -------------- An HTML attachment was scrubbed... URL: From arokem at gmail.com Mon Feb 22 13:29:59 2016 From: arokem at gmail.com (Ariel Rokem) Date: Mon, 22 Feb 2016 10:29:59 -0800 Subject: [Neuroimaging] DIPY 0.11.0 is now available for download In-Reply-To: <20160221234845.GX7904@onerussian.com> References: <20160221234845.GX7904@onerussian.com> Message-ID: On Sun, Feb 21, 2016 at 3:48 PM, Yaroslav Halchenko wrote: > > On Sun, 21 Feb 2016, Eleftherios Garyfallidis wrote: > > > Hi Yarik,A > > This is the doclink for Neurodebian. 
> > > https://dl.dropboxusercontent.com/u/2481924/dipy-0.11.0-doc-examples.tar.gz > > awesome -- thanks! downloading now > > > Yarik, is it possible to update the overview of the project? We have > moved > > much beyond of what is written here > > http://neuro.debian.net/pkgs/python-dipy.html > > Which file do I need to edit? Or change to update the overview in > > Neurodebian? > > this one: > https://github.com/neurodebian/dipy/blob/debian/debian/control > > I will wait then with package update until a new version/PR > > THANKS > and congrats! and cheers! > > Thank you! > btw -- are you reserving a gratis exhibit table for OHBM this year??? > > I am not planning to attend OHBM this year. Is anyone else from the Dipy devs planning to be there? > -- > Yaroslav O. Halchenko > Center for Open Neuroscience http://centerforopenneuroscience.org > Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 > Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 > WWW: http://www.linkedin.com/in/yarik > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vsochat at stanford.edu Mon Feb 22 13:26:27 2016 From: vsochat at stanford.edu (vanessa sochat) Date: Mon, 22 Feb 2016 10:26:27 -0800 Subject: [Neuroimaging] Nipype is looking for your feedback In-Reply-To: References: Message-ID: Graphical user interface! +1 +1 +1 On Mon, Feb 22, 2016 at 8:27 AM, Chris Filo Gorgolewski < krzysztof.gorgolewski at gmail.com> wrote: > Dear all, > We are trying to figure out which features of Nipype we should focus in > the upcoming year. 
Please help us by filling in this very short wiki > survey: http://www.allourideas.org/nipype > > Best, > Chris > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -- Vanessa Villamia Sochat Stanford University (603) 321-0676 -------------- next part -------------- An HTML attachment was scrubbed... URL: From pellman.john at gmail.com Mon Feb 22 13:43:26 2016 From: pellman.john at gmail.com (John Pellman) Date: Mon, 22 Feb 2016 13:43:26 -0500 Subject: [Neuroimaging] Nipype is looking for your feedback In-Reply-To: References: Message-ID: There's a potential for code reuse wrt a GUI as well, as evidenced by this project that a couple people have starred: https://github.com/belevtsoff/earlPipeline 2016-02-22 13:26 GMT-05:00 vanessa sochat : > Graphical user interface! +1 +1 +1 > > On Mon, Feb 22, 2016 at 8:27 AM, Chris Filo Gorgolewski < > krzysztof.gorgolewski at gmail.com> wrote: > >> Dear all, >> We are trying to figure out which features of Nipype we should focus in >> the upcoming year. Please help us by filling in this very short wiki >> survey: http://www.allourideas.org/nipype >> >> Best, >> Chris >> >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> >> > > > -- > Vanessa Villamia Sochat > Stanford University > (603) 321-0676 > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From satra at mit.edu Mon Feb 22 13:38:15 2016 From: satra at mit.edu (Satrajit Ghosh) Date: Mon, 22 Feb 2016 13:38:15 -0500 Subject: [Neuroimaging] Nipype is looking for your feedback In-Reply-To: References: Message-ID: hi vanessa, it's coming by the end of this semester. cheers, satra On Mon, Feb 22, 2016 at 1:26 PM, vanessa sochat wrote: > Graphical user interface! +1 +1 +1 > > On Mon, Feb 22, 2016 at 8:27 AM, Chris Filo Gorgolewski < > krzysztof.gorgolewski at gmail.com> wrote: > >> Dear all, >> We are trying to figure out which features of Nipype we should focus in >> the upcoming year. Please help us by filling in this very short wiki >> survey: http://www.allourideas.org/nipype >> >> Best, >> Chris >> >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> >> > > > -- > Vanessa Villamia Sochat > Stanford University > (603) 321-0676 > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From satra at mit.edu Mon Feb 22 13:50:51 2016 From: satra at mit.edu (Satrajit Ghosh) Date: Mon, 22 Feb 2016 13:50:51 -0500 Subject: [Neuroimaging] Nipype is looking for your feedback In-Reply-To: References: Message-ID: hi john, yes - we are building off of those ideas, but will likely generate a pure client side version that talks to a backend through a RESTful service. the technical challenges that are being solved are: - how to design a good interaction interface around complex workflows - how to visualize the execution graphs interactively - how to share and interact collaboratively as soon as the students get going, we will post to this list for feedback. 
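[Editorial note] For readers curious what the client/server split described above might look like, here is a minimal, self-contained sketch: a toy backend serves a workflow's execution graph as JSON over HTTP, which a pure client-side GUI could fetch and render. The endpoint path, payload shape, and node names are illustrative assumptions, not nipype's actual design.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A toy workflow graph: nodes and directed edges between them.
WORKFLOW = {
    "nodes": ["realign", "smooth", "model"],
    "edges": [["realign", "smooth"], ["smooth", "model"]],
}

class WorkflowAPI(BaseHTTPRequestHandler):
    """Serves the workflow graph at a single RESTful endpoint."""

    def do_GET(self):
        if self.path == "/workflow":
            body = json.dumps(WORKFLOW).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

# Bind to an ephemeral port and serve in the background.
server = HTTPServer(("127.0.0.1", 0), WorkflowAPI)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A client (here urllib, in practice a browser) fetches the graph as JSON.
url = "http://127.0.0.1:%d/workflow" % server.server_port
graph = json.loads(urllib.request.urlopen(url).read())
print(sorted(graph["nodes"]))  # -> ['model', 'realign', 'smooth']
server.shutdown()
```

A real service would add endpoints for execution status, sharing, and collaborative editing; the point is only that the graph travels as plain JSON, so any client-side renderer can consume it.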
cheers, satra On Mon, Feb 22, 2016 at 1:43 PM, John Pellman wrote: > There's a potential for code reuse wrt a GUI as well, as evidenced by this > project that a couple people have starred: > > https://github.com/belevtsoff/earlPipeline > > 2016-02-22 13:26 GMT-05:00 vanessa sochat : > >> Graphical user interface! +1 +1 +1 >> >> On Mon, Feb 22, 2016 at 8:27 AM, Chris Filo Gorgolewski < >> krzysztof.gorgolewski at gmail.com> wrote: >> >>> Dear all, >>> We are trying to figure out which features of Nipype we should focus in >>> the upcoming year. Please help us by filling in this very short wiki >>> survey: http://www.allourideas.org/nipype >>> >>> Best, >>> Chris >>> >>> _______________________________________________ >>> Neuroimaging mailing list >>> Neuroimaging at python.org >>> https://mail.python.org/mailman/listinfo/neuroimaging >>> >>> >> >> >> -- >> Vanessa Villamia Sochat >> Stanford University >> (603) 321-0676 >> >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> >> > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jbpoline at gmail.com Mon Feb 22 13:53:50 2016 From: jbpoline at gmail.com (JB Poline) Date: Mon, 22 Feb 2016 10:53:50 -0800 Subject: [Neuroimaging] DIPY 0.11.0 is now available for download In-Reply-To: References: <20160221234845.GX7904@onerussian.com> Message-ID: Hi, FYI, in case you missed it, there will be a hackathon before HBM : book the dates: June 23-25! cheers JB On Mon, Feb 22, 2016 at 10:29 AM, Ariel Rokem wrote: > > On Sun, Feb 21, 2016 at 3:48 PM, Yaroslav Halchenko > wrote: > >> >> On Sun, 21 Feb 2016, Eleftherios Garyfallidis wrote: >> >> > Hi Yarik,A >> > This is the doclink for Neurodebian. 
>> > >> https://dl.dropboxusercontent.com/u/2481924/dipy-0.11.0-doc-examples.tar.gz >> >> awesome -- thanks! downloading now >> >> > Yarik, is it possible to update the overview of the project? We have >> moved >> > much beyond of what is written here >> > http://neuro.debian.net/pkgs/python-dipy.html >> > Which file do I need to edit? Or change to update the overview in >> > Neurodebian? >> >> this one: >> https://github.com/neurodebian/dipy/blob/debian/debian/control >> >> I will wait then with package update until a new version/PR >> >> THANKS >> and congrats! and cheers! >> >> > Thank you! > > >> btw -- are you reserving a gratis exhibit table for OHBM this year??? >> >> > I am not planning to attend OHBM this year. Is anyone else from the Dipy > devs planning to be there? > > >> -- >> Yaroslav O. Halchenko >> Center for Open Neuroscience http://centerforopenneuroscience.org >> Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 >> Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 >> WWW: http://www.linkedin.com/in/yarik >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> > > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From satra at mit.edu Mon Feb 22 14:11:03 2016 From: satra at mit.edu (Satrajit Ghosh) Date: Mon, 22 Feb 2016 14:11:03 -0500 Subject: [Neuroimaging] ohbm events [was: Re: DIPY 0.11.0 is now available for download] Message-ID: hi jb, will this be in geneva or elsewhere? cheers, satra On Mon, Feb 22, 2016 at 1:53 PM, JB Poline wrote: > Hi, > > FYI, in case you missed it, there will be a hackathon before HBM : book > the dates: June 23-25! 
>
> cheers
> JB
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jbpoline at gmail.com Mon Feb 22 15:06:52 2016
From: jbpoline at gmail.com (JB Poline)
Date: Mon, 22 Feb 2016 12:06:52 -0800
Subject: [Neuroimaging] ohbm events [was: Re: DIPY 0.11.0 is now available for download]
In-Reply-To: 
References: 
Message-ID: 

Hi,

It is likely to be in Lausanne, a very easy and short train ride from/to Geneva. Pierre Bellec / Cameron Craddock / Nolan Nichols / Daniel Margules are the organizers.

cheers
JB

On Mon, Feb 22, 2016 at 11:11 AM, Satrajit Ghosh wrote:

> hi jb,
>
> will this be in geneva or elsewhere?
>
> cheers,
>
> satra
>
> On Mon, Feb 22, 2016 at 1:53 PM, JB Poline wrote:
>
>> Hi,
>>
>> FYI, in case you missed it, there will be a hackathon before HBM: book
>> the dates, June 23-25!
>>
>> cheers
>> JB
>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From garyfallidis at gmail.com Tue Feb 23 12:30:34 2016
From: garyfallidis at gmail.com (Eleftherios Garyfallidis)
Date: Tue, 23 Feb 2016 12:30:34 -0500
Subject: [Neuroimaging] DIPY 0.11.0 is now available for download
In-Reply-To: 
References: <20160221234845.GX7904@onerussian.com>
Message-ID: 

Hi all,

I have forwarded the announcement to all the lists that I knew of, and updated the website, Twitter and G+ page.

Twitter: https://twitter.com/dipymri
G+: http://bit.ly/dipymri

Let your friends know too: forward the announcement e-mail to groups you know, tweet it, etc.

Yarik, yes, I am planning to be at OHBM and get a table top, and I hope you will be too. Hopefully I will get into the Hackathon as well. JB, thanks for the reminder.

Finally, DIPY 0.11.0 is also available in NeuroDebian. Make sure you upgrade -- many fixes and new features.

Thanks to all for contributing. Special thanks to Stephan, Omar and Rafael for providing us with the new frameworks for contextual enhancement, affine registration and diffusion kurtosis, respectively.

Have a great day!
Eleftherios
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lists at onerussian.com Tue Feb 23 14:33:23 2016
From: lists at onerussian.com (Yaroslav Halchenko)
Date: Tue, 23 Feb 2016 14:33:23 -0500
Subject: [Neuroimaging] DIPY 0.11.0 is now available for download
In-Reply-To: 
References: <20160221234845.GX7904@onerussian.com>
Message-ID: <20160223193323.GM7904@onerussian.com>

On Tue, 23 Feb 2016, Eleftherios Garyfallidis wrote:

> Yarik, yes I am planning to be in OHBM and get a table top. And I hope you
> too. Hopefully, will get into the Hackathon too JB. Thanks for the
> reminder.

I am already "there":
http://www.humanbrainmapping.org/i4a/pages/index.cfm?pageid=3660
and also hoping to get to the Hackathon. So see you there -- it will be fun!

Cheers,
--
Yaroslav O. Halchenko
Center for Open Neuroscience     http://centerforopenneuroscience.org
Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755
Phone: +1 (603) 646-9834   Fax: +1 (603) 646-1419
WWW: http://www.linkedin.com/in/yarik

From code at oscaresteban.es Tue Feb 23 22:21:58 2016
From: code at oscaresteban.es (Oscar Esteban)
Date: Tue, 23 Feb 2016 19:21:58 -0800
Subject: [Neuroimaging] [nipype] Developers roundup
Message-ID: 

Hi all,

I'm gauging interest in having a meeting to catch up with the progress, development and future of nipype. If interested, please let us know your availability on the following doodle poll:

http://doodle.com/poll/2ueny72ita7vpkem

Thanks a lot!

Cheers,
Oscar
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From arokem at gmail.com Wed Feb 24 15:36:03 2016 From: arokem at gmail.com (Ariel Rokem) Date: Wed, 24 Feb 2016 12:36:03 -0800 Subject: [Neuroimaging] Postdoc position in neuroimaging and data science at the University of Washington Message-ID: We are seeking scientists with a PhD in neuroscience, computer science, electrical engineering, statistics, psychology or related fields, and with an interest in human brain function and data science to apply for a position as a post-doc at the Institute for Learning & Brain Science (I-LABS) (http://depts.washington.edu/bdelab/) and the eScience Institute ( http://escience.washington.edu) at the University of Washington . The project focuses on the development of methods for analyzing multi-modal MRI data, and the application of these methods to questions pertaining to human brain development. The long-term goals of the project are the development and maintenance of software for the analysis of large openly available datasets of human MRI, and the extraction of valuable information about the biological basis of human cognitive abilities from these data. This involves developing new algorithms for the analysis of diffusion MRI, tools for harnessing the power of cloud computing to scale these tools to large datasets and the development of new statistical and modeling techniques that are tailored to the study of brain connections. The postdoc would have the opportunity to work within a large and international open-source development community (http://dipy.org), and would be encouraged to develop a portfolio of open and reproducible science. Suitable candidates should enjoy working in an interdisciplinary and collaborative environment, as the position sits at the intersection of the missions of eScience and I-LABS. 
There is one year of guaranteed funding for the position, and the opportunity to apply for extraordinary postdoctoral fellowships funded by the Washington Research Foundation, the Gordon & Betty Moore Foundation and the Alfred P. Sloan Foundation (deadline: July 15th), available through the University of Washington Institute for Neuroengineering and the eScience Institute: http://uwin.washington.edu/post-docs/apply-post-docs/ http://escience.washington.edu/postdoctoral-fellowships For inquiries please contact Prof. Yeatman (jyeatman at uw.edu) and Ariel Rokem (arokem at uw.edu). *The University of Washington is an affirmative action and equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, gender expression, national origin, age, protected veteran or disabled status, or genetic information.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From krzysztof.gorgolewski at gmail.com Thu Feb 25 00:50:21 2016 From: krzysztof.gorgolewski at gmail.com (Chris Filo Gorgolewski) Date: Wed, 24 Feb 2016 21:50:21 -0800 Subject: [Neuroimaging] Nipype is looking for your feedback In-Reply-To: References: Message-ID: BTW - whoever is in charge of the nipy twitter account - could you tweet about our survey? Thanks in advance! Best, Chris On Mon, Feb 22, 2016 at 10:50 AM, Satrajit Ghosh wrote: > hi john, > > yes - we are building off of those ideas, but will likely generate a pure > client side version that talks to a backend through a RESTful service. > > the technical challenges that are being solved are: > - how to design a good interaction interface around complex workflows > - how to visualize the execution graphs interactively > - how to share and interact collaboratively > > as soon as the students get going, we will post to this list for feedback. 
> > cheers, > > satra > > On Mon, Feb 22, 2016 at 1:43 PM, John Pellman > wrote: > >> There's a potential for code reuse wrt a GUI as well, as evidenced by >> this project that a couple people have starred: >> >> https://github.com/belevtsoff/earlPipeline >> >> 2016-02-22 13:26 GMT-05:00 vanessa sochat : >> >>> Graphical user interface! +1 +1 +1 >>> >>> On Mon, Feb 22, 2016 at 8:27 AM, Chris Filo Gorgolewski < >>> krzysztof.gorgolewski at gmail.com> wrote: >>> >>>> Dear all, >>>> We are trying to figure out which features of Nipype we should focus in >>>> the upcoming year. Please help us by filling in this very short wiki >>>> survey: http://www.allourideas.org/nipype >>>> >>>> Best, >>>> Chris >>>> >>>> _______________________________________________ >>>> Neuroimaging mailing list >>>> Neuroimaging at python.org >>>> https://mail.python.org/mailman/listinfo/neuroimaging >>>> >>>> >>> >>> >>> -- >>> Vanessa Villamia Sochat >>> Stanford University >>> (603) 321-0676 >>> >>> _______________________________________________ >>> Neuroimaging mailing list >>> Neuroimaging at python.org >>> https://mail.python.org/mailman/listinfo/neuroimaging >>> >>> >> >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> >> > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Thu Feb 25 01:38:30 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Wed, 24 Feb 2016 22:38:30 -0800 Subject: [Neuroimaging] Nipype is looking for your feedback In-Reply-To: References: Message-ID: On Wed, Feb 24, 2016 at 9:50 PM, Chris Filo Gorgolewski wrote: > BTW - whoever is in charge of the nipy twitter account - could you tweet > about our survey? 
Thanks in advance!

Not me I'm afraid - I don't know who is...

Matthew

From vsochat at gmail.com Thu Feb 25 02:11:32 2016
From: vsochat at gmail.com (vanessa s)
Date: Wed, 24 Feb 2016 23:11:32 -0800
Subject: [Neuroimaging] Nipype is looking for your feedback
In-Reply-To: 
References: 
Message-ID: 

ghosties...

https://twitter.com/nipyorg/status/702751722121330689
http://neuroimaging.tumblr.com/post/139958035476/nipype-is-looking-for-your-feedback

On Wed, Feb 24, 2016 at 10:38 PM, Matthew Brett wrote:

> On Wed, Feb 24, 2016 at 9:50 PM, Chris Filo Gorgolewski wrote:
> > BTW - whoever is in charge of the nipy twitter account - could you tweet
> > about our survey? Thanks in advance!
>
> Not me I'm afraid - I don't know who is...
>
> Matthew
> _______________________________________________
> Neuroimaging mailing list
> Neuroimaging at python.org
> https://mail.python.org/mailman/listinfo/neuroimaging

--
Vanessa Villamia Sochat
Stanford University '16
(603) 321-0676
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From code at oscaresteban.es Fri Feb 26 15:52:46 2016
From: code at oscaresteban.es (Oscar Esteban)
Date: Fri, 26 Feb 2016 12:52:46 -0800
Subject: [Neuroimaging] [nipype] Reminder: developers roundup
Message-ID: 

Hi all,

This is just a reminder for those interested in this meeting to catch up with the nipype development.

http://doodle.com/poll/2ueny72ita7vpkem

Best,
Oscar
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From arokem at gmail.com Mon Feb 29 11:53:13 2016
From: arokem at gmail.com (Ariel Rokem)
Date: Mon, 29 Feb 2016 08:53:13 -0800
Subject: [Neuroimaging] [dipy] Fitting diffusion models in the absence of S0 signal
Message-ID: 

Hi everyone,

In Rafael's recent PR implementing free-water-eliminated DTI (https://github.com/nipy/dipy/pull/835), we had a little bit of a discussion about the use of the non-diffusion-weighted signal (S0). As Rafael pointed out, when an S0 is absent from the measured data, for some models it can be derived from the model fit (https://github.com/nipy/dipy/pull/835#issuecomment-183060855).

I think that we would like to support using data both with and without S0. On the other hand, I don't think that we should treat the derived S0 as a model parameter, because in some cases we want to provide S0 as an input (for example, when predicting back the signal for another measurement, with a different ). In addition, it would be hard to incorporate it into the model_params variable of the TensorFit object while maintaining backwards compatibility of the TensorModel/TensorFit and derived classes (e.g., DKI).

My proposal is to have an S0 property for ReconstFit objects. When it is calculated from the model (e.g., in DTI), it gets set by the `fit` method of the ReconstModel object. When it isn't, it can be set from the data. Either way, it can be overridden by the user (e.g., for the purpose of predicting on a new data-set). This might change the behavior of the prediction code slightly, but maybe that is something we can live with?

Happy to hear what everyone thinks before we move ahead with this.

Cheers,
Ariel
-------------- next part --------------
An HTML attachment was scrubbed...
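[Editorial note] To make the proposal concrete, here is a toy sketch of the pattern being described: a fit object whose S0 may be derived by the model's `fit` or taken from the data, and can be overridden by the user before prediction. The class and attribute names follow the email's terminology loosely -- this is not dipy's actual implementation, and the mono-exponential decay stands in for a full tensor model.

```python
import math

class ReconstFit:
    """Toy fit object with an S0 property (illustrative, not dipy's API)."""

    def __init__(self, adc, s0=None):
        self.adc = adc    # apparent diffusion coefficient (mm^2/s)
        self._s0 = s0     # set by the model's fit(), or from the data

    @property
    def S0(self):
        return self._s0

    @S0.setter
    def S0(self, value):
        # User override, e.g. before predicting the signal for another
        # measurement where a different S0 applies.
        self._s0 = value

    def predict(self, bval):
        # Mono-exponential decay: S = S0 * exp(-b * ADC)
        return self.S0 * math.exp(-bval * self.adc)

fit = ReconstFit(adc=0.7e-3, s0=100.0)  # S0 derived or taken at fit time
fit.S0 = 150.0                           # override for a new data-set
print(round(fit.predict(1000.0), 2))     # -> 74.49
```

Because S0 lives outside the model-parameter array, the override does not disturb anything shaped like `model_params`, which is the backwards-compatibility concern raised above.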
URL: 

From dilipchandima at gmail.com Mon Feb 29 11:16:23 2016
From: dilipchandima at gmail.com (dileep Chandima)
Date: Mon, 29 Feb 2016 21:46:23 +0530
Subject: [Neuroimaging] Fwd: GSOC 2016 - Develop a new DIPY website
In-Reply-To: 
References: 
Message-ID: 

---------- Forwarded message ----------
From: dileep Chandima
Date: Mon, Feb 29, 2016 at 9:22 PM
Subject: GSOC 2016 - Develop a new DIPY website
To: gsoc-general at python.org, nipy-devel at scipy.org

Hi All,

I'm Dileepa Chandima, a final-year undergraduate student at the Faculty of Engineering, University of Peradeniya, Sri Lanka. I went through all the project ideas, and all of them are interesting. I decided to work on "Develop a new DIPY website with more interactive features" for GSOC 2016 because of the experience I gained on various projects during my internship. I have knowledge of Java, Python, RESTful APIs, HTML, Bootstrap, and CSS, and I strongly believe this project would also benefit my future career development.

I have already done the following to get more familiar with the DIPY website, which should help me adapt to the project:

- Subscribed to the 'GSOC general Community' dev mailing list and the 'NIPY Community' dev mailing list.
- Cloned the DIPY git repository (https://github.com/nipy/dipy.git).
- Built a small website using Django.
- Started going through the Django and Sphinx documentation.

Please be kind enough to provide more details about the project; I really appreciate your cooperation on this matter.

Thank you.

--
Dileepa Chandima
Dept of Computer Engineering
University of Peradeniya
linkedIn
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From arokem at gmail.com Mon Feb 29 13:16:18 2016 From: arokem at gmail.com (Ariel Rokem) Date: Mon, 29 Feb 2016 10:16:18 -0800 Subject: [Neuroimaging] Neurohackweek: a summer school for neuroimaging and data science, September 5th-9th 2016 Message-ID: We are happy to announce a call for applications to participate in the first installment of the Neurohackweek summer school for neuroimaging and data science. This 5 day hands-on workshop, held at the University of Washington eScience Institute in Seattle, will focus on technologies used to analyze human neuroimaging data, on methods used to extract information from large datasets of publicly available data (such as the Human Connectome Project, Open fMRI, etc.), and on tools for making neuroimaging research open and reproducible. Morning sessions will be devoted to lectures and tutorials, and afternoon sessions will be devoted to participant-directed activities: guided work on team projects, hackathon sessions, and breakout sessions on topics of interest. For more details, see: http://neurohackweek.github.io/ We are now accepting applications from researchers in different stages of their career (graduate students, postdocs, faculty, and research scientists) to participate at: http://escience.washington.edu/neurohackweek2016-application Accepted applicants will be asked to pay a fee of $200 upon final registration. This fee will include participation in the course, accommodation in the UW dorms, and two meals a day (breakfast and lunch), for the duration of the course. A limited number of fee waivers and travel grants will be available. 
We encourage students with financial need and students from groups that are underrepresented in neuroimaging and data science to apply for these grants (email applications to: arokem at uw.edu).

Important dates:

April 18th: Deadline for applications to participate
May 6th: Notification of acceptance
June 1st: Final registration deadline

On behalf of the instructors,

Ariel Rokem, UW eScience
Tal Yarkoni, UT Austin
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 