From AMAKSIMOVSKIY at MCLEAN.HARVARD.EDU Mon Sep 4 10:10:38 2017 From: AMAKSIMOVSKIY at MCLEAN.HARVARD.EDU (Maksimovskiy, Arkadiy) Date: Mon, 4 Sep 2017 14:10:38 +0000 Subject: [Neuroimaging] Integrating Freesurfer with Python Message-ID: <50F8AFCD-DF98-4F7A-A125-E988C0EC5041@mclean.harvard.edu> Hi Everyone, My apologies for the basic question. I am a beginner at integrating Freesurfer commands into python scripts, and was wondering if anyone has suggestions on what would be the best way to do it. Specifically, for a python program that uses Freesurfer commands, is it possible to source Freesurfer within the python script and then use the commands? Or does sourcing the environment need to happen prior to running such scripts? Any information would be greatly appreciated. Thank you, Arkadiy The information in this e-mail is intended only for the person to whom it is addressed. If you believe this e-mail was sent to you in error and the e-mail contains patient information, please contact the Partners Compliance HelpLine at http://www.partners.org/complianceline . If the e-mail was sent to you in error but does not contain patient information, please contact the sender and properly dispose of the e-mail. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at onerussian.com Mon Sep 4 10:24:56 2017 From: lists at onerussian.com (Yaroslav Halchenko) Date: Mon, 4 Sep 2017 10:24:56 -0400 Subject: [Neuroimaging] Integrating Freesurfer with Python In-Reply-To: <50F8AFCD-DF98-4F7A-A125-E988C0EC5041@mclean.harvard.edu> References: <50F8AFCD-DF98-4F7A-A125-E988C0EC5041@mclean.harvard.edu> Message-ID: <20170904142456.ov7wpeyo25egpt7r@hopa.kiewit.dartmouth.edu> On Mon, 04 Sep 2017, Maksimovskiy, Arkadiy wrote: > Hi Everyone, > My apologies for the basic question. I am a beginner at integrating > Freesurfer commands into python scripts, and was wondering if anyone has > suggestions on what would be the best way to do it. 
Specifically, for a > python program that uses Freesurfer commands, is it possible to source > Freesurfer within the python script and then use the commands? Or does > sourcing the environment need to happen prior to running such scripts? have you looked into using nipype? http://nipype.readthedocs.io/en/latest/ it provides interfaces for the majority of open source toolkits in neuroimaging (freesurfer included) and a number of precrafted analysis pipelines -- Yaroslav O. Halchenko Center for Open Neuroscience http://centerforopenneuroscience.org Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 WWW: http://www.linkedin.com/in/yarik From klarnemann at gmail.com Tue Sep 5 14:28:16 2017 From: klarnemann at gmail.com (Katelyn Arnemann) Date: Tue, 5 Sep 2017 11:28:16 -0700 Subject: [Neuroimaging] Integrating Freesurfer with Python In-Reply-To: <20170904142456.ov7wpeyo25egpt7r@hopa.kiewit.dartmouth.edu> References: <50F8AFCD-DF98-4F7A-A125-E988C0EC5041@mclean.harvard.edu> <20170904142456.ov7wpeyo25egpt7r@hopa.kiewit.dartmouth.edu> Message-ID: I'm not sure what exactly you want to accomplish, but here are some basic things I do: I run freesurfer commands in python as such:

import os
# recon-all: -i takes the input volume, -subjid the subject ID, -sd the subjects directory
cmd = 'recon-all -i %s -subjid %s -sd %s' % (dicom_file, subid, subid_directory)
os.system(cmd)

After converting freesurfer files to nifti, I open them using nibabel:

import nibabel as ni
data = ni.load(nifti_file).get_data()

On Mon, Sep 4, 2017 at 7:24 AM, Yaroslav Halchenko wrote: > > On Mon, 04 Sep 2017, Maksimovskiy, Arkadiy wrote: > > > Hi Everyone, > > > > > My apologies for the basic question. I am a beginner at integrating > > Freesurfer commands into python scripts, and was wondering if anyone > has > > suggestions on what would be the best way to do it. 
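The environment-sourcing part of the original question can also be handled inside Python by building the environment dict explicitly and passing it to subprocess, rather than sourcing SetUpFreeSurfer.sh beforehand. A sketch under assumptions: the paths below are placeholders for your installation, and the real SetUpFreeSurfer.sh configures more than just these variables (e.g. the bundled MNI tools), so this covers only what recon-all itself needs.

```python
import os
import shutil
import subprocess

def freesurfer_env(fs_home, subjects_dir):
    """Build a rough equivalent of sourcing SetUpFreeSurfer.sh: sets
    FREESURFER_HOME, SUBJECTS_DIR, and prepends the FreeSurfer bin
    directory to PATH.  (The real setup script configures more.)"""
    env = os.environ.copy()
    env["FREESURFER_HOME"] = fs_home
    env["SUBJECTS_DIR"] = subjects_dir
    env["PATH"] = os.path.join(fs_home, "bin") + os.pathsep + env.get("PATH", "")
    return env

# Placeholder paths; adjust to your installation and data.
env = freesurfer_env("/usr/local/freesurfer", "/data/subjects")
cmd = ["recon-all", "-i", "/data/sub01/dicoms/first.dcm", "-subjid", "sub01", "-all"]

# Launch only if recon-all is actually findable in the constructed PATH:
if shutil.which(cmd[0], path=env["PATH"]) is not None:
    subprocess.run(cmd, env=env, check=True)
```

With an explicit argument list there is no shell quoting to worry about, and the environment is set per call rather than globally for the Python process.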
Specifically, for > a > > python program that uses Freesurfer commands, is it possible to source > > Freesurfer within the python script and then use the commands? Or does > > sourcing the environment need to happen prior to running such scripts? > > have you looked into using nipype? > http://nipype.readthedocs.io/en/latest/ > it provides interfaces for the majority of open source toolkits in > neuroimaging (freesurfer included) and a number of precrafted analysis > pipelines > > -- > Yaroslav O. Halchenko > Center for Open Neuroscience http://centerforopenneuroscience.org > Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 > Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 > WWW: http://www.linkedin.com/in/yarik > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > -- Katelyn Arnemann (née Begany) PhD Candidate University of California, Berkeley -------------- next part -------------- An HTML attachment was scrubbed... URL: From jstadler at lin-magdeburg.de Thu Sep 7 08:49:36 2017 From: jstadler at lin-magdeburg.de (=?UTF-8?Q?J=c3=b6rg_Stadler?=) Date: Thu, 7 Sep 2017 14:49:36 +0200 Subject: [Neuroimaging] util.Function probs after update Message-ID: <7819794c-3e3f-ce64-973a-fbd9748a4a0a@lin-magdeburg.de> Hi, my script was running with nipype 0.13-dev and I thought it was a good idea to update to the latest dev version ;) I got stuck with an error from util.Function. After spending some time reading the docs I realized the change from inputs.function = function to inputs.function_str = function definition as a string This drives me crazy. Is there an easy way to have the old behavior or do I have to go the long way? https://nipype.readthedocs.io/en/latest/devel/python_interface_devel.html BTW: The docstring of util.wrappers.Function is outdated: function : callable callable python object. 
must be able to execute in an isolated namespace (possibly in concert with the ``imports`` parameter) is now function_str: .... Joerg From mwaskom at nyu.edu Fri Sep 8 13:57:53 2017 From: mwaskom at nyu.edu (Michael Waskom) Date: Fri, 8 Sep 2017 13:57:53 -0400 Subject: [Neuroimaging] util.Function probs after update In-Reply-To: <7819794c-3e3f-ce64-973a-fbd9748a4a0a@lin-magdeburg.de> References: <7819794c-3e3f-ce64-973a-fbd9748a4a0a@lin-magdeburg.de> Message-ID: Hi Joerg, The docstring for the class is correct. You can pass a callable object to the Function class constructor and it will internally get converted to a string before the interface input trait is set. As far as I remember, when the interface was written it was necessary to use strings (and not callable objects) as the actual trait values. I do not know whether that changed at some point ... perhaps it would help if you showed the code that used to work and now fails. Best, Michael On Thu, Sep 7, 2017 at 8:49 AM, Jörg Stadler wrote: > Hi, > > my script was running with nipype 0.13-dev and I thought it was a good > idea to update to the latest dev version ;) > > I got stuck with an error from util.Function. > > > After spending some time reading the docs I realized the change from > inputs.function = function > to > inputs.function_str = function definition as a string > > This drives me crazy. > > Is there an easy way to have the old behavior or do I have to go the > long way? > > https://nipype.readthedocs.io/en/latest/devel/python_interface_devel.html > > > BTW: The docstring of util.wrappers.Function is outdated: > > function : callable > callable python object. must be able to execute in an > isolated namespace (possibly in concert with the ``imports`` > parameter) > > is now > function_str: .... 
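Michael's point, that the trait stores the function definition as text which is later executed in an isolated namespace, can be illustrated with stdlib code alone. This is a rough sketch, not nipype's actual implementation, and the name `double` is just an example; a callable passed to the constructor is converted to such a string (via inspect.getsource) before the trait is set.

```python
# The function definition as a string, which is what inputs.function_str
# now expects:
function_str = "def double(x):\n    return 2 * x\n"

# Roughly how such a string is re-materialized in an isolated namespace:
namespace = {}
exec(function_str, namespace)
print(namespace["double"](21))  # 42
```

Storing source text rather than a live callable is what lets the function be shipped to and executed on a different worker process or machine.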
> > Joerg > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > -------------- next part -------------- An HTML attachment was scrubbed... URL: From elef at indiana.edu Sun Sep 10 13:31:37 2017 From: elef at indiana.edu (Eleftherios Garyfallidis) Date: Sun, 10 Sep 2017 17:31:37 +0000 Subject: [Neuroimaging] [Nibabel] Public dataset generates unexpected Message-ID: Hello Matthew and all, I downloaded a dataset from NITRC by Boekel et al. https://www.nitrc.org/projects/dwi_test-retest/ and I used nibabel to get the affine and voxel size. The authors claimed that the voxel size is 2x2x2 mm^3; however, the affine tells a different story.

import nibabel as nib

img = nib.load('pp26_dwi_run01_A.nii.gz')

print(img.affine)
[[ -1.999 0.047 0.077 108.292]
 [ 0.042 1.984 -0.471 -94.461]
 [ 0.047 0.251 3.703 -116.857]
 [ 0. 0. 0. 1. ]]

nib.affines.voxel_sizes(img.affine)
*array([ 2. , 2. , 3.733])*

The zoom function gives a different answer, in agreement with the authors' claim.

img.header.get_zooms()[:3]
*(2.0, 2.0, 2.0)*

Could it be that the authors damaged the header during preprocessing? I am assuming here that nibabel is reporting the correct information, i.e. whatever is in the Nifti1 image. If you agree that this is an issue with the data itself, it would be nice to contact the authors. Best regards, Eleftherios -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From elef at indiana.edu Sun Sep 10 16:04:13 2017 From: elef at indiana.edu (Eleftherios Garyfallidis) Date: Sun, 10 Sep 2017 20:04:13 +0000 Subject: [Neuroimaging] [Nibabel] Public dataset generates unexpected In-Reply-To: References: Message-ID: Clearly the title was meant to be "Public dataset generates unexpected affine matrix" :) Apologies, Eleftherios On Sun, Sep 10, 2017 at 1:31 PM Eleftherios Garyfallidis wrote: > Hello Matthew and all, > > I downloaded a dataset from NITRC by Boekel et al. > https://www.nitrc.org/projects/dwi_test-retest/ > > and I used nibabel to get the affine and voxel size. The authors claimed > that the voxel size is 2x2x2mm^3 however the affine tells a different story. > > import nibabel as nib > > img = nib.load('pp26_dwi_run01_A.nii.gz') > > print(img.affine) > [[ -1.999 0.047 0.077 108.292] > [ 0.042 1.984 -0.471 -94.461] > [ 0.047 0.251 3.703 -116.857] > [ 0. 0. 0. 1. ]] > > nib.affines.voxel_sizes(img.affine) > *array([ 2. , 2. , 3.733])* > > The zoom function gives a different answer in agreement with the authors' > claim. > > img.header.get_zooms()[:3] > *(2.0, 2.0, 2.0)* > > Could it be that the authors damaged the header during preprocessing? > I am assuming here that nibabel is bringing the correct information i.e. > whatever is in the Nifti1 image. > > If you agree that this is an issue with the data itself. It would be nice > to contact the authors. > > Best regards, > Eleftherios > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From matthew.brett at gmail.com Sun Sep 10 17:02:31 2017 From: matthew.brett at gmail.com (Matthew Brett) Date: Sun, 10 Sep 2017 22:02:31 +0100 Subject: [Neuroimaging] [Nibabel] Public dataset generates unexpected In-Reply-To: References: Message-ID: Hi Elef, On Sun, Sep 10, 2017 at 9:04 PM, Eleftherios Garyfallidis wrote: > Clearly the title was meant to be "Public dataset generates unexpected > affine matrix" :) > Apologies, > Eleftherios > > On Sun, Sep 10, 2017 at 1:31 PM Eleftherios Garyfallidis > wrote: >> >> Hello Matthew and all, >> >> I downloaded a dataset from NITRC by Boekel et al. >> https://www.nitrc.org/projects/dwi_test-retest/ >> >> and I used nibabel to get the affine and voxel size. The authors claimed >> that the voxel size is 2x2x2mm^3 however the affine tells a different story. >> >> import nibabel as nib >> >> img = nib.load('pp26_dwi_run01_A.nii.gz') >> >> print(img.affine) >> [[ -1.999 0.047 0.077 108.292] >> [ 0.042 1.984 -0.471 -94.461] >> [ 0.047 0.251 3.703 -116.857] >> [ 0. 0. 0. 1. ]] >> >> nib.affines.voxel_sizes(img.affine) >> array([ 2. , 2. , 3.733]) >> >> The zoom function gives a different answer in agreement with the authors' >> claim. >> >> img.header.get_zooms()[:3] >> (2.0, 2.0, 2.0) >> >> Could it be that the authors damaged the header during preprocessing? >> I am assuming here that nibabel is bringing the correct information i.e. >> whatever is in the Nifti1 image. >> >> If you agree that this is an issue with the data itself. It would be nice >> to contact the authors. So - it looks like the authors have modified the sform field to set a new affine matrix. You can check with e.g. "print(img.header)" Maybe when they say "voxel size" they mean the values set in the "pixdim", specifically? Anyway, if I were the authors, I would want to know about the discrepancy. I think you can give them all the details with the information from "print(img.header)"? 
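The discrepancy can in fact be reproduced from the printed affine alone: nib.affines.voxel_sizes is equivalent to taking the column norms of the upper-left 3x3 block, so it reflects the sform, while get_zooms() reads pixdim. A numpy-only check, using the rounded values from the email (hence the slightly approximate third zoom):

```python
import numpy as np

# The sform affine printed above (rounded to 3 decimals in the email):
affine = np.array([[-1.999,  0.047,  0.077,  108.292],
                   [ 0.042,  1.984, -0.471,  -94.461],
                   [ 0.047,  0.251,  3.703, -116.857],
                   [ 0.000,  0.000,  0.000,    1.000]])

# nib.affines.voxel_sizes(affine) computes the column norms of the 3x3 part:
vox = np.sqrt((affine[:3, :3] ** 2).sum(axis=0))
print(vox)  # approx [2., 2., 3.73], vs. get_zooms() == (2.0, 2.0, 2.0)
```

So the two answers come from two independent header fields (sform vs. pixdim), and only the sform carries the shear/rotation that produces the 3.7 mm z step.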
See you, Matthew From elef at indiana.edu Sun Sep 10 17:08:24 2017 From: elef at indiana.edu (Eleftherios Garyfallidis) Date: Sun, 10 Sep 2017 21:08:24 +0000 Subject: [Neuroimaging] [Nibabel] Public dataset generates unexpected In-Reply-To: References: Message-ID: Thanks for the feedback Matthew. Here is also a script that corrects another problem in the DWI data. There the last bvector is incorrect. https://gist.github.com/Garyfallidis/199813624bb6d2dac3f51aa6b41717af I will send the authors an e-mail with both issues. For the first issue I think they used some tool that does not contain the affine after reslicing. Probably the original data have an actual resolution of 2x2x3.7 which is not ideal for tracking (even after reslicing). Cheers, Eleftherios On Sun, Sep 10, 2017 at 4:04 PM Eleftherios Garyfallidis wrote: > Clearly the title was meant to be "Public dataset generates unexpected > affine matrix" :) > Apologies, > Eleftherios > > On Sun, Sep 10, 2017 at 1:31 PM Eleftherios Garyfallidis > wrote: > >> Hello Matthew and all, >> >> I downloaded a dataset from NITRC by Boekel et al. >> https://www.nitrc.org/projects/dwi_test-retest/ >> >> and I used nibabel to get the affine and voxel size. The authors claimed >> that the voxel size is 2x2x2mm^3 however the affine tells a different story. >> >> import nibabel as nib >> >> img = nib.load('pp26_dwi_run01_A.nii.gz') >> >> print(img.affine) >> [[ -1.999 0.047 0.077 108.292] >> [ 0.042 1.984 -0.471 -94.461] >> [ 0.047 0.251 3.703 -116.857] >> [ 0. 0. 0. 1. ]] >> >> nib.affines.voxel_sizes(img.affine) >> *array([ 2. , 2. , 3.733])* >> >> The zoom function gives a different answer in agreement with the authors' >> claim. >> >> img.header.get_zooms()[:3] >> *(2.0, 2.0, 2.0)* >> >> Could it be that the authors damaged the header during preprocessing? >> I am assuming here that nibabel is bringing the correct information i.e. >> whatever is in the Nifti1 image. 
>> >> If you agree that this is an issue with the data itself. It would be nice >> to contact the authors. >> >> Best regards, >> Eleftherios >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From elef at indiana.edu Tue Sep 12 10:58:20 2017 From: elef at indiana.edu (Eleftherios Garyfallidis) Date: Tue, 12 Sep 2017 14:58:20 +0000 Subject: [Neuroimaging] [Nibabel] Public dataset generates unexpected In-Reply-To: References: Message-ID: Done! Contacted the authors and they are now working to fix the issues. Will post here when the corrected data go online. On Sun, Sep 10, 2017, 5:08 PM Eleftherios Garyfallidis wrote: > > Thanks for the feedback Matthew. > > Here is also a script that corrects another problem in the DWI data. There > the last bvector is incorrect. > > https://gist.github.com/Garyfallidis/199813624bb6d2dac3f51aa6b41717af > > I will send the authors an e-mail with both issues. > For the first issue I think they used some tool that does not contain the > affine after reslicing. > Probably the original data have an actual resolution of 2x2x3.7 which is > not ideal for tracking (even after reslicing). > > Cheers, > Eleftherios > > On Sun, Sep 10, 2017 at 4:04 PM Eleftherios Garyfallidis > wrote: > >> Clearly the title was meant to be "Public dataset generates unexpected >> affine matrix" :) >> Apologies, >> Eleftherios >> >> On Sun, Sep 10, 2017 at 1:31 PM Eleftherios Garyfallidis < >> elef at indiana.edu> wrote: >> >>> Hello Matthew and all, >>> >>> I downloaded a dataset from NITRC by Boekel et al. >>> https://www.nitrc.org/projects/dwi_test-retest/ >>> >>> and I used nibabel to get the affine and voxel size. The authors claimed >>> that the voxel size is 2x2x2mm^3 however the affine tells a different story. 
>>> >>> import nibabel as nib >>> >>> img = nib.load('pp26_dwi_run01_A.nii.gz') >>> >>> print(img.affine) >>> [[ -1.999 0.047 0.077 108.292] >>> [ 0.042 1.984 -0.471 -94.461] >>> [ 0.047 0.251 3.703 -116.857] >>> [ 0. 0. 0. 1. ]] >>> >>> nib.affines.voxel_sizes(img.affine) >>> *array([ 2. , 2. , 3.733])* >>> >>> The zoom function gives a different answer in agreement with the >>> authors' claim. >>> >>> img.header.get_zooms()[:3] >>> *(2.0, 2.0, 2.0)* >>> >>> Could it be that the authors damaged the header during preprocessing? >>> I am assuming here that nibabel is bringing the correct information i.e. >>> whatever is in the Nifti1 image. >>> >>> If you agree that this is an issue with the data itself. It would be >>> nice to contact the authors. >>> >>> Best regards, >>> Eleftherios >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at onerussian.com Thu Sep 14 16:54:17 2017 From: lists at onerussian.com (Yaroslav Halchenko) Date: Thu, 14 Sep 2017 16:54:17 -0400 Subject: [Neuroimaging] Job ad: Research programmer at Dartmouth (ReproNim project) Message-ID: <20170914205417.y4xapvjlyvyx2ruc@hopa.kiewit.dartmouth.edu> Dear NeuroPython gurus, I am delighted to invite you to join our good cause (thanks to NIH support): https://searchjobs.dartmouth.edu/postings/43187 NICEMAN (https://github.com/repronim/niceman) is developed in Python and heavily relies on utilization of contemporary infrastructure for collaboration (such as http://github.com) and platforms for distributing and deploying software and datasets (such as Debian, conda, DataLad etc.) and carrying out execution in virtualized environments (Singularity, Docker, etc). 
As all our other projects (NeuroDebian, PyMVPA, DataLad, etc), NICEMAN (and other parts of the ReproNim) is developed completely in the open, so this is an excellent opportunity to make valuable and visible contribution to the ecosystem of scientific software, and to get hands-on experience in the full cycle of open-source development and deployment. This project is a part of the larger effort (http://repronim.org) ? we are collaborating with many other excellent projects (NITRC, NIF, etc) in the scope of this effort, so it is also an excellent opportunity to establish working collaborations with leading projects in open neuroscience. With best regards, -- Yaroslav O. Halchenko Center for Open Neuroscience http://centerforopenneuroscience.org Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 WWW: http://www.linkedin.com/in/yarik From elef at indiana.edu Thu Sep 14 17:24:26 2017 From: elef at indiana.edu (Eleftherios Garyfallidis) Date: Thu, 14 Sep 2017 21:24:26 +0000 Subject: [Neuroimaging] Job ad: Research programmer at Dartmouth (ReproNim project) In-Reply-To: <20170914205417.y4xapvjlyvyx2ruc@hopa.kiewit.dartmouth.edu> References: <20170914205417.y4xapvjlyvyx2ruc@hopa.kiewit.dartmouth.edu> Message-ID: Cool, thanks for sharing. On Thu, Sep 14, 2017 at 5:13 PM Yaroslav Halchenko wrote: > Dear NeuroPython gurus, > > I am delighted to invite you to join our good cause (thanks to NIH > support): > > https://searchjobs.dartmouth.edu/postings/43187 > > NICEMAN (https://github.com/repronim/niceman) is developed in Python and > heavily relies on utilization of contemporary infrastructure for > collaboration > (such as http://github.com) and platforms for distributing and deploying > software and datasets (such as Debian, conda, DataLad etc.) and carrying > out > execution in virtualized environments (Singularity, Docker, etc). 
As all > our > other projects (NeuroDebian, PyMVPA, DataLad, etc), NICEMAN (and other > parts of > the ReproNim) is developed completely in the open, so this is an excellent > opportunity to make valuable and visible contribution to the ecosystem of > scientific software, and to get hands-on experience in the full cycle of > open-source development and deployment. This project is a part of the > larger > effort (http://repronim.org) ? we are collaborating with many other > excellent > projects (NITRC, NIF, etc) in the scope of this effort, so it is also an > excellent opportunity to establish working collaborations with leading > projects > in open neuroscience. > > With best regards, > -- > Yaroslav O. Halchenko > Center for Open Neuroscience http://centerforopenneuroscience.org > Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 > Phone: +1 (603) 646-9834 <+1%20603-646-9834> Fax: +1 > (603) 646-1419 <+1%20603-646-1419> > WWW: http://www.linkedin.com/in/yarik > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > -------------- next part -------------- An HTML attachment was scrubbed... URL: From satra at mit.edu Thu Sep 14 18:41:19 2017 From: satra at mit.edu (Satrajit Ghosh) Date: Thu, 14 Sep 2017 18:41:19 -0400 Subject: [Neuroimaging] Repronim Training Workshop on Reproducible Research Nov 10-11 before SFN Message-ID: *ReproNim Training Workshop: Nov 10 full day - Nov 11 morning - George Washington** University* *Register and information at:* ?https://tinyurl.com/repronim-sfn17 *Purpose:* The issue of lack of reproducibility has been described in several scientific domains for several years, raising concerns specifically in the life science community. ReproNim has developed a curriculum ( http://www.reproducibleimaging.org/#trai... 
) that will give the students the information, tools and practices to perform repeatable and efficient research. This training workshop will introduce material on the critical aspects of reproducible brain imaging and will orient attendees using a hands on and practical experience to conduct neuroimaging analyses with the next generation of tools. By the end of this course, the student will be aware of training materials and concepts necessary to perform reproducible research in neuroimaging. The student will be able to reuse these materials to conduct local workshops and training and be able to customize the training for their specific scenario. *Prerequisites:* If you are a student, postdoc or researcher in life science who directly works with neuroimaging data - or wish to work with these data, and you have some basic computational background, this training workshop is for you. For instance, you should have already done either some R, or Python, or Matlab or Shell scripting, or have used standard neuroimaging tools (SPM, FSL, Afni, FreeSurfer, etc) and be engaged in neuroimaging research projects. You should have already completed a neuroimaging analysis or know how to do one. *Logistics:* *Location:* George Washington University, Marvin Center, Room 402-404 https://events-venues.gwu.edu/meeting-ro... *Dates:* November 10-11, 2017. *Costs:* Free - but space is limited - please apply for approval. 
*Schedule:*

Friday November 10th:
8:30-9am: Introduction to the course and participants "setup"
9am-10:45: Reproducibility Basics (Module 0)
10:45-11am: Coffee break
11am-12:45: FAIR data (Module 1)
12:45-2pm: Lunch+coffee
2pm-3:45: Data Processing (Module 2)
3:45-4pm: Coffee break
4pm-5:45pm: Statistics for reproducible analyses (Module 3)
5:45-6:15: Questions and answers and feedback session

Saturday November 11th:
9am-12pm: The Re-executable Micro Publication Challenge. During this time, we will propose a small challenge around producing an entirely re-executable neuroimaging analysis, from fetching data to producing statistical results. This will also be a time for close interactions with neuroimaging experts in data handling and analysis.
12pm-12:30: Closing session: feedback and future: "become a trainer".

Weekly online office hours will be held prior to the workshop. Registered attendees will receive information via email.

*Modules:*

*Module 0 - Reproducibility Basics:* Friday Nov. 10. 9am-10:45am. This module guides you through three somewhat independent topics, which are at the heart of establishing and efficiently using common generic resources: the command line shell, version control systems (for code and data), and distribution package managers. Gaining additional skills in any of these topics could help you not only become more efficient in your day-to-day research activities, but also lay the foundation for habits that make your work actually more reproducible.

*Module 1 - FAIR Data:* Friday Nov. 10. 11am-12:45. This module provides an overview of strategies for making research outputs available through the web, with an emphasis on data. It introduces concepts such as persistent identifiers, linked data, the semantic web and the FAIR principles. It is designed for those with little to no familiarity with these concepts. More technical discussions can be found in the reference materials.

*Module 2 - Data Processing:* Friday Nov. 10. 2pm-3:45pm. 
This module teaches you how to perform reproducible analysis, preserve the relevant information, and share data and code with others. We will show an example framework for reproducible analysis: how to annotate, harmonize, clean, and version brain imaging data; how to create and maintain reproducible computational environments for analysis (e.g. with Docker); and how to use dataflow tools to capture provenance and perform efficient analyses.

*Module 3 - Statistics:* Friday Nov. 10. 4pm-5:45pm. The goal of this module is to teach brain imagers about the statistical aspects of reproducibility. This module should give you a critical eye on most of the current literature and the knowledge to do solid work: understand exactly what a p-value is and its limitations in representing evidence for results, practical notions of power and associated tools, etc.

*Instructors:* J. Bates, S. Ghosh, J. Grethe, Y. Halchenko, C. Haselgrove, S. Hodge, D. Jarecka, D. Keator, D. Kennedy, M. Martone, N. Nichols, S. Padhy, JB Poline, N. Preuss, M. Travers

*This workshop is brought to you by ReproNim: A center for Reproducible Neuroimaging Computation NIH-NIBIB P41 EB019936*

-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ReproNim-Course-Syllabus.pdf Type: application/pdf Size: 91024 bytes Desc: not available URL: From xinyingw at nevada.unr.edu Tue Sep 19 20:18:20 2017 From: xinyingw at nevada.unr.edu (Xinying Wang) Date: Tue, 19 Sep 2017 17:18:20 -0700 Subject: [Neuroimaging] Help with trk saved method Message-ID: Hello, Actually, I'm a PhD student and I started using dipy 2 weeks ago for one of my projects. But something goes wrong when I save a trk file: the file I saved was too small (900k). I checked the API, which says:

dipy.io.trackvis.*save_trk*(filename, points, vox_to_ras, shape)
A temporary helper function for saving trk files. 
This function will soon be replaced by better trk file support in nibabel. So could you help me solve the problem of saving the trk with nibabel? I think maybe the relevant method in dipy has already changed. Thank you and have a good day. -Xinying -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean.christophe.houde at gmail.com Wed Sep 20 09:25:24 2017 From: jean.christophe.houde at gmail.com (Jean-Christophe Houde) Date: Wed, 20 Sep 2017 09:25:24 -0400 Subject: [Neuroimaging] Help with trk saved method In-Reply-To: References: Message-ID: Hi Xinying, you should take a look at nibabel.streamlines.save() and nibabel.streamlines.trk See here: http://nipy.org/nibabel/reference/nibabel.streamlines.html#module-nibabel.streamlines You should have everything that you need. 2017-09-19 20:18 GMT-04:00 Xinying Wang : > Hello, > Actually, I'm a PhD student and I started using dipy 2 weeks ago for one > of my projects. But something goes wrong when I save a trk file: the file > I saved was too small (900k). I checked the API, which says: > dipy.io.trackvis.*save_trk*(filename, points, vox_to_ras, shape) > A temporary helper function for saving trk files. > > This function will soon be replaced by better trk file support in nibabel. > > > So could you help me solve the problem of saving the trk with nibabel? I > think maybe the relevant method in dipy has already changed. > > > Thank you and have a good day. > > > -Xinying > > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From RPEREACAMARGO at mgh.harvard.edu Wed Sep 20 18:43:25 2017 From: RPEREACAMARGO at mgh.harvard.edu (Perea Camargo, Rodrigo Dennis) Date: Wed, 20 Sep 2017 22:43:25 +0000 Subject: [Neuroimaging] [DIPY] Question about whole tracks *.trk registration using SLR Message-ID: <63C189EA-8788-4FB9-BDCA-5D3DAC191164@mgh.harvard.edu> Hi Eleftherios & DIPY community, I am trying to register two whole brain tracts as shown in your recent publication using the streamline-based linear registration. I am diving into Dipy now and I might have some problems loading the files or registering them. Following your example ( http://nipy.org/dipy/examples_built/bundle_registration.html#example-bundle-registration), there are 2 issues that may arise when I try it. 1) How to load *.trk files is not shown in your example (it looks like you added to cingulum bundles from your dataset using the dipy.data value?) So I follow this tutorial (http://nipy.org/dipy/examples_built/streamline_formats.html ) to load the streamline and hdr but I am not sure if this is the format SLF() wants it in. 2) So then I try using the srr.optimize( ) function but I get the following problems (check below). I hope you can help me. Rodrigo Here is my Jupyter notebook with the errors I found (any help will be greatly appreciate it): [cid:E8007168-CC60-44C2-98D4-377B91D1A9D9 at mgh.harvard.edu] The information in this e-mail is intended only for the person to whom it is addressed. If you believe this e-mail was sent to you in error and the e-mail contains patient information, please contact the Partners Compliance HelpLine at http://www.partners.org/complianceline . If the e-mail was sent to you in error but does not contain patient information, please contact the sender and properly dispose of the e-mail. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: PastedGraphic-1.png Type: image/png Size: 195056 bytes Desc: PastedGraphic-1.png URL: From satra at mit.edu Thu Sep 21 10:56:32 2017 From: satra at mit.edu (Satrajit Ghosh) Date: Thu, 21 Sep 2017 10:56:32 -0400 Subject: [Neuroimaging] downsampling without an anti-aliasing filter Message-ID: hi folks, on the list and other places people often ask for scripts/tools for resampling or downsampling imaging data. my training in signal processing tells me that: 1. upsampling and downsampling are different operations 2. if i downsample data without low-pass filtering at an appropriate cutoff i can introduce aliasing if there are frequencies present in the original data above the new Nyquist rate. for epi sequences where data are acquired in planes, i can easily understand doing this for slices. but i don't know why in imaging, many software packages downsample the in-plane samples without an anti-aliasing filter, especially as we are getting to higher sampling frequencies (smaller voxel sizes). i would love to understand if i am missing something fundamental about imaging data or signal processing applied to imaging data. cheers, satra -------------- next part -------------- An HTML attachment was scrubbed... URL: From blaise.frederick at gmail.com Thu Sep 21 11:01:55 2017 From: blaise.frederick at gmail.com (Blaise Frederick) Date: Thu, 21 Sep 2017 11:01:55 -0400 Subject: [Neuroimaging] downsampling without an anti-aliasing filter In-Reply-To: References: Message-ID: <4AA6AB90-1F4A-4294-B687-25695C8DCBE1@gmail.com> Hey Satra, No, I don't think you're missing anything. The only reason I can think of that people would downsample fMRI data without aliasing is laziness from the bad old days of long TRs, where you might figure "well, cardiac and respiratory signals are aliased already, so why bother?" 
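Satra's point 2 is easy to demonstrate in one dimension (a toy numpy sketch; the same argument applies along each image axis): after naive 2x decimation, a component above the new Nyquist rate becomes indistinguishable from a lower-frequency one, which is exactly why an anti-aliasing low-pass (e.g. an appropriately sized Gaussian smooth) should precede the downsampling.

```python
import numpy as np

x = np.arange(64)
high = np.cos(2 * np.pi * 0.375 * x)  # 0.375 cycles/sample: above the 0.25 Nyquist of the decimated grid
low = np.cos(2 * np.pi * 0.125 * x)   # 0.125 cycles/sample: below it

# Taking every second sample with no anti-aliasing filter makes the two
# signals sample-for-sample identical: the 0.375 component aliases to 0.125.
print(np.allclose(high[::2], low[::2]))  # True
```

Once the samples are identical, no post-hoc processing can tell the aliased high-frequency content apart from genuine low-frequency signal, so the filtering has to happen before decimation.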
Blaise > On Sep 21, 2017, at 10:56 AM, Satrajit Ghosh wrote: > > hi folks, > > on the list and other places people often ask for scripts/tools for resampling or downsampling imaging data. my training in signal processing tells me that: > > 1. upsampling and downsampling are different operations > 2. if i downsample data without low-pass filtering at an appropriate cutoff i can introduce aliasing if there are frequencies present in the original data above the new nyquist rate. > > for epi sequences where data are acquired in planes, i can easily understand doing this for slices. but i don't know why in imaging, many software downsample the in-plane samples without an anti-aliasing filter, especially as we are getting to higher sampling frequencies (smaller voxel sizes). > > i would love to understand if i am missing something fundamental about imaging data or signal processing applied to imaging data. > > cheers, > > satra > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging From blaise.frederick at gmail.com Thu Sep 21 11:02:24 2017 From: blaise.frederick at gmail.com (Blaise Frederick) Date: Thu, 21 Sep 2017 11:02:24 -0400 Subject: [Neuroimaging] downsampling without an anti-aliasing filter In-Reply-To: <4AA6AB90-1F4A-4294-B687-25695C8DCBE1@gmail.com> References: <4AA6AB90-1F4A-4294-B687-25695C8DCBE1@gmail.com> Message-ID: <99E2017C-4B0B-43ED-A441-537D8234DA97@gmail.com> Arg. Without ANTI-aliasing? > On Sep 21, 2017, at 11:01 AM, Blaise Frederick wrote: > > Hey Satra, > > No, I don't think you're missing anything. The only reason I can think that people would downsample fMRI data without aliasing is laziness from the bad old days of long TRs, where you might figure "well, cardiac and respiratory signals are aliased already, so why bother?" 
> > Blaise > > >> On Sep 21, 2017, at 10:56 AM, Satrajit Ghosh wrote: >> >> hi folks, >> >> on the list and other places people often ask for scripts/tools for resampling or downsampling imaging data. my training in signal processing tells me that: >> >> 1. upsampling and downsampling are different operations >> 2. if i downsample data without low-pass filtering at an appropriate cutoff i can introduce aliasing if there are frequencies present in the original data above the new nyquist rate. >> >> for epi sequences where data are acquired in planes, i can easily understand doing this for slices. but i don't know why in imaging, many software downsample the in-plane samples without an anti-aliasing filter, especially as we are getting to higher sampling frequencies (smaller voxel sizes). >> >> i would love to understand if i am missing something fundamental about imaging data or signal processing applied to imaging data. >> >> cheers, >> >> satra >> >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging > From bennet at umich.edu Sat Sep 23 11:25:45 2017 From: bennet at umich.edu (Bennet Fauber) Date: Sat, 23 Sep 2017 11:25:45 -0400 Subject: [Neuroimaging] Wanted: Two open positions in neuroimaging Message-ID: The University of Michigan currently has two openings for people with neuroimaging backgrounds and interests. One is a post-doctoral fellowship, and one is a regular staff position. I provide a brief summary here and a URL at the UM job site for each. For the staff position, we are seeking someone with a good background in fMRI design and analysis and good computing skills. Our current environment is heavily SPM-based, but we would like to expand use of Python/Nipype, FSL, AFNI, and other non-MATLAB-based tools in the future. 
The successful candidate will become part of the core of the Michigan Neuroimaging Initiative, which is entering its second year. ============================= Neuroimaging Analysis Specialist The Functional MRI Laboratory is an interdisciplinary research unit founded in 2001 and organized under the University of Michigan Office of Research. We are dedicated to supporting research on the structures and functions of the brain that underlie cognitive and affective processes, and research on functional MRI and associated research tools. The Laboratory's mission is to maintain an environment that will enhance the excellence of research using fMRI and associated technologies by providing a well-equipped physical facility and appropriate intellectual support services for investigators. The primary role of this position will be to work with individual investigators who use the fMRI facility to process and analyze their data. These investigators have varying levels of expertise with analysis protocols, so the successful candidate will be working most often with relatively novice users of fMRI to develop design and analysis protocols, and sometimes with more experienced investigators to conduct advanced analyses. 
Examples of the responsibilities of the successful candidate include: * Helping to create effective data acquisition designs as well as data processing and analysis protocols once the data are collected * Working with investigators to tailor their analyses to the research questions under investigation * Participating with other Lab members to train investigators in analysis methods tailored to their research programs * Participating with other Lab members in offering the annual fMRI training course * Joining forces with staff members from the Center for Statistical Consulting and Research to offer workshops on issues concerning MRI and fMRI data analysis * Developing new analysis tools that are tailored to ongoing research protocols * Working with members of the Lab and the fMRI Methods Core to coordinate data analysis protocols across campus * Organizing and/or providing training sessions to the user community on Freesurfer, FSL, and other analysis software, as needed Select qualifications * Master's or Ph.D. strongly preferred in computer science, biostatistics with neuroimaging emphasis, or related field * Experience working with brain imaging data - specifically MRI and/or Functional MRI * Two or more years of programming experience (specifically, shell scripting, MATLAB, and Python) http://careers.umich.edu/job_detail/147138/neuroimaging_analysis_specialist ============================= Post-Doctoral Fellowship in Cognitive Neuroscience The Cognition, Control, and Action (CoCoA) lab, directed by Dr. Taraz Lee in the Department of Psychology, is inviting applications for a postdoctoral research position in cognitive neuroscience. The goal of the research in the lab is to understand how the brain implements cognitive control processes such as attention and motivation and how these processes affect learning, memory, and performance in a variety of contexts. 
More recently we have begun several investigations into how cognitive control and motivation interact with skilled action. To achieve this goal, we design and conduct experiments with young adults using multiple methods, such as functional magnetic resonance imaging (fMRI), non-invasive brain stimulation, and a battery of cognitive and motor tasks. More information about the CoCoA Lab can be found here. http://careers.umich.edu/job_detail/131324/postdoctoral_fellowship_in_cognitive_neuroscience ============================= -------------- next part -------------- An HTML attachment was scrubbed... URL: From elef at indiana.edu Thu Sep 21 15:46:33 2017 From: elef at indiana.edu (Eleftherios Garyfallidis) Date: Thu, 21 Sep 2017 19:46:33 +0000 Subject: [Neuroimaging] [DIPY] Question about whole tracks *.trk registration using SLR In-Reply-To: <63C189EA-8788-4FB9-BDCA-5D3DAC191164@mgh.harvard.edu> References: <63C189EA-8788-4FB9-BDCA-5D3DAC191164@mgh.harvard.edu> Message-ID: Hi Rodrigo, Whole brain SLR is a great idea! Thank you for your question. :) Keep in mind that nibabel.trackvis is the old API. Use nibabel.streamlines.load instead. Can you please share with me (off the list) your trk files so that I can correct your script? A wild guess is that you are not loading the streamlines correctly. Did you use DIPY to create those trks or another software? Another suggestion is to run QuickBundles first, before starting the SLR, to reduce the size of your datasets. Here is a sample script (in one of my branches). https://github.com/Garyfallidis/dipy/blob/recobundles/dipy/workflows/align.py We are working on making something similar available ASAP in master. Best, Eleftherios On Thu, Sep 21, 2017 at 1:36 AM Perea Camargo, Rodrigo Dennis < RPEREACAMARGO at mgh.harvard.edu> wrote: > Hi Eleftherios & DIPY community, > I am trying to register two whole brain tracts as shown in your recent > publication using the streamline-based linear registration. 
I am diving > into Dipy now and I might have some problems loading the files or > registering them. > Following your example ( > http://nipy.org/dipy/examples_built/bundle_registration.html#example-bundle-registration), > there are 2 issues that may arise when I try it. > > 1) How to load *.trk files is not shown in your example (it looks like you > added to cingulum bundles from your dataset using the dipy.data value?) So > I follow this tutorial ( > http://nipy.org/dipy/examples_built/streamline_formats.html ) to load the > streamline and hdr but I am not sure if this is the format SLF() wants it > in. > > 2) So then I try using the srr.optimize( ) function but I get the > following problems (check below). > > I hope you can help me. > Rodrigo > > > Here is my Jupyter notebook with the errors I found (any help will be > greatly appreciate it): > > > > > > The information in this e-mail is intended only for the person to whom it > is > addressed. If you believe this e-mail was sent to you in error and the > e-mail > contains patient information, please contact the Partners Compliance > HelpLine at > http://www.partners.org/complianceline . If the e-mail was sent to you in > error > but does not contain patient information, please contact the sender and > properly > dispose of the e-mail. > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: PastedGraphic-1.png Type: image/png Size: 195056 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: PastedGraphic-1.png Type: image/png Size: 195056 bytes Desc: not available URL: From RPEREACAMARGO at mgh.harvard.edu Mon Sep 25 09:47:57 2017 From: RPEREACAMARGO at mgh.harvard.edu (Perea Camargo, Rodrigo Dennis) Date: Mon, 25 Sep 2017 13:47:57 +0000 Subject: [Neuroimaging] [DIPY] Question about whole tracks *.trk registration using SLR In-Reply-To: References: <63C189EA-8788-4FB9-BDCA-5D3DAC191164@mgh.harvard.edu> Message-ID: Hi Eleftherios, I have included both *.trk files here: http://www.nmr.mgh.harvard.edu/~rdp20/drop/trk_slf/ IIT2mean.trk was a template TRK downloaded from the tract_querier example (http://tract-querier.readthedocs.io/en/latest/ ) and ADRC_TMP_wholeBrain.trk.gz was generated using dsi_studio (http://dsi-studio.labsolver.org/) Thanks again, Rodrigo On Sep 21, 2017, at 3:46 PM, Eleftherios Garyfallidis > wrote: Hi Rodrigo, Whole brain SLR is a great idea! Thank you for your question. :) Have in mind that nibabel.trackvis is the old API. Use nibabel.streamlines.load instead. Can you please share with me (off the list) your trk files so that I can correct your script? A wild guess is that you are not loading the streamlines correctly. Did you use DIPY to create those trks or another software? Also, another suggestion is to run QuickBundles first to reduce the size of your datasets. I mean before starting the SLR. Here is a sample script (in one of my branches). https://github.com/Garyfallidis/dipy/blob/recobundles/dipy/workflows/align.py We are working on making something similar available asap in master. Best, Eleftherios On Thu, Sep 21, 2017 at 1:36 AM Perea Camargo, Rodrigo Dennis > wrote: Hi Eleftherios & DIPY community, I am trying to register two whole brain tracts as shown in your recent publication using the streamline-based linear registration. I am diving into Dipy now and I might have some problems loading the files or registering them. 
Following your example ( http://nipy.org/dipy/examples_built/bundle_registration.html#example-bundle-registration), there are 2 issues that may arise when I try it. 1) How to load *.trk files is not shown in your example (it looks like you added to cingulum bundles from your dataset using the dipy.data value?) So I follow this tutorial (http://nipy.org/dipy/examples_built/streamline_formats.html ) to load the streamline and hdr but I am not sure if this is the format SLF() wants it in. 2) So then I try using the srr.optimize( ) function but I get the following problems (check below). I hope you can help me. Rodrigo Here is my Jupyter notebook with the errors I found (any help will be greatly appreciate it): The information in this e-mail is intended only for the person to whom it is addressed. If you believe this e-mail was sent to you in error and the e-mail contains patient information, please contact the Partners Compliance HelpLine at http://www.partners.org/complianceline . If the e-mail was sent to you in error but does not contain patient information, please contact the sender and properly dispose of the e-mail. _______________________________________________ Neuroimaging mailing list Neuroimaging at python.org https://mail.python.org/mailman/listinfo/neuroimaging _______________________________________________ Neuroimaging mailing list Neuroimaging at python.org https://mail.python.org/mailman/listinfo/neuroimaging -------------- next part -------------- An HTML attachment was scrubbed... URL: From larson.eric.d at gmail.com Tue Sep 26 15:31:20 2017 From: larson.eric.d at gmail.com (Eric Larson) Date: Tue, 26 Sep 2017 15:31:20 -0400 Subject: [Neuroimaging] PySurfer 0.8.0 Message-ID: Dear All, We are pleased to announce the 0.8.0 release of PySurfer, a Python-based package for visualizing Freesurfer data. 
This release includes bugfixes and multiple enhancements, including: - The surface geometry that is displayed can now be changed after initializing a Brain instance with e.g. brain.set_surf("smoothwm"). - Allow using custom matplotlib colormap objects or the names of custom colormaps that have been registered with matplotlib. - Added four new colormaps from seaborn: rocket, mako, icefire, and vlag. - Terrain interaction is now possible via the ``interaction`` keyword argument. - An API reference page is now available. - Support is now provided for visualizing vector-valued (3 values per vertex) data. Please see our website (http://pysurfer.github.io/) for information on installation, documentation, and the example gallery. We hope you find it useful in your research! Eric Larson, Michael Waskom, Alexandre Gramfort, and the Nipy team -------------- next part -------------- An HTML attachment was scrubbed... URL: From bertrand.thirion at inria.fr Tue Sep 26 15:57:19 2017 From: bertrand.thirion at inria.fr (bthirion) Date: Tue, 26 Sep 2017 21:57:19 +0200 Subject: [Neuroimaging] PySurfer 0.8.0 In-Reply-To: References: Message-ID: <2e754ac2-24ff-8a84-e383-f7340f145041@inria.fr> Great work, thx ! Bertrand On 26/09/2017 21:31, Eric Larson wrote: > Dear All, > > We are pleased to announce the 0.8.0 release of PySurfer, a > Python-based package for visualizing Freesurfer data. This release > includes bugfixes and multiple enhancements, including: > > > - The surface geometry that is displayed can now be changed after > initializing a Brain instance with e.g. brain.set_surf("smoothwm"). > > - Allow using custom matplotlib colormap objects or the names of > custom colormaps that have been registered with matplotlib. > > - Added four new colormaps from seaborn: rocket, mako, icefire, and vlag. > > - Terrain interaction is now possible via the ``interaction`` keyword > argument. > > - An API reference page is now available. 
> > - Support is now provided for visualizing vector-valued (3 values per > vertex) data. > > > Please see our website (http://pysurfer.github.io/) for information on > installation, documentation, and the example gallery. > > We hope you find it useful in your research! > > Eric Larson, Michael Waskom, Alexandre Gramfort, and the Nipy team > > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging -------------- next part -------------- An HTML attachment was scrubbed... URL: From satra at mit.edu Tue Sep 26 16:32:14 2017 From: satra at mit.edu (Satrajit Ghosh) Date: Tue, 26 Sep 2017 16:32:14 -0400 Subject: [Neuroimaging] PySurfer 0.8.0 In-Reply-To: References: Message-ID: awesome! it's really good to see projects ticking along. cheers, satra On Tue, Sep 26, 2017 at 3:31 PM, Eric Larson wrote: > Dear All, > > We are pleased to announce the 0.8.0 release of PySurfer, a Python-based > package for visualizing Freesurfer data. This release includes bugfixes and > multiple enhancements, including: > > > - The surface geometry that is displayed can now be changed after > initializing a Brain instance with e.g. brain.set_surf("smoothwm"). > > - Allow using custom matplotlib colormap objects or the names of custom > colormaps that have been registered with matplotlib. > > - Added four new colormaps from seaborn: rocket, mako, icefire, and vlag. > > - Terrain interaction is now possible via the ``interaction`` keyword > argument. > > - An API reference page is now available. > > - Support is now provided for visualizing vector-valued (3 values per > vertex) data. > > > Please see our website (http://pysurfer.github.io/) for information on > installation, documentation, and the example gallery. > > We hope you find it useful in your research! 
> > Eric Larson, Michael Waskom, Alexandre Gramfort, and the Nipy team > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From krzysztof.gorgolewski at gmail.com Tue Sep 26 19:30:08 2017 From: krzysztof.gorgolewski at gmail.com (Chris Gorgolewski) Date: Wed, 27 Sep 2017 00:30:08 +0100 Subject: [Neuroimaging] PySurfer 0.8.0 In-Reply-To: References: Message-ID: Congrats! On Sep 26, 2017 9:33 PM, "Satrajit Ghosh" wrote: > awesome! it's really good to see projects ticking along. > > cheers, > > satra > > On Tue, Sep 26, 2017 at 3:31 PM, Eric Larson > wrote: > >> Dear All, >> >> We are pleased to announce the 0.8.0 release of PySurfer, a Python-based >> package for visualizing Freesurfer data. This release includes bugfixes and >> multiple enhancements, including: >> >> >> - The surface geometry that is displayed can now be changed after >> initializing a Brain instance with e.g. brain.set_surf("smoothwm"). >> >> - Allow using custom matplotlib colormap objects or the names of custom >> colormaps that have been registered with matplotlib. >> >> - Added four new colormaps from seaborn: rocket, mako, icefire, and vlag. >> >> - Terrain interaction is now possible via the ``interaction`` keyword >> argument. >> >> - An API reference page is now available. >> >> - Support is now provided for visualizing vector-valued (3 values per >> vertex) data. >> >> >> Please see our website (http://pysurfer.github.io/) for information on >> installation, documentation, and the example gallery. >> >> We hope you find it useful in your research! 
>> >> Eric Larson, Michael Waskom, Alexandre Gramfort, and the Nipy team >> >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> >> > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kw401 at cam.ac.uk Wed Sep 27 03:10:37 2017 From: kw401 at cam.ac.uk (Kirstie Whitaker) Date: Wed, 27 Sep 2017 09:10:37 +0200 Subject: [Neuroimaging] PySurfer 0.8.0 In-Reply-To: References: Message-ID: Fantastic! Really excited about this animated brain example. *Everyone* loves spinning brains! http://pysurfer.github.io/auto_examples/rotate_animation.html#sphx-glr-auto-examples-rotate-animation-py Thank you so much for all the hard work :) Kx On 27 September 2017 at 01:30, Chris Gorgolewski < krzysztof.gorgolewski at gmail.com> wrote: > Congrats! > > On Sep 26, 2017 9:33 PM, "Satrajit Ghosh" wrote: > >> awesome! it's really good to see projects ticking along. >> >> cheers, >> >> satra >> >> On Tue, Sep 26, 2017 at 3:31 PM, Eric Larson >> wrote: >> >>> Dear All, >>> >>> We are pleased to announce the 0.8.0 release of PySurfer, a Python-based >>> package for visualizing Freesurfer data. This release includes bugfixes and >>> multiple enhancements, including: >>> >>> >>> - The surface geometry that is displayed can now be changed after >>> initializing a Brain instance with e.g. brain.set_surf("smoothwm"). >>> >>> - Allow using custom matplotlib colormap objects or the names of custom >>> colormaps that have been registered with matplotlib. >>> >>> - Added four new colormaps from seaborn: rocket, mako, icefire, and vlag. >>> >>> - Terrain interaction is now possible via the ``interaction`` keyword >>> argument. 
>>> >>> - An API reference page is now available. >>> >>> - Support is now provided for visualizing vector-valued (3 values per >>> vertex) data. >>> >>> >>> Please see our website (http://pysurfer.github.io/) for information on >>> installation, documentation, and the example gallery. >>> >>> We hope you find it useful in your research! >>> >>> Eric Larson, Michael Waskom, Alexandre Gramfort, and the Nipy team >>> >>> _______________________________________________ >>> Neuroimaging mailing list >>> Neuroimaging at python.org >>> https://mail.python.org/mailman/listinfo/neuroimaging >>> >>> >> >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> >> > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -- Kirstie Whitaker, PhD Mozilla Fellow for Science Postdoctoral researcher Department of Psychiatry University of Cambridge *Mailing Address* Brain Mapping Unit Department of Psychiatry Sir William Hardy Building Downing Street Cambridge CB2 3EB *Phone: *+44 7583 535 307 *Website:* www.kirstiewhitaker.com *Twitter:* @kirstie_j -------------- next part -------------- An HTML attachment was scrubbed... URL: