From arokem at gmail.com Tue May 1 00:44:34 2018
From: arokem at gmail.com (Ariel Rokem)
Date: Mon, 30 Apr 2018 21:44:34 -0700
Subject: [Neuroimaging] Time series per voxel
In-Reply-To: <20180428221026.ueyu5nwkxhgqloyf@phare.normalesup.org>
References: <20180428221026.ueyu5nwkxhgqloyf@phare.normalesup.org>
Message-ID:

Nitime's time_series_from_file might also do what you want:
http://nipy.org/nitime/api/generated/nitime.fmri.io.html#nitime.fmri.io.time_series_from_file

Hope that helps,
Ariel

On Sat, Apr 28, 2018 at 3:10 PM, Gael Varoquaux <gael.varoquaux at normalesup.org> wrote:
> In that case, the simplest thing might be to use nilearn's NiftiMasker, which does pretty much this:
> http://nilearn.github.io/manipulating_images/masker_objects.html
>
> Gaël
>
> On Sat, Apr 28, 2018 at 09:51:11PM +0000, Dav Clark wrote:
> > I'm pretty sure the OP means individual as opposed to mean over an ROI?
> > So the beginning of the choose your own adventure is whether you understand numpy indexing. If so, you can easily grab a time series or a slice or whatever. Nibabel gives you the image as a numpy array, and if you're just getting started I'd encourage you to just grab each location / time series one at a time. See here:
> > https://docs.scipy.org/doc/numpy-1.14.0/reference/arrays.indexing.html
> > http://nipy.org/nibabel/gettingstarted.html
> > Cheers,
> > Dav
> > On Sat, Apr 28, 2018, 3:42 AM Christophe Pallier wrote:
> > > What do you mean 'preprocessed'?
> > > You can extract time series from the original EPI files (although if they are not corrected for movement and slice-timing delays, it may not be a great idea).
> > > Christophe
> > > On Thu, Apr 26, 2018 at 10:39 AM, ?????? ????????? ????????? via Neuroimaging wrote:
> > > Hello!
> > > My name is Alexander, I am a student from Russia and I am interested in MRI research. I am struggling with one problem which I have not been able to solve for a long time.
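A minimal sketch of the numpy-indexing route Dav describes, using a synthetic 4D array in place of a nibabel-loaded image (the shapes here are made up purely for illustration):

```python
import numpy as np

# Synthetic stand-in for nibabel.load(...).get_data() on a 4D fMRI image:
# axes are (x, y, z, time).
rng = np.random.default_rng(0)
data = rng.normal(size=(4, 5, 6, 100))

# The unprocessed time series of a single voxel is plain numpy indexing.
single_voxel_ts = data[2, 3, 1, :]          # shape: (100,)

# One row per voxel inside a binary ROI mask, with no averaging applied.
mask = np.zeros(data.shape[:3], dtype=bool)
mask[1:3, 2:4, 0:2] = True                  # a toy 2x2x2 ROI
roi_ts = data[mask]                         # shape: (8, 100), voxels in C order
```

Averaging over the ROI afterwards would be `roi_ts.mean(axis=0)`, but the per-voxel rows above are exactly the unaveraged series the original question asks for.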
> > > Can you help me please?
> > > My aim is to obtain a time series for each voxel in a given ROI. I have found methods which allow me to get one time series for one ROI, but it is already preprocessed, which is not what I need. Please tell me, is there any way to solve this problem using existing functions?
> > > Sincerely,
> > > Alexander Rozhnov
> > _______________________________________________
> > Neuroimaging mailing list
> > Neuroimaging at python.org
> > https://mail.python.org/mailman/listinfo/neuroimaging
> --
> Gael Varoquaux
> Senior Researcher, INRIA Parietal
> NeuroSpin/CEA Saclay, Bat 145, 91191 Gif-sur-Yvette France
> Phone: ++ 33-1-69-08-79-68
> http://gael-varoquaux.info http://twitter.com/GaelVaroquaux
> _______________________________________________
> Neuroimaging mailing list
> Neuroimaging at python.org
> https://mail.python.org/mailman/listinfo/neuroimaging
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From AiWern.Chung at childrens.harvard.edu Tue May 1 10:21:45 2018
From: AiWern.Chung at childrens.harvard.edu (Chung, Ai Wern)
Date: Tue, 1 May 2018 14:21:45 +0000
Subject: [Neuroimaging] Call for Papers: MICCAI 2018 Connectomics in NeuroImaging Workshop
In-Reply-To: <1525146658594.86048@childrens.harvard.edu>
References: <1525146658594.86048@childrens.harvard.edu>
Message-ID: <1525184505400.41354@childrens.harvard.edu>

This is a Call for Papers for the 2nd International Workshop on Connectomics in NeuroImaging (CNI 2018), which will be held in parallel with the 21st International Conference on Medical Image Computing and Computer-assisted Intervention (MICCAI 2018) in Granada, Spain. CNI 2018 is a full-day workshop that will take place on September 20th, 2018.
Our topics of interest cover (but are not limited to):
(1) New developments in connectome construction from different imaging modalities;
(2) Development of data-driven techniques to identify biomarkers in connectome data;
(3) Brain network modeling and formal conceptual models of connectome data;
(4) Machine learning algorithms and connectome data analysis;
(5) Evaluation and validation of connectome models.

If you have research that fits into the scope of our workshop, detailed on our website (http://munsellb.people.cofc.edu/cni.html), we encourage you to submit a paper or an abstract!

** Why submit to the CNI workshop? **
- Three great keynote speakers, presentations and poster sessions with accompanying power pitches will provide you with ample opportunity for exchanges and discussions with international computational neuroscientists and clinicians;
- Accepted papers will be published in LNCS proceedings and will be invited for submission to a special journal issue;
- There will be an INCF-sponsored Best Paper Award.

** Full-length paper submission **
Paper submission deadline: June 11, via https://cmt3.research.microsoft.com/CNI2018
For more information visit http://munsellb.people.cofc.edu/cni.html

** Full-length important dates **
Submission deadline: 11:59 PM PST, June 11th, 2018
Notification of acceptance: July 2nd, 2018
Camera-ready deadline: 11:59 PM PST, July 16th, 2018
Workshop date: Sept. 20th, 2018

We look forward to your participation!
If you have any additional questions, please contact Brent Munsell (munsellb at cofc.edu) or Ai Chung (aiwern.chung at childrens.harvard.edu)
=========================
CNI 2018 Chairs
-----------------------------------------
Islem Rekik, University of Dundee
Ai Wern Chung, Boston Children's Hospital, Harvard Medical School
Markus Schirmer, Massachusetts General Hospital, Harvard Medical School
Guorong Wu, University of North Carolina, Chapel Hill
Brent Munsell, College of Charleston
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From b.wiestler at tum.de Thu May 3 03:04:57 2018
From: b.wiestler at tum.de (Bene Wiestler)
Date: Thu, 3 May 2018 09:04:57 +0200
Subject: [Neuroimaging] [DiPy] Registration problem integer image arrays
Message-ID: <2949793d-467d-e926-96ad-4d61a886df51@tum.de>

Hi,

I have a problem with the dipy image registration workflow:
I want to register two anatomical images using linear methods (c_of_mass, translation, rigid). One file is a uint16 data type; the other is saved as float but essentially also contains only integer values. When I load these files with nibabel and convert the data arrays to int16 (both with the .astype(np.int16) method and with numpy.asarray), registration (even c_of_mass) always yields an empty transformed moving image (i.e. one containing only 0s). However, when I save the files using the converted int16 data arrays and open them in e.g. ITK-Snap, they look fine.
When I convert both files to float32, registration works, but it is slow.
Further, when I run your example script for affine registration with the files provided there (Stanford; syn; both int16), it works fine for those files.
Does anybody have an idea what the solution might be, other than just using float32?

Thanks a lot!
Bene

From alexandre.gramfort at inria.fr Thu May 3 04:38:21 2018
From: alexandre.gramfort at inria.fr (Alexandre Gramfort)
Date: Thu, 3 May 2018 10:38:21 +0200
Subject: [Neuroimaging] [ANN] MNE-Python 0.16
Message-ID:

Hi,

We are very pleased to announce the new 0.16 release of MNE-Python (http://martinos.org/mne/stable/).

A few highlights
============
- Add support for metadata in mne.Epochs. See: https://martinos.org/mne/stable/auto_tutorials/plot_metadata_epochs.html
- Add ability to plot whitened data in mne.io.Raw.plot(), mne.Epochs.plot(), mne.Evoked.plot(), and mne.Evoked.plot_topo() https://martinos.org/mne/stable/auto_tutorials/plot_whitened.html
- Add ability to read and write Annotations separate from mne.io.Raw instances via Annotations.save() and read_annotations()
- Add option to unset a montage by passing None to mne.io.Raw.set_montage()
- Add sensor denoising via mne.preprocessing.oversampled_temporal_projection() https://martinos.org/mne/stable/auto_examples/preprocessing/plot_otp.html
- Add support for any data type like sEEG or ECoG in covariance-related functions (estimation, whitening and plotting) as well as in mne.viz.plot_alignment()
- Improve the coregistration tools based on Traits and Mayavi used via mne coreg and mne.gui.coregistration()
- Add eLORETA noise normalization for minimum-norm solvers https://martinos.org/mne/stable/auto_tutorials/plot_mne_dspm_source_localization.html
- Add support for reading Eximia files
- Add the Picard algorithm to perform ICA with mne.preprocessing.ICA https://martinos.org/mne/stable/auto_examples/preprocessing/plot_ica_comparison.html
- Add new DICS beamformer implementation as mne.beamformer.make_dics(), mne.beamformer.apply_dics(), mne.beamformer.apply_dics_csd() and mne.beamformer.apply_dics_epochs() https://martinos.org/mne/stable/auto_tutorials/plot_dics.html

Notable API changes
================
- Channels with unknown locations are now assigned position [np.nan, np.nan, np.nan] instead of [0., 0., 0.],
- Unknown measurement dates are now stored as info['meas_date'] = None rather than using the current date.
- mne.Evoked.plot() will now append the number of epochs averaged for the evoked data in the first plot title
- Changed the behavior of mne.io.Raw.pick_channels() and similar methods to be consistent with mne.pick_channels() to treat the channel list as a set (ignoring order)
- Changed the labeling of some plotting functions to use more standard capitalization and units, e.g. 'Time (s)' instead of 'time [sec]'
- mne.time_frequency.csd_epochs has been refactored into mne.time_frequency.csd_fourier() and mne.time_frequency.csd_multitaper()
- The functions lcmv, lcmv_epochs, and lcmv_raw are now deprecated in favor of mne.beamformer.make_lcmv() and mne.beamformer.apply_lcmv(), mne.beamformer.apply_lcmv_epochs(), and mne.beamformer.apply_lcmv_raw()

For a full list of improvements and API changes, see:
http://martinos.org/mne/stable/whats_new.html#version-0-16

To install the latest release, the following command should do the job:
pip install --upgrade --user mne

As usual we welcome your bug reports, feature requests, critiques, and contributions.

Some links:
- https://github.com/mne-tools/mne-python (code + readme on how to install)
- http://martinos.org/mne/stable/ (full MNE documentation)

Follow us on Twitter: https://twitter.com/mne_news

Regards,
The MNE-Python developers

People who contributed to this release (in alphabetical order):
* Alejandro Weinstein* Alexandre Gramfort* Annalisa Pascarella* Anne-Sophie Dubarry* Britta Westner* Chris Bailey* Chris Holdgraf* Christian Brodbeck* Claire Braboszcz* Clemens Brunner* Daniel McCloy* Denis A.
Engemann* Desislava Petkova* Dominik Krzemiński* Eric Larson* Erik Hornberger* Fede Raimondo* Henrich Kolkhorst* Jean-Remi King* Jen Evans* Joan Massich* Jon Houck* Jona Sassenhagen* Juergen Dammers* Jussi Nurminen* Kambiz Tavabi* Katrin Leinweber* Kostiantyn Maksymenko* Larry Eisenman* Luke Bloy* Mainak Jas* Marijn van Vliet* Mathurin Massias* Mikolaj Magnuski* Nathalie Gayraud* Oleh Kozynets* Phillip Alday* Pierre Ablin* Stefan Appelhoff* Stefan Repplinger* Tommy Clausner* Yaroslav Halchenko*
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pietroastolfi92 at gmail.com Thu May 3 04:57:58 2018
From: pietroastolfi92 at gmail.com (Pietro Astolfi)
Date: Thu, 3 May 2018 10:57:58 +0200
Subject: [Neuroimaging] [DiPy] Registration problem integer image arrays
In-Reply-To: <2949793d-467d-e926-96ad-4d61a886df51@tum.de>
References: <2949793d-467d-e926-96ad-4d61a886df51@tum.de>
Message-ID: <33D5C44E-0861-4562-8751-279CD6A33EFB@gmail.com>

Hi,

You can use uint16 for both images when you load them. If the images are NIfTI files, you don't need to transform them to arrays yourself; use the class method .get_data() to extract the numpy array from the loaded image. Here is a practical working example with c_of_mass:

import nibabel as nib
import numpy as np
from dipy.align.imaffine import transform_centers_of_mass

# static image
img1 = nib.load('img1.nii.gz')
img1_data = img1.get_data()  # now img1_data is a numpy array of float
img1_data = img1_data.astype(np.uint16)
aff1 = img1.affine

# moving image
img2 = nib.load('img2.nii.gz')
img2_data = img2.get_data()  # now img2_data is a numpy array of float
img2_data = img2_data.astype(np.uint16)
aff2 = img2.affine

c_of_mass = transform_centers_of_mass(img1_data, aff1, img2_data, aff2)

transformed = c_of_mass.transform(img2_data)
# note that the transform function casts the input array (img2_data here) to
# np.float64 in order to interpolate, so the returned array (transformed here)
# is a float64

Let me know if this helps you.

Pietro

> On 3 May 2018, at 09:04, Bene Wiestler wrote:
>
> Hi,
>
> I have a problem with the dipy image registration workflow:
> I want to register two anatomical images using linear methods (c_of_mass, translation, rigid). One file is a uint16 data type, the other is saved as a float, however it essentially also contains only integer values. When I load these files with nibabel and convert the data arrays to int16 (both with the .astype(np.int16) method and with numpy.asarray), registration (even c_of_mass) always yields an empty transformed moving image (i.e. just containing 0s). However, when I save the files using the converted int16 data arrays and open them in e.g. ITK-Snap, they look fine.
> When I convert both files to float32, registration works, but is slow.
> Further, when I run your example script for affine registration with the files provided there (Stanford; syn; both int16), it works fine for these files.
> Has anybody any idea what might be a solution other than just float32?
>
> Thanks a lot!
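A generic note on the integer casting discussed in this thread (plain numpy behaviour, independent of dipy): .astype() truncates toward zero rather than rounding, so float voxel values just below a whole number get pushed down; rounding with np.rint before casting avoids that.

```python
import numpy as np

vals = np.array([0.9999, -1.7, 3000.2])

truncated = vals.astype(np.int16)         # truncates toward zero -> [0, -1, 3000]
rounded = np.rint(vals).astype(np.int16)  # round, then cast      -> [1, -2, 3000]
```

This does not by itself explain an all-zero registration result, but `converted.any()` is a cheap sanity check that a converted array still contains the values you expect.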
> Bene
> _______________________________________________
> Neuroimaging mailing list
> Neuroimaging at python.org
> https://mail.python.org/mailman/listinfo/neuroimaging

From b.wiestler at tum.de Thu May 3 05:10:51 2018
From: b.wiestler at tum.de (Bene Wiestler)
Date: Thu, 3 May 2018 11:10:51 +0200
Subject: [Neuroimaging] [DiPy] Registration problem integer image arrays
In-Reply-To: <33D5C44E-0861-4562-8751-279CD6A33EFB@gmail.com>
References: <2949793d-467d-e926-96ad-4d61a886df51@tum.de> <33D5C44E-0861-4562-8751-279CD6A33EFB@gmail.com>
Message-ID: <4fac00b8-3074-6648-f01f-a4ea4c5fa40b@tum.de>

Hi Pietro,

thanks for your reply. In fact, this is exactly my workflow. However, when I call the .transform method of c_of_mass, the result is an all-zero array (both with interp="linear" and interp="nearest").

Cheers,
Bene

Pietro Astolfi wrote:
> Hi,
>
> You can use uint16 for both images when you load them. If the images are NIfTI files, you don't need to transform them to arrays yourself; use the class method .get_data() to extract the numpy array from the loaded image. Here is a practical working example with c_of_mass:
>
> import nibabel as nib
> import numpy as np
> from dipy.align.imaffine import transform_centers_of_mass
>
> # static image
> img1 = nib.load('img1.nii.gz')
> img1_data = img1.get_data()  # now img1_data is a numpy array of float
> img1_data = img1_data.astype(np.uint16)
> aff1 = img1.affine
>
> # moving image
> img2 = nib.load('img2.nii.gz')
> img2_data = img2.get_data()  # now img2_data is a numpy array of float
> img2_data = img2_data.astype(np.uint16)
> aff2 = img2.affine
>
> c_of_mass = transform_centers_of_mass(img1_data, aff1, img2_data, aff2)
>
> transformed = c_of_mass.transform(img2_data)  # note that the transform function casts the input array (img2_data here) to np.float64 in order to interpolate, so the returned array (transformed here) is a float64
>
> Let me know if this helps you.
>
> Pietro
>
>> On 3 May 2018, at 09:04, Bene Wiestler wrote:
>>
>> Hi,
>>
>> I have a problem with the dipy image registration workflow:
>> I want to register two anatomical images using linear methods (c_of_mass, translation, rigid). One file is a uint16 data type, the other is saved as a float, however it essentially also contains only integer values. When I load these files with nibabel and convert the data arrays to int16 (both with the .astype(np.int16) method and with numpy.asarray), registration (even c_of_mass) always yields an empty transformed moving image (i.e. just containing 0s). However, when I save the files using the converted int16 data arrays and open them in e.g. ITK-Snap, they look fine.
>> When I convert both files to float32, registration works, but is slow.
>> Further, when I run your example script for affine registration with the files provided there (Stanford; syn; both int16), it works fine for these files.
>> Has anybody any idea what might be a solution other than just float32?
>>
>> Thanks a lot!
>>
>> Bene
>> _______________________________________________
>> Neuroimaging mailing list
>> Neuroimaging at python.org
>> https://mail.python.org/mailman/listinfo/neuroimaging

--
Dr. med. Benedikt Wiestler
Abteilung für Neuroradiologie
Klinikum rechts der Isar, TU München
Ismaninger Str. 22
81675 München
goo.gl/178PRF

From gael.varoquaux at normalesup.org Sat May 5 09:16:24 2018
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Sat, 5 May 2018 15:16:24 +0200
Subject: [Neuroimaging] Announcing IMPAC: an IMaging-PsychiAtry Challenge, using data-science to predict autism from brain imaging
Message-ID: <20180505131624.yo5elgfgg2uf2kqd@phare.normalesup.org>

Dear colleagues,

It is my pleasure to announce IMPAC: an IMaging-PsychiAtry Challenge, using data science to predict autism from brain imaging.
https://paris-saclay-cds.github.io/autism_challenge/

This is a machine-learning challenge on brain-imaging data to achieve the best prediction of autism spectrum disorder diagnostic status. We are providing the largest cohort so far for learning such predictive biomarkers, with more than 2000 individuals. There is a total of 9000 euros in prizes to win for the best prediction. Prediction quality will be measured on a large hidden test set, to ensure fairness. We provide a simple starting kit to serve as a proof of feasibility.

We are excited to see what the community will come up with in terms of predictive models and of scores.

Best,
Gaël

--
Gael Varoquaux
Senior Researcher, INRIA Parietal
NeuroSpin/CEA Saclay, Bat 145, 91191 Gif-sur-Yvette France
Phone: ++ 33-1-69-08-79-68
http://gael-varoquaux.info http://twitter.com/GaelVaroquaux

From elef at indiana.edu Mon May 7 14:29:52 2018
From: elef at indiana.edu (Eleftherios Garyfallidis)
Date: Mon, 07 May 2018 18:29:52 +0000
Subject: [Neuroimaging] [DIPY] Help with DIPY at OHBM
Message-ID:

Hi all,

Kesshi Jordan will be running a tutorial about DIPY at OHBM this summer. If any of the DIPY developers are planning to attend, please coordinate with Kesshi and try to help her during her tutorial. This year we will have many participants at ISMRM but not at OHBM, and I hope there are some souls out there who plan to go to OHBM. I also hope the organizers will stop scheduling these two events at the same time; it was much nicer in the past, when we could easily attend both conferences.

Let Kesshi know asap if you can help.

Best,
Eleftherios
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jcohen at polymtl.ca Tue May 8 03:41:28 2018
From: jcohen at polymtl.ca (Julien Cohen-Adad)
Date: Tue, 8 May 2018 16:41:28 +0900
Subject: [Neuroimaging] SCT tutorial on June 22nd in Paris
Message-ID:

Dear Neuroimaging community,

We will organize a tutorial on the Spinal Cord Toolbox (SCT) at the 5th Spinal Cord MRI Workshop, on June 22nd in Paris. The course is free, but please register here to help us organize: https://goo.gl/p277Rh

The full program is available here: https://goo.gl/TA8Xgu

Hoping to see you all in Paris!

Kind regards,
Julien

--
Julien Cohen-Adad, PhD
Associate Professor, Polytechnique Montreal
Associate Director, Functional Neuroimaging Unit, University of Montreal
Canada Research Chair in Quantitative Magnetic Resonance Imaging
Phone: 514 340 5121 (office: 2264); Skype: jcohenadad
Web: www.neuro.polymtl.ca
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From tartuz at gmail.com Tue May 8 12:34:31 2018
From: tartuz at gmail.com (Gabriele Arnulfo)
Date: Tue, 8 May 2018 18:34:31 +0200
Subject: [Neuroimaging] [nipy/PySurfer] - add_foci in single-subject geometry
Message-ID:

Dear List,

I have been trying to plot channel positions for intracerebral recordings with respect to single-subject anatomy, i.e. the pial surface. Following the tutorial at https://pysurfer.github.io/auto_examples/plot_foci.html#sphx-glr-auto-examples-plot-foci-py I was able to plot foci, but their locations look far from correct.

My contact positions are referenced to MRI space, but I know the transformation to map tkRAS to scannerRAS based on the FreeSurfer documentation. After applying it to my points, they still appear in the wrong place. In the tutorial, points are expressed as MNI coordinates.

Can someone explain to me which geometrical space the surfaces are rendered in by PySurfer, with respect to the original FreeSurfer surface space?

Thanks a lot for your time.
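An aside on the tkRAS-to-scannerRAS step mentioned in this question: applying a 4x4 affine to an (N, 3) set of contact coordinates is easy to get subtly wrong (transposed matrix, missing homogeneous coordinate), so a small generic helper like the sketch below can help rule that out. nibabel ships the same operation as nibabel.affines.apply_affine.

```python
import numpy as np

def apply_affine(affine, points):
    """Apply a 4x4 affine to an (N, 3) array of xyz coordinates."""
    points = np.asarray(points, dtype=float)
    ones = np.ones((points.shape[0], 1))
    homogeneous = np.hstack([points, ones])     # (N, 4) homogeneous coords
    return (affine @ homogeneous.T).T[:, :3]    # back to (N, 3)

# Sanity check with a pure translation: every point should shift by (10, -5, 2).
affine = np.eye(4)
affine[:3, 3] = [10.0, -5.0, 2.0]
moved = apply_affine(affine, [[0.0, 0.0, 0.0], [1.0, 2.0, 3.0]])
```

The `affine` here is a made-up translation purely for the sanity check; in practice it would be the tkRAS-to-scannerRAS matrix from the FreeSurfer documentation.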
All the best, -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexandre.gramfort at inria.fr Tue May 8 16:13:27 2018 From: alexandre.gramfort at inria.fr (Alexandre Gramfort) Date: Tue, 8 May 2018 22:13:27 +0200 Subject: [Neuroimaging] [nipy/PySurfer] - add_foci in single-subject geometry In-Reply-To: References: Message-ID: hi, you might want to look at this example: https://martinos.org/mne/dev/auto_tutorials/plot_ecog.html Alex On Tue, May 8, 2018 at 6:34 PM, Gabriele Arnulfo wrote: > Dear List, > > I have been trying to plot channel positions for intracerebral recordings in > respect to single subject anatomy - i.e. pial surface. > Following the tutorial at > https://pysurfer.github.io/auto_examples/plot_foci.html#sphx-glr-auto-examples-plot-foci-py > I was able to plot foci in locations but those look far from being correct. > > My contact positions are referenced to MRI-space but I know the > transformation to map tkRAS to scannerRAS based on Freesurfer documentation. > After applying that to my points, still those look in the wrong place. From > the tutorial, points are expressed as MNI coordinates. > > Can someone explain me which geometrical space the surfaces are rendered in > PySurfer with respect to original Freesurfer surface-space? > > Thanks a lot for your time. > > All the best, > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > From karl.spuhler at stonybrook.edu Wed May 9 14:47:28 2018 From: karl.spuhler at stonybrook.edu (Karl Spuhler) Date: Wed, 9 May 2018 14:47:28 -0400 Subject: [Neuroimaging] Utility to read ECAT raw data? Message-ID: Hello all, I am wondering if there exists a pre-written Python utility which would allow me to read in ECAT raw data (.S, .a, .N ). I cannot find anything, only for image files. Thank you! 
Karl

From Albert.Montillo at UTSouthwestern.edu Thu May 10 16:48:48 2018
From: Albert.Montillo at UTSouthwestern.edu (Albert Montillo)
Date: Thu, 10 May 2018 20:48:48 +0000
Subject: [Neuroimaging] Scientific programmer position in neuroimage analysis and machine learning (University of Texas Southwestern, Dallas, USA)
Message-ID:

Scientific programmer position in neuroimage analysis and machine learning (University of Texas Southwestern, Dallas, USA)

The laboratory of Albert Montillo (http://www.utsouthwestern.edu/labs/montillo) in the Bioinformatics Department of UT Southwestern Medical Center is seeking a full-time Scientific Programmer for studies of mental and neurodevelopmental disorders and neurodegenerative diseases. The Scientific Programmer will use multimodal MRI and MEG/EEG data to study structural and functional circuit changes, and PET/SPECT and CT to study metabolic and pathophysiological changes associated with diagnosis and prognosis. The main responsibilities of the position include implementing and optimizing image processing, computational, and analysis pipelines for large-scale multimodal brain imaging data and corresponding clinical data.

The lab is an interactive and collaborative team directed by Albert Montillo, Ph.D., conducting cutting-edge research to advance the theory and application of machine learning for the analysis of medical images. The lab addresses unmet clinical needs by forming predictive models that make diagnoses and prognoses more precise, and advances neuroscience by furthering the understanding of mechanisms in disease and intervention. You will work directly with him and an array of principal investigators, collaborators and trainees. Medical image analysis software the lab has developed includes machine learning-based methods for labeling structures throughout the brain (parcellation), versions of which are used worldwide and are FDA approved. The lab has built deep learning methods to label networks in resting-state fMRI and detect artifacts in MEG. The lab has pioneered deep learning decision forests that increase prediction accuracy while reducing prediction time, and outcome prediction methods using advanced brain connectomics.

Job responsibilities
* Design, implement and optimize image processing, computational, and analysis pipelines for large-scale multimodal brain imaging, including resting and task-based fMRI, diffusion MRI, PET/SPECT, CT, and MEG/EEG data on our high-performance compute cluster.
* Implement machine learning models to automate preprocessing, quality control, and computational data analysis of imaging, metabolic, genetic, behavioral, and clinical data.
* Integrate third-party pipelines with tailored in-house pipelines.
* Build tools to organize and visualize the growing database of analysis results.
* Contribute to the writing of journal and conference papers.
* Attend lab meetings; stay abreast of developments in image analysis and machine learning.
* Coordinate data integration from ongoing studies.
* Teach lab members about image processing steps, outputs, and quality control approaches.
* A minimum three-year commitment is strongly encouraged.
* Anticipated start date: immediate

Experience of ideal applicants:
* B.A. or B.S. degree in Computer Science, Electrical Engineering, Biomedical Engineering or a related field with three (3) years of scientific software development; Master's or Ph.D. preferred. Software development experience on high-performance compute clusters or GPU-based machine learning is a strong plus. Will consider a record of success in publishing computational results in lieu of experience.
* Familiarity with at least 1 image data type: MRI, PET/SPECT, CT, MEG/EEG & format: NIFTI, DICOM.
* Experience in at least 1 neuroimage analysis pipeline: NiPype, SPM, FSL, AFNI, FreeSurfer; for diffusion MRI: Camino, DTI-TK, DiPy, TrackVis, DTI/DSI studio, ExploreDTI; for MEG/EEG: Brainstorm, EEGLAB, FieldTrip, MNE, NUTMEG.
* Experience developing image processing or image analysis software. Solid understanding of standard CS data structures.
* At least 2 years of experience in Linux; Python and 1 other language (Matlab, R, C/C++).
* Practical ML experience applying at least 1 of the following: deep learning neural nets (RNN, CNN, DNN, UNet/VNet), DCGAN, deep reinforcement learning, transfer learning, autoencoders; classical or shallow machine learning; image/object recognition; time series analysis.
* Experience in at least one of these Python libraries: Keras, scikit-learn, TensorFlow, PyTorch, Nilearn.
* Experience in brain connectivity, graph theory, and genomic data analyses are significant advantages.
* Optional but helpful: Experience in C/C++ (ITK library), cMake, software development. Familiarity with XNAT.

Salary compensation is very competitive and enhanced by the low cost of living in Dallas. Benefits include health insurance. The candidate will also benefit from membership in vibrant national and international research communities through our on-going collaborations with UCLA, UCSF, UPenn, Stanford, Philips and Siemens Research, as well as a large local neuroscience community through UTSW's O'Donnell Brain Institute, and UTD's Centers for Vital Longevity, Brain Health, and Brain Performance.

Please apply by email to Dr. Montillo [Albert.Montillo at UTSouthwestern.edu] and include your CV and the names and addresses of three references. Use the subject line "ScientificProgrammer: ".

The Montillo lab is co-located within the Bioinformatics Department on UT Southwestern's south campus and embedded in the Radiology Department on north campus.
We are an integral part of the Advanced Imaging Research Center, and work closely with research groups within Neuroscience, Neurology, Psychiatry, Radiation Oncology, and Surgery. Montillo lab members have access to considerable computational resources, including the >6,800-core cluster with >8 Petabytes of storage available through UTSW's high-performance infrastructure (BioHPC - https://portal.biohpc.swmed.edu). Future lab members will have the opportunity to work on a broad range of image analysis, machine learning and modeling problems on interdisciplinary teams, and participate in all aspects of method development, software implementation, data analysis, and validation with lab collaborators.

Albert Montillo, Ph.D.
Assistant Professor, Director, Deep Learning for Precision Health Laboratory
Departments of Bioinformatics, Radiology and the Advanced Imaging Research Center
University of Texas Southwestern Medical Center
5323 Harry Hines Blvd. Dallas, TX 75390-8579
Albert.Montillo at UTSouthwestern.edu
http://www.utsouthwestern.edu/labs/montillo

UT Southwestern Medical Center is an Affirmative Action/Equal Opportunity Employer. Women, minorities, veterans and individuals with disabilities are encouraged to apply.
________________________________
UT Southwestern Medical Center
The future of medicine, today.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From garyfallidis at gmail.com Mon May 14 14:32:53 2018
From: garyfallidis at gmail.com (Eleftherios Garyfallidis)
Date: Mon, 14 May 2018 14:32:53 -0400
Subject: [Neuroimaging] ANN: DIPY 0.14.0
In-Reply-To:
References:
Message-ID:

Hello all!

We are excited to announce a new *major release* of Diffusion Imaging in Python (DIPY).

*DIPY 0.14 (Tuesday, 1st May 2018)*

This release received contributions from *24 developers*. A warm thank you to each one of you for your contribution.
The complete release notes are available at: http://nipy.org/dipy/release0.14.html

*Highlights* of this release include:
- *RecoBundles*: anatomically relevant segmentation of bundles
- New super-fast clustering algorithm: *QuickBundlesX*
- New tracking algorithm: *Particle Filtering Tracking*
- New tracking algorithm: *Probabilistic Residual Bootstrap Tracking*
- New API for reading, saving and processing tractograms
- Fiber ORientation Estimated using Continuous Axially Symmetric Tensors (*FORECAST*)
- New command line interfaces
- Deprecated fvtk (old visualization framework)
- A range of new visualization improvements
- *Large documentation update*

To upgrade, run the following command in your terminal:
pip install --upgrade dipy
or
conda install -c conda-forge dipy

This version of DIPY depends on recent versions of nibabel (2.1.0+).

For any questions go to http://dipy.org, send an e-mail to neuroimaging at python.org, or ask in our interactive chat room at https://gitter.im/nipy/dipy

On behalf of the DIPY developers,
Eleftherios Garyfallidis, Ariel Rokem, Serge Koudoro
http://dipy.org/developers.html
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Farshid.Sepehrband at loni.usc.edu Tue May 15 08:42:27 2018
From: Farshid.Sepehrband at loni.usc.edu (Farshid Sepehrband)
Date: Tue, 15 May 2018 12:42:27 +0000
Subject: [Neuroimaging] MICCAI 2018 Workshop on Computational Diffusion MRI (CDMRI'18)
Message-ID:

CALL FOR PAPERS AND PARTICIPATION
**************************************************************
2018 MICCAI Workshop on Computational Diffusion MRI (CDMRI'18)
2018 MICCAI Multi-shell Diffusion MRI Harmonisation Challenge (MUSHAC)

When and Where: Thursday, September 20th 2018 -
Granada, Spain

Websites:
http://cmic.cs.ucl.ac.uk/cdmri18 (Workshop)
http://projects.iq.harvard.edu/cdmri2018/challenge (Challenge)

Deadline for Paper Submission: Monday, July 2nd 2018
Deadline for Challenge Submissions: Monday, August 20th 2018
**************************************************************

Over the last two decades, interest in diffusion MRI has exploded. This non-invasive technique provides unique measurements sensitive to the microstructure of living tissue and enables in-vivo connectivity mapping of the brain. As microscopic tissue alterations are often the earliest signs of disease or regeneration, the variety of clinical applications is expanding rapidly and includes detection of lesions and damaged tissue, grading of cancerous tumours, prognosis of functional impairment and neurosurgical planning. Moreover, fibre tractography gives fundamental new insights into connectional neuroanatomy.

Computational techniques are key to the continued success and development of diffusion MRI and to its widespread transfer into the clinic. New processing methods are essential for addressing challenges at each stage of the diffusion pipeline: acquisition, reconstruction, modelling and model fitting, image processing, fibre tracking, connectivity mapping, visualisation, group studies and inference. This full-day workshop, now in its eleventh edition, will give a snapshot of the current state of the art. For further information please visit the website: http://cmic.cs.ucl.ac.uk/cdmri18

Part of this workshop is the "Multi-shell Diffusion MRI Harmonisation Challenge" (MUSHAC), whose aim is the systematic evaluation of the performance of algorithms that enable the harmonisation of multi-shell diffusion MRI data across scanners. Harmonisation consists of making data sets acquired with different scanners/protocols as comparable as possible, and has become a pressing need in the era of Big Data.
Combining data from several scanners would dramatically increase the statistical power and sensitivity of clinical studies, with obvious benefits in clinical trials and multi-centre research. This challenge extends last year's challenge, which only considered single-shell data, to advanced multi-shell experiments. Registration to obtain the data is now open; the challenge deadline is August 20th, 2018. More information can be found at the website: https://projects.iq.harvard.edu/cdmri2018/challenge

We look forward to seeing you in Granada! See you soon!

Elisenda Bonet-Carne
Francesco Grussu
Lipeng Ning
Farshid Sepehrband
Chantal Tax
(CDMRI'18 Organising Committee, e-mail: cdmri18 at cs.ucl.ac.uk)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jdeaton at stanford.edu Fri May 18 03:40:19 2018
From: jdeaton at stanford.edu (Jon Deaton)
Date: Fri, 18 May 2018 00:40:19 -0700
Subject: [Neuroimaging] How to read NifTi files using file-like objects?
Message-ID: 

Hello NiBabel mailing list,

I am a student working on the BraTS tumor segmentation challenge. I am using NiBabel to read images, but I am having difficulty loading NIfTI-formatted image files (*.nii) from file-like objects rather than by file name. So far I have been using the typical interface in nibabel for loading these files into memory:

import nibabel
data = nibabel.load(image_filename).get_data()

However, the trouble is that nibabel.load accepts filenames but not file-like objects.
I need to use a file-like object because I am loading the data from Google Cloud Storage inside Google ML Engine, which requires opening files using tensorflow.python.lib.io.file_io like so:

from tensorflow.python.lib.io import file_io

def load_nifti(image_filename):
    with file_io.FileIO(image_filename, mode='r') as fileobj:
        return  # something that extracts the NifTI data from fileobj

I have had a lot of difficulty finding the correct interface in NiBabel to do this, so I was hoping that one of you might be able to help me accomplish this task.

Thank you,
Jon Deaton
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From satra at mit.edu Sat May 19 10:20:09 2018
From: satra at mit.edu (Satrajit Ghosh)
Date: Sat, 19 May 2018 10:20:09 -0400
Subject: [Neuroimaging] How to read NifTi files using file-like objects?
In-Reply-To: 
References: 
Message-ID: 

hi jon,

there are a few projects out there that use tensorflow with brain images. one that we are involved with is this one: https://github.com/kaczmarj/nobrainer, which uses nibabel to read and chunk volumes on the fly. a previous version used to convert the volumes to hdf5 files, but the current one just reads directly. feel free to use the code in the project or submit issues on the github repo.

DeepNeuro and NiftyNet also have specific models for glioma segmentation:
https://github.com/QTIM-Lab/DeepNeuro
https://github.com/NifTK/NiftyNet

cheers,

satra

On Fri, May 18, 2018 at 3:40 AM, Jon Deaton wrote:
> Hello NiBabel mailing list,
>
> I am a student working on the BraTS tumor segmentation challenge. I am
> using NiBabel to read images but I am having difficulty loading NIfTI
> formatted image files (*.nii) from file-like objects rather than by file
> name.
> So far I have been using the typical interface in nibabel for
> loading these files into memory:
>
> import nibabel
> data = nibabel.load(image_filename).get_data()
>
> However, the trouble is that nibabel.load accepts filenames but not
> file-like objects. I need to use a file-like object because I am loading
> the data from Google Cloud Storage inside Google ML Engine, which requires
> opening files using tensorflow.python.lib.io.file_io like so:
>
> from tensorflow.python.lib.io import file_io
>
> def load_nifti(image_filename):
>     with file_io.FileIO(image_filename, mode='r') as fileobj:
>         return  # something that extracts the NifTI data from fileobj
>
> I have had a lot of difficulty finding the correct interface in NiBabel to
> do this so I was hoping that one of you might be able to help me accomplish
> this task
>
> Thank you,
> Jon Deaton
>
> _______________________________________________
> Neuroimaging mailing list
> Neuroimaging at python.org
> https://mail.python.org/mailman/listinfo/neuroimaging
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From olivetti at fbk.eu Thu May 24 09:03:46 2018
From: olivetti at fbk.eu (Emanuele Olivetti)
Date: Thu, 24 May 2018 15:03:46 +0200
Subject: [Neuroimaging] PhD Position available: Computational methods for longitudinal studies of structural brain connectivity, Deadline 31/5/18
Message-ID: 

PhD position available (online application details below)

Title: Computational methods for longitudinal studies of structural brain connectivity

Topic: machine learning / pattern recognition methods for diffusion MRI data analysis

Description: Diffusion MRI provides indirect information about the anatomical structure of the white matter of the brain. Neurological diseases and brain disorders may introduce alterations in such structures. Additionally, after neurosurgery and cognitive rehabilitation, there may be reorganization of the white matter.
The goal of this project is to design computational methods to detect and characterize white matter changes over time, disentangling the intrinsic variability of brain recordings from actual anatomical changes, in order to support neuroscientific and clinical inference.

References:
- https://doi.org/10.3389/fnins.2016.00554
- https://doi.org/10.3389/fnins.2017.00754

Emanuele Olivetti

---------- Forwarded message ----------
From: PhD CIMeC
Date: Thu, May 24, 2018 at 10:04 AM
Subject: PhD Position available: Computational methods for longitudinal studies of structural brain connectivity, Deadline 31/5/18
To: 

CIMeC is pleased to announce the opening of a call for 11 fellowships for our international PhD Program in Cognitive and Brain Sciences (CIMeC), commencing November 2018 at the University of Trento, Italy. As you will see from the links below, the PhD call is open to several fields of cognitive neuroscience research. Candidates interested in *Computational methods for longitudinal studies of structural brain connectivity* should contact *Emanuele Olivetti* at olivetti at fbk.eu *immediately*.

- *Admissions*: http://www.unitn.it/drcimec/admission
- PhD research areas outline: http://www.unitn.it/drcimec/research-topics
- PhD Application *summary*: http://www.unitn.it/drcimec/node/118
- PhD *Application* link: https://webapps.unitn.it/Apply/en/MyUnitn/Home/669/dott/cbs34
- PhD Application *Deadline*: *Thursday May 31, 2018, at 4pm (Italian time)*
- *CIMeC*: http://www.unitn.it/cimec

This year's research topics proposed by the Doctoral Program Committee for 5 UNITN grants are visible here: http://www.unitn.it/drcimec/research-topics, while the 6 topic-specific grants' details can be found here: http://www.unitn.it/drcimec/topic-specific-grants

*The CIMeC, its doctoral program and UNITN in numbers:*
- 2018: UNITrento ranks among top Italian Universities in THE - Times Higher Education
- *4-yr program* (Nov. 1, 2018 - Oct. 31, 2022)
- Why a 4-yr program
- Courses are given in English
- 11 positions, 100% funded
- Salary: starting at approximately €1,200/mo., net
- Winners receive a €4,887 tax-free research/mobility budget
- Why choose UNITRENTO

Please feel free to browse the FAQ pages on our website for further information, or contact the Director of the Programme, Prof. Francesco Pavani: phd.cimec at unitn.it.

Regards,
Leah Mercanti

--
Leah Mercanti
CIMeC PhD Program Administrator
--------------------------------------------------------
CIMeC - Center for Mind/Brain Sciences
University of Trento
Corso Bettini, 31
I-38068, Rovereto (TN)
Tel: +39 0464 80 8617
Fax: +39 0461 28 8690
Skypename: lleahhatwork
Email: phd.cimec at unitn.it
http://www.unitn.it/drcimec/

--
The information transmitted is intended only for the person or entity to which it is addressed and may contain confidential and/or privileged material. If you received this in error, please contact the sender and delete the material.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gabriele.arnulfo at unige.it Fri May 25 10:29:46 2018
From: gabriele.arnulfo at unige.it (Gabriele Arnulfo)
Date: Fri, 25 May 2018 16:29:46 +0200
Subject: [Neuroimaging] 7th international Neuroengineering summer school
Message-ID: <640D3E7F-B9F5-4BE8-AF19-7BE28BBE56E0@unige.it>

Sorry for multiple posting. Please find attached below the announcement of the upcoming 7th international Neuroengineering summer school.

On behalf of the organising committee,

Gabriele Arnulfo, Ph.D.
Assistant Professor
Dept. Informatics, Bioengineering, Robotics and Systems Engineering
University of Genova
Viale Causa 13, 16145 Genova (IT)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Flyer_School.pdf
Type: application/pdf
Size: 197988 bytes
Desc: not available
URL: 
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From wenlin.wu at duke.edu Wed May 30 13:58:49 2018
From: wenlin.wu at duke.edu (Wenlin Wu)
Date: Wed, 30 May 2018 17:58:49 +0000
Subject: [Neuroimaging] nibabel issues
Message-ID: 

Hi Nibabel experts,

Hope you are doing great this week. I am using the Nibabel package and ran into a problem. Specifically, when I run

img = nib.load('data/N54917_nii4D_RAS.nii')

today I got the warning:

pixdim[0] (qfac) should be 1 (default) or -1; setting qfac to 1
INFO:nibabel.global:pixdim[0] (qfac) should be 1 (default) or -1; setting qfac to 1

But I did not get the warning yesterday with the same code and the same file. Do you have any idea what happened, and will this have an effect on the results?

Best,
Wenlin
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From arnaud.bore at gmail.com Thu May 31 16:15:44 2018
From: arnaud.bore at gmail.com (Arnaud Boré)
Date: Thu, 31 May 2018 16:15:44 -0400
Subject: [Neuroimaging] Postdoctoral Fellowship in Spinal Cord and Parkinson's Disease
Message-ID: 

*Postdoctoral Fellowship in Spinal Cord and Parkinson's Disease*

We are looking to recruit a post-doctoral fellow in Dr. Julien Doyon's *Motor Learning and Plasticity Laboratory* at McGill University. The suitable candidate will join a team of post-doctoral fellows and PhD students who are conducting neuroimaging research in motor learning, both fundamental and clinical.

For the current job opening, we seek to recruit a post-doctoral fellow who will *conduct neuroimaging and behavioral studies on healthy individuals and Parkinson's Disease (PD) patients*. These studies are part of a research program aiming to characterize motor-learning-related plasticity in both brain and spinal cord, as well as to identify neuronal correlates of early stages of PD at both spinal and supra-spinal levels. The position is based at the Montreal Neurological Institute (MNI).

The ideal candidate is expected to take the lead on the methodological aspects of the study, addressing specifically the challenges related to functional neuroimaging of the spinal cord and to the analysis of complex data sets. The fellow will be involved in all stages of the research projects, from experimental design to data analysis and manuscript writing.

The following requirements are mandatory:
- PhD degree in neuroscience or a related field;
- Experience with functional neuroimaging in human participants and with neuroimaging data analysis techniques and software (e.g.
SPM, FSL or similar);
- Ability to work independently and to collaborate with other team members;
- Excellent organizational and social skills, especially when dealing with patient participants;
- An established publication record attesting to the above-mentioned requirements;
- Good English proficiency (especially scientific writing and oral communication).

The following requirements are assets:
- Experience with spinal cord imaging;
- Experience with multivariate data analysis approaches as implemented in established neuroimaging data analysis software;
- Good proficiency with programming (such as Python or Matlab);
- French proficiency;
- Research experience with PD patients (or other movement-related disorders);
- Availability to start as soon as possible.

If you are interested in this position, please send your CV, a cover letter and the contact information of two individuals who can provide references to the following e-mail address: *francine.belanger at mcgill.ca*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: