From AiWern.Chung at childrens.harvard.edu Tue Jul 2 11:21:56 2019 From: AiWern.Chung at childrens.harvard.edu (Chung, Ai Wern) Date: Tue, 2 Jul 2019 15:21:56 +0000 Subject: [Neuroimaging] CALL for Papers and Challengers: MICCAI 2019 Connectomics in NeuroImaging Workshop and Challenge Message-ID: <1562080916519.58722@childrens.harvard.edu>

Dear all, This is a call for full-length papers and challenge submissions to our 3rd International Workshop on Connectomics in NeuroImaging (CNI 2019) and the Transfer-Learning CNI Challenge 2019, which will be held in parallel with the 22nd International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2019) in Shenzhen, China. CNI 2019 will be a full day of all things connectomics and will take place October 13th, 2019.

*** CNI Workshop Call for Papers ***
Our topics of interest cover (but are not limited to):
(1) New developments in connectome construction from different imaging modalities;
(2) Development of data-driven techniques to identify biomarkers in connectome data;
(3) Machine learning algorithms and connectome data analysis;
(4) Brain network modeling and formal conceptual models of connectome data;
(5) Evaluation and validation of connectome models.
If you have research that fits the scope of our workshop, detailed on our website (http://www.brainconnectivity.net/workshop), we encourage you to submit a paper!

*** CNI Call for Challengers ***
To address the issues of generalizability and clinical relevance for functional connectomes, challengers can leverage a unique resting-state fMRI (rsfMRI) dataset of attention deficit hyperactivity disorder (ADHD) and neurotypical controls (NC) to design a classification framework that predicts subject diagnosis (ADHD vs. NC) from brain connectivity data. In a surprise twist, we will also evaluate classification performance on a related clinical population with an ADHD comorbidity.
This challenge will allow us to assess (1) whether the method is extracting functional connectivity patterns related to ADHD symptomatology, and (2) how much of this information "transfers" between clinical populations.

*** Why submit to the CNI Workshop and Challenge? ***
- Two keynote speakers, with oral presentations and poster sessions to provide you with ample opportunity for exchanges and discussions;
- Accepted papers will be published in LNCS proceedings, and authors will be invited to submit to a special journal issue;
- Best Paper and Poster Awards will be presented, along with sponsored prizes for Challenge winners.

*** Important dates for CNI workshop ***
- Submission deadline: July 28th, 2019, 23:59 EST
- Notification of acceptance: August 13th, 2019
- Camera-ready deadline: August 20th, 2019, 23:59 EST
- Submission website: https://cmt3.research.microsoft.com/CNI2019

*** Important dates for CNI Challenge ***
- Training data is now released
- Validation data release: July 22nd, 2019
- Submission deadline: August 15th, 2019
- Submission website: https://cmt3.research.microsoft.com/CNIChallenge2019

For more information, visit the CNI site http://www.brainconnectivity.net or see our flyer http://basira-lab.com/cfp_cni2019/. We look forward to your participation! If you have any additional questions, please contact Ai Chung (aiwern.chung at childrens.harvard.edu) or Markus Schirmer (mschirmer1 at mgh.harvard.edu)

CNI 2019 Chairs ------------------------- Ai Wern Chung, Boston Children's Hospital, Harvard Medical School Archana Venkataraman, Johns Hopkins University Islem Rekik, Istanbul Technical University Markus Schirmer, Harvard Medical School Minjeong Kim, University of North Carolina at Greensboro CNI website: http://www.brainconnectivity.net -------------- next part -------------- An HTML attachment was scrubbed...
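To make the challenge task concrete, a minimal baseline of the kind the call invites — logistic regression on vectorised connectivity matrices — can be sketched as follows. This is only an illustration: the synthetic random data and the subject/ROI counts are hypothetical stand-ins for the real rsfMRI release.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects, n_rois = 40, 20

# Synthetic stand-ins for per-subject connectivity matrices and diagnoses.
conn = rng.normal(size=(n_subjects, n_rois, n_rois))
conn = (conn + conn.transpose(0, 2, 1)) / 2        # connectivity is symmetric
y = rng.integers(0, 2, size=n_subjects)            # 0 = NC, 1 = ADHD

# Vectorise the upper triangle of each matrix as the feature vector.
iu = np.triu_indices(n_rois, k=1)
X = conn[:, iu[0], iu[1]]

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(X.shape, scores.mean())
```

On pure noise this sits near chance; the question the challenge actually poses is how far above chance a model trained on ADHD vs. NC stays when evaluated on the comorbid population.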
URL: From Bert.Han at jc-med.net Wed Jul 3 05:11:03 2019 From: Bert.Han at jc-med.net (Bert.Han) Date: Wed, 3 Jul 2019 17:11:03 +0800 Subject: [Neuroimaging] convert Trk file to VTK file Message-ID: <28ACAD9B-8282-40CA-95F1-67A1CDA04EB7@jc-med.net> An HTML attachment was scrubbed... URL: From nico.hoffmann at tu-dresden.de Wed Jul 3 13:39:30 2019 From: nico.hoffmann at tu-dresden.de (Nico Hoffmann) Date: Wed, 3 Jul 2019 19:39:30 +0200 Subject: [Neuroimaging] convert Trk file to VTK file In-Reply-To: <28ACAD9B-8282-40CA-95F1-67A1CDA04EB7@jc-med.net> References: <28ACAD9B-8282-40CA-95F1-67A1CDA04EB7@jc-med.net> Message-ID:

Dear Bert, the attached python snippet allows you to convert trk to vtk, based on vtk and dipy. You can add scalar information to the vtk polydata array with polydata.GetPointData().AddArray(scalars), where 'scalars' is a vtk.vtkFloatArray() containing one scalar per vtkPoint. That value can be used to color your streamlines (and even every point on the streamlines) in 3DSlicer (at least). Best, Nico

---
import vtk
from dipy.tracking.streamline import Streamlines
from dipy.io.streamline import load_trk

def saveStreamlinesVTK(streamlines, pStreamlines):
    # Collect every streamline as one VTK line cell over a shared point set.
    polydata = vtk.vtkPolyData()
    lines = vtk.vtkCellArray()
    points = vtk.vtkPoints()
    ptCtr = 0
    for i in range(len(streamlines)):
        if (i % 10000) == 0:
            print(str(i) + "/" + str(len(streamlines)))
        line = vtk.vtkLine()
        line.GetPointIds().SetNumberOfIds(len(streamlines[i]))
        for j in range(len(streamlines[i])):
            points.InsertNextPoint(streamlines[i][j])
            linePts = line.GetPointIds()
            linePts.SetId(j, ptCtr)
            ptCtr += 1
        lines.InsertNextCell(line)
    polydata.SetLines(lines)
    polydata.SetPoints(points)
    writer = vtk.vtkPolyDataWriter()
    writer.SetFileName(pStreamlines)
    writer.SetInputData(polydata)
    writer.Write()
    print("Wrote streamlines to " + writer.GetFileName())

streams, hdr = load_trk(fname)   # fname: path to your .trk file
streamlines = Streamlines(streams)
saveStreamlinesVTK(streamlines, "sl.vtk")
---

-- Dr. rer. nat.
Nico Hoffmann Computational Radiation Physics Helmholtz-Zentrum Dresden-Rossendorf (HZDR) Bautzner Landstraße 400 01328 Dresden T +49 351 260 3668

From mikeltv95 at gmail.com Fri Jul 5 19:37:19 2019 From: mikeltv95 at gmail.com (Mike V) Date: Sat, 6 Jul 2019 01:37:19 +0200 Subject: [Neuroimaging] GC analysis with nitime Message-ID:

Hello, I'm following the Granger Causality tutorial on the nitime webpage, but I have a few questions:
1- In the tutorial the values f_ub = 0.15 and f_lb = 0.02 are used for the bounds on the frequency band of interest. Are these values recommended for all resting-state datasets or do they depend on the particular acquisition parameters?
2- I do not understand the causality plots (fig03 and fig04); they are both blank. What does it mean? I was expecting some variation like in the coherence and correlation matrices...
3- The tutorial shows the analysis of one single subject. How can I compute group statistics? Can I simply extract the values stored in g1 and then run a one-sample t-test across all subjects' g1 scores?
4- How can I get GrangerAnalyzer to display significance?
5- I noticed that the tutorial data has been motion corrected only. Is it not necessary to correct for other sources of noise, like physiological noise and linear drift, before computing GC?
Many thanks in advance! Best regards, Mike -------------- next part -------------- An HTML attachment was scrubbed... URL: From bennet at umich.edu Sat Jul 6 08:18:05 2019 From: bennet at umich.edu (Bennet Fauber) Date: Sat, 6 Jul 2019 08:18:05 -0400 Subject: [Neuroimaging] GC analysis with nitime In-Reply-To: References: Message-ID:

Mike, I reported the blank plots in an Issue at GitHub. I think there may be more that is needed before they make a new release and refresh the tutorial page. https://github.com/nipy/nitime/pull/176 https://github.com/nipy/nitime/issues/173 Sorry, can't help with the rest.
On Fri, Jul 5, 2019 at 7:37 PM Mike V wrote: > > Hello, > > I'm following the Granger Causality tutorial in the nitime's webpage but I have few questions: > > 1- In the tutorial the values f_ub = 0.15 and f_lb = 0.02 are used for the bounds on the frequency band of interest. Are these values recommended for all resting state data-sets or do they depend on the particular acquisition parameters? > > 2- I do not understand the causality plots (fig03 and fig04), they are both blank. What does it mean? I was expecting some variation like in the coherence and correlation matrices... > > 3- the tutorial shows the analysis of one single subject. How can I compute group statistics? Can I simply extract the values stored in g1 and then run a one sample t-test across all subjects' g1 scores? > > 4- how can I get GrangerAnalyzer to display significance? > > 5- I noticed that the tutorial data has been motion corrected only. Is it not necessary to correct for other sources of noise like physiological noise and linear drift before computing GC? > > Many thanks in advance! > > Best regards, > Mike > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging From mikeitexpert at gmail.com Tue Jul 9 10:32:26 2019 From: mikeitexpert at gmail.com (Mike IT Expert) Date: Tue, 9 Jul 2019 15:32:26 +0100 Subject: [Neuroimaging] How to read hdr/img images and convert to png/jpeg Message-ID: Can you please show me a nibabel snippet on how to read hdr/img files and convert to png/jpg? Thank you -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbm.martins at gmail.com Tue Jul 9 06:21:28 2019 From: sbm.martins at gmail.com (Samuel Botter Martins) Date: Tue, 9 Jul 2019 12:21:28 +0200 Subject: [Neuroimaging] How to read hdr/img images and convert to png/jpeg In-Reply-To: References: Message-ID: Hi, Mike... You could use the *scikit-image* package for this. 
Here is a snippet for this. https://pastebin.com/wTJYiSSJ Best On Tue, Jul 9, 2019 at 12:04 PM Mike IT Expert wrote: > > Can you please show me a nibabel snippet on how to read hdr/img files and > convert to png/jpg? > > Thank you > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > -- *Samuel Botter Martins* Professor at the Federal Institute of Education, Science and Technology of São Paulo PhD candidate in Digital Image Processing - University of Campinas, Brazil -------------- next part -------------- An HTML attachment was scrubbed... URL: From mikeltv95 at gmail.com Tue Jul 9 18:11:58 2019 From: mikeltv95 at gmail.com (Mike V) Date: Wed, 10 Jul 2019 00:11:58 +0200 Subject: [Neuroimaging] GC analysis with nitime In-Reply-To: References: Message-ID: Thank you Bennet, I'll wait for the next release then. Best regards, Mike On Sat, 6 Jul 2019 at 14:18, Bennet Fauber wrote: > Mike, > > I reported the blank plots in an Issue at GitHub. I think there may > be more that is needed before they make a new release and refresh the > tutorial page. > > https://github.com/nipy/nitime/pull/176 > https://github.com/nipy/nitime/issues/173 > > Sorry, can't help with the rest. > > On Fri, Jul 5, 2019 at 7:37 PM Mike V wrote: > > > > Hello, > > > > I'm following the Granger Causality tutorial in the nitime's webpage but > I have few questions: > > > > 1- In the tutorial the values f_ub = 0.15 and f_lb = 0.02 are used for > the bounds on the frequency band of interest. Are these values recommended > for all resting state data-sets or do they depend on the particular > acquisition parameters? > > > > 2- I do not understand the causality plots (fig03 and fig04), they are > both blank. What does it mean? I was expecting some variation like in the > coherence and correlation matrices... > > > > 3- the tutorial shows the analysis of one single subject.
How can I > compute group statistics? Can I simply extract the values stored in g1 and > then run a one sample t-test across all subjects' g1 scores? > > > > 4- how can I get GrangerAnalyzer to display significance? > > > > 5- I noticed that the tutorial data has been motion corrected only. Is > it not necessary to correct for other sources of noise like physiological > noise and linear drift before computing GC? > > > > Many thanks in advance! > > > > Best regards, > > Mike > > _______________________________________________ > > Neuroimaging mailing list > > Neuroimaging at python.org > > https://mail.python.org/mailman/listinfo/neuroimaging > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arokem at gmail.com Tue Jul 9 22:25:31 2019 From: arokem at gmail.com (Ariel Rokem) Date: Tue, 9 Jul 2019 19:25:31 -0700 Subject: [Neuroimaging] GC analysis with nitime In-Reply-To: References: Message-ID: Hi Mike, Thanks for your email. Answers inline below: On Tue, Jul 9, 2019 at 3:12 PM Mike V wrote: > Thank you Bennet, I'll wait for the next release then. > Best regards, > Mike > > On Sat, 6 Jul 2019 at 14:18, Bennet Fauber wrote: > >> Mike, >> >> I reported the blank plots in an Issue at GitHub. I think there may >> be more that is needed before they make a new release and refresh the >> tutorial page. >> >> https://github.com/nipy/nitime/pull/176 >> https://github.com/nipy/nitime/issues/173 >> >> Sorry, can't help with the rest. >> >> On Fri, Jul 5, 2019 at 7:37 PM Mike V wrote: >> > >> > Hello, >> > >> > I'm following the Granger Causality tutorial in the nitime's webpage >> but I have few questions: >> > >> > 1- In the tutorial the values f_ub = 0.15 and f_lb = 0.02 are used for >> the bounds on the frequency band of interest. 
Are these values recommended >> for all resting state data-sets or do they depend on the particular >> acquisition parameters? >> > These are pretty reasonable for most fMRI data, with anything above 0.15 Hz probably unrelated to the hemodynamic response and slow oscillations below 0.02 probably due to signal drift (but we also have a paper coming out soon showing that there are some interesting things going on in that infraslow range). But it really depends on your data and on your questions. > > >> > 2- I do not understand the causality plots (fig03 and fig04), they are >> both blank. What does it mean? I was expecting some variation like in the >> coherence and correlation matrices... >> > >> > Yes. As Bennet pointed out, this is a bug in the recent release. I really hope that we can fix that soon, but after a quick look, I am not quite sure what's going on, and have only limited time to devote to this in the next few weeks. > > 3- the tutorial shows the analysis of one single subject. How can I >> compute group statistics? Can I simply extract the values stored in g1 and >> then run a one sample t-test across all subjects' g1 scores? >> > >> > That seems like a reasonable approach. > > 4- how can I get GrangerAnalyzer to display significance? >> > >> > It's not designed to do so. > > 5- I noticed that the tutorial data has been motion corrected only. Is >> it not necessary to correct for other sources of noise like physiological >> noise and linear drift before computing GC? >> > >> > The filtering should help with that (see above). Hope that helps -- sorry about the slowness to respond and fix things, Ariel > > Many thanks in advance!
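The group analysis discussed in point 3 — stacking each subject's g1 matrix and running a one-sample t-test at every connection — might look like the sketch below; the random arrays are hypothetical stand-ins for per-subject GrangerAnalyzer output.

```python
import numpy as np
from scipy import stats

# Hypothetical stand-ins: one Granger causality (g1) matrix per subject.
rng = np.random.default_rng(1)
n_subjects, n_rois = 20, 5
g1_all = rng.normal(0.1, 0.05, size=(n_subjects, n_rois, n_rois))

# One-sample t-test across subjects at every ROI pair.
t_vals, p_vals = stats.ttest_1samp(g1_all, popmean=0.0, axis=0)
print(t_vals.shape, p_vals.shape)
```

Note this performs n_rois**2 tests at once, so some multiple-comparison control (e.g. FDR) would be needed before interpreting p_vals.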
>> > >> > Best regards, >> > Mike >> > _______________________________________________ >> > Neuroimaging mailing list >> > Neuroimaging at python.org >> > https://mail.python.org/mailman/listinfo/neuroimaging >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kaicanoll at gmail.com Wed Jul 10 16:09:16 2019 From: kaicanoll at gmail.com (Kai Canoll) Date: Wed, 10 Jul 2019 16:09:16 -0400 Subject: [Neuroimaging] Niftiimage Affine Rotation Message-ID:

Hi All, We are having some problems applying an affine transform to some nifti data. We are trying to apply a new transform using

affine_transformed_image = nib.Nifti1Image(datamatrix, affinetransform)

When we look at a slice of affine_transformed_image in Python, it doesn't seem to change in any way. However, if we open it in FSLeyes, it appears to be rotated according to affinetransform. However, if we resample affine_transformed_image using

resampled_image = nib.processing.resample_from_to(affine_transformed_image, vox_map)

where vox_map contains an affine identity transform, then the slices in Python are successfully rotated. In other words, our image does not appear to be affine transformed in the first step. Can anyone suggest how to apply the affine transform such that the image actually rotates without having to do the resampling step? thanks! Kai and Jack -------------- next part -------------- An HTML attachment was scrubbed...
URL: From matthew.brett at gmail.com Wed Jul 10 17:34:31 2019 From: matthew.brett at gmail.com (Matthew Brett) Date: Wed, 10 Jul 2019 14:34:31 -0700 Subject: [Neuroimaging] Niftiimage Affine Rotation In-Reply-To: References: Message-ID: Hi, On Wed, Jul 10, 2019 at 2:30 PM Kai Canoll wrote: > > HI All, > > We are having some problems applying an affine transform to some nifti data. > We are trying to apply a new transform using > affine_transformed_image=nib.Nifti1Image(datamatrix,affinetransform) > When we look at a slice of affine_transformed_image in Python, it doesn't seem to change in any way. However, if we open it in FSLeyes, it appears to be rotated according to affinetransform. > However, if we resample affine_transformed_image using > resampled_image=nib.processing.resample_from_to(affine_transformed_image, vox_map) > where vox_map contains an affine identity transform, then the slices in python are successfully rotated. > > In other words, our image does not appear to be affine transformed in the first step. Can anyone suggest how to apply the affine transform such that the image actually rotates without having to do the resampling step? Could you say more about what you mean? If you want the slices in the array to change, I'm afraid there's no way around it but to do the resampling. FSLeyes is just doing the resampling in the background for you, in order to show you the slices. Cheers, Matthew From kaicanoll at gmail.com Thu Jul 11 14:03:58 2019 From: kaicanoll at gmail.com (Kai Canoll) Date: Thu, 11 Jul 2019 14:03:58 -0400 Subject: [Neuroimaging] Niftiimage Affine Rotation Message-ID: I would like to apply an affine transform to my image, then open the transformed image in Python to do additional processing. In most other toolboxes, the affine transformation and the resampling are separate steps, i.e. you can rotate an image without changing the resolution or you can change the resolution without rotating.
Is there any way to just apply an affine transform to an image in nibabel?

Hi, On Wed, Jul 10, 2019 at 2:30 PM Kai Canoll wrote:
>> HI All,
>> We are having some problems applying an affine transform to some nifti data.
>> We are trying to apply a new transform using
>> affine_transformed_image=nib.Nifti1Image(datamatrix,affinetransform)
>> When we look at a slice of affine_transformed_image in Python, it doesn't seem to change in any way. However, if we open it in FSLeyes, it appears to be rotated according to affinetransform.
>> However, if we resample affine_transformed_image using
>> resampled_image=nib.processing.resample_from_to(affine_transformed_image, vox_map)
>> where vox_map contains an affine identity transform, then the slices in python are successfully rotated.
>> In other words, our image does not appear to be affine transformed in the first step. Can anyone suggest how to apply the affine transform such that the image actually rotates without having to do the resampling step?
> Could you say more about what you mean? If you want the slices to change, in the array, I'm afraid there's no way round it, but do resampling. FSLeyes is just doing the resampling in the background for you, in order to show you the slices. Cheers, Matthew

-------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Fri Jul 12 08:52:59 2019 From: matthew.brett at gmail.com (Matthew Brett) Date: Fri, 12 Jul 2019 05:52:59 -0700 Subject: [Neuroimaging] Niftiimage Affine Rotation In-Reply-To: References: Message-ID: Hi, On Fri, Jul 12, 2019 at 5:29 AM Kai Canoll wrote: > > > I would like to apply an affine transform to my image, then open the transformed image in python to do additional processing. In most other toolboxes, the affine transformation and the resampling are separate steps, i.e. you can rotate an image without changing the resolution or you can change the resolution without rotating.
> Is there any way to just apply an affine transform to an image in nibabel?

The image, in Nibabel as in other implementations, is the association of the (array of data, the header). The header contains the affine. You can change the array of data without changing the header, and vice versa. I am not sure what you mean, exactly, by "apply an affine transform", but the obvious meaning is setting or modifying the affine transform of the image (which then goes into the header), without modifying the image data. You can do that by making a new image, with the affine you want:

>>> img = nib.load('my_image.nii')
>>> new_img = nib.Nifti1Image(img.dataobj, my_new_affine, img.header)
>>> nib.save(new_img, 'my_image_modified_affine.nii')

If you want to change the image data by applying such a rotation, then you'll need to resample, in Nibabel as for any other software. Cheers, Matthew

From amitvakula at gmail.com Sun Jul 14 17:05:11 2019 From: amitvakula at gmail.com (Amit Akula) Date: Sun, 14 Jul 2019 14:05:11 -0700 Subject: [Neuroimaging] Niftiimage Affine Rotation In-Reply-To: References: Message-ID: Hello, Just to get some background information, how are you applying the affine transformation to your image? Are you using the following function? from nibabel.affines import apply_affine You can see the source code of this function here . Sincerely, Amit On Fri, Jul 12, 2019 at 5:29 AM Kai Canoll wrote: > > I would like to apply an affine transform to my image, then open the > transformed image in python to do additional processing. In most other > toolboxes, the affine transformation and the resampling are separate steps, > i.e. you can rotate an image without changing the resolution or you can > change the resolution without rotating. > Is there any way to just apply an affine transform to an image in nibabel? > > Hi, > > On Wed, Jul 10, 2019 at 2:30 PM Kai Canoll > wrote: > >> HI All, > >> We are having some problems applying an affine transform to some nifti data.
> >> We are trying to apply a new transform using
> >> affine_transformed_image=nib.Nifti1Image(datamatrix,affinetransform)
> >> When we look at a slice of affine_transformed_image in Python, it doesn't seem to change in any way. However, if we open it in FSLeyes, it appears to be rotated according to affinetransform.
> >> However, if we resample affine_transformed_image using
> >> resampled_image=nib.processing.resample_from_to(affine_transformed_image, vox_map)
> >> where vox_map contains an affine identity transform, then the slices in python are successfully rotated.
> >> In other words, our image does not appear to be affine transformed in the first step. Can anyone suggest how to apply the affine transform such that the image actually rotates without having to do the resampling step?
> Could you say more about what you mean? If you want the slices to > change, in the array, I'm afraid there's no way round it, but do > resampling. FSLeyes is just doing the resampling in the background > for you, in order to show you the slices. > > Cheers, > > Matthew > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > -------------- next part -------------- An HTML attachment was scrubbed... URL: From giardini.fr at gmail.com Mon Jul 15 06:45:25 2019 From: giardini.fr at gmail.com (francesco giardini) Date: Mon, 15 Jul 2019 12:45:25 +0200 Subject: [Neuroimaging] [DIPY] track simple (not DWI) vector field data Message-ID: Hi, I'm a complete novice in the field of Dipy so, first, thank you very much to all of you. I would like to exploit Dipy tractography tools to draw my own data (not DWI), but I have some problems with the representation of fiber directions. I work with 3D images of heart muscle tissue, and I developed a simple code that extracts the fiber orientations by means of a Structure Tensor Analysis.
So, I have a 3d matrix 'R' (numpy.ndarray); every R(z, y, x) element contains:
- three eigenvalues (3x1 matrix)
- three R3 eigenvectors (3x3 matrix)
- other information (for example, the Fractional Anisotropy, useful for the stop criterion)
In the attached, there is a one-frame example (only one XY plane analyzed) of the orientation maps, plotted with matplotlib and mayavi. I'm trying to understand how to format my data to generate the streamlines. I have reproduced some examples. Here ( http://nipy.org/dipy/examples_built/tracking_quick_start.html#example-tracking-quick-start) I find that csd_peaks contains the vector information (the source of the tractography process?) while here ( http://nipy.org/dipy/examples_built/reconst_dti.html#example-reconst-dti) it seems that tenfit contains the same vector field. I'm also reading the documentation about dipy.tracking.local.LocalTracking and dipy.tracking.local.DirectionGetter. How can I merge my data with the Dipy structures, to generate and display my streamlines? I am looking forward to your reply. Thank you very much. Francesco Giardini -- Dr. Francesco Giardini (giardini at lens.unifi.it) LENS - European Laboratory for Non-linear Spectroscopy Via Nello Carrara 1, 50019 Sesto Fiorentino (FI), Italy University of Florence - Biomedical Engineering Tel. 055 457 2477 (off.) http://www.lens.unifi.it -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: quiver.png Type: image/png Size: 622022 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed...
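While a proper dipy answer would go through LocalTracking and a DirectionGetter, the core of streamline generation over a principal-eigenvector field is plain Euler integration, which can be sketched without dipy at all. Everything below is hypothetical (array shapes, the FA threshold, the toy field); note the sign handling, since eigenvectors carry no preferred direction.

```python
import numpy as np

def track(vecs, fa, seed, step=0.5, fa_stop=0.2, max_steps=500):
    """Euler-integrate one streamline through a (Z, Y, X, 3) unit-vector field."""
    pts = [np.asarray(seed, dtype=float)]
    prev = None
    for _ in range(max_steps):
        idx = tuple(int(round(c)) for c in pts[-1])
        if any(i < 0 or i >= s for i, s in zip(idx, vecs.shape[:3])):
            break                      # left the volume
        if fa[idx] < fa_stop:
            break                      # stop criterion, e.g. low anisotropy
        d = vecs[idx]
        if prev is not None and np.dot(d, prev) < 0:
            d = -d                     # eigenvectors are sign-ambiguous
        prev = d
        pts.append(pts[-1] + step * d)
    return np.array(pts)

# Toy field: every voxel points along the third axis, uniform FA.
vecs = np.zeros((10, 10, 10, 3))
vecs[..., 2] = 1.0
fa = np.ones((10, 10, 10))
sl = track(vecs, fa, seed=(5.0, 5.0, 1.0))
print(sl.shape)
```

Seeding this at many voxels gives a list of N x 3 point arrays, which is exactly the shape dipy's Streamlines container (and hence its visualization tools) accepts.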
Name: 3d_vector.png Type: image/png Size: 732290 bytes Desc: not available URL: From mikeltv95 at gmail.com Wed Jul 17 17:50:45 2019 From: mikeltv95 at gmail.com (Mike V) Date: Wed, 17 Jul 2019 23:50:45 +0200 Subject: [Neuroimaging] GC analysis with nitime In-Reply-To: References: Message-ID: Hi Ariel, Thank you very much for your reply! I'll eagerly wait for the next release :) Best regards, Mike On Wed, 10 Jul 2019 at 04:26, Ariel Rokem wrote: > Hi Mike, > > Thanks for your email. Answers inline below: > > On Tue, Jul 9, 2019 at 3:12 PM Mike V wrote: > >> Thank you Bennet, I'll wait for the next release then. >> Best regards, >> Mike >> >> On Sat, 6 Jul 2019 at 14:18, Bennet Fauber wrote: >> >>> Mike, >>> >>> I reported the blank plots in an Issue at GitHub. I think there may >>> be more that is needed before they make a new release and refresh the >>> tutorial page. >>> >>> https://github.com/nipy/nitime/pull/176 >>> https://github.com/nipy/nitime/issues/173 >>> >>> Sorry, can't help with the rest. >>> >>> On Fri, Jul 5, 2019 at 7:37 PM Mike V wrote: >>> > >>> > Hello, >>> > >>> > I'm following the Granger Causality tutorial in the nitime's webpage >>> but I have few questions: >>> > >>> > 1- In the tutorial the values f_ub = 0.15 and f_lb = 0.02 are used for >>> the bounds on the frequency band of interest. Are these values recommended >>> for all resting state data-sets or do they depend on the particular >>> acquisition parameters? >>> >> > These are pretty reasonable for most fMRI data, with anything above 0.15 > Hz probably unrelated to the hemodynamic response and slow oscillations > below 0.02 probably due to signal drift (but we also have a paper coming > out soon that there are some interesting things going out in that infraslow > range). But it really depends on your data and on your questions. > > >> > >>> > 2- I do not understand the causality plots (fig03 and fig04), they are >>> both blank. What does it mean? 
I was expecting some variation like in the >>> coherence and correlation matrices... >>> > >>> >> > Yes. As Bennet pointed out, this is a bug in the recent release. I really > hope that we can fix that soon, but after a quick look, I am not quite sure > what's going on, and have only limited time to devote to this in the next > few weeks. > > >> > 3- the tutorial shows the analysis of one single subject. How can I >>> compute group statistics? Can I simply extract the values stored in g1 and >>> then run a one sample t-test across all subjects' g1 scores? >>> > >>> >> > That seems like a reasonable approach. > > >> > 4- how can I get GrangerAnalyzer to display significance? >>> > >>> >> > It's not designed to do so. > > >> > 5- I noticed that the tutorial data has been motion corrected only. Is >>> it not necessary to correct for other sources of noise like physiological >>> noise and linear drift before computing GC? >>> > >>> >> > The filtering should help with that (see above). > > Hope that helps -- sorry about the slowness to respond and fix things, > > Ariel > > >> > Many thanks in advance! >>> > >>> > Best regards, >>> > Mike >>> > _______________________________________________ >>> > Neuroimaging mailing list >>> > Neuroimaging at python.org >>> > https://mail.python.org/mailman/listinfo/neuroimaging >>> _______________________________________________ >>> Neuroimaging mailing list >>> Neuroimaging at python.org >>> https://mail.python.org/mailman/listinfo/neuroimaging >>> >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From giardini at lens.unifi.it Thu Jul 18 10:36:32 2019 From: giardini at lens.unifi.it (Francesco Giardini) Date: Thu, 18 Jul 2019 16:36:32 +0200 Subject: [Neuroimaging] [DIPY] track simple (not DWI) vector field np.ndarray data Message-ID:

Hi, I'm a complete novice in the field of Dipy so, first, thank you very much to all of you. I would like to exploit Dipy tractography tools to draw my own data (not DWI), but I have some problems with the representation of fiber directions. I work with 3D images of heart muscle tissue, and I developed a simple code that extracts the fiber orientations by means of a Structure Tensor Analysis. So, I have a 3d matrix 'R' (numpy.ndarray); every R(z, y, x) element contains:
- three eigenvalues (3x1 matrix)
- three R3 eigenvectors (3x3 matrix)
- other information (for example, the Fractional Anisotropy, useful (I think) for the stop criterion)
In the attached, there is an example (only one XY plane displayed) of the orientation maps, plotted with mayavi. I'm trying to understand how to format my data to generate the streamlines. I have reproduced some examples. Here ( http://nipy.org/dipy/examples_built/tracking_quick_start.html#example-tracking-quick-start) I find that csd_peaks contains the vector information (the source of the tractography process?) while here ( http://nipy.org/dipy/examples_built/reconst_dti.html#example-reconst-dti) it seems that tenfit contains the same vector field. I'm also reading the documentation about dipy.tracking.local.LocalTracking and dipy.tracking.local.DirectionGetter. The question is: how can I create the streamlines from my data (numpy.ndarray)? I am looking forward to your reply. Thank you very much. Francesco -- Dr. Francesco Giardini (giardini at lens.unifi.it) LENS - European Laboratory for Non-linear Spectroscopy Via Nello Carrara 1, 50019 Sesto Fiorentino (FI), Italy University of Florence - Biomedical Engineering Tel. 055 457 2477 (off.)
http://www.lens.unifi.it
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 3d_vectors_small.jpg
Type: image/jpeg
Size: 40250 bytes
Desc: not available
URL: 

From emma.robinson01 at gmail.com Tue Jul 23 07:30:08 2019
From: emma.robinson01 at gmail.com (Emma Robinson)
Date: Tue, 23 Jul 2019 12:30:08 +0100
Subject: [Neuroimaging] Resetting gifti labeltable
Message-ID: 

Hi

I am trying to create a new gifti label file using an existing one as a template. How do I change the label table? I've created a new label dictionary in the variable 'labeldict', but I cannot create a GiftiLabelTable instance from it; see:

nibabel.gifti.GiftiLabelTable(labeldict)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
    nibabel.gifti.GiftiLabelTable(labeldict)
TypeError: __init__() takes 1 positional argument but 2 were given

Could someone tell me how it is supposed to be done?

Thanks

Emma
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From michiel.cottaar at ndcn.ox.ac.uk Tue Jul 23 08:06:42 2019
From: michiel.cottaar at ndcn.ox.ac.uk (Michiel Cottaar)
Date: Tue, 23 Jul 2019 12:06:42 +0000
Subject: [Neuroimaging] Resetting gifti labeltable
In-Reply-To: 
References: 
Message-ID: <148AFE38-C26E-433D-AB93-30F04147F063@ndcn.ox.ac.uk>

Hi Emma,

It's a bit awkward, but the way I do this is to create an empty GiftiLabelTable and then append individual GiftiLabels to it. Note that the constructor of GiftiLabel only expects the integer index and the RGBA colour, but you should also set a label text (when serialising, the GiftiLabelTable expects every label to have one). 
So my code looks like:

    labeltable = gifti.GiftiLabelTable()
    for value, (text, rgba) in color_map.items():
        labeltable.labels.append(gifti.GiftiLabel(value, *rgba))
        labeltable.labels[-1].label = str(text)

where color_map is a dictionary mapping from the indices to a tuple with the label and the RGBA value.

Cheers,

Michiel

On 23 Jul 2019, at 12:30, Emma Robinson wrote:

Hi

I am trying to create a new gifti label file using an existing one as a template. How do I change the label table? I've created a new label dictionary in the variable 'labeldict', but I cannot create a GiftiLabelTable instance from it; see:

nibabel.gifti.GiftiLabelTable(labeldict)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
    nibabel.gifti.GiftiLabelTable(labeldict)
TypeError: __init__() takes 1 positional argument but 2 were given

Could someone tell me how it is supposed to be done?

Thanks

Emma
_______________________________________________
Neuroimaging mailing list
Neuroimaging at python.org
https://mail.python.org/mailman/listinfo/neuroimaging
-------------- next part --------------
An HTML attachment was scrubbed... 
URL: 

From AiWern.Chung at childrens.harvard.edu Thu Jul 25 13:44:14 2019
From: AiWern.Chung at childrens.harvard.edu (Chung, Ai Wern)
Date: Thu, 25 Jul 2019 17:44:14 +0000
Subject: [Neuroimaging] FINAL CALL for Papers: MICCAI 2019 Connectomics in NeuroImaging Workshop and Challenge
In-Reply-To: <1562863498247.64412@childrens.harvard.edu>
References: <1559656196223.77412@childrens.harvard.edu>
 <1559745851975.55289@childrens.harvard.edu>
 <1560171940966.80109@childrens.harvard.edu>
 <1560172967828.18782@childrens.harvard.edu>
 <1560184318236.53889@childrens.harvard.edu>
 <1560187209491.12194@childrens.harvard.edu>
 <1560295231182.25157@childrens.harvard.edu>
 <66af52c354ee4ab0bf152d05b3d717c3@ESGEBEX7.win.ad.jhu.edu>
 <1561386182331.96213@childrens.harvard.edu>
 <1561993663310.85635@childrens.harvard.edu>
 <1562010983705.40993@childrens.harvard.edu>
 <1562076295433.82718@childrens.harvard.edu>
 <1562076462247.99267@childrens.harvard.edu>
 <1562166902740.68167@childrens.harvard.edu>
 <1562166982137.30592@childrens.harvard.edu>
 <1562168435025.7068@childrens.harvard.edu>
 <1562858823013.29682@childrens.harvard.edu>
 <1562859152482.12803@childrens.harvard.edu>
 <1562863498247.64412@childrens.harvard.edu>
Message-ID: <1564076654342.50102@childrens.harvard.edu>

**Apologies for cross-posting**

Submission deadline extended to Weds 31st July 2019

This is a final call for full-length paper submissions to our 3rd International Workshop on Connectomics in NeuroImaging (CNI 2019) and the Transfer-Learning CNI Challenge 2019, held in parallel with the 22nd International Conference on Medical Image Computing and Computer-assisted Intervention (MICCAI 2019) in Shenzhen, China. CNI 2019 will be a full-day workshop taking place October 13th, 2019. 
*** CNI Workshop Call for Papers ***

Our topics of interest cover (but are not limited to):
(1) New developments in connectome construction from different imaging modalities;
(2) Development of data-driven techniques to identify biomarkers in connectome data;
(3) Machine learning algorithms and connectome data analysis;
(4) Brain network modeling and formal conceptual models of connectome data;
(5) Evaluation and validation of connectome models.

If you have research that fits into the scope of our workshop, detailed on our website (http://www.brainconnectivity.net/workshop), we encourage you to submit a paper!

*** CNI Call for Challengers ***

To address the issues of generalizability and clinical relevance for functional connectomes, you can leverage a unique resting-state fMRI (rsfMRI) dataset of attention deficit hyperactivity disorder (ADHD) patients and neurotypical controls (NC) to design a classification framework that predicts subject diagnosis (ADHD vs. NC) from brain connectivity data. In a surprise twist, we will also evaluate the classification performance on a related clinical population with an ADHD comorbidity. This challenge will allow us to assess (1) whether the method is extracting functional connectivity patterns related to ADHD symptomatology, and (2) how much of this information "transfers" between clinical populations.

Training and validation data are now released! http://www.brainconnectivity.net/challenge

*** Why submit to the CNI Workshop and Challenge? ***
- Two great keynote speakers: Prof. Yong He (Beijing Normal University, China) and Dr. Fan Zhang (Harvard Medical School, USA);
- Oral presentations and poster sessions to provide you with ample opportunity for exchanges and discussions;
- Accepted papers will be published in an LNCS proceedings volume;
- Best Paper and Poster Awards will be presented, along with sponsored prizes for Challenge winners. 
*** Important dates for CNI workshop ***
- Submission deadline: July 31st, 2019, 23:59 EST
- Notification of acceptance: August 13th, 2019
- Camera-ready deadline: August 18th, 2019, 23:59 EST
- Submission website: https://cmt3.research.microsoft.com/CNI2019

*** Important dates for CNI Challenge ***
- Submission deadline: August 15th, 2019, 23:59 EST
- Submission website: https://cmt3.research.microsoft.com/CNIChallenge2019

For more information, visit http://www.brainconnectivity.net. If you have any questions, please contact Ai Chung (aiwern.chung at childrens.harvard.edu) or Markus Schirmer (mschirmer1 at mgh.harvard.edu).

We look forward to your participation!

CNI 2019 Chairs
-------------------------
Ai Wern Chung, Boston Children's Hospital, Harvard Medical School
Archana Venkataraman, Johns Hopkins University
Islem Rekik, Istanbul Technical University
Markus Schirmer, Harvard Medical School
Minjeong Kim, University of North Carolina at Greensboro

CNI website: http://www.brainconnectivity.net
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From andrew at crabb.org Thu Jul 25 18:09:52 2019
From: andrew at crabb.org (Andrew Crabb)
Date: Thu, 25 Jul 2019 18:09:52 -0400
Subject: [Neuroimaging] ECAT images in NiBabel
Message-ID: 

Hi - I'm new to NiBabel. I'd like to open some ECAT 7 images, but I see that while the class `EcatImage` exists, it is not included in `all_image_classes` in `imageclasses.py`. If I add it and try `load()`, I get errors that suggest `ecat.py` is written in Python 2 (`.next()` is called on an iterator). Grepping around for 'ecat' returns lots of 'deprecated' (coincidence, I'm sure!) and not much else. Is ECAT still a supported image format? It's as old as the hills, but we have a PET scanner that produces it.

Thanks,
Andrew
-------------- next part --------------
An HTML attachment was scrubbed... 
URL: From emma.robinson01 at gmail.com Fri Jul 26 03:56:36 2019 From: emma.robinson01 at gmail.com (Emma Robinson) Date: Fri, 26 Jul 2019 08:56:36 +0100 Subject: [Neuroimaging] Resetting gifti labeltable In-Reply-To: <148AFE38-C26E-433D-AB93-30F04147F063@ndcn.ox.ac.uk> References: <148AFE38-C26E-433D-AB93-30F04147F063@ndcn.ox.ac.uk> Message-ID: Thanks! Could you possibly show me how you create your colour_map dictionary? What I've tried so far has not been working. Emma On Tue, 23 Jul 2019 at 13:50, Michiel Cottaar wrote: > Hi Emma, > > It's a bit awkward, but the way I do this is to create an empty > GiftiLabelTable and then append individual GiftiLabels to it. Note that the > constructor of the GiftiLabel only expects the integer index and the RGBA > colour, but you should also set a label (when serialising the > GiftiLabelTable expects all labels to have such a label). > > So my code looks like: > > labeltable = gifti.GiftiLabelTable() > for value, (text, rgba) in color_map.items(): > labeltable.labels.append(gifti.GiftiLabel(value, *rgba)) > labeltable.labels[-1].label = str(text) > > where color_map is a dictonary mapping from the indices to a tuple with > the label and the RGBA value. > > Cheers, > > Michiel > > On 23 Jul 2019, at 12:30, Emma Robinson wrote: > > Hi > > I am trying to create a new gifti label file using an existing one as > template. How do I change the label table? I've create a new label > dictionary as variable 'labeldict' but I cannot create a GiftiLabelTable > instance from this see: > > nibabel.gifti.GiftiLabelTable(labeldict) > Traceback (most recent call last): > > File "", line 1, in > nibabel.gifti.GiftiLabelTable(labeldict) > > TypeError: __init__() takes 1 positional argument but 2 were given > > Could someone tell me how it is supposed to be done? 
> Thanks
>
> Emma
> _______________________________________________
> Neuroimaging mailing list
> Neuroimaging at python.org
> https://mail.python.org/mailman/listinfo/neuroimaging
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From michiel.cottaar at ndcn.ox.ac.uk Fri Jul 26 04:41:38 2019
From: michiel.cottaar at ndcn.ox.ac.uk (Michiel Cottaar)
Date: Fri, 26 Jul 2019 08:41:38 +0000
Subject: [Neuroimaging] Resetting gifti labeltable
In-Reply-To: 
References: <148AFE38-C26E-433D-AB93-30F04147F063@ndcn.ox.ac.uk>
Message-ID: 

Something like the following should work for that:

    color_map = {0: ('red',   (1., 0., 0., 1.)),
                 1: ('green', (0., 1., 0., 1.)),
                 2: ('blue',  (0., 0., 1., 1.))}

This will make all the zeros in the GIFTI file plot as red, all the ones as green, and all the twos as blue.

So within the loop, the variable `value` should be the integer value that you use to mark the ROI in the array. The variable `text` should be the ROI label and `rgba` should be a sequence of four numbers (red, green, blue, and alpha). You can set the alpha to zero for any labels you don't want to plot.

Good luck,

Michiel

On 26 Jul 2019, at 08:56, Emma Robinson wrote:

Thanks!

Could you possibly show me how you create your colour_map dictionary? What I've tried so far has not been working.

Emma

On Tue, 23 Jul 2019 at 13:50, Michiel Cottaar wrote:

Hi Emma,

It's a bit awkward, but the way I do this is to create an empty GiftiLabelTable and then append individual GiftiLabels to it. Note that the constructor of GiftiLabel only expects the integer index and the RGBA colour, but you should also set a label text (when serialising, the GiftiLabelTable expects every label to have one). 
So my code looks like: labeltable = gifti.GiftiLabelTable() for value, (text, rgba) in color_map.items(): labeltable.labels.append(gifti.GiftiLabel(value, *rgba)) labeltable.labels[-1].label = str(text) where color_map is a dictonary mapping from the indices to a tuple with the label and the RGBA value. Cheers, Michiel On 23 Jul 2019, at 12:30, Emma Robinson > wrote: Hi I am trying to create a new gifti label file using an existing one as template. How do I change the label table? I've create a new label dictionary as variable 'labeldict' but I cannot create a GiftiLabelTable instance from this see: nibabel.gifti.GiftiLabelTable(labeldict) Traceback (most recent call last): File "", line 1, in nibabel.gifti.GiftiLabelTable(labeldict) TypeError: __init__() takes 1 positional argument but 2 were given Could someone tell me how it is supposed to be done? Thanks Emma _______________________________________________ Neuroimaging mailing list Neuroimaging at python.org https://mail.python.org/mailman/listinfo/neuroimaging _______________________________________________ Neuroimaging mailing list Neuroimaging at python.org https://mail.python.org/mailman/listinfo/neuroimaging _______________________________________________ Neuroimaging mailing list Neuroimaging at python.org https://mail.python.org/mailman/listinfo/neuroimaging -------------- next part -------------- An HTML attachment was scrubbed... URL: From emma.robinson01 at gmail.com Fri Jul 26 05:48:28 2019 From: emma.robinson01 at gmail.com (Emma Robinson) Date: Fri, 26 Jul 2019 10:48:28 +0100 Subject: [Neuroimaging] Resetting gifti labeltable In-Reply-To: References: <148AFE38-C26E-433D-AB93-30F04147F063@ndcn.ox.ac.uk> Message-ID: Got it thanks. As it turns out, that was almost what I had (without the fourth number - alpha). But I had another issue, which is that I hadn't cast the data as int32. Cheers. 
Emma On Fri, 26 Jul 2019 at 09:42, Michiel Cottaar wrote: > Something like the following should work for that: > color_map = {0: ('red', (1., 0., 0., 1.)), 1: ('green', (0., 1., 0., 1.)), > 2: ('blue', (0., 0., 1., 1.))} > This will make all the zeros in the GIFTI file to be plotted as red, all > the ones as green, and all the two's as blue. > > So within the loop, the variable `value` should be the integer value that > you use to mark the ROI in the array. The variable `text` should be the ROI > label and `rgba` should be a sequence of four numbers (red, green, blue, > and alpha). You can set the alpha to zero for any labels you don't want to > plot. > > Good luck, > > Michiel > > On 26 Jul 2019, at 08:56, Emma Robinson wrote: > > Thanks! > > Could you possibly show me how you create your colour_map dictionary? What > I've tried so far has not been working. > > Emma > > > > On Tue, 23 Jul 2019 at 13:50, Michiel Cottaar < > michiel.cottaar at ndcn.ox.ac.uk> wrote: > >> Hi Emma, >> >> It's a bit awkward, but the way I do this is to create an empty >> GiftiLabelTable and then append individual GiftiLabels to it. Note that the >> constructor of the GiftiLabel only expects the integer index and the RGBA >> colour, but you should also set a label (when serialising the >> GiftiLabelTable expects all labels to have such a label). >> >> So my code looks like: >> >> labeltable = gifti.GiftiLabelTable() >> for value, (text, rgba) in color_map.items(): >> labeltable.labels.append(gifti.GiftiLabel(value, *rgba)) >> labeltable.labels[-1].label = str(text) >> >> where color_map is a dictonary mapping from the indices to a tuple with >> the label and the RGBA value. >> >> Cheers, >> >> Michiel >> >> On 23 Jul 2019, at 12:30, Emma Robinson >> wrote: >> >> Hi >> >> I am trying to create a new gifti label file using an existing one as >> template. How do I change the label table? 
I've create a new label >> dictionary as variable 'labeldict' but I cannot create a GiftiLabelTable >> instance from this see: >> >> nibabel.gifti.GiftiLabelTable(labeldict) >> Traceback (most recent call last): >> >> File "", line 1, in >> nibabel.gifti.GiftiLabelTable(labeldict) >> >> TypeError: __init__() takes 1 positional argument but 2 were given >> >> Could someone tell me how it is supposed to be done? >> >> Thanks >> >> Emma >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> >> >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > -------------- next part -------------- An HTML attachment was scrubbed... URL: From markiewicz at stanford.edu Fri Jul 26 09:56:15 2019 From: markiewicz at stanford.edu (Christopher Markiewicz) Date: Fri, 26 Jul 2019 13:56:15 +0000 Subject: [Neuroimaging] ECAT images in NiBabel In-Reply-To: References: Message-ID: Hi Andrew, ECAT is still supported, but it doesn't get exercised a lot. Could you go ahead and open an issue with the errors you see in https://github.com/nipy/nibabel/issues? We can surely fix any Python 3 related errors without much trouble. Best, Chris ________________________________ From: Neuroimaging on behalf of Andrew Crabb Sent: Thursday, July 25, 2019 6:09 PM To: neuroimaging at python.org Subject: [Neuroimaging] ECAT images in NiBabel Hi - I'm new to NiBabel. 
I'd like to open some ECAT 7 images, but I see that while the class EcatImage exists, it is not included in all_image_classes in imageclasses.py. If I add it and try load(), I get errors that suggest ecat.py is written in Python 2 (next() is called on an iter). Grepping around for 'ecat' returns lots of 'deprecated' (coincidence, I'm sure!) and not much else. Is ECAT still a supported image format? It's as old as the hills, but we have a PET scanner that produces it. Thanks, Andrew -------------- next part -------------- An HTML attachment was scrubbed... URL: