From pauldmccarthy at gmail.com  Thu Sep 1 05:14:55 2016
From: pauldmccarthy at gmail.com (paul mccarthy)
Date: Thu, 1 Sep 2016 10:14:55 +0100
Subject: [Neuroimaging] 2D NIFTI images in nibabel
Message-ID:

Howdy all,

Nibabel truncates the third dimension of a NIFTI image with dim3=1.

(boo) ws189:nifti2D paulmc$ fslinfo MNI152_T1_2mm_sliceXY.nii.gz
data_type      INT16
dim1           91
dim2           109
dim3           1
dim4           1
datatype       4
pixdim1        2.000000
pixdim2        2.000000
pixdim3        2.000000
pixdim4        1.000000
cal_max        8000.0000
cal_min        3000.0000
file_type      NIFTI-1+

(boo) ws189:nifti2D paulmc$ ipython
Python 2.7.11 (v2.7.11:6d1b6a68f775, Dec 5 2015, 12:54:16)
Type "copyright", "credits" or "license" for more information.

IPython 5.1.0 -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.

In [1]: import nibabel as nib

In [2]: i = nib.load('MNI152_T1_2mm_sliceXY.nii.gz')

In [3]: i.shape
Out[3]: (91, 109)

In [4]: i.header.get_zooms()
Out[4]: (2.0, 2.0)

In [5]:

Does anybody else think that this is a problem?

Note that the dimensions for an image of e.g. size (91, 1, 91) will be preserved.

Cheers,

Paul
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Reid.Robert at mayo.edu  Thu Sep 1 10:17:27 2016
From: Reid.Robert at mayo.edu (Reid, Robert I. (Rob))
Date: Thu, 01 Sep 2016 14:17:27 +0000
Subject: [Neuroimaging] 2D NIFTI images in nibabel
In-Reply-To:
References:
Message-ID: <083a37$48av92@ironport10.mayo.edu>

Well, no. I usually work with a mix of 3D and 4D NIFTIs, and like that the 3D ones appear as 3D arrays instead of 4D (or 5D, 6D, ...) with the extra axis having length 1. Also, if I select a volume from a 4D array, i.e. vol = data[..., t], it makes sense that I get a 3D array, just like selecting an element from a list of scalars gives you a scalar.

I understand that there are two ways of looking at it, but I think overall things are better as is.

Rob

--
Robert I. Reid, Ph.D. | Sr. Analyst/Programmer, Information Technology
Aging and Dementia Imaging Research | Opus Center for Advanced Imaging Research
Mayo Clinic | 200 First Street SW | Rochester, MN 55905 | mayoclinic.org

From: Neuroimaging [mailto:neuroimaging-bounces+reid.robert=mayo.edu at python.org] On Behalf Of paul mccarthy
Sent: Thursday, September 01, 2016 4:15 AM
To: Neuroimaging analysis in Python
Subject: [Neuroimaging] 2D NIFTI images in nibabel

Howdy all,

Nibabel truncates the third dimension of a NIFTI image with dim3=1.

(boo) ws189:nifti2D paulmc$ fslinfo MNI152_T1_2mm_sliceXY.nii.gz
data_type      INT16
dim1           91
dim2           109
dim3           1
dim4           1
datatype       4
pixdim1        2.000000
pixdim2        2.000000
pixdim3        2.000000
pixdim4        1.000000
cal_max        8000.0000
cal_min        3000.0000
file_type      NIFTI-1+

(boo) ws189:nifti2D paulmc$ ipython
Python 2.7.11 (v2.7.11:6d1b6a68f775, Dec 5 2015, 12:54:16)
Type "copyright", "credits" or "license" for more information.

IPython 5.1.0 -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.

In [1]: import nibabel as nib

In [2]: i = nib.load('MNI152_T1_2mm_sliceXY.nii.gz')

In [3]: i.shape
Out[3]: (91, 109)

In [4]: i.header.get_zooms()
Out[4]: (2.0, 2.0)

In [5]:

Does anybody else think that this is a problem?

Note that the dimensions for an image of e.g. size (91, 1, 91) will be preserved.
Cheers, Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From effigies at bu.edu Thu Sep 1 11:56:57 2016 From: effigies at bu.edu (Christopher J Markiewicz) Date: Thu, 1 Sep 2016 11:56:57 -0400 Subject: [Neuroimaging] 2D NIFTI images in nibabel In-Reply-To: References: Message-ID: Paul, Can you give us the output of: >>> print(nib.load(nifti_file).header._structarr['dim']) If the first number is 2, nibabel will (correctly) interpret to mean it is to ignore dimensions 3+. If the first number is 3, then there's a bug. That said, perhaps it is reasonable that a volumetric image should have a minimum of three dimensions? Chris On 09/01/2016 05:14 AM, paul mccarthy wrote: > Howdy all, > > > Nibabel truncates the third dimension of a NIFTI image with dim3=1. > > > (boo) ws189:nifti2D paulmc$ fslinfo MNI152_T1_2mm_sliceXY.nii.gz > data_type INT16 > dim1 91 > dim2 109 > dim3 1 > dim4 1 > datatype 4 > pixdim1 2.000000 > pixdim2 2.000000 > pixdim3 2.000000 > pixdim4 1.000000 > cal_max 8000.0000 > cal_min 3000.0000 > file_type NIFTI-1+ > > > (boo) ws189:nifti2D paulmc$ ipython > Python 2.7.11 (v2.7.11:6d1b6a68f775, Dec 5 2015, 12:54:16) > Type "copyright", "credits" or "license" for more information. > > IPython 5.1.0 -- An enhanced Interactive Python. > ? -> Introduction and overview of IPython's features. > %quickref -> Quick reference. > help -> Python's own help system. > object? -> Details about 'object', use 'object??' for extra details. > > In [1]: import nibabel as nib > > In [2]: i = nib.load('MNI152_T1_2mm_sliceXY.nii.gz') > > In [3]: i.shape > Out[3]: (91, 109) > > In [4]: i.header.get_zooms() > Out[4]: (2.0, 2.0) > > In [5]: > > > Does anybody else think that this is a problem? > > Note that the dimensions for an image of e.g. size (91, 1, 91) will be > preserved. > > Cheers, > > Paul > > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > -- Christopher J Markiewicz Ph.D. Candidate, Quantitative Neuroscience Laboratory Boston University From pauldmccarthy at gmail.com Thu Sep 1 12:20:55 2016 From: pauldmccarthy at gmail.com (paul mccarthy) Date: Thu, 1 Sep 2016 17:20:55 +0100 Subject: [Neuroimaging] 2D NIFTI images in nibabel In-Reply-To: References: Message-ID: Hi Chris, Aah there's my problem - dim0 was set to 2. It's probably `fslroi` that is the culprit here. @Robert - I agree with you - this was a 2D slice of a 3D volume, which was being presented as 2D (due to the dim0 issue). I feel that it should be presented as 3D volumetric (which it turns out that nibabel correctly does, if the image header is valid). Sorry for the noise and thanks, Paul On 1 September 2016 at 16:56, Christopher J Markiewicz wrote: > Paul, > > Can you give us the output of: > > >>> print(nib.load(nifti_file).header._structarr['dim']) > > If the first number is 2, nibabel will (correctly) interpret to mean it > is to ignore dimensions 3+. If the first number is 3, then there's a bug. > > That said, perhaps it is reasonable that a volumetric image should have > a minimum of three dimensions? > > Chris > > On 09/01/2016 05:14 AM, paul mccarthy wrote: > > Howdy all, > > > > > > Nibabel truncates the third dimension of a NIFTI image with dim3=1. 
> > (boo) ws189:nifti2D paulmc$ fslinfo MNI152_T1_2mm_sliceXY.nii.gz
> > data_type      INT16
> > dim1           91
> > dim2           109
> > dim3           1
> > dim4           1
> > datatype       4
> > pixdim1        2.000000
> > pixdim2        2.000000
> > pixdim3        2.000000
> > pixdim4        1.000000
> > cal_max        8000.0000
> > cal_min        3000.0000
> > file_type      NIFTI-1+
> >
> >
> > (boo) ws189:nifti2D paulmc$ ipython
> > Python 2.7.11 (v2.7.11:6d1b6a68f775, Dec 5 2015, 12:54:16)
> > Type "copyright", "credits" or "license" for more information.
> >
> > IPython 5.1.0 -- An enhanced Interactive Python.
> > ?         -> Introduction and overview of IPython's features.
> > %quickref -> Quick reference.
> > help      -> Python's own help system.
> > object?   -> Details about 'object', use 'object??' for extra details.
> >
> > In [1]: import nibabel as nib
> >
> > In [2]: i = nib.load('MNI152_T1_2mm_sliceXY.nii.gz')
> >
> > In [3]: i.shape
> > Out[3]: (91, 109)
> >
> > In [4]: i.header.get_zooms()
> > Out[4]: (2.0, 2.0)
> >
> > In [5]:
> >
> > Does anybody else think that this is a problem?
> >
> > Note that the dimensions for an image of e.g. size (91, 1, 91) will be
> > preserved.
> >
> > Cheers,
> >
> > Paul
> >
> > _______________________________________________
> > Neuroimaging mailing list
> > Neuroimaging at python.org
> > https://mail.python.org/mailman/listinfo/neuroimaging
> >
>
> --
> Christopher J Markiewicz
> Ph.D. Candidate, Quantitative Neuroscience Laboratory
> Boston University
> _______________________________________________
> Neuroimaging mailing list
> Neuroimaging at python.org
> https://mail.python.org/mailman/listinfo/neuroimaging
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From athanastasiou at gmail.com  Sun Sep 4 15:12:53 2016
From: athanastasiou at gmail.com (Athanasios Anastasiou)
Date: Sun, 4 Sep 2016 20:12:53 +0100
Subject: [Neuroimaging] DICOM Orientation and World<-->Image coordinate transformation
Message-ID:

Hello everyone

I am trying to convert between world and image coordinates and I am having some difficulty, particularly with inverting the transform.

I need this specifically, as I would like to extract the pixels that have been prescribed (manually) by a ROI. I can access the grayscale values, which are in some pixel space, and I can access the ROI data, which however is expressed in 'mm' and thus in world coordinates.

As per http://nipy.org/nibabel/dicom/dicom_orientation.html I have all the data to construct the matrix that converts pixel to world coordinates, but I am interested in the opposite direction (i.e. from world (ROI) to image (pixels)).

My immediate reaction was to invert the transformation matrix. However, I am getting an error that the matrix is "singular". I suspect that this might be because direction cosines are used throughout the matrix, which means that it might not appear to be a "proper" rotation matrix (the translation and scaling are more easily dealt with) and therefore it is not as straightforward to invert. I am thinking about recovering the angles described by the direction cosines and building a proper transformation matrix (with cosines and sines). But to do this, I would like to double-check the following:

First of all, does this make sense, or should the matrix have been invertible without problems? (And therefore, I may be going wrong elsewhere.)
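For concreteness, here is a minimal sketch of the round trip I am attempting (the affine values are placeholders for illustration only, and I am assuming a complete, non-singular 4x4 pixel-to-world matrix M):

import numpy

# Hypothetical pixel-to-world affine: in-plane columns scaled by the pixel
# spacing, fourth column holding the patient position (placeholder values).
M = numpy.array([[0.4688, 0.0,    0.0,  -127.773],
                 [0.0,    0.4688, 0.0,  -105.599],
                 [0.0,    0.0,    4.5,   -94.5758],
                 [0.0,    0.0,    0.0,     1.0]])

world_point = numpy.array([-120.0, -100.0, -90.0, 1.0])  # an ROI vertex in mm
pixel_point = numpy.linalg.inv(M).dot(world_point)       # world -> image
print(pixel_point[:3])  # voxel indices (fractional; round before indexing)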
Given the data packed along with the ROIs, is it safe to assume that:

a) The center of rotation is the PatientLocation (top left corner) (and not the center of the image)
b) The top left corner should be translated to PatientLocation (or rather, -PatientLocation)
c) Everything must be scaled by the pixel spacing.

Also, I am puzzled by the fact that the ROIs, although obtained by one of the standard projections, appear to have a Z variable that varies. I would expect that the ROI points would have 2 coordinates varying (describing the shape of the ROI) and the third possibly fixed to the slice "depth". However, in the data I have, the Z dimension seems to vary (I have not plotted the ROI in 3D yet to verify if it is indeed valid data). Is it right to assume that one of the dimensions of the ROI points might simply be invalid? (On a test image, I can recover the prescribed ROI by ignoring Z, but as I am writing code that should cover every eventuality, I thought I might check.) Or is it that the ROI might be slightly oblique, following the viewing plane that was set during the prescription process?

Looking forward to hearing from you

AA
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From matthew.brett at gmail.com  Mon Sep 5 19:48:48 2016
From: matthew.brett at gmail.com (Matthew Brett)
Date: Mon, 5 Sep 2016 16:48:48 -0700
Subject: [Neuroimaging] DICOM Orientation and World<-->Image coordinate transformation
In-Reply-To:
References:
Message-ID:

Hi,

On Sun, Sep 4, 2016 at 12:12 PM, Athanasios Anastasiou wrote:
> Hello everyone
>
> I am trying to convert between world and image coordinates and I am having
> some difficulty, particularly with inversing the transform.
>
> I need this specifically, as I would like to extract the pixels that have
> been prescribed (manually) by a ROI. I can access the grayscale values which
> are in some pixel space and I can access the ROI data which however is
> expressed in 'mm' and thus in world coordinates.
>
> As per http://nipy.org/nibabel/dicom/dicom_orientation.html I have all the
> data to construct the matrix that converts pixel to world coordinates but I
> am interested in the opposite direction (i.e. from world (ROI) to image
> (pixels)).
>
> My immediate reaction was to invert the transformation matrix. However, I am
> getting an error that the matrix is "singular".

Yes, inverting is what you want to do. The error is telling you about some error in the affine matrix.

Would you mind giving more details on where you got the entries for your affine matrix? What are your ImageOrientationPatient and ImagePositionPatient fields? I assume you got the third column of the matrix by taking the cross-product of the first two? What matrix do you end up with?

Best,

Matthew

From athanastasiou at gmail.com  Tue Sep 6 07:50:34 2016
From: athanastasiou at gmail.com (Athanasios Anastasiou)
Date: Tue, 6 Sep 2016 12:50:34 +0100
Subject: [Neuroimaging] DICOM Orientation and World<-->Image coordinate transformation
In-Reply-To:
References:
Message-ID:

Hello Matthew

Thank you for your response.
As per the DICOM pages:

#M = numpy.zeros((4,4))
#M[0:3,0]=numpy.array(tumourImageData[n].ImageOrientationPatient[0:3]) * tumourImageData[n].PixelSpacing[0]
#M[0:3,1]=numpy.array(tumourImageData[n].ImageOrientationPatient[3:]) * tumourImageData[n].PixelSpacing[1]
#M[0:3,3]=tumourImageData[n].ImagePositionPatient
#M[3,3]=1.0
#M=numpy.matrix(M)

ImageOrientationPatient is a flat 1x6 vector as it is coming out of the image, but more importantly it contains direction cosines, which means that it may not constitute a "proper" rotation matrix. The determinant of that matrix is 0.0. Scaling apart, it would still have to be a "valid" rotation matrix.

Specific Data:

ImageOrientationPatient:
['0.999857', '0.00390641', '0.0164496', '-0.00741602', '0.975738', '0.218818']

ImagePositionPatient:
['-127.773', '-105.599', '-94.5758']

PixelSpacing:
['0.4688', '0.4688']

Now, if I read this correctly, all the points are offset by ImagePositionPatient, scaled by pixel spacing (and so far so good), the FIRST ROW (i.e. X) points mostly downwards (rotation of almost 90 degrees around the X axis in the world system), and the FIRST COLUMN (i.e. Y) now points left. To interpret the rotations, I am taking the direction cosines as angle differences between the scanner space (x,y,z) and image space (i,j,k) as depicted in (https://www.slicer.org/slicerWiki/index.php/Coordinate_systems).

Come to think of it, these very small angles might be the reason for the non-zero Z coordinate in the ROI.

What am I doing wrong? (A runnable distillation of the construction above follows at the end of this message.)

All the best
AA

On Tue, Sep 6, 2016 at 12:48 AM, Matthew Brett wrote:

> Hi,
>
> On Sun, Sep 4, 2016 at 12:12 PM, Athanasios Anastasiou
> wrote:
> > Hello everyone
> >
> > I am trying to convert between world and image coordinates and I am
> having
> > some difficulty, particularly with inversing the transform.
> >
> > I need this specifically, as I would like to extract the pixels that have
> > been prescribed (manually) by a ROI. I can access the grayscale values
> which
> > are in some pixel space and I can access the ROI data which however is
> > expressed in 'mm' and thus in world coordinates.
> >
> > As per http://nipy.org/nibabel/dicom/dicom_orientation.html I have all
> the
> > data to construct the matrix that converts pixel to world coordinates
> but I
> > am interested in the opposite direction (i.e. from world (ROI) to image
> > (pixels)).
> >
> > My immediate reaction was to invert the transformation matrix. However,
> I am
> > getting an error that the matrix is "singular".
>
> Yes, inverting is what you want to do. The error is telling you
> about some error in the affine matrix.
>
> Would you mind giving more details on where you got the entries for
> your affine matrix? What are your ImageOrientationPatient and
> ImagePositionPatient fields? I assume you got the third column of the
> matrix by taking the cross-product of the first two? What matrix do
> you end up with?
>
> Best,
>
> Matthew
> _______________________________________________
> Neuroimaging mailing list
> Neuroimaging at python.org
> https://mail.python.org/mailman/listinfo/neuroimaging
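P.S. In case it helps, a self-contained distillation of the construction above, with the same values (a sketch only; the variable names mirror my code). It reproduces the singularity:

import numpy

ImageOrientationPatient = [0.999857, 0.00390641, 0.0164496,
                           -0.00741602, 0.975738, 0.218818]
ImagePositionPatient = [-127.773, -105.599, -94.5758]
PixelSpacing = [0.4688, 0.4688]

M = numpy.zeros((4, 4))
M[0:3, 0] = numpy.array(ImageOrientationPatient[0:3]) * PixelSpacing[0]
M[0:3, 1] = numpy.array(ImageOrientationPatient[3:]) * PixelSpacing[1]
M[0:3, 3] = ImagePositionPatient
M[3, 3] = 1.0

# Column 2 is never assigned, so it stays all zeros.
print(numpy.linalg.det(M))  # 0.0 -- M is singular and cannot be inverted
-------------- next part --------------
An HTML attachment was scrubbed...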
URL: From pieper at isomics.com Tue Sep 6 09:34:31 2016 From: pieper at isomics.com (Steve Pieper) Date: Tue, 6 Sep 2016 09:34:31 -0400 Subject: [Neuroimaging] DICOM Orientation and World<-->Image coordinate transformation In-Reply-To: References: Message-ID: Hi Athanasios - To get the scan direction you'll need to look at the relative ImagePositionPatient points from slice to slice. Note that the scan direction is not always the cross product of the row and column orientations since the scan may go in the other direction from a right handed cross product or the slices can be sheared (or even at arbitrary locations).. There are lots of other things that can happen too, like irregular spacing, missing slices, etc, but usually just normalizing the vector between your origin and any slice in the scan will be what you want. This code will give you an idea: https://github.com/Slicer/Slicer/blob/master/Modules/Scripted/DICOMPlugins/DICOMScalarVolumePlugin.py#L195-L216 Best, Steve On Tue, Sep 6, 2016 at 7:50 AM, Athanasios Anastasiou < athanastasiou at gmail.com> wrote: > Hello Matthew > > Thank you for your response. > > As per the DICOM pages: > > #M = numpy.zeros((4,4)) > #M[0:3,0]=numpy.array(tumourImageData[n].ImageOrientationPatient[0:3]) > * tumourImageData[n].PixelSpacing[0] > #M[0:3,1]=numpy.array(tumourImageData[n].ImageOrientationPatient[3:]) > * tumourImageData[n].PixelSpacing[1] > #M[0:3,3]=tumourImageData[n].ImagePositionPatient > #M[3,3]=1.0 > #M=numpy.matrix(M) > > ImageOrientationPatient is a flat 1x6 vector as it is coming out of the > image but more importantly it contains direction cosines which means that > it may not constitute a "proper" rotation matrix. The determinant of that > matrix is 0.0. Scaling apart, it would still have to be a "valid" rotation > matrix. > > Specific Data: > > ImageOrientationPatient: > ['0.999857', '0.00390641', '0.0164496', '-0.00741602', '0.975738', > '0.218818'] > > ImagePositionPatient: > ['-127.773', '-105.599', '-94.5758'] > > PixelSpacing: > ['0.4688', '0.4688'] > > > Now, if I read this correctly, all the points are offset by > ImagePositionPatient, scaled by pixel spacing (and so far so good), the > FIRST ROW (i.e. X) points mostly downwards (rotation of almost 90 degrees > around the X axis in the world system), and the FIRST COLUMN (i.e. Y) now > points left. To interpert the rotations, I am taking the direction cosines > as angle differences between the scanner space (x,y,z) and image space > (i,j,k) as depicted in (https://www.slicer.org/slicerWiki/index.php/ > Coordinate_systems). > > Come to think of it, these very small angles might be the reason for the > non-zero Z coordinate in the ROI. > > What am I doing wrong? > > All the best > AA > > > > > > > > > > > On Tue, Sep 6, 2016 at 12:48 AM, Matthew Brett > wrote: > >> Hi, >> >> On Sun, Sep 4, 2016 at 12:12 PM, Athanasios Anastasiou >> wrote: >> > Hello everyone >> > >> > I am trying to convert between world and image coordinates and I am >> having >> > some difficulty, particularly with inversing the transform. >> > >> > I need this specifically, as I would like to extract the pixels that >> have >> > been prescribed (manually) by a ROI. I can access the grayscale values >> which >> > are in some pixel space and I can access the ROI data which however is >> > expressed in 'mm' and thus in world coordinates. 
>> > >> > As per http://nipy.org/nibabel/dicom/dicom_orientation.html I have all >> the >> > data to construct the matrix that converts pixel to world coordinates >> but I >> > am interested in the opposite direction (i.e. from world (ROI) to image >> > (pixels)). >> > >> > My immediate reaction was to invert the transformation matrix. However, >> I am >> > getting an error that the matrix is "singular". >> >> Yes, inverting is what you want to do. The error is telling you >> about some error in the affine matrix. >> >> Would you mind giving more details on where you got the entries for >> your affine matrix? What are your ImageOrientationPatient and >> ImagePositionPatient fields? I assume you got the third column of the >> matrix by taking the cross-product of the first two? What matrix do >> you end up with? >> >> Best, >> >> Matthew >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> > > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From athanastasiou at gmail.com Tue Sep 6 10:02:01 2016 From: athanastasiou at gmail.com (Athanasios Anastasiou) Date: Tue, 6 Sep 2016 15:02:01 +0100 Subject: [Neuroimaging] DICOM Orientation and World<-->Image coordinate transformation In-Reply-To: References: Message-ID: Hello Steve Thank you very much for your response and the pointer. I feel that this might be an additional thing to be looking into once I get the mapping between World and Image coordinates (and back) right. Also, the scan direction is something that is not "bothering" me so much at the moment because each ROI references its grayscale image and I receive them sequentially ordered anyway. All the best AA On Tue, Sep 6, 2016 at 2:34 PM, Steve Pieper wrote: > Hi Athanasios - > > To get the scan direction you'll need to look at the relative > ImagePositionPatient points from slice to slice. Note that the scan > direction is not always the cross product of the row and column > orientations since the scan may go in the other direction from a right > handed cross product or the slices can be sheared (or even at arbitrary > locations).. There are lots of other things that can happen too, like > irregular spacing, missing slices, etc, but usually just normalizing the > vector between your origin and any slice in the scan will be what you want. > > This code will give you an idea: > > https://github.com/Slicer/Slicer/blob/master/Modules/ > Scripted/DICOMPlugins/DICOMScalarVolumePlugin.py#L195-L216 > > Best, > Steve > > On Tue, Sep 6, 2016 at 7:50 AM, Athanasios Anastasiou < > athanastasiou at gmail.com> wrote: > >> Hello Matthew >> >> Thank you for your response. >> >> As per the DICOM pages: >> >> #M = numpy.zeros((4,4)) >> #M[0:3,0]=numpy.array(tumourImageData[n].ImageOrientationPatient[0:3]) >> * tumourImageData[n].PixelSpacing[0] >> #M[0:3,1]=numpy.array(tumourImageData[n].ImageOrientationPatient[3:]) >> * tumourImageData[n].PixelSpacing[1] >> #M[0:3,3]=tumourImageData[n].ImagePositionPatient >> #M[3,3]=1.0 >> #M=numpy.matrix(M) >> >> ImageOrientationPatient is a flat 1x6 vector as it is coming out of the >> image but more importantly it contains direction cosines which means that >> it may not constitute a "proper" rotation matrix. The determinant of that >> matrix is 0.0. 
Scaling apart, it would still have to be a "valid" rotation >> matrix. >> >> Specific Data: >> >> ImageOrientationPatient: >> ['0.999857', '0.00390641', '0.0164496', '-0.00741602', '0.975738', >> '0.218818'] >> >> ImagePositionPatient: >> ['-127.773', '-105.599', '-94.5758'] >> >> PixelSpacing: >> ['0.4688', '0.4688'] >> >> >> Now, if I read this correctly, all the points are offset by >> ImagePositionPatient, scaled by pixel spacing (and so far so good), the >> FIRST ROW (i.e. X) points mostly downwards (rotation of almost 90 degrees >> around the X axis in the world system), and the FIRST COLUMN (i.e. Y) now >> points left. To interpert the rotations, I am taking the direction cosines >> as angle differences between the scanner space (x,y,z) and image space >> (i,j,k) as depicted in (https://www.slicer.org/slicer >> Wiki/index.php/Coordinate_systems). >> >> Come to think of it, these very small angles might be the reason for the >> non-zero Z coordinate in the ROI. >> >> What am I doing wrong? >> >> All the best >> AA >> >> >> >> >> >> >> >> >> >> >> On Tue, Sep 6, 2016 at 12:48 AM, Matthew Brett >> wrote: >> >>> Hi, >>> >>> On Sun, Sep 4, 2016 at 12:12 PM, Athanasios Anastasiou >>> wrote: >>> > Hello everyone >>> > >>> > I am trying to convert between world and image coordinates and I am >>> having >>> > some difficulty, particularly with inversing the transform. >>> > >>> > I need this specifically, as I would like to extract the pixels that >>> have >>> > been prescribed (manually) by a ROI. I can access the grayscale values >>> which >>> > are in some pixel space and I can access the ROI data which however is >>> > expressed in 'mm' and thus in world coordinates. >>> > >>> > As per http://nipy.org/nibabel/dicom/dicom_orientation.html I have >>> all the >>> > data to construct the matrix that converts pixel to world coordinates >>> but I >>> > am interested in the opposite direction (i.e. from world (ROI) to image >>> > (pixels)). >>> > >>> > My immediate reaction was to invert the transformation matrix. >>> However, I am >>> > getting an error that the matrix is "singular". >>> >>> Yes, inverting is what you want to do. The error is telling you >>> about some error in the affine matrix. >>> >>> Would you mind giving more details on where you got the entries for >>> your affine matrix? What are your ImageOrientationPatient and >>> ImagePositionPatient fields? I assume you got the third column of the >>> matrix by taking the cross-product of the first two? What matrix do >>> you end up with? >>> >>> Best, >>> >>> Matthew >>> _______________________________________________ >>> Neuroimaging mailing list >>> Neuroimaging at python.org >>> https://mail.python.org/mailman/listinfo/neuroimaging >>> >> >> >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> >> > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From pieper at isomics.com  Tue Sep 6 11:43:06 2016
From: pieper at isomics.com (Steve Pieper)
Date: Tue, 6 Sep 2016 11:43:06 -0400
Subject: [Neuroimaging] DICOM Orientation and World<-->Image coordinate transformation
In-Reply-To:
References:
Message-ID:

Hi Athanasios -

What I'm saying is that if you make the index to physical matrix using the method I described, it will be invertible and you can use that to calculate the image coordinates for any point in physical space. You need to look at the vector between the slice origins to calculate the third column of your matrix.

-Steve

On Tue, Sep 6, 2016 at 10:02 AM, Athanasios Anastasiou < athanastasiou at gmail.com> wrote:

> Hello Steve
>
> Thank you very much for your response and the pointer.
>
> I feel that this might be an additional thing to be looking into once I
> get the mapping between World and Image coordinates (and back) right. Also,
> the scan direction is something that is not "bothering" me so much at the
> moment because each ROI references its grayscale image and I receive them
> sequentially ordered anyway.
>
> All the best
> AA
>
> On Tue, Sep 6, 2016 at 2:34 PM, Steve Pieper wrote:
>
>> Hi Athanasios -
>>
>> To get the scan direction you'll need to look at the relative
>> ImagePositionPatient points from slice to slice. Note that the scan
>> direction is not always the cross product of the row and column
>> orientations since the scan may go in the other direction from a right
>> handed cross product or the slices can be sheared (or even at arbitrary
>> locations).. There are lots of other things that can happen too, like
>> irregular spacing, missing slices, etc, but usually just normalizing the
>> vector between your origin and any slice in the scan will be what you want.
>>
>> This code will give you an idea:
>>
>> https://github.com/Slicer/Slicer/blob/master/Modules/Scripted/DICOMPlugins/DICOMScalarVolumePlugin.py#L195-L216
>>
>> Best,
>> Steve
>>
>> On Tue, Sep 6, 2016 at 7:50 AM, Athanasios Anastasiou <
>> athanastasiou at gmail.com> wrote:
>>
>>> Hello Matthew
>>>
>>> Thank you for your response.
>>>
>>> As per the DICOM pages:
>>>
>>> #M = numpy.zeros((4,4))
>>> #M[0:3,0]=numpy.array(tumourImageData[n].ImageOrientationPatient[0:3])
>>> * tumourImageData[n].PixelSpacing[0]
>>> #M[0:3,1]=numpy.array(tumourImageData[n].ImageOrientationPatient[3:])
>>> * tumourImageData[n].PixelSpacing[1]
>>> #M[0:3,3]=tumourImageData[n].ImagePositionPatient
>>> #M[3,3]=1.0
>>> #M=numpy.matrix(M)
>>>
>>> ImageOrientationPatient is a flat 1x6 vector as it is coming out of the
>>> image but more importantly it contains direction cosines which means that
>>> it may not constitute a "proper" rotation matrix. The determinant of that
>>> matrix is 0.0.
To interpert the rotations, I am taking the direction cosines >>> as angle differences between the scanner space (x,y,z) and image space >>> (i,j,k) as depicted in (https://www.slicer.org/slicer >>> Wiki/index.php/Coordinate_systems). >>> >>> Come to think of it, these very small angles might be the reason for the >>> non-zero Z coordinate in the ROI. >>> >>> What am I doing wrong? >>> >>> All the best >>> AA >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> On Tue, Sep 6, 2016 at 12:48 AM, Matthew Brett >>> wrote: >>> >>>> Hi, >>>> >>>> On Sun, Sep 4, 2016 at 12:12 PM, Athanasios Anastasiou >>>> wrote: >>>> > Hello everyone >>>> > >>>> > I am trying to convert between world and image coordinates and I am >>>> having >>>> > some difficulty, particularly with inversing the transform. >>>> > >>>> > I need this specifically, as I would like to extract the pixels that >>>> have >>>> > been prescribed (manually) by a ROI. I can access the grayscale >>>> values which >>>> > are in some pixel space and I can access the ROI data which however is >>>> > expressed in 'mm' and thus in world coordinates. >>>> > >>>> > As per http://nipy.org/nibabel/dicom/dicom_orientation.html I have >>>> all the >>>> > data to construct the matrix that converts pixel to world coordinates >>>> but I >>>> > am interested in the opposite direction (i.e. from world (ROI) to >>>> image >>>> > (pixels)). >>>> > >>>> > My immediate reaction was to invert the transformation matrix. >>>> However, I am >>>> > getting an error that the matrix is "singular". >>>> >>>> Yes, inverting is what you want to do. The error is telling you >>>> about some error in the affine matrix. >>>> >>>> Would you mind giving more details on where you got the entries for >>>> your affine matrix? What are your ImageOrientationPatient and >>>> ImagePositionPatient fields? I assume you got the third column of the >>>> matrix by taking the cross-product of the first two? What matrix do >>>> you end up with? >>>> >>>> Best, >>>> >>>> Matthew >>>> _______________________________________________ >>>> Neuroimaging mailing list >>>> Neuroimaging at python.org >>>> https://mail.python.org/mailman/listinfo/neuroimaging >>>> >>> >>> >>> _______________________________________________ >>> Neuroimaging mailing list >>> Neuroimaging at python.org >>> https://mail.python.org/mailman/listinfo/neuroimaging >>> >>> >> >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Tue Sep 6 13:23:59 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Tue, 6 Sep 2016 10:23:59 -0700 Subject: [Neuroimaging] DICOM Orientation and World<-->Image coordinate transformation In-Reply-To: References: Message-ID: Hi, On Tue, Sep 6, 2016 at 6:34 AM, Steve Pieper wrote: > Hi Athanasios - > > To get the scan direction you'll need to look at the relative > ImagePositionPatient points from slice to slice. Note that the scan > direction is not always the cross product of the row and column orientations > since the scan may go in the other direction from a right handed cross > product or the slices can be sheared (or even at arbitrary locations).. > There are lots of other things that can happen too, like irregular spacing, > missing slices, etc, but usually just normalizing the vector between your > origin and any slice in the scan will be what you want. 
> > This code will give you an idea: > > https://github.com/Slicer/Slicer/blob/master/Modules/Scripted/DICOMPlugins/DICOMScalarVolumePlugin.py#L195-L216 > >From your code, you are missing a valid third column for your affine. I believe that column will be all zeros from your code. This is what the later part of the DICOM orientation page is talking about, and what Steve is referring to as the "slice direction". Steve is quite right that the slice direction need not be the cross-product of the first two, and the DICOM information can often tell you what that slice direction vector is, but assuming for a moment that it is the cross product, and that you are looking at the first slice of the volume, then you'd want something like: """ import numpy as np ImageOrientationPatient = [0.999857, 0.00390641, 0.0164496, -0.00741602, 0.975738, 0.218818] ImagePositionPatient = [-127.773, -105.599, -94.5758] PixelSpacing = [0.4688, 0.4688] slice_spacing = 3.0 # ? # Make F array from DICOM orientation page F = np.fliplr(np.reshape(ImageOrientationPatient, (2, 3)).T) rotations = np.eye(3) rotations[:, :2] = F # Third direction cosine from cross-product of first two rotations[:, 2] = np.cross(F[:, 0], F[:, 1]) # Add the zooms zooms = np.diag(PixelSpacing + [slice_spacing]) # Make the affine affine = np.diag([0., 0, 0, 1]) affine[:3, :3] = rotations.dot(zooms) affine[:3, 3] = ImagePositionPatient np.set_printoptions(precision=4, suppress=True) print(affine) """ But - Steve's suggestion is more general - this code is just to give you an idea. Best, Matthew From satra at mit.edu Wed Sep 7 18:20:41 2016 From: satra at mit.edu (Satrajit Ghosh) Date: Wed, 7 Sep 2016 15:20:41 -0700 Subject: [Neuroimaging] nipy organization on dockerhub Message-ID: hi folks, i was trying to create a nipy organization on dockerhub and was wondering if someone had already created one. if no one has created a nipy organization on dockerhub, then i'll write to them to see if we can get that organization name. if someone has created one, could you please create a team and add me to it? cheers, satra -------------- next part -------------- An HTML attachment was scrubbed... URL: From arokem at gmail.com Wed Sep 7 18:34:31 2016 From: arokem at gmail.com (Ariel Rokem) Date: Wed, 7 Sep 2016 15:34:31 -0700 Subject: [Neuroimaging] nipy organization on dockerhub In-Reply-To: References: Message-ID: Hi Satra, On Wed, Sep 7, 2016 at 3:20 PM, Satrajit Ghosh wrote: > hi folks, > > i was trying to create a nipy organization on dockerhub and was wondering > if someone had already created one. > > if no one has created a nipy organization on dockerhub, then i'll write to > them to see if we can get that organization name. > > if someone has created one, could you please create a team and add me to > it? > I own that. What is it worth to you? ;) Cheers, Ariel > cheers, > > satra > > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From satra at mit.edu Wed Sep 7 18:38:21 2016 From: satra at mit.edu (Satrajit Ghosh) Date: Wed, 7 Sep 2016 15:38:21 -0700 Subject: [Neuroimaging] nipy organization on dockerhub In-Reply-To: References: Message-ID: a hug ;) cheers, satra On Wed, Sep 7, 2016 at 3:34 PM, Ariel Rokem wrote: > Hi Satra, > > On Wed, Sep 7, 2016 at 3:20 PM, Satrajit Ghosh wrote: > >> hi folks, >> >> i was trying to create a nipy organization on dockerhub and was wondering >> if someone had already created one. >> >> if no one has created a nipy organization on dockerhub, then i'll write >> to them to see if we can get that organization name. >> >> if someone has created one, could you please create a team and add me to >> it? >> > > I own that. What is it worth to you? ;) > > Cheers, > > Ariel > > >> cheers, >> >> satra >> >> >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> >> > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From athanastasiou at gmail.com Thu Sep 8 16:37:27 2016 From: athanastasiou at gmail.com (Athanasios Anastasiou) Date: Thu, 8 Sep 2016 21:37:27 +0100 Subject: [Neuroimaging] DICOM Orientation and World<-->Image coordinate transformation In-Reply-To: References: Message-ID: Hello Matthew & Steven Thank you for your email. Of course I am missing the third column :( I am paying too much attention on the two numbers I am after right now, to bring the contour right where it should be when plotting it over the image. Thank you for your help, I will have another go at establishing the matrix with the helpful comments provided here. All the best AA On 6 Sep 2016 18:25, "Matthew Brett" wrote: > Hi, > > On Tue, Sep 6, 2016 at 6:34 AM, Steve Pieper wrote: > > Hi Athanasios - > > > > To get the scan direction you'll need to look at the relative > > ImagePositionPatient points from slice to slice. Note that the scan > > direction is not always the cross product of the row and column > orientations > > since the scan may go in the other direction from a right handed cross > > product or the slices can be sheared (or even at arbitrary locations).. > > There are lots of other things that can happen too, like irregular > spacing, > > missing slices, etc, but usually just normalizing the vector between your > > origin and any slice in the scan will be what you want. > > > > This code will give you an idea: > > > > https://github.com/Slicer/Slicer/blob/master/Modules/ > Scripted/DICOMPlugins/DICOMScalarVolumePlugin.py#L195-L216 > > > > From your code, you are missing a valid third column for your affine. > I believe that column will be all zeros from your code. This is what > the later part of the DICOM orientation page is talking about, and > what Steve is referring to as the "slice direction". 
> > Steve is quite right that the slice direction need not be the > cross-product of the first two, and the DICOM information can often > tell you what that slice direction vector is, but assuming for a > moment that it is the cross product, and that you are looking at the > first slice of the volume, then you'd want something like: > > """ > import numpy as np > > ImageOrientationPatient = [0.999857, 0.00390641, 0.0164496, > -0.00741602, 0.975738, 0.218818] > > ImagePositionPatient = [-127.773, -105.599, -94.5758] > > PixelSpacing = [0.4688, 0.4688] > > slice_spacing = 3.0 # ? > > # Make F array from DICOM orientation page > F = np.fliplr(np.reshape(ImageOrientationPatient, (2, 3)).T) > rotations = np.eye(3) > rotations[:, :2] = F > # Third direction cosine from cross-product of first two > rotations[:, 2] = np.cross(F[:, 0], F[:, 1]) > # Add the zooms > zooms = np.diag(PixelSpacing + [slice_spacing]) > > # Make the affine > affine = np.diag([0., 0, 0, 1]) > affine[:3, :3] = rotations.dot(zooms) > affine[:3, 3] = ImagePositionPatient > > np.set_printoptions(precision=4, suppress=True) > print(affine) > """ > > But - Steve's suggestion is more general - this code is just to give > you an idea. > > Best, > > Matthew > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pellman.john at gmail.com Mon Sep 12 14:00:20 2016 From: pellman.john at gmail.com (John Pellman) Date: Mon, 12 Sep 2016 14:00:20 -0400 Subject: [Neuroimaging] Pysurfer Brain's save_image method produces images with only background color Message-ID: Hi all, I'm encountering a peculiar Pysurfer error on our server and I was wondering if anyone has encountered anything similar or might have some insight into how I can tackle it. Basically, when our researchers try to save a png image using Brain.save_image() or Brain.save_imageset() the images produced only contain the background color (as you may have inferred from the subject line). I've traced this back to Scipy method (scipy.misc.imsave), but it looks like this would only output an empty png if the image passed in were completely zeroed out. Our setup uses the following versions of pysurfer/its dependencies: Numpy: 1.10.0.dev0+1fe98ff Scipy: 0.17.0.dev0+f2f6e48 Ipython: 3.1.0 nibabel: 2.0.0 Mayavi: 4.4.2 matplotlib: 1.4.3 PIL: 1.1.7 Pysurfer: 0.5 This setup is running within a Miniconda environment using Python 2.7.11. I'm uncertain if this is related, but running the example code here produces the following warning: *(ipython:20765): Gdk-WARNING **: /build/buildd/gtk+2.0-2.24.27/gdk/x11/gdkdrawable-x11.c:952 drawable is not a pixmap or window* Any insight would be greatly appreciated. Best, John Pellman -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruy.valle at yale.edu Sat Sep 10 14:02:39 2016 From: ruy.valle at yale.edu (Valle-Mena, Ricardo Ruy) Date: Sat, 10 Sep 2016 18:02:39 +0000 Subject: [Neuroimaging] Contributing Message-ID: <8C18EE4A-F598-4FB1-AE40-B1B2229B6B8D@yale.edu> Hi all, I would like to contribute code to Nipy but I am not sure where to start. The Nipy github page recommended emailing this list for questions about this. Sorry that my question is so vague... Cheers! 
Ruy From arokem at gmail.com Mon Sep 12 15:25:24 2016 From: arokem at gmail.com (Ariel Rokem) Date: Mon, 12 Sep 2016 12:25:24 -0700 Subject: [Neuroimaging] Contributing In-Reply-To: <8C18EE4A-F598-4FB1-AE40-B1B2229B6B8D@yale.edu> References: <8C18EE4A-F598-4FB1-AE40-B1B2229B6B8D@yale.edu> Message-ID: Hi Ruy, Thanks for your email! If you tell us a little bit more about yourself (e.g., what kind of neuroimaging data do you analyze? What are some things you would like to implement as part of the nipy effort? What is your background in terms of software development?) that would be helpful in helping you find a good place to start contributing. Cheers, Ariel On Sat, Sep 10, 2016 at 11:02 AM, Valle-Mena, Ricardo Ruy < ruy.valle at yale.edu> wrote: > Hi all, > > I would like to contribute code to Nipy but I am not sure where to start. > The Nipy github page recommended emailing this list for questions about > this. Sorry that my question is so vague... > > Cheers! > > Ruy > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruyvalle at yahoo.ca Tue Sep 13 12:40:23 2016 From: ruyvalle at yahoo.ca (Ruy Valle) Date: Tue, 13 Sep 2016 12:40:23 -0400 Subject: [Neuroimaging] Contributing In-Reply-To: References: Message-ID: <362B6DA5-490D-4382-AD07-051885C83C12@yahoo.ca> Hi Ariel, Thank you for your response. I am doing fMRI analysis, mostly task-based up until now but I will be getting into resting-state analysis sooner or later. I will probably also start looking at EEG and TMS, but I am not sure how relevant that is for Nipy. I learned to use AFNI at their bootcamp and FSL through my supervisor at work. I recently (yesterday) started working on a project of mine through which I am hoping to clarify what the effect of including motion parameters (estimates of head motion calculated during volume registration/motion correction) in regression models used in fMRI software is. I would say that I have a fair amount of experience in Java but have not used it much as of late, am comfortable enough with Python, R, and MATLAB to really dive into them as much as needed, and have been learning C and Go recently. A colleague of mine introduced me to Nipy/Nipype around 5 months ago and I really liked the idea of it. I was still in college at the time and graduated this last semester from McGill University. I have been reading a book on modeling techniques and learning about statistics more generally, so contributing on that side could be nice. I also enjoy learning about algorithms (I implemented a heap sort algorithm in C a few weeks ago). To be honest, I find I still lack enough experience (both in software and in life in general) to pinpoint my preferences, and am open to trying new things, hopefully discovering what most appeals to me, increasing my skills, and making useful contributions along the way. I hope this helps... Best wishes, Ruy > Hi Ruy, > > Thanks for your email! If you tell us a little bit more about yourself > (e.g., what kind of neuroimaging data do you analyze? What are some things > you would like to implement as part of the nipy effort? What is your > background in terms of software development?) that would be helpful in > helping you find a good place to start contributing. 
> > Cheers,
> >
> > Ariel

From pellman.john at gmail.com  Tue Sep 13 13:24:23 2016
From: pellman.john at gmail.com (John Pellman)
Date: Tue, 13 Sep 2016 13:24:23 -0400
Subject: [Neuroimaging] Pysurfer Brain's save_image method produces images with only background color
In-Reply-To:
References:
Message-ID:

It looks like it might be related to the following issue described at StackOverflow:

http://stackoverflow.com/questions/16543634/mayavi-mlab-savefig-gives-an-empty-image

On Mon, Sep 12, 2016 at 2:00 PM, John Pellman wrote:

> Hi all,
>
> I'm encountering a peculiar Pysurfer error on our server and I was
> wondering if anyone has encountered anything similar or might have some
> insight into how I can tackle it. Basically, when our researchers try to
> save a png image using Brain.save_image() or Brain.save_imageset() the
> images produced only contain the background color (as you may have inferred
> from the subject line). I've traced this back to a Scipy method
> (scipy.misc.imsave), but it looks like this would only output an empty png
> if the image passed in were completely zeroed out. Our setup uses the
> following versions of pysurfer/its dependencies:
>
> Numpy: 1.10.0.dev0+1fe98ff
> Scipy: 0.17.0.dev0+f2f6e48
> Ipython: 3.1.0
> nibabel: 2.0.0
> Mayavi: 4.4.2
> matplotlib: 1.4.3
> PIL: 1.1.7
> Pysurfer: 0.5
>
> This setup is running within a Miniconda environment using Python 2.7.11.
> I'm uncertain if this is related, but running the example code here
> produces the following warning:
>
> *(ipython:20765): Gdk-WARNING **:
> /build/buildd/gtk+2.0-2.24.27/gdk/x11/gdkdrawable-x11.c:952 drawable is not
> a pixmap or window*
>
> Any insight would be greatly appreciated.
>
> Best,
> John Pellman
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From kashi.vishwa25 at gmail.com  Tue Sep 13 19:09:42 2016
From: kashi.vishwa25 at gmail.com (Kashi Vishwanath)
Date: Tue, 13 Sep 2016 16:09:42 -0700
Subject: [Neuroimaging] Convert Nifti images
Message-ID:

Hello Users,

We have a concatenated dataset, i.e. a 4D NIFTI dataset. We are looking for options in Python to convert the 4D dataset into 3D datasets.

Is there a Python package for this? We tried nibabel.four_to_three() but couldn't work out exactly how it works; not much documentation available.

Grateful for help.

Kashi
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From matthew.brett at gmail.com  Tue Sep 13 19:18:08 2016
From: matthew.brett at gmail.com (Matthew Brett)
Date: Tue, 13 Sep 2016 16:18:08 -0700
Subject: [Neuroimaging] Convert Nifti images
In-Reply-To:
References:
Message-ID:

Hi,

On Tue, Sep 13, 2016 at 4:09 PM, Kashi Vishwanath wrote:
> Hello Users,
>
> We have a concatenated dataset, i.e. a 4D NIFTI dataset. We are looking for
> options in Python to convert the 4D dataset into 3D datasets.
>
> Is there a Python package for this? We tried nibabel.four_to_three() but
> couldn't work out exactly how it works; not much documentation available.

The documentation is here:

http://nipy.org/nibabel/reference/nibabel.funcs.html#four-to-three

You'd use that from within a Python script or interactive session, as in:

In [3]: import nibabel as nib

In [4]: img = nib.load('a_4d_image.nii')

In [5]: images = nib.four_to_three(img)

In [6]: for i, img_3d in enumerate(images):
   ...:     nib.save(img_3d, 'a_3d_image_{:03d}.nii'.format(i))

You could also use the program `nipy_4d_to_3d` from NiPy.
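If you just want a single volume, you can also index the data array yourself and wrap the result back up - a minimal sketch, continuing the same session (the filenames are just examples):

In [7]: data = img.get_data()  # the full 4D array

In [8]: vol0 = nib.Nifti1Image(data[..., 0], img.affine, img.header)

In [9]: nib.save(vol0, 'a_3d_image_first.nii')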
Best,

Matthew

From athanastasiou at gmail.com  Tue Sep 13 08:58:08 2016
From: athanastasiou at gmail.com (Athanasios Anastasiou)
Date: Tue, 13 Sep 2016 13:58:08 +0100
Subject: [Neuroimaging] DICOM Orientation and World<-->Image coordinate transformation
In-Reply-To:
References:
Message-ID:

Hello Matthew & Steven

Alright, I am not sure what to make of this, because the ImagePositionPatient for the part of the scan I am interested in seems to be a vector that cuts across the space diagonally. I am trying to take Steve's viewpoint here, as it looks like the "easiest" too (just rotate the "track" to align to one of the axes).

The values of ImagePositionPatient are:

array([[-127.773 , -105.599 ,  -94.5758],
       [-127.841 , -106.584 ,  -90.1855],
       [-127.91  , -107.569 ,  -85.7951],
       [-127.978 , -108.554 ,  -81.4048],
       [-128.046 , -109.539 ,  -77.0145],
       [-128.115 , -110.524 ,  -72.6241],
       [-128.183 , -111.509 ,  -68.2338],
       [-128.251 , -112.494 ,  -63.8435],
       [-128.32  , -113.479 ,  -59.4531],
       [-128.388 , -114.464 ,  -55.0628],
       [-128.456 , -115.449 ,  -50.6725]])

And the 3D plot of that is available further below.

[inline image: 3D plot of the ImagePositionPatient points]

They are all regular (good) and all intermediate slice thicknesses are the same (double good).

Now, the "trouble" with this is that this represents the direction of one of the axes. Let's call it the Z axis. The other two define a plane that is perpendicular to this direction. I can rotate the axes so that this "Z" is aligned with one of the Zs in space.

The other thing that is a bit of a "problem" here, of course, is the third dimension of my ROI data, because so far, in my preliminary tests, I have been ignoring it. This means that, just by looking at X,Y, I may have been seeing something that is distorted. And looking at this track, I may be seeing something that is thinner on the vertical projection than it really is, although the increases in two of the three axes are small.

Can I please ask the following:

1) Am I right to assume that the ROI data are perpendicular to this "track"?

2) All I have to do now, then, is work out a rotation around Z and Y (or, two of the three axes) to make this track parallel to one of the axes (?) and then apply that transformation to the ROIs so that, when I set them on the image (which is always properly aligned), they appear to be properly aligned. (By the looks of this, -45 deg around Z, -45 around Y and I am there.)

3) How does the DICOM rotation data relate to this track? (If at all.)

4) Is there an API (or part of an API) for a [DICOM Data Type].getPixel or [DICOM Data Type].getVoxel kind of operation? Even if it is via a class ecosystem that makes sense within the context of Slicer or another piece of software. The problem here is that I don't have a volumetric DICOM (multiple images, single file). I have a ROI DICOM that references individual images. So, I am building my own volume-aware data type (based on pydicom) that will implement its getVoxel method and is aware of a few other things I am going to be doing with these volumes. But if this is already done somewhere, maybe I could re-use it (?).

(A quick numeric check on the positions above, following Steve's suggestion, is sketched at the end of this message.)

Looking forward to hearing from you
AA

On Thu, Sep 8, 2016 at 9:37 PM, Athanasios Anastasiou < athanastasiou at gmail.com> wrote:

> Hello Matthew & Steven
>
> Thank you for your email. Of course I am missing the third column :( I am
> paying too much attention on the two numbers I am after right now, to bring
> the contour right where it should be when plotting it over the image.
>
> Thank you for your help, I will have another go at establishing the matrix
> with the helpful comments provided here.
>
> All the best
> AA
>
> On 6 Sep 2016 18:25, "Matthew Brett" wrote:
>
>> Hi,
>>
>> On Tue, Sep 6, 2016 at 6:34 AM, Steve Pieper wrote:
>> > Hi Athanasios -
>> >
>> > To get the scan direction you'll need to look at the relative
>> > ImagePositionPatient points from slice to slice. Note that the scan
>> > direction is not always the cross product of the row and column
>> > orientations since the scan may go in the other direction from a right
>> > handed cross product or the slices can be sheared (or even at arbitrary
>> > locations).. There are lots of other things that can happen too, like
>> > irregular spacing, missing slices, etc, but usually just normalizing the
>> > vector between your origin and any slice in the scan will be what you want.
>> >
>> > This code will give you an idea:
>> >
>> > https://github.com/Slicer/Slicer/blob/master/Modules/Scripted/DICOMPlugins/DICOMScalarVolumePlugin.py#L195-L216
>> >
>>
>> From your code, you are missing a valid third column for your affine.
>> I believe that column will be all zeros from your code. This is what
>> the later part of the DICOM orientation page is talking about, and
>> what Steve is referring to as the "slice direction".
>>
>> Steve is quite right that the slice direction need not be the
>> cross-product of the first two, and the DICOM information can often
>> tell you what that slice direction vector is, but assuming for a
>> moment that it is the cross product, and that you are looking at the
>> first slice of the volume, then you'd want something like:
>>
>> """
>> import numpy as np
>>
>> ImageOrientationPatient = [0.999857, 0.00390641, 0.0164496,
>> -0.00741602, 0.975738, 0.218818]
>>
>> ImagePositionPatient = [-127.773, -105.599, -94.5758]
>>
>> PixelSpacing = [0.4688, 0.4688]
>>
>> slice_spacing = 3.0  # ?
>>
>> # Make F array from DICOM orientation page
>> F = np.fliplr(np.reshape(ImageOrientationPatient, (2, 3)).T)
>> rotations = np.eye(3)
>> rotations[:, :2] = F
>> # Third direction cosine from cross-product of first two
>> rotations[:, 2] = np.cross(F[:, 0], F[:, 1])
>> # Add the zooms
>> zooms = np.diag(PixelSpacing + [slice_spacing])
>>
>> # Make the affine
>> affine = np.diag([0., 0, 0, 1])
>> affine[:3, :3] = rotations.dot(zooms)
>> affine[:3, 3] = ImagePositionPatient
>>
>> np.set_printoptions(precision=4, suppress=True)
>> print(affine)
>> """
>>
>> But - Steve's suggestion is more general - this code is just to give
>> you an idea.
>>
>> Best,
>>
>> Matthew
>> _______________________________________________
>> Neuroimaging mailing list
>> Neuroimaging at python.org
>> https://mail.python.org/mailman/listinfo/neuroimaging
>>
>
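P.S. The quick check mentioned above, applying Steve's suggestion of looking at the slice-to-slice vector to the ImagePositionPatient values I listed (plain numpy; a sketch only):

import numpy

positions = numpy.array([
    [-127.773, -105.599, -94.5758],
    [-127.841, -106.584, -90.1855],
    [-127.91,  -107.569, -85.7951],
    [-127.978, -108.554, -81.4048],
    [-128.046, -109.539, -77.0145],
    [-128.115, -110.524, -72.6241],
    [-128.183, -111.509, -68.2338],
    [-128.251, -112.494, -63.8435],
    [-128.32,  -113.479, -59.4531],
    [-128.388, -114.464, -55.0628],
    [-128.456, -115.449, -50.6725]])

steps = numpy.diff(positions, axis=0)           # slice-to-slice vectors
spacings = numpy.sqrt((steps ** 2).sum(axis=1))
print(spacings)       # all ~4.50 mm, so the spacing is regular
slice_dir = steps[0] / spacings[0]              # unit slice direction
print(slice_dir)      # candidate third (slice direction) column for the affine
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...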
Name: DICOM_Fig1.png Type: image/png Size: 178149 bytes Desc: not available URL: From pieper at isomics.com Tue Sep 13 14:20:29 2016 From: pieper at isomics.com (Steve Pieper) Date: Tue, 13 Sep 2016 14:20:29 -0400 Subject: [Neuroimaging] DICOM Orientation and World<-->Image coordinate transformation In-Reply-To: References: Message-ID: Hi Athanasios - It's very likely that your data is not sheared, which you can check by seeing if the cross product of the in plane (row and column from Image Orientation Patient) vectors is parallel to the line you have plotted (as noted before it may point the same way or the opposite way). It's not clear to me from your question where you obtained the ROI , but it's also very likely that your ROI is in the same pixel space as the MR, in which case it would share the same matrix from pixel to patient spaces. If it turns out they have different sampling grids then you may need to resample one or the other. If the data files have valid headers this is a fairly trivial operation to do with the GUI in Slicer or you could write some code to do it. Maybe the easiest if for you to read through the DICOM documentation for background. http://dicom.nema.org/medical/dicom/current/output/html/part03.html#sect_C.7.6.2 Hope that helps, Steve On Tue, Sep 13, 2016 at 8:58 AM, Athanasios Anastasiou < athanastasiou at gmail.com> wrote: > Hello Matthew & Steven > > Alright, I am not sure what to make of this because the > ImagePositionPatient for the part of the scan I am interested in seems to > be a vector that cuts across the space diagonally. I am trying to take > Steve's viewpoint here, as it looks like the "easiest" too (just rotate the > "track" to align to one of the axes). > > The values of ImagePositionPatient are: > > array([[-127.773 , -105.599 , -94.5758], > [-127.841 , -106.584 , -90.1855], > [-127.91 , -107.569 , -85.7951], > [-127.978 , -108.554 , -81.4048], > [-128.046 , -109.539 , -77.0145], > [-128.115 , -110.524 , -72.6241], > [-128.183 , -111.509 , -68.2338], > [-128.251 , -112.494 , -63.8435], > [-128.32 , -113.479 , -59.4531], > [-128.388 , -114.464 , -55.0628], > [-128.456 , -115.449 , -50.6725]]) > > And the 3d plot of that is available further below > > [image: Inline image 1] > > They are all regular (good) and all intermediate slice thicknesses are the > same (double good). > > Now, the "trouble" with this is that this represents the direction of one > of the axes. Let's call it the Z axis. The other two define a plane that is > perpendicular to this direction. I can rotate the axes so that this "Z" is > aligned with one of the Zs in space. > > The other thing that is a bit of a "problem" here of course is the third > dimension of my ROI data. Because so far, in my preliminary tests, I have > been ignoring it. This means, that just by looking at X,Y, I may have been > seeing something that is distorted. And looking at this track, I may be > seeing something that is thinner on the vertical projection than it really > is although the increases in 2/3 axes are small. > > Can I please ask the following: > > 1) Am I right to assume that the ROI data are perpendicular to this > "track"? > > 2) All I have to do now then, is workout a rotation around Z and Y (or, > 2/3 axes) to make this track parallel to one of the axes (?) and then apply > that transformation to the ROIs so that, when I set them on the image > (which is always properly alligned), it appears to be properly aligned. (By > the looks of this, -45 deg around Z, -45 around Y and I am there. 
> 3) How does the DICOM rotation data relate to this track? (If at all.)
>
> 4) Is there an API (or part of an API) for a [DICOM Data Type].getPixel
> or [DICOM Data Type].getVoxel kind of operation? Even if it is via a
> class ecosystem that makes sense within the context of Slicer or another
> piece of software. The problem here is that I don't have a volumetric
> DICOM (multiple images, single file). I have a ROI DICOM that references
> individual images. So, I am building my own volume-aware data type (based
> on pydicom) that will implement its getVoxel method and is aware of a few
> other things I am going to be doing with these volumes. But if this is
> already done somewhere, maybe I could re-use it (?).
>
> Looking forward to hearing from you
> AA
>
> On Thu, Sep 8, 2016 at 9:37 PM, Athanasios Anastasiou
> <athanastasiou at gmail.com> wrote:
>
>> Hello Matthew & Steven
>>
>> Thank you for your email. Of course I am missing the third column :( I
>> am paying too much attention to the two numbers I am after right now, to
>> bring the contour right where it should be when plotting it over the
>> image.
>>
>> Thank you for your help, I will have another go at establishing the
>> matrix with the helpful comments provided here.
>>
>> All the best
>> AA
>>
>> On 6 Sep 2016 18:25, "Matthew Brett" wrote:
>>> [...]
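A minimal sketch of the check Steve describes - whether the slice-to-slice direction is parallel to the cross product of the row and column direction cosines - using only the numbers quoted in this thread (taking first-to-last positions assumes the slices are listed in order, which they are here):

"""
import numpy as np

# Values quoted earlier in the thread
iop = np.reshape([0.999857, 0.00390641, 0.0164496,
                  -0.00741602, 0.975738, 0.218818], (2, 3))
ipp = np.array([[-127.773, -105.599, -94.5758],   # first slice
                [-127.841, -106.584, -90.1855],   # second slice
                [-128.456, -115.449, -50.6725]])  # last slice

# Normalized direction from the first slice to the last
slice_dir = ipp[-1] - ipp[0]
slice_dir /= np.linalg.norm(slice_dir)

# Cross product of the in-plane (row, column) direction cosines
normal = np.cross(iop[0], iop[1])

# ~ +1: no shear, scan runs along +normal; ~ -1: scan runs along -normal
print(np.dot(slice_dir, normal))

# Distance between adjacent slices
print(np.linalg.norm(ipp[1] - ipp[0]))
"""

For these values the dot product comes out at about 1.0 (so no shear, with the scan running along +normal), and the slice-to-slice distance at about 4.5 mm - which would also be the value to use for slice_spacing in Matthew's snippet above.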
From matthew.brett at gmail.com  Wed Sep 14 12:24:46 2016
From: matthew.brett at gmail.com (Matthew Brett)
Date: Wed, 14 Sep 2016 09:24:46 -0700
Subject: [Neuroimaging] Contributing
In-Reply-To: <362B6DA5-490D-4382-AD07-051885C83C12@yahoo.ca>
References: <362B6DA5-490D-4382-AD07-051885C83C12@yahoo.ca>
Message-ID:

Hi,

On Tue, Sep 13, 2016 at 9:40 AM, Ruy Valle via Neuroimaging wrote:
> Hi Ariel,
>
> Thank you for your response. I am doing fMRI analysis, mostly task-based
> up until now but I will be getting into resting-state analysis sooner or
> later. I will probably also start looking at EEG and TMS, but I am not
> sure how relevant that is for Nipy. I learned to use AFNI at their
> bootcamp and FSL through my supervisor at work. I recently (yesterday)
> started working on a project of mine through which I am hoping to clarify
> what the effect of including motion parameters (estimates of head motion
> calculated during volume registration/motion correction) in regression
> models used in fMRI software is. I would say that I have a fair amount of
> experience in Java but have not used it much as of late, am comfortable
> enough with Python, R, and MATLAB to really dive into them as much as
> needed, and have been learning C and Go recently.
>
> A colleague of mine introduced me to Nipy/Nipype around 5 months ago and
> I really liked the idea of it. I was still in college at the time and
> graduated this last semester from McGill University. I have been reading
> a book on modeling techniques and learning about statistics more
> generally, so contributing on that side could be nice. I also enjoy
> learning about algorithms (I implemented a heap sort algorithm in C a few
> weeks ago). To be honest, I find I still lack enough experience (both in
> software and in life in general) to pinpoint my preferences, and am open
> to trying new things, hopefully discovering what most appeals to me,
> increasing my skills, and making useful contributions along the way.
>

I find that the most productive way is to start doing some analysis I'm interested in, and then look to see how to do it in nipy (or dipy, nibabel or nipype etc). That gets me reading the code. I start to use the code, and that points out bugs, or documentation that could be improved, or features I'd like. That makes me look at the code and maybe ask for help if it's hard to follow. Then I make some edits and put up a work-in-progress pull request.
Is that a practical path for you?

Cheers,

Matthew

From ruyvalle at yahoo.ca  Wed Sep 14 13:11:23 2016
From: ruyvalle at yahoo.ca (Ruy Valle)
Date: Wed, 14 Sep 2016 13:11:23 -0400
Subject: [Neuroimaging] Contributing
In-Reply-To: References: <362B6DA5-490D-4382-AD07-051885C83C12@yahoo.ca>
Message-ID: <62BC9691-596D-4CB0-A958-69DC477E4F27@yahoo.ca>

Hi Matthew,

Thank you for your response. I guess if there is enough of an incentive for me to translate the analyses I am working on to Nipy, then yes that seems like a natural path to follow. I am uncertain if this will be the case however. Regardless, I will contribute what I can.

Cheers,

Ruy

> On Sep 14, 2016, at 12:24 PM, Matthew Brett wrote:
> [...]

From arokem at gmail.com  Wed Sep 14 14:22:29 2016
From: arokem at gmail.com (Ariel Rokem)
Date: Wed, 14 Sep 2016 11:22:29 -0700
Subject: [Neuroimaging] Contributing
In-Reply-To: <62BC9691-596D-4CB0-A958-69DC477E4F27@yahoo.ca>
References: <362B6DA5-490D-4382-AD07-051885C83C12@yahoo.ca> <62BC9691-596D-4CB0-A958-69DC477E4F27@yahoo.ca>
Message-ID:

Hi Ruy,

On Wed, Sep 14, 2016 at 10:11 AM, Ruy Valle via Neuroimaging <neuroimaging at python.org> wrote:

> Hi Matthew,
>
> Thank you for your response.
> I guess if there is enough of an incentive for me to translate the
> analyses I am working on to Nipy, then yes that seems like a natural path
> to follow. I am uncertain if this will be the case however. Regardless, I
> will contribute what I can.

One way to go about this is to start by making your analysis code publicly available on Github, and then see whether it fits in with some of the other work that people are doing. It would already be useful to people, even if it isn't in one of the already existing packages.

Cheers,

Ariel

> Cheers,
>
> Ruy
>
> On Sep 14, 2016, at 12:24 PM, Matthew Brett wrote:
> [...]

From pellman.john at gmail.com  Thu Sep 15 12:44:49 2016
From: pellman.john at gmail.com (John Pellman)
Date: Thu, 15 Sep 2016 12:44:49 -0400
Subject: [Neuroimaging] [PySurfer] Brain's save_image method produces images with only background color
Message-ID:

I've had at this a little bit more and my current suspicion is that this behavior is the result of an interaction between our remote desktop service (x2go) and Mayavi.
I created an identical Miniconda environment for Pysurfer on both our server and my laptop and ran the following code to test this theory:

> # The Basic Visualization demo from the Pysurfer gallery.
> from surfer import Brain
>
> print(__doc__)
>
> """
> Define the three important variables.
> Note that these are the first three positional arguments
> in tksurfer (and pysurfer for that matter).
> """
> subject_id = 'fsaverage'
> hemi = 'lh'
> surface = 'inflated'
>
> """
> Call the Brain object constructor with these
> parameters to initialize the visualization session.
> """
> brain = Brain(subject_id, hemi, surface)
>
> # Save an image out to /tmp
> print 'Saving out an image to /tmp using Brain.save_image.'
> brain.save_image('/tmp/brain.png')
>
> # Looking at just the screenshot method of pysurfer's Brain object.
> # This is called by save_image and is fed into scipy.misc.imsave.
> # If the boolean expression evaluated here is true, then only a black
> # background is being fed into scipy's misc.imsave method for evaluation.
> x = brain.screenshot()
> print 'Test pysurfer\'s Brain.screenshot.'
> if sum(x.flatten()==0)!=len(x.flatten()):
>     print 'Pass'
> else:
>     print 'Fail'
>
> # Looking at the Mayavi mlab.screenshot method.
> # This is called by screenshot_single, which is called by Brain's screenshot.
> # If the boolean expression evaluated here is true, then only a black
> # background is being fed into Brain.screenshot()
> from mayavi import mlab
> x = mlab.screenshot(brain.brain_matrix[0,0]._f, 'rgb', False)
> print 'Test mayavi\'s mlab.screenshot'
> if sum(x.flatten()==0)!=len(x.flatten()):
>     print 'Pass'
> else:
>     print 'Fail'

On the server through an x2go session both Brain.screenshot and mlab.screenshot failed to produce a non-blank image, while on my laptop's local environment both of these methods did produce the desired output (i.e., there were some nonzero outputs).

Since this doesn't seem to be an error with pysurfer in particular, I'm going to proceed to see if anyone using Mayavi with x2go or nx has encountered similar issues by querying their forums / issue pages. I just wanted to leave this here if someone else encounters the same issue in the future.

--John

On Tue, Sep 13, 2016 at 1:24 PM, John Pellman wrote:

> It looks like it might be related to the following issue described at
> StackOverflow:
>
> http://stackoverflow.com/questions/16543634/mayavi-mlab-savefig-gives-an-empty-image
>
> On Mon, Sep 12, 2016 at 2:00 PM, John Pellman wrote:
>
>> Hi all,
>>
>> I'm encountering a peculiar Pysurfer error on our server and I was
>> wondering if anyone has encountered anything similar or might have some
>> insight into how I can tackle it. Basically, when our researchers try to
>> save a png image using Brain.save_image() or Brain.save_imageset() the
>> images produced only contain the background color (as you may have
>> inferred from the subject line). I've traced this back to a Scipy method
>> (scipy.misc.imsave), but it looks like this would only output an empty
>> png if the image passed in were completely zeroed out. Our setup uses
>> the following versions of pysurfer/its dependencies:
>>
>> Numpy: 1.10.0.dev0+1fe98ff
>> Scipy: 0.17.0.dev0+f2f6e48
>> Ipython: 3.1.0
>> nibabel: 2.0.0
>> Mayavi: 4.4.2
>> matplotlib: 1.4.3
>> PIL: 1.1.7
>> Pysurfer: 0.5
>>
>> This setup is running within a Miniconda environment using Python
>> 2.7.11.
>> I'm uncertain if this is related, but running the example code here
>> produces the following warning:
>>
>> (ipython:20765): Gdk-WARNING **:
>> /build/buildd/gtk+2.0-2.24.27/gdk/x11/gdkdrawable-x11.c:952 drawable is
>> not a pixmap or window
>>
>> Any insight would be greatly appreciated.
>>
>> Best,
>> John Pellman

From kw401 at cam.ac.uk  Thu Sep 15 16:33:44 2016
From: kw401 at cam.ac.uk (Kirstie Whitaker)
Date: Thu, 15 Sep 2016 16:33:44 -0400
Subject: [Neuroimaging] [PySurfer] Brain's save_image method produces images with only background color
In-Reply-To: References: Message-ID:

Hi John,

I'm travelling at the moment but I've had problems with pysurfer showing beautiful brains on the screen but only saving a black box to file. It happened right after our systems admin updated a few things but I haven't been able to get a clear list from him of what changed except: everything should work.

My point with this email is please do share back what you learn... even if it ends up being not a pysurfer problem. At the moment my workaround is to move everything I do to a different cluster that works!! Not efficient to say the least!

Thank you
Kirstie

Sent from my iPhone, please excuse any typos or excessive brevity

> On 15 Sep 2016, at 12:44, John Pellman wrote:
> [...]
_______________________________________________
Neuroimaging mailing list
Neuroimaging at python.org
https://mail.python.org/mailman/listinfo/neuroimaging

From arokem at gmail.com  Thu Sep 15 20:40:49 2016
From: arokem at gmail.com (Ariel Rokem)
Date: Thu, 15 Sep 2016 17:40:49 -0700
Subject: [Neuroimaging] [PySurfer] Brain's save_image method produces images with only background color
In-Reply-To: References: Message-ID:

On Thu, Sep 15, 2016 at 1:33 PM, Kirstie Whitaker wrote:
> [...]
>
> On 15 Sep 2016, at 12:44, John Pellman wrote:
> [...]
> On the server through an x2go session both Brain.screenshot and
> mlab.screenshot failed to produce a non-blank image, while on my laptop's
> local environment both of these methods did produce the desired output
> (i.e., there were some nonzero outputs).
>
> Since this doesn't seem to be an error with pysurfer in particular, I'm
> going to proceed to see if anyone using Mayavi with x2go or nx has
> encountered similar issues by querying their forums / issue pages. I just
> wanted to leave this here if someone else encounters the same issue in
> the future.

A shot in the dark: Could it be something to do with running headless? Maybe running this under XVFB (e.g. through xvfbwrapper) would help?

Ariel

> --John
> [...]
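For anyone who wants to try Ariel's suggestion, a minimal sketch of running the demo under a virtual display (this assumes Xvfb itself is installed on the server; xvfbwrapper only starts and stops it):

"""
from xvfbwrapper import Xvfb

vdisplay = Xvfb(width=1280, height=1024)
vdisplay.start()
try:
    from surfer import Brain
    brain = Brain('fsaverage', 'lh', 'inflated')
    brain.save_image('/tmp/brain.png')
finally:
    vdisplay.stop()
"""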
I'm uncertain if this is related, but running the example code >>> here produces the >>> following warning: >>> >>> *(ipython:20765): Gdk-WARNING **: >>> /build/buildd/gtk+2.0-2.24.27/gdk/x11/gdkdrawable-x11.c:952 drawable is not >>> a pixmap or window* >>> >>> Any insight would be greatly appreciated. >>> >>> Best, >>> John Pellman >>> >> >> > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jbpoline at gmail.com Fri Sep 16 05:43:00 2016 From: jbpoline at gmail.com (JB Poline) Date: Fri, 16 Sep 2016 11:43:00 +0200 Subject: [Neuroimaging] [PySurfer] Brain's save_image method produces images with only background color In-Reply-To: References: Message-ID: That's a cool idea and package - thanks for pointing to this ! On 16 September 2016 at 02:40, Ariel Rokem wrote: > > On Thu, Sep 15, 2016 at 1:33 PM, Kirstie Whitaker wrote: > >> Hi John, >> >> I'm travelling at the moment but I've had problems with pysurfer showing >> beautiful brains on the screen but only saving a black box to file. It >> happened right after our systems admin updated a few things but I haven't >> been able to get a clear list from him of what changed except: everything >> should work. >> >> My point with this email is please do share back what you learn.....even >> if it ends up being not a pysurfer problem. At the moment my workaround is >> to move everything I do to a different cluster that works!! Non efficient >> to say the least! >> >> Thank you >> Kirstie >> >> Sent from my iPhone, please excuse any typos or excessive brevity >> >> On 15 Sep 2016, at 12:44, John Pellman wrote: >> >> I've had at this a little bit more and my current suspicion is that this >> behavior is the result of an interaction between our remote desktop service >> (x2go) and Mayavi. >> >> I created a an identical Miniconda environment for Pysurfer on both our >> server and my laptop and ran the following code to test this theory: >> >> # The Basic Visualization demo from the Pysurfer gallery. >>> from surfer import Brain >>> >>> print(__doc__) >>> >>> """ >>> Define the three important variables. >>> Note that these are the first three positional arguments >>> in tksurfer (and pysurfer for that matter). >>> """ >>> subject_id = 'fsaverage' >>> hemi = 'lh' >>> surface = 'inflated' >>> >>> """ >>> Call the Brain object constructor with these >>> parameters to initialize the visualization session. >>> """ >>> brain = Brain(subject_id, hemi, surface) >>> >>> # Save an image out to /tmp >>> print 'Saving out an image to /tmp using Brain.save_image.' >>> brain.save_image('/tmp/brain.png') >>> >>> # Looking at just the screenshot method of pysurfer's Brain object. >>> # This is called by save_image and is fed into scipy.misc.imsave. >>> # If the boolean expression evaluated here is true, then only a black >>> # background is being fed into scipy's misc.imsave method for evaluation. >>> x = brain.screenshot() >>> print 'Test pysurfer\'s Brain.screenshot.' >>> if sum(x.flatten()==0)!=len(x.flatten()): >>> print 'Pass' >>> else: >>> print 'Fail' >>> >>> # Looking at the Mayavi mlab.screenshot method. >>> # This is called by screenshot_single, which is called by Brain's >>> screenshot. 
From jdgispert at fpmaragall.org  Fri Sep 16 08:39:41 2016
From: jdgispert at fpmaragall.org (Juan Domingo Gispert López)
Date: Fri, 16 Sep 2016 14:39:41 +0200
Subject: [Neuroimaging] Postdoctoral Position - Amyloid Imaging - Barcelonabeta Brain Research Center
Message-ID:

The Pasqual Maragall Foundation invites applications for a full-time postdoctoral position within the context of the AMYPAD project, as part of a clinical research program of the Barcelonaβeta Brain Research Centre (Barcelona, Spain).

AMYPAD is a European project to establish the true value of amyloid PET in a diagnostic and prognostic setting (http://www.amypad.eu/). This 5-year project is a collaboration between industry (GEHC, Piramal, Janssen, Ixico) and academic partners funded by the IMI-2 program (total budget 27 million euro). Throughout Europe we will recruit 900 memory clinic patients and 3100 preclinical or prodromal AD subjects from natural history cohorts. Up to 50% of subjects will undergo dynamic scanning and have repeat imaging, for a total of 6000 amyloid PET scans. Main study goals include 1) diagnostic impact including patient-reported outcomes and healthcare resource utilization, 2) prognostic value and enrichment of treatment trials, and 3) quantitative assessment of treatment effects. In close collaboration with EPAD (www.ep-ad.org), the cohorts will be followed with careful longitudinal monitoring and MRI to determine (surrogate) outcomes of cognitive decline and neurodegeneration.

The consortium brings together a world-class team of highly synergistic partners to form a pan-European network including the most active PET sites. This will ensure effective access to patients and also maximise exposure to technical knowledge and disease modelling. In addition, AMYPAD will develop expertise in image data collection, including β-amyloid PET and MRI data from the EPAD project.

The Barcelonaβeta Brain Research Center (Barcelonaβeta) is a new research infrastructure, constituted by the Pasqual Maragall Foundation and with the participation of the Pompeu Fabra University, dedicated to research on Alzheimer's disease. The new building will be completed by summer 2014, and will contain excellent technical facilities including a research-dedicated 3T MR scanner, dedicated to clinical research on neurodegenerative diseases.

In collaboration with other AMYPAD partners, the responsibilities for this position include:
- Develop technical procedures to enhance amyloid PET quantification;
- Engage in the management of the project;
- Modelling of amyloid PET data in disease models;
- Participate in analysis and publication of results;
- Participate in regular meetings with our partners throughout Europe.

Required Qualifications:
- PhD in neuroscience or image analysis;
- Excellent communication and writing skills;
- Experience with PET acquisition and analysis;
- Affiliation with (or experience with) project management;
- Ability to think independently and work collaboratively.

Benefits:
- Starting date: Jan 2017.
- The postdoctoral position is scheduled initially for two years (open to renewal).
- Salary will depend on experience.

Additional Information:
To apply, please submit a single PDF file containing the following:
1) Cover letter describing research interests and relevant background;
2) CV with list of publications;
3) The names of three individuals who could provide reference letters.

All files or inquiries should be submitted electronically to: cminguillon at fpmaragall.org.
Deadline for submitting applications: November 4th 2016.

--
Juan D Gispert
Head Neuroimaging Research
Barcelonaβeta Brain Research Center - Fundació Pasqual Maragall
T. (+34) 93 326 31 90
C/ Wellington, 30 08005 Barcelona
www.fpmaragall.org

From pellman.john at gmail.com  Fri Sep 16 12:00:15 2016
From: pellman.john at gmail.com (John Pellman)
Date: Fri, 16 Sep 2016 12:00:15 -0400
Subject: [Neuroimaging] [PySurfer] Brain's save_image method produces images with only background color
In-Reply-To: References: Message-ID:

Pysurfer isn't running headless - it's using x2go, which is based upon the nx protocol, a technology that improves the ability of X11 to function over a network connection. Therefore, I don't think that Xvfb is related. xvfbwrapper might be usable as a workaround, however.

As I mentioned in my last post, I traced the offending method back to mayavi. I've opened an issue related to this here.

Kirstie - if you'd be willing to refer your sysadmin to this thread I think that would be great, as I would be interested in hearing what theories or potential fixes he/she might have for this issue as well.

--John

On Fri, Sep 16, 2016 at 5:43 AM, JB Poline wrote:
> That's a cool idea and package - thanks for pointing to this!
>
> [...]
From gael.varoquaux at normalesup.org  Fri Sep 16 12:29:40 2016
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Fri, 16 Sep 2016 18:29:40 +0200
Subject: [Neuroimaging] [PySurfer] Brain's save_image method produces images with only background color
In-Reply-To: References: Message-ID: <20160916162940.GA248315@phare.normalesup.org>

Nx has always been a problem with Mayavi (or actually VTK, which is the underlying technology). Basically, it interferes with the OpenGL contexts, and in some cases the buffer cannot be captured well. Hence the black image.

IMHO, the bug is in NX or the mesa driver, or both.

Gaël

On Fri, Sep 16, 2016 at 12:00:15PM -0400, John Pellman wrote:
> [...]
--
Gael Varoquaux
Researcher, INRIA Parietal
NeuroSpin/CEA Saclay, Bat 145, 91191 Gif-sur-Yvette France
Phone: ++ 33-1-69-08-79-68
http://gael-varoquaux.info http://twitter.com/GaelVaroquaux

From docpatient at gmail.com  Fri Sep 16 12:44:50 2016
From: docpatient at gmail.com (Francesco Sammartino)
Date: Fri, 16 Sep 2016 12:44:50 -0400
Subject: [Neuroimaging] NIFTI to DICOM
Message-ID:

Hi

Can you provide me with some theoretical references on distortion correction in diffusion imaging preprocessing?

I need to convert one DTI scan back to DICOM after motion correction/distortion correction using flirt etc.

Is it theoretically OK if I rotate the gradients appropriately after applying the correction?

Thanks

Francesco Sammartino

From zeydabadi at gmail.com  Fri Sep 16 12:53:19 2016
From: zeydabadi at gmail.com (Mahmoud)
Date: Fri, 16 Sep 2016 12:53:19 -0400
Subject: [Neuroimaging] NIFTI to DICOM
In-Reply-To: References: Message-ID:

I'm a newbie, but as I understand it, it is necessary to rotate the gradient vectors after motion correction. However, as I heard from Jesper Andersson, using the rotated vectors in the new FSL/Eddy (eddy_openmp) doesn't make a big difference (something around 0.1%).

I hope the more experienced users comment on this.

Mahmoud

On Fri, Sep 16, 2016 at 12:44 PM, Francesco Sammartino wrote:
> [...]
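A sketch of the rotation Mahmoud mentions (the usual recipe, not FSL's own implementation): take the rotation component of each volume's registration matrix - for example by polar decomposition, assuming the transform contains no reflection - and apply only that to the gradient vector:

"""
import numpy as np

def rotate_bvec(bvec, xfm):
    # xfm: the 4x4 (or 3x3) registration matrix for one volume.
    # Polar decomposition: if M = R S with R orthogonal and S symmetric,
    # then R = U Vt, where M = U diag(s) Vt is the SVD of M.
    M = np.asarray(xfm)[:3, :3]
    U, s, Vt = np.linalg.svd(M)
    R = U.dot(Vt)  # pure rotation when det(R) == +1
    return R.dot(np.asarray(bvec))
"""

Sign and axis conventions for b-vectors differ between tools, so the result should be sanity-checked against the data (e.g. by inspecting the tensor fits).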
From francesco.sammartino.nch at gmail.com  Fri Sep 16 12:58:22 2016
From: francesco.sammartino.nch at gmail.com (Francesco Sammartino)
Date: Fri, 16 Sep 2016 12:58:22 -0400
Subject: [Neuroimaging] NIFTI to DICOM
In-Reply-To: References: Message-ID:

Thank you a lot Mahmoud.

In my case I would need to apply a 9 dof transformation to align the DWI to the T1, so I assume I will need to rotate the vectors after. Did you ever hear of changing the original DICOM header files with the new gradients? Any possible problem with that?

Thanks

Francesco Sammartino MD

On Fri, Sep 16, 2016 at 12:53 PM, Mahmoud wrote:
> [...]

From zeydabadi at gmail.com  Fri Sep 16 13:34:34 2016
From: zeydabadi at gmail.com (Mahmoud)
Date: Fri, 16 Sep 2016 13:34:34 -0400
Subject: [Neuroimaging] NIFTI to DICOM
In-Reply-To: References: Message-ID:

Unfortunately, no. This is the first time I'm hearing someone wants to convert from NIfTI to DICOM.

On Fri, Sep 16, 2016 at 12:58 PM, Francesco Sammartino <francesco.sammartino.nch at gmail.com> wrote:
> [...]
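On writing the new gradients back into the DICOM headers: mechanically this is easy with pydicom (imported as dicom in the versions current at the time), but note that (0018, 9089) DiffusionGradientOrientation is only where standard-conformant files keep the vector - many vendors store it in private fields instead, so check the data first. A rough sketch, with a hypothetical filename and a new_bvec computed as in the earlier sketch:

"""
import dicom  # the pydicom package

ds = dicom.read_file('slice001.dcm')  # hypothetical filename
if (0x0018, 0x9089) in ds:
    # Standard MR Diffusion macro tag; vendor data may live elsewhere
    ds[0x0018, 0x9089].value = [float(v) for v in new_bvec]
ds.save_as('slice001_rotated.dcm')
"""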
>>>
>>> _______________________________________________
>>> Neuroimaging mailing list
>>> Neuroimaging at python.org
>>> https://mail.python.org/mailman/listinfo/neuroimaging
>>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rlaplant at nmr.mgh.harvard.edu  Fri Sep 16 14:36:08 2016
From: rlaplant at nmr.mgh.harvard.edu (Roan LaPlante)
Date: Fri, 16 Sep 2016 14:36:08 -0400
Subject: [Neuroimaging] [PySurfer] Brain's save_image method produces images with only background color
In-Reply-To: <20160916162940.GA248315@phare.normalesup.org>
References: <20160916162940.GA248315@phare.normalesup.org>
Message-ID: 

If that's the problem, xvfb should still be a viable workaround in the nx context, right?

On Sep 16, 2016 12:30 PM, "Gael Varoquaux" wrote:
> Nx has always been a problem with Mayavi (or actually VTK, which is the
> underlying technology). Basically, it interferes with the openGL contexts,
> and in some cases the buffer cannot be captured well. Hence the black
> image.
>
> IMHO, the bug is in NX or the mesa driver, or both.
>
> Gaël
>
> On Fri, Sep 16, 2016 at 12:00:15PM -0400, John Pellman wrote:
> > Pysurfer isn't running headless- it's using x2go, which is based upon the nx
> > protocol, a technology that improves the ability of X11 to function over a
> > network connection. Therefore, I don't think that Xvfb is related. xvfbwrapper
> > might be usable as a workaround, however.
>
> > As I mentioned in my last post, I traced the offending method back to mayavi.
> > I've opened an issue related to this here.
>
> > Kirstie- if you'd be willing to refer your sysadmin to this thread I think that
> > would be great, as I would be interested in hearing what theories or potential
> > fixes he/she might have for this issue as well.
>
> > --John
>
> > On Fri, Sep 16, 2016 at 5:43 AM, JB Poline wrote:
>
> > That's a cool idea and package - thanks for pointing to this !
>
> > On 16 September 2016 at 02:40, Ariel Rokem wrote:
>
> > On Thu, Sep 15, 2016 at 1:33 PM, Kirstie Whitaker < kw401 at cam.ac.uk>
> > wrote:
>
> > Hi John,
>
> > I'm travelling at the moment but I've had problems with pysurfer
> > showing beautiful brains on the screen but only saving a black box
> > to file. It happened right after our systems admin updated a few
> > things but I haven't been able to get a clear list from him of what
> > changed except: everything should work.
>
> > My point with this email is please do share back what you
> > learn.....even if it ends up being not a pysurfer problem. At the
> > moment my workaround is to move everything I do to a different
> > cluster that works!! Non efficient to say the least!
>
> > Thank you
> > Kirstie
>
> > Sent from my iPhone, please excuse any typos or excessive brevity
>
> > On 15 Sep 2016, at 12:44, John Pellman < pellman.john at gmail.com>
> > wrote:
>
> > I've had at this a little bit more and my current suspicion is
> > that this behavior is the result of an interaction between our
> > remote desktop service (x2go) and Mayavi.
> > > I created a an identical Miniconda environment for > Pysurfer on > > both our server and my laptop and ran the following code > to > > test this theory: > > > > # The Basic Visualization demo from the Pysurfer > gallery. > > from surfer import Brain > > > print(__doc__) > > > """ > > Define the three important variables. > > Note that these are the first three positional > arguments > > in tksurfer (and pysurfer for that matter). > > """ > > subject_id = 'fsaverage' > > hemi = 'lh' > > surface = 'inflated' > > > """ > > Call the Brain object constructor with these > > parameters to initialize the visualization session. > > """ > > brain = Brain(subject_id, hemi, surface) > > > # Save an image out to /tmp > > print 'Saving out an image to /tmp using > Brain.save_image.' > > brain.save_image('/tmp/brain.png') > > > # Looking at just the screenshot method of > pysurfer's Brain > > object. > > # This is called by save_image and is fed into > > scipy.misc.imsave. > > # If the boolean expression evaluated here is true, > then > > only a black > > # background is being fed into scipy's misc.imsave > method > > for evaluation. > > x = brain.screenshot() > > print 'Test pysurfer\'s Brain.screenshot.' > > if sum(x.flatten()==0)!=len(x.flatten()): > > print 'Pass' > > else: > > print 'Fail' > > > # Looking at the Mayavi mlab.screenshot method. > > # This is called by screenshot_single, which is > called by > > Brain's screenshot. > > # If the boolean expression evaluated here is true, > then > > only a black > > # background is being fed into Brain.screenshot() > > from mayavi import mlab > > x = mlab.screenshot(brain.brain_matrix[0,0]._f, > 'rgb', > > False) > > print 'Test mayavi\'s mlab.screenshot' > > if sum(x.flatten()==0)!=len(x.flatten()): > > print 'Pass' > > else: > > print 'Fail' > > > > On the server through an x2go session both > Brain.screenshot and > > mlab.screenshot failed to produce a non-blank image, > while on > > my laptop's local environment both of these methods did > produce > > the desired output (i.e., there were some nonzero > outputs). > > > Since this doesn't seem to be an error with pysurfer in > > particular, I'm going to proceed to see if anyone using > Mayavi > > with x2go or nx has encountered similar issues by > querying > > their forums / issue pages. I just wanted to leave this > here > > if someone else encounters the same issue in the future. > > > > A shot in the dark: Could it be something to do with running > headless? > > Maybe running this under XVFB (e.g. through xvfbwrapper) would > help? > > > Ariel > > > > > --John > > > On Tue, Sep 13, 2016 at 1:24 PM, John Pellman < > > pellman.john at gmail.com> wrote: > > > It looks like it might be related to the following > issue > > described at StackOverflow: > > > http://stackoverflow.com/questions/16543634/ > > mayavi-mlab-savefig-gives-an-empty-image > > > On Mon, Sep 12, 2016 at 2:00 PM, John Pellman < > > pellman.john at gmail.com> wrote: > > > Hi all, > > > I'm encountering a peculiar Pysurfer error on our > > server and I was wondering if anyone has > encountered > > anything similar or might have some insight into > how I > > can tackle it. Basically, when our researchers > try to > > save a png image using Brain.save_image() or > > Brain.save_imageset() the images produced only > contain > > the background color (as you may have inferred > from the > > subject line). 
I've traced this back to the Scipy method
> (scipy.misc.imsave), but it looks like this would only
> output an empty png if the image passed in were
> completely zeroed out. Our setup uses the following
> versions of pysurfer/its dependencies:
>
> Numpy: 1.10.0.dev0+1fe98ff
> Scipy: 0.17.0.dev0+f2f6e48
> Ipython: 3.1.0
> nibabel: 2.0.0
> Mayavi: 4.4.2
> matplotlib: 1.4.3
> PIL: 1.1.7
> Pysurfer: 0.5
>
> This setup is running within a Miniconda environment
> using Python 2.7.11. I'm uncertain if this is related,
> but running the example code here produces the
> following warning:
>
> (ipython:20765): Gdk-WARNING **: /build/buildd/
> gtk+2.0-2.24.27/gdk/x11/gdkdrawable-x11.c:952 drawable
> is not a pixmap or window
>
> Any insight would be greatly appreciated.
>
> Best,
> John Pellman
>
> _______________________________________________
> Neuroimaging mailing list
> Neuroimaging at python.org
> https://mail.python.org/mailman/listinfo/neuroimaging
>
> --
> Gael Varoquaux
> Researcher, INRIA Parietal
> NeuroSpin/CEA Saclay , Bat 145, 91191 Gif-sur-Yvette France
> Phone: ++ 33-1-69-08-79-68
> http://gael-varoquaux.info            http://twitter.com/GaelVaroquaux
>
> The information in this e-mail is intended only for the person to whom it
> is addressed. If you believe this e-mail was sent to you in error and the
> e-mail contains patient information, please contact the Partners Compliance
> HelpLine at http://www.partners.org/complianceline . If the e-mail was sent
> to you in error but does not contain patient information, please contact
> the sender and properly dispose of the e-mail.
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pellman.john at gmail.com  Fri Sep 16 15:55:08 2016
From: pellman.john at gmail.com (John Pellman)
Date: Fri, 16 Sep 2016 15:55:08 -0400
Subject: [Neuroimaging] [PySurfer] Brain's save_image method produces images with only background color
In-Reply-To: References: <20160916162940.GA248315@phare.normalesup.org>
Message-ID: 

Running the basic visualization code for Pysurfer followed by
Brain.save_image() gave the following results when run with xvfbwrapper:

On the server under x2go/nx : An all-black image.
On the server under a regular ssh session : An all-black image.
On my local computer: An all-black image.

So it doesn't look like that's going to work. :/

wxPython is also installed- is it possible that that is affecting the
image rendering too?
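For concreteness, a minimal sketch of the xvfbwrapper attempt described above, with one detail worth double-checking called out in the comments: surfer/mayavi should only be imported after the virtual display is running, since the GUI toolkit binds to whatever DISPLAY is set at import time. The resolution and output path are illustrative.

from xvfbwrapper import Xvfb

# Start the virtual framebuffer *before* any GUI/VTK import, so the
# OpenGL context is created against the virtual DISPLAY rather than
# the x2go/nx one.
vdisplay = Xvfb(width=1280, height=1024)
vdisplay.start()
try:
    from surfer import Brain
    brain = Brain('fsaverage', 'lh', 'inflated')
    brain.save_image('/tmp/brain.png')
finally:
    vdisplay.stop()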
On Fri, Sep 16, 2016 at 2:36 PM, Roan LaPlante wrote: > If that's the problem, xvfb should still be a viable workaround in the nx > context, right? > > On Sep 16, 2016 12:30 PM, "Gael Varoquaux" > wrote: > >> Nx has always been a problem with Mayavi (or actually VTK, which is the >> underlying technology). Basically, it interfers with the openGL contexts, >> and in some cases the buffer cannot be captured well. Hence the black >> image. >> >> IMHO, the bug is in NX or the mesa driver, or both. >> >> Ga?l >> >> On Fri, Sep 16, 2016 at 12:00:15PM -0400, John Pellman wrote: >> > Pysurfer isn't running headless- it's using x2go, which is based upon >> the nx >> > protocol, a technology that improves the ability of X11 to function >> over a >> > network connection. Therefore, I don't think that Xvfb is related. >> xvfbwrapper >> > might be usable as a workaround, however. >> >> > As I mentioned in my last post, I traced the offending method back to >> mayavi. >> > I've opened an issue related to this here. >> >> > Kirstie- if you'd be willing to refer your sysadmin to this thread I >> think that >> > would be great, as I would be interested in hearing what theories or >> potential >> > fixes he/she might have for this issue as well. >> >> > --John >> >> > On Fri, Sep 16, 2016 at 5:43 AM, JB Poline wrote: >> >> > That's a cool idea and package - thanks for pointing to this ! >> >> > On 16 September 2016 at 02:40, Ariel Rokem >> wrote: >> >> >> > On Thu, Sep 15, 2016 at 1:33 PM, Kirstie Whitaker < >> kw401 at cam.ac.uk> >> > wrote: >> >> > Hi John, >> >> > I'm travelling at the moment but I've had problems with >> pysurfer >> > showing beautiful brains on the screen but only saving a >> black box >> > to file. It happened right after our systems admin updated >> a few >> > things but I haven't been able to get a clear list from him >> of what >> > changed except: everything should work. >> >> > My point with this email is please do share back what you >> > learn.....even if it ends up being not a pysurfer problem. >> At the >> > moment my workaround is to move everything I do to a >> different >> > cluster that works!! Non efficient to say the least! >> >> > Thank you >> > Kirstie >> >> > Sent from my iPhone, please excuse any typos or excessive >> brevity >> >> > On 15 Sep 2016, at 12:44, John Pellman < >> pellman.john at gmail.com> >> > wrote: >> >> >> > I've had at this a little bit more and my current >> suspicion is >> > that this behavior is the result of an interaction >> between our >> > remote desktop service (x2go) and Mayavi. >> >> > I created a an identical Miniconda environment for >> Pysurfer on >> > both our server and my laptop and ran the following >> code to >> > test this theory: >> >> >> > # The Basic Visualization demo from the Pysurfer >> gallery. >> > from surfer import Brain >> >> > print(__doc__) >> >> > """ >> > Define the three important variables. >> > Note that these are the first three positional >> arguments >> > in tksurfer (and pysurfer for that matter). >> > """ >> > subject_id = 'fsaverage' >> > hemi = 'lh' >> > surface = 'inflated' >> >> > """ >> > Call the Brain object constructor with these >> > parameters to initialize the visualization session. >> > """ >> > brain = Brain(subject_id, hemi, surface) >> >> > # Save an image out to /tmp >> > print 'Saving out an image to /tmp using >> Brain.save_image.' >> > brain.save_image('/tmp/brain.png') >> >> > # Looking at just the screenshot method of >> pysurfer's Brain >> > object. 
>> > # This is called by save_image and is fed into >> > scipy.misc.imsave. >> > # If the boolean expression evaluated here is true, >> then >> > only a black >> > # background is being fed into scipy's misc.imsave >> method >> > for evaluation. >> > x = brain.screenshot() >> > print 'Test pysurfer\'s Brain.screenshot.' >> > if sum(x.flatten()==0)!=len(x.flatten()): >> > print 'Pass' >> > else: >> > print 'Fail' >> >> > # Looking at the Mayavi mlab.screenshot method. >> > # This is called by screenshot_single, which is >> called by >> > Brain's screenshot. >> > # If the boolean expression evaluated here is true, >> then >> > only a black >> > # background is being fed into Brain.screenshot() >> > from mayavi import mlab >> > x = mlab.screenshot(brain.brain_matrix[0,0]._f, >> 'rgb', >> > False) >> > print 'Test mayavi\'s mlab.screenshot' >> > if sum(x.flatten()==0)!=len(x.flatten()): >> > print 'Pass' >> > else: >> > print 'Fail' >> >> >> > On the server through an x2go session both >> Brain.screenshot and >> > mlab.screenshot failed to produce a non-blank image, >> while on >> > my laptop's local environment both of these methods did >> produce >> > the desired output (i.e., there were some nonzero >> outputs). >> >> > Since this doesn't seem to be an error with pysurfer in >> > particular, I'm going to proceed to see if anyone using >> Mayavi >> > with x2go or nx has encountered similar issues by >> querying >> > their forums / issue pages. I just wanted to leave >> this here >> > if someone else encounters the same issue in the future. >> >> >> > A shot in the dark: Could it be something to do with running >> headless? >> > Maybe running this under XVFB (e.g. through xvfbwrapper) would >> help? >> >> > Ariel >> > >> >> > --John >> >> > On Tue, Sep 13, 2016 at 1:24 PM, John Pellman < >> > pellman.john at gmail.com> wrote: >> >> > It looks like it might be related to the following >> issue >> > described at StackOverflow: >> >> > http://stackoverflow.com/questions/16543634/ >> > mayavi-mlab-savefig-gives-an-empty-image >> >> > On Mon, Sep 12, 2016 at 2:00 PM, John Pellman < >> > pellman.john at gmail.com> wrote: >> >> > Hi all, >> >> > I'm encountering a peculiar Pysurfer error on >> our >> > server and I was wondering if anyone has >> encountered >> > anything similar or might have some insight >> into how I >> > can tackle it. Basically, when our researchers >> try to >> > save a png image using Brain.save_image() or >> > Brain.save_imageset() the images produced only >> contain >> > the background color (as you may have inferred >> from the >> > subject line). I've traced this back to Scipy >> method >> > (scipy.misc.imsave), but it looks like this >> would only >> > output an empty png if the image passed in were >> > completely zeroed out. Our setup uses the >> following >> > versions of pysurfer/its dependencies: >> >> > Numpy: 1.10.0.dev0+1fe98ff >> > Scipy: 0.17.0.dev0+f2f6e48 >> > Ipython: 3.1.0 >> > nibabel: 2.0.0 >> > Mayavi: 4.4.2 >> > matplotlib: 1.4.3 >> > PIL: 1.1.7 >> > Pysurfer: 0.5 >> >> > This setup is running within a Miniconda >> environment >> > using Python 2.7.11. I'm uncertain if this is >> related, >> > but running the example code here produces the >> > following warning: >> >> > (ipython:20765): Gdk-WARNING **: /build/buildd/ >> > gtk+2.0-2.24.27/gdk/x11/gdkdrawable-x11.c:952 >> drawable >> > is not a pixmap or window >> >> > Any insight would be greatly appreciated. 
>> >> > Best, >> > John Pellman >> >> >> >> >> >> > _______________________________________________ >> > Neuroimaging mailing list >> > Neuroimaging at python.org >> > https://mail.python.org/mailman/listinfo/neuroimaging >> >> >> > _______________________________________________ >> > Neuroimaging mailing list >> > Neuroimaging at python.org >> > https://mail.python.org/mailman/listinfo/neuroimaging >> >> >> >> >> > _______________________________________________ >> > Neuroimaging mailing list >> > Neuroimaging at python.org >> > https://mail.python.org/mailman/listinfo/neuroimaging >> >> >> >> >> > _______________________________________________ >> > Neuroimaging mailing list >> > Neuroimaging at python.org >> > https://mail.python.org/mailman/listinfo/neuroimaging >> >> >> >> >> > _______________________________________________ >> > Neuroimaging mailing list >> > Neuroimaging at python.org >> > https://mail.python.org/mailman/listinfo/neuroimaging >> >> >> -- >> Gael Varoquaux >> Researcher, INRIA Parietal >> NeuroSpin/CEA Saclay , Bat 145, 91191 Gif-sur-Yvette France >> Phone: ++ 33-1-69-08-79-68 >> http://gael-varoquaux.info http://twitter.com/GaelVaroqua >> ux >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> >> >> >> The information in this e-mail is intended only for the person to whom it >> is >> addressed. If you believe this e-mail was sent to you in error and the >> e-mail >> contains patient information, please contact the Partners Compliance >> HelpLine at >> http://www.partners.org/complianceline . If the e-mail was sent to you >> in error >> but does not contain patient information, please contact the sender and >> properly >> dispose of the e-mail. >> >> > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rlaplant at nmr.mgh.harvard.edu Fri Sep 16 16:14:32 2016 From: rlaplant at nmr.mgh.harvard.edu (Roan LaPlante) Date: Fri, 16 Sep 2016 16:14:32 -0400 Subject: [Neuroimaging] [PySurfer] Brain's save_image method produces images with only background color In-Reply-To: References: <20160916162940.GA248315@phare.normalesup.org> Message-ID: On Sep 16, 2016 4:13 PM, "Roan LaPlante" wrote: > That's weird, I use xvfb with pysurfer to plot in a headless context > regularly. > > Have you confirmed that there is not a missing or outdated library > version? What python distribution are you running? > > On Sep 16, 2016 3:55 PM, "John Pellman" wrote: > >> Running the basic visualization code for Pysurfer followed by >> Brain.save_image() gave the following results with when run with >> xvfbwrapper: >> >> On the server under x2go/nx : An all-black image. >> On the server under a regular ssh session : An all-black image. >> On my local computer: An all-black image. >> >> So it doesn't look like that's going to work. :/ >> >> wxPython is also installed- is it possible that that is affecting the >> image rendering too? >> >> On Fri, Sep 16, 2016 at 2:36 PM, Roan LaPlante < >> rlaplant at nmr.mgh.harvard.edu> wrote: >> >>> If that's the problem, xvfb should still be a viable workaround in the >>> nx context, right? 
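As a way to narrow this down below PySurfer, a minimal sketch that asks Mayavi alone for an offscreen render, using only standard mlab calls; if this also comes back all zeros, the problem sits in VTK/OpenGL rather than in Brain.save_image:

from mayavi import mlab

mlab.options.offscreen = True   # render into an offscreen buffer

mlab.test_plot3d()              # any built-in demo scene will do
shot = mlab.screenshot()        # (height, width, 3) RGB array
print('non-zero pixels: %d' % (shot != 0).sum())
mlab.close(all=True)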
>>> >>> On Sep 16, 2016 12:30 PM, "Gael Varoquaux" < >>> gael.varoquaux at normalesup.org> wrote: >>> >>>> Nx has always been a problem with Mayavi (or actually VTK, which is the >>>> underlying technology). Basically, it interfers with the openGL >>>> contexts, >>>> and in some cases the buffer cannot be captured well. Hence the black >>>> image. >>>> >>>> IMHO, the bug is in NX or the mesa driver, or both. >>>> >>>> Ga?l >>>> >>>> On Fri, Sep 16, 2016 at 12:00:15PM -0400, John Pellman wrote: >>>> > Pysurfer isn't running headless- it's using x2go, which is based upon >>>> the nx >>>> > protocol, a technology that improves the ability of X11 to function >>>> over a >>>> > network connection. Therefore, I don't think that Xvfb is related. >>>> xvfbwrapper >>>> > might be usable as a workaround, however. >>>> >>>> > As I mentioned in my last post, I traced the offending method back to >>>> mayavi. >>>> > I've opened an issue related to this here. >>>> >>>> > Kirstie- if you'd be willing to refer your sysadmin to this thread I >>>> think that >>>> > would be great, as I would be interested in hearing what theories or >>>> potential >>>> > fixes he/she might have for this issue as well. >>>> >>>> > --John >>>> >>>> > On Fri, Sep 16, 2016 at 5:43 AM, JB Poline >>>> wrote: >>>> >>>> > That's a cool idea and package - thanks for pointing to this ! >>>> >>>> > On 16 September 2016 at 02:40, Ariel Rokem >>>> wrote: >>>> >>>> >>>> > On Thu, Sep 15, 2016 at 1:33 PM, Kirstie Whitaker < >>>> kw401 at cam.ac.uk> >>>> > wrote: >>>> >>>> > Hi John, >>>> >>>> > I'm travelling at the moment but I've had problems with >>>> pysurfer >>>> > showing beautiful brains on the screen but only saving a >>>> black box >>>> > to file. It happened right after our systems admin >>>> updated a few >>>> > things but I haven't been able to get a clear list from >>>> him of what >>>> > changed except: everything should work. >>>> >>>> > My point with this email is please do share back what you >>>> > learn.....even if it ends up being not a pysurfer >>>> problem. At the >>>> > moment my workaround is to move everything I do to a >>>> different >>>> > cluster that works!! Non efficient to say the least! >>>> >>>> > Thank you >>>> > Kirstie >>>> >>>> > Sent from my iPhone, please excuse any typos or excessive >>>> brevity >>>> >>>> > On 15 Sep 2016, at 12:44, John Pellman < >>>> pellman.john at gmail.com> >>>> > wrote: >>>> >>>> >>>> > I've had at this a little bit more and my current >>>> suspicion is >>>> > that this behavior is the result of an interaction >>>> between our >>>> > remote desktop service (x2go) and Mayavi. >>>> >>>> > I created a an identical Miniconda environment for >>>> Pysurfer on >>>> > both our server and my laptop and ran the following >>>> code to >>>> > test this theory: >>>> >>>> >>>> > # The Basic Visualization demo from the Pysurfer >>>> gallery. >>>> > from surfer import Brain >>>> >>>> > print(__doc__) >>>> >>>> > """ >>>> > Define the three important variables. >>>> > Note that these are the first three positional >>>> arguments >>>> > in tksurfer (and pysurfer for that matter). >>>> > """ >>>> > subject_id = 'fsaverage' >>>> > hemi = 'lh' >>>> > surface = 'inflated' >>>> >>>> > """ >>>> > Call the Brain object constructor with these >>>> > parameters to initialize the visualization >>>> session. >>>> > """ >>>> > brain = Brain(subject_id, hemi, surface) >>>> >>>> > # Save an image out to /tmp >>>> > print 'Saving out an image to /tmp using >>>> Brain.save_image.' 
>>>> > brain.save_image('/tmp/brain.png') >>>> >>>> > # Looking at just the screenshot method of >>>> pysurfer's Brain >>>> > object. >>>> > # This is called by save_image and is fed into >>>> > scipy.misc.imsave. >>>> > # If the boolean expression evaluated here is >>>> true, then >>>> > only a black >>>> > # background is being fed into scipy's >>>> misc.imsave method >>>> > for evaluation. >>>> > x = brain.screenshot() >>>> > print 'Test pysurfer\'s Brain.screenshot.' >>>> > if sum(x.flatten()==0)!=len(x.flatten()): >>>> > print 'Pass' >>>> > else: >>>> > print 'Fail' >>>> >>>> > # Looking at the Mayavi mlab.screenshot method. >>>> > # This is called by screenshot_single, which is >>>> called by >>>> > Brain's screenshot. >>>> > # If the boolean expression evaluated here is >>>> true, then >>>> > only a black >>>> > # background is being fed into Brain.screenshot() >>>> > from mayavi import mlab >>>> > x = mlab.screenshot(brain.brain_matrix[0,0]._f, >>>> 'rgb', >>>> > False) >>>> > print 'Test mayavi\'s mlab.screenshot' >>>> > if sum(x.flatten()==0)!=len(x.flatten()): >>>> > print 'Pass' >>>> > else: >>>> > print 'Fail' >>>> >>>> >>>> > On the server through an x2go session both >>>> Brain.screenshot and >>>> > mlab.screenshot failed to produce a non-blank image, >>>> while on >>>> > my laptop's local environment both of these methods >>>> did produce >>>> > the desired output (i.e., there were some nonzero >>>> outputs). >>>> >>>> > Since this doesn't seem to be an error with pysurfer >>>> in >>>> > particular, I'm going to proceed to see if anyone >>>> using Mayavi >>>> > with x2go or nx has encountered similar issues by >>>> querying >>>> > their forums / issue pages. I just wanted to leave >>>> this here >>>> > if someone else encounters the same issue in the >>>> future. >>>> >>>> >>>> > A shot in the dark: Could it be something to do with running >>>> headless? >>>> > Maybe running this under XVFB (e.g. through xvfbwrapper) >>>> would help? >>>> >>>> > Ariel >>>> > >>>> >>>> > --John >>>> >>>> > On Tue, Sep 13, 2016 at 1:24 PM, John Pellman < >>>> > pellman.john at gmail.com> wrote: >>>> >>>> > It looks like it might be related to the >>>> following issue >>>> > described at StackOverflow: >>>> >>>> > http://stackoverflow.com/questions/16543634/ >>>> > mayavi-mlab-savefig-gives-an-empty-image >>>> >>>> > On Mon, Sep 12, 2016 at 2:00 PM, John Pellman < >>>> > pellman.john at gmail.com> wrote: >>>> >>>> > Hi all, >>>> >>>> > I'm encountering a peculiar Pysurfer error on >>>> our >>>> > server and I was wondering if anyone has >>>> encountered >>>> > anything similar or might have some insight >>>> into how I >>>> > can tackle it. Basically, when our >>>> researchers try to >>>> > save a png image using Brain.save_image() or >>>> > Brain.save_imageset() the images produced >>>> only contain >>>> > the background color (as you may have >>>> inferred from the >>>> > subject line). I've traced this back to >>>> Scipy method >>>> > (scipy.misc.imsave), but it looks like this >>>> would only >>>> > output an empty png if the image passed in >>>> were >>>> > completely zeroed out. Our setup uses the >>>> following >>>> > versions of pysurfer/its dependencies: >>>> >>>> > Numpy: 1.10.0.dev0+1fe98ff >>>> > Scipy: 0.17.0.dev0+f2f6e48 >>>> > Ipython: 3.1.0 >>>> > nibabel: 2.0.0 >>>> > Mayavi: 4.4.2 >>>> > matplotlib: 1.4.3 >>>> > PIL: 1.1.7 >>>> > Pysurfer: 0.5 >>>> >>>> > This setup is running within a Miniconda >>>> environment >>>> > using Python 2.7.11. 
I'm uncertain if this >>>> is related, >>>> > but running the example code here produces the >>>> > following warning: >>>> >>>> > (ipython:20765): Gdk-WARNING **: >>>> /build/buildd/ >>>> > gtk+2.0-2.24.27/gdk/x11/gdkdrawable-x11.c:952 >>>> drawable >>>> > is not a pixmap or window >>>> >>>> > Any insight would be greatly appreciated. >>>> >>>> > Best, >>>> > John Pellman >>>> >>>> >>>> >>>> >>>> >>>> > _______________________________________________ >>>> > Neuroimaging mailing list >>>> > Neuroimaging at python.org >>>> > https://mail.python.org/mailman/listinfo/neuroimaging >>>> >>>> >>>> > _______________________________________________ >>>> > Neuroimaging mailing list >>>> > Neuroimaging at python.org >>>> > https://mail.python.org/mailman/listinfo/neuroimaging >>>> >>>> >>>> >>>> >>>> > _______________________________________________ >>>> > Neuroimaging mailing list >>>> > Neuroimaging at python.org >>>> > https://mail.python.org/mailman/listinfo/neuroimaging >>>> >>>> >>>> >>>> >>>> > _______________________________________________ >>>> > Neuroimaging mailing list >>>> > Neuroimaging at python.org >>>> > https://mail.python.org/mailman/listinfo/neuroimaging >>>> >>>> >>>> >>>> >>>> > _______________________________________________ >>>> > Neuroimaging mailing list >>>> > Neuroimaging at python.org >>>> > https://mail.python.org/mailman/listinfo/neuroimaging >>>> >>>> >>>> -- >>>> Gael Varoquaux >>>> Researcher, INRIA Parietal >>>> NeuroSpin/CEA Saclay , Bat 145, 91191 Gif-sur-Yvette France >>>> Phone: ++ 33-1-69-08-79-68 >>>> http://gael-varoquaux.info >>>> http://twitter.com/GaelVaroquaux >>>> _______________________________________________ >>>> Neuroimaging mailing list >>>> Neuroimaging at python.org >>>> https://mail.python.org/mailman/listinfo/neuroimaging >>>> >>>> >>>> >>>> The information in this e-mail is intended only for the person to whom >>>> it is >>>> addressed. If you believe this e-mail was sent to you in error and the >>>> e-mail >>>> contains patient information, please contact the Partners Compliance >>>> HelpLine at >>>> http://www.partners.org/complianceline . If the e-mail was sent to you >>>> in error >>>> but does not contain patient information, please contact the sender and >>>> properly >>>> dispose of the e-mail. >>>> >>>> >>> _______________________________________________ >>> Neuroimaging mailing list >>> Neuroimaging at python.org >>> https://mail.python.org/mailman/listinfo/neuroimaging >>> >>> >> >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> >> >> The information in this e-mail is intended only for the person to whom it >> is >> addressed. If you believe this e-mail was sent to you in error and the >> e-mail >> contains patient information, please contact the Partners Compliance >> HelpLine at >> http://www.partners.org/complianceline . If the e-mail was sent to you >> in error >> but does not contain patient information, please contact the sender and >> properly >> dispose of the e-mail. >> >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pellman.john at gmail.com Fri Sep 16 17:59:26 2016 From: pellman.john at gmail.com (John Pellman) Date: Fri, 16 Sep 2016 17:59:26 -0400 Subject: [Neuroimaging] [PySurfer] Brain's save_image method produces images with only background color In-Reply-To: References: <20160916162940.GA248315@phare.normalesup.org> Message-ID: I'm running in a Miniconda environment using Python 2.7.12. The original version numbers are slightly different from what I first had because I tried to create a new environment. This is the output of *conda env export* : name: surfer > channels: !!python/tuple > - !!python/unicode 'defaults' > dependencies: > - apptools=4.2.1=py27_0 > - cairo=1.12.18=6 > - configobj=5.0.6=py27_0 > - envisage=4.4.0=py27_1 > - fontconfig=2.11.1=6 > - freetype=2.5.5=1 > - glib=2.43.0=1 > - harfbuzz=0.9.39=1 > - jpeg=8d=1 > - lcms=1.19=0 > - libffi=3.2.1=0 > - libgfortran=1.0=0 > - libpng=1.6.22=0 > - libxml2=2.9.2=0 > - mayavi=4.4.0=np19py27_0 > - mkl=11.3.3=0 > - numpy=1.9.3=py27_3 > - openssl=1.0.2h=1 > - pango=1.39.0=1 > - pil=1.1.7=py27_2 > - pip=8.1.2=py27_0 > - pixman=0.32.6=0 > - pyface=4.4.0=py27_0 > - pyqt=4.11.4=py27_4 > - python=2.7.12=1 > - qt=4.8.7=4 > - readline=6.2=2 > - scipy=0.15.1=np19py27_0 > - setuptools=26.1.1=py27_0 > - sip=4.18=py27_0 > - six=1.10.0=py27_0 > - sqlite=3.13.0=0 > - tk=8.5.18=0 > - traits=4.4.0=py27_0 > - traitsui=4.4.0=py27_0 > - vtk=5.10.1=py27_1 > - wheel=0.29.0=py27_0 > - wxpython=3.0.0.0=py27_2 > - zlib=1.2.8=3 > - pip: > - backports.shutil-get-terminal-size==1.0.0 > - cycler==0.10.0 > - decorator==4.0.10 > - enum34==1.1.6 > - ipython==5.1.0 > - ipython-genutils==0.1.0 > - matplotlib==1.5.3 > - nibabel==2.1.0 > - nilearn==0.1.4.post1 > - pathlib2==2.1.0 > - pexpect==4.2.1 > - pickleshare==0.7.4 > - prompt-toolkit==1.0.7 > - ptyprocess==0.5.1 > - pygments==2.1.3 > - pyparsing==2.1.9 > - pysurfer==0.6 > - python-dateutil==2.5.3 > - pytz==2016.6.1 > - simplegeneric==0.8.1 > - traitlets==4.3.0 > - wcwidth==0.1.7 > - wxpython-common==3.0.0.0 > - xvfbwrapper==0.2.8 > prefix: /usr/local/bin/miniconda/envs/surfer I've tried using Xvfb both with and without xvfbwrapper, but get the same result both ways. I've also been getting the following GTK-related errors: Gtk-Message: Failed to load module "gail" Gtk-Message: Failed to load module "atk-bridge" (mayavi_test.py:33137): Gtk-WARNING **: GModule (/usr/lib/x86_64-linux-gnu/gtk-2.0/2.10.0/immodules/im-ibus.so) initialization check failed: GLib version too old (micro mismatch) (mayavi_test.py:33137): Gtk-WARNING **: Loading IM context type 'ibus' failed Is Brain.save_image among the functions you use in headless mode regularly, or is the class of plotting functions you're using more broad? If not, which functions do you use? On Fri, Sep 16, 2016 at 4:14 PM, Roan LaPlante wrote: > > On Sep 16, 2016 4:13 PM, "Roan LaPlante" wrote: > >> That's weird, I use xvfb with pysurfer to plot in a headless context >> regularly. >> >> Have you confirmed that there is not a missing or outdated library >> version? What python distribution are you running? >> >> On Sep 16, 2016 3:55 PM, "John Pellman" wrote: >> >>> Running the basic visualization code for Pysurfer followed by >>> Brain.save_image() gave the following results with when run with >>> xvfbwrapper: >>> >>> On the server under x2go/nx : An all-black image. >>> On the server under a regular ssh session : An all-black image. >>> On my local computer: An all-black image. >>> >>> So it doesn't look like that's going to work. 
:/ >>> >>> wxPython is also installed- is it possible that that is affecting the >>> image rendering too? >>> >>> On Fri, Sep 16, 2016 at 2:36 PM, Roan LaPlante < >>> rlaplant at nmr.mgh.harvard.edu> wrote: >>> >>>> If that's the problem, xvfb should still be a viable workaround in the >>>> nx context, right? >>>> >>>> On Sep 16, 2016 12:30 PM, "Gael Varoquaux" < >>>> gael.varoquaux at normalesup.org> wrote: >>>> >>>>> Nx has always been a problem with Mayavi (or actually VTK, which is the >>>>> underlying technology). Basically, it interfers with the openGL >>>>> contexts, >>>>> and in some cases the buffer cannot be captured well. Hence the black >>>>> image. >>>>> >>>>> IMHO, the bug is in NX or the mesa driver, or both. >>>>> >>>>> Ga?l >>>>> >>>>> On Fri, Sep 16, 2016 at 12:00:15PM -0400, John Pellman wrote: >>>>> > Pysurfer isn't running headless- it's using x2go, which is based >>>>> upon the nx >>>>> > protocol, a technology that improves the ability of X11 to function >>>>> over a >>>>> > network connection. Therefore, I don't think that Xvfb is related. >>>>> xvfbwrapper >>>>> > might be usable as a workaround, however. >>>>> >>>>> > As I mentioned in my last post, I traced the offending method back >>>>> to mayavi. >>>>> > I've opened an issue related to this here. >>>>> >>>>> > Kirstie- if you'd be willing to refer your sysadmin to this thread I >>>>> think that >>>>> > would be great, as I would be interested in hearing what theories or >>>>> potential >>>>> > fixes he/she might have for this issue as well. >>>>> >>>>> > --John >>>>> >>>>> > On Fri, Sep 16, 2016 at 5:43 AM, JB Poline >>>>> wrote: >>>>> >>>>> > That's a cool idea and package - thanks for pointing to this ! >>>>> >>>>> > On 16 September 2016 at 02:40, Ariel Rokem >>>>> wrote: >>>>> >>>>> >>>>> > On Thu, Sep 15, 2016 at 1:33 PM, Kirstie Whitaker < >>>>> kw401 at cam.ac.uk> >>>>> > wrote: >>>>> >>>>> > Hi John, >>>>> >>>>> > I'm travelling at the moment but I've had problems with >>>>> pysurfer >>>>> > showing beautiful brains on the screen but only saving a >>>>> black box >>>>> > to file. It happened right after our systems admin >>>>> updated a few >>>>> > things but I haven't been able to get a clear list from >>>>> him of what >>>>> > changed except: everything should work. >>>>> >>>>> > My point with this email is please do share back what you >>>>> > learn.....even if it ends up being not a pysurfer >>>>> problem. At the >>>>> > moment my workaround is to move everything I do to a >>>>> different >>>>> > cluster that works!! Non efficient to say the least! >>>>> >>>>> > Thank you >>>>> > Kirstie >>>>> >>>>> > Sent from my iPhone, please excuse any typos or >>>>> excessive brevity >>>>> >>>>> > On 15 Sep 2016, at 12:44, John Pellman < >>>>> pellman.john at gmail.com> >>>>> > wrote: >>>>> >>>>> >>>>> > I've had at this a little bit more and my current >>>>> suspicion is >>>>> > that this behavior is the result of an interaction >>>>> between our >>>>> > remote desktop service (x2go) and Mayavi. >>>>> >>>>> > I created a an identical Miniconda environment for >>>>> Pysurfer on >>>>> > both our server and my laptop and ran the following >>>>> code to >>>>> > test this theory: >>>>> >>>>> >>>>> > # The Basic Visualization demo from the Pysurfer >>>>> gallery. >>>>> > from surfer import Brain >>>>> >>>>> > print(__doc__) >>>>> >>>>> > """ >>>>> > Define the three important variables. >>>>> > Note that these are the first three positional >>>>> arguments >>>>> > in tksurfer (and pysurfer for that matter). 
>>>>> > """ >>>>> > subject_id = 'fsaverage' >>>>> > hemi = 'lh' >>>>> > surface = 'inflated' >>>>> >>>>> > """ >>>>> > Call the Brain object constructor with these >>>>> > parameters to initialize the visualization >>>>> session. >>>>> > """ >>>>> > brain = Brain(subject_id, hemi, surface) >>>>> >>>>> > # Save an image out to /tmp >>>>> > print 'Saving out an image to /tmp using >>>>> Brain.save_image.' >>>>> > brain.save_image('/tmp/brain.png') >>>>> >>>>> > # Looking at just the screenshot method of >>>>> pysurfer's Brain >>>>> > object. >>>>> > # This is called by save_image and is fed into >>>>> > scipy.misc.imsave. >>>>> > # If the boolean expression evaluated here is >>>>> true, then >>>>> > only a black >>>>> > # background is being fed into scipy's >>>>> misc.imsave method >>>>> > for evaluation. >>>>> > x = brain.screenshot() >>>>> > print 'Test pysurfer\'s Brain.screenshot.' >>>>> > if sum(x.flatten()==0)!=len(x.flatten()): >>>>> > print 'Pass' >>>>> > else: >>>>> > print 'Fail' >>>>> >>>>> > # Looking at the Mayavi mlab.screenshot method. >>>>> > # This is called by screenshot_single, which is >>>>> called by >>>>> > Brain's screenshot. >>>>> > # If the boolean expression evaluated here is >>>>> true, then >>>>> > only a black >>>>> > # background is being fed into Brain.screenshot() >>>>> > from mayavi import mlab >>>>> > x = mlab.screenshot(brain.brain_matrix[0,0]._f, >>>>> 'rgb', >>>>> > False) >>>>> > print 'Test mayavi\'s mlab.screenshot' >>>>> > if sum(x.flatten()==0)!=len(x.flatten()): >>>>> > print 'Pass' >>>>> > else: >>>>> > print 'Fail' >>>>> >>>>> >>>>> > On the server through an x2go session both >>>>> Brain.screenshot and >>>>> > mlab.screenshot failed to produce a non-blank image, >>>>> while on >>>>> > my laptop's local environment both of these methods >>>>> did produce >>>>> > the desired output (i.e., there were some nonzero >>>>> outputs). >>>>> >>>>> > Since this doesn't seem to be an error with pysurfer >>>>> in >>>>> > particular, I'm going to proceed to see if anyone >>>>> using Mayavi >>>>> > with x2go or nx has encountered similar issues by >>>>> querying >>>>> > their forums / issue pages. I just wanted to leave >>>>> this here >>>>> > if someone else encounters the same issue in the >>>>> future. >>>>> >>>>> >>>>> > A shot in the dark: Could it be something to do with running >>>>> headless? >>>>> > Maybe running this under XVFB (e.g. through xvfbwrapper) >>>>> would help? >>>>> >>>>> > Ariel >>>>> > >>>>> >>>>> > --John >>>>> >>>>> > On Tue, Sep 13, 2016 at 1:24 PM, John Pellman < >>>>> > pellman.john at gmail.com> wrote: >>>>> >>>>> > It looks like it might be related to the >>>>> following issue >>>>> > described at StackOverflow: >>>>> >>>>> > http://stackoverflow.com/questions/16543634/ >>>>> > mayavi-mlab-savefig-gives-an-empty-image >>>>> >>>>> > On Mon, Sep 12, 2016 at 2:00 PM, John Pellman < >>>>> > pellman.john at gmail.com> wrote: >>>>> >>>>> > Hi all, >>>>> >>>>> > I'm encountering a peculiar Pysurfer error >>>>> on our >>>>> > server and I was wondering if anyone has >>>>> encountered >>>>> > anything similar or might have some insight >>>>> into how I >>>>> > can tackle it. Basically, when our >>>>> researchers try to >>>>> > save a png image using Brain.save_image() or >>>>> > Brain.save_imageset() the images produced >>>>> only contain >>>>> > the background color (as you may have >>>>> inferred from the >>>>> > subject line). 
I've traced this back to >>>>> Scipy method >>>>> > (scipy.misc.imsave), but it looks like this >>>>> would only >>>>> > output an empty png if the image passed in >>>>> were >>>>> > completely zeroed out. Our setup uses the >>>>> following >>>>> > versions of pysurfer/its dependencies: >>>>> >>>>> > Numpy: 1.10.0.dev0+1fe98ff >>>>> > Scipy: 0.17.0.dev0+f2f6e48 >>>>> > Ipython: 3.1.0 >>>>> > nibabel: 2.0.0 >>>>> > Mayavi: 4.4.2 >>>>> > matplotlib: 1.4.3 >>>>> > PIL: 1.1.7 >>>>> > Pysurfer: 0.5 >>>>> >>>>> > This setup is running within a Miniconda >>>>> environment >>>>> > using Python 2.7.11. I'm uncertain if this >>>>> is related, >>>>> > but running the example code here produces >>>>> the >>>>> > following warning: >>>>> >>>>> > (ipython:20765): Gdk-WARNING **: >>>>> /build/buildd/ >>>>> > gtk+2.0-2.24.27/gdk/x11/gdkdrawable-x11.c:952 >>>>> drawable >>>>> > is not a pixmap or window >>>>> >>>>> > Any insight would be greatly appreciated. >>>>> >>>>> > Best, >>>>> > John Pellman >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> > _______________________________________________ >>>>> > Neuroimaging mailing list >>>>> > Neuroimaging at python.org >>>>> > https://mail.python.org/mailm >>>>> an/listinfo/neuroimaging >>>>> >>>>> >>>>> > _______________________________________________ >>>>> > Neuroimaging mailing list >>>>> > Neuroimaging at python.org >>>>> > https://mail.python.org/mailman/listinfo/neuroimaging >>>>> >>>>> >>>>> >>>>> >>>>> > _______________________________________________ >>>>> > Neuroimaging mailing list >>>>> > Neuroimaging at python.org >>>>> > https://mail.python.org/mailman/listinfo/neuroimaging >>>>> >>>>> >>>>> >>>>> >>>>> > _______________________________________________ >>>>> > Neuroimaging mailing list >>>>> > Neuroimaging at python.org >>>>> > https://mail.python.org/mailman/listinfo/neuroimaging >>>>> >>>>> >>>>> >>>>> >>>>> > _______________________________________________ >>>>> > Neuroimaging mailing list >>>>> > Neuroimaging at python.org >>>>> > https://mail.python.org/mailman/listinfo/neuroimaging >>>>> >>>>> >>>>> -- >>>>> Gael Varoquaux >>>>> Researcher, INRIA Parietal >>>>> NeuroSpin/CEA Saclay , Bat 145, 91191 Gif-sur-Yvette France >>>>> Phone: ++ 33-1-69-08-79-68 >>>>> http://gael-varoquaux.info >>>>> http://twitter.com/GaelVaroquaux >>>>> _______________________________________________ >>>>> Neuroimaging mailing list >>>>> Neuroimaging at python.org >>>>> https://mail.python.org/mailman/listinfo/neuroimaging >>>>> >>>>> >>>>> >>>>> The information in this e-mail is intended only for the person to whom >>>>> it is >>>>> addressed. If you believe this e-mail was sent to you in error and the >>>>> e-mail >>>>> contains patient information, please contact the Partners Compliance >>>>> HelpLine at >>>>> http://www.partners.org/complianceline . If the e-mail was sent to >>>>> you in error >>>>> but does not contain patient information, please contact the sender >>>>> and properly >>>>> dispose of the e-mail. >>>>> >>>>> >>>> _______________________________________________ >>>> Neuroimaging mailing list >>>> Neuroimaging at python.org >>>> https://mail.python.org/mailman/listinfo/neuroimaging >>>> >>>> >>> >>> _______________________________________________ >>> Neuroimaging mailing list >>> Neuroimaging at python.org >>> https://mail.python.org/mailman/listinfo/neuroimaging >>> >>> >>> The information in this e-mail is intended only for the person to whom >>> it is >>> addressed. 
>>> If you believe this e-mail was sent to you in error and the e-mail
>>> contains patient information, please contact the Partners Compliance
>>> HelpLine at http://www.partners.org/complianceline . If the e-mail was
>>> sent to you in error but does not contain patient information, please
>>> contact the sender and properly dispose of the e-mail.
>>>
> _______________________________________________
> Neuroimaging mailing list
> Neuroimaging at python.org
> https://mail.python.org/mailman/listinfo/neuroimaging
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gael.varoquaux at normalesup.org  Sat Sep 17 05:36:15 2016
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Sat, 17 Sep 2016 11:36:15 +0200
Subject: [Neuroimaging] [PySurfer] Brain's save_image method produces images with only background color
In-Reply-To: References: <20160916162940.GA248315@phare.normalesup.org>
Message-ID: <20160917093615.GB613850@phare.normalesup.org>

What if you change the local computer? I don't remember if it's the
server's mesa that is used or the client's mesa that is used. If it's the
client's mesa, then the problem is related to your computer.

Gaël

On Fri, Sep 16, 2016 at 03:55:08PM -0400, John Pellman wrote:
> Running the basic visualization code for Pysurfer followed by
> Brain.save_image() gave the following results when run with xvfbwrapper:
> On the server under x2go/nx : An all-black image.
> On the server under a regular ssh session : An all-black image.
> On my local computer: An all-black image.
> So it doesn't look like that's going to work. :/
> wxPython is also installed- is it possible that that is affecting the image
> rendering too?

-- 
Gael Varoquaux
Researcher, INRIA Parietal
NeuroSpin/CEA Saclay , Bat 145, 91191 Gif-sur-Yvette France
Phone: ++ 33-1-69-08-79-68
http://gael-varoquaux.info            http://twitter.com/GaelVaroquaux
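On the question of whose mesa is in play, a minimal sketch: raw VTK (the layer under Mayavi) can report which OpenGL vendor/renderer actually backs the window, so running this once on the server and once on the client shows whether a software Mesa renderer or a hardware driver is being used. ReportCapabilities is standard vtkRenderWindow API; everything else here is an illustrative diagnostic, not part of the thread.

import vtk

rw = vtk.vtkRenderWindow()
rw.OffScreenRenderingOn()   # avoid popping a window over nx
rw.Render()
# The capabilities string includes the OpenGL vendor/renderer lines,
# e.g. a software Mesa/llvmpipe renderer vs. a hardware GPU driver.
print(rw.ReportCapabilities())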
From arokem at gmail.com  Sun Sep 18 13:47:05 2016
From: arokem at gmail.com (Ariel Rokem)
Date: Sun, 18 Sep 2016 10:47:05 -0700
Subject: [Neuroimaging] [Dipy] Raise minimal version of scipy to 0.10 or even 0.12?
Message-ID: 

Hi everyone,

For the IVIM implementation, we might want to raise the minimal version of scipy to at least 0.10 (see recent comments in https://github.com/nipy/dipy/pull/1110).

Even better, if we raise the minimal required version to 0.12, we can get rid of some of the special-casing in dipy.core.optimize.

Scipy 0.10 was released in 2011 and scipy 0.12 was released in April 2013.

What do you all think?

Cheers,

Ariel
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From matthew.brett at gmail.com  Sun Sep 18 14:10:24 2016
From: matthew.brett at gmail.com (Matthew Brett)
Date: Sun, 18 Sep 2016 11:10:24 -0700
Subject: [Neuroimaging] [Dipy] Raise minimal version of scipy to 0.10 or even 0.12?
In-Reply-To: References: Message-ID: 

Hi,

On Sun, Sep 18, 2016 at 10:47 AM, Ariel Rokem wrote:
> Hi everyone,
>
> For the IVIM implementation, we might want to raise the minimal version of
> scipy to at least 0.10 (see recent comments in
> https://github.com/nipy/dipy/pull/1110).
>
> Even better, if we raise the minimal required version to 0.12, we can get
> rid of some of the special-casing in dipy.core.optimize.
>
> Scipy 0.10 was released in 2011 and scipy 0.12 was released in April 2013.

I just checked the scipy package versions via Docker:

apt-get update && apt-cache policy python-scipy

* Ubuntu 12.04 (supported until Fall 2017): 0.9.0
* Ubuntu 14.04 (Fall 2019) : 0.13.3
* Debian Wheezy (May 2018) : 0.10.1
* Debian Jessie (May 2020) : 0.14.0-2

https://wiki.ubuntu.com/LTS
https://wiki.debian.org/LTS

This just FYI. I could see an argument for going beyond some of
those, given that wheels are available for scipy now.

Cheers,

Matthew

From gael.varoquaux at normalesup.org  Sun Sep 18 14:14:56 2016
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Sun, 18 Sep 2016 20:14:56 +0200
Subject: [Neuroimaging] [Dipy] Raise minimal version of scipy to 0.10 or even 0.12?
From gael.varoquaux at normalesup.org Sun Sep 18 14:14:56 2016
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Sun, 18 Sep 2016 20:14:56 +0200
Subject: [Neuroimaging] [Dipy] Raise minimal version of scipy to 0.10 or even 0.12?
In-Reply-To:
References:
Message-ID: <20160918181456.GG1477333@phare.normalesup.org>

On Sun, Sep 18, 2016 at 11:10:24AM -0700, Matthew Brett wrote:
> > Scipy 0.10 was released in 2011 and scipy 0.12 was released in April 2013.

> I just checked the scipy package versions via Docker:

> apt-get update && apt-cache policy python-scipy

> * Ubuntu 12.04 (supported until Fall 2017): 0.9.0
> * Ubuntu 14.04 (Fall 2019) : 0.13.3
> * Debian Wheezy (May 2018) : 0.10.1
> * Debian Jessie (May 2020) : 0.14.0-2

> This is just FYI. I could see an argument for going beyond some of
> those, given that wheels are available for scipy now.

Well, that's still a cost to the users (if I update my scipy, I need to
reaudit my numerical codes). But given the numbers that you show above
(which you can also obtain via packages.ubuntu.com/packages.debian.org),
I would say that a dependency on 0.12 is fine: very few people should
still be on 12.04 or Wheezy.

Gaël

From arokem at gmail.com Mon Sep 19 09:31:32 2016
From: arokem at gmail.com (Ariel Rokem)
Date: Mon, 19 Sep 2016 06:31:32 -0700
Subject: [Neuroimaging] [Dipy] Raise minimal version of scipy to 0.10 or even 0.12?
In-Reply-To: <20160918181456.GG1477333@phare.normalesup.org>
References: <20160918181456.GG1477333@phare.normalesup.org>
Message-ID:

Thanks for looking these up, and thanks Gael for your input.

On Sun, Sep 18, 2016 at 11:14 AM, Gael Varoquaux <gael.varoquaux at normalesup.org> wrote:

> Well, that's still a cost to the users (if I update my scipy, I need to
> reaudit my numerical codes).

Sounds like the safest, least obtrusive option would be to continue
supporting 0.9 for another year or so (and 0.10 for another year after
that), but that it wouldn't be scandalous to drop support for these even
earlier.

Cheers,

Ariel

> But given the numbers that you show above (which you can also obtain via
> packages.ubuntu.com/packages.debian.org), I would say that a dependency
> on 0.12 is fine: very few people should still be on 12.04 or Wheezy.
>
> Gaël
> _______________________________________________
> Neuroimaging mailing list
> Neuroimaging at python.org
> https://mail.python.org/mailman/listinfo/neuroimaging

From athanastasiou at gmail.com Sat Sep 17 11:37:36 2016
From: athanastasiou at gmail.com (Athanasios Anastasiou)
Date: Sat, 17 Sep 2016 16:37:36 +0100
Subject: [Neuroimaging] DICOM Orientation and World<-->Image coordinate transformation
In-Reply-To:
References:
Message-ID:

Hello Steve, Matthew and all

It works. The matrix inverts properly and the ROI lands exactly where it
should. Thank you very much for your help.

Tiny little note: ImagePositionPatient's first two elements must be
flipped too, to account for the X,Y -> col,row correspondence; otherwise
the offset is not added properly.
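One way to read that note, sketched below: the affine built later in this
thread maps (column, row, slice) indices to patient coordinates, while numpy
arrays are indexed [row, column, slice], so the first two values coming out
of the inverted matrix have to be swapped before indexing. This is an
interpretation offered for illustration, not code from the thread:

    import numpy as np
    import numpy.linalg as npl

    def patient_to_index(affine, xyz):
        # `affine` is a 4x4 voxel-to-patient matrix whose input order is
        # (column, row, slice), as in Matthew's snippet quoted below
        col, row, slc = npl.inv(affine).dot(list(xyz) + [1.0])[:3]
        # swap the first two values before indexing a [row, column, slice] array
        return int(round(row)), int(round(col)), int(round(slc))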
Steve, thank you for the pointer to the DICOM documentation; I have gone
through that a couple of times and tried to confirm my understanding with
earlier questions. I think I got the gist of it, but getting the elements
of the matrix right was the "problem".

All the best
AA

On Tue, Sep 13, 2016 at 7:20 PM, Steve Pieper wrote:

> Hi Athanasios -
>
> It's very likely that your data is not sheared, which you can check by
> seeing if the cross product of the in-plane (row and column from Image
> Orientation Patient) vectors is parallel to the line you have plotted (as
> noted before, it may point the same way or the opposite way).
>
> It's not clear to me from your question where you obtained the ROI, but
> it's also very likely that your ROI is in the same pixel space as the MR,
> in which case it would share the same matrix from pixel to patient spaces.
> If it turns out they have different sampling grids then you may need to
> resample one or the other. If the data files have valid headers this is a
> fairly trivial operation to do with the GUI in Slicer, or you could write
> some code to do it.
>
> Maybe the easiest is for you to read through the DICOM documentation for
> background.
>
> http://dicom.nema.org/medical/dicom/current/output/html/part03.html#sect_C.7.6.2
>
> Hope that helps,
> Steve
>
> On Tue, Sep 13, 2016 at 8:58 AM, Athanasios Anastasiou <
> athanastasiou at gmail.com> wrote:
>
>> Hello Matthew & Steven
>>
>> Alright, I am not sure what to make of this because the
>> ImagePositionPatient for the part of the scan I am interested in seems to
>> be a vector that cuts across the space diagonally. I am trying to take
>> Steve's viewpoint here, as it looks like the "easiest" too (just rotate the
>> "track" to align to one of the axes).
>>
>> The values of ImagePositionPatient are:
>>
>> array([[-127.773 , -105.599 ,  -94.5758],
>>        [-127.841 , -106.584 ,  -90.1855],
>>        [-127.91  , -107.569 ,  -85.7951],
>>        [-127.978 , -108.554 ,  -81.4048],
>>        [-128.046 , -109.539 ,  -77.0145],
>>        [-128.115 , -110.524 ,  -72.6241],
>>        [-128.183 , -111.509 ,  -68.2338],
>>        [-128.251 , -112.494 ,  -63.8435],
>>        [-128.32  , -113.479 ,  -59.4531],
>>        [-128.388 , -114.464 ,  -55.0628],
>>        [-128.456 , -115.449 ,  -50.6725]])
>>
>> And the 3d plot of that is available further below
>>
>> [image: Inline image 1]
>>
>> They are all regular (good) and all intermediate slice thicknesses are
>> the same (double good).
>>
>> Now, the "trouble" with this is that this represents the direction of one
>> of the axes. Let's call it the Z axis. The other two define a plane that is
>> perpendicular to this direction. I can rotate the axes so that this "Z" is
>> aligned with one of the Zs in space.
>>
>> The other thing that is a bit of a "problem" here, of course, is the third
>> dimension of my ROI data, because so far, in my preliminary tests, I have
>> been ignoring it. This means that just by looking at X,Y, I may have been
>> seeing something that is distorted. And looking at this track, I may be
>> seeing something that is thinner on the vertical projection than it really
>> is, although the increases in 2/3 axes are small.
>>
>> Can I please ask the following:
>>
>> 1) Am I right to assume that the ROI data are perpendicular to this
>> "track"?
>>
>> 2) All I have to do now then is work out a rotation around Z and Y (or,
>> 2/3 axes) to make this track parallel to one of the axes (?)
>> and then apply
>> that transformation to the ROIs so that, when I set them on the image
>> (which is always properly aligned), it appears to be properly aligned. (By
>> the looks of this, -45 deg around Z, -45 around Y and I am there.)
>>
>> 3) How does the DICOM rotation data relate to this track? (If at all.)
>>
>> 4) Is there an API (or part of an API) for a [DICOM Data Type].getPixel
>> or [DICOM Data Type].getVoxel kind of operation? Even if it is via a class
>> ecosystem that makes sense within the context of Slicer or another piece
>> of software. The problem here is that I don't have a volumetric DICOM
>> (multiple images, single file). I have a ROI DICOM that references
>> individual images. So, I build my own volume-aware data type (based on
>> pydicom) that will implement its getVoxel method and is aware of a few
>> other things I am going to be doing with these volumes. But if this is
>> already done somewhere, maybe I could re-use it (?).
>>
>> Looking forward to hearing from you
>> AA
>>
>> On Thu, Sep 8, 2016 at 9:37 PM, Athanasios Anastasiou <
>> athanastasiou at gmail.com> wrote:
>>
>>> Hello Matthew & Steven
>>>
>>> Thank you for your email. Of course I am missing the third column :( I
>>> am paying too much attention to the two numbers I am after right now, to
>>> bring the contour right where it should be when plotting it over the image.
>>>
>>> Thank you for your help, I will have another go at establishing the
>>> matrix with the helpful comments provided here.
>>>
>>> All the best
>>> AA
>>>
>>> On 6 Sep 2016 18:25, "Matthew Brett" wrote:
>>>
>>>> Hi,
>>>>
>>>> On Tue, Sep 6, 2016 at 6:34 AM, Steve Pieper wrote:
>>>> > Hi Athanasios -
>>>> >
>>>> > To get the scan direction you'll need to look at the relative
>>>> > ImagePositionPatient points from slice to slice. Note that the scan
>>>> > direction is not always the cross product of the row and column
>>>> > orientations, since the scan may go in the other direction from a
>>>> > right-handed cross product, or the slices can be sheared (or even at
>>>> > arbitrary locations). There are lots of other things that can happen
>>>> > too, like irregular spacing, missing slices, etc, but usually just
>>>> > normalizing the vector between your origin and any slice in the scan
>>>> > will be what you want.
>>>> >
>>>> > This code will give you an idea:
>>>> >
>>>> > https://github.com/Slicer/Slicer/blob/master/Modules/Scripted/DICOMPlugins/DICOMScalarVolumePlugin.py#L195-L216
>>>>
>>>> From your code, you are missing a valid third column for your affine.
>>>> I believe that column will be all zeros from your code. This is what
>>>> the later part of the DICOM orientation page is talking about, and
>>>> what Steve is referring to as the "slice direction".
>>>>
>>>> Steve is quite right that the slice direction need not be the
>>>> cross-product of the first two, and the DICOM information can often
>>>> tell you what that slice direction vector is, but assuming for a
>>>> moment that it is the cross product, and that you are looking at the
>>>> first slice of the volume, then you'd want something like:
>>>>
>>>> """
>>>> import numpy as np
>>>>
>>>> ImageOrientationPatient = [0.999857, 0.00390641, 0.0164496,
>>>>                            -0.00741602, 0.975738, 0.218818]
>>>>
>>>> ImagePositionPatient = [-127.773, -105.599, -94.5758]
>>>>
>>>> PixelSpacing = [0.4688, 0.4688]
>>>>
>>>> slice_spacing = 3.0  # ?
>>>>
>>>> # Make F array from DICOM orientation page
>>>> F = np.fliplr(np.reshape(ImageOrientationPatient, (2, 3)).T)
>>>> rotations = np.eye(3)
>>>> rotations[:, :2] = F
>>>> # Third direction cosine from cross-product of first two
>>>> rotations[:, 2] = np.cross(F[:, 0], F[:, 1])
>>>> # Add the zooms
>>>> zooms = np.diag(PixelSpacing + [slice_spacing])
>>>>
>>>> # Make the affine
>>>> affine = np.diag([0., 0, 0, 1])
>>>> affine[:3, :3] = rotations.dot(zooms)
>>>> affine[:3, 3] = ImagePositionPatient
>>>>
>>>> np.set_printoptions(precision=4, suppress=True)
>>>> print(affine)
>>>> """
>>>>
>>>> But - Steve's suggestion is more general - this code is just to give
>>>> you an idea.
>>>>
>>>> Best,
>>>>
>>>> Matthew
>>>> _______________________________________________
>>>> Neuroimaging mailing list
>>>> Neuroimaging at python.org
>>>> https://mail.python.org/mailman/listinfo/neuroimaging

-------------- next part --------------
A non-text attachment was scrubbed...
Name: DICOM_Fig1.png
Type: image/png
Size: 178149 bytes

From pellman.john at gmail.com Sat Sep 24 10:10:19 2016
From: pellman.john at gmail.com (John Pellman)
Date: Sat, 24 Sep 2016 10:10:19 -0400
Subject: [Neuroimaging] Open Source MRI Hardware?
Message-ID:

Out of curiosity, has anyone else come across the following website before?

http://www.opensourceimaging.org/

I think it would be really neat to see more affordable MRI machines hit
the market to make the field a little bit easier to enter for aspiring
researchers who wouldn't be able to do neuroimaging otherwise.

From athanastasiou at gmail.com Sat Sep 24 10:50:35 2016
From: athanastasiou at gmail.com (Athanasios Anastasiou)
Date: Sat, 24 Sep 2016 15:50:35 +0100
Subject: [Neuroimaging] Open Source MRI Hardware?
In-Reply-To:
References:
Message-ID:

Hello John and all

I have not come across this before and it really sounds like a good idea.

You would still have to buy key hardware though, and at the moment it
doesn't sound cheaper than something like Terranova
http://www.magritek.com/products/terranova/

I can't find the price now, but I recall something between £5k and £7k
when I was looking up its specs a few years ago.

For practical imaging of body parts, I think the cost of the hardware
would start accumulating quickly.

All the best

AA

On 24 Sep 2016 3:10 p.m., "John Pellman" wrote:

> Out of curiosity, has anyone else come across the following website before?
>
> http://www.opensourceimaging.org/
>
> I think it would be really neat to see more affordable MRI machines hit
> the market to make the field a little bit easier to enter for aspiring
> researchers who wouldn't be able to do neuroimaging otherwise.
>
> _______________________________________________
> Neuroimaging mailing list
> Neuroimaging at python.org
> https://mail.python.org/mailman/listinfo/neuroimaging

From matthew.brett at gmail.com Sat Sep 24 14:27:37 2016
From: matthew.brett at gmail.com (Matthew Brett)
Date: Sat, 24 Sep 2016 11:27:37 -0700
Subject: [Neuroimaging] New NIPY release soon
Message-ID:

Hi,

I plan to make a new nipy release soon, from the current master branch,
plus this PR:

https://github.com/nipy/nipy/pull/416

I'd really appreciate a review for the PR. Any pressing issues that I've
missed?
Cheers,

Matthew

From elef at indiana.edu Sat Sep 24 11:33:34 2016
From: elef at indiana.edu (Eleftherios Garyfallidis)
Date: Sat, 24 Sep 2016 15:33:34 +0000
Subject: [Neuroimaging] Open Source MRI Hardware?
In-Reply-To:
References:
Message-ID:

Hi John,

This looks awesome. Thank you for sharing; I was not aware of this
website. I definitely agree with you.

On Sat, Sep 24, 2016 at 10:50 AM Athanasios Anastasiou <
athanastasiou at gmail.com> wrote:

> Hello John and all
>
> I have not come across this before and it really sounds like a good idea.

From krzysztof.gorgolewski at gmail.com Sun Sep 25 22:56:51 2016
From: krzysztof.gorgolewski at gmail.com (Chris Gorgolewski)
Date: Sun, 25 Sep 2016 19:56:51 -0700
Subject: [Neuroimaging] Brain Imaging Data Structure version 1.0.1 Release Candidate 1
Message-ID:

Dear all,

Apologies for cross posting.

Brain Imaging Data Structure (BIDS) is an easy-to-adopt standard for
organizing and describing raw neuroimaging data. We have just released
version 1.0.1 (release candidate 1) and we would love to hear your opinion
before finalizing the changes. This is a minor revision which, along with
some small fixes (smoothing the language), comes with a few small
backwards compatible new features (support for new types of data). Here's
the full changelog:

- Added T1 rho maps.
- Added support for phenotypic information split into multiple files.
- Added recommendations for multi-site datasets.
- Added SoftwareVersions.
- Added run- to the phase encoding maps. Improved the description.
- Added InversionTime metadata key.
- Clarification on the source vs raw language.
- Added trial_type column to the event files.
- Added missing sub- in behavioural data file names.
- Added ability to store stimuli files.
- Clarified the language describing allowed subject labels.
- Added quantitative proton density maps.

Next step (before another release candidate or publishing the final
version of 1.0.1) will be updating the validator. The plan for that has
been laid out in this github milestone:
https://github.com/INCF/bids-validator/milestone/1. We'll also be
soliciting public opinions to make sure everything is ok before
committing to the final release of 1.0.1.
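For readers who have not followed the draft, two of the additions above look
roughly like this in a dataset; the subject, task and direction labels here
are hypothetical:

    sub-01/
        fmap/
            sub-01_dir-AP_run-01_epi.nii.gz    (run- now allowed on phase encoding maps)
        func/
            sub-01_task-stopsignal_bold.nii.gz
            sub-01_task-stopsignal_events.tsv

and the events file gains an optional trial_type column alongside onset and
duration:

    onset   duration    trial_type
    1.2     0.5         go
    4.8     0.5         stop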
You can leave comments in this thread or on the working copy of the
standard available here:
https://docs.google.com/document/d/1HFUkAEE-pB-angVcYe6pf_-fVf4sCpOHKesUvfb8Grc/edit#1.0.1-rc1

Thanks for all your feedback!

Best,
Chris

PS I hope we will be able to get 1.1.0 out this year with multispectral
imaging and ASL!

From tom.close at monash.edu Mon Sep 26 21:01:14 2016
From: tom.close at monash.edu (Thomas Close)
Date: Tue, 27 Sep 2016 11:01:14 +1000
Subject: [Neuroimaging] What is required to modify spm.NewSegment to handle multiple channels?
Message-ID:

(sorry if this has posted twice)

Hi all,

There is a note in the docstring of the NewSegment interface that says
that it currently doesn't handle multiple channels. However, it looks as
though much of the infrastructure that would be required to handle
multiple channels is in place. Does anybody know what work is left to be
done, and what is making the extension to multiple channels difficult to
implement?

Cheers,

Tom

--
*THOMAS G. CLOSE, PHD*
Senior Informatics Officer
*Monash Biomedical Imaging*
Monash University
Room 139, 770 Blackburn Rd
Clayton Campus, Clayton VIC 3800
Australia
T: +61 3 9902 9804
M: +61 491 141 390
E: tom.close at monash.edu
mbi.monash.edu.au

From pellman.john at gmail.com Tue Sep 27 12:40:27 2016
From: pellman.john at gmail.com (John Pellman)
Date: Tue, 27 Sep 2016 12:40:27 -0400
Subject: [Neuroimaging] [PySurfer] Brain's save_image method produces images with only background color
In-Reply-To: <20160917093615.GB613850@phare.normalesup.org>
References: <20160916162940.GA248315@phare.normalesup.org> <20160917093615.GB613850@phare.normalesup.org>
Message-ID:

Hi all,

Thanks for all of your help. Just as an update, our postdoc was able to
get PySurfer's save_image to work properly by invoking Brain with
"offscreen=True". For anyone who needs to plumb deeper into this, we were
encountering the issue on both an Ubuntu 14.04 laptop and a laptop running
OS X, so I'm not sure that it was the version of mesa on the client that
was problematic.
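For the archives, the workaround John describes in code form; a minimal
sketch using the subject, hemisphere and surface from the PySurfer demo
quoted earlier in the thread:

    from surfer import Brain

    # render off screen so the capture does not depend on the X session
    brain = Brain('fsaverage', 'lh', 'inflated', offscreen=True)
    brain.save_image('/tmp/brain.png')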
On Sat, Sep 17, 2016 at 5:36 AM, Gael Varoquaux <
gael.varoquaux at normalesup.org> wrote:

> What if you change local computer. I don't remember if it's the server's
> mesa that is used or the client's mesa that is used. If it's the client's
> mesa, then the problem is related to your computer.
>
> Gaël

_______________________________________________
Neuroimaging mailing list
Neuroimaging at python.org
https://mail.python.org/mailman/listinfo/neuroimaging
From alexandre.gramfort at telecom-paristech.fr Wed Sep 28 16:02:05 2016
From: alexandre.gramfort at telecom-paristech.fr (Alexandre Gramfort)
Date: Wed, 28 Sep 2016 22:02:05 +0200
Subject: [Neuroimaging] [ANN] MNE-Python 0.13
Message-ID:

Hi,

We are pleased to announce the new 0.13 release of MNE-Python. As usual
this release comes with new features, many improvements to usability,
visualization and documentation, and bug fixes.

A couple of major API changes are being implemented, so we recommend that
users read through the changes carefully.

Support for Python 2.6 has been dropped, and the minimum supported
dependencies are now NumPy 1.8, SciPy 0.12, and Matplotlib 1.3.

A few highlights
============

Our filtering functionality has been significantly improved (a short
sketch follows this list):

- In FIR filters the parameters filter_length, l_trans_bandwidth, and
  h_trans_bandwidth are now automatically determined. We also added a
  phase argument, e.g. in mne.io.Raw.filter(). This means that the new
  recommended defaults are l_trans_bandwidth='auto',
  h_trans_bandwidth='auto', and filter_length='auto'. This should
  generally reduce filter artifacts at the expense of a slight decrease
  in effective filter stop-band attenuation. For details see Defaults in
  MNE-Python.
- An improved phase='zero' zero-phase FIR filtering has been added.
- We added second-order sections (instead of (b, a) form) IIR filtering,
  which commonly has less numerical error.
- We added a generic array-filtering function mne.filter.filter_data()
  for numpy arrays.
- Constructing IIR filters in mne.filter.construct_iir_filter() will
  default to output='sos' in 0.14.
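A minimal sketch of the new recommended defaults in action, based on the
description above; `raw` is assumed to be an existing mne.io.Raw instance
with data loaded:

    # band-pass with automatically chosen filter length and transition
    # bands, using the new zero-phase FIR design
    raw.filter(l_freq=1., h_freq=40.,
               l_trans_bandwidth='auto', h_trans_bandwidth='auto',
               filter_length='auto', phase='zero')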
We extended and tuned our visualization functionality:

- The ordering parameters 'selection' and 'position' were added to
  mne.viz.plot_raw() to allow plotting of specific regions of the sensor
  array.
- mne.viz.plot_trans() now also shows head position indicators.
- We have new plotting functions for independent component properties,
  similar to `pop_prop` in EEGLAB.
- There is a new function mne.viz.plot_compare_evokeds() to show multiple
  evoked time courses at a single location, or the mean over a ROI, or
  the GFP. This is achieved by automatically averaging and calculating a
  confidence interval if multiple subjects are given.
- We now have an interactive colormap option in our image plotting
  functions.
- Subsets of sensors can now be interactively selected by the so-called
  lasso selector. Check out mne.viz.plot_sensors() and mne.viz.plot_raw()
  when using order='selection' or order='position'.
- In viz.plot_bem() brain surfaces can now be plotted.
- mne.preprocessing.ICA.plot_components() can now be used interactively.

We refactored and extended our multivariate statistical analysis
functionality and made it more compatible with scikit-learn (a short
sketch follows this list):

- The mne.decoding.TimeFrequency allows to transform signals in
  scikit-learn pipelines.
- The mne.decoding.UnsupervisedSpatialFilter provides an interface for
  scikit-learn decomposition algorithms so that they can be easily used
  with MNE data.
- We added support for multiclass decoding in mne.decoding.CSP.
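A minimal sketch of multiclass CSP decoding in a scikit-learn pipeline;
`epochs` is assumed to be an existing mne.Epochs object with one event code
per class:

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from mne.decoding import CSP

    X = epochs.get_data()      # shape (n_epochs, n_channels, n_times)
    y = epochs.events[:, 2]    # class labels; two or more classes
    clf = make_pipeline(CSP(n_components=4), LogisticRegression())
    print(np.mean(cross_val_score(clf, X, y, cv=5)))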
And as always many more good things:

- There is now a --filterchpi option to mne browse_raw.
- mne.Evoked objects can now be decimated with mne.Evoked.decimate().
- Functional near-infrared spectroscopy (fNIRS) data can now be processed.
- MaxShield (IAS) evoked data can now be read (e.g., from the acquisition
  machine) in mne.read_evokeds().
- We added a single-trial container for time-frequency representations
  (mne.time_frequency.EpochsTFR) and an average parameter to
  mne.time_frequency.tfr_morlet() and mne.time_frequency.tfr_multitaper().
  This way time-frequency transforms can be easily computed on single
  trial epochs without averaging.

Notable API changes
================

- Components obtained from mne.preprocessing.ICA are now sorted by
  explained variance.
- Adding an EEG reference channel using mne.io.add_reference_channels()
  will now use its digitized location from the FIFF file, if present.
- The add_eeg_ref argument in core functions like mne.io.read_raw_fif()
  and mne.Epochs has been deprecated in favor of using
  mne.set_eeg_reference() and equivalent instance methods like
  raw.set_eeg_reference().
- When CTF gradient compensation is applied to raw data, it is no longer
  reverted on save of mne.io.Raw.save().
- Weighted addition and subtraction of Evoked as ev1 + ev2 and ev1 - ev2
  have been deprecated; use explicit
  mne.combine_evoked(..., weights='nave') instead.
- Deprecated support for passing a list of filenames to the mne.io.Raw
  constructor; use mne.io.read_raw_fif() and mne.concatenate_raws()
  instead.
- Now channels with units of 'C', 'µS', 'uS', 'ARU' and 'S' will be
  turned to misc by default in mne.io.read_raw_brainvision().
- Added the mne.io.anonymize_info() function to anonymize measurements,
  and added methods to mne.io.Raw, mne.Epochs and mne.Evoked.
- Deprecated the baseline parameter in mne.Evoked. Use
  mne.Epochs.apply_baseline() instead.
- The default dataset location has been changed from examples/ in the
  MNE-Python root directory to ~/mne_data in the user's home directory.
- mne.decoding.EpochsVectorizer has been deprecated in favor of
  mne.decoding.Vectorizer.
- Deprecated mne.time_frequency.cwt_morlet() and
  mne.time_frequency.single_trial_power() in favour of
  mne.time_frequency.tfr_morlet() with parameter average=False.
- Extended Infomax is now the new default in mne.preprocessing.infomax()
  (extended=True).

For a full list of improvements and API changes, see:

http://martinos.org/mne/stable/whats_new.html#version-0-13

To install the latest release, the following command should do the job:

pip install --upgrade --user mne

As usual we welcome your bug reports, feature requests, critiques and
contributions.

Some links:

- https://github.com/mne-tools/mne-python (code + readme on how to install)
- http://martinos.org/mne/stable/ (full MNE documentation)

Follow us on Twitter: https://twitter.com/mne_python

Regards,
The MNE-Python developers

People who contributed to this release (in alphabetical order):

* Alexander Rudiuk
* Alexandre Barachant
* Alexandre Gramfort
* Asish Panda
* Camilo Lamus
* Chris Holdgraf
* Christian Brodbeck
* Christopher J. Bailey
* Christopher Mullins
* Clemens Brunner
* Denis A. Engemann
* Eric Larson
* Federico Raimondo
* Félix Raimundo
* Guillaume Dumas
* Jaakko Leppakangas
* Jair Montoya
* Jean-Remi King
* Johannes Niediek
* Jona Sassenhagen
* Jussi Nurminen
* Keith Doelling
* Mainak Jas
* Marijn van Vliet
* Michael Krause
* Mikolaj Magnuski
* Nick Foti
* Phillip Alday
* Simon-Shlomo Poil
* Teon Brooks
* Yaroslav Halchenko

From bertrand.thirion at inria.fr Wed Sep 28 16:13:25 2016
From: bertrand.thirion at inria.fr (bthirion)
Date: Wed, 28 Sep 2016 22:13:25 +0200
Subject: [Neuroimaging] [ANN] MNE-Python 0.13
In-Reply-To:
References:
Message-ID:

Congratulations !

Bertrand

On 28/09/2016 22:02, Alexandre Gramfort wrote:
> Hi,
>
> We are pleased to announce the new 0.13 release of MNE-Python. As
> usual this release comes with new features, many improvements to
> usability, visualization and documentation and bug fixes.
_______________________________________________
Neuroimaging mailing list
Neuroimaging at python.org
https://mail.python.org/mailman/listinfo/neuroimaging

From satra at mit.edu Thu Sep 29 13:32:57 2016
From: satra at mit.edu (Satrajit Ghosh)
Date: Thu, 29 Sep 2016 13:32:57 -0400
Subject: [Neuroimaging] [Job announcement] position in the AFNI group
Message-ID:

on behalf of the afni folks:

------------------------------------------------------------------------------------------------
From: Robert W Cox, PhD
Director, Scientific and Statistical Computing Core
NIMH & NIH
+1-301-594-9196
http://afni.nimh.nih.gov

There is a job opening in the AFNI group at NIMH, NIH, USA. Please see
this link if you or someone you know is a scientific programmer:

https://kelly.secure.force.com/CandidateExperience/CandExpJobDetails?id=a7V80000000PChaEAG&searchFlag=true

You do not apply directly to me, but through the Kelly Services company.
However, if you have questions, please feel free to email me
(robertcox at mail.nih.gov).
------------------------------------------------------------------------------------------------
From frakkopesto at gmail.com Thu Sep 29 14:45:47 2016
From: frakkopesto at gmail.com (Franco Pestilli)
Date: Thu, 29 Sep 2016 14:45:47 -0400
Subject: [Neuroimaging] [Job announcement] Postdoc position
In-Reply-To:
References:
Message-ID: <5B25B3FA-517D-4D4A-A7DC-E2BAB4AA8FFA@gmail.com>

https://goo.gl/WkmSZ9

POSTDOC IN DATA SCIENCE FOR NEUROSCIENCE

An NSF-funded postdoctoral position is open in the laboratory of Dr.
Franco Pestilli at Indiana University. More information about this
project can be found on the NSF website, on the Indiana University
website, and on Dr. Pestilli's website. General information about the
Pestilli Laboratory can be found at pestillilab.indiana.edu

This position is ideal for individuals with previous expertise in
Neuroscience, Cognitive Science or Psychology interested in learning
about Computer Science and Engineering, or for Computer Scientists and
Engineers interested in learning Neuroscience.

Lab collaborators are Olaf Sporns, Andrew Saykin, Cesar Caiafa, Craig
Stewart, Robert Henschel (Indiana University), Lei Wang (Northwestern
University), Daniel Marcus (Washington University St. Louis), and Brian
Wandell (Stanford University). The project involves a large network of
scientists.

Successful candidates will hold a Ph.D. in Computer Science, Engineering,
Psychology, Neuroscience, Cognitive Science, or other closely related
fields. Excellent programming skills are essential. Expertise in advanced
statistics or computational methods is strongly recommended.

Salary will be commensurate with experience, as determined by NIH
guidelines. To apply, please send a CV, a brief statement of research
interests, and the names of three references to franpest at indiana.edu.
The start date is as early as possible, with possibility of negotiation.
Highly talented Ph.D. candidates are encouraged to apply.

Franco Pestilli, PhD
Assistant Professor
Psychology, Neuroscience, Cognitive Science and Network Science
Indiana University Bloomington
@furranko | github.com/brain-life | github.com/brain-life/encode | francopestilli.github.io/life