From garyfallidis at gmail.com Tue Mar 9 15:43:39 2021
From: garyfallidis at gmail.com (Eleftherios Garyfallidis)
Date: Tue, 9 Mar 2021 15:43:39 -0500
Subject: [Neuroimaging] DIPY Workshop 2021 starts in a few days - March 15 2021
Message-ID:

Hello all,

Please register here to attend the DIPY workshop: https://workshop.dipy.org
Registration this year is free of charge due to COVID. See the attachment for more information.

The schedule is available at https://workshop.dipy.org

Best regards,
Eleftherios

From etienne.roesch at gmail.com Thu Mar 11 08:23:42 2021
From: etienne.roesch at gmail.com (Etienne B. Roesch)
Date: Thu, 11 Mar 2021 13:23:42 +0000
Subject: [Neuroimaging] Postdoctoral Research Assistant on the neuroscience of visual processing and decision making
Message-ID: <9DC612F3-A894-4336-BD43-AB2FCFDA034F@gmail.com>

** Postdoc opportunity ** for neuroscientists and/or neuro-enthusiast engineers

We are seeking to appoint a highly motivated, adventurous, interdisciplinary neuroscientist to help us understand how humans prioritise information in support of the decisions they make. We focus on visual perception, attention and individual differences, and will be using psychometric assessments and psychophysics measurements as well as concurrent EEG-fMRI recording. Full time, fixed term for up to 24 months, with the possibility of extension.

This role falls within the remit of CHAI, a new £2.4M research project funded by the UK EPSRC and led by the Internet of Things and Security Centre (ISEC) at the University of Greenwich, in collaboration with University College London (UCL), the University of Bristol and Queen Mary University of London, as well as industrial partners.
As a follow-up to the €1.5M EU project "Cocoon: Emotion psychology meets Cyber Security" (2016-2020) led by us, which measured and established how users of connected Internet-of-Things devices react to cyber security risks, "CHAI: Cyber Hygiene in AI-enabled domestic life" (2021-2024) examines the particular threats posed by Artificial Intelligence. CHAI addresses the challenge of working out how best to help users protect themselves against the security risks they will face in a world supported by AI (see: http://bit.ly/chai-introduction-video). The role advertised relates to the fundamental research on human decision making that will inform the technical and pedagogical developments of our partners in the project. The remit is intentionally flexible to allow emerging opportunities for collaboration, and includes funding for networking and training.

You will have:
- excellent computer and statistical skills, including a programming language (Python)
- experience analysing neuroimaging data, including EEG and/or fMRI
- the motivation to pursue research at the interface between academia and industry

This work will be carried out in the Centre for Integrative Neuroscience & Neurodynamics (CINN), in the School of Psychology and Clinical Language Sciences, at the University of Reading. CINN is a research platform currently gathering over 100 research staff and students. It hosts a research-dedicated 3T Siemens PRISMA MR scanner, MR-compatible EEG (Brain Products) and TMS (MagVenture) equipment, eye-tracking and versatile computing clusters (incl. cloud management and GPUs), all of which are available to the project. Other research projects in the lab currently include the development of a novel Bayesian framework for the analysis of data in psychology and neuroscience, neuroimaging of visual perception and attention, as well as projects with industrial partners on topics as varied as machine learning and brain-computer interfaces for a wide range of sectors.
CINN and the School of Psychology are a tight-knit community, committed to open research and reproducibility, at the forefront of what is done in the UK in many ways. The post holder will have the opportunity to participate in many initiatives, and to propose new ones, including training events on reproducibility and best practices in neuroimaging, data analysis and coding, including Software Carpentry workshops. The University of Reading was the first university in the UK to publicly commit to open research. It is one of the first institutional members of the UK Reproducibility Network, and a member of the Data and Software Carpentries. The University is a signatory to the Leiden Manifesto for Research Metrics (http://leidenmanifesto.org), is committed to having a diverse and inclusive workforce, supports the gender equality Athena SWAN Charter and the Race Equality Charter, and is a Diversity Champion for Stonewall, the leading LGBT+ rights organisation in the UK. Applications for job-share, part-time and flexible working arrangements are welcomed and will be considered in line with the project's needs.

Deadline: 19/4
Interviews (online): early May

Informal inquiries to Etienne Roesch, e.b.roesch at reading.ac.uk
More information at: https://jobs.reading.ac.uk/displayjob.aspx?jobid=7590

Etienne

--
dr. Etienne B. Roesch | Associate Professor of Cognitive Science | Univ. Reading
Deputy Director of Imaging, Centre for Integrative Neuroscience and Neurodynamics (CINN)
Programme Director, MSc Cognitive Neuroscience, School of Psychology and Clinical Language Sciences
meet: Book yourself in my University Outlook calendar
www: etienneroes.ch | osf.io/ede88 | github.com/eroesch | rdg.ac.uk/cinn
From aboudifathia16 at gmail.com Mon Mar 15 16:07:06 2021
From: aboudifathia16 at gmail.com (Fathia ABOUDI)
Date: Mon, 15 Mar 2021 21:07:06 +0100
Subject: [Neuroimaging] Step of Quick bundle
Message-ID:

Dear all,

I'm Fathia ABOUDI, a scientific researcher in neuroimaging. I want to classify streamlines. Can you help me find the script/code for QuickBundles?

Thank you.

Best regards,
Fathia ABOUDI

From cbuist at ou.edu Fri Mar 19 15:30:21 2021
From: cbuist at ou.edu (Buist, Carl R.)
Date: Fri, 19 Mar 2021 19:30:21 +0000
Subject: [Neuroimaging] Slicing images on different planes
Message-ID:

Hello all,

I would like to preface this question by saying that I am new to the world of Python in general, and I apologize if this is a silly question or if my code is suboptimal. I am also not using medical data, but microCT-scanned images of fossils; I think the principles for slicing the images are pretty similar, though.

I have recently written a very basic script to slice a NIfTI (.nii) file along the major axes or some variation of those axes. Now I would like to slice the file along a plane that is not one of these major axes, for example a plane rotated 45 degrees off the x axis in any direction. My initial thought was to define a plane and use it in place of one of the planes I am currently displaying, but I am running into trouble.

I tried to look through the archive to see if this was a problem for others but didn't see anything. I would really appreciate any help, or if someone could point me in the right direction. Below is the simple code that I made, and attached is an image of the output.
Thank you so much for your help,
Carl

import nibabel as nib
import numpy as np
import matplotlib.pyplot as plt

def display_views(file_path):
    medical_image = nib.load(file_path)
    image = medical_image.get_fdata()
    image = np.squeeze(image)
    print(image.shape)

    # Dimensions on each Cartesian axis
    x_dim, y_dim, z_dim = image.shape

    # Midpoint layers, as integers
    x_midpoint = x_dim // 2
    y_midpoint = y_dim // 2
    z_midpoint = z_dim // 2

    sagittal_image = image[x_midpoint, :, :]
    coronal_image = image[:, y_midpoint, :]
    axial_image = image[:, :, z_midpoint]
    # Offset slice from the midpoint (note: 100, despite the 'p10' name)
    axial_imagep10 = image[:, :, z_midpoint + 100]

    plt.figure(figsize=(20, 10))
    plt.style.use('grayscale')

    plt.subplot(141)
    plt.imshow(np.rot90(sagittal_image))
    plt.title('Sagittal Plane')
    plt.axis('off')

    plt.subplot(142)
    plt.imshow(np.rot90(coronal_image))
    plt.title('Coronal Plane')
    plt.axis('off')

    plt.subplot(143)
    plt.imshow(np.rot90(axial_image))
    plt.title('Axial Plane')
    plt.axis('off')

    plt.subplot(144)
    plt.imshow(np.rot90(axial_imagep10))
    plt.title('Axial Plane moved')
    plt.axis('off')

    plt.show()

# fpath = sys.argv[1]
fpath = '/content/drive/MyDrive/3D_Forams/Foram_NH1108sit_gdr_7_1__IR_rec480-1180_testcrop.nii'
display_views(fpath)

From markiewicz at stanford.edu Sat Mar 20 21:52:51 2021
From: markiewicz at stanford.edu (Christopher Markiewicz)
Date: Sun, 21 Mar 2021 01:52:51 +0000
Subject: [Neuroimaging] Slicing images on different planes
In-Reply-To:
References:
Message-ID:

Hi Carl,

It sounds like what you want to do is to interpolate the image in a space that's more conveniently rotated, so that image dimensions correspond to meaningful directions in the world, and then visualize slices.
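That rotated-space idea can be illustrated with a pure-NumPy sketch that samples a plane rotated 45 degrees about the x axis using nearest-neighbour lookup. This is purely illustrative and not from the thread: the function name, the synthetic volume, and the fallback-to-zero behaviour are all made up for the example.

```python
import numpy as np

def oblique_slice(vol, angle_deg=45.0):
    """Sample the mid-plane of `vol` after rotating it about the x axis.

    Nearest-neighbour sampling; points that fall outside the volume are 0.
    """
    nx, ny, nz = vol.shape
    a = np.deg2rad(angle_deg)
    # Rotation about x: y' = y cos(a) - z sin(a), z' = y sin(a) + z cos(a)
    R = np.array([[1, 0, 0],
                  [0, np.cos(a), -np.sin(a)],
                  [0, np.sin(a),  np.cos(a)]])
    # Grid of (x, y) points on the slicing plane, centred in the volume
    xs, ys = np.meshgrid(np.arange(nx), np.arange(ny), indexing='ij')
    pts = np.stack([xs - nx / 2, ys - ny / 2, np.zeros_like(xs)], axis=-1)
    # Rotate plane coordinates into volume coordinates and re-centre
    coords = pts @ R.T + np.array([nx / 2, ny / 2, nz / 2])
    idx = np.round(coords).astype(int)
    valid = ((idx >= 0) & (idx < [nx, ny, nz])).all(axis=-1)
    out = np.zeros((nx, ny))
    out[valid] = vol[idx[valid, 0], idx[valid, 1], idx[valid, 2]]
    return out

vol = np.arange(8 * 8 * 8, dtype=float).reshape(8, 8, 8)
sl = oblique_slice(vol)
print(sl.shape)  # (8, 8)
```

For real data you would interpolate rather than round to the nearest voxel, which is exactly what resampling through the affine (as discussed below in the thread's own terms) does for you.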
The way to go about that is going to depend on how well the orientation is described in the image affine matrix.

If the affine correctly describes rotations, then you can simply resample the image with nibabel.processing.resample_to_output() (https://nipy.org/nibabel/reference/nibabel.processing.html#resample-to-output). That will produce an in-memory image that you can work with as you show in your code, or save and open in another viewer.

If your images don't have reliable rotation information stored (e.g., img.affine[:3, :3] contains 6 zeroes), then you're going to need to find a rotation so that you can do something like the above. I have to imagine that tools exist that allow you to place axes in a 3D image and determine the rotation matrix, but I don't know one to suggest.

Best,
Chris

________________________________________
From: Neuroimaging on behalf of Buist, Carl R.
Sent: Friday, March 19, 2021 3:30 PM
To: neuroimaging at python.org
Subject: [Neuroimaging] Slicing images on different planes

Hello all,

I would like to preface this question by saying that I am new to the world of Python in general and apologize if this is a silly question and if my code is suboptimal. [...]

From satra at mit.edu Sun Mar 21 17:05:47 2021
From: satra at mit.edu (Satrajit Ghosh)
Date: Sun, 21 Mar 2021 17:05:47 -0400
Subject: [Neuroimaging] Slicing images on different planes
In-Reply-To:
References:
Message-ID:

hi carl,

in addition to what chris said, you may also want to consider just opening your file with neuroglancer, which supports NIfTI as an input (and, as long as your file is on the web somewhere, neuroglancer will be able to open it) and has on-the-fly rotation capabilities (hit the ? button to see the keyboard shortcuts).
here is an example: https://tinyurl.com/yf2jk4af

i've intentionally created a weird orientation; you can hit z to snap back to a default orientation. also, if you click on the "source" tab on the right, you will be able to manually change affines if you would like to.

cheers,

satra

On Sat, Mar 20, 2021 at 10:45 PM Christopher Markiewicz <markiewicz at stanford.edu> wrote:
> Hi Carl,
>
> It sounds like what you want to do is to interpolate the image in a space
> that's more conveniently rotated so that image dimensions correspond to
> meaningful directions in the world and then visualize slices. [...]
>
> _______________________________________________
> Neuroimaging mailing list
> Neuroimaging at python.org
> https://mail.python.org/mailman/listinfo/neuroimaging

From sbm.martins at gmail.com Mon Mar 22 07:53:17 2021
From: sbm.martins at gmail.com (Samuel Botter Martins)
Date: Mon, 22 Mar 2021 08:53:17 -0300
Subject: [Neuroimaging] Slicing images on different planes
In-Reply-To:
References:
Message-ID:

Dear Christopher,

On Sat, Mar 20, 2021 at 11:46 PM Christopher Markiewicz <markiewicz at stanford.edu> wrote:
> If the affine correctly describes rotations, then you can simply resample
> the image with nibabel.processing.resample_to_output() (
> https://nipy.org/nibabel/reference/nibabel.processing.html#resample-to-output).
> That will produce an image in-memory that you can work with as you show in
> your code, or save and open in another viewer. [...]

So, does this mean I could reorient/serialize all images in memory to the same voxel space? I mean, regardless of the orientation of each image stored on file, this function will reorient all of them to the same voxel coordinate space, so that a voxel img[x, y, z] corresponds to the precise position in all images - am I correct? Of course, assuming the image has its affine matrix stored.

Best.

--
Prof. Dr.
Samuel Botter Martins
Professor at the Federal Institute of Education, Science and Technology of São Paulo

From markiewicz at stanford.edu Mon Mar 22 09:07:01 2021
From: markiewicz at stanford.edu (Christopher Markiewicz)
Date: Mon, 22 Mar 2021 13:07:01 +0000
Subject: [Neuroimaging] Slicing images on different planes
In-Reply-To:
References:
Message-ID:

Hi Samuel,

If the images have come from a single session where multiple scans were taken of the object without removing it from the scanner, and the different image sequences you use induce the same distortions, then this should work. In fact, it would not even be necessary to reorient them to achieve voxel-to-voxel correspondence.

If the images come from multiple sessions but have been registered so that their affines correspond, then yes, resampling should provide voxel-to-voxel correspondence. Again, this assumes constant distortion.

If the images have distortions, do not match the above situations, or are of unknown provenance, then the odds are good that you'll need to perform image registration to bring them into alignment. In most cases, you will select one image as canonical and register all other images to it. If there is such a thing as a standard reference image, you can also register your images to that, which will allow you to use coordinates that others can interpret without your image.

As a rule, images with the same expected distortions can be registered with a rigid (6 dof) transformation. Registration between modalities with different spatial distortions, or to a standard reference, will generally require nonlinear transformations.

If you're just getting started on registration, I would suggest that it's worth your time to learn ANTs (https://antsx.github.io/ANTs/).
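The voxel-to-voxel correspondence discussed above reduces to composing affines: a voxel index maps to world millimetres through an image's affine, and into another image's voxel grid through the inverse of that image's affine. A plain-NumPy sketch, with entirely made-up affines (the helper name and numbers are illustrative, not from the thread):

```python
import numpy as np

def make_rigid(angle_deg, translation):
    """Illustrative rigid (6-dof) affine: rotation about z plus translation."""
    a = np.deg2rad(angle_deg)
    A = np.eye(4)
    A[:2, :2] = [[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]]
    A[:3, 3] = translation
    return A

# Made-up affines for two registered images: A has 2 mm voxels, B has 1 mm
affine_a = make_rigid(0.0, [-90, -126, -72]) @ np.diag([2, 2, 2, 1])
affine_b = make_rigid(0.0, [-90, -126, -72])

voxel_a = np.array([10, 20, 30, 1])        # homogeneous voxel index in A
world = affine_a @ voxel_a                 # voxel -> world (mm)
voxel_b = np.linalg.inv(affine_b) @ world  # world -> voxel index in B

print(world[:3])    # [-70. -86. -12.]
print(voxel_b[:3])  # [20. 40. 60.]
```

When the affines truly correspond (the registered-session case above), `inv(affine_b) @ affine_a` is the whole voxel-to-voxel map; when they don't, that map is what registration has to estimate.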
It's extremely flexible, but they have a bunch of scripts that handle common cases and are useful demonstrations if you need to go beyond them: https://github.com/ANTsX/ANTs/tree/master/Scripts. One caveat with ANTs is that if there are any shear components in your affine matrix, ANTs will not respect them.

Best,
Chris

________________________________________
From: Neuroimaging on behalf of Samuel Botter Martins
Sent: Monday, March 22, 2021 7:53 AM
To: Neuroimaging analysis in Python
Subject: Re: [Neuroimaging] Slicing images on different planes

Dear Christopher,

So, does this mean I could reorient/serialize all images in memory to the same voxel space? [...]

Best.

--
Prof. Dr.
Samuel Botter Martins
Professor at the Federal Institute of Education, Science and Technology of São Paulo

From sbm.martins at gmail.com Mon Mar 22 09:37:39 2021
From: sbm.martins at gmail.com (Samuel Botter Martins)
Date: Mon, 22 Mar 2021 10:37:39 -0300
Subject: [Neuroimaging] Slicing images on different planes
In-Reply-To:
References:
Message-ID:

It is clear. Thank you, Chris.

On Mon, Mar 22, 2021 at 10:07 AM Christopher Markiewicz <markiewicz at stanford.edu> wrote:
> Hi Samuel,
>
> If the images have come from a single session where multiple scans have
> been taken of the object without removing it from the scanner, and the
> different image sequences you use will induce the same distortions, then
> this should work. [...]

--
Prof. Dr. Samuel Botter Martins
Professor at the Federal Institute of Education, Science and Technology of São Paulo

From matthew.brett at gmail.com Sat Mar 27 15:00:35 2021
From: matthew.brett at gmail.com (Matthew Brett)
Date: Sat, 27 Mar 2021 19:00:35 +0000
Subject: [Neuroimaging] Nipy release candidate
Message-ID:

Hi,

I've made a release candidate for the next Nipy release - please do test with:

pip install --pre -i https://pypi.org/simple --extra-index-url https://pypi.anaconda.org/nipy/simple nipy

Cheers,

Matthew

From matthew.brett at gmail.com Sat Mar 27 15:27:03 2021
From: matthew.brett at gmail.com (Matthew Brett)
Date: Sat, 27 Mar 2021 19:27:03 +0000
Subject: [Neuroimaging] Nipy release candidate
In-Reply-To:
References:
Message-ID:

Hi,

Actually, you can omit the -i bit - so just:

pip install --pre --extra-index-url https://pypi.anaconda.org/nipy/simple nipy

should work (if the release candidate is set up correctly).

Cheers,

Matthew

On Sat, Mar 27, 2021 at 7:00 PM Matthew Brett wrote:
> I've made a release candidate for the next Nipy release - please do test with:
>
> pip install --pre -i https://pypi.org/simple --extra-index-url
> https://pypi.anaconda.org/nipy/simple nipy

From matthew.brett at gmail.com Mon Mar 29 13:02:16 2021
From: matthew.brett at gmail.com (Matthew Brett)
Date: Mon, 29 Mar 2021 17:02:16 +0000
Subject: [Neuroimaging] Nipy 0.5.0
Message-ID:

Hi,

I've just done a bugfix / compatibility release for Nipy - 0.5.0.
A huge thank-you to Matteo Visconti di Oleggio Castello for working on the tough and boring problems of updating Nipy compatibility with more recent NumPy; it made all the difference in making the code testable again. And thanks too to Michael R. Crusoe - our Debian maintainer - who chased me up, and to Yarik, as ever, for injecting energy and adding some useful fixes.

This is the last Nipy release to support Python 2.7.

Please do try it out and let me know of any problems.

Cheers,

Matthew

From arokem at uw.edu Mon Mar 29 13:51:21 2021
From: arokem at uw.edu (Ariel Rokem)
Date: Mon, 29 Mar 2021 10:51:21 -0700
Subject: [Neuroimaging] Nipy 0.5.0
In-Reply-To:
References:
Message-ID:

Long (!) live nipy! Thanks Matthew and everyone doing the work!

On Mon, Mar 29, 2021 at 10:03 AM Matthew Brett wrote:
> I've just done a bugfix / compatibility release for Nipy - 0.5.0. [...]

From reine097 at umn.edu Mon Mar 29 16:06:03 2021
From: reine097 at umn.edu (Paul Reiners)
Date: Mon, 29 Mar 2021 15:06:03 -0500
Subject: [Neuroimaging] Opening an aparc+aseg.nii.gz programmatically in Python
Message-ID:

This is my question: https://psychology.stackexchange.com/q/26872/28113

From jbpoline at gmail.com Mon Mar 29 17:29:00 2021
From: jbpoline at gmail.com (JB Poline)
Date: Mon, 29 Mar 2021 17:29:00 -0400
Subject: [Neuroimaging] Nipy 0.5.0
In-Reply-To:
References:
Message-ID:

Deeply grateful on this side of the coast...!

On Mon, Mar 29, 2021 at 2:09 PM Ariel Rokem wrote:
> Long (!) live nipy! Thanks Matthew and everyone doing the work! [...]
From arokem at gmail.com Mon Mar 29 17:52:07 2021
From: arokem at gmail.com (Ariel Rokem)
Date: Mon, 29 Mar 2021 14:52:07 -0700
Subject: [Neuroimaging] Opening an aparc+aseg.nii.gz programmatically in Python
In-Reply-To:
References:
Message-ID:

Hi Paul,

Thanks for your email. I think this should do it:

import nibabel as nib
img = nib.load('aparc+aseg.nii.gz')
data = img.get_fdata()

At this point, `data` should be a NumPy array with the values stored in that file.

Hope that helps,
Ariel

On Mon, Mar 29, 2021 at 2:11 PM Paul Reiners via Neuroimaging <neuroimaging at python.org> wrote:
> This is my question: https://psychology.stackexchange.com/q/26872/28113

From jan.alexandersson at dfki.de Tue Mar 30 03:29:58 2021
From: jan.alexandersson at dfki.de (Jan Alexandersson)
Date: Tue, 30 Mar 2021 07:29:58 +0000
Subject: [Neuroimaging] First Inria-DFKI European Summer School on AI: registration open
Message-ID:

Dear colleagues,

Could you please distribute this summer school announcement?

Beste Grüße / Best regards / Hälsningar,
Jan

------------------------------------------------------------------
Dr. Jan Alexandersson
Research Fellow, Head AAL Competence Center
DFKI GmbH, Campus D3 2, Stuhlsatzenhausweg 3, D-66123 Saarbrücken, Germany
+49 681 85775 -5347 (office) -5341 (fax), +49 179 1044317 (mobile)
Email: janal at dfki.de, http: www.dfki.de/~janal, ccaal.dfki.de
President, OpenURC Alliance e.V. - www.openurc.org
------------------------------------------------------------------
Deutsches Forschungszentrum für Künstliche Intelligenz GmbH
Trippstadter Strasse 122, D-67663 Kaiserslautern, Germany
Geschäftsführung: Prof. Dr. Antonio Krüger
Vorsitzender des Aufsichtsrats: Dr.
Gabriël Clemens
Amtsgericht Kaiserslautern, HRB 2313
-------------------------------------------------------------

*******************************************************************
First Inria-DFKI European Summer School on AI (IDAI 2021)
Trustworthy AI and AI for Medicine
Palaiseau, France
July 20-23, 2021
https://idessai.inria.fr/
Registration deadline: April 19, 2021
*******************************************************************

IDAI 2021 inaugurates a series of yearly Summer Schools organized by the two renowned German and French AI institutes, DFKI and Inria. It stands out from the crowd of offerings for AI students in several respects:

* We ensure a good balance in the number of participants and instructors: participants will have the opportunity to join a community of like-minded people and, at the same time, they will be in close contact with the experts.
* Our program features a line-up of courses focused on two themes, Trustworthy AI and AI for Medicine, which are at the forefront of socio-economic issues related to AI.
* On top of the latest methodological advances and the shared vision of the future that both organizing institutes have to offer, IDAI 2021 will be practically oriented. We will achieve this through hands-on courses and the involvement of industry practitioners and innovators.
* Participants will be offered the opportunity to present their work to each other in dedicated poster/demo sessions.

Trustworthy AI and AI for Medicine will take place in two parallel tracks. There will be plenty of opportunities to exchange between these two tracks at coffee breaks, meals and social events, as well as through joint cross-track sessions.

TARGETED AUDIENCE

IDAI 2021 was designed for PhD students in all areas of AI, including machine learning, knowledge representation and reasoning, search and optimisation, planning and scheduling, multi-agent systems, natural language processing, robotics, computer vision, and other areas.
PhD students in other fields, MSc students, postdocs, and researchers in industry are also welcome.

VENUE

IDAI 2021 is currently planned as a fully in-person event, which will take place at the Inria Saclay Île-de-France research center, close to Paris. Remote attendance will not be possible.

Should the pandemic still not allow for an in-person event, IDAI 2021 will instead take place as a fully virtual event on the same dates. We are closely monitoring the situation and will strive to make this decision as early as possible.

CONFIRMED KEYNOTES AND SPEAKERS

Cross-track keynotes:

* Mihaela van der Schaar (University of Cambridge) - Why medicine is creating exciting new frontiers for machine learning and AI
* Joanna Bryson (Hertie School) - AI ethics

Trustworthy AI track (to be completed):

* Serge Abiteboul (Inria) - Responsible data analysis algorithms: a realistic goal?
* Simon Burton (Fraunhofer IKS) - Safety, complexity, AI and automated driving - holistic perspectives on safety assurance
* Michèle Sebag (CNRS - LISN) - Why and how learning causal models
* Patrick Gallinari (Sorbonne University and Criteo AI Lab) - Deep learning meets numerical modeling
* Christian Müller (DFKI) - Explaining AI with narratives
* Catuscia Palamidessi (Inria) and Miguel Couceiro (University of Lorraine) - Addressing algorithmic fairness through metrics and explanations
* Guillaume Charpiat (Inria), Zakaria Chihani (CEA), and Julien Girard-Satabin (CEA) - Formal verification of deep neural networks: theory and practice
* Hatem Hajri (IRT SystemX) - Adversarial examples and robustness of neural networks

AI for Medicine track (to be completed):

* Gerd Reis (DFKI) - AI in Medicine - An engineering perspective
* Marco Lorenzi (Inria) - Federated learning methods and frameworks for collaborative data analysis
* Gaël Varoquaux (Inria) - Dirty data science: machine learning on non-curated data
* Thomas Moreau and Demian Wassermann (Inria) - Introduction to neuroimaging with Python
* Francesca Galassi (Inria) and Rutger Fick (TRIBVN Healthcare) - Domain adaptation for the segmentation of multiple sclerosis lesions in brain MRI
* Tim Dahmen (DFKI) - Bio-mechanical simulation for individualized implants and prosthetics
* Elmar Nöth (Friedrich-Alexander-University Erlangen-Nuremberg) - Automatic analysis of pathologic speech - from diagnosis to therapy
* Pierre Zweigenbaum (CNRS - LIMSI) - NLP for medical applications

Open discussion with industry (to be completed):

* Juliette Mattioli (Thales) and Frédéric Jurie (Safran) - Industry use cases involving trusted AI
* Boris Dimitrov (Check Point Cardio) - Real-time online patient tele-monitoring

FEES AND REGISTRATION

Our fees are all-inclusive and may optionally include accommodation.

For more details and to register, see https://idessai.inria.fr/registration/ (deadline: April 19).

To ensure a good balance in the number of participants and instructors and maximize the chances of interaction, the number of attendees is limited to 50 per track. Applicants will be selected on the grounds of diversity and the benefit gained from attending the selected track.

ORGANIZERS

Co-organized by: Inria, DFKI, Dataia, IRT SystemX

Contact us: idessai... at inria.fr.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bertrand.thirion at inria.fr  Tue Mar 30 16:11:04 2021
From: bertrand.thirion at inria.fr (bthirion)
Date: Tue, 30 Mar 2021 22:11:04 +0200
Subject: [Neuroimaging] First Inria-DFKI European Summer School on AI: registration open
In-Reply-To: 
References: 
Message-ID: <00f8a16e-f94b-00c0-a403-2c66d919939b@inria.fr>

> [The full IDAI 2021 announcement was quoted here.]

_______________________________________________
Neuroimaging mailing list
Neuroimaging at python.org
https://mail.python.org/mailman/listinfo/neuroimaging
-------------- next part --------------
An HTML attachment was scrubbed...
URL: