From ibmalone at gmail.com Mon Jan 9 13:59:30 2017 From: ibmalone at gmail.com (Ian Malone) Date: Mon, 9 Jan 2017 18:59:30 +0000 Subject: [Neuroimaging] iterating a workflow over inputs Message-ID: Hi, I've got a relatively complex workflow that I'd like to use as a sub-workflow of another one; however, it needs to be iterated over some of the inputs. I suppose I could replace the appropriate pe.Node()s in it with MapNode()s, but there are a fair number of them, and quite a few connections. (I also think that this would prevent it being used on single-instance inputs without first packing them into a list, though I could be wrong.) This is what I've tried:

sub_workflow = pe.MapNode(create_my_workflow(),
                          iterfield=['in_4d_file', 'in_text_file'],
                          name='data_fit')

Unsurprisingly it fails with:

    raise IOError('interface must be an instance of an Interface')
IOError: interface must be an instance of an Interface

I haven't been able to find much discussion of this, though there was one mention of wrapping it in a function, which looked more like it was intended to cause a separate cluster submission of the sub-workflow (and would require using the function parameters/return to connect up the inputs and outputs): https://groups.google.com/forum/#!topic/nipy-user/zMGPJ74_fJU

# def reuse_wrapper(subject, etc.):
#     ....
#     reuseWorkflow = create_run_first_all() # get my run_first_all workflow
#     reuseWorkflow.run(plugin="Condor")
#
#
# rfa_node = MapNode(Function(function='reuse_wrapper' etc. ) iterfield='subject')
# topLevelWorkflow.connect(#subjects to rfa_node)
# topLevelWorkflow.run(plugin="Condor")

Is this at all possible, or should I bite the bullet and start MapNode-ing the sub-workflow?
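For readers skimming the archive: the behaviour being asked for — MapNode semantics over a whole workflow — amounts to running the sub-workflow once per paired element of the iterfields, ideally while still accepting a single (non-list) input. A plain-Python sketch of those semantics, where `fit_one` and `map_workflow` are hypothetical stand-ins (not nipype API):

```python
# Conceptual sketch of the requested MapNode-over-workflow semantics.
# 'fit_one' is a hypothetical stand-in for one run of the sub-workflow
# on a single (4D image, text file) pair; it is not nipype API.
def fit_one(in_4d_file, in_text_file):
    # A real sub-workflow would run here and return its output files.
    return {'dwi': in_4d_file, 'bval': in_text_file}

def map_workflow(in_4d_files, in_text_files):
    # Accept single inputs as well as lists, as the poster wants.
    if not isinstance(in_4d_files, list):
        in_4d_files, in_text_files = [in_4d_files], [in_text_files]
    # One sub-workflow run per paired element of the iterfields.
    return [fit_one(f4d, ftxt)
            for f4d, ftxt in zip(in_4d_files, in_text_files)]

results = map_workflow(['a.nii', 'b.nii'], ['a.bval', 'b.bval'])
```

This is only the zipping/iteration logic; the thread below discusses how to actually wrap a nipype workflow so it can be driven this way.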
-- imalone From ibmalone at gmail.com Tue Jan 10 11:47:27 2017 From: ibmalone at gmail.com (Ian Malone) Date: Tue, 10 Jan 2017 16:47:27 +0000 Subject: [Neuroimaging] iterating a workflow over inputs In-Reply-To: References: Message-ID: On 9 January 2017 at 18:59, Ian Malone wrote: > Hi, > > I've got a relatively complex workflow that I'd like to use as a > sub-workflow of another one, however it needs to be iterated over some > of the inputs. I suppose I could replace the appropriate pe.Node()s in > it with MapNode()s, but there are a fair number of them, and quite a > few connections. (I also think, that this would prevent it being used > on single instance inputs without first packing them into a list, > though I could be wrong.) > > Is this at all possible, or should I bite the bullet and start > MapNode-ing the sub-workflow? This has turned out to be doubly interesting as I forgot my sub-workflow already had its own sub-workflow, which is already used elsewhere with a single set of inputs. I suppose I can use interfaces.utility.Split() to extract the single output again in that case, but the bigger workflow (which I'd also like to use elsewhere) has quite a few outputs, and connecting a split to each one seems a bit unwieldy. Any good solutions to this? -- imalone From dan.lurie at berkeley.edu Tue Jan 10 16:07:27 2017 From: dan.lurie at berkeley.edu (Dan Lurie) Date: Tue, 10 Jan 2017 16:07:27 -0500 Subject: [Neuroimaging] [fmriprep] Specifying different priors and templates Message-ID: Hey fmriprep team, I am working on a pull request to add support for using lesion masks during T1 registration, and wanted to check in with you guys before I got too deep. A couple of questions: 1) I'm assuming it's best to create a separate new pipeline based on ds005 instead of modifying ds005? I want to make sure I'm adding things in a way that is consistent with the overall architecture you've envisioned.
2) Lesion masks are usually in subject native space, so using cost function masking in ANTs requires specifying the template as the moving image and then using the inverse transforms (see here: https://github.com/stnava/ANTs/issues/48). My plan was to just change the pipeline workflow inputs/outputs to accommodate this change (e.g. use the inverse transform when registering the EPI and TPMs). Is there anything tricky about how the pipeline is organized that I should keep in mind when doing this? 3) Is run_workflow.py the place to add options/settings for things like specifying what templates to use during skull stripping and tissue segmentation? I looked through the BIDS-Apps docs to see if there was any common design pattern established, but didn't see anything. Thanks in advance, and apologies if I'm missing anything obvious. Happy to chat over Skype/Hangouts if that's easier than email. Dan -- Dan Lurie Graduate Student Department of Psychology University of California, Berkeley http://despolab.berkeley.edu/lurie -------------- next part -------------- An HTML attachment was scrubbed... URL: From krzysztof.gorgolewski at gmail.com Tue Jan 10 18:03:51 2017 From: krzysztof.gorgolewski at gmail.com (Chris Gorgolewski) Date: Tue, 10 Jan 2017 15:03:51 -0800 Subject: [Neuroimaging] [fmriprep] Specifying different priors and templates In-Reply-To: References: Message-ID: Hi Dan! It would be great to get your help! More comments below: On Tue, Jan 10, 2017 at 1:07 PM, Dan Lurie wrote: > Hey fmriprep team, > > I am working on a pull request to add support for using lesion masks > during T1 registration, and wanted to check in with you guys before I got > too deep. A couple of questions: > > 1) I'm assuming it's best to create a separate new pipeline based on ds005 > instead of modifying ds005? I want to make sure I'm adding things in a way > that is consistent with the overall architecture you've envisioned. > Yes, this is the best way to go about it.
In general FMRIPREP is looking at what data is available for a given subject and then applies the appropriate pipeline (this happens here ). So in your case you should see if there is a lesion mask available for a given subject and choose your new workflow instead of ds005. The selection of the workflow can be overridden on the command line - so you need to add the name of the new workflow here . BTW we are considering changing workflow names to make them more intuitive. The new workflow should reuse subworkflows of ds005 as much as it can. We want to minimize the amount of replicated code. 2) Lesion masks are usually in subject native space, so using cost function > masking in ANTs requires specifying the template as the moving image and > then using the inverse transforms (see here: https://github.com/ > stnava/ANTs/issues/48). My plan was to just change the pipeline workflow > inputs/outputs to accommodate this change (e.g. use the inverse transform > when registering the EPI and TPMs). Is there anything tricky about how the > pipeline is organized that I should keep in mind when doing this? > I see. You just need to connect the output transforms to the right outputnode fields. You also might need to calculate/generate inverse transforms. > 3) Is run_workflow.py the place to add options/settings for things like > specifying what templates to use during skull stripping and tissue > segmentation? I looked through the BIDS-Apps docs to see if there was any > common design pattern established, but didn't see anything. > If you want to add an option to use different templates for BrainExtraction, doing it via a command-line parameter in run_workflow.py is the best option. Is the template we are using now (OASIS) not working for your data? Maybe we can figure out how to choose which template to use based on some data properties? Thanks in advance, and apologies if I'm missing anything obvious. Happy to > chat over Skype/Hangouts if that's easier than email.
> Happy to have a chat if you want to. We should get some documentation for developers soon. BTW one thing I'm interested in is how you organized your input data. The lesion masks are hand-drawn, yes? We should work on some standardized way of saving such data in BIDS that FMRIPREP can work with. Best, Chris > Dan > > -- > Dan Lurie > Graduate Student > Department of Psychology > University of California, Berkeley > http://despolab.berkeley.edu/lurie > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From satra at mit.edu Wed Jan 11 10:47:37 2017 From: satra at mit.edu (Satrajit Ghosh) Date: Wed, 11 Jan 2017 10:47:37 -0500 Subject: [Neuroimaging] iterating a workflow over inputs In-Reply-To: References: Message-ID: hi ian, in the current API, there are a few ways to do this, but all involve wrapping the subworkflow in something. option 1: create a function node option 2: create a workflow interface in both cases, some code will have to take the node/interface inputs and map them to the inputs of the subworkflow, and map the outputs to the outputs of the node/interface. however, unless your cluster allows job submission from arbitrary nodes, you may need to preallocate resources.
a function node example:

def run_subwf(input1, input2, plugin='MultiProc',
              plugin_args={'n_procs': 2}):
    import os
    from myscripts import create_workflow_func
    wf = create_workflow_func()
    wf.inputs.inputnode.input1 = input1
    wf.inputs.inputnode.input2 = input2
    wf.base_dir = os.getcwd()
    egraph = wf.run(plugin=plugin, plugin_args=plugin_args)
    # pick the output node out of the executed graph
    outputnode = next(node for node in egraph.nodes()
                      if 'outputnode' in node.name)
    return outputnode.result.outputs.out1, outputnode.result.outputs.out2

subnode = Node(Function(input_names=['input1', 'input2', ...],
                        output_names=['out1', 'out2'],
                        function=run_subwf),
               name='subwf')

one could probably optimize a few things automatically given a workflow. in the next generation API, this will be doable without creating these special nodes/interfaces. cheers, satra On Tue, Jan 10, 2017 at 11:47 AM, Ian Malone wrote: > On 9 January 2017 at 18:59, Ian Malone wrote: > > Hi, > > > > I've got a relatively complex workflow that I'd like to use as a > > sub-workflow of another one, however it needs to be iterated over some > > of the inputs. I suppose I could replace the appropriate pe.Node()s in > > it with MapNode()s, but there are a fair number of them, and quite a > > few connections. (I also think, that this would prevent it being used > > on single instance inputs without first packing them into a list, > > though I could be wrong.) > > > > > Is this at all possible, or should I bite the bullet and start > > MapNode-ing the sub-workflow? > > This has turned out to be doubly interesting as I forgot my > sub-workflow already had its own sub-workflow, which is already used > elsewhere with a single set of inputs. I suppose I can use > interfaces.utility.Split() to extract the single output again in that > case, but the bigger workflow (which I'd also like to use elsewhere) > has quite a few outputs, and connecting a split to each one seems a > bit unwieldy. Any good solutions to this?
> > -- > imalone > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ibmalone at gmail.com Wed Jan 11 13:22:13 2017 From: ibmalone at gmail.com (Ian Malone) Date: Wed, 11 Jan 2017 18:22:13 +0000 Subject: [Neuroimaging] iterating a workflow over inputs In-Reply-To: References: Message-ID: Thanks. I guess the workflow interface version you mention would be similar, using Select() or something to split the inputs for sub-workflow assignment? I wanted arbitrary numbers of processes and ended up doing something along those lines:

for dtiN in range(in_count):
    split_unmerged_images = pe.Node(niu.Select(index=dtiN),
                                    name='select_images{0}'.format(dtiN))
    workflow.connect(out_unmerger, 'dwis_out',
                     split_unmerged_images, 'inlist')
    split_unmerged_bvals = pe.Node(niu.Select(index=dtiN),
                                   name='select_bvals{0}'.format(dtiN))
    workflow.connect(out_unmerger, 'bvals_out',
                     split_unmerged_bvals, 'inlist')
    split_unmerged_bvecs = pe.Node(niu.Select(index=dtiN),
                                   name='select_bvecs{0}'.format(dtiN))
    workflow.connect(out_unmerger, 'bvecs_out',
                     split_unmerged_bvecs, 'inlist')
    split_unmerged_orig_file = pe.Node(niu.Select(index=dtiN),
                                       name='select_orig_file{0}'.format(dtiN))
    workflow.connect(out_unmerger, 'orig_file',
                     split_unmerged_orig_file, 'inlist')
    workflow.connect(split_unmerged_images, "out",
                     tensor_fit[dtiN], 'input_node.in_dwi_4d_file')
    workflow.connect(split_unmerged_bvals, "out",
                     tensor_fit[dtiN], 'input_node.in_bval_file')
    workflow.connect(split_unmerged_bvecs, "out",
                     tensor_fit[dtiN], 'input_node.in_bvec_file')
    workflow.connect(split_unmerged_orig_file, "out",
                     tensor_fit[dtiN], 'input_node.in_orig_filename')
    tensor_fit[dtiN].inputs.input_node.in_t1_file = in_t1
    workflow.connect(r, 'output_node.t1_mask',
                     tensor_fit[dtiN], 'input_node.in_t1_mask_file')
    workflow.connect(r, 'output_node.mask',
                     tensor_fit[dtiN], 'input_node.in_b0_mask_file')
    workflow.connect(r, 'output_node.b0_to_t1',
                     tensor_fit[dtiN], 'input_node.in_b0_to_t1')
    workflow.connect(tensor_fit[dtiN], 'renamer.fa',
                     ds, 'unmerge.@fa{0}'.format(dtiN))
    workflow.connect(tensor_fit[dtiN], 'renamer.fa_res',
                     ds, 'unmerge.@fa_res{0}'.format(dtiN))
    workflow.connect(tensor_fit[dtiN], 'renamer.b0',
                     ds, 'unmerge.@b0{0}'.format(dtiN))
    workflow.connect(tensor_fit[dtiN], 'renamer.b0_res',
                     ds, 'unmerge.@b0_res{0}'.format(dtiN))

It's a dti workflow, fortunately the number of nodes is determined by the number of datasets supplied to the program (they have to be merged for an earlier processing step), so it's easy to loop over. I'm not sure if this would work currently if the number of processes needed is determined in an earlier node (I suppose something similar to this but in the function node example could work in that case). Either way, handling for this in the next API would be very good news! Best wishes, Ian On 11 January 2017 at 15:47, Satrajit Ghosh wrote: > hi ian, > > in the current API, there are a few ways to do this, but all involve > wrapping the subworkflow in something. > > option 1: create a function node > option 2: create a workflow interface > > in both cases, some code will have to take the node/interface inputs and map > them to the inputs of the subworkflow, take the outputs and mapping it to > the outputs of the node/interface. however, unless your cluster allows job > submission from arbitrary nodes, you may need to preallocate resources.
> > a function node example: > > def run_subwf(input1, input2, plugin='MultiProc', plugin_args={'n_procs': > 2}): > import os > from myscripts import create_workflow_func > wf = create_workflow_func() > wf.inputs.inputnode.input1 = input1 > wf.inputs.inputnode.input2 = input2 > wf.base_dir = os.getcwd() > egraph = wf.run(plugin=plugin, plugin_args=plugin_args) > outputnode = next(node for node in egraph.nodes() if 'outputnode' in node.name) > return outputnode.result.outputs.out1, outputnode.result.outputs.out2 > > subnode = Node(Function(input_names=['input1', 'input2', ...], > output_names=['out1', 'out2'], function=run_subwf), name='subwf') > > one could probably optimize a few things automatically given a workflow. > > in the next generation API, this will be doable without creating these > special nodes/interfaces. > > cheers, > > satra > > On Tue, Jan 10, 2017 at 11:47 AM, Ian Malone wrote: >> >> On 9 January 2017 at 18:59, Ian Malone wrote: >> > Hi, >> > >> > I've got a relatively complex workflow that I'd like to use as a >> > sub-workflow of another one, however it needs to be iterated over some >> > of the inputs. I suppose I could replace the appropriate pe.Node()s in >> > it with MapNode()s, but there are a fair number of them, and quite a >> > few connections. (I also think, that this would prevent it being used >> > on single instance inputs without first packing them into a list, >> > though I could be wrong.) >> > >> >> > Is this at all possible, or should I bite the bullet and start >> > MapNode-ing the sub-workflow? >> >> This has turned out to be doubly interesting as I forgot my >> sub-workflow already had its own sub-workflow, which is already used >> elsewhere with a single set of inputs. I suppose I can use >> interfaces.utility.Split() to extract the single output again in that >> case, but the bigger workflow (which I'd also like to use elsewhere) >> has quite a few outputs, and connecting a split to each one seems a >> bit unwieldy. Any good solutions to this?
>> >> -- >> imalone >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging > > > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > -- imalone From krzysztof.gorgolewski at gmail.com Fri Jan 13 11:20:28 2017 From: krzysztof.gorgolewski at gmail.com (Chris Gorgolewski) Date: Fri, 13 Jan 2017 08:20:28 -0800 Subject: [Neuroimaging] [fmriprep] Specifying different priors and templates In-Reply-To: References: Message-ID: The documentation for new contributors is up now (along with a new release): http://fmriprep.readthedocs.io/en/latest/contributors.html On Tue, Jan 10, 2017 at 3:03 PM, Chris Gorgolewski < krzysztof.gorgolewski at gmail.com> wrote: > Hi Dan! > > It would be great to get your help! More comments below: > > On Tue, Jan 10, 2017 at 1:07 PM, Dan Lurie wrote: > >> Hey fmriprep team, >> >> I am working on a pull request to add support for using lesion masks >> during T1 registration, and wanted to check in with you guys before I got >> too deep. A couple of questions: >> >> 1) I'm assuming it's best to create a separate new pipeline based on >> ds005 instead of modifying ds005? I want to make sure I'm adding things in >> a way that is consistent with the overall architecture you've envisioned. >> > Yes this is the best way to go about it. In general FMRIPREP is looking > at what data is available for a given subject and then applies the > appropriate pipeline (this happens here > ). > So in your case you should see if there is a lesion mask available for a > given subject and choose your new workflow instead of ds005. The selection > of the workflow can be overridden on the command line - so you need to add > the name of the new workflow here > . > BTW we are considering changing workflow names > to make them more > intuitive.
> > The new workflow should reuse subworkflows of ds005 as much as it can. We > want to minimize the amount of replicated code. > > 2) Lesion masks are usually in subject native space, so using cost >> function masking in ANTs requires specifying the template as the moving >> image and then using the inverse transforms (see here: >> https://github.com/stnava/ANTs/issues/48). My plan was to just change >> the pipeline workflow inputs/outputs to accommodate this change (e.g. use >> the inverse transform when registering the EPI and TPMs). Is there anything >> tricky about how the pipeline is organized that I should keep in mind when >> doing this? >> > I see. You just need to connect the output transforms to the right > outputnode fields. You also might need to calculate/generate inverse > transforms. > > >> 3) Is run_workflow.py the place to add options/settings for things like >> specifying what templates to use during skull stripping and tissue >> segmentation? I looked through the BIDS-Apps docs to see if there was any >> common design pattern established, but didn't see anything. >> > If you want to add an option to use different templates for > BrainExtraction, doing it via a command-line parameter in run_workflow.py is > the best option. Is the template we are using now (OASIS) not working for > your data? Maybe we can figure out how to choose which template to use > based on some data properties? > > Thanks in advance, and apologies if I'm missing anything obvious. Happy to >> chat over Skype/Hangouts if that's easier than email. >> > Happy to have a chat if you want to. We should get some documentation for developers > soon. > > BTW one thing I'm interested in is how you organized your input data. > The lesion masks are hand-drawn, yes? We should work on some standardized > way of saving such data in BIDS that FMRIPREP can work with.
> > Best, > Chris > >> Dan >> >> -- >> Dan Lurie >> Graduate Student >> Department of Psychology >> University of California, Berkeley >> http://despolab.berkeley.edu/lurie >> >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shuanw at uw.edu Mon Jan 23 01:19:48 2017 From: shuanw at uw.edu (shuang wu) Date: Mon, 23 Jan 2017 01:19:48 -0500 Subject: [Neuroimaging] Extract XYZ coordinates in mm using nibabel Message-ID: Hi there: I have recently been using the nibabel (nib) package in Python to try to extract information from an .img file. I use nib.load(...).get_data() to extract the intensity values from the img file, but I do not know how to extract the corresponding xyz coordinates in mm. In Matlab, I can use [Y, xyz] = spm_read_vols(vol) to extract both the intensity values Y and the coordinates xyz in mm. Is there any way to do the same thing using Python? Thanks in advance for any help! Best, Shuang(Sam) Wu, MS Applied Mathematics Department University of Washington shuanw at uw.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Mon Jan 23 13:15:56 2017 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 23 Jan 2017 10:15:56 -0800 Subject: [Neuroimaging] Extract XYZ coordinates in mm using nibabel In-Reply-To: References: Message-ID: Hi, On Sun, Jan 22, 2017 at 10:19 PM, shuang wu wrote: > Hi there: > > I am recently using the nibabel (nib) package in python to try to extract > the information from .img file. I use nib.load.get_data() to extract the > intensity values from the img file, but I do not know how to extract those > corresponding xyz coordinates in mm. > > In Matlab, I can use [Y, xyz] = spm_read_vols(vol) to extract both the > intensity values Y and the coordinates xyz in mm.
> > Is there any way to do the same thing using python? > > Thanks in advance for any help! There's no canned way of doing that in nibabel, but this snippet will do the job:

In [1]: import numpy as np
In [2]: import nibabel as nib
In [3]: img = nib.load('test.nii')
In [4]: shape, affine = img.shape[:3], img.affine
In [5]: coords = np.array(np.meshgrid(*(range(i) for i in shape), indexing='ij'))
In [6]: coords = np.rollaxis(coords, 0, len(shape) + 1)
In [7]: mm_coords = nib.affines.apply_affine(affine, coords)

For background see: http://nipy.org/nibabel/coordinate_systems.html Cheers, Matthew From matthew.brett at gmail.com Tue Jan 24 12:43:56 2017 From: matthew.brett at gmail.com (Matthew Brett) Date: Tue, 24 Jan 2017 09:43:56 -0800 Subject: [Neuroimaging] Quick review for nipy PR? Message-ID: Hi, I just put in a pull request to nipy to fix lots of failures with current numpy: https://github.com/nipy/nipy/pull/419 The Debian freeze is very fast approaching, so I'd be hugely grateful for a quick review, so we can get these fixes in before the freeze. Any takers? Cheers, Matthew From shuanw at uw.edu Wed Jan 25 05:41:43 2017 From: shuanw at uw.edu (shuang wu) Date: Wed, 25 Jan 2017 18:41:43 +0800 Subject: [Neuroimaging] Extract XYZ coordinates in mm using nibabel In-Reply-To: References: Message-ID: This is very helpful and I can use it to find the mm coordinates. Thanks! Shuang(Sam) Wu, MS Applied Mathematics Department University of Washington shuanw at uw.edu 2017-01-24 2:15 GMT+08:00 Matthew Brett : > Hi, > > On Sun, Jan 22, 2017 at 10:19 PM, shuang wu wrote: > > Hi there: > > > > I am recently using the nibabel (nib) package in python to try to extract > > the information from .img file. I use nib.load.get_data() to extract the > > intensity values from the img file, but I do not know how to extract > those > > corresponding xyz coordinates in mm.
> > > > In Matlab, I can use [Y, xyz] = spm_read_vols(vol) to extract both the > > intensity values Y and the coordinates xyz in mm. > > > > Is there any way to do the same thing using python? > > > > Thanks in advance for any help! > > There's no canned way of doing that in nibabel, but this snippet will > do the job: > > In [1]: import numpy as np > In [2]: import nibabel as nib > In [3]: img = nib.load('test.nii') > In [4]: shape, affine = img.shape[:3], img.affine > In [5]: coords = np.array(np.meshgrid(*(range(i) for i in shape), > indexing='ij')) > In [6]: coords = np.rollaxis(coords, 0, len(shape) + 1) > In [7]: mm_coords = nib.affines.apply_affine(affine, coords) > > For background see: http://nipy.org/nibabel/coordinate_systems.html > > Cheers, > > Matthew > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > -------------- next part -------------- An HTML attachment was scrubbed... URL: From b.wiestler at tum.de Mon Jan 30 05:48:01 2017 From: b.wiestler at tum.de (Benedikt Wiestler) Date: Mon, 30 Jan 2017 11:48:01 +0100 Subject: [Neuroimaging] Two (beginner) questions for DiPy Message-ID: <588F19E1.7030506@tum.de> Dear all, first of all thank you very much for making DiPy publicly available. I am new to Python and diffusion imaging (I have worked before with R and genetics), and have two (hopefully easy) questions: 1) As input, I have a 4D nifti image with my B0 and B1000 images stacked. How can I run an affine registration (for motion / eddy correction) on this stack, using the first B0 image as fixed image and the remaining images (one after another) as moving images? Or do I have to split the stack first? 2) How do I rotate the B-matrix after registration? Thanks a lot! Benedikt -- Dr. med. Benedikt Wiestler Abteilung für Neuroradiologie Klinikum rechts der Isar, TU München Ismaninger Str.
22 81675 München From stadler at lin-magdeburg.de Mon Jan 30 06:03:31 2017 From: stadler at lin-magdeburg.de (Jörg Stadler) Date: Mon, 30 Jan 2017 12:03:31 +0100 Subject: [Neuroimaging] Two (beginner) questions for DiPy In-Reply-To: <588F19E1.7030506@tum.de> References: <588F19E1.7030506@tum.de> Message-ID: <5b31dabb-bc00-6773-0423-b62f9f9f17f2@lin-magdeburg.de> Dear Benedikt, please have a look at http://nipype.readthedocs.io/en/0.12.0/interfaces/generated/nipype.workflows.dmri.fsl.artifacts.html This will solve Questions 1 & 2 for you (and a lot more) Joerg > Dear all, > > first of all thank you very much for making DiPy publicly available. > I am new to Python and diffusion imaging (I have worked before with R > and genetics), and have two (hopefully easy) questions: > > 1) As input, I have a 4D nifti image with my B0 and B1000 images > stacked. How can I run an affine registration (for motion / eddy > correction) on this stack, using the first B0 image as fixed image and > the remaining images (one after another) as moving images? Or do I have > to split the stack first? > 2) How do I rotate the B-matrix after registration? > > Thanks a lot! > > Benedikt From arokem at gmail.com Mon Jan 30 17:04:58 2017 From: arokem at gmail.com (Ariel Rokem) Date: Mon, 30 Jan 2017 14:04:58 -0800 Subject: [Neuroimaging] Two (beginner) questions for DiPy In-Reply-To: <588F19E1.7030506@tum.de> References: <588F19E1.7030506@tum.de> Message-ID: Hi Benedikt, On Mon, Jan 30, 2017 at 2:48 AM, Benedikt Wiestler wrote: > Dear all, > > first of all thank you very much for making DiPy publicly available. > I am new to Python and diffusion imaging (I have worked before with R and > genetics), and have two (hopefully easy) questions: > > 1) As input, I have a 4D nifti image with my B0 and B1000 images stacked.
> How can I run an affine registration (for motion / eddy correction) on this > stack, using the first B0 image as fixed image and the remaining images > (one after another) as moving images? Or do I have to split the stack first? > 2) How do I rotate the B-matrix after registration? > This function will do that: https://github.com/nipy/dipy/blob/master/dipy/core/gradients.py#L265 Cheers, Ariel > Thanks a lot! > > Benedikt > -- > Dr. med. Benedikt Wiestler > Abteilung für Neuroradiologie > Klinikum rechts der Isar, TU München > Ismaninger Str. 22 > 81675 München > > _______________________________________________ > Neuroimaging mailing list > Neuroimaging at python.org > https://mail.python.org/mailman/listinfo/neuroimaging > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jetzel at wustl.edu Tue Jan 31 16:29:11 2017 From: jetzel at wustl.edu (Jo Etzel) Date: Tue, 31 Jan 2017 15:29:11 -0600 Subject: [Neuroimaging] call for papers & tutorials: PRNI (Pattern Recognition in NeuroImaging) Message-ID: <2c10896f-7e8d-5101-db39-0bfb4d8d0773@wustl.edu> ******* please accept our apologies for cross-posting ******* ----------------------------------------------------------------------- FIRST CALL FOR PAPERS AND TUTORIALS PRNI 2017 7th International Workshop on Pattern Recognition in Neuroimaging to be held 21-23 June 2017 at the University of Toronto, Toronto, Canada www.prni.org - @PRNIworkshop - www.facebook.com/PRNIworkshop/ ----------------------------------------------------------------------- Pattern recognition techniques are an important tool for neuroimaging data analysis. These techniques are helping to elucidate normal and abnormal brain function, cognition and perception, and anatomical and functional brain architecture, to identify biomarkers for diagnosis and personalized medicine, and to decipher neural mechanisms underlying human cognition.
The International Workshop on Pattern Recognition in Neuroimaging (PRNI) aims to: (1) foster dialogue between developers and users of cutting-edge analysis techniques in order to find matches between analysis techniques and neuroscientific questions; (2) showcase recent methodological advances in pattern recognition algorithms for neuroimaging analysis; and (3) identify challenging neuroscientific questions in need of new analysis approaches. Authors should prepare full papers with a maximum length of 4 pages (two-column IEEE style) for double-blind review. The manuscript submission deadline is 27 March 2017, 11:59 pm EST. Accepted manuscripts will be assigned either to an oral or a poster session; all accepted manuscripts will be included in the workshop proceedings. As in previous years, in addition to full-length papers PRNI will also accept short abstracts (500 words excluding the title, abstract, tables, figure and data legends, and references) for poster presentation. Finally, this year PRNI has an open call for tutorial proposals. A tutorial can take the form of a 2h, 4h or whole-day event aimed at demonstrating a computational technique, software tool, or specific concept. Tutorial proposals featuring hands-on demonstrations and promoting diversity (e.g. gender, background, institution) will be preferred. PRNI will cover conference registration fees for up to two tutors per accepted program. The submission deadline is also 27 March 2017, 11:59 pm EST. Please see www.prni.org and follow @PRNIworkshop and www.facebook.com/PRNIworkshop/ for news and details. -- Joset A. Etzel, Ph.D. Research Analyst Cognitive Control & Psychopathology Lab Washington University in St.
Louis http://mvpa.blogspot.com/ From arokem at gmail.com Tue Jan 31 16:54:16 2017 From: arokem at gmail.com (Ariel Rokem) Date: Tue, 31 Jan 2017 13:54:16 -0800 Subject: [Neuroimaging] Two (beginner) questions for DiPy In-Reply-To: <1485847213823.45200@tum.de> References: <588F19E1.7030506@tum.de> <1485847213823.45200@tum.de> Message-ID: Hi Benedikt, On Mon, Jan 30, 2017 at 11:20 PM, Wiestler, Benedikt wrote: > Dear Ariel, > > > > thank you very much for your very helpful reply reg. B-Matrix rotation! > > Could you also point out to me how I input a 4D DTI stack (B0 first, then > some B1000s) for affine co-registration? > > That requires a bit more code. I wrote something in that direction here: https://github.com/yeatmanlab/pyAFQ/blob/master/AFQ/registration.py#L342-L391, but I haven't had a lot of opportunities to test/tweak this. Feel free to use that as a starting point. Cheers, Ariel > Cheers, > > > > Benedikt > > --- > Dr. med. Benedikt Wiestler > Abteilung für Neuroradiologie > Klinikum rechts der Isar, TU München > Ismaninger Str. 22 > 81675 München > ------------------------------ > *From:* Neuroimaging > on behalf of Ariel Rokem > *Sent:* Monday, January 30, 2017 23:04 > *To:* Neuroimaging analysis in Python > *Subject:* Re: [Neuroimaging] Two (beginner) questions for DiPy > > Hi Benedikt, > > On Mon, Jan 30, 2017 at 2:48 AM, Benedikt Wiestler > wrote: > >> Dear all, >> >> first of all thank you very much for making DiPy publicly available. >> I am new to Python and diffusion imaging (I have worked before with R and >> genetics), and have two (hopefully easy) questions: >> >> 1) As input, I have a 4D nifti image with my B0 and B1000 images stacked. >> How can I run an affine registration (for motion / eddy correction) on this >> stack, using the first B0 image as fixed image and the remaining images >> (one after another) as moving images? Or do I have to split the stack first? >> 2) How do I rotate the B-matrix after registration?
>> > > This function will do that: https://github.com/nipy/ > dipy/blob/master/dipy/core/gradients.py#L265 > > Cheers, > > Ariel > > >> Thanks a lot! >> >> Benedikt >> -- >> Dr. med. Benedikt Wiestler >> Abteilung für Neuroradiologie >> Klinikum rechts der Isar, TU München >> Ismaninger Str. 22 >> 81675 München >> >> _______________________________________________ >> Neuroimaging mailing list >> Neuroimaging at python.org >> https://mail.python.org/mailman/listinfo/neuroimaging >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL:
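The B-matrix rotation discussed in this thread — which DiPy's `reorient_bvecs` (linked above) handles for a whole gradient table — reduces to applying the rotational component of each volume's registration affine to that volume's b-vector. A minimal NumPy sketch of that linear algebra (illustrative only: direction conventions depend on how the registration tool defines its transforms, so this is not a drop-in replacement for the DiPy function):

```python
import numpy as np

def rotate_bvec(bvec, affine):
    """Rotate one b-vector by the rotational part of a 4x4 affine.

    The rotation is extracted from the affine's 3x3 block via polar
    decomposition (SVD), which discards any scaling or shear the
    registration introduced, and the result is re-normalised.
    """
    A = np.asarray(affine, dtype=float)[:3, :3]
    U, _, Vt = np.linalg.svd(A)
    R = U @ Vt                        # closest pure rotation to A
    v = R @ np.asarray(bvec, dtype=float)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v      # b=0 volumes keep a zero vector

# A 90-degree rotation about z should map an x-axis gradient onto y.
theta = np.pi / 2
affine = np.eye(4)
affine[:2, :2] = [[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]]
rotated = rotate_bvec([1.0, 0.0, 0.0], affine)  # approximately [0, 1, 0]
```

In practice one such affine exists per volume (from the motion/eddy registration), and each b-vector is rotated by its own volume's transform.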