[SciPy-User] 2D slice of transformed data

Chris Weisiger cweisiger at msg.ucsf.edu
Thu Mar 24 11:09:24 EDT 2011


That works fine for the XY view with no Z translation, but it breaks down
completely as soon as you want to look at other views or introduce a Z
translation factor.

It is possible to read pixel data back after OpenGL has applied its
transforms, e.g. by using a framebuffer object (FBO) to render to a texture
and then using glGetTexImage to read the texture's pixels. But even setting
aside the fact that this approach doesn't apply to the non-XY views, I
suspect SciPy's interpolation would be more accurate than OpenGL's.
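
For reference, that readback path could look roughly like the sketch below
(PyOpenGL, assuming a GL context is already current and float textures are
available; the actual drawing of the transformed slice is left as a
placeholder, so this is an illustration rather than working viewer code):

    import numpy as np
    from OpenGL.GL import (
        glGenTextures, glBindTexture, glTexImage2D,
        glGenFramebuffers, glBindFramebuffer, glFramebufferTexture2D,
        glCheckFramebufferStatus, glGetTexImage,
        GL_TEXTURE_2D, GL_RGBA32F, GL_RGBA, GL_FLOAT,
        GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_FRAMEBUFFER_COMPLETE,
    )

    def render_and_read_back(draw_transformed_slice, width, height):
        # draw_transformed_slice() is a placeholder for whatever routine
        # renders the slice with the OpenGL transforms applied.
        tex = glGenTextures(1)
        glBindTexture(GL_TEXTURE_2D, tex)
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0,
                     GL_RGBA, GL_FLOAT, None)

        # FBO with that texture as its color attachment: while the FBO is
        # bound, drawing lands in the texture instead of on the screen.
        fbo = glGenFramebuffers(1)
        glBindFramebuffer(GL_FRAMEBUFFER, fbo)
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, tex, 0)
        assert glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE

        draw_transformed_slice()

        # Pull the transformed pixels back into a numpy array.
        raw = glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_FLOAT)
        glBindFramebuffer(GL_FRAMEBUFFER, 0)
        return np.frombuffer(raw, dtype=np.float32).reshape(height, width, 4)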

-Chris

On Thu, Mar 24, 2011 at 5:16 AM, Sebastian Haase <seb.haase at gmail.com> wrote:

> Hi Chris,
> if I understood correctly, you are foremost interested in visualizing
> the data after applying the respective pixel transforms. Could you not
> simply use the OpenGL rotate, translate, and scale operations? Then it
> could be done essentially instantaneously.
> There is already code for this in the viewer modules of my Priithon
> project.
> For such large data it would be good to have a video card with 1GB (if
> not 2GB) of memory, which is now rather cheap to buy (one hundred to a
> few hundred dollars).
> I'm not sure, but it might even be feasible, once the user has
> confirmed that a given transform parameter set is optimal, to read the
> transformed pixel values back from the graphics card -- if you really
> want that. But I would probably suggest just storing the parameters in
> the image data header and taking those into account for all further
> visualization and other image processing you might be doing.
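
For illustration, the OpenGL route described above could look roughly like
this with PyOpenGL and the legacy fixed-function pipeline; the function and
parameter names here are made up for the sketch, and draw_slice() stands in
for whatever routine draws the textured data:

    from OpenGL.GL import (glMatrixMode, glPushMatrix, glPopMatrix,
                           glTranslatef, glRotatef, glScalef, GL_MODELVIEW)

    def draw_aligned(draw_slice, dx, dy, dz, rot_z_deg, scale_xy):
        # Let the video card apply the 5-parameter transform via the
        # modelview matrix; the pixel data itself is never touched on the CPU.
        glMatrixMode(GL_MODELVIEW)
        glPushMatrix()
        glTranslatef(dx, dy, dz)              # X/Y/Z shifts
        glRotatef(rot_z_deg, 0.0, 0.0, 1.0)   # rotation about the Z axis
        glScalef(scale_xy, scale_xy, 1.0)     # equal scaling in X and Y
        draw_slice()                          # draw the textured data
        glPopMatrix()
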
>
> Regards,
> Sebastian
>
>
> On Thu, Mar 24, 2011 at 2:22 AM, Isaiah Norton <isaiah.norton at gmail.com>
> wrote:
> > Hi Chris,
> >
> > It's not strictly Python, but VTK and ITK are the heavy iron for this
> > sort of thing (Python wrappings available). There are several tools
> > built on these libraries to provide user-friendly 3D/4D registration,
> > visualization, etc.
> >
> > GoFigure2: http://gofigure2.sourceforge.net/
> > - very microscopy oriented. 4D support. Linux/Mac/Windows.
> >
> > V3D: http://penglab.janelia.org/proj/v3d/V3D/About_V3D.html
> > - also 4D, and runs on all three platforms.
> >
> > BioImageXD
> > - mostly Python glue around VTK/ITK.
> >
> > If you want to build something custom in Python, check out MayaVi - it
> > uses VTK under the hood, so the transforms will be handled fast in C++,
> > but it has nice Pythonic tvtk syntax and native numpy support.
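
A tiny illustration of that route (not from the thread; a sketch assuming
the mayavi package and a (Z, Y, X) numpy volume) -- orthogonal slice widgets
through a scalar field:

    import numpy as np
    from mayavi import mlab

    vol = np.random.random((50, 512, 512)).astype(np.float32)  # stand-in data
    src = mlab.pipeline.scalar_field(vol)
    # Interactive planes that can be dragged through the volume.
    mlab.pipeline.image_plane_widget(src, plane_orientation='x_axes',
                                     slice_index=25)
    mlab.pipeline.image_plane_widget(src, plane_orientation='z_axes',
                                     slice_index=256)
    mlab.show()
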
> >
> > -Isaiah
> >
> >
> >
> >
> > On Wed, Mar 23, 2011 at 6:00 PM, Chris Weisiger <cweisiger at msg.ucsf.edu>
> > wrote:
> >>
> >> In preface, I'm not remotely an expert at array manipulation here. I'm
> >> an experienced programmer, but not an experienced *scientific*
> >> programmer. I'm sure what I want to do is possible, and I'm pretty
> >> certain it's even possible to do efficiently, but figuring out the
> >> actual implementation is giving me fits.
> >>
> >> I have two four-dimensional arrays of data: time, Z, Y, X. These
> >> represent microscopy data taken of the same sample with two different
> >> cameras. Their views don't quite match up if you overlay them, so we
> >> have a three-dimensional transform to align one array with the other.
> >> That transformation consists of X, Y, and Z translations (shifts),
> >> rotation about the Z axis, and equal scaling in X and Y -- thus, the
> >> transformation has 5 parameters. I can perform the transformation on
> >> the data without difficulty with ndimage.affine_transform, but because
> >> we typically have hundreds of millions of pixels in one array, it takes
> >> a moderately long time. A representative array would be 30x50x512x512
> >> or thereabouts.
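
The brute-force path described above might look something like this sketch
(the parameter names, the (T, Z, Y, X) axis order, and the composition order
of the transform are assumptions, not the actual code):

    import numpy as np
    from scipy import ndimage

    def transform_volume(data, dx, dy, dz, angle_deg, scale):
        # data has shape (T, Z, Y, X); returns the aligned copy.
        theta = np.deg2rad(angle_deg)
        c, s = np.cos(theta), np.sin(theta)
        # affine_transform uses the "pull" convention: each output voxel o is
        # sampled from the input at matrix.dot(o) + offset, so matrix/offset
        # must map aligned (Z, Y, X) coordinates back to raw coordinates.
        matrix = np.array([[1.0,        0.0,       0.0],
                           [0.0,  c / scale, s / scale],
                           [0.0, -s / scale, c / scale]])
        offset = -matrix.dot([dz, dy, dx])
        out = np.empty_like(data)
        for t in range(data.shape[0]):
            # One Z-stack at a time; order=1 (trilinear) is much faster than
            # the default cubic spline and is usually fine for a preview.
            out[t] = ndimage.affine_transform(data[t], matrix, offset=offset,
                                              order=1)
        return out
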
> >>
> >> I'm writing a program to allow users to adjust the transformation and
> >> see how well-aligned the data looks from several perspectives. In
> >> addition to the traditional XY view, we also want to show XZ and YZ
> >> views, as well as kymographs (e.g. TX, TY, TZ views). Thus, I need to
> >> be able to show 2D slices of the transformed data in a timely fashion.
> >> These slices are always perpendicular to two axes (e.g. an XY slice
> >> passing through T = 0, Z = 20, or a TZ slice passing through X = 256,
> >> Y = 256), never diagonal. It seems like the fast way to do this would
> >> be to take each pixel in the desired slice, apply the reverse
> >> transform, and figure out where in the original data it came from. But
> >> I'm having trouble figuring out how to efficiently do this.
> >>
> >> I could construct a 3D array with shape (length of axis 1), (length of
> >> axis 2), (4), such that each position in the array is a 4-tuple of the
> >> coordinates of the pixel in the desired slice. For example, if doing a
> >> YX slice at T = 10, Z = 20, the array would look like
> >> [[[10, 20, 0, 0], [10, 20, 1, 0], [10, 20, 2, 0], ...],
> >>  [[10, 20, 0, 1], [10, 20, 1, 1], ...]]. Then perhaps there'd be some
> >> way to efficiently apply the inverse transform to each coordinate
> >> tuple, and then use ndimage.map_coordinates to turn those into pixel
> >> data. But I haven't managed to figure that out yet.
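
One way to flesh that idea out (a sketch with made-up names, assuming the
(T, Z, Y, X) axis order above): build the coordinate grid for just the
requested slice, push its spatial part through the inverse 3D affine, and
hand the result to scipy.ndimage.map_coordinates, so only the slice's own
pixels are ever interpolated:

    import numpy as np
    from scipy import ndimage

    def slice_through(data, fixed, inv_matrix, inv_offset, order=1):
        # `fixed` maps axis index -> fixed coordinate, e.g. {0: 10, 1: 20}
        # for a YX slice at T=10, Z=20.  `inv_matrix`/`inv_offset` give the
        # inverse of the 3D (Z, Y, X) alignment transform (aligned -> raw).
        free = [ax for ax in range(data.ndim) if ax not in fixed]
        shape = [data.shape[ax] for ax in free]
        grid = np.indices(shape, dtype=float)           # shape (2, h, w)

        # Full (T, Z, Y, X) coordinates for every pixel of the slice.
        coords = np.empty((data.ndim,) + tuple(shape))
        for ax, value in fixed.items():
            coords[ax] = value
        for ax, g in zip(free, grid):
            coords[ax] = g

        # Map the spatial (Z, Y, X) part back into the untransformed data.
        zyx = coords[1:].reshape(3, -1)
        zyx = np.dot(inv_matrix, zyx) + np.asarray(inv_offset)[:, None]
        coords[1:] = zyx.reshape((3,) + tuple(shape))

        # Interpolate only this slice's pixels from the original array.
        return ndimage.map_coordinates(data, coords, order=order)

For a YX slice at T = 10, Z = 20 this would be called as
slice_through(data, {0: 10, 1: 20}, inv_matrix, inv_offset), with the
inverse matrix and offset built from the 5 parameters in the same "pull"
sense that affine_transform expects.
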
> >>
> >> By any chance is this already solved? If not, any suggestions /
> >> assistance would be wonderful.
> >>
> >> -Chris