[Numpy-discussion] nd_image.affine_transform edge effects

James Turner jturner at gemini.edu
Thu Mar 15 18:01:55 EDT 2007


Hi Stefan,

Thanks for the suggestions!

> Is this related to
> 
> http://projects.scipy.org/scipy/scipy/ticket/213
> 
> in any way?

As far as I can see, the problems look different, but thanks for
the example of how to document this. I did confirm that your example
exhibits the same behaviour under numarray, in case that is useful.

> Code snippets to illustrate the problem would be welcome.

OK. I have had a go at producing a code snippet. I apologize that
this is based on numarray rather than numpy, since I'm using STScI
Python, but I think it should be very easy to convert if you have
numpy instead.

What I am doing is transforming overlapping input images onto a
common, larger grid and co-adding them. Although I'm using
affine_transform on 3D data from FITS images, the issue can be
illustrated using a simple 1D translation of a single 2D test array.
The input
values are just [4., 3., 2., 1.] in each row. With a translation of
-0.1, the values should therefore be something like
[X, 3.1, 2.1, 1.1, X, X], where the Xs represent points outside the
original data range. What I actually get, however, is roughly
[X, 3.1, 2.1, 1.0, 1.9, X]. The 5th value of 1.9 contaminates the
co-added data in the final output array. Now that I'm looking at this
element by element, I suppose the bad value of 1.9 is just the result
of extrapolating in order to preserve the original number of data
points, isn't it? Sorry I wasn't clear on that in my original post
-- but surely a blank value (as specified by cval) would be better?

I suppose I could work around this by blanking out the extrapolated
column after doing the affine_transform. I could work out which
column to blank based on the sign of the offset and the input array
dimensions, but that seems pretty messy and inefficient. Another idea
is to split the translation into integer and fractional parts, keep
the input and output array dimensions the same initially, and then
copy the output into a larger array with an integer offset. That is
messy to keep track of, though. Maybe a parameter could instead be
added to affine_transform that tells it to shrink the number of
output elements instead of extrapolating? I'd be a bit out of my
depth trying to implement that myself, even if the authors agree...
(maybe in a few months though).
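
To make the first idea concrete, the clean-up step I have in mind is
something like the rough sketch below. It's untested, it assumes a pure
translation along the last axis of a 2D array (as in my example further
down), and blank_extrapolated is just a made-up helper name; troffset
is the same offset sequence that was passed to affine_transform.

def blank_extrapolated(data, output, troffset, cval=-1.0):
    """Blank output columns whose source coordinate lies outside the
    input array, rather than leaving extrapolated values in them.

    Assumes the affine_transform was a pure translation along the last
    axis, so output column j was sampled at input coordinate
    j + troffset[-1]."""
    ncols_in = data.shape[-1]
    for j in range(output.shape[-1]):
        x = j + troffset[-1]        # input-space coordinate of column j
        if x < 0.0 or x > ncols_in - 1:
            output[:, j] = cval
    return output

In my example below, calling blank_extrapolated(I, I_off1, troffset)
after the affine_transform should turn the spurious 1.9 in the 5th
column back into the cval of -1.0.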

Can anyone comment on whether this problem should be considered a
bug, or whether it's intended behaviour that I should work around?

The code snippet follows below. Thanks for your patience with
someone who isn't accustomed to posting questions like this
routinely :-).

James.

-----

import numarray as N
import numarray.nd_image as ndi

# Create a 2D test pattern:
I = N.zeros((2,4),N.Float32)
I[:,:] = N.arange(4.0, 0.0, -1.0)

# Transformation parameters for a simple translation in 1D:
trmatrix = N.array([[1,0],[0,1]])
troffset = (0.0, -0.1)

# Apply the offset to the test pattern:
I_off1 = ndi.affine_transform(I, trmatrix, troffset, order=3, mode='constant',
                              cval=-1.0, output_shape=(2,6))

I_off2 = ndi.affine_transform(I, trmatrix, troffset, order=3, mode='constant',
                              cval=-1.0, output_shape=(2,6), prefilter=False)

# Compare the data before and after interpolation:
print I
print I_off1
print I_off2
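
# In case it saves anyone a minute, here is a rough translation of the
# same snippet to numpy/scipy. I haven't run this version myself (I only
# have numarray here), so please treat it as an untested sketch:

import numpy as np
from scipy import ndimage

I = np.zeros((2, 4), np.float32)
I[:, :] = np.arange(4.0, 0.0, -1.0)

trmatrix = np.array([[1, 0], [0, 1]])
troffset = (0.0, -0.1)

I_off1 = ndimage.affine_transform(I, trmatrix, troffset, order=3,
                                  mode='constant', cval=-1.0,
                                  output_shape=(2, 6))

I_off2 = ndimage.affine_transform(I, trmatrix, troffset, order=3,
                                  mode='constant', cval=-1.0,
                                  output_shape=(2, 6), prefilter=False)

print(I)
print(I_off1)
print(I_off2)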



