[Image-SIG] Re: Help wanted: sensible way to scale 16-bit grayscale image

Fredrik Lundh fredrik at pythonware.com
Wed Dec 15 20:42:34 CET 2004


Russell E. Owen wrote:

> I had hoped to use an LUT to directly map a 32-bit floating point image
> (or 16 bit integer) to 8 bit color or grayscale (for now grayscale is
> fine, though I hope to support pseudocolor at some point). I could then
> recompute the LUT as needed (e.g. if the user asked for a different
> scaling function or contrast or...).
>
> Unfortunately, I have not been able to figure out how to use an LUT.
> Despite the following hopeful statement in the Concepts section of the
> Handbook:
>
>  The mode of an image defines the type and depth of a pixel
>   in the image. The current release supports the following
>  standard modes:
>  ...
>  * P (8-bit pixels, mapped to any other mode using a colour palette)
>
> it appears that palettes can only be attached to L or P images. I hope
> I'm missing something obvious in the use of palettes.

lookup table (LUT) != palette.

the "point" method replaces every pixel in the image by the corresponding
value from the given lookup table:

    for each pixel in image:
        new pixel value = LUT[old pixel value]

"point" with a lookup table is currently only supported for 8-bit images.

a palette maps colour indexes to RGB values.  if you attach a palette to an
8-bit grayscale image (mode "L"), it becomes a palette image (mode "P"), but
it still only contains 8-bit values.
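
for example (a rough sketch; the red-tinted ramp is just a made-up
mapping):

        # build 768 palette entries: R, G, B for each of the 256 indexes
        palette = []
        for i in range(256):
            palette.extend([i, 0, 0])   # map index i to the RGB triple (i, 0, 0)
        im8.putpalette(palette)         # im8 was mode "L"; it's now mode "P"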

if you have 16-bit or 32-bit data, you can use the "getextrema" method to find
the lowest and highest value in the image, calculate a suitable linear transform to
bring the values into a 256-value range, and use "point" with a lambda expression
to convert the values to 0-255.  you can then convert the resulting image to "L",
and optionally attach a pseudo-colour palette to it to get a "P" image.

or, in code:

        lo, hi = im.getextrema()
        scale  = 255.0 / (hi - lo)
        offset = -lo * scale
        im = im.point(lambda v: v * scale + offset)
        im = im.convert("L")

in older Python versions, replace the point call with:

        im = im.point(lambda v, s=scale, o=offset: v * s + o)

this snippet simply maps the darkest pixel value to 0, and the lightest to 255.
you may want to adjust the lo/hi values before calculating the coefficients.
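for example (a sketch; the cut levels are arbitrary example values):

        # clip the input range to fixed cut levels before computing
        # the scale/offset coefficients
        lo = max(lo, 100.0)
        hi = min(hi, 5000.0)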

also note that this only works well if the source data is relatively linear; there's no
way to use a non-linear transform on I or F data in the current version of PIL.
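
if you need a non-linear response, one workaround is to do the linear
scaling as above, convert to "L", and then apply the non-linear curve
via an 8-bit lookup table (a sketch; the gamma value is an arbitrary
example):

        gamma = 0.5   # example value; < 1 brightens, > 1 darkens
        lut = [int(255.0 * (i / 255.0) ** gamma + 0.5) for i in range(256)]
        im = im.point(lut)   # im is the 8-bit "L" image produced above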

</F> 




