[Image-SIG] 16-bit unsigned short: PIL tile descriptor?

K Schutte schutte@fel.tno.nl
Thu, 06 Mar 2003 08:27:08 +0100


Hi,

From my understanding, the correct mode for unsigned shorts (in the same
byte order as the machine you work on) is I;16. An F;16 mode denotes a
16-bit float, which is obviously wrong.
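
For the big-endian data described below, the tile descriptor would look
something like this -- a rough sketch only, assuming width and height have
already been parsed out of the 1024-byte header inside your plugin's
_open() method:

    # inside _open(), after reading the 1024-byte header
    self.mode = "I;16"             # 16-bit unsigned pixels, native byte order
    self.size = (width, height)    # taken from your header
    # tile entry: (decoder, region, file offset, (rawmode, stride, orientation));
    # the "I;16B" rawmode tells the raw decoder the file data is big-endian
    # unsigned 16-bit, so it gets byte-swapped on little-endian machines
    self.tile = [("raw", (0, 0) + self.size, 1024, ("I;16B", 0, 1))]

For a quick test without the plugin machinery, the same arguments can be
fed to Image.fromstring, e.g.
Image.fromstring("I;16", (width, height), data, "raw", "I;16B"),
where data is everything after the header -- assuming your PIL build
supports the "I;16B" rawmode.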

Good luck,

Klamer

"K.Arun" wrote:
> 
> Hello,
> 
>         I'm trying to write a PIL plug-in for an in-house image
> format. It's a very simple format with a 1024-byte header followed by
> unsigned shorts in big-endian byte order. I tried using both the 'raw'
> and 'bit' decoders without much success - while 'L' works with 'raw',
> in that I don't get any errors, my images look weird when
> processed. Using "F;16B" in the parameters tuple for 'raw' mode (what
> should self.mode be in this case?) results in a 'ValueError:
> unrecognized mode' exception being thrown. Could someone throw light
> on what self.mode needs to be set to and the correct tile descriptor
> values to use? Thanks,
> 
>                                                   -arun
> 
> _______________________________________________
> Image-SIG maillist  -  Image-SIG@python.org
> http://mail.python.org/mailman/listinfo/image-sig

-- 
Klamer Schutte, E-mail: Schutte@fel.tno.nl
Electro-Optical Systems, TNO Physics and Electronics Laboratory
Tel: +31-70-3740469 -- Fax: +31-70-3740654 -- Mobile: +31-6-51316671