From lee.j.joon at gmail.com  Sun Nov 7 19:42:59 2010
From: lee.j.joon at gmail.com (Jae-Joon Lee)
Date: Mon, 8 Nov 2010 09:42:59 +0900
Subject: [AstroPy] pywcsgrid2 0.1b1 : matplotlib with astronomical fits images
Message-ID:

Hi,

I am pleased to announce the release of pywcsgrid2 0.1b1.

pywcsgrid2 is a python module to be used with matplotlib for displaying
astronomical fits images. It provides a custom Axes class (derived from
mpl's original Axes class) suitable for displaying fits images. Its main
functionality is to draw ticks, ticklabels, and grids in appropriate sky
coordinates.

For examples, take a look at the gallery

http://leejjoon.github.com/matplotlib_astronomy_gallery/

* Provides a custom Matplotlib Axes class based on the
  mpl_toolkits.axisartist module
* Easy integration with the mpl_toolkits.axes_grid1 module

Requirements
------------
* matplotlib v1.0 (or later)
* a wcs module (pywcs or Kapteyn)
* pyfits

Supported OS
------------
* Linux & OS X
* pywcsgrid2 itself is a pure python module.

URLs
----
Home page : http://leejjoon.github.com/pywcsgrid2/
Overview doc. : http://leejjoon.github.com/pywcsgrid2/users/overview.html
Download : http://github.com/leejjoon/pywcsgrid2/downloads
github : http://github.com/leejjoon/pywcsgrid2

For questions, bug reports or feature suggestions, please use the issue
tracker on github or contact me via email.

Regards,

-JJ

From thomas.robitaille at gmail.com  Tue Nov 9 13:39:12 2010
From: thomas.robitaille at gmail.com (Thomas Robitaille)
Date: Tue, 9 Nov 2010 13:39:12 -0500
Subject: [AstroPy] Python-Montage 0.9.2
Message-ID:

I am happy to announce the availability of Python-Montage 0.9.2, a
pure-Python package that provides a Python interface to the Montage
Mosaicking Software (http://montage.ipac.caltech.edu/). The Python module
includes both functions to access individual Montage commands, and
high-level functions to facilitate mosaicking and reprojecting.

More information and instructions for downloading and installing the
Python package are located at:

http://astrofrog.github.com/python-montage/

Please let me know if you encounter any problems or have any suggestions!

Best regards,

Thomas Robitaille

From jh at physics.ucf.edu  Wed Nov 10 19:24:33 2010
From: jh at physics.ucf.edu (Joe Harrington)
Date: Wed, 10 Nov 2010 19:24:33 -0500
Subject: [AstroPy] new pyfits version deletes NP_pyfits, breaking pickle
Message-ID:

My research group uses Python pickles to save data as it goes through our
pipeline (.npy and .npz do not save objects, and neither does HDF, etc.).
These need to be loadable forever, as we often compare work to work done
much earlier. Some of the objects we save contain pyfits header objects.
Pickles have to import all classes used in the pickled objects before
they load, and we are getting an ImportError about NP_pyfits. The file
NP_pyfits.py existed in stsci_python 2.8 but is gone in 2.10. The pickles
refer to this object explicitly:

....
sS'photchan'
p494
I3
sS'header'
p495
(ipyfits.NP_pyfits
Header
p496
(dp497
S'_hdutype'
p498
cpyfits.NP_pyfits
PrimaryHDU
p499
sS'ascard'
p500
ccopy_reg
_reconstructor
p501
(cpyfits.NP_pyfits
CardList
p502
c__builtin__
list
p503
(lp504
g501
(cpyfits.NP_pyfits
Card
p505
c__builtin__
object
p506
NtRp507
(dp508
S'_valuestring'
p509
S'T'
....

Is there any way to make our pickles readable again, other than running
the old version of pyfits forever? Can you provide a pickle converter
that replaces the old names in the file with whatever is new?
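One way to build such a converter is sketched below; it assumes the
pickles were written with protocol 0 (the plain-text format the excerpt
above comes from) and that the affected classes now live at the top level
of the pyfits package, so a byte-level rename of the module path is
enough. The file names are illustrative, and whether the unpickled
objects then behave identically still depends on how much the class
internals changed.

def rewrite_pickle(infile, outfile,
                   old='pyfits.NP_pyfits', new='pyfits'):
    """Rewrite module references in a protocol-0 (text) pickle file."""
    data = open(infile, 'rb').read()
    # Protocol-0 pickles store the module path as plain text in front of
    # each class name, so a simple substitution covers every reference.
    # (It would also touch the string if it happened to appear inside
    # pickled data, which is unlikely but worth keeping in mind.)
    open(outfile, 'wb').write(data.replace(old, new))

rewrite_pickle('old_session.pkl', 'new_session.pkl')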
Please (everyone, not just STScI) be aware of this issue going forward.
Pickles are the only way we know of to save objects. You can add things
to your classes, but if you change what they import (or otherwise break
pickle), nobody can restore your class across releases.

Thanks,

--jh--

From perry at stsci.edu  Thu Nov 11 14:38:38 2010
From: perry at stsci.edu (Perry Greenfield)
Date: Thu, 11 Nov 2010 14:38:38 -0500
Subject: [AstroPy] new pyfits version deletes NP_pyfits, breaking pickle
In-Reply-To:
References:
Message-ID:

Hi Joe,

We'll look into it. This is a general problem with pickles (and one
reason I've been hesitant about using them as save files). I wonder if
there is a better solution than that. In this case we had to clean out
the previous numarray interface.

Perry

On Nov 10, 2010, at 7:24 PM, Joe Harrington wrote:

> My research group uses Python pickles to save data as it goes through
> our pipeline (.npy and .npz do not save objects, and neither does HDF,
> etc.). These need to be loadable forever, as we often compare work to
> work done much earlier. Some of the objects we save contain pyfits
> header objects. Pickles have to import all classes used in the
> pickled objects before they load, and we are getting an ImportError
> about NP_pyfits. The file NP_pyfits.py existed in stsci_python 2.8
> but is gone in 2.10. The pickles refer to this object explicitly:
>
> ....
> sS'photchan'
> p494
> I3
> sS'header'
> p495
> (ipyfits.NP_pyfits
> Header
> p496
> (dp497
> S'_hdutype'
> p498
> cpyfits.NP_pyfits
> PrimaryHDU
> p499
> sS'ascard'
> p500
> ccopy_reg
> _reconstructor
> p501
> (cpyfits.NP_pyfits
> CardList
> p502
> c__builtin__
> list
> p503
> (lp504
> g501
> (cpyfits.NP_pyfits
> Card
> p505
> c__builtin__
> object
> p506
> NtRp507
> (dp508
> S'_valuestring'
> p509
> S'T'
> ....
>
> Is there any way to make our pickles readable again, other than
> running the old version of pyfits forever? Can you provide a pickle
> converter that replaces the old names in the file with whatever is
> new?
>
> Please (everyone, not just STScI) be aware of this issue going
> forward. Pickles are the only way we know of to save objects. You
> can add things to your classes, but if you change what they import (or
> otherwise break pickle), nobody can restore your class across
> releases.
>
> Thanks,
>
> --jh--

From erik.tollerud at gmail.com  Thu Nov 11 14:43:45 2010
From: erik.tollerud at gmail.com (Erik Tollerud)
Date: Thu, 11 Nov 2010 11:43:45 -0800
Subject: [AstroPy] new pyfits version deletes NP_pyfits, breaking pickle
In-Reply-To:
References:
Message-ID:

The pickle module supports customized unpicklers that let you control how
modules and classes are loaded. Have a look at
http://docs.python.org/library/pickle.html#subclassing-unpicklers
for details. If you're using cPickle, the snippet below should fix it. It
probably would make sense to have this in pyfits, though...

from cPickle import Unpickler
import imp, sys

def fg(modname, classname):
    if 'NP_pyfits' in modname:
        # ... do whatever is needed to get the correct pyfits module after
        # whatever changes were done -- for example, fall back to the
        # top-level pyfits namespace, where the classes now live:
        modname = 'pyfits'
    if modname in sys.modules:
        mod = sys.modules[modname]
    else:
        fp, pathname, description = imp.find_module(modname)
        try:
            mod = imp.load_module(modname, fp, pathname, description)
        finally:
            if fp:
                fp.close()
    return getattr(mod, classname)

with open(filename) as f:
    u = Unpickler(f)
    u.find_global = fg    # cPickle exposes the hook as this attribute
    obj = u.load()
    # (load however many objects you need...)
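With the pure-Python pickle module the equivalent hook is the find_class()
method on an Unpickler subclass rather than the find_global attribute. A
minimal sketch follows; the old-to-new mapping is an assumption about
where the classes ended up, not an official pyfits conversion table.

import pickle
import sys

class RenamingUnpickler(pickle.Unpickler):
    # Map retired module paths to where the classes are assumed to live
    # now; extend the table as other modules get reorganized.
    renames = {'pyfits.NP_pyfits': 'pyfits'}

    def find_class(self, module, name):
        module = self.renames.get(module, module)
        __import__(module)
        return getattr(sys.modules[module], name)

def load_old(filename):
    with open(filename, 'rb') as f:
        return RenamingUnpickler(f).load()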
On Wed, Nov 10, 2010 at 4:24 PM, Joe Harrington wrote: > My research group uses Python pickles to save data as it goes through > our pipeline (.npy and .npz do not save objects, and neither does HDF, > etc.). ?These need to be loadable forever, as we often compare work to > work done much earlier. ?Some of the objects we save contain pyfits > header objects. ?Pickles have to import all classes used in the > pickled objects before they load, and we are getting an ImportError > about NP_pyfits. ?The file NP_pyfits.py existed in stsci_python 2.8 > but is gone in 2.10. ?The pickles refer to this object explicitly: > > .... > sS'photchan' > p494 > I3 > sS'header' > p495 > (ipyfits.NP_pyfits > Header > p496 > (dp497 > S'_hdutype' > p498 > cpyfits.NP_pyfits > PrimaryHDU > p499 > sS'ascard' > p500 > ccopy_reg > _reconstructor > p501 > (cpyfits.NP_pyfits > CardList > p502 > c__builtin__ > list > p503 > (lp504 > g501 > (cpyfits.NP_pyfits > Card > p505 > c__builtin__ > object > p506 > NtRp507 > (dp508 > S'_valuestring' > p509 > S'T' > .... > > Is there any way to make our pickles readable again, other than > running the old version of pyfits forever? ?Can you provide a pickle > converter that replaces the old names in the file with whatever is > new? > > Please (everyone, not just STScI) be aware of this issue going > forward. ?Pickles are the only way we know of to save objects. ?You > can add things to your classes, but if you change what they import (or > otherwise break pickle), nobody can restore your class across > releases. > > Thanks, > > --jh-- > _______________________________________________ > AstroPy mailing list > AstroPy at scipy.org > http://mail.scipy.org/mailman/listinfo/astropy > -- Erik Tollerud From jh at physics.ucf.edu Thu Nov 11 17:57:57 2010 From: jh at physics.ucf.edu (Joe Harrington) Date: Thu, 11 Nov 2010 17:57:57 -0500 Subject: [AstroPy] new pyfits version deletes NP_pyfits, breaking pickle In-Reply-To: (message from Perry Greenfield on Thu, 11 Nov 2010 14:38:38 -0500) References: Message-ID: Erik, thanks for the unpickler code. We saw the note about those but don't offhand know how to use it to fix the problem. Perry, thanks for looking into it. Also, if you know of *any* other way to save an object, please say. It seems pretty clear to me that to load objects from any kind of save file, you have to import the classes of the object and any objects it contains that are not standard Python objects. So even if we had other methods for saving, they would have the same problem as pickle. But we have to be able to save objects! Perhaps saving the definitions of the types rather than importing them would be the way to go? I bet there's a long thread about this somewhere. --jh-- On Thu, 11 Nov 2010 14:38:38 -0500, Perry Greenfield wrote: > We'll look into it. This is a general problem with pickles (and one > reason I've been hesitant to avoid using them like save files). I > wonder if there is a better solution than that. In this case we had to > clean out the previous numarray interface. > > Perry > > On Nov 10, 2010, at 7:24 PM, Joe Harrington wrote: > > > My research group uses Python pickles to save data as it goes through > > our pipeline (.npy and .npz do not save objects, and neither does HDF, > > etc.). These need to be loadable forever, as we often compare work to > > work done much earlier. Some of the objects we save contain pyfits > > header objects. 
Pickles have to import all classes used in the > > pickled objects before they load, and we are getting an ImportError > > about NP_pyfits. The file NP_pyfits.py existed in stsci_python 2.8 > > but is gone in 2.10. The pickles refer to this object explicitly: > > > > .... > > sS'photchan' > > p494 > > I3 > > sS'header' > > p495 > > (ipyfits.NP_pyfits > > Header > > p496 > > (dp497 > > S'_hdutype' > > p498 > > cpyfits.NP_pyfits > > PrimaryHDU > > p499 > > sS'ascard' > > p500 > > ccopy_reg > > _reconstructor > > p501 > > (cpyfits.NP_pyfits > > CardList > > p502 > > c__builtin__ > > list > > p503 > > (lp504 > > g501 > > (cpyfits.NP_pyfits > > Card > > p505 > > c__builtin__ > > object > > p506 > > NtRp507 > > (dp508 > > S'_valuestring' > > p509 > > S'T' > > .... > > > > Is there any way to make our pickles readable again, other than > > running the old version of pyfits forever? Can you provide a pickle > > converter that replaces the old names in the file with whatever is > > new? > > > > Please (everyone, not just STScI) be aware of this issue going > > forward. Pickles are the only way we know of to save objects. You > > can add things to your classes, but if you change what they import (or > > otherwise break pickle), nobody can restore your class across > > releases. > > > > Thanks, > > > > --jh-- From cohen at lpta.in2p3.fr Thu Nov 11 18:13:20 2010 From: cohen at lpta.in2p3.fr (Johann Cohen-Tanugi) Date: Fri, 12 Nov 2010 00:13:20 +0100 Subject: [AstroPy] new pyfits version deletes NP_pyfits, breaking pickle In-Reply-To: References: Message-ID: <4CDC7890.7000805@lpta.in2p3.fr> well there is ROOT : http://root.cern.ch but it might represent too large a stretch..... Johann On 11/11/2010 11:57 PM, Joe Harrington wrote: > Erik, thanks for the unpickler code. We saw the note about those but > don't offhand know how to use it to fix the problem. > > Perry, thanks for looking into it. > > Also, if you know of *any* other way to save an object, please say. > It seems pretty clear to me that to load objects from any kind of save > file, you have to import the classes of the object and any objects it > contains that are not standard Python objects. So even if we had > other methods for saving, they would have the same problem as pickle. > But we have to be able to save objects! Perhaps saving the definitions > of the types rather than importing them would be the way to go? I bet > there's a long thread about this somewhere. > > --jh-- > > On Thu, 11 Nov 2010 14:38:38 -0500, Perry Greenfield wrote: > > >> We'll look into it. This is a general problem with pickles (and one >> reason I've been hesitant to avoid using them like save files). I >> wonder if there is a better solution than that. In this case we had to >> clean out the previous numarray interface. >> >> Perry >> >> On Nov 10, 2010, at 7:24 PM, Joe Harrington wrote: >> >> >>> My research group uses Python pickles to save data as it goes through >>> our pipeline (.npy and .npz do not save objects, and neither does HDF, >>> etc.). These need to be loadable forever, as we often compare work to >>> work done much earlier. Some of the objects we save contain pyfits >>> header objects. Pickles have to import all classes used in the >>> pickled objects before they load, and we are getting an ImportError >>> about NP_pyfits. The file NP_pyfits.py existed in stsci_python 2.8 >>> but is gone in 2.10. The pickles refer to this object explicitly: >>> >>> .... 
>>> sS'photchan' >>> p494 >>> I3 >>> sS'header' >>> p495 >>> (ipyfits.NP_pyfits >>> Header >>> p496 >>> (dp497 >>> S'_hdutype' >>> p498 >>> cpyfits.NP_pyfits >>> PrimaryHDU >>> p499 >>> sS'ascard' >>> p500 >>> ccopy_reg >>> _reconstructor >>> p501 >>> (cpyfits.NP_pyfits >>> CardList >>> p502 >>> c__builtin__ >>> list >>> p503 >>> (lp504 >>> g501 >>> (cpyfits.NP_pyfits >>> Card >>> p505 >>> c__builtin__ >>> object >>> p506 >>> NtRp507 >>> (dp508 >>> S'_valuestring' >>> p509 >>> S'T' >>> .... >>> >>> Is there any way to make our pickles readable again, other than >>> running the old version of pyfits forever? Can you provide a pickle >>> converter that replaces the old names in the file with whatever is >>> new? >>> >>> Please (everyone, not just STScI) be aware of this issue going >>> forward. Pickles are the only way we know of to save objects. You >>> can add things to your classes, but if you change what they import (or >>> otherwise break pickle), nobody can restore your class across >>> releases. >>> >>> Thanks, >>> >>> --jh-- >>> > _______________________________________________ > AstroPy mailing list > AstroPy at scipy.org > http://mail.scipy.org/mailman/listinfo/astropy > > From thomas.robitaille at gmail.com Thu Nov 11 18:25:28 2010 From: thomas.robitaille at gmail.com (Thomas Robitaille) Date: Thu, 11 Nov 2010 18:25:28 -0500 Subject: [AstroPy] new pyfits version deletes NP_pyfits, breaking pickle In-Reply-To: References: Message-ID: Hi Joe, > Also, if you know of *any* other way to save an object, please say. If you don't have too many different object types, and if you really want long-term retrieval, then you may want to consider actually not using pickles, but for example in the case of FITS headers, you could really just use the toTxtFile and fromTxtFile methods to save and read from ASCII. I can see how pickling can be useful for some instances, but I think FITS headers are definitely one example where there is not much gain in using pickles over plain old ASCII. You can always write your own 'pickle' module which would decide how to deal with various datatypes, and use a plain ASCII write/read for pyfits.Header objects. Cheers, Tom > It seems pretty clear to me that to load objects from any kind of save > file, you have to import the classes of the object and any objects it > contains that are not standard Python objects. So even if we had > other methods for saving, they would have the same problem as pickle. > But we have to be able to save objects! Perhaps saving the definitions > of the types rather than importing them would be the way to go? I bet > there's a long thread about this somewhere. > > --jh-- > > On Thu, 11 Nov 2010 14:38:38 -0500, Perry Greenfield wrote: > >> We'll look into it. This is a general problem with pickles (and one >> reason I've been hesitant to avoid using them like save files). I >> wonder if there is a better solution than that. In this case we had to >> clean out the previous numarray interface. >> >> Perry >> >> On Nov 10, 2010, at 7:24 PM, Joe Harrington wrote: >> >>> My research group uses Python pickles to save data as it goes through >>> our pipeline (.npy and .npz do not save objects, and neither does HDF, >>> etc.). These need to be loadable forever, as we often compare work to >>> work done much earlier. Some of the objects we save contain pyfits >>> header objects. Pickles have to import all classes used in the >>> pickled objects before they load, and we are getting an ImportError >>> about NP_pyfits. 
The file NP_pyfits.py existed in stsci_python 2.8 >>> but is gone in 2.10. The pickles refer to this object explicitly: >>> >>> .... >>> sS'photchan' >>> p494 >>> I3 >>> sS'header' >>> p495 >>> (ipyfits.NP_pyfits >>> Header >>> p496 >>> (dp497 >>> S'_hdutype' >>> p498 >>> cpyfits.NP_pyfits >>> PrimaryHDU >>> p499 >>> sS'ascard' >>> p500 >>> ccopy_reg >>> _reconstructor >>> p501 >>> (cpyfits.NP_pyfits >>> CardList >>> p502 >>> c__builtin__ >>> list >>> p503 >>> (lp504 >>> g501 >>> (cpyfits.NP_pyfits >>> Card >>> p505 >>> c__builtin__ >>> object >>> p506 >>> NtRp507 >>> (dp508 >>> S'_valuestring' >>> p509 >>> S'T' >>> .... >>> >>> Is there any way to make our pickles readable again, other than >>> running the old version of pyfits forever? Can you provide a pickle >>> converter that replaces the old names in the file with whatever is >>> new? >>> >>> Please (everyone, not just STScI) be aware of this issue going >>> forward. Pickles are the only way we know of to save objects. You >>> can add things to your classes, but if you change what they import (or >>> otherwise break pickle), nobody can restore your class across >>> releases. >>> >>> Thanks, >>> >>> --jh-- > _______________________________________________ > AstroPy mailing list > AstroPy at scipy.org > http://mail.scipy.org/mailman/listinfo/astropy From erik.tollerud at gmail.com Thu Nov 11 19:06:21 2010 From: erik.tollerud at gmail.com (Erik Tollerud) Date: Thu, 11 Nov 2010 16:06:21 -0800 Subject: [AstroPy] new pyfits version deletes NP_pyfits, breaking pickle In-Reply-To: References: Message-ID: > Erik, thanks for the unpickler code. ?We saw the note about those but > don't offhand know how to use it to fix the problem. Actually, I looked at this a bit further (anticipating that I will also encounter this problem when I next upgrade...), and have attached a python file that appears to do the conversion. Either call the function unpickle_old_pyfits to load your pyfits file, or use it as a command line script to save a new version of the file (appended with '.new') that has the same content, but unpickles correctly. Perry, perhaps this can be included somewhere in pyfits? > Also, if you know of *any* other way to save an object, please say. A solution for large amounts of data might be databases. The only actual object database (e.g. the entries are all actually objects rather than translating back and forth between SQL tables) is ZODB (http://www.zodb.org/). I know it has explicit support for migrating databases from an old class definition to a new one, but I've never actually used that functionality, so I don't know much about it. There's also lots of stuff out there to map python objects to SQL databases (I think SQLAlchemy is the most popular), but that of course requires figuring out how to make your object look like an SQL table. > contains that are not standard Python objects. ?So even if we had > other methods for saving, they would have the same problem as pickle. Yeah, this is in inherent problem... If you write the class yourself, you can override __setstate__ and __getstate__ to manually fix up the old classes to have whatever new things they need. But that doesn't help much if it's an external module that you didn't write. The whole idea of having the object data and the classes defined separately (inherent in python) I suppose forces this issue. 
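For a class you do control, the __getstate__/__setstate__ hooks mentioned
above look roughly like the sketch below; the class name, attribute, and
version numbers are only for illustration.

class Eclipse(object):
    STATE_VERSION = 2

    def __getstate__(self):
        # Record which layout of the class wrote this pickle.
        state = self.__dict__.copy()
        state['_state_version'] = self.STATE_VERSION
        return state

    def __setstate__(self, state):
        # Upgrade pickles written by older layouts of the class.
        if state.pop('_state_version', 1) < 2:
            state.setdefault('photchan', None)  # attribute added in v2
        self.__dict__.update(state)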
There is probably some way to use the imp and inspect modules to inject
old versions of classes into the stream of pickled objects, but it's
probably a lot of work to do, and wouldn't work if you have any C-based
extensions.

--
Erik Tollerud
-------------- next part --------------
A non-text attachment was scrubbed...
Name: np_unpickle.py
Type: text/x-python
Size: 1380 bytes
Desc: not available
URL:

From stefan.schwarzburg at googlemail.com  Fri Nov 12 04:29:53 2010
From: stefan.schwarzburg at googlemail.com (Stefan Schwarzburg)
Date: Fri, 12 Nov 2010 10:29:53 +0100
Subject: [AstroPy] new pyfits version deletes NP_pyfits, breaking pickle
In-Reply-To:
References:
Message-ID:

Dear Joe,

I know that you did not ask for this advice, but I feel the need to share
this with you anyway. I've seen projects in the past that had problems
similar to the ones you describe (although they used ROOT or R rather
than Python), and I think there is only one real solution to this: you
should try to separate the data and the logic, and store the data in a
standard file format that will be readable in the far future. In
astronomy this is the FITS format. Of course it is not able to do a lot
of the cool things you can do with pickled objects, and it has some
strange restrictions that come from the time when FITS was supposed to be
saved onto tapes, but this is actually its strength. Just as you are able
to open FITS files that were saved 20 years ago, you will be able to open
these files in 20 years.

Probably it will not be the same software (I guess there is a reason why
you chose Python and not Algol or Fortran 77), but the data will still be
there.

With pickled objects you will definitely get the same problems over and
over in the future. Pickled objects are great because you can send them
over a network and so on, but they are not a way to archive data.

FITS files are good enough for all the data of all the astronomy projects
I've seen so far; you sometimes have to give up a certain style of
thinking, but in the end you are always able to save what you need in
images, tables, and meta-data in the headers.

As I said, this was not what you asked for, but I think I needed to tell
you that large astronomy collaborations have started with something
similar to your setup and in the end had to learn the hard way that this
does not work and ties them to old software that they would like to give
up (and will switch to FITS in the future...).

Best regards,
Stefan

--
Institut für Astronomie und Astrophysik
Eberhard Karls Universität Tübingen
Sand 1 - D-72076 Tübingen
-----------------------------------------------------------------------
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jh at physics.ucf.edu  Fri Nov 12 10:00:49 2010
From: jh at physics.ucf.edu (Joe Harrington)
Date: Fri, 12 Nov 2010 10:00:49 -0500
Subject: [AstroPy] new pyfits version deletes NP_pyfits, breaking pickle
In-Reply-To: (message from Thomas Robitaille on Thu, 11 Nov 2010 18:25:28 -0500)
References:
Message-ID:

We use Python's OO capabilities extensively. Our basic class has over 100
attributes, has a tree structure, and contains complex objects within
itself. For example, it contains several FITS headers. We subclass it all
the time, our analysis has branches for each exoplanet eclipse we analyze
(we have over 100 of these now), and stuff gets added to each branch. So,
few of these objects even have the same attributes. We can't write a
save/load routine for each one.
IDL and MATLAB both provide a robust save/load capability, and we now have Python routines that can handle those formats. I hesitate to use them since I'm sure their object formats are different, but maybe they capture what's needed? Has anyone tried using them to save/restore complex Python objects?

What we need is a general facility for saving and loading such arbitrary objects. While FITS (and better, HDF) might store all the components of an object, you'd have to write something that would disassemble and reassemble them with all their Python properties, such as the names of the data types, the tree structure, and so forth.

That is what pickle does, and I think that its approach of importing to get the types it uses is the obvious one. The problems come when the import changes, of course.

From the pickle side, I think the only other alternative would be something that recorded the structure and the names given to each attribute, and any other internal properties, attempted an import, and if they conflicted gave you what you saved and a warning that it is out of sync with the import. You don't want just to create the old object and expect the user to figure it out when those objects are fed to new software that's expecting the new object. And saving old *methods* could be a disaster!

From the importee's side, including a version number in your object and checking it would let you have backward compatibility, as would providing unpicklers and converters when objects do change.

--jh--

Thomas Robitaille on Thu, 11 Nov 2010 18:25:28 -0500: > > Also, if you know of *any* other way to save an object, please say. > > If you don't have too many different object types, and if you really > want long-term retrieval, then you may want to consider actually > not using pickles, but for example in the case of FITS headers, > you could really just use the toTxtFile and fromTxtFile methods to > save and > read from ASCII. I can see how pickling can be useful for > some instances, but I think FITS headers are definitely one > example where there is not much gain in using pickles over plain > old ASCII. > You can always write your own 'pickle' module which > would decide > how to deal with various datatypes, and use a plain > ASCII write/read for pyfits.Header objects. > > Cheers, > > Tom > > > It seems pretty clear to me that to load objects from any kind of save > > file, you have to import the classes of the object and any objects it > > contains that are not standard Python objects. So even if we had > > other methods for saving, they would have the same problem as pickle. > > But we have to be able to save objects! Perhaps saving the definitions > > of the types rather than importing them would be the way to go? I bet > > there's a long thread about this somewhere. > > > > --jh-- > > > > On Thu, 11 Nov 2010 14:38:38 -0500, Perry Greenfield > > wrote: > > > >> We'll look into it. This is a general problem with pickles (and one > >> reason I've been hesitant to avoid using them like save files). I > >> wonder if there is a better solution than that. In this case we had to > >> clean out the previous numarray interface. > >> > >> Perry > >> > >> On Nov 10, 2010, at 7:24 PM, Joe Harrington wrote: > >> > >>> My research group uses Python pickles to save data as it goes through > >>> our pipeline (.npy and .npz do not save objects, and neither does HDF, > >>> etc.). These need to be loadable forever, as we often compare work to > >>> work done much earlier.
Some of the objects we save contain pyfits > >>> header objects. Pickles have to import all classes used in the > >>> pickled objects before they load, and we are getting an ImportError > >>> about NP_pyfits. The file NP_pyfits.py existed in stsci_python 2.8 > >>> but is gone in 2.10. The pickles refer to this object explicitly: > >>> > >>> .... > >>> sS'photchan' > >>> p494 > >>> I3 > >>> sS'header' > >>> p495 > >>> (ipyfits.NP_pyfits > >>> Header > >>> p496 > >>> (dp497 > >>> S'_hdutype' > >>> p498 > >>> cpyfits.NP_pyfits > >>> PrimaryHDU > >>> p499 > >>> sS'ascard' > >>> p500 > >>> ccopy_reg > >>> _reconstructor > >>> p501 > >>> (cpyfits.NP_pyfits > >>> CardList > >>> p502 > >>> c__builtin__ > >>> list > >>> p503 > >>> (lp504 > >>> g501 > >>> (cpyfits.NP_pyfits > >>> Card > >>> p505 > >>> c__builtin__ > >>> object > >>> p506 > >>> NtRp507 > >>> (dp508 > >>> S'_valuestring' > >>> p509 > >>> S'T' > >>> .... > >>> > >>> Is there any way to make our pickles readable again, other than > >>> running the old version of pyfits forever? Can you provide a pickle > >>> converter that replaces the old names in the file with whatever is > >>> new? > >>> > >>> Please (everyone, not just STScI) be aware of this issue going > >>> forward. Pickles are the only way we know of to save objects. You > >>> can add things to your classes, but if you change what they import (or > >>> otherwise break pickle), nobody can restore your class across > >>> releases. > >>> > >>> Thanks, > >>> > >>> --jh-- > > _______________________________________________ > > AstroPy mailing list > > AstroPy at scipy.org > > http://mail.scipy.org/mailman/listinfo/astropy From sienkiew at stsci.edu Fri Nov 12 10:27:14 2010 From: sienkiew at stsci.edu (Mark Sienkiewicz) Date: Fri, 12 Nov 2010 10:27:14 -0500 Subject: [AstroPy] new pyfits version deletes NP_pyfits, breaking pickle In-Reply-To: References: Message-ID: <4CDD5CD2.5030107@stsci.edu> Perry Greenfield wrote: > Hi Joe, > > We'll look into it. This is a general problem with pickles (and one > reason I've been hesitant to avoid using them like save files). I > wonder if there is a better solution than that. In this case we had to > clean out the previous numarray interface. The pickle module is flawed because it stores internal structure of an object that is not part of the object's defined interface. The flaw is inherent in the design of the python pickle module; it is impossible to fix. A more reliable pickling system works by making a common format for specifying objects, coupled with a custom pickler/unpickler for each data type that might be stored. The pickle converter needs to know about the object being pickled, but that knowledge is not stored in the pickle file. This method lacks some of the magic of the python pickler, but it is also less fragile. One obvious solution is to use a standard pickle format like XML or JSON. JSON is particularly attractive if you can convert your object into a dictionary or list that contains only dictionaries, lists, and strings. (The json library can convert this data structure to a string (or back) in a single call.) The whole point, though, to make a pickler that knows how to convert the object without storing knowledge of the objects internal structure. The pickle format (including the _names_ and _meanings_ of every field) must be well-defined. If it is, you can have picklers/unpicklers that make the pickle files work transparently with different libraries. 
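A rough sketch of what that can look like for the FITS-header case
described next, assuming the pyfits 2.x card accessors
(Header.ascardlist(), Card.key/.value/.comment); commentary cards and
values that JSON cannot represent directly would need extra handling.

import json
import pyfits

def header_to_json(header):
    # Serialize the header as an ordered list of (keyword, value, comment).
    cards = [(c.key, c.value, c.comment) for c in header.ascardlist()]
    return json.dumps(cards)

def header_from_json(text):
    # Rebuild a header from the list, using only public pyfits calls.
    header = pyfits.Header()
    for key, value, comment in json.loads(text):
        header.update(key, value, comment)
    return header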
So, for example, you could pickle a FITS header object as an ordered list of tuples, where each tuple is (name, value, comment). With this format, it should be possible to convert a FITS header to/from JSON using only the public interfaces in Pyfits. Public interfaces change less often than internal implementation, and so you are somewhat insulated from the current problem. Mark S. From Jim.Vickroy at noaa.gov Fri Nov 12 10:34:07 2010 From: Jim.Vickroy at noaa.gov (Jim Vickroy) Date: Fri, 12 Nov 2010 08:34:07 -0700 Subject: [AstroPy] new pyfits version deletes NP_pyfits, breaking pickle In-Reply-To: References: Message-ID: <4CDD5E6F.3050008@noaa.gov> ... not sure I really grasp your requirements and issues but you may want to take a look at pyyaml (http://pyyaml.org/). I use it occasionally and find it very powerful for my needs. -- jv On 11/12/2010 8:00 AM, Joe Harrington wrote: > We use Python's OO capabilities extensively. Our basic class has over > 100 attributes, has a tree structure, and contains complex objects > within itself. For example, it contains several FITS headers. We > subclass it all the time, our analysis has branches for each exoplanet > eclipse we analyze (we have over 100 of these now), and stuff gets > added to each branch. So, few of these objects even have the same > attributes. We can't write a save/load routine for each one. > > IDL and MATLAB both provide a robust save/load capability, and we now > have Python routines that can handle those formats. I hesitate to use > them since I'm sure their object formats are different, but maybe they > capture what's needed? Has anyone tried using them to save/restore > complex Python objects? > > What we need is a general facility for saving and loading such > arbitrary objects. While FITS (and better, HDF) might store all the > components of an object, you'd have to write something that would > disassemble and reassemble them with all their Python properties, such > as the names of the data types, the tree structure, and so forth. > > That is what pickle does, and I think that its approach of importing > to get the types it uses is the obvious one. The problems come when > the import changes, of course. > > > From the pickle side, I think the only other alternative would be > something that recorded the structure and the names given to each > attribute, and any other internal properties, attempted an import, and > if they conflicted gave you what you saved and a warning that it is > out of sync with the import. You don't want just to create the old > object and expect the user to figure it out when those objects are fed > to new software that's expecting the new object. And saving old > *methods* could be disaster! > > > From the importee's side, including a version number in your object > and checking it would let you have backward compatibility, as would > providing unpicklers and converters when objects do change. > > --jh-- > > Thomas Robitaille on Thu, 11 Nov 2010 18:25:28 -0500: > >>> Also, if you know of *any* other way to save an object, please say. >> If you don't have too many different object types, and if you really >> want long> -term retrieval, then you may want to consider actually >> not using p> ickles, but for example in the case of FITS headers, >> you could re> ally just use the toTxtFile and fromTxtFile methods to >> save and> read from ASCII. 
I can see how pickling can be useful for >> some inst> ances, but I think FITS headers are definitely one >> example where th> ere is not much gain in using pickles over plain >> old ASCII.> You can always write your own 'pickle' module which >> would decide> how to deal with various datatypes, and use a plain >> ASCII write> /read for pyfits.Header objects. >> >> Cheers, >> >> Tom >> >>> It seems pretty clear to me that to load objects from any kind of save >>> file, you have to import the classes of the object and any objects it >>> contains that are not standard Python objects. So even if we had >>> other methods for saving, they would have the same problem as pickle. >>> But we have to be able to save objects! Perhaps saving the definitions >>> of the types rather than importing them would be the way to go? I bet >>> there's a long thread about this somewhere. >>> >>> --jh-- >>> >>> On Thu, 11 Nov 2010 14:38:38 -0500, Perry Greenfield >>> wrote> : >>> >>>> We'll look into it. This is a general problem with pickles (and one >>>> reason I've been hesitant to avoid using them like save files). I >>>> wonder if there is a better solution than that. In this case we had to >>>> clean out the previous numarray interface. >>>> >>>> Perry >>>> >>>> On Nov 10, 2010, at 7:24 PM, Joe Harrington wrote: >>>> >>>>> My research group uses Python pickles to save data as it goes through >>>>> our pipeline (.npy and .npz do not save objects, and neither does HDF, >>>>> etc.). These need to be loadable forever, as we often compare work to >>>>> work done much earlier. Some of the objects we save contain pyfits >>>>> header objects. Pickles have to import all classes used in the >>>>> pickled objects before they load, and we are getting an ImportError >>>>> about NP_pyfits. The file NP_pyfits.py existed in stsci_python 2.8 >>>>> but is gone in 2.10. The pickles refer to this object explicitly: >>>>> >>>>> .... >>>>> sS'photchan' >>>>> p494 >>>>> I3 >>>>> sS'header' >>>>> p495 >>>>> (ipyfits.NP_pyfits >>>>> Header >>>>> p496 >>>>> (dp497 >>>>> S'_hdutype' >>>>> p498 >>>>> cpyfits.NP_pyfits >>>>> PrimaryHDU >>>>> p499 >>>>> sS'ascard' >>>>> p500 >>>>> ccopy_reg >>>>> _reconstructor >>>>> p501 >>>>> (cpyfits.NP_pyfits >>>>> CardList >>>>> p502 >>>>> c__builtin__ >>>>> list >>>>> p503 >>>>> (lp504 >>>>> g501 >>>>> (cpyfits.NP_pyfits >>>>> Card >>>>> p505 >>>>> c__builtin__ >>>>> object >>>>> p506 >>>>> NtRp507 >>>>> (dp508 >>>>> S'_valuestring' >>>>> p509 >>>>> S'T' >>>>> .... >>>>> >>>>> Is there any way to make our pickles readable again, other than >>>>> running the old version of pyfits forever? Can you provide a pickle >>>>> converter that replaces the old names in the file with whatever is >>>>> new? >>>>> >>>>> Please (everyone, not just STScI) be aware of this issue going >>>>> forward. Pickles are the only way we know of to save objects. You >>>>> can add things to your classes, but if you change what they import (or >>>>> otherwise break pickle), nobody can restore your class across >>>>> releases. 
>>>>> >>>>> Thanks, >>>>> >>>>> --jh-- >>> _______________________________________________ >>> AstroPy mailing list >>> AstroPy at scipy.org >>> http://mail.scipy.org/mailman/listinfo/astropy > _______________________________________________ > AstroPy mailing list > AstroPy at scipy.org > http://mail.scipy.org/mailman/listinfo/astropy From jh at physics.ucf.edu Fri Nov 12 10:39:21 2010 From: jh at physics.ucf.edu (Joe Harrington) Date: Fri, 12 Nov 2010 10:39:21 -0500 Subject: [AstroPy] new pyfits version deletes NP_pyfits, breaking pickle In-Reply-To: (astropy-request@scipy.org) References: Message-ID: Stefan, you make very good points, and we're aware of them. We do rigorously separate code and data, and we do save our final lightcurves and tables as FITS. We also save PNG and PS plots (each generated automatically in both projection and print versions), tables in both straight ASCII and Latex, and the text output of each run. But internally, we have to rely on complex Python objects to manage in a simple way what has become a very complex analysis. The analysis has branches, so we need to save at each branch point so that we can start parallel sessions that will load from that branch point and handle each of the branches (otherwise we end up reading the same gigabyte data set and finding all the bad pixels in it 50 times, for each of 100 data sets). It's the short delays between running the early stages and doing the 50 variants that's giving us the headache right now. Our archiving is simply to save the code (under SVN, with version numbers of everything recorded in each run), input data, and ancillary stuff that makes each analysis unique, and record how to run the pipeline to regenerate what we need. We update our OS and Python stack annually, and we archive those (including sources) so that we can in principle rerun under old software, though I hope that will never be necessary. My experience has been that when questions are raised about old projects, they're usually answered by looking at the code and work logs, along with the plots, figures, and final data saved at the time. --jh-- > Date: Fri, 12 Nov 2010 10:29:53 +0100 > From: Stefan Schwarzburg > Subject: Re: [AstroPy] new pyfits version deletes NP_pyfits, breaking > pickle > To: astropy at scipy.org > Message-ID: > > Content-Type: text/plain; charset="iso-8859-1" > > Dear Joe, > > I know that you did not ask for this advice, but I feel the need to share > this with you anyway. > I've seen projects in the past that had similar problems like you describe > (although they did not use python but root or R), and I think there is only > one real solution to this: > You should try to separate the data and logic and store the data in a > standard file format that will be readable in the far future. In astronomy > this is the FITS format. Of cause it is not able to do a lot of the cool > things you can do with pickled objects and it has some strange restrictions > that come from the time when FITS was supposed to be saved onto tapes, but > this is actually its strenght. Just as you are able to open FITS files that > were saved 20 years ago, you will be able to open these files in 20 years. > > Probably it will not be the same software (I guess there is a reason why you > chose python and not algol or fortran77) but the data will still be there. > > With pickled objects you will definitely get the same problems over and over > in the future. 
Pickled objects are great, because you can send them over a > network and so on, but they are not a way to archive data. > > FITS files are good enough for all data of all astronomy projects I've seen > so far, although you sometimes have to give up a certain style of thinking, > but in the end you are always able to save what you need in images, tables > and meta-data in the headers. > > As I said, this was not what you asked for, but I think I needed to tell you > that large astronomy collaborations started with something similar like your > setup and in the end had to learn the hard way that this does not work and > ties them to old software that they would like to give up (and will switch > to FITS in the future...). > > Best regards, > Stefan > > -- > Institut f?r Astronomie und Astrophysik > Eberhard Karls Universit?t T?bingen > Sand 1 - D-72076 T?bingen > ----------------------------------------------------------------------- > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > http://mail.scipy.org/pipermail/astropy/attachments/20101112/4925bfc9/att> > achment.html > > ------------------------------ > From jh at physics.ucf.edu Fri Nov 12 12:22:11 2010 From: jh at physics.ucf.edu (Joe Harrington) Date: Fri, 12 Nov 2010 12:22:11 -0500 Subject: [AstroPy] new pyfits version deletes NP_pyfits, breaking pickle In-Reply-To: <4CDD5CD2.5030107@stsci.edu> (message from Mark Sienkiewicz on Fri, 12 Nov 2010 10:27:14 -0500) References: <4CDD5CD2.5030107@stsci.edu> Message-ID: > A more reliable pickling system works by making a common format for > specifying objects, coupled with a custom pickler/unpickler for each > data type that might be stored. Mark, Python's objects are mutable, and this is a *good* thing. We can do much more with mutable objects as we develop a new analysis than we ever could with IDL's awful static objects, and this was a big motivator for me to switch from IDL to Python. A user can add to an object just by uttering the name of a new attribute and assigning something to it. They then carry that "basket" of associated variables through an analysis in an organized fashion, with one handle. They can add to it, save it, load it, copy it, pass it to functions, and so on, without having to track and separately treat hundreds of minor variables separately. It's the whole point of using OOP for interactive data analysis. But this makes writing a new pickler/unpickler for "each data type that must be stored" (which is all of them, when saving a session) an impossible solution. Every possible mutation that a user might make would need its own pickler/unpickler, and a general save/load-session command would be impossible to write. So, I think the pickle authors realized that and did the best they could. So, it's incumbent on software developers to realize that anything pickle needs is part of their public interface, like it or not. Even if you provide your own pickler/unpickler, that doesn't cover the case where one of your objects is an attribute of a higher-level object, since pickle won't know where to find it when it encounters that object. Or actually it does. It's in your import. Oops, that's the current problem, when the import changes. Perhaps a possible pickle PEP might allow you to register your pickler/unpickler when you define an object, so that this could be part of the solution. But then the unpickler would have to be saved in the pickle to ensure that it will be available. 
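Something close to that registry already exists in the standard library:
copy_reg (copyreg in Python 3) lets you register, per class, a reduction
function that pickle will use instead of dumping the instance dict. A
minimal sketch follows; note that these reducers are only consulted for
instances of new-style classes, the rebuild function must stay importable
when the pickle is read back, and reducing a Header to its card tuples is
only an illustration, not pyfits policy.

import copy_reg
import pyfits

def _rebuild_header(cards):
    header = pyfits.Header()
    for key, value, comment in cards:
        header.update(key, value, comment)
    return header

def _reduce_header(header):
    # Tell pickle to store the cards plus a callable that rebuilds them.
    cards = [(c.key, c.value, c.comment) for c in header.ascardlist()]
    return (_rebuild_header, (cards,))

copy_reg.pickle(pyfits.Header, _reduce_header)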
--jh-- From jturner at gemini.edu Mon Nov 15 14:36:06 2010 From: jturner at gemini.edu (James Turner) Date: Mon, 15 Nov 2010 16:36:06 -0300 Subject: [AstroPy] new pyfits version deletes NP_pyfits, breaking pickle In-Reply-To: References: <4CDD5CD2.5030107@stsci.edu> Message-ID: <4CE18BA6.8010806@gemini.edu> > Perhaps a possible pickle PEP might allow you to register your > pickler/unpickler when you define an object, so that this could be > part of the solution. But then the unpickler would have to be saved > in the pickle to ensure that it will be available. Peter Python picked a PEP of pickled PyFITS. Sorry, too hard to resist. I don't really have experience with pickle, but couldn't you implement something like this now, say using a dictionary to track your on-the-fly attributes? It would be marginally less convenient, of course. Is that too naive? James. From miguel.deval at gmail.com Tue Nov 16 09:28:03 2010 From: miguel.deval at gmail.com (Miguel de Val Borro) Date: Tue, 16 Nov 2010 15:28:03 +0100 Subject: [AstroPy] PyFITS IndexError Message-ID: <20101116142803.GA20503@poincare> Hello, I have problems reading FITS files from the Herschel Science Archive generated using their Java data processing system. The file http://www.mps.mpg.de/data/outgoing/deval/hsa.fits has a binary table that cannot be accessed by index or name as shown below. When the line hdulist[1].data.field(0) is run a few times in the interpreter the data is finally printed. How can I read this kind of file using PyFITS? Best regards, Miguel >>> pyfits.__version__ '2.3.1' >>> hdulist = pyfits.open('hsa.fits') >>> hdulist[1].data.field(0) --------------------------------------------------------------------------- IndexError Traceback (most recent call last) /home/miguel/HssO/103P/data/fits/ in () /usr/local/lib/python2.6/dist-packages/pyfits/core.pyc in __getattr__(self, attr) 6240 if size: 6241 self._file.seek(self._datLoc) -> 6242 data = _get_tbdata(self) 6243 data._coldefs = self.columns 6244 data.formats = self.columns.formats /usr/local/lib/python2.6/dist-packages/pyfits/core.pyc in _get_tbdata(hdu) 5165 Get the table data from input (an HDU object). 
5166 """ -> 5167 tmp = hdu.columns 5168 # get the right shape for the data part of the random group, 5169 # since binary table does not support ND yet /usr/local/lib/python2.6/dist-packages/pyfits/core.pyc in __getattr__(self, attr) 6251 class_name = str(self.__class__) 6252 class_name = class_name[class_name.rfind('.')+1:-2] -> 6253 self.__dict__[attr] = ColDefs(self, tbtype=class_name) 6254 6255 elif attr == '_theap': /usr/local/lib/python2.6/dist-packages/pyfits/core.pyc in __init__(self, input, tbtype) 4887 if col <= _nfields and col > 0: 4888 cname = _commonNames[_keyNames.index(keyword)] -> 4889 dict[col-1][cname] = _card.value 4890 4891 # data reading will be delayed /usr/local/lib/python2.6/dist-packages/pyfits/core.pyc in __getattr__(self, name) 438 self._extractKey() 439 elif name in ['value', 'comment']: --> 440 self._extractValueComment(name) 441 else: 442 raise AttributeError, name /usr/local/lib/python2.6/dist-packages/pyfits/core.pyc in _extractValueComment(self, name) 1450 1451 # drop the ending "&" -> 1452 if _val[-1] == '&': 1453 _val = _val[:-1] 1454 longstring = longstring + _val IndexError: string index out of range

From mdroe at stsci.edu  Tue Nov 16 10:53:30 2010
From: mdroe at stsci.edu (Michael Droettboom)
Date: Tue, 16 Nov 2010 10:53:30 -0500
Subject: [AstroPy] PyFITS IndexError
In-Reply-To: <20101116142803.GA20503@poincare>
References: <20101116142803.GA20503@poincare>
Message-ID: <4CE2A8FA.7050906@stsci.edu>

The problem is related to the handling of CONTINUE cards.

The linked file has a surprising use of CONTINUE cards that pyfits has not been tested for. hsa.fits has the following:

TTYPE17 = 'LOF_code'           / &
CONTINUE  ''                   / &

The CONTINUE card has an empty value (which is what is tripping up pyfits). This is probably not strictly against the specification [1], but it is meaningless, and it's not clear why the system that generated the file would be spitting those out. Also, the '&' is supposed to go inside the value (i.e. inside the ''), not inside the comment part.

On 11/16/2010 09:28 AM, Miguel de Val Borro wrote: > Hello, > > I have problems reading FITS files from the Herschel Science Archive > generated using their Java data processing system. The file > http://www.mps.mpg.de/data/outgoing/deval/hsa.fits > has a binary table that cannot be accessed by index or name as shown > below. > > When the line hdulist[1].data.field(0) is run a few times in the > interpreter the data is finally printed. How can I read this kind of > file using PyFITS?
> > Best regards, > Miguel > > >>>> pyfits.__version__ >>>> > '2.3.1' > >>>> hdulist = pyfits.open('hsa.fits') >>>> hdulist[1].data.field(0) >>>> > --------------------------------------------------------------------------- > IndexError Traceback (most recent call last) > > /home/miguel/HssO/103P/data/fits/ in() > > /usr/local/lib/python2.6/dist-packages/pyfits/core.pyc in __getattr__(self, attr) > 6240 if size: > 6241 self._file.seek(self._datLoc) > -> 6242 data = _get_tbdata(self) > 6243 data._coldefs = self.columns > 6244 data.formats = self.columns.formats > > /usr/local/lib/python2.6/dist-packages/pyfits/core.pyc in _get_tbdata(hdu) > 5165 Get the table data from input (an HDU object). > 5166 """ > -> 5167 tmp = hdu.columns > 5168 # get the right shape for the data part of the random group, > > 5169 # since binary table does not support ND yet > > > /usr/local/lib/python2.6/dist-packages/pyfits/core.pyc in __getattr__(self, attr) > 6251 class_name = str(self.__class__) > 6252 class_name = class_name[class_name.rfind('.')+1:-2] > -> 6253 self.__dict__[attr] = ColDefs(self, tbtype=class_name) > 6254 > 6255 elif attr == '_theap': > > /usr/local/lib/python2.6/dist-packages/pyfits/core.pyc in __init__(self, input, tbtype) > 4887 if col<= _nfields and col> 0: > 4888 cname = _commonNames[_keyNames.index(keyword)] > -> 4889 dict[col-1][cname] = _card.value > 4890 > 4891 # data reading will be delayed > > > /usr/local/lib/python2.6/dist-packages/pyfits/core.pyc in __getattr__(self, name) > 438 self._extractKey() > 439 elif name in ['value', 'comment']: > --> 440 self._extractValueComment(name) > 441 else: > 442 raise AttributeError, name > > /usr/local/lib/python2.6/dist-packages/pyfits/core.pyc in _extractValueComment(self, name) > 1450 > 1451 # drop the ending "&" > > -> 1452 if _val[-1] == '&': > 1453 _val = _val[:-1] > 1454 longstring = longstring + _val > > IndexError: string index out of range > _______________________________________________ > AstroPy mailing list > AstroPy at scipy.org > http://mail.scipy.org/mailman/listinfo/astropy > -- Michael Droettboom Science Software Branch Space Telescope Science Institute Baltimore, Maryland, USA -------------- next part -------------- A non-text attachment was scrubbed... Name: pyfits_continue_workaround.diff Type: text/x-patch Size: 452 bytes Desc: not available URL: From miguel.deval at gmail.com Tue Nov 16 11:14:06 2010 From: miguel.deval at gmail.com (Miguel de Val-Borro) Date: Tue, 16 Nov 2010 17:14:06 +0100 Subject: [AstroPy] PyFITS IndexError In-Reply-To: <4CE2A8FA.7050906@stsci.edu> References: <20101116142803.GA20503@poincare> <4CE2A8FA.7050906@stsci.edu> Message-ID: <20101116161406.GA3395@poincare> Many thanks for the patch! I will report the bug to the HSC helpdesk. Miguel On Tue, Nov 16, 2010 at 10:53:30AM -0500, Michael Droettboom wrote: > The problem is related to the handling of CONTINUE cards. > > The linked file has a surprising use of CONTINUE cards that pyfits > has not been tested for. > > hsa.fits has the following: > > TTYPE17 = 'LOF_code' / & > CONTINUE '' / & > > The CONTINUE card has an empty value (which is what is tripping up > pyfits). This is probably not strictly against the specification > [1], but it is meaningless, and it's not clear why the system that > generated the file would be spitting those out. Also, the '&' is > supposed to go inside the value (i.e. inside the '') not inside the > comment part. 
In any case, the second line above should not contain > an &, since it is the last line of the continued value. > > [1] http://heasarc.gsfc.nasa.gov/docs/heasarc/ofwg/docs/ofwg_recomm/r13.html > > The attached patch against PyFITS allows it to handle this file. > However, I would also recommend reporting the bug to the author(s) > of the tool that generated this file. > > Mike > > On 11/16/2010 09:28 AM, Miguel de Val Borro wrote: > >Hello, > > > >I have problems reading FITS files from the Herschel Science Archive > >generated using their Java data processing system. The file > >http://www.mps.mpg.de/data/outgoing/deval/hsa.fits > >has a binary table that cannot be accessed by index or name as shown > >below. > > > >When the line hdulist[1].data.field(0) is run a few times in the > >interpreter the data is finally printed. How can I read this kind of > >file using PyFITS? > > > >Best regards, > >Miguel > > > >>>>pyfits.__version__ > >'2.3.1' > >>>>hdulist = pyfits.open('hsa.fits') > >>>>hdulist[1].data.field(0) > >--------------------------------------------------------------------------- > >IndexError Traceback (most recent call last) > > > >/home/miguel/HssO/103P/data/fits/ in() > > > >/usr/local/lib/python2.6/dist-packages/pyfits/core.pyc in __getattr__(self, attr) > > 6240 if size: > > 6241 self._file.seek(self._datLoc) > >-> 6242 data = _get_tbdata(self) > > 6243 data._coldefs = self.columns > > 6244 data.formats = self.columns.formats > > > >/usr/local/lib/python2.6/dist-packages/pyfits/core.pyc in _get_tbdata(hdu) > > 5165 Get the table data from input (an HDU object). > > 5166 """ > >-> 5167 tmp = hdu.columns > > 5168 # get the right shape for the data part of the random group, > > > > 5169 # since binary table does not support ND yet > > > > > >/usr/local/lib/python2.6/dist-packages/pyfits/core.pyc in __getattr__(self, attr) > > 6251 class_name = str(self.__class__) > > 6252 class_name = class_name[class_name.rfind('.')+1:-2] > >-> 6253 self.__dict__[attr] = ColDefs(self, tbtype=class_name) > > 6254 > > 6255 elif attr == '_theap': > > > >/usr/local/lib/python2.6/dist-packages/pyfits/core.pyc in __init__(self, input, tbtype) > > 4887 if col<= _nfields and col> 0: > > 4888 cname = _commonNames[_keyNames.index(keyword)] > >-> 4889 dict[col-1][cname] = _card.value > > 4890 > > 4891 # data reading will be delayed > > > > > >/usr/local/lib/python2.6/dist-packages/pyfits/core.pyc in __getattr__(self, name) > > 438 self._extractKey() > > 439 elif name in ['value', 'comment']: > >--> 440 self._extractValueComment(name) > > 441 else: > > 442 raise AttributeError, name > > > >/usr/local/lib/python2.6/dist-packages/pyfits/core.pyc in _extractValueComment(self, name) > > 1450 > > 1451 # drop the ending "&" > > > >-> 1452 if _val[-1] == '&': > > 1453 _val = _val[:-1] > > 1454 longstring = longstring + _val > > > >IndexError: string index out of range > >_______________________________________________ > >AstroPy mailing list > >AstroPy at scipy.org > >http://mail.scipy.org/mailman/listinfo/astropy > > > -- > Michael Droettboom > Science Software Branch > Space Telescope Science Institute > Baltimore, Maryland, USA > > Index: lib/core.py > =================================================================== > --- lib/core.py (revision 714) > +++ lib/core.py (working copy) > @@ -1468,7 +1468,7 @@ > _val = re.sub("''", "'", _card.value).rstrip() > > # drop the ending "&" > - if _val[-1] == '&': > + if len(_val) and _val[-1] == '&': > _val = _val[:-1] > longstring = longstring + _val >
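For reference, a quick way to check whether a given pyfits installation
still trips over the empty CONTINUE card discussed above (hsa.fits is the
file linked earlier in the thread; the path is illustrative):

import pyfits

hdulist = pyfits.open('hsa.fits')
try:
    # Without the workaround this raises "IndexError: string index out of
    # range"; with the patch applied it returns the first column's data.
    column = hdulist[1].data.field(0)
    print column[:5]
finally:
    hdulist.close()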