From thomas.robitaille at gmail.com Wed Aug 1 11:47:24 2012 From: thomas.robitaille at gmail.com (Thomas Robitaille) Date: Wed, 1 Aug 2012 17:47:24 +0200 Subject: [AstroPy] IDLSave 1.0.0 Released! Message-ID:

I am happy to announce the availability of IDLSave 1.0.0! IDLSave is a pure Python module to import variables from IDL "save" files into Python, and does not require IDL to work. More information and download/installation instructions are available at http://astrofrog.github.com/idlsave/

Please note that since IDLSave is now available as part of Scipy (in scipy.io), this is intended to be the last standalone stable release. If you have Scipy 0.9.0 or more recent, you do not need to install IDLSave - instead, see: http://docs.scipy.org/doc/scipy/reference/generated/scipy.io.readsav.html

Let me know if you encounter any problems, or have any suggestions!

Cheers, Tom

From goldbaum at ucolick.org Thu Aug 2 23:21:36 2012 From: goldbaum at ucolick.org (Nathan Goldbaum) Date: Thu, 2 Aug 2012 20:21:36 -0700 Subject: [AstroPy] yt 2.4 release announcement In-Reply-To: References: Message-ID:

Dear Colleagues,

We're proud to announce the release of version 2.4 of the yt Project, http://yt-project.org/ . The new version includes many new features, refinements of existing features and numerous bugfixes. We encourage all users to upgrade to take advantage of the changes.

yt is a community-developed analysis and visualization toolkit, primarily directed at astrophysical hydrodynamics simulations. It provides full support for output from the Enzo, FLASH, Orion, and Nyx codes, with preliminary support for several others. It provides access to simulation data using an intuitive python interface, can perform many common visualization tasks and offers a framework for conducting data reductions and analysis of simulation data.
The most visible changes with the 2.4 release include:

* A threaded volume renderer, which has been completely rewritten for speed, clarity and extensibility
* A new plotting mechanism, the "Plot Window," for easier and more efficient visualization
* Many improvements to time series analysis, including auto-parallelization
* A complete rewrite of the Web GUI for yt, including a WebGL 3D scene and substantial user interface improvements
* Integration with the new yt Hub (http://hub.yt-project.org/ ) for sharing of data, scripts and images
* Extensive additions and improvements to the cookbook

For a complete list of changes in this release, please visit the Changelog ( http://yt-project.org/docs/2.4/changelog.html ). Information about the yt project, including installation instructions, can be found on the homepage: http://yt-project.org/

Development of yt has been sponsored by the NSF, the DOE, and various universities. We develop yt in the open and encourage contributions from users who extend and improve the code. We invite you to get involved with developing and using yt! Please forward this announcement to other interested parties.

Sincerely,

The yt development team

From john.parejko at yale.edu Thu Aug 2 23:40:37 2012 From: john.parejko at yale.edu (John K. Parejko) Date: Thu, 2 Aug 2012 23:40:37 -0400 Subject: [AstroPy] reading one line from many small fits files In-Reply-To: References: <853FBC70-C380-4813-98DA-18F5E883D83B@yale.edu> <91016FD8-9705-404D-8271-9B75DBEFD6D9@astro.physik.uni-goettingen.de> <5017FF27.7080700@stsci.edu> Message-ID: <43478DD9-4B1E-400E-A9EC-181E7DA508F1@yale.edu>

Follow up on this:

Erin's suggestion to use fitsio gave me a factor of more than 10 improvement in speed. I was quite astonished at how much faster it was, so I've written up a short example, and attached it.
On my laptop (13" macbook pro, OS X 10.6.8, regular HDD), the code produces the following:

$ python fits_tester.py
fitsio version: 0.9.0
pyfits version: 3.0.6
Single pass: fitsio took 1.14109 seconds.
Single pass: pyfits took 14.64361 seconds.

One of the problems with the pyfits version is that I don't know how to efficiently get at row(n) of a pyfits object in a form that can be directly ingested into an ndarray. If there is a way to make the pyfits version significantly faster just by calling pyfits differently, I'm all ears.

Looking at the profiles for the runs (output to .prof files), it looks like pyfits is doing a lot of object creation and destruction in the background, which may be what's killing it.

Anyway, there does seem to be a major difference in speed here, even in what is probably the most favorable configuration for pyfits, with it running last and thus having files potentially cached.

Assuming this difference isn't just me, is there a way to get these speed improvements merged into pyfits?

John

-------------- next part --------------
A non-text attachment was scrubbed... Name: fits_tester.py Type: text/x-python-script Size: 3497 bytes Desc: not available URL:

From perry at stsci.edu Fri Aug 3 06:16:22 2012 From: perry at stsci.edu (Perry Greenfield) Date: Fri, 3 Aug 2012 06:16:22 -0400 Subject: [AstroPy] reading one line from many small fits files In-Reply-To: <43478DD9-4B1E-400E-A9EC-181E7DA508F1@yale.edu> References: <853FBC70-C380-4813-98DA-18F5E883D83B@yale.edu> <91016FD8-9705-404D-8271-9B75DBEFD6D9@astro.physik.uni-goettingen.de> <5017FF27.7080700@stsci.edu> <43478DD9-4B1E-400E-A9EC-181E7DA508F1@yale.edu> Message-ID: <7961F589-2439-47B2-B3D8-33CB04CC4391@stsci.edu>

On Aug 2, 2012, at 11:40 PM, John K. Parejko wrote:

> Follow up on this:
>
> Erin's suggestion to use fitsio gave me a factor of more than 10 improvement in speed. I was quite astonished at how much faster it was, so I've written up a short example, and attached it.
> On my laptop (13" macbook pro, OS X 10.6.8, regular HDD), the code produces the following:
>
> $ python fits_tester.py
> fitsio version: 0.9.0
> pyfits version: 3.0.6
> Single pass: fitsio took 1.14109 seconds.
> Single pass: pyfits took 14.64361 seconds.
>
> One of the problems with the pyfits version is that I don't know how to efficiently get at row(n) of a pyfits object in a form that can be directly ingested into an ndarray. If there is a way to make the pyfits version significantly faster just by calling pyfits differently, I'm all ears.
>
> Looking at the profiles for the runs (output to .prof files), it looks like pyfits is doing a lot of object creation and destruction in the background, which may be what's killing it.
>
> Anyway, there does seem to be a major difference in speed here, even in what is probably the most favorable configuration for pyfits, with it running last and thus having files potentially cached.
>
> Assuming this difference isn't just me, is there a way to get these speed improvements merged into pyfits?

I'm not so familiar with the internals now, but one approach we can take is to expose a "bare" ndarray for a table. That probably should short-circuit a lot of the object stuff that has to deal with special FITS cases like mapping FITS booleans into numpy booleans, bscale/bzero, etc. (of course, that means you have to do that yourself if any of these cases exist). I'll have to ask Erik about that. There still would be the time needed to parse the header to determine the structure of the array.
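The bscale/bzero step mentioned here is plain arithmetic, so a user of a "bare" ndarray would apply it by hand. A minimal numpy sketch (the values are illustrative, showing the common unsigned-16-bit convention):

```python
import numpy as np

# FITS scaled-integer convention: physical = BZERO + BSCALE * stored.
# A common case: unsigned 16-bit data stored as int16 with BZERO = 32768.
stored = np.array([-32768, 0, 32767], dtype=np.int16)
bzero, bscale = 32768.0, 1.0

# This is the conversion a "bare" ndarray interface would leave to the user.
physical = bzero + bscale * stored.astype(np.float64)
print(physical)  # physical values: 0, 32768, 65535
```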
Perry

From embray at stsci.edu Fri Aug 3 13:48:15 2012 From: embray at stsci.edu (Erik Bray) Date: Fri, 3 Aug 2012 13:48:15 -0400 Subject: [AstroPy] reading one line from many small fits files In-Reply-To: <43478DD9-4B1E-400E-A9EC-181E7DA508F1@yale.edu> References: <853FBC70-C380-4813-98DA-18F5E883D83B@yale.edu> <91016FD8-9705-404D-8271-9B75DBEFD6D9@astro.physik.uni-goettingen.de> <5017FF27.7080700@stsci.edu> <43478DD9-4B1E-400E-A9EC-181E7DA508F1@yale.edu> Message-ID: <501C0EDF.60401@stsci.edu>

On 08/02/2012 11:40 PM, John K. Parejko wrote:

> Follow up on this:
>
> Erin's suggestion to use fitsio gave me a factor of more than 10 improvement in speed. I was quite astonished at how much faster it was, so I've written up a short example, and attached it. On my laptop (13" macbook pro, OS X 10.6.8, regular HDD), the code produces the following:
>
> $ python fits_tester.py
> fitsio version: 0.9.0
> pyfits version: 3.0.6
> Single pass: fitsio took 1.14109 seconds.
> Single pass: pyfits took 14.64361 seconds.
>
> One of the problems with the pyfits version is that I don't know how to efficiently get at row(n) of a pyfits object in a form that can be directly ingested into an ndarray. If there is a way to make the pyfits version significantly faster just by calling pyfits differently, I'm all ears.
>
> Looking at the profiles for the runs (output to .prof files), it looks like pyfits is doing a lot of object creation and destruction in the background, which may be what's killing it.
>
> Anyway, there does seem to be a major difference in speed here, even in what is probably the most favorable configuration for pyfits, with it running last and thus having files potentially cached.
>
> Assuming this difference isn't just me, is there a way to get these speed improvements merged into pyfits?
>
> John

Thanks John for this benchmarking--this is very helpful.
For what it's worth, a lot of improvements have been made since PyFITS 3.0.6, and these are the results I'm getting on my end:

fitsio version: 0.9.0
pyfits version: 3.1.dev
Single pass: fitsio took 1.62691 seconds.
Single pass: pyfits took 7.50556 seconds.

A few additional trials gave roughly the same results. I'm also less astonished by the speed differences, simply in that fitsio wraps CFITSIO, a C library, while much of PyFITS is pure Python. Looking at the profile, it spends about 2/5th of the time just opening the file and creating objects for the Header and HDU structures. There are some more micro-optimizations to be made there, but not much.

PyFITS provides a very flexible and extensible object-oriented interface that simply isn't possible with CFITSIO, but there's a tradeoff there in terms of raw performance, since it's all in pure Python. For example, in this benchmark, PyFITS spends over half a second (cumulatively, under the profiler) just on the routine for determining which HDU subclass to initialize based on the header keywords--CFITSIO has no equivalent routine because it doesn't even care what the HDU type is until you try to read some data. And even then the only real distinction it tries to make is, "Is this an image or a table?" So in simply opening files you'll always get better performance with fitsio.

That said, when I amend the benchmark to just open files and read the headers (without touching the data) fitsio is only about three times faster. Still a big difference when dealing with a lot of files, but far less dramatic. Where PyFITS really takes a big hit performance-wise is in the handling of table columns, and, as Perry mentioned, the conversion from the raw data to Python data types like bools and strings.
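The header-only variant of the benchmark amounts to a loop like the following. This is a hedged sketch, not the actual fits_tester.py (which was only attached to the thread): the helper name is mine, and the commented calls assume pyfits' `getheader` and fitsio's `read_header` as they existed around these versions.

```python
import time

def time_headers(read_header, filenames):
    """Time header-only reads over many files with a given reader callable."""
    start = time.time()
    for fname in filenames:
        read_header(fname)  # parse the header; never touch the data
    return time.time() - start

# With both libraries installed one would compare, for example:
#   import pyfits, fitsio
#   time_headers(lambda f: pyfits.getheader(f, 1), filenames)
#   time_headers(lambda f: fitsio.read_header(f, ext=1), filenames)
```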
As I wrote earlier in this thread, the biggest problem is that PyFITS' design has always been optimized for column-based access, and is horribly inefficient for row-based access, since the latter usually involves reading entire columns into memory anyways. The reason for this is mostly historical--PyFITS' table interface is built on top of Numpy's recarray object, which I think is pretty flawed to begin with. At the time this was necessary because PyFITS did not yet support compound dtypes in its normal ndarrays. At least I think that was the issue. But now it seems to be more of a hindrance.

In any case, I'm glad fitsio is available too. It's clear from this experiment that in cases where reading and parsing FITS files is the major bottleneck, it's probably the way to go for now. I don't know how much time there will be going forward to devote to improving PyFITS in this regard.

Erik

From hack at stsci.edu Fri Aug 3 16:04:33 2012 From: hack at stsci.edu (Warren J. Hack) Date: Fri, 3 Aug 2012 16:04:33 -0400 Subject: [AstroPy] TABLES/STSDAS v3.15 and STSCI_PYTHON v2.13 NOW AVAILABLE Message-ID: <501C2ED1.60501@stsci.edu>

********************************************************
TABLES/STSDAS v3.15 and STSCI_PYTHON v2.13 NOW AVAILABLE
********************************************************

The Science Software Branch of the Space Telescope Science Institute wishes to announce the availability of these packages:

v3.15 of the Space Telescope Science Data Analysis Software (STSDAS)
v3.15 of the TABLES external package
v1.1 of the HSTCAL package
v2.13 of STScI_Python

Both the STSDAS and Tables packages are layered upon IRAF as external packages; it is therefore necessary to obtain and install IRAF (from http://iraf.noao.edu) before installing TABLES and STSDAS. STSDAS is linked against libraries in TABLES, so TABLES must be installed before STSDAS.
The HSTCAL package contains everything necessary to compile C code, written under IRAF using the CVOS interface, without the use of IRAF at all. The STScI_Python package can be installed independent of IRAF, with the PyRAF package providing a Python interface to IRAF should it be installed.

------
STSDAS
------

This release of STSDAS contains changes to STIS, COS, and WFC3. This release includes the same versions of the calibration software used in the pipeline to support most of the active HST instruments, using the algorithms developed from the data taken during Servicing Mission 4, especially for the Wide Field Camera 3 (WFC3) and the Cosmic Origins Spectrograph (COS). For WFC3, calwf3 v2.7 contains improvements to cosmic-ray rejection for IR data, along with changes to processing of RPTCORR data.

STSDAS also includes some Python-based tasks; running these requires the STScI Python Libraries to be installed (all the non-Python tasks can be used in the traditional way and do not require Python or the STScI Python libraries). These tasks can only be run from PyRAF using standard IRAF CL task syntax, or within Python scripts using Python syntax.

HSTCAL and Future STSDAS Releases
=================================

For the past year, the HSTCAL package has contained a version of CALACS that no longer relies on IRAF at all. This version has become the sole version of CALACS supported by STScI for calibrating ACS data, with the most recent updates to CALACS only being implemented in the HSTCAL version of the code. The outdated code for CALACS has now been removed entirely from STSDAS. Any user loading the 'acs' package under STSDAS will be directed to the HSTCAL version, which now has a Python-based TEAL interface as well as the standard command-line executable interface. The STSDAS versions of CALWF3 and CALSTIS will also be removed in future releases as they are replaced by HSTCAL versions of the code.
stecf software
==============

This release contains the stecf IRAF package for supporting HST data that had been developed by members of the Space Telescope European Coordinating Facility (ST-ECF), which closed on 31 December 2010. Support for these packages may be obtained by contacting the STScI Help Desk at 'help at stsci.edu'. These questions will then be forwarded to the package authors at ESO as required.

STSDAS Platform Support
=======================

This release does not contain STSDAS/Tables binaries for PowerPC Macintosh systems. However, there are no known reasons why a user could not build this package from source code on a PowerPC using the installation instructions for STSDAS. Binaries for this release were built on Red Hat Enterprise Linux 4 and Mac OS X 10.5. Binaries were built with IRAF 2.14. Binaries for newer versions of IRAF or operating systems may be built as needed, and users can check the STSDAS web pages for more details. IRAF is available from http://iraf.net.

No Solaris Support for STSDAS
-----------------------------

This version was not built or tested on Solaris. The source code, however, can always be downloaded and compiled locally as needed.

64-bit IRAF Support
===================

IRAF 2.15 includes support for 64-bit systems. The changes necessary to support 64 bits generally require applications to be modified. We will only release 32-bit binaries for the Tables and STSDAS packages for use with IRAF 2.15. Support for 32-bit binaries will continue as long as there are sufficient platforms that such binaries will run on. We may convert some packages in Tables and STSDAS to 64-bit versions only when the 32-bit versions are no longer viable; if created, these will be released as a separate package with a different name. When the 32-bit versions of TABLES and STSDAS are no longer viable, support for any tasks within those packages not ported to 64 bits will end.
The full description of this policy can be found online at: http://www.stsci.edu/resources/software_hardware/stsdas/iraf64

------
Tables
------

This distribution of TABLES includes the TTOOLS package, as well as FITSIO, and TBPLOT, along with the gflib, stxtools, and tbtables libraries. All these packages and libraries have been compiled to run under 32-bit IRAF only, in contrast to the 64-bit version of TTOOLS (installed as NTTOOLS) and the tbtables library supported by IRAF and included in IRAF 2.15.

--------------
HSTCAL Package
--------------

The HSTCAL package provides an environment for compiling IRAF tasks written in C without using IRAF at all. The HSTIO image and tables C API has been replicated using CFITSIO. This API was used to compile the ACS calibration code CALACS for use without IRAF. This version of CALACS, Version 8.0.5, has been installed as the calibration software that gets used in the archive and for pipeline calibrations, replacing the now outdated STSDAS/IRAF-based version. This version of CALACS includes CTE correction (with and without parallel processing using OpenMP), post-SM4 destriping, post-SM4 bias-shift correction, and post-SM4 electronics cross-talk removal, along with numerous bug fixes.

------------
STScI_Python
------------

STScI_Python is a collection of Python modules, packages and C extensions that has been developed to provide a general astronomical data analysis infrastructure. They can be used for Python tasks that are accessible from within STSDAS when running under PyRAF, or standalone from within Python. This distribution continues to develop packages with tasks specific to each HST instrument, with updates being made to calibration code for COS (calcos v2.18.5), while adding tasks for working with ACS data. This release includes the new packages drizzlepac (a replacement for MultiDrizzle and the STSDAS dither package) and fitsblender. Updates were also made to pyraf, stwcs, acstools, stsci.tools, and stsci.image.
PyRAF v2.0 now supports both Python 2 and 3, amongst its many updates. The TEAL interface for CALACS, along with updates to the CTE and destripe tasks, were also added to the acstools package. Updates to calcos include support for the new gain-sag reference file, along with numerous bug fixes and improvements. This release also contains pysynphot v0.9.3, which contains several bug fixes and significant performance improvements for some calculations.

This release has been tested using Python versions 2.6.5 and 2.7.3. This release also requires at least numpy 1.5. All packages within this release have been revised to support easy_install of the code, while still allowing the code to be installed using the distutils command 'python setup.py'.

Python 2.5 support
==================

Many of the packages in this distribution, including the new drizzlepac package, have been implemented using features not available under Python 2.5 in preparation for the transition to Python 3.x. Future releases will only support Python 2.6 or newer as part of this transition, and all testing of our code under Python 2.5 has been terminated.

Python 3.x support
==================

The majority of stsci_python does not yet support Python 3. While the scope of the work involved in transitioning to Python 3.x is being considered, be aware that we expect to support Python 2.x for quite some time. At this time, however, both the PyFITS and the PyRAF packages support Python 3.2 (as well as Python 2).

Platform Support
================

This testing took place under Linux and Mac OS X (both 10.5 and 10.6), but testing no longer takes place on Solaris.

STSCI_Python on WINDOWS
=======================

This release includes a port of stsci_python that runs natively under Windows. PyRAF also now comes as part of the installation under Windows, complete with its own desktop shortcut, albeit without IRAF.
The primary requirement for this installer is that the Windows executables were built against, and therefore can run only under, Python 2.7.

STScI_Python and astropy
========================

Some packages currently distributed as part of stsci_python provide more general functionality beyond that needed by HST data, such as pyfits and vo. Those packages will eventually migrate to the more general astropy distribution, and when that happens, our code will require the astropy core libraries. We have no time frame for this transition, but it will happen within a small number of releases. The astropy distribution, along with more details about the code included, can be found at: http://www.astropy.org

=============================
WHERE TO OBTAIN THIS SOFTWARE
=============================

STSDAS/TABLES can be downloaded from the STSDAS web site. The installation instructions for all these packages have changed. You can now download a single tar file that contains the source code and binaries for STSDAS, TABLES, and STECF. There is also a smaller file if you just need TABLES. You can see the new installation instructions at http://www.stsci.edu/resources/software_hardware/stsdas/download Precompiled binaries also exist for some of the ports supported by NOAO, including RedHat Linux and Mac OS X.

STSCI_PYTHON can be downloaded from the PyRAF web site: http://www.stsci.edu/resources/software_hardware/pyraf/stsci_python/current/download Installation instructions are available from the PyRAF web site, with support for installations on RedHat Linux, Mac OS X, and Windows.

Please contact us through e-mail (help at stsci.edu) or by telephone at (410) 338-1082.
From erin.sheldon at gmail.com Fri Aug 3 17:25:37 2012 From: erin.sheldon at gmail.com (Erin Sheldon) Date: Fri, 3 Aug 2012 17:25:37 -0400 Subject: [AstroPy] reading one line from many small fits files In-Reply-To: <501C0EDF.60401@stsci.edu> References: <853FBC70-C380-4813-98DA-18F5E883D83B@yale.edu> <91016FD8-9705-404D-8271-9B75DBEFD6D9@astro.physik.uni-goettingen.de> <5017FF27.7080700@stsci.edu> <43478DD9-4B1E-400E-A9EC-181E7DA508F1@yale.edu> <501C0EDF.60401@stsci.edu> Message-ID:

On Fri, Aug 3, 2012 at 1:48 PM, Erik Bray wrote:

> A few additional trials gave roughly the same results. I'm also less astonished by the speed differences, simply in that fitsio wraps CFITSIO, a C library, while much of PyFITS is pure Python. Looking at the profile, it spends about 2/5th of the time just opening the file and creating objects for the Header and HDU structures. There are some more micro-optimizations to be made there, but not much. PyFITS provides a very flexible and extensible object-oriented interface that simply isn't possible with CFITSIO, but there's a tradeoff there in terms of raw performance, since it's all in pure Python. For example, in this benchmark, PyFITS spends over half a second (cumulatively, under the profiler) just on the routine for determining which HDU subclass to initialize based on the header keywords--CFITSIO has no equivalent routine because it doesn't even care what the HDU type is until you try to read some data. And even then the only real distinction it tries to make is, "Is this an image or a table?"

Hi Erik -

Actually I *do* run through all the HDUs and gather all the metadata, even in John's case of reading a single row and column from a particular HDU. It just turns out that CFITSIO provides efficient routines to access this information, so it doesn't incur significant overhead. It might be that the efficiency in the CFITSIO routines could be translated to python and used in PyFits.
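Concretely, the single-row, single-column access pattern discussed in this thread looks something like the helper below. This is a hedged sketch of fitsio's interface as of roughly the 0.9.x era (the `rows`/`columns` keywords may differ in other versions), and the helper function itself is hypothetical:

```python
def read_one_value(filename, ext, column, row):
    """Read a single row of a single column from a FITS table.

    Hedged sketch of the fitsio access pattern under discussion;
    the function name and arguments are illustrative, not a real API.
    """
    import fitsio  # third-party wrapper around CFITSIO; assumed installed

    fits = fitsio.FITS(filename)
    try:
        # CFITSIO seeks to the requested bytes, so only this row and
        # column are read from disk rather than the whole table.
        data = fits[ext].read(rows=[row], columns=[column])
    finally:
        fits.close()
    return data[column][0]
```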
> Where PyFITS really takes a big hit performance-wise is in the handling of table columns, and, as Perry mentioned, the conversion from the raw data to Python data types like bools and strings. As I wrote earlier in this thread, the biggest problem is that PyFITS' design has always been optimized for column-based access, and is horribly inefficient for row-based access, since the latter usually involves reading entire columns into memory anyways. The reason for this is mostly historical--PyFITS' table interface is built on top of Numpy's recarray object, which I think is pretty flawed to begin with. At the time this was necessary because PyFITS did not yet support compound dtypes in its normal ndarrays. At least I think that was the issue. But now it seems to be more of a hindrance.

This is a fundamental limitation for structured numpy arrays and the memmap interface. I developed a C code to get around this limitation and added it to numpy. This can be used by pyfits for tables (but not variable length columns). Unfortunately that project got a bit hijacked by other developers because they want the ascii reading portion of the code to do type inference. This has dramatically increased the scope of the project and delayed things indefinitely. I am no longer involved, but it is my understanding that this functionality should be available in an upcoming version of numpy.
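The limitation described here is easy to see with plain numpy, no FITS involved: fields of a structured array are interleaved within each record, so a "column" is a strided view across the whole table, and pulling one column out of a memmap still walks every record. A self-contained demo (the dtype is illustrative):

```python
import numpy as np

# Each record packs its fields together, like a row of a FITS binary table.
dt = np.dtype([("ra", np.float64), ("dec", np.float64), ("flux", np.float32)])
table = np.zeros(4, dtype=dt)
table["flux"] = [1.0, 2.0, 3.0, 4.0]

row = table[2]        # one complete record
col = table["flux"]   # a view that strides over *every* record

print(row["flux"])            # the single value 3.0
print(table.dtype.itemsize)   # 20 bytes per record (8 + 8 + 4)
print(col.strides)            # (20,) -- column access hops a full record
```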
-e

--
Erin Scott Sheldon
Brookhaven National Laboratory
erin dot sheldon at gmail dot com

From embray at stsci.edu Fri Aug 3 18:06:17 2012 From: embray at stsci.edu (Erik Bray) Date: Fri, 3 Aug 2012 18:06:17 -0400 Subject: [AstroPy] reading one line from many small fits files In-Reply-To: References: <853FBC70-C380-4813-98DA-18F5E883D83B@yale.edu> <91016FD8-9705-404D-8271-9B75DBEFD6D9@astro.physik.uni-goettingen.de> <5017FF27.7080700@stsci.edu> <43478DD9-4B1E-400E-A9EC-181E7DA508F1@yale.edu> <501C0EDF.60401@stsci.edu> Message-ID: <501C4B59.80303@stsci.edu>

Hi :) Inline replies below:

On 08/03/2012 05:25 PM, Erin Sheldon wrote:

> On Fri, Aug 3, 2012 at 1:48 PM, Erik Bray wrote:
>> A few additional trials gave roughly the same results. I'm also less astonished by the speed differences, simply in that fitsio wraps CFITSIO, a C library, while much of PyFITS is pure Python. Looking at the profile, it spends about 2/5th of the time just opening the file and creating objects for the Header and HDU structures. There are some more micro-optimizations to be made there, but not much. PyFITS provides a very flexible and extensible object-oriented interface that simply isn't possible with CFITSIO, but there's a tradeoff there in terms of raw performance, since it's all in pure Python. For example, in this benchmark, PyFITS spends over half a second (cumulatively, under the profiler) just on the routine for determining which HDU subclass to initialize based on the header keywords--CFITSIO has no equivalent routine because it doesn't even care what the HDU type is until you try to read some data. And even then the only real distinction it tries to make is, "Is this an image or a table?"
>
> Hi Erik -
>
> Actually I *do* run through all the HDUs and gather all the metadata, even in John's case of reading a single row and column from a particular HDU.
> It just turns out that CFITSIO provides efficient routines to access this information, so it doesn't incur significant overhead. It might be that the efficiency in the CFITSIO routines could be translated to python and used in PyFits.

Okay, I wasn't entirely sure about that. Or rather, I haven't delved too deeply into fitsio itself, so I was just going by what I know about CFITSIO. That said, fitsio has a much simpler object interface--there is a single FitsHDU class that covers images and binary and ASCII tables. This is because, to my knowledge, those are the only major distinctions CFITSIO makes, aside from special cases like group HDUs (which are just a special case of binary tables).

PyFITS on the other hand has a rather complex class hierarchy for different HDU types. This has some advantages, in that it keeps the code fairly simple and reusable. It has also made it easy to support special HDU types in a seamless manner. For example, it was easy to add HDU subclasses to support constant value image arrays used for HST data. And more recently, it made it easy to add new classes for special HDUs that encapsulate FITS files, which could be further subclassed and specialized for "Headerlet" HDUs meant to encapsulate WCS data as a FITS file encapsulated in a FITS file.

All of the above *could* be done by tacking pure Python frontends to CFITSIO, and would probably still be a little bit faster. But not *much* faster for the same degree of flexibility. There are several other high-level abstractions that PyFITS provides. For example, support for WCS record-valued cards. This feature has a surprising amount of overhead, and although I think it can be improved a little bit, it falls into the category of "micro-optimizations".

>> Where PyFITS really takes a big hit performance-wise is in the handling of table columns, and, as Perry mentioned, the conversion from the raw data to Python data types like bools and strings.
>> As I wrote earlier in this thread, the biggest problem is that PyFITS' design has always been optimized for column-based access, and is horribly inefficient for row-based access, since the latter usually involves reading entire columns into memory anyways. The reason for this is mostly historical--PyFITS' table interface is built on top of Numpy's recarray object, which I think is pretty flawed to begin with. At the time this was necessary because PyFITS did not yet support compound dtypes in its normal ndarrays. At least I think that was the issue. But now it seems to be more of a hindrance.
>
> This is a fundamental limitation for structured numpy arrays and the memmap interface.

Indeed--exactly my point. If I were to write PyFITS from scratch I would probably not rely on Numpy as my primary I/O interface, but would instead serve up Numpy arrays as a high-level abstraction. Under the hood I would probably do something more closely resembling CFITSIO's buffer ring (the returned Numpy arrays would of course not be contiguous by default).

> I developed a C code to get around this limitation and added it to numpy. This can be used by pyfits for tables (but not variable length columns). Unfortunately that project got a bit hijacked by other developers because they want the ascii reading portion of the code to do type inference. This has dramatically increased the scope of the project and delayed things indefinitely. I am no longer involved, but it is my understanding that this functionality should be available in an upcoming version of numpy.
>
> -e

You'll have to tell me more about exactly what this Numpy development is--if it's something I can still incorporate into PyFITS (nevermind what the Numpy people are doing with it) it could be very helpful. We can take that discussion offlist if you want.
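The non-contiguous arrays mentioned in this exchange fall out naturally when numpy views are served over a raw, row-ordered buffer. A toy, self-contained sketch of that abstraction (the field layout and values are made up; this is not PyFITS or CFITSIO code):

```python
import numpy as np

# Build a raw row-ordered byte buffer: each 12-byte "row" holds an
# int32 id followed by a float64 flux (a made-up layout).
dt = np.dtype([("id", np.int32), ("flux", np.float64)])
rows = np.zeros(4, dtype=dt)
rows["id"] = range(4)
rows["flux"] = [0.0, 0.5, 1.0, 1.5]
raw = rows.tobytes()

# Serve a high-level "column" as a numpy view over the raw buffer;
# the resulting array strides over the rows and is not contiguous.
table = np.frombuffer(raw, dtype=dt)
flux = table["flux"]
print(flux.flags["C_CONTIGUOUS"])  # False: the view hops 12 bytes per element
print(flux.tolist())               # [0.0, 0.5, 1.0, 1.5]
```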
Thanks, Erik

From embray at stsci.edu Fri Aug 3 18:08:56 2012 From: embray at stsci.edu (Erik Bray) Date: Fri, 3 Aug 2012 18:08:56 -0400 Subject: [AstroPy] reading one line from many small fits files In-Reply-To: <501C4B59.80303@stsci.edu> References: <853FBC70-C380-4813-98DA-18F5E883D83B@yale.edu> <91016FD8-9705-404D-8271-9B75DBEFD6D9@astro.physik.uni-goettingen.de> <5017FF27.7080700@stsci.edu> <43478DD9-4B1E-400E-A9EC-181E7DA508F1@yale.edu> <501C0EDF.60401@stsci.edu> <501C4B59.80303@stsci.edu> Message-ID: <501C4BF8.1020009@stsci.edu>

On 08/03/2012 06:06 PM, Erik Bray wrote:

> Indeed--exactly my point. If I were to write PyFITS from scratch I would probably not rely on Numpy as my primary I/O interface, but would instead serve up Numpy arrays as a high-level abstraction. Under the hood I would probably do something more closely resembling CFITSIO's buffer ring (the returned Numpy arrays would of course not be contiguous by default).

Just to clarify, this should read "would not be *guaranteed* contiguous by default".

From wkerzendorf at gmail.com Fri Aug 3 18:34:21 2012 From: wkerzendorf at gmail.com (Wolfgang Kerzendorf) Date: Fri, 3 Aug 2012 18:34:21 -0400 Subject: [AstroPy] reading one line from many small fits files In-Reply-To: <501C4BF8.1020009@stsci.edu> References: <853FBC70-C380-4813-98DA-18F5E883D83B@yale.edu> <91016FD8-9705-404D-8271-9B75DBEFD6D9@astro.physik.uni-goettingen.de> <5017FF27.7080700@stsci.edu> <43478DD9-4B1E-400E-A9EC-181E7DA508F1@yale.edu> <501C0EDF.60401@stsci.edu> <501C4B59.80303@stsci.edu> <501C4BF8.1020009@stsci.edu> Message-ID:

I have been following this discussion with interest. My two cents: I think PyFits is fast enough for the most common operations and provides much more functionality. BUT fitsio is useful for iterating over many little files (as shown); I don't think integrating one into the other is useful. They are both useful for different applications.
These email discussions again remind me that we need to have a designated area to put HOWTOs/examples. This email trail is basically a HOWTO and we should make sure that it doesn't get lost. Can we get a section on Astropy.org for examples and HOWTOs (a publicly editable wiki / free-form for now) Cheers Wolfgang On 2012-08-03, at 6:08 PM, Erik Bray wrote: > On 08/03/2012 06:06 PM, Erik Bray wrote: >> Indeed--exactly my point. If I were to write PyFITS from scratch I would >> probably not rely on Numpy as my primary I/O interface, but would >> instead serve up Numpy arrays as a high-level abstraction. Under the >> hood I would probably do something more closely resembling CFITSIO's >> buffer ring (the returned Numpy arrays would of course not be contiguous >> by default). > > Just to clarify, this should read "would not be *guaranteed* contiguous > by default". > _______________________________________________ > AstroPy mailing list > AstroPy at scipy.org > http://mail.scipy.org/mailman/listinfo/astropy From erik.tollerud at gmail.com Fri Aug 3 19:27:50 2012 From: erik.tollerud at gmail.com (Erik Tollerud) Date: Fri, 3 Aug 2012 16:27:50 -0700 Subject: [AstroPy] reading one line from many small fits files In-Reply-To: References: <853FBC70-C380-4813-98DA-18F5E883D83B@yale.edu> <91016FD8-9705-404D-8271-9B75DBEFD6D9@astro.physik.uni-goettingen.de> <5017FF27.7080700@stsci.edu> <43478DD9-4B1E-400E-A9EC-181E7DA508F1@yale.edu> <501C0EDF.60401@stsci.edu> <501C4B59.80303@stsci.edu> <501C4BF8.1020009@stsci.edu> Message-ID: See https://github.com/astropy/astropy/wiki/code-examples - I just added a section as you suggest. Feel free to add to this as you wish! On Fri, Aug 3, 2012 at 3:34 PM, Wolfgang Kerzendorf wrote: > I have been following this discussion with interest. My two cents: I think PyFits is fast enough for the most common operations and provides much more functionality.
BUT fitsio is useful for iterating over many little files (as shown), I don't think integrating one into the other is useful. They are both useful for different applications. > > These email discussions again remind me that we need to have a designated area where to put HOWTOs/examples. This email trail is basically a HOWTO and we should make sure that it doesn't get lost. > > Can we get a section on Astropy.org for examples and HOWTOs (publically editably wiki/ for now freeform) > > > Cheers > Wolfgang > > > On 2012-08-03, at 6:08 PM, Erik Bray wrote: > >> On 08/03/2012 06:06 PM, Erik Bray wrote: >>> Indeed--exactly my point. If were to write PyFITS from scratch I would >>> probably not rely on Numpy as my primary I/O interface, but would >>> instead serve up Numpy arrays as a high-level abstraction. Under the >>> hood I would probably do something more closely resembling CFITSIO's >>> buffer ring (the returned Numpy arrays would of course not be contiguous >>> by default). >> >> Just to clarify, this should read "would not be *guaranteed* contiguous >> by default". 
>> _______________________________________________ >> AstroPy mailing list >> AstroPy at scipy.org >> http://mail.scipy.org/mailman/listinfo/astropy > > _______________________________________________ > AstroPy mailing list > AstroPy at scipy.org > http://mail.scipy.org/mailman/listinfo/astropy -- Erik Tollerud From erin.sheldon at gmail.com Fri Aug 3 19:29:15 2012 From: erin.sheldon at gmail.com (Erin Sheldon) Date: Fri, 3 Aug 2012 19:29:15 -0400 Subject: [AstroPy] reading one line from many small fits files In-Reply-To: <501C4B59.80303@stsci.edu> References: <853FBC70-C380-4813-98DA-18F5E883D83B@yale.edu> <91016FD8-9705-404D-8271-9B75DBEFD6D9@astro.physik.uni-goettingen.de> <5017FF27.7080700@stsci.edu> <43478DD9-4B1E-400E-A9EC-181E7DA508F1@yale.edu> <501C0EDF.60401@stsci.edu> <501C4B59.80303@stsci.edu> Message-ID: On Fri, Aug 3, 2012 at 6:06 PM, Erik Bray wrote: >> This is a fundamental limitation for structured numpy arrays and the >> memmap interface. > > > Indeed--exactly my point. If were to write PyFITS from scratch I would > probably not rely on Numpy as my primary I/O interface, but would instead > serve up Numpy arrays as a high-level abstraction. Under the hood I would > probably do something more closely resembling CFITSIO's buffer ring (the > returned Numpy arrays would of course not be contiguous by default). > > >> I developed a C code to get around this limitation and added it to >> numpy. This can be used by pyfits for tables (but not variable length >> columns). Unfortunately that project got a bit hijacked by other >> developers because they want the ascii reading portion of the code to do >> type inference. This has dramatically increased the scope of the project >> and delayed things indefinitely. I am no longer involved, but it is my >> understanding that this functionality should be available in an upcoming >> version of numpy. 
>> >> -e > > > You'll have to tell me more about exactly what this Numpy development is--if > it's something I can still incorporate into PyFITS (nevermind what the Numpy > people are doing with it) it could be very helpful. We can take that > discussion offlist if you want. Hi Erik - I'll say a bit here, since it may be of general interest. The issue I addressed with my numpy addition was grabbing arbitrary sub-elements of a file, ascii or binary, directly using a memmap like interface. On many systems the memmap interface to binary files is somehow broken and will actually eat a lot of real memory, notably OS X. Furthermore the ascii reading routines that come with numpy are inefficient for inhomogeneous tabular data. So the code I developed made reading binary and ascii files efficient and built right into numpy using a unified interface. The main portion of FITS tables fall under these categories. The variable length columns must be dealt with separately. Some technical details: The code in numpy is nearly stand-alone and could be ripped out and used anywhere. My fork is here https://github.com/esheldon/numpy. See numpy.recfile. It includes some nice contributions from others. As a reference, an early C++ incarnation of the code is here http://code.google.com/p/recfile/ The version incorporated into numpy is pure C and is, frankly, far superior. I've been working with the view that a structured ndarray is an adequate representation for FITS table data. It can represent all the types in some way, even variable length columns as object arrays. It is important to me to have the data stored in a standard numpy object for interoperability with other libraries. I think it is adequate to make the HDU classes smart and understand what they are doing during input and output. I agree the fitsio hdu class should probably be factored. 
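[Editor's note: Erin's view that a structured ndarray is an adequate representation for FITS table data — supporting both row and column access on one object — can be illustrated with a tiny example; the column names and values here are invented, and ">i4"/">f8" simply mirror FITS's big-endian integer and double types.]

```python
import numpy as np

# An invented three-column table stored as a structured ndarray.
dt = np.dtype([("id", ">i4"), ("ra", ">f8"), ("dec", ">f8")])
table = np.zeros(3, dtype=dt)
table["id"] = [1, 2, 3]
table["ra"] = [10.5, 11.0, 11.5]
table["dec"] = [-5.0, -5.1, -5.2]

row = table[1]        # row-based access: one record of the array
col = table["ra"]     # column-based access: a strided view, no copy made
```

The same object serves column-oriented consumers (plotting, fitting) and row-oriented ones, which is what makes it attractive as a common interchange format between libraries.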
-- Erin Scott Sheldon Brookhaven National Laboratory erin dot sheldon at gmail dot com From cabanela at mnstate.edu Fri Aug 3 23:26:49 2012 From: cabanela at mnstate.edu (Dr. Juan E Cabanela Ph.D.) Date: Fri, 3 Aug 2012 22:26:49 -0500 Subject: [AstroPy] bouncing balls problem in EPD 7.3-2 Message-ID: <9D444747-A554-4E4E-88BB-5692781223E3@mnstate.edu> Hello all, I went through the Python 4 Astronomers website last summer to learn python. At that time, I took the route of installing EPD (32-bit OSX version) on my MacBook Pro and working with that. It went well. This year, I am interested in using Python in a Computational Physics course and so I was installing EPD 7.3-2 on my Mac. I have discovered that I now have an odd problem where the bounce.py script from http://python4astronomers.github.com/contest/bounce.html no longer runs properly in 'ipython -pylab'. Instead of animating the motion of the balls, the script seems to hang for a moment, then displays what appears to be the last frame of motion of the balls. I have confirmed this behavior for EPD 7.3-2 (32-bit OSX) installed under Lion and Mountain Lion. Was there a recent change to Python that would have propagated through EPD 7.3-2 and could be affecting these animations? Juan -- Dr. Juan Cabanela 218-477-2453 (V) 218-477-2290 (F) Minnesota State University Moorhead WWW: http://www.cabanela.com/ Dept. 
of Physics and Astronomy Twitter: Juan_Kinda_Guy 1104 Seventh Ave South, Hagen 307B IM: AstroJuanCab (AIM) Moorhead, MN 56563 cabanela at mnstate.edu (MSN) juancab at gmail.com (GTalk) Public PGP Key available at: http://www.cabanela.com/juan_public.asc From aldcroft at head.cfa.harvard.edu Sun Aug 5 09:21:52 2012 From: aldcroft at head.cfa.harvard.edu (Tom Aldcroft) Date: Sun, 5 Aug 2012 09:21:52 -0400 Subject: [AstroPy] bouncing balls problem in EPD 7.3-2 In-Reply-To: <9D444747-A554-4E4E-88BB-5692781223E3@mnstate.edu> References: <9D444747-A554-4E4E-88BB-5692781223E3@mnstate.edu> Message-ID: Hi Juan, Last year the CfA Python users group talked about this issue over lunch. We also saw the same issues you mentioned, depending on the platform (Mac, linux) and the GUI backend (Tk, Gtk, Qt). Basically in some cases issuing multiple draw() commands within a running script would not update the plot until the script finished. So you are not doing anything wrong, but the simple bouncing_balls.py technique does not work across all platforms / backends. It looks like matplotlib (at least version 1.1) has an animation subpackage that hopefully smooths over these differences to provide a uniform and reliable way to animate. See: http://matplotlib.sourceforge.net/examples/animation/index.html http://matplotlib.sourceforge.net/examples/animation/basic_example.html - Tom On Fri, Aug 3, 2012 at 11:26 PM, Dr. Juan E Cabanela Ph.D. wrote: > Hello all, > > I went through the Python 4 Astronomers website last summer to learn python. At that time, I took the route of installing EPD (32-bit OSX version) on my MacBook Pro and working with that. It went well. > > This year, I am interested in using Python in a Computational Physics course and so I was installing EPD 7.3-2 on my Mac. I have discovered that I now have an odd problem where the bounce.py script from > > http://python4astronomers.github.com/contest/bounce.html > > no longer runs properly in 'ipython -pylab'. 
Instead of animating the motion of the balls, the script seems to hang for a moment, then displays what appears to be the last frame of motion of the balls. I have confirmed this behavior for EPD 7.3-2 (32-bit OSX) installed under Lion and Mountain Lion. Was there a recent change to Python that would have propagated through EPD 7.3-2 and could be affecting these animations? > > Juan > -- > Dr. Juan Cabanela 218-477-2453 (V) 218-477-2290 (F) > Minnesota State University Moorhead WWW: http://www.cabanela.com/ > Dept. of Physics and Astronomy Twitter: Juan_Kinda_Guy > 1104 Seventh Ave South, Hagen 307B IM: AstroJuanCab (AIM) > Moorhead, MN 56563 cabanela at mnstate.edu (MSN) > juancab at gmail.com (GTalk) > Public PGP Key available at: http://www.cabanela.com/juan_public.asc > > > > > > > > > > > _______________________________________________ > AstroPy mailing list > AstroPy at scipy.org > http://mail.scipy.org/mailman/listinfo/astropy > From embray at stsci.edu Wed Aug 8 17:45:59 2012 From: embray at stsci.edu (Erik Bray) Date: Wed, 8 Aug 2012 17:45:59 -0400 Subject: [AstroPy] [ANN] PyFITS 3.0.9 and 3.1.0 Released Message-ID: <5022DE17.8080509@stsci.edu> I'm pleased to announce the releases of PyFITS 3.0.9 and 3.1.0. This is probably the first time for the PyFITS project to do a dual release. Version 3.0.9 is a minor bug fix release building on 3.0.8 and the rest of the 3.0.x series. The changelog for 3.0.9 can be read here: http://pypi.python.org/pypi/pyfits/3.0.9 Version 3.1.0 has been almost a year in the making. It incorporates all the bug fixes from 3.0.9 and previous 3.0.x releases, as well as more bug fixes, and many new features and API changes. It should be noted for users of STSCI_PYTHON that this is the same version of PyFITS that is included in the recent STSCI_PYTHON 2.13 release. The reason for this dual release is that 3.1 contains enough changes that many users may not wish to switch to it right away for production use. 
The biggest differences include a new Header API, and the use of mmap by default for reading FITS data from disk. I strongly recommend testing all your software with 3.1 before using it in production. Also be sure to read the full release notes at http://pypi.python.org/pypi/pyfits/3.1 The new Header API is mostly compatible with existing code, but there are a few corner cases where it may not be (most notably the Header.keys() method, and the behavior of slicing Headers). A detailed description of the Header changes can be read at http://pyfits.readthedocs.org/en/latest/appendix/header_transition.html The 3.0.x series will continue to receive support in the form of bug fix releases for some time to come before ending support and switching fully over to the 3.1.x series. (Note: PyFITS 3.0.9 was actually released via PyPI a few days earlier, but I fell ill and didn't have time to finish the 3.1 release.) Installation Notes for PyFITS 3.1 ================================= For several versions now PyFITS has relied on a set of installation utilities in a package called stsci.distutils (http://pypi.python.org/pypi/stsci.distutils). Normally, when installing PyFITS, stsci.distutils will automatically be downloaded and bootstrapped to your Python interpreter, so it's not necessary to take any action. However, if you have stsci.distutils installed into your site-packages, it may be necessary to upgrade it to install PyFITS 3.1, which requires stsci.distutils >= 0.3. In particular, if you get a "VersionConflict" error when trying to install PyFITS, this is probably because you need to update stsci.distutils first. This can be done with easy_install or pip: $ easy_install -U "stsci.distutils>=0.3" or $ pip install --upgrade "stsci.distutils>=0.3" Otherwise, installation of pyfits is the same, using easy_install or pip, or by manually downloading and unpacking the source and running `python setup.py install`.
Downloads ========= At the time of writing, the STScI website for PyFITS has not been updated (still waiting on IT to push the updated pages to production). In the meantime PyFITS 3.0.9 can still be downloaded from PyPI: http://pypi.python.org/pypi/pyfits/3.0.9 http://pypi.python.org/pypi/pyfits/3.1 Thanks, and let me know if you have any issues. Erik B. From perry at stsci.edu Tue Aug 14 10:14:06 2012 From: perry at stsci.edu (Perry Greenfield) Date: Tue, 14 Aug 2012 10:14:06 -0400 Subject: [AstroPy] Astropy Project Coordination Meeting registration now available References: <334B172A-B57F-4492-8C27-60CDBB856C12@stsci.edu> Message-ID: <93B82F23-E3FE-480B-9B4B-7A0AE0C6A045@stsci.edu> We have set up a registration page for the Astropy Project Coordination meeting to be held at STScI October 9-10 (with the 11th as a sprint day). The registration site is: http://www.stsci.edu/institute/conference/astropy There is no registration charge. There are a limited number of rooms being held until September 1st at the nearby Colonnade hotel that is within easy walking distance of STScI. If you are planning on attending, please register as soon as you can confirm your attendance so that we can plan accordingly. We will be setting up a wiki site to go into more detail about what we will cover during the meeting. From kevin.gullikson.signup at gmail.com Tue Aug 14 11:10:47 2012 From: kevin.gullikson.signup at gmail.com (Kevin Gullikson) Date: Tue, 14 Aug 2012 10:10:47 -0500 Subject: [AstroPy] Reading in wavelength-calibrated spectra Message-ID: Hi, I have some spectra that were reduced in iraf, and I would like to read them in to python to do some fitting that I already have code for. However, I am having trouble reading in the wavelength calibration. The spectra are echelle, and the wavelengths are defined with this thing in the header that looks like this: " WAT2_001= 'wtype=multispec spec1 = "1 32 2 10672.221264362 0.086046403007761 20' WAT2_002= '46 0. 6.65 26.40 1. 0. 1 5 1. 
2046. 10761.5346117191 87.998952480064' WAT2_003= ' -1.32409633589482 -0.0165054046288388 -0.00680394594704411" spec2 =' WAT2_004= ' "2 33 2 10348.8582697 0.083445037904806 2046 0. 34.60 58.07 1. 0. 1' WAT2_005= ' 5 1. 2046. 10435.4699172677 85.3379030819273 -1.2833101652519 -0.01' WAT2_006= '53518242619121 -0.0057861452751027" spec3 = "3 34 2 10043.881872825 ' ... " The numbers give the conversion from pixel coordinates to wavelength coordinates. I was wondering if there was a python function that could correctly read all that in, parse it, and give me a spectrum in flux vs wavelength? I have tried playing with pywcs but that seems only to work for images? Cheers, Kevin Gullikson -------------- next part -------------- An HTML attachment was scrubbed... URL: From kellecruz at gmail.com Tue Aug 14 11:35:29 2012 From: kellecruz at gmail.com (Kelle Cruz) Date: Tue, 14 Aug 2012 11:35:29 -0400 Subject: [AstroPy] Reading in wavelength-calibrated spectra In-Reply-To: References: Message-ID: i'm guessing it's probably gonna be easiest to export spectrum from IRAF to plain text (wspectext) and just use asciitable to read in. kelle On Tue, Aug 14, 2012 at 11:10 AM, Kevin Gullikson < kevin.gullikson.signup at gmail.com> wrote: > Hi, > > I have some spectra that were reduced in iraf, and I would like to read > them in to python to do some fitting that I already have code for. However, > I am having trouble reading in the wavelength calibration. The spectra are > echelle, and the wavelengths are defined with this thing in the header that > looks like this: > > " > WAT2_001= 'wtype=multispec spec1 = "1 32 2 10672.221264362 > 0.086046403007761 20' > WAT2_002= '46 0. 6.65 26.40 1. 0. 1 5 1. 2046. 10761.5346117191 > 87.998952480064' > WAT2_003= ' -1.32409633589482 -0.0165054046288388 -0.00680394594704411" > spec2 =' > WAT2_004= ' "2 33 2 10348.8582697 0.083445037904806 2046 0. 34.60 58.07 1. > 0. 1' > WAT2_005= ' 5 1. 2046. 
10435.4699172677 85.3379030819273 -1.2833101652519 > -0.01' > WAT2_006= '53518242619121 -0.0057861452751027" spec3 = "3 34 2 > 10043.881872825 ' > ... > " > > The numbers give the conversion from pixel coordinates to wavelength > coordinates. > I was wondering if there was a python function that could correctly read > all that in, parse it, and give me a spectrum in flux vs wavelength? I have > tried playing with pywcs but that seems only to work for images? > > Cheers, > > Kevin Gullikson > _______________________________________________ > AstroPy mailing list > AstroPy at scipy.org > http://mail.scipy.org/mailman/listinfo/astropy > > -- Kelle Cruz, PhD · http://kellecruz.com/ 917.725.1334 · Hunter ext: 16486 · AMNH ext: 3404 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ianc at mpia-hd.mpg.de Tue Aug 14 11:47:35 2012 From: ianc at mpia-hd.mpg.de (Ian Crossfield) Date: Tue, 14 Aug 2012 17:47:35 +0200 Subject: [AstroPy] Reading in wavelength-calibrated spectra In-Reply-To: References: Message-ID: <502A7317.1080605@mpia-hd.mpg.de> It's not quite what you're looking for, but I have Python code to generate a wavelength array from the "ec" echelle database IRAF generates: http://www.astro.ucla.edu/~ianc/python/nsdata.html#nsdata.dispeval Kelle's suggestion is probably the easiest, though likely not the fastest. -Ian On 8/14/12 5:10 PM, Kevin Gullikson wrote: > Hi, > > I have some spectra that were reduced in iraf, and I would like to > read them in to python to do some fitting that I already have code > for. However, I am having trouble reading in the wavelength > calibration. The spectra are echelle, and the wavelengths are defined > with this thing in the header that looks like this: > > " > WAT2_001= 'wtype=multispec spec1 = "1 32 2 10672.221264362 > 0.086046403007761 20' > WAT2_002= '46 0. 6.65 26.40 1. 0. 1 5 1. 2046. 
10761.5346117191 > 87.998952480064' > WAT2_003= ' -1.32409633589482 -0.0165054046288388 > -0.00680394594704411" spec2 =' > WAT2_004= ' "2 33 2 10348.8582697 0.083445037904806 2046 0. 34.60 > 58.07 1. 0. 1' > WAT2_005= ' 5 1. 2046. 10435.4699172677 85.3379030819273 > -1.2833101652519 -0.01' > WAT2_006= '53518242619121 -0.0057861452751027" spec3 = "3 34 2 > 10043.881872825 ' > ... > " > > The numbers give the conversion from pixel coordinates to wavelength > coordinates. > I was wondering if there was a python function that could correctly > read all that in, parse it, and give me a spectrum in flux vs > wavelength? I have tried playing with pywcs but that seems only to > work for images? > > Cheers, > > Kevin Gullikson > > > _______________________________________________ > AstroPy mailing list > AstroPy at scipy.org > http://mail.scipy.org/mailman/listinfo/astropy -- Ian Crossfield Max-Planck-Institut für Astronomie -------------- next part -------------- An HTML attachment was scrubbed... URL: From acasey at mso.anu.edu.au Tue Aug 14 11:54:57 2012 From: acasey at mso.anu.edu.au (Andy Casey) Date: Tue, 14 Aug 2012 11:54:57 -0400 Subject: [AstroPy] Reading in wavelength-calibrated spectra In-Reply-To: <502A7317.1080605@mpia-hd.mpg.de> References: <502A7317.1080605@mpia-hd.mpg.de> Message-ID: <593302DD-67D0-4627-A60E-FD2E08234E24@mso.anu.edu.au> In the past I've done what Kelle has suggested (and I hope AstroPy will eventually be able to parse all of this, even if I end up writing it myself), but just for reference: http://iraf.net/irafdocs/specwcs.php It provides a good overview of the different dispersion representations that you'll typically find in FITS headers. 
Cheers, Andy On 14/08/2012, at 11:47 AM, Ian Crossfield wrote: > It's not quite what you're looking for, but I have Python code to generate a wavelength array from the "ec" echelle database IRAF generates: > http://www.astro.ucla.edu/~ianc/python/nsdata.html#nsdata.dispeval > > Kelle's suggestion is probably the easiest, though likely not the fastest. > > -Ian > > > On 8/14/12 5:10 PM, Kevin Gullikson wrote: >> >> Hi, >> >> I have some spectra that were reduced in iraf, and I would like to read them in to python to do some fitting that I already have code for. However, I am having trouble reading in the wavelength calibration. The spectra are echelle, and the wavelengths are defined with this thing in the header that looks like this: >> >> " >> WAT2_001= 'wtype=multispec spec1 = "1 32 2 10672.221264362 0.086046403007761 20' >> WAT2_002= '46 0. 6.65 26.40 1. 0. 1 5 1. 2046. 10761.5346117191 87.998952480064' >> WAT2_003= ' -1.32409633589482 -0.0165054046288388 -0.00680394594704411" spec2 =' >> WAT2_004= ' "2 33 2 10348.8582697 0.083445037904806 2046 0. 34.60 58.07 1. 0. 1' >> WAT2_005= ' 5 1. 2046. 10435.4699172677 85.3379030819273 -1.2833101652519 -0.01' >> WAT2_006= '53518242619121 -0.0057861452751027" spec3 = "3 34 2 10043.881872825 ' >> ... >> " >> >> The numbers give the conversion from pixel coordinates to wavelength coordinates. >> I was wondering if there was a python function that could correctly read all that in, parse it, and give me a spectrum in flux vs wavelength? I have tried playing with pywcs but that seems only to work for images? 
>> >> Cheers, >> >> Kevin Gullikson >> >> _______________________________________________ >> AstroPy mailing list >> AstroPy at scipy.org >> http://mail.scipy.org/mailman/listinfo/astropy > > > -- > Ian Crossfield > Max-Planck-Instit?t f?r Astronomie > _______________________________________________ > AstroPy mailing list > AstroPy at scipy.org > http://mail.scipy.org/mailman/listinfo/astropy -------------- next part -------------- An HTML attachment was scrubbed... URL: From kevin.gullikson.signup at gmail.com Tue Aug 14 12:46:57 2012 From: kevin.gullikson.signup at gmail.com (Kevin Gullikson) Date: Tue, 14 Aug 2012 11:46:57 -0500 Subject: [AstroPy] Reading in wavelength-calibrated spectra In-Reply-To: <593302DD-67D0-4627-A60E-FD2E08234E24@mso.anu.edu.au> References: <502A7317.1080605@mpia-hd.mpg.de> <593302DD-67D0-4627-A60E-FD2E08234E24@mso.anu.edu.au> Message-ID: Thanks for the quick replies, everyone. I guess I will take Kelle's suggestion for now. On Tue, Aug 14, 2012 at 10:54 AM, Andy Casey wrote: > In the past I've done what Kelle has suggested (and I hope AstroPy will > eventually be able to parse all of this, even if I end up writing it > myself), but just for reference: > > http://iraf.net/irafdocs/specwcs.php > > Provides a good overview on the different dispersion representations that > you'll typically find in FITS headers. > > Cheers, > Andy > > On 14/08/2012, at 11:47 AM, Ian Crossfield wrote: > > It's not quite what you're looking for, but I have Python code to > generate a wavelength array from the "ec" echelle database IRAF generates: > http://www.astro.ucla.edu/~ianc/python/nsdata.html#nsdata.dispeval > > Kelle's suggestion is probably the easiest, though likely not the fastest. > > -Ian > > > On 8/14/12 5:10 PM, Kevin Gullikson wrote: > > Hi, > > I have some spectra that were reduced in iraf, and I would like to read > them in to python to do some fitting that I already have code for. 
However, > I am having trouble reading in the wavelength calibration. The spectra are > echelle, and the wavelengths are defined with this thing in the header that > looks like this: > > " > WAT2_001= 'wtype=multispec spec1 = "1 32 2 10672.221264362 > 0.086046403007761 20' > WAT2_002= '46 0. 6.65 26.40 1. 0. 1 5 1. 2046. 10761.5346117191 > 87.998952480064' > WAT2_003= ' -1.32409633589482 -0.0165054046288388 -0.00680394594704411" > spec2 =' > WAT2_004= ' "2 33 2 10348.8582697 0.083445037904806 2046 0. 34.60 58.07 1. > 0. 1' > WAT2_005= ' 5 1. 2046. 10435.4699172677 85.3379030819273 -1.2833101652519 > -0.01' > WAT2_006= '53518242619121 -0.0057861452751027" spec3 = "3 34 2 > 10043.881872825 ' > ... > " > > The numbers give the conversion from pixel coordinates to wavelength > coordinates. > I was wondering if there was a python function that could correctly read > all that in, parse it, and give me a spectrum in flux vs wavelength? I have > tried playing with pywcs but that seems only to work for images? > > Cheers, > > Kevin Gullikson > > > _______________________________________________ > AstroPy mailing listAstroPy at scipy.orghttp://mail.scipy.org/mailman/listinfo/astropy > > > > -- > Ian Crossfield > Max-Planck-Instit?t f?r Astronomie > > _______________________________________________ > AstroPy mailing list > AstroPy at scipy.org > http://mail.scipy.org/mailman/listinfo/astropy > > > > _______________________________________________ > AstroPy mailing list > AstroPy at scipy.org > http://mail.scipy.org/mailman/listinfo/astropy > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rlw at stsci.edu Wed Aug 15 16:41:24 2012 From: rlw at stsci.edu (Rick White) Date: Wed, 15 Aug 2012 20:41:24 +0000 Subject: [AstroPy] Reading in wavelength-calibrated spectra In-Reply-To: References: Message-ID: I've attached a module that reads at least some versions of these IRAF multispec format files and returns flux and wavelength arrays. 
I've had an IDL version of this for a long time now (well-tested), but Kevin's message prompted me to make a Python translation. This is only lightly tested (finished it this morning!) but it worked on a few different files I had lying around. I already sent it to Kevin off list and he confirmed that it worked on his file. So I'm circulating it in case other people find it useful. I agree with the comment that it will be useful for astropy to be able to read this format. I don't have any code that writes the format (and I'm not convinced we want any). This format is a fine supporting example for Perry's argument that we need to replace FITS format with something more general! Here's a sketch of how to use it: import readmultispec rdict = readmultispec.readmultispec('yourfitsfile.fits') wavelen = rdict['wavelen'] flux = rdict['flux'] The returned dictionary includes a couple of other values (the FITS header, the raw wavelength coefficients from the header). The header might be useful; the 'wavefields' array is probably not very useful. The FITS file can be in a variety of formats, including multi-order echelle spectra. In the general case you'll get back arrays that look like: wavelen[norders, nwavelengths] flux[ncomponents, norders, nwavelengths] If it is a 1-D spectrum (with just one order), these become wavelen[nwavelengths] flux[ncomponents, nwavelengths] The components dimension for IRAF-extracted spectra typically has 4 elements: the extracted spectrum = flux[0,*], an alternative extracted spectrum = flux[1,*], the subtracted sky = flux[2,*], and the error estimates = flux[3,*]. Not all files will have all 4 of these components. And it's not that easy to figure out from the header exactly what you've got when the number of components is less than 4. But generally the first component should be the extracted spectrum. This is not completely general, but it probably could be a reasonable starting point for a complete module. 
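[Editor's note: the card-joining step at the heart of such a reader can be sketched in a few lines, using the WAT2_nnn values quoted earlier in this thread. This toy handles only the leading (aperture, beam, dtype, w1, dw, nw) fields of each spec string; the parse_specs name and dict layout are invented here, and the linear w1 + dw*i grid below is exact only for dtype=0 spectra, not for the dtype=2 sample data.]

```python
import re

# The WAT2_nnn card values from Kevin's header; IRAF splits one long string
# across 68-character cards, so parsing starts by concatenating them in order.
wat2_values = [
    'wtype=multispec spec1 = "1 32 2 10672.221264362 0.086046403007761 20',
    '46 0. 6.65 26.40 1. 0. 1 5 1. 2046. 10761.5346117191 87.998952480064',
    ' -1.32409633589482 -0.0165054046288388 -0.00680394594704411" spec2 =',
    ' "2 33 2 10348.8582697 0.083445037904806 2046 0. 34.60 58.07 1. 0. 1',
    ' 5 1. 2046. 10435.4699172677 85.3379030819273 -1.2833101652519 -0.01',
    '53518242619121 -0.0057861452751027" spec3 = "3 34 2 10043.881872825 ',
]

def parse_specs(values):
    """Join the card values and pull out each complete quoted specN string."""
    blob = "".join(values)
    specs = []
    for s in re.findall(r'spec\d+\s*=\s*"([^"]+)"', blob):
        f = s.split()
        # Leading fields: aperture, beam, dispersion type, w1, dw, nw, z, ...
        specs.append({
            "aperture": int(f[0]),
            "dtype": int(f[2]),
            "w1": float(f[3]),
            "dw": float(f[4]),
            "nw": int(f[5]),
        })
    return specs

specs = parse_specs(wat2_values)
# Linear wavelength grid for the first order (an approximation for dtype=2):
wavelen = [specs[0]["w1"] + specs[0]["dw"] * i for i in range(specs[0]["nw"])]
```

Note that the truncated spec3 string in the sample has no closing quote, so only the two complete orders are returned; a full reader like Rick's also evaluates the Chebyshev/Legendre/spline coefficients that follow the nw field.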
Cheers, Rick White On Aug 14, 2012, at 4:10 PM, Kevin Gullikson wrote: > Hi, > > I have some spectra that were reduced in iraf, and I would like to read them in to python to do some fitting that I already have code for. However, I am having trouble reading in the wavelength calibration. The spectra are echelle, and the wavelengths are defined with this thing in the header that looks like this: > > " > WAT2_001= 'wtype=multispec spec1 = "1 32 2 10672.221264362 0.086046403007761 20' > WAT2_002= '46 0. 6.65 26.40 1. 0. 1 5 1. 2046. 10761.5346117191 87.998952480064' > WAT2_003= ' -1.32409633589482 -0.0165054046288388 -0.00680394594704411" spec2 =' > WAT2_004= ' "2 33 2 10348.8582697 0.083445037904806 2046 0. 34.60 58.07 1. 0. 1' > WAT2_005= ' 5 1. 2046. 10435.4699172677 85.3379030819273 -1.2833101652519 -0.01' > WAT2_006= '53518242619121 -0.0057861452751027" spec3 = "3 34 2 10043.881872825 ' > ... > " > > The numbers give the conversion from pixel coordinates to wavelength coordinates. > I was wondering if there was a python function that could correctly read all that in, parse it, and give me a spectrum in flux vs wavelength? I have tried playing with pywcs but that seems only to work for images? > > Cheers, > > Kevin Gullikson _______________________________________________ > AstroPy mailing list > AstroPy at scipy.org > http://mail.scipy.org/mailman/listinfo/astropy -------------- next part -------------- A non-text attachment was scrubbed... 
Name: readmultispec.py Type: text/x-python-script Size: 9148 bytes Desc: readmultispec.py URL: From ralph.morgan at my.jcu.edu.au Fri Aug 24 22:49:15 2012 From: ralph.morgan at my.jcu.edu.au (Ralph Morgan) Date: Sat, 25 Aug 2012 02:49:15 +0000 Subject: [AstroPy] Additional Package Installation problem Message-ID: <821597F8ACBCD441A321DFAB748D9BEB2E7B9100@HKXPRD0111MB376.apcprd01.prod.exchangelabs.com> I tried following the instructions given in the Installing Scientific Python tutorial (http://python4astronomers.github.com/installation/python_install.html) after installing the current EPD academic version (EPD-7.3-2) of Python 2.7 on my Windows system (7 Home Premium ver 6.1 build 7601 SP1) without any problems, but get error messages when attempting the 'Install Additional Packages' commands the tutorial lists for Windows: Enthought Python Distribution -- www.enthought.com Python 2.7.3 |EPD 7.3-2 (32-bit)| (default, Apr 12 2012, 14:30:37) [MSC v.1500 3 . . . Welcome to pylab, a matplotlib-based Python environment [backend: WXAgg]. For more information, type 'help(pylab)'. In [1]: cd C:\Python27\Scripts C:\Python27\Scripts In [2]: easy_install.exe --upgrade pip File "", line 1 easy_install.exe --upgrade pip ^ SyntaxError: invalid syntax In [3]: pip.exe install --upgrade distribute File "", line 1 pip.exe install --upgrade distribute ^ SyntaxError: invalid syntax Any hints on what to do? Is the tutorial out of date? I also get some error messages when running the commands the tutorial provides in the 'Test your installation' section: In [4]: % python -V ERROR: Magic function `python` not found. In [5]: % ipython -V ERROR: Magic function `ipython` not found. In [6]: % ipython -pylab ERROR: Magic function `ipython` not found. 
In [7]: import numpy

In [8]: import scipy

In [9]: import scipy.linalg

In [10]: print numpy.__version__
1.6.1

In [11]: print scipy.__version__
0.10.1

In [12]: print matplotlib.__version__
1.1.0

And since the easy_install.exe upgrade didn't work, I can't install some of the required packages:

In [13]: import asciitable
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
C:\Python27\Scripts\ in ()
----> 1 import asciitable

ImportError: No module named asciitable

Is there a suitable forum that answers such newbie questions? -------------- next part -------------- An HTML attachment was scrubbed... URL: From rahman at astro.utoronto.ca Sat Aug 25 00:08:22 2012 From: rahman at astro.utoronto.ca (Mubdi Rahman) Date: Sat, 25 Aug 2012 05:08:22 +0100 (BST) Subject: [AstroPy] Additional Package Installation problem In-Reply-To: <821597F8ACBCD441A321DFAB748D9BEB2E7B9100@HKXPRD0111MB376.apcprd01.prod.exchangelabs.com> References: <821597F8ACBCD441A321DFAB748D9BEB2E7B9100@HKXPRD0111MB376.apcprd01.prod.exchangelabs.com> Message-ID: <1345867702.32852.YahooMailNeo@web28906.mail.ir2.yahoo.com> Hi Ralph, It seems that you're running these commands from an ipython session, rather than the command line/command prompt. Rather than opening an ipython session, from the start menu, find the "Command Prompt" and run the tutorial commands from there.
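In practice, that means the tutorial's steps are typed at a Windows Command Prompt, along the lines of this hypothetical session (paths and package names taken from the original message):

```
C:\> cd C:\Python27\Scripts
C:\Python27\Scripts> easy_install.exe --upgrade pip
C:\Python27\Scripts> pip.exe install --upgrade distribute
C:\Python27\Scripts> pip.exe install asciitable
```

(From inside an IPython session, shell commands would instead need IPython's `!` escape, e.g. `!easy_install.exe --upgrade pip`; typing them bare is what produced the SyntaxError.)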
Best, Mubdi

>________________________________
> From: Ralph Morgan
>To: "astropy at scipy.org"
>Sent: Friday, August 24, 2012 10:49:15 PM
>Subject: [AstroPy] Additional Package Installation problem
-------------- next part -------------- An HTML attachment was scrubbed... URL: From perry at stsci.edu Thu Aug 30 07:56:20 2012 From: perry at stsci.edu (Perry Greenfield) Date: Thu, 30 Aug 2012 07:56:20 -0400 Subject: [AstroPy] Fwd: A sad day for our community. John Hunter: 1968-2012. References: Message-ID: <182B5B89-D448-4D0B-A1CC-D9B1EA8D6016@stsci.edu> Begin forwarded message: From: Fernando Perez Date: August 29, 2012 10:55:07 PM EDT Subject: A sad day for our community. John Hunter: 1968-2012. Dear friends and colleagues, [please excuse a possible double-post of this message, in-flight internet glitches] I am terribly saddened to report that yesterday, August 28 2012 at 10am, John D. Hunter died from complications arising from cancer treatment at the University of Chicago hospital, after a brief but intense battle with this terrible illness. John is survived by his wife Miriam, his three daughters Rahel, Ava and Clara, his sisters Layne and Mary, and his mother Sarah. Note: If you decide not to read any further (I know this is a long message), please go to this page for some important information about how you can thank John for everything he gave in a decade of generous contributions to the Python and scientific communities: http://numfocus.org/johnhunter.
Just a few weeks ago, John delivered his keynote address at the SciPy 2012 conference in Austin centered around the evolution of matplotlib: http://www.youtube.com/watch?v=e3lTby5RI54 but tragically, shortly after his return home he was diagnosed with advanced colon cancer. This diagnosis was a terrible discovery to us all, but John took it with his usual combination of calm and resolve, and initiated treatment procedures. Unfortunately, the first round of chemotherapy treatments led to severe complications that sent him to the intensive care unit, and despite the best efforts of the University of Chicago medical center staff, he never fully recovered from these. Yesterday morning, he died peacefully at the hospital with his loved ones at his bedside. John fought with grace and courage, enduring every necessary procedure with a smile on his face and a kind word for all of his caretakers and becoming a loved patient of the many teams that ended up involved with his case. This was no surprise for those of us who knew him, but he clearly left a deep and lasting mark even amongst staff hardened by the rigors of oncology floors and intensive care units. I don't need to explain to this community the impact of John's work, but allow me to briefly recap, in case this is read by some who don't know the whole story. In 2002, John was a postdoc at the University of Chicago hospital working on the analysis of epilepsy seizure data in children. Frustrated with the state of the existing proprietary solutions for this class of problems, he started using Python for his work, back when the scientific Python ecosystem was much, much smaller than it is today and this could have been seen as a crazy risk. Furthermore, he found that there were many half-baked solutions for data visualization in Python at the time, but none that truly met his needs. 
Undeterred, he went on to create matplotlib (http://matplotlib.org) and thus overcome one of the key obstacles for Python to become the best solution for open source scientific and technical computing. Matplotlib is both an amazing technical achievement and a shining example of open source community building, as John not only created its backbone but also fostered the development of a very strong development team, ensuring that the talent of many others could also contribute to this project. The value and importance of this are now painfully clear: despite having lost John, matplotlib continues to thrive thanks to the leadership of Michael Droettboom, the support of Perry Greenfield at the Space Telescope Science Institute, and the daily work of the rest of the team. I want to thank Perry and Michael for putting their resources and talent once more behind matplotlib, securing the future of the project. It is difficult to overstate the value and importance of matplotlib, and therefore of John's contributions (which do not end in matplotlib, by the way; but a biography will have to wait for another day...). Python has become a major force in the technical and scientific computing world, leading the open source offers and challenging expensive proprietary platforms with large teams and millions of dollars of resources behind them. But this would be impossible without a solid data visualization tool that would allow both ad-hoc data exploration and the production of complex, fine-tuned figures for papers, reports or websites. John had the vision to make matplotlib easy to use, but powerful and flexible enough to work in graphical user interfaces and as a server-side library, enabling a myriad use cases beyond his personal needs. This means that now, matplotlib powers everything from plots in dissertations and journal articles to custom data analysis projects and websites.
And despite having left his academic career a few years ago for a job in industry, he remained engaged enough that as of today, he is still the top committer to matplotlib; this is the git shortlog of those with more than 1000 commits to the project:

2145 John Hunter
2130 Michael Droettboom
1060 Eric Firing

All of this was done by a man who had three children to raise and who still always found the time to help those on the mailing lists, solve difficult technical problems in matplotlib, teach courses and seminars about scientific Python, and more recently help create the NumFOCUS foundation project. Despite the challenges that raising three children in an expensive city like Chicago presented, he never once wavered from his commitment to open source. But unfortunately now he is not here anymore to continue providing for their well-being, and I hope that all those who have so far benefited from his generosity, will thank this wonderful man who always gave far more than he received. Thanks to the rapid action of Travis Oliphant, the NumFOCUS foundation is now acting as an escrow agent to accept donations that will go into a fund to support the education and care of his wonderful girls Rahel, Ava and Clara. If you have benefited from John's many contributions, please say thanks in the way that would matter most to him, by helping Miriam continue the task of caring for and educating Rahel, Ava and Clara. You will find all the information necessary to make a donation here: http://numfocus.org/johnhunter Remember that even a small donation helps! If all those who ever use matplotlib give just a little bit, in the long run I am sure that we can make a difference. If you are a company that benefits in a serious way from matplotlib, remember that John was a staunch advocate of keeping all scientific Python projects under the BSD license so that commercial users could benefit from them without worry.
Please say thanks to John in a way commensurate with your resources (and check how much a yearly matlab license would cost you in case you have any doubts about the value you are getting...). John's family is planning a private burial in Tennessee, but (most likely in September) there will also be a memorial service in Chicago that friends and members of the community can attend. We don't have the final scheduling details at this point, but I will post them once we know. I would like to again express my gratitude to Travis Oliphant for moving quickly with the setup of the donation support, and to Eric Jones (the founder of Enthought and another one of the central figures in our community) who immediately upon learning of John's plight contributed resources to support the family with everyday logistics while John was facing treatment as well as my travel to Chicago to assist. This kind of immediate urge to come to the help of others that Eric and Travis displayed is a hallmark of our community. Before closing, I want to take a moment to publicly thank the incredible staff of the University of Chicago medical center. The last two weeks were an intense and brutal ordeal for John and his loved ones, but the hospital staff offered a sometimes hard to believe, unending supply of generosity, care and humanity in addition to their technical competence. The latter is something we expect from a first-rate hospital at a top university, where the attending physicians can be world-renowned specialists in their field. But the former is often forgotten in a world often ruled by a combination of science and concerns about regulations and liability. Instead, we found generous and tireless staff who did everything in their power to ease the pain, always putting our well being ahead of any mindless adherence to protocol, patiently tending to every need we had and working far beyond their stated responsibilities to support us. 
To name only one person (and many others are equally deserving), I want to thank Dr. Carla Moreira, chief surgical resident, who spent the last few hours of John's life with us despite having just completed a solid night shift of surgical work. Instead of resting she came to the ICU and worked to ensure that those last hours were as comfortable as possible for John; her generous actions helped us through a very difficult moment. It is now time to close this already too long message... John, thanks for everything you gave all of us, and for the privilege of knowing you. Fernando. ps - I have sent this with my 'mailing lists' email. If you need to contact me directly for anything regarding the above, please write to my regular address at Fernando.Perez at berkeley.edu, where I do my best to reply more promptly.