From embray at stsci.edu Fri Jan 8 16:13:22 2016 From: embray at stsci.edu (Erik Bray) Date: Fri, 8 Jan 2016 16:13:22 -0500 Subject: [AstroPy] [ANN] Released Astropy v1.1.1 and v1.0.8 Message-ID: <56902672.6000707@stsci.edu>

Hi all, I just released two patch releases for Astropy: v1.1.1, and for those using the LTS, v1.0.8. These releases include a patch to astropy.io.fits for a critical bug [1], which could cause quiet data loss from FITS binary tables on Python 3. Specifically, this only affects updates to string columns in existing binary tables, and only occurs on Python 3, not Python 2.

I'm announcing this now in order to get the word out to anyone who might possibly be affected by this issue. The new releases are currently available on PyPI only, but will be available via conda before long. The releases also include a handful of other bug fixes that have been merged since the previous patch releases.

Thanks to everyone who helped me get these releases out the door quickly, and for the continued support of our users.

Erik B.

[1] https://github.com/astropy/astropy/pull/4452

From matthew.brett at gmail.com Sat Jan 9 01:07:24 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Fri, 8 Jan 2016 22:07:24 -0800 Subject: [AstroPy] [ANN] Released Astropy v1.1.1 and v1.0.8 In-Reply-To: <56902672.6000707@stsci.edu> References: <56902672.6000707@stsci.edu> Message-ID:

Hi, On Fri, Jan 8, 2016 at 1:13 PM, Erik Bray wrote:
> Hi all,
>
> I just released two patch releases for Astropy: v1.1.1, and for those using the LTS, v1.0.8. These releases include a patch to astropy.io.fits for a critical bug [1], which could cause quiet data loss from FITS binary tables on Python 3. Specifically, this only affects updates to string columns in existing binary tables, and only occurs on Python 3, not Python 2.
>
> I'm announcing this now in order to get the word out to anyone who might possibly be affected by this issue.
> The new releases are currently available on PyPI only, but will be available via conda before long.

I built OSX wheels for both releases, now available on PyPI: https://github.com/MacPython/astropy-wheels

Cheers, Matthew

From dlane at ap.stmarys.ca Fri Jan 15 12:53:07 2016 From: dlane at ap.stmarys.ca (Dave Lane) Date: Fri, 15 Jan 2016 13:53:07 -0400 Subject: [AstroPy] (no subject) Message-ID: <56993203.7040904@ap.stmarys.ca>

Hi, I'm having trouble with 'wcs_world2pix' giving the wrong answers for pixel positions (off by about 10 pixels) compared to other software, including the ds9 image display. The optical images were plate-solved by PinPoint (from DC-3 Dreams). I am getting the warning:

WARNING: FITSFixedWarning: RADECSYS= 'FK5 ' / Equatorial coordinate system RADECSYS is non-standard, use RADESYSa. [astropy.wcs.wcs]

Any suggestions on how to proceed?

--- Dave

A code snippet follows:

import astropy.io.fits
from astropy import wcs
import numpy as np
from phot import aperphot,hms,dms

imagelist=["/home/bgo/bgoimages/2015-11-09/observations/processed/VXCYG-102453-V.fit"]
RAtarget=hms("20:57:20.8")
DECtarget=dms("+40:10:39")
imagefile=imagelist[0] # path to the image (first and only entry in the list)
hdulist = astropy.io.fits.open(imagefile)
w = wcs.WCS(hdulist['PRIMARY'].header)
world = np.array([[RAtarget, DECtarget]])
pix = w.wcs_world2pix(world,1)
print "Pixel Coordinates: ", pix[0,0], pix[0,1]
radec = w.wcs_pix2world(pix,1)
print "RADEC Coordinates: ", radec[0,0], radec[0,1]
observation=aperphot(imagefile, timekey=None, pos=[pix[0,0], pix[0,1]], dap=[10,15,20], resamp=2, retfull=False)
print "Aperture flux:", observation.phot
print "Background: ", observation.bg

The pertinent header keys are:

RADECSYS= 'FK5 '               / Equatorial coordinate system
RA      = '20 57 22.20'        / [hms J2000] Target right ascension
DEC     = '+40 19 11.0'        / [dms +N J2000] Target declination
EQUINOX =               2000.0 / Equatorial coordinates are J2000
EPOCH   =               2000.0 / (incorrect but needed by old programs)
PA      =   1.29763632687E+000 / [deg, 0-360 CCW] Position angle of plate
CTYPE1  = 'RA---TAN'           / X-axis coordinate type
CRVAL1  =   3.14341739025E+002 / X-axis coordinate value
CRPIX1  =   7.68000000000E+002 / X-axis reference pixel
CDELT1  =  -2.59534986961E-004 / [deg/pixel] X-axis plate scale
CROTA1  =  -1.29763632687E+000 / [deg] Roll angle wrt X-axis
CTYPE2  = 'DEC--TAN'           / Y-axis coordinate type
CRVAL2  =   4.03127760707E+001 / Y-axis coordinate value
CRPIX2  =   7.68000000000E+002 / Y-axis reference pixel
CDELT2  =  -2.59455750274E-004 / [deg/pixel] Y-axis plate scale
CROTA2  =  -1.29763632687E+000 / [deg] Roll angle wrt Y-axis
CD1_1   =  -2.59468427764E-004 / Change in RA---TAN along X-axis
CD1_2   =  -5.87565834781E-006 / Change in RA---TAN along Y-axis
CD2_1   =   5.87745274898E-006 / Change in DEC--TAN along X-axis
CD2_2   =  -2.59389211397E-004 / Change in DEC--TAN along Y-axis

From dlane at ap.stmarys.ca Fri Jan 15 12:54:10 2016 From: dlane at ap.stmarys.ca (Dave Lane) Date: Fri, 15 Jan 2016 13:54:10 -0400 Subject: [AstroPy] wcs_world2pix giving wrong answer Message-ID: <56993242.4000206@ap.stmarys.ca>

(sorry for the blank subject earlier - hit send too early!)

Hi, I'm having trouble with 'wcs_world2pix' giving the wrong answers for pixel positions (off by about 10 pixels) compared to other software, including the ds9 image display. The optical images were plate-solved by PinPoint (from DC-3 Dreams). I am getting the warning:

WARNING: FITSFixedWarning: RADECSYS= 'FK5 ' / Equatorial coordinate system RADECSYS is non-standard, use RADESYSa. [astropy.wcs.wcs]

Any suggestions on how to proceed?
--- Dave

A code snippet follows:

import astropy.io.fits
from astropy import wcs
import numpy as np
from phot import aperphot,hms,dms

imagelist=["/home/bgo/bgoimages/2015-11-09/observations/processed/VXCYG-102453-V.fit"]
RAtarget=hms("20:57:20.8")
DECtarget=dms("+40:10:39")
imagefile=imagelist[0] # path to the image (first and only entry in the list)
hdulist = astropy.io.fits.open(imagefile)
w = wcs.WCS(hdulist['PRIMARY'].header)
world = np.array([[RAtarget, DECtarget]])
pix = w.wcs_world2pix(world,1)
print "Pixel Coordinates: ", pix[0,0], pix[0,1]
radec = w.wcs_pix2world(pix,1)
print "RADEC Coordinates: ", radec[0,0], radec[0,1]
observation=aperphot(imagefile, timekey=None, pos=[pix[0,0], pix[0,1]], dap=[10,15,20], resamp=2, retfull=False)
print "Aperture flux:", observation.phot
print "Background: ", observation.bg

The pertinent header keys are:

RADECSYS= 'FK5 '               / Equatorial coordinate system
RA      = '20 57 22.20'        / [hms J2000] Target right ascension
DEC     = '+40 19 11.0'        / [dms +N J2000] Target declination
EQUINOX =               2000.0 / Equatorial coordinates are J2000
EPOCH   =               2000.0 / (incorrect but needed by old programs)
PA      =   1.29763632687E+000 / [deg, 0-360 CCW] Position angle of plate
CTYPE1  = 'RA---TAN'           / X-axis coordinate type
CRVAL1  =   3.14341739025E+002 / X-axis coordinate value
CRPIX1  =   7.68000000000E+002 / X-axis reference pixel
CDELT1  =  -2.59534986961E-004 / [deg/pixel] X-axis plate scale
CROTA1  =  -1.29763632687E+000 / [deg] Roll angle wrt X-axis
CTYPE2  = 'DEC--TAN'           / Y-axis coordinate type
CRVAL2  =   4.03127760707E+001 / Y-axis coordinate value
CRPIX2  =   7.68000000000E+002 / Y-axis reference pixel
CDELT2  =  -2.59455750274E-004 / [deg/pixel] Y-axis plate scale
CROTA2  =  -1.29763632687E+000 / [deg] Roll angle wrt Y-axis
CD1_1   =  -2.59468427764E-004 / Change in RA---TAN along X-axis
CD1_2   =  -5.87565834781E-006 / Change in RA---TAN along Y-axis
CD2_1   =   5.87745274898E-006 / Change in DEC--TAN along X-axis
CD2_2   =  -2.59389211397E-004 / Change in DEC--TAN along Y-axis
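The TAN (gnomonic) projection described by this header can be checked by hand with plain NumPy, independently of astropy. The sketch below uses the posted CRVAL/CRPIX/CD values; `tan_world2pix` and `tan_pix2world` are hypothetical helpers written only for this cross-check, not library functions, and they ignore the RADECSYS/EQUINOX bookkeeping entirely:

```python
import numpy as np

# WCS quantities copied from the posted header (degrees / pixels)
CRVAL = np.array([3.14341739025E+002, 4.03127760707E+001])  # reference RA, Dec
CRPIX = np.array([7.68000000000E+002, 7.68000000000E+002])  # reference pixel (1-based)
CD = np.array([[-2.59468427764E-004, -5.87565834781E-006],
               [ 5.87745274898E-006, -2.59389211397E-004]])

def tan_world2pix(ra, dec):
    """(ra, dec) in degrees -> 1-based pixel coordinates via the TAN projection."""
    ra, dec = np.radians(ra), np.radians(dec)
    ra0, dec0 = np.radians(CRVAL)
    d = np.sin(dec) * np.sin(dec0) + np.cos(dec) * np.cos(dec0) * np.cos(ra - ra0)
    xi = np.cos(dec) * np.sin(ra - ra0) / d                  # standard coordinates
    eta = (np.sin(dec) * np.cos(dec0)
           - np.cos(dec) * np.sin(dec0) * np.cos(ra - ra0)) / d
    # invert the CD matrix: intermediate world coords (deg) -> pixel offsets
    dx, dy = np.linalg.solve(CD, np.degrees([xi, eta]))
    return CRPIX[0] + dx, CRPIX[1] + dy

def tan_pix2world(x, y):
    """Inverse of tan_world2pix: 1-based pixels -> (ra, dec) in degrees."""
    xi, eta = np.radians(np.dot(CD, [x - CRPIX[0], y - CRPIX[1]]))
    ra0, dec0 = np.radians(CRVAL)
    ra = ra0 + np.arctan2(xi, np.cos(dec0) - eta * np.sin(dec0))
    dec = np.arctan2((np.sin(dec0) + eta * np.cos(dec0)) * np.cos(ra - ra0),
                     np.cos(dec0) - eta * np.sin(dec0))
    return np.degrees(ra), np.degrees(dec)
```

If this hand computation agrees with astropy's wcs_world2pix but not with PinPoint or ds9, the roughly 10-pixel offset is coming from something outside the CD matrix (for example distortion terms, or software that prefers the CDELT/CROTA description over the CD matrix).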
_______________________________________________ AstroPy mailing list AstroPy at scipy.org https://mail.scipy.org/mailman/listinfo/astropy

From michael.roberts.15 at ucl.ac.uk Sun Jan 17 05:43:52 2016 From: michael.roberts.15 at ucl.ac.uk (Roberts, Michael) Date: Sun, 17 Jan 2016 10:43:52 +0000 Subject: [AstroPy] IOError: [Errno 2] No such file or directory when using astropy fits module in IPython environment In-Reply-To: References: Message-ID:

Dear Astropy community, I'm having a little problem with a script that I am using. The part of the script that is giving me problems is as follows:

#Function to replace all NaNs in the exposure map by 0s and to replace the corresponding pixels in the sky and large scale sensitivity map by 0s.
def replace_nan(filename):
    #Print that all NaNs will be replaced by 0s in the exposure map and that the corresponding pixels in the sky and large scale sensitivity map will also be replaced by 0s.
    print "All NaNs will be replaced by 0s in " + filename + " and the corresponding pixels in the sky and large scale sensitivity map will also be replaced by 0s."
    #Open the exposure map, the corresponding sky and large scale sensitivity map and copy the primary headers (extension 0 of hdulist) to new hdulists.
    hdulist_ex = fits.open(filename)
    new_hdu_header_ex = fits.PrimaryHDU(header=hdulist_ex[0].header)
    new_hdulist_ex = fits.HDUList([new_hdu_header_ex])
    hdulist_sk = fits.open(filename.replace("ex","sk_corrected"))
    new_hdu_header_sk = fits.PrimaryHDU(header=hdulist_sk[0].header)
    new_hdulist_sk = fits.HDUList([new_hdu_header_sk])
    hdulist_lss = fits.open(filename.replace("ex","lss_m"))
    new_hdu_header_lss = fits.PrimaryHDU(header=hdulist_lss[0].header)
    new_hdulist_lss = fits.HDUList([new_hdu_header_lss])

    #For all frames in the image: Create the mask and run the function replace_pix.
    for i in range(1,len(hdulist_ex)):
        mask = np.isnan(hdulist_ex[i].data)
        replace_pix(hdulist_ex[i],mask,new_hdulist_ex)
        replace_pix(hdulist_sk[i],mask,new_hdulist_sk)
        replace_pix(hdulist_lss[i],mask,new_hdulist_lss)

    #Write the new hdulists to new images.
    new_hdulist_ex.writeto(filename.replace(".img","_new.img"))
    new_hdulist_sk.writeto(filename.replace("ex.img","sk_new.img"))
    new_hdulist_lss.writeto(filename.replace("ex.img","lss_new.img"))

    #Print that all NaNs are replaced by 0s in the exposure map and that the corresponding pixels in the sky and large scale sensitivity map are also replaced by 0s.
    print "All NaNs are replaced by 0s in " + filename + " and the corresponding pixels in the sky and large scale sensitivity map are also replaced by 0s."

When running:

replace_nan("/Users/.../sw00031048001uw1_ex.img")

(where I have dotted out my path for convenience), the traceback shows that it fails on:

hdulist_sk = fits.open(filename.replace("ex","sk_corrected"))

The error is simply:

IOError: [Errno 2] No such file or directory: '/Users/.../sw00031048001uw1_sk_corrected.img'

But this is the file I am attempting to create by replacing '/Users/.../sw00031048001uw1_ex.img'

I'm within the IPython development environment (if that helps, or if that is relevant). I'm guessing at this stage that maybe I don't have permissions to be messing around with files from the IPython console? Or I need some extra arguments for this to work...

Any suggestions would be warmly welcomed.

Many thanks,

Michael Roberts

From dklaes at astro.uni-bonn.de Sun Jan 17 06:03:01 2016 From: dklaes at astro.uni-bonn.de (Dominik Klaes) Date: Sun, 17 Jan 2016 12:03:01 +0100 Subject: [AstroPy] IOError: [Errno 2] No such file or directory when using astropy fits module in IPython environment In-Reply-To: References: Message-ID:

Hi Michael, I think the problem is that you are trying to open this file with fits.open() before you have created it.
If you are creating a FITS file from scratch, which I think is what you want to do, then you don't need that fits.open() call at all; just build the HDUList and write it out at the end with writeto(), as you already do.

Cheers, Dominik

2016-01-17 11:43 GMT+01:00 Roberts, Michael :
> hdulist_sk = fits.open(filename.replace("ex","sk_corrected"))
>
> The error is simply
>
> IOError: [Errno 2] No such file or directory: '/Users/.../sw00031048001uw1_sk_corrected.img'

-- Dominik Klaes Argelander-Institut für Astronomie Room 2.027a Auf dem Hügel 71 53121 Bonn Telefon: 0228/73-5773 E-Mail: dklaes at astro.uni-bonn.de -------------- next part -------------- An HTML attachment was scrubbed...
From evert.rol at gmail.com Sun Jan 17 06:03:48 2016 From: evert.rol at gmail.com (Evert Rol) Date: Sun, 17 Jan 2016 22:03:48 +1100 Subject: [AstroPy] IOError: [Errno 2] No such file or directory when using astropy fits module in IPython environment In-Reply-To: References: Message-ID:

Michael, if I understand your problem correctly, then you shouldn't attempt to open the ...sk_corrected file (for reading); that's what you're doing now, hence Python can't find the file. Instead, create an HDUList (or simply a PrimaryHDU), store the corrected data in that HDUList (as you do in your loop), and then use the .writeto() method as you do at the bottom, with the sk_corrected filename. In fact, you already seem to do this in the two lines below the faulty line. There is no need to try to create a new FITS file first: HDUList.writeto can do that all in one go.

It's a bit unclear to me why you want an ...sk_corrected file when at the bottom you write three other (...new.img) files, which seem to be the actual NaN-corrected files. Perhaps you don't really need the ...sk_corrected file? Judging from the loop, perhaps you're trying to make copies of the original file, so that they are guaranteed to have the same number of HDUs? Instead, you can iterate over the original HDUList and append the corrected HDUs. That would give something like:

with fits.open(filename) as hdulist:
    hdulist_sk = fits.HDUList([fits.PrimaryHDU(header=hdulist[0].header)])
    for hdu in hdulist[1:]:
        mask = np.isnan(hdu.data)
        tmphdu = fits.ImageHDU()
        replace_pix(hdu, mask, tmphdu)
        hdulist_sk.append(tmphdu.copy())
    hdulist_sk.writeto(filename.replace("ex", "sk_corrected"))

I'm taking a few shortcuts above, using the with statement and iterating directly over the hdulist. Note that if you let the function replace_pix() create its own new, corrected HDU and return it, that line and the two lines around it can become:

    hdulist_sk.append(replace_pix(hdu, mask))

and you won't need the .copy() method.
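The core operation in that loop (build a NaN mask from the exposure frame, then zero the matching pixels in every aligned frame) can be sketched with plain NumPy, using small hypothetical arrays in place of the real HDU data:

```python
import numpy as np

# Hypothetical stand-ins for one exposure, sky, and LSS frame (same shape)
ex = np.array([[1.0, np.nan], [3.0, 4.0]])
sk = np.array([[10.0, 20.0], [30.0, 40.0]])
lss = np.array([[0.5, 0.6], [0.7, 0.8]])

mask = np.isnan(ex)           # True where the exposure map is NaN
for frame in (ex, sk, lss):   # zero those pixels in every aligned frame
    frame[mask] = 0.0
```

This is presumably what the replace_pix helper does internally; the HDUList bookkeeping around it only decides where the results end up.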
But do check whether that code does what you're trying to achieve.

Cheers, Evert

From michael.roberts.15 at ucl.ac.uk Sun Jan 17 06:16:35 2016 From: michael.roberts.15 at ucl.ac.uk (Roberts, Michael) Date: Sun, 17 Jan 2016 11:16:35 +0000 Subject: [AstroPy] AstroPy Digest, Vol 112, Issue 4 In-Reply-To: References: Message-ID:

Hi Dominik, Not sure I follow you on that. Could you give an example of what you think I may need to do? I was hoping this would work, as it was given to me as a working script... surely it must be a permissions issue with the files I am trying to modify?
Michael

________________________________________
From: AstroPy on behalf of astropy-request at scipy.org
Sent: Sunday, January 17, 2016 11:03 AM
To: astropy at scipy.org
Subject: AstroPy Digest, Vol 112, Issue 4

From michael.roberts.15 at ucl.ac.uk Sun Jan 17 07:44:56 2016 From: michael.roberts.15 at ucl.ac.uk (Roberts, Michael) Date: Sun, 17 Jan 2016 12:44:56 +0000 Subject: [AstroPy] AstroPy Digest, Vol 112, Issue 5 In-Reply-To: References: Message-ID:

Hi Evert, I've attempted to implement your suggestions, but I'm getting a few syntax errors. Could you run through the exact changes you would make for me? I'm a little lost with this...
Kindest regards, Michael ________________________________________ From: AstroPy on behalf of astropy-request at scipy.org Sent: Sunday, January 17, 2016 12:00 PM To: astropy at scipy.org Subject: AstroPy Digest, Vol 112, Issue 5 Send AstroPy mailing list submissions to astropy at scipy.org To subscribe or unsubscribe via the World Wide Web, visit https://mail.scipy.org/mailman/listinfo/astropy or, via email, send a message with subject or body 'help' to astropy-request at scipy.org You can reach the person managing the list at astropy-owner at scipy.org When replying, please edit your Subject line so it is more specific than "Re: Contents of AstroPy digest..." Today's Topics: 1. Re: IOError: [Errno 2] No such file or directory when using satrapy fits moduel in IPython environment (Evert Rol) 2. Re: AstroPy Digest, Vol 112, Issue 4 (Roberts, Michael) ---------------------------------------------------------------------- Message: 1 Date: Sun, 17 Jan 2016 22:03:48 +1100 From: Evert Rol To: Astronomical Python mailing list Subject: Re: [AstroPy] IOError: [Errno 2] No such file or directory when using satrapy fits moduel in IPython environment Message-ID: Content-Type: text/plain; charset=iso-8859-1 Michael, if I understand your problem correctly, then you shouldn't attempt to open the ...sk_corrected file (for reading). That's what you're doing now, hence Python can't find the file. Instead, create a HDUList (or simply a PrimaryHDU, store the corrected data in that HDUList (as you do in your loop), and then use the .writeto() method as you do at the bottom, with the sk_corrected filename. In fact, you seem to already do this, in the two lines below the faulty line. There is no need to try and create a new FITS file first: HDUList.writeto can do that all in one go. It's a bit unclear to me why you want an ...sk_corrected file, and then at the bottom you write three other (...new.img) files, which seem to be the actually NaN corrected files. 
Perhaps you don't really need the ...sk_corrected file? Judging from the loop, perhaps you're trying to make copies of the original file, so that they are guaranteed to have the same amount of HDUs? Instead, you can iterate over the original HDUList and append the corrected HDUs. That would give something like: with fits.open(filename) as hdulist: hdulist_sk = fits.HDUList([fits.PrimaryHDU(hdulist[0].header)]) for hdu in hdulist[1:]: mask = np.isnan(hdu.data) tmphdu = fits.ImageHDU() replace_pix(hdu, mask, tmphdu) hdulist_sk.append(tmphdu.copy()) hdulist_sk.writeto(filename.replace("ex", "sk_corrected") I'm taking a few shortcuts above, using the with statement and iterating directly over the hdulist. Note if you let the function replace_pix() create its own, new, corrected, HDU, and let it return it, that line and the two lines around it can become: hdulist_sk.append(replace_pix(hdu, mask)) Now, you'll need the .copy() method. But do check if the above code is what you're trying to achieve. Cheers, Evert > > Dear Astropy community, > > I'm having a little problem with a script that I am using. The parts of the script which is giving me the problems are as follows: > > #Function to replace all NaNs in the exposure map by 0s and to replace the corresponding pixels in the sky and large scale sensitivity map by 0s. > def replace_nan(filename): > > > #Print that all NaNs will be replaced by 0s in the exposure map and that the corresponding pixels in the sky and large scale sensitivity map will also be replaced by 0s. > > > print "All NaNs will be replaced by 0s in " + filename + " and the corresponding pixels in the sky and large scale sensitivity map will also be replaced by 0s." > > > #Open the exposure map, the corresponding sky and large scale sensitivity map and copy the primary headers (extension 0 of hdulist) to new hdulists. 
> > hdulist_ex > = fits.open(filename) > > new_hdu_header_ex > = fits.PrimaryHDU(header=hdulist_ex[0].header) > > new_hdulist_ex > = fits.HDUList([new_hdu_header_ex]) > > hdulist_sk > = fits.open(filename.replace("ex","sk_corrected")) > > new_hdu_header_sk > = fits.PrimaryHDU(header=hdulist_sk[0].header) > > new_hdulist_sk > = fits.HDUList([new_hdu_header_sk]) > > hdulist_lss > = fits.open(filename.replace("ex","lss_m")) > > new_hdu_header_lss > = fits.PrimaryHDU(header=hdulist_lss[0].header) > > new_hdulist_lss > = fits.HDUList([new_hdu_header_lss]) > > > > #For all frames in the image: Create the mask and run the function replace_pix. > > > for i in range(1,len(hdulist_ex)): > > mask > = np.isnan(hdulist_ex[i].data) > > replace_pix > (hdulist_ex[i],mask,new_hdulist_ex) > > replace_pix > (hdulist_sk[i],mask,new_hdulist_sk) > > replace_pix > (hdulist_lss[i],mask,new_hdulist_lss) > > > > #Write the new hdulists to new images. > > new_hdulist_ex > .writeto(filename.replace(".img","_new.img")) > > new_hdulist_sk > .writeto(filename.replace("ex.img","sk_new.img")) > > new_hdulist_lss > .writeto(filename.replace("ex.img","lss_new.img")) > > > > #Print that all NaNs are replaced by 0s in the exposure map and that the corresponding pixels in the sky and large scale sensitivity map are also replaced by 0s. > > > print "All NaNs are replaced by 0s in " + filename + " and the corresponding pixels in the sky and large scale sensitivity map are also replaced by 0s." > > When running: > > replace_nan("/Users/.../sw00031048001uw1_ex.img") > (where I have dotted out my path for convenience.) 
it is failing on (traceback) is hdulist_sk = fits.open(filename.replace("ex","sk_corrected")) > > The error is simply > > IOError: [Errno 2] No such file or directory: '/Users/.../sw00031048001uw1_sk_corrected.img' > But this is the file I am attempting to create by replacing '/Users/.../sw00031048001uw1_ex.img' > > I'm within the iPython development environment (if that helps, or if that is relevant). I'm guessing at this stage that maybe I don't have permissions to be messing around with files from the iPython console? Or I need some extra arguments for this to work... > > Any suggestions would be warmly welcomed. > > Many thanks, > > Michael Roberts > > > > > _______________________________________________ > AstroPy mailing list > AstroPy at scipy.org > https://mail.scipy.org/mailman/listinfo/astropy ------------------------------ Message: 2 Date: Sun, 17 Jan 2016 11:16:35 +0000 From: "Roberts, Michael" To: "astropy at scipy.org" Subject: Re: [AstroPy] AstroPy Digest, Vol 112, Issue 4 Message-ID: Content-Type: text/plain; charset="iso-8859-1" Hi Domink, Not sure I follow you on that. Could you give an example of what you think I may need to do? I was hoping that this would work as it was given to me as a working script....surely it must be permissions for the files I am trying to modify? Michael ________________________________________ From: AstroPy on behalf of astropy-request at scipy.org Sent: Sunday, January 17, 2016 11:03 AM To: astropy at scipy.org Subject: AstroPy Digest, Vol 112, Issue 4 Send AstroPy mailing list submissions to astropy at scipy.org To subscribe or unsubscribe via the World Wide Web, visit https://mail.scipy.org/mailman/listinfo/astropy or, via email, send a message with subject or body 'help' to astropy-request at scipy.org You can reach the person managing the list at astropy-owner at scipy.org When replying, please edit your Subject line so it is more specific than "Re: Contents of AstroPy digest..." Today's Topics: 1. 
Re: IOError: [Errno 2] No such file or directory when using astropy fits module in IPython environment (Dominik Klaes)

----------------------------------------------------------------------

Message: 1
Date: Sun, 17 Jan 2016 12:03:01 +0100
From: Dominik Klaes
To: Astronomical Python mailing list
Subject: Re: [AstroPy] IOError: [Errno 2] No such file or directory when using astropy fits module in IPython environment
Message-ID:
Content-Type: text/plain; charset="utf-8"

Hi Michael,

I think you are trying to open this file with fits.open() before you have created it. If you want to create a FITS file from scratch, which I think is what you want to do, then you don't need this command; just write the file at the end with hdu.writeto(), as you already do.

Cheers,
Dominik

2016-01-17 11:43 GMT+01:00 Roberts, Michael :
> Dear Astropy community,
>
> I'm having a little problem with a script that I am using. The part of
> the script which is giving me problems is as follows:
>
> #Function to replace all NaNs in the exposure map by 0s and to replace the corresponding pixels in the sky and large scale sensitivity map by 0s.
> def replace_nan(filename):
>     #Print that all NaNs will be replaced by 0s in the exposure map and that the corresponding pixels in the sky and large scale sensitivity map will also be replaced by 0s.
>     print "All NaNs will be replaced by 0s in " + filename + " and the corresponding pixels in the sky and large scale sensitivity map will also be replaced by 0s."
>     #Open the exposure map, the corresponding sky and large scale sensitivity map and copy the primary headers (extension 0 of hdulist) to new hdulists.
> hdulist_ex = fits.open(filename) > new_hdu_header_ex = fits.PrimaryHDU(header=hdulist_ex[0].header) > new_hdulist_ex = fits.HDUList([new_hdu_header_ex]) > hdulist_sk = fits.open(filename.replace("ex","sk_corrected")) > new_hdu_header_sk = fits.PrimaryHDU(header=hdulist_sk[0].header) > new_hdulist_sk = fits.HDUList([new_hdu_header_sk]) > hdulist_lss = fits.open(filename.replace("ex","lss_m")) > new_hdu_header_lss = fits.PrimaryHDU(header=hdulist_lss[0].header) > new_hdulist_lss = fits.HDUList([new_hdu_header_lss]) > > #For all frames in the image: Create the mask and run the function replace_pix. > for i in range(1,len(hdulist_ex)): > mask = np.isnan(hdulist_ex[i].data) > replace_pix(hdulist_ex[i],mask,new_hdulist_ex) > replace_pix(hdulist_sk[i],mask,new_hdulist_sk) > replace_pix(hdulist_lss[i],mask,new_hdulist_lss) > > #Write the new hdulists to new images. > new_hdulist_ex.writeto(filename.replace(".img","_new.img")) > new_hdulist_sk.writeto(filename.replace("ex.img","sk_new.img")) > new_hdulist_lss.writeto(filename.replace("ex.img","lss_new.img")) > > #Print that all NaNs are replaced by 0s in the exposure map and that the corresponding pixels in the sky and large scale sensitivity map are also replaced by 0s. > print "All NaNs are replaced by 0s in " + filename + " and the corresponding pixels in the sky and large scale sensitivity map are also replaced by 0s." > > > When running: > > > replace_nan("/Users/.../sw00031048001uw1_ex.img") > > (where I have dotted out my path for convenience.) it is failing on > (traceback) is hdulist_sk = fits.open(filename.replace("ex","sk_corrected" > )) > > > The error is simply > > > IOError: [Errno 2] No such file or directory: '/Users/.../sw00031048001uw1_sk_corrected.img' > > But this is the file I am attempting to create by replacing > '/Users/.../sw00031048001uw1_ex.img' > > > I'm within the iPython development environment (if that helps, or if that > is relevant). 
I'm guessing at this stage that maybe I don't have
> permissions to be messing around with files from the iPython console? Or I
> need some extra arguments for this to work...
>
> Any suggestions would be warmly welcomed.
>
> Many thanks,
>
> Michael Roberts
>
> _______________________________________________
> AstroPy mailing list
> AstroPy at scipy.org
> https://mail.scipy.org/mailman/listinfo/astropy

--
Dominik Klaes
Argelander-Institut für Astronomie
Room 2.027a
Auf dem Hügel 71
53121 Bonn
Telefon: 0228/73-5773
E-Mail: dklaes at astro.uni-bonn.de

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

------------------------------

Subject: Digest Footer

_______________________________________________
AstroPy mailing list
AstroPy at scipy.org
https://mail.scipy.org/mailman/listinfo/astropy

------------------------------

End of AstroPy Digest, Vol 112, Issue 4
***************************************

------------------------------

Subject: Digest Footer

_______________________________________________
AstroPy mailing list
AstroPy at scipy.org
https://mail.scipy.org/mailman/listinfo/astropy

------------------------------

End of AstroPy Digest, Vol 112, Issue 5
***************************************

From michael.roberts.15 at ucl.ac.uk Sun Jan 17 10:47:23 2016
From: michael.roberts.15 at ucl.ac.uk (Roberts, Michael)
Date: Sun, 17 Jan 2016 15:47:23 +0000
Subject: [AstroPy] AstroPy Digest, Vol 112, Issue 5
In-Reply-To:
References: ,
Message-ID:

Hi Evert,

Update: I have implemented your suggestions and it is now working correctly. Many (many!) thanks,

Michael

________________________________________
From: Roberts, Michael
Sent: Sunday, January 17, 2016 12:44 PM
To: astropy at scipy.org
Cc: evert.rol at gmail.com
Subject: Re: AstroPy Digest, Vol 112, Issue 5

Hi Evert,

I've attempted to implement your suggestions, but I'm getting a few syntax errors. Could you run through the exact changes you would make for me.
I'm a little lost with this...

Kindest regards,

Michael

________________________________________
From: AstroPy on behalf of astropy-request at scipy.org
Sent: Sunday, January 17, 2016 12:00 PM
To: astropy at scipy.org
Subject: AstroPy Digest, Vol 112, Issue 5

Send AstroPy mailing list submissions to astropy at scipy.org To subscribe or unsubscribe via the World Wide Web, visit https://mail.scipy.org/mailman/listinfo/astropy or, via email, send a message with subject or body 'help' to astropy-request at scipy.org You can reach the person managing the list at astropy-owner at scipy.org When replying, please edit your Subject line so it is more specific than "Re: Contents of AstroPy digest..."

Today's Topics: 1. Re: IOError: [Errno 2] No such file or directory when using astropy fits module in IPython environment (Evert Rol) 2. Re: AstroPy Digest, Vol 112, Issue 4 (Roberts, Michael)

----------------------------------------------------------------------

Message: 1
Date: Sun, 17 Jan 2016 22:03:48 +1100
From: Evert Rol
To: Astronomical Python mailing list
Subject: Re: [AstroPy] IOError: [Errno 2] No such file or directory when using astropy fits module in IPython environment
Message-ID:
Content-Type: text/plain; charset=iso-8859-1

Michael,

if I understand your problem correctly, then you shouldn't attempt to open the ...sk_corrected file (for reading). That's what you're doing now, hence Python can't find the file. Instead, create an HDUList (or simply a PrimaryHDU), store the corrected data in that HDUList (as you do in your loop), and then use the .writeto() method as you do at the bottom, with the sk_corrected filename. In fact, you seem to already do this, in the two lines below the faulty line. There is no need to try and create a new FITS file first: HDUList.writeto can do that all in one go.
It's a bit unclear to me why you want an ...sk_corrected file, and then at the bottom you write three other (...new.img) files, which seem to be the actual NaN-corrected files. Perhaps you don't really need the ...sk_corrected file? Judging from the loop, perhaps you're trying to make copies of the original file, so that they are guaranteed to have the same number of HDUs? Instead, you can iterate over the original HDUList and append the corrected HDUs. That would give something like:

with fits.open(filename) as hdulist:
    hdulist_sk = fits.HDUList([fits.PrimaryHDU(header=hdulist[0].header)])
    for hdu in hdulist[1:]:
        mask = np.isnan(hdu.data)
        tmphdu = fits.ImageHDU()
        replace_pix(hdu, mask, tmphdu)
        hdulist_sk.append(tmphdu.copy())
    hdulist_sk.writeto(filename.replace("ex", "sk_corrected"))

I'm taking a few shortcuts above, using the with statement and iterating directly over the hdulist. Note that if you let the function replace_pix() create its own, new, corrected HDU, and let it return it, that line and the two lines around it can become:

    hdulist_sk.append(replace_pix(hdu, mask))

and then you won't need the .copy() method either. But do check if the above code is what you're trying to achieve.

Cheers,

  Evert

> Dear Astropy community,
>
> I'm having a little problem with a script that I am using. The part of the script which is giving me problems is as follows:
>
> #Function to replace all NaNs in the exposure map by 0s and to replace the corresponding pixels in the sky and large scale sensitivity map by 0s.
> def replace_nan(filename):
>     #Print that all NaNs will be replaced by 0s in the exposure map and that the corresponding pixels in the sky and large scale sensitivity map will also be replaced by 0s.
>     print "All NaNs will be replaced by 0s in " + filename + " and the corresponding pixels in the sky and large scale sensitivity map will also be replaced by 0s."
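A side note on the failing line itself: deriving the sibling filenames with str.replace is fragile, because str.replace substitutes *every* occurrence of the substring, not just the suffix. The path and directory name below are made up purely to illustrate the pitfall:

```python
# Hypothetical path whose directory name happens to contain "ex" as well
filename = "/Users/alexis/sw00031048001uw1_ex.img"

# str.replace rewrites every occurrence of "ex", including inside "alexis":
broken = filename.replace("ex", "sk_corrected")
print(broken)  # -> /Users/alsk_correctedis/sw00031048001uw1_sk_corrected.img

# Safer: swap only the known suffix at the end of the name.
suffix_old, suffix_new = "ex.img", "sk_corrected.img"
assert filename.endswith(suffix_old)
fixed = filename[:-len(suffix_old)] + suffix_new
print(fixed)   # -> /Users/alexis/sw00031048001uw1_sk_corrected.img
```

This does not cause the IOError discussed above (that is simply opening a file that does not exist yet), but it is worth guarding against when the replacement string is short and common.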
> > #Open the exposure map, the corresponding sky and large scale sensitivity map and copy the primary headers (extension 0 of hdulist) to new hdulists.
> > hdulist_ex = fits.open(filename)
> > new_hdu_header_ex = fits.PrimaryHDU(header=hdulist_ex[0].header)
> > new_hdulist_ex = fits.HDUList([new_hdu_header_ex])
> > hdulist_sk = fits.open(filename.replace("ex","sk_corrected"))
> > new_hdu_header_sk = fits.PrimaryHDU(header=hdulist_sk[0].header)
> > new_hdulist_sk = fits.HDUList([new_hdu_header_sk])
> > hdulist_lss = fits.open(filename.replace("ex","lss_m"))
> > new_hdu_header_lss = fits.PrimaryHDU(header=hdulist_lss[0].header)
> > new_hdulist_lss = fits.HDUList([new_hdu_header_lss])
> >
> > #For all frames in the image: Create the mask and run the function replace_pix.
> > for i in range(1,len(hdulist_ex)):
> >     mask = np.isnan(hdulist_ex[i].data)
> >     replace_pix(hdulist_ex[i],mask,new_hdulist_ex)
> >     replace_pix(hdulist_sk[i],mask,new_hdulist_sk)
> >     replace_pix(hdulist_lss[i],mask,new_hdulist_lss)
> >
> > #Write the new hdulists to new images.
> > new_hdulist_ex.writeto(filename.replace(".img","_new.img"))
> > new_hdulist_sk.writeto(filename.replace("ex.img","sk_new.img"))
> > new_hdulist_lss.writeto(filename.replace("ex.img","lss_new.img"))
> >
> > #Print that all NaNs are replaced by 0s in the exposure map and that the corresponding pixels in the sky and large scale sensitivity map are also replaced by 0s.
> > print "All NaNs are replaced by 0s in " + filename + " and the corresponding pixels in the sky and large scale sensitivity map are also replaced by 0s."
> >
> > When running:
> >
> > replace_nan("/Users/.../sw00031048001uw1_ex.img")
> >
> > (where I have dotted out my path for convenience.)
it is failing on (traceback) is hdulist_sk = fits.open(filename.replace("ex","sk_corrected")) > > The error is simply > > IOError: [Errno 2] No such file or directory: '/Users/.../sw00031048001uw1_sk_corrected.img' > But this is the file I am attempting to create by replacing '/Users/.../sw00031048001uw1_ex.img' > > I'm within the iPython development environment (if that helps, or if that is relevant). I'm guessing at this stage that maybe I don't have permissions to be messing around with files from the iPython console? Or I need some extra arguments for this to work... > > Any suggestions would be warmly welcomed. > > Many thanks, > > Michael Roberts > > > > > _______________________________________________ > AstroPy mailing list > AstroPy at scipy.org > https://mail.scipy.org/mailman/listinfo/astropy ------------------------------ Message: 2 Date: Sun, 17 Jan 2016 11:16:35 +0000 From: "Roberts, Michael" To: "astropy at scipy.org" Subject: Re: [AstroPy] AstroPy Digest, Vol 112, Issue 4 Message-ID: Content-Type: text/plain; charset="iso-8859-1" Hi Domink, Not sure I follow you on that. Could you give an example of what you think I may need to do? I was hoping that this would work as it was given to me as a working script....surely it must be permissions for the files I am trying to modify? Michael ________________________________________ From: AstroPy on behalf of astropy-request at scipy.org Sent: Sunday, January 17, 2016 11:03 AM To: astropy at scipy.org Subject: AstroPy Digest, Vol 112, Issue 4 Send AstroPy mailing list submissions to astropy at scipy.org To subscribe or unsubscribe via the World Wide Web, visit https://mail.scipy.org/mailman/listinfo/astropy or, via email, send a message with subject or body 'help' to astropy-request at scipy.org You can reach the person managing the list at astropy-owner at scipy.org When replying, please edit your Subject line so it is more specific than "Re: Contents of AstroPy digest..." Today's Topics: 1. 
Re: IOError: [Errno 2] No such file or directory when using astropy fits module in IPython environment (Dominik Klaes)

----------------------------------------------------------------------

Message: 1
Date: Sun, 17 Jan 2016 12:03:01 +0100
From: Dominik Klaes
To: Astronomical Python mailing list
Subject: Re: [AstroPy] IOError: [Errno 2] No such file or directory when using astropy fits module in IPython environment
Message-ID:
Content-Type: text/plain; charset="utf-8"

Hi Michael,

I think you are trying to open this file with fits.open() before you have created it. If you want to create a FITS file from scratch, which I think is what you want to do, then you don't need this command; just write the file at the end with hdu.writeto(), as you already do.

Cheers,
Dominik

2016-01-17 11:43 GMT+01:00 Roberts, Michael :
> Dear Astropy community,
>
> I'm having a little problem with a script that I am using. The part of
> the script which is giving me problems is as follows:
>
> #Function to replace all NaNs in the exposure map by 0s and to replace the corresponding pixels in the sky and large scale sensitivity map by 0s.
> def replace_nan(filename):
>     #Print that all NaNs will be replaced by 0s in the exposure map and that the corresponding pixels in the sky and large scale sensitivity map will also be replaced by 0s.
>     print "All NaNs will be replaced by 0s in " + filename + " and the corresponding pixels in the sky and large scale sensitivity map will also be replaced by 0s."
>     #Open the exposure map, the corresponding sky and large scale sensitivity map and copy the primary headers (extension 0 of hdulist) to new hdulists.
> hdulist_ex = fits.open(filename) > new_hdu_header_ex = fits.PrimaryHDU(header=hdulist_ex[0].header) > new_hdulist_ex = fits.HDUList([new_hdu_header_ex]) > hdulist_sk = fits.open(filename.replace("ex","sk_corrected")) > new_hdu_header_sk = fits.PrimaryHDU(header=hdulist_sk[0].header) > new_hdulist_sk = fits.HDUList([new_hdu_header_sk]) > hdulist_lss = fits.open(filename.replace("ex","lss_m")) > new_hdu_header_lss = fits.PrimaryHDU(header=hdulist_lss[0].header) > new_hdulist_lss = fits.HDUList([new_hdu_header_lss]) > > #For all frames in the image: Create the mask and run the function replace_pix. > for i in range(1,len(hdulist_ex)): > mask = np.isnan(hdulist_ex[i].data) > replace_pix(hdulist_ex[i],mask,new_hdulist_ex) > replace_pix(hdulist_sk[i],mask,new_hdulist_sk) > replace_pix(hdulist_lss[i],mask,new_hdulist_lss) > > #Write the new hdulists to new images. > new_hdulist_ex.writeto(filename.replace(".img","_new.img")) > new_hdulist_sk.writeto(filename.replace("ex.img","sk_new.img")) > new_hdulist_lss.writeto(filename.replace("ex.img","lss_new.img")) > > #Print that all NaNs are replaced by 0s in the exposure map and that the corresponding pixels in the sky and large scale sensitivity map are also replaced by 0s. > print "All NaNs are replaced by 0s in " + filename + " and the corresponding pixels in the sky and large scale sensitivity map are also replaced by 0s." > > > When running: > > > replace_nan("/Users/.../sw00031048001uw1_ex.img") > > (where I have dotted out my path for convenience.) it is failing on > (traceback) is hdulist_sk = fits.open(filename.replace("ex","sk_corrected" > )) > > > The error is simply > > > IOError: [Errno 2] No such file or directory: '/Users/.../sw00031048001uw1_sk_corrected.img' > > But this is the file I am attempting to create by replacing > '/Users/.../sw00031048001uw1_ex.img' > > > I'm within the iPython development environment (if that helps, or if that > is relevant). 
I'm guessing at this stage that maybe I don't have
> permissions to be messing around with files from the iPython console? Or I
> need some extra arguments for this to work...
>
> Any suggestions would be warmly welcomed.
>
> Many thanks,
>
> Michael Roberts
>
> _______________________________________________
> AstroPy mailing list
> AstroPy at scipy.org
> https://mail.scipy.org/mailman/listinfo/astropy

--
Dominik Klaes
Argelander-Institut für Astronomie
Room 2.027a
Auf dem Hügel 71
53121 Bonn
Telefon: 0228/73-5773
E-Mail: dklaes at astro.uni-bonn.de

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

------------------------------

Subject: Digest Footer

_______________________________________________
AstroPy mailing list
AstroPy at scipy.org
https://mail.scipy.org/mailman/listinfo/astropy

------------------------------

End of AstroPy Digest, Vol 112, Issue 4
***************************************

------------------------------

Subject: Digest Footer

_______________________________________________
AstroPy mailing list
AstroPy at scipy.org
https://mail.scipy.org/mailman/listinfo/astropy

------------------------------

End of AstroPy Digest, Vol 112, Issue 5
***************************************

From jjk at uvic.ca Sun Jan 17 12:11:20 2016
From: jjk at uvic.ca (JJ Kavelaars)
Date: Sun, 17 Jan 2016 09:11:20 -0800
Subject: [AstroPy] (no subject)
In-Reply-To: <56993203.7040904@ap.stmarys.ca>
References: <56993203.7040904@ap.stmarys.ca>
Message-ID: <5F39708E-706E-466D-9109-25207EE10C37@uvic.ca>

Hi Dave,

My understanding is that astropy.wcs examines the header for an extended set of keywords to determine which possible WCS functions to build.

I see that both CD matrix and CROTA are given. So if those are not identical or one of them is incompletely specified you will have trouble.

Are the CD matrix values in your header (sent in message) the only ones that might be WCS related?
Perhaps there are even more WCS keywords, like 'PV' or SIP values, present? WCS keywords are sometimes used in different ways by different people and can lead to this sort of problem. DS9 (often) silently ignores header keywords that don't make sense and I think it falls through to the CD values when those are available. I think, astropy.wcs spits out an error indicating that some keywords in the header made choosing the appropriate form of the WCS function ambiguous or wrong. One can tell astropy to try to fix the WCS ( fix=True) or delete keywords from the header array to remove the ambiguity. JJ > On Jan 15, 2016, at 9:53 AM, Dave Lane wrote: > > Hi, > > I'm having trouble with 'wcs_world2pix' giving the wrong answers for pixel positions (off by about 10 pixels) compared to other software include including that interpreted by ds9 image display. The optical images were plate-solved by pinpoint (from dc3 dreams). I am getting the error: > > WARNING: FITSFixedWarning: RADECSYS= 'FK5 ' / Equatorial coordinate system > RADECSYS is non-standard, use RADESYSa. [astropy.wcs.wcs] > > Any suggestions on how to proceed? 
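JJ's consistency concern between the CD matrix and the legacy CDELT/CROTA keywords can be checked numerically against the header values quoted in this thread. Under the standard FITS convention, CD1_1 = CDELT1*cos(CROTA2), CD1_2 = -CDELT2*sin(CROTA2), CD2_1 = CDELT1*sin(CROTA2), CD2_2 = CDELT2*cos(CROTA2). A quick plain-Python sketch (values copied from the posted header) shows the two descriptions agree here, which suggests the CD/CROTA ambiguity is not by itself the source of a ~10 pixel offset:

```python
import math

# Keyword values copied from the FITS header posted in this thread
cdelt1 = -2.59534986961e-4      # CDELT1 [deg/pixel]
cdelt2 = -2.59455750274e-4      # CDELT2 [deg/pixel]
rho = math.radians(-1.29763632687)  # CROTA2 [deg] -> radians

cd = {
    "CD1_1": -2.59468427764e-4,
    "CD1_2": -5.87565834781e-6,
    "CD2_1":  5.87745274898e-6,
    "CD2_2": -2.59389211397e-4,
}

# CD matrix implied by the legacy CDELT/CROTA2 description
implied = {
    "CD1_1":  cdelt1 * math.cos(rho),
    "CD1_2": -cdelt2 * math.sin(rho),
    "CD2_1":  cdelt1 * math.sin(rho),
    "CD2_2":  cdelt2 * math.cos(rho),
}

for key in cd:
    rel = abs(cd[key] - implied[key]) / abs(cd[key])
    # relative differences are all well below 1e-3:
    # the CD matrix and CDELT/CROTA2 describe the same transformation
    print(key, "relative difference: %.1e" % rel)
```

If the two descriptions had disagreed, deleting the CROTA/CDELT keywords from the header (leaving only the CD matrix) would be one way to remove the ambiguity before constructing the WCS, as suggested above.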
> > --- Dave > > --->>> > > A code snipped is as follows: > > import astropy.io.fits > from astropy import wcs > import numpy as np > from phot import aperphot,hms,dms > imagelist=["/home/bgo/bgoimages/2015-11-09/observations/processed/VXCYG-102453-V.fit"] > RAtarget=hms("20:57:20.8") > DECtarget=dms("+40:10:39") > imagefile=imagelist[1] # path to the image > hdulist = astropy.io.fits.open(imagefile) > w = wcs.WCS(hdulist['PRIMARY'].header) > world = np.array([[RAtarget, DECtarget]]) > pix = w.wcs_world2pix(world,1) > print "Pixel Coordinates: ", pix[0,0], pix[0,1] > radec = w.wcs_pix2world(pix,1) > print "RADEC Coordinates: ", radec[0,0], radec[0,1] > observation=aperphot(imagefile, timekey=None, pos=[pix[0,0], pix[0,1]], dap=[10,15,20], resamp=2,retfull=False) > print "Aperture flux:", observation.phot > print "Background: ", observation.bg > > The pertinent header keys are: > > RADECSYS= 'FK5 ' / Equatorial coordinate system > RA = '20 57 22.20' / [hms J2000] Target right ascension > DEC = '+40 19 11.0' / [dms +N J2000] Target declination > EQUINOX = 2000.0 / Equatorial coordinates are J2000 > EPOCH = 2000.0 / (incorrect but needed by old programs) > PA = 1.29763632687E+000 / [deg, 0-360 CCW] Position angle of plate > CTYPE1 = 'RA---TAN' / X-axis coordinate type > CRVAL1 = 3.14341739025E+002 / X-axis coordinate value > CRPIX1 = 7.68000000000E+002 / X-axis reference pixel > CDELT1 = -2.59534986961E-004 / [deg/pixel] X-axis plate scale > CROTA1 = -1.29763632687E+000 / [deg] Roll angle wrt X-axis > CTYPE2 = 'DEC--TAN' / Y-axis coordinate type > CRVAL2 = 4.03127760707E+001 / Y-axis coordinate value > CRPIX2 = 7.68000000000E+002 / Y-axis reference pixel > CDELT2 = -2.59455750274E-004 / [deg/pixel] Y-Axis Plate scale > CROTA2 = -1.29763632687E+000 / [deg] Roll angle wrt Y-axis > CD1_1 = -2.59468427764E-004 / Change in RA---TAN along X-Axis > CD1_2 = -5.87565834781E-006 / Change in RA---TAN along Y-Axis > CD2_1 = 5.87745274898E-006 / Change in DEC--TAN along X-Axis > 
CD2_2 = -2.59389211397E-004 / Change in DEC--TAN along Y-Axis > _______________________________________________ > AstroPy mailing list > AstroPy at scipy.org > https://mail.scipy.org/mailman/listinfo/astropy From thomas.robitaille at gmail.com Sun Jan 17 17:34:37 2016 From: thomas.robitaille at gmail.com (Thomas Robitaille) Date: Sun, 17 Jan 2016 22:34:37 +0000 Subject: [AstroPy] (no subject) In-Reply-To: <56993203.7040904@ap.stmarys.ca> References: <56993203.7040904@ap.stmarys.ca> Message-ID: Hi Dave, The RADECSYS is non-standard, and should be RADESYS. As far as I can tell, all you are seeing is a warning, and you should be able to proceed anyway, and issues with the accuracy should not be related to this (though please report the issue about the RADECSYS spelling to the person/organization that made the files). Have you tried using all_world2pix instead of wcs_world2pix? The former includes distortion terms, if present. Cheers, Tom On 15 January 2016 at 17:53, Dave Lane wrote: > Hi, > > I'm having trouble with 'wcs_world2pix' giving the wrong answers for pixel > positions (off by about 10 pixels) compared to other software include > including that interpreted by ds9 image display. The optical images were > plate-solved by pinpoint (from dc3 dreams). I am getting the error: > > WARNING: FITSFixedWarning: RADECSYS= 'FK5 ' / Equatorial coordinate > system > RADECSYS is non-standard, use RADESYSa. [astropy.wcs.wcs] > > Any suggestions on how to proceed? 
> > --- Dave > > --->>> > > A code snipped is as follows: > > import astropy.io.fits > from astropy import wcs > import numpy as np > from phot import aperphot,hms,dms > imagelist=["/home/bgo/bgoimages/2015-11-09/observations/processed/VXCYG-102453-V.fit"] > RAtarget=hms("20:57:20.8") > DECtarget=dms("+40:10:39") > imagefile=imagelist[1] # path to the image > hdulist = astropy.io.fits.open(imagefile) > w = wcs.WCS(hdulist['PRIMARY'].header) > world = np.array([[RAtarget, DECtarget]]) > pix = w.wcs_world2pix(world,1) > print "Pixel Coordinates: ", pix[0,0], pix[0,1] > radec = w.wcs_pix2world(pix,1) > print "RADEC Coordinates: ", radec[0,0], radec[0,1] > observation=aperphot(imagefile, timekey=None, pos=[pix[0,0], pix[0,1]], > dap=[10,15,20], resamp=2,retfull=False) > print "Aperture flux:", observation.phot > print "Background: ", observation.bg > > The pertinent header keys are: > > RADECSYS= 'FK5 ' / Equatorial coordinate system > RA = '20 57 22.20' / [hms J2000] Target right ascension > DEC = '+40 19 11.0' / [dms +N J2000] Target declination > EQUINOX = 2000.0 / Equatorial coordinates are J2000 > EPOCH = 2000.0 / (incorrect but needed by old programs) > PA = 1.29763632687E+000 / [deg, 0-360 CCW] Position angle of plate > CTYPE1 = 'RA---TAN' / X-axis coordinate type > CRVAL1 = 3.14341739025E+002 / X-axis coordinate value > CRPIX1 = 7.68000000000E+002 / X-axis reference pixel > CDELT1 = -2.59534986961E-004 / [deg/pixel] X-axis plate scale > CROTA1 = -1.29763632687E+000 / [deg] Roll angle wrt X-axis > CTYPE2 = 'DEC--TAN' / Y-axis coordinate type > CRVAL2 = 4.03127760707E+001 / Y-axis coordinate value > CRPIX2 = 7.68000000000E+002 / Y-axis reference pixel > CDELT2 = -2.59455750274E-004 / [deg/pixel] Y-Axis Plate scale > CROTA2 = -1.29763632687E+000 / [deg] Roll angle wrt Y-axis > CD1_1 = -2.59468427764E-004 / Change in RA---TAN along X-Axis > CD1_2 = -5.87565834781E-006 / Change in RA---TAN along Y-Axis > CD2_1 = 5.87745274898E-006 / Change in DEC--TAN along X-Axis > 
CD2_2 = -2.59389211397E-004 / Change in DEC--TAN along Y-Axis > _______________________________________________ > AstroPy mailing list > AstroPy at scipy.org > https://mail.scipy.org/mailman/listinfo/astropy From dlane at ap.stmarys.ca Mon Jan 18 12:42:25 2016 From: dlane at ap.stmarys.ca (Dave Lane) Date: Mon, 18 Jan 2016 13:42:25 -0400 Subject: [AstroPy] (no subject) In-Reply-To: <5F39708E-706E-466D-9109-25207EE10C37@uvic.ca> References: <56993203.7040904@ap.stmarys.ca> <5F39708E-706E-466D-9109-25207EE10C37@uvic.ca> Message-ID: <569D2401.8050206@ap.stmarys.ca> Hi JJ and Thomas, I have to fess up and upon further investigation I messed up - there were a bunch of fits files of a time series of the same field in the same folder and wcs was not looking at the one I thought it was, which explains the position being correct within a few pixels! Before I figured this out I changed the fits header key to RADESYS and removed it entirely (!) and it gave the same warning and same end result. --- Dave On 17/01/2016 1:11 PM, JJ Kavelaars wrote: > Hi Dave, > > My understanding is that astropy.wcs examines the header for an extended set of keywords to determine which possible WCS functions to build. > > I see that both CD matrix and CROTA are given. So if those are not identical or one of them is incompletely specified you will have trouble. > > Are the CD matrix values in your header (sent in message) the only ones that might be WCS related? Perhaps there are even more WCS keywords, like 'PV' or SIP values, present? > > WCS keywords are sometimes used in different ways by different people and can lead to this sort of problem. DS9 (often) silently ignores header keywords that don't make sense and I think it falls through to the CD values when those are available. I think, astropy.wcs spits out an error indicating that some keywords in the header made choosing the appropriate form of the WCS function ambiguous or wrong. 
> > One can tell astropy to try to fix the WCS ( fix=True) or delete keywords from the header array to remove the ambiguity. > > > JJ > >> On Jan 15, 2016, at 9:53 AM, Dave Lane wrote: >> >> Hi, >> >> I'm having trouble with 'wcs_world2pix' giving the wrong answers for pixel positions (off by about 10 pixels) compared to other software include including that interpreted by ds9 image display. The optical images were plate-solved by pinpoint (from dc3 dreams). I am getting the error: >> >> WARNING: FITSFixedWarning: RADECSYS= 'FK5 ' / Equatorial coordinate system >> RADECSYS is non-standard, use RADESYSa. [astropy.wcs.wcs] >> >> Any suggestions on how to proceed? >> >> --- Dave >> >> --->>> >> >> A code snipped is as follows: >> >> import astropy.io.fits >> from astropy import wcs >> import numpy as np >> from phot import aperphot,hms,dms >> imagelist=["/home/bgo/bgoimages/2015-11-09/observations/processed/VXCYG-102453-V.fit"] >> RAtarget=hms("20:57:20.8") >> DECtarget=dms("+40:10:39") >> imagefile=imagelist[1] # path to the image >> hdulist = astropy.io.fits.open(imagefile) >> w = wcs.WCS(hdulist['PRIMARY'].header) >> world = np.array([[RAtarget, DECtarget]]) >> pix = w.wcs_world2pix(world,1) >> print "Pixel Coordinates: ", pix[0,0], pix[0,1] >> radec = w.wcs_pix2world(pix,1) >> print "RADEC Coordinates: ", radec[0,0], radec[0,1] >> observation=aperphot(imagefile, timekey=None, pos=[pix[0,0], pix[0,1]], dap=[10,15,20], resamp=2,retfull=False) >> print "Aperture flux:", observation.phot >> print "Background: ", observation.bg >> >> The pertinent header keys are: >> >> RADECSYS= 'FK5 ' / Equatorial coordinate system >> RA = '20 57 22.20' / [hms J2000] Target right ascension >> DEC = '+40 19 11.0' / [dms +N J2000] Target declination >> EQUINOX = 2000.0 / Equatorial coordinates are J2000 >> EPOCH = 2000.0 / (incorrect but needed by old programs) >> PA = 1.29763632687E+000 / [deg, 0-360 CCW] Position angle of plate >> CTYPE1 = 'RA---TAN' / X-axis coordinate type >> 
CRVAL1 = 3.14341739025E+002 / X-axis coordinate value >> CRPIX1 = 7.68000000000E+002 / X-axis reference pixel >> CDELT1 = -2.59534986961E-004 / [deg/pixel] X-axis plate scale >> CROTA1 = -1.29763632687E+000 / [deg] Roll angle wrt X-axis >> CTYPE2 = 'DEC--TAN' / Y-axis coordinate type >> CRVAL2 = 4.03127760707E+001 / Y-axis coordinate value >> CRPIX2 = 7.68000000000E+002 / Y-axis reference pixel >> CDELT2 = -2.59455750274E-004 / [deg/pixel] Y-Axis Plate scale >> CROTA2 = -1.29763632687E+000 / [deg] Roll angle wrt Y-axis >> CD1_1 = -2.59468427764E-004 / Change in RA---TAN along X-Axis >> CD1_2 = -5.87565834781E-006 / Change in RA---TAN along Y-Axis >> CD2_1 = 5.87745274898E-006 / Change in DEC--TAN along X-Axis >> CD2_2 = -2.59389211397E-004 / Change in DEC--TAN along Y-Axis >> _______________________________________________ >> AstroPy mailing list >> AstroPy at scipy.org >> https://mail.scipy.org/mailman/listinfo/astropy > _______________________________________________ > AstroPy mailing list > AstroPy at scipy.org > https://mail.scipy.org/mailman/listinfo/astropy From demitri.muna at gmail.com Mon Jan 18 16:09:03 2016 From: demitri.muna at gmail.com (Demitri Muna) Date: Mon, 18 Jan 2016 16:09:03 -0500 Subject: [AstroPy] Need HEALPix functionality in Astropy? Please read. Message-ID: Hi, Many of us are interested in incorporating HEALPix support in our software, whether that means getting Healpy into AstroPy or my interest in some level of support in Nightlight. Part of the issue is the license; the current HEALPix library is under GPL. I sent a message to the healpix-support mailing list requesting a more liberal license, but the developers are resistant to this. There is some support for a limited version of the library to be released under a more liberal license. This library would include core functionality, i.e. probably the majority of what most of us need. Currently, it looks like I am the only one requesting this. 
If anyone is interested in such a library, PLEASE subscribe to the healpix-support mailing list and request this. We need to show there is interest for this. The list is here: https://lists.sourceforge.net/lists/listinfo/healpix-support

This will only take a few minutes, which I promise is less time than it will take to implement the code independently of the existing library!

Thanks,
Demitri

_________________________________________
Demitri Muna
http://muna.com

Department of Astronomy
Ohio State University

My Projects:
http://nightlightapp.io
http://trillianverse.org
http://scicoder.org

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From evert.rol at gmail.com Tue Jan 19 23:52:24 2016
From: evert.rol at gmail.com (Evert Rol)
Date: Wed, 20 Jan 2016 15:52:24 +1100
Subject: [AstroPy] astropy.table.Table groups: aggregate over & combine multiple columns?
Message-ID: <56994418-7032-4122-AB43-D942F79ED7D5@gmail.com>

Is there a way in an astropy table to run the TableGroups aggregate function on multiple columns at once?

In this specific case, I'd like to group by names in one column, and then average the second column weighted by values in the third column. An example would be:

from astropy.table import Table

def average(col):
    # Manipulate multiple columns at once?
    return col.mean()

t = Table([['a', 'a', 'a', 'b', 'b', 'c'],
           [1, 2, 3, 4, 5, 6],
           [2, 2, 1, 2, 1, 1]],
          names=('name', 'value', 'weight'))
group = t.group_by('name')
result = group.groups.aggregate(average)
print(result)

which gives

name value        weight
---- ----- -------------
   a   2.0 1.66666666667
   b   4.5           1.5
   c   6.0           1.0

which is not what I want.

In Pandas, this can be done with apply() on a groupby object, since that passes the relevant subsection of the dataframe as input to the function.
So I can write:

def average_pd(df):
    weight = df['weight']
    total = weight.sum()
    df['value'] *= weight / total
    df['value'] = df['value'].sum()
    df['weight'] = total  # for info; not necessary
    return df.iloc[0]  # ignore other rows: they are the same anyway

df = t.to_pandas()
result = df.groupby('name')[['value', 'weight']].apply(average_pd)
print(result)

which gives:

         value  weight
name
a     1.800000       5
b     4.333333       3
c     6.000000       1

and 'value' consists of weighted averages.

(code also on https://gist.github.com/evertrol/12955a5d98edf055a2f4 )

Perhaps I overlooked some documentation, but I can't find if this can be done in astropy.table. Or do I just need to approach this differently? Alternatively, should I convert & stick to Pandas for this type of functionality? Evert From andrew.hearin at yale.edu Wed Jan 20 07:47:29 2016 From: andrew.hearin at yale.edu (Andrew Hearin) Date: Wed, 20 Jan 2016 07:47:29 -0500 Subject: [AstroPy] astropy.table.Table groups: aggregate over & combine multiple columns? In-Reply-To: <56994418-7032-4122-AB43-D942F79ED7D5@gmail.com> References: <56994418-7032-4122-AB43-D942F79ED7D5@gmail.com> Message-ID: Hi Evert, Great question, I'm also really interested to hear the answer to this. I always use the built-in table aggregation functions when possible, but sometimes end up writing my own Numpy calculation for more complicated examples (using np.unique and/or np.searchsorted). For computing a group-wise weighted average, there is a way you can recast your problem that allows you to use the existing astropy built-in: just create a new column that is the product of your second and third columns, and then use aggregate(average) in the normal way on this new column. So I *think* that gives an answer to the specific example you gave, but it dodges the real question, which I am also interested to hear the experts weigh in on.
Andrew On Tue, Jan 19, 2016 at 11:52 PM, Evert Rol wrote: > [...]
-------------- next part -------------- An HTML attachment was scrubbed... URL: From aldcroft at head.cfa.harvard.edu Wed Jan 20 12:40:27 2016 From: aldcroft at head.cfa.harvard.edu (Aldcroft, Thomas) Date: Wed, 20 Jan 2016 12:40:27 -0500 Subject: [AstroPy] astropy.table.Table groups: aggregate over & combine multiple columns? In-Reply-To: References: <56994418-7032-4122-AB43-D942F79ED7D5@gmail.com> Message-ID: Excellent question indeed. The first quick comment I have is to always be aware that directly using the functions np.sum and np.mean in aggregation will be orders of magnitude faster than calling the `average()` function that was defined in the original post. That is because in those special cases the numpy `reduceat` method is called and everything gets done in the C layer. Thus Andrew's suggestion for a workaround in this case is the right way to go if the data tables are large. About the more generalized problem of getting access to the other table columns within the aggregation function, that is unfortunately not possible in the current release code. I have an idea for doing this, which is now an astropy issue (https://github.com/astropy/astropy/issues/4513). As for what to do right now with astropy 1.1, the following illustrates how to do generalized aggregation in the way that is needed for this example.
It will be relatively slow and possibly memory intensive, but if the tables are not huge that won't be a problem:

from __future__ import division, print_function
from astropy import table
from astropy.table import Table
from collections import OrderedDict

t = Table([['a', 'a', 'a', 'b', 'b', 'c'],
           [1, 2, 3, 4, 5, 6],
           [2, 2, 1, 2, 1, 1]],
          names=('name', 'value', 'weight'))

grouped = t.group_by('name')


def transform_table(tbl):
    """
    Generalized function that takes table ``tbl`` as input
    and returns a new Table ``out``.  Note that ``out`` does not
    necessarily need to have the same types or columns.

    The example is just the identity transform.  Be aware that
    in-place operations will affect the input table.
    """
    out = tbl
    return out

out_tables = []
for group in grouped.groups:
    out_tables.append(transform_table(group))
result = table.vstack(out_tables)
print('transform_table')
print(result)
print()


def average_weighted(tbl, name):
    col = tbl[name]
    if name == 'weight':
        value = col.sum()
    else:
        weight = tbl['weight']
        value = (col * weight).sum() / weight.sum()

    return value


def transform_table_to_row(tbl, func):
    """
    Generalized function that takes table ``tbl`` as input
    and returns a new table row as an OrderedDict.  It applies
    function ``func`` to each column.

    The example computes the weighted average of each field (where
    possible) assuming the weights are in column ``weight``.
    """
    out = OrderedDict()
    for name in t.colnames:
        try:
            value = func(tbl, name)
        except:
            # If something went wrong just ignore (could not perform
            # operation on this column).
            pass
        else:
            out[name] = value
    return out


out_rows = []
for group in grouped.groups:
    out_rows.append(transform_table_to_row(group, average_weighted))
result = Table(rows=out_rows)

print('transform_table_to_row')
print(result)

Code also at https://gist.github.com/taldcroft/12249ad7eeacbec12f44.
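[Editor's note: the fast `reduceat` path mentioned above can also be used directly on plain arrays sorted by the grouping key, without any per-group Python loop. The sketch below is not code from the thread; it reuses the example table's values to compute the same group-wise weighted averages.]

```python
import numpy as np

# Same example data as the thread's table, as plain arrays.
names = np.array(['a', 'a', 'a', 'b', 'b', 'c'])
value = np.array([1, 2, 3, 4, 5, 6], dtype=float)
weight = np.array([2, 2, 1, 2, 1, 1], dtype=float)

# reduceat needs each group to be contiguous, so sort by the key first.
order = np.argsort(names, kind='mergesort')
keys, starts = np.unique(names[order], return_index=True)

# Group-wise sums computed in the C layer.
wsum = np.add.reduceat((value * weight)[order], starts)
wtot = np.add.reduceat(weight[order], starts)
wavg = wsum / wtot  # weighted average per group, ordered as `keys`
```

For the example data this yields 1.8, 4.333..., and 6.0 for groups 'a', 'b', and 'c', matching the Pandas result earlier in the thread.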
Cheers, Tom On Wed, Jan 20, 2016 at 7:47 AM, Andrew Hearin wrote: > [...] -------------- next part -------------- An HTML attachment was scrubbed... URL: From aldcroft at head.cfa.harvard.edu Wed Jan 20 14:34:37 2016 From: aldcroft at head.cfa.harvard.edu (Aldcroft, Thomas) Date: Wed, 20 Jan 2016 14:34:37 -0500 Subject: [AstroPy] astropy.table.Table groups: aggregate over & combine multiple columns?
In-Reply-To: References: <56994418-7032-4122-AB43-D942F79ED7D5@gmail.com> Message-ID: There is now a working implementation of this if anyone wants to give it a whirl or suggest improvements: https://github.com/astropy/astropy/pull/4513 - Tom On Wed, Jan 20, 2016 at 12:40 PM, Aldcroft, Thomas < aldcroft at head.cfa.harvard.edu> wrote: > [...] -------------- next part -------------- An HTML attachment was scrubbed...
URL: From andrew.hearin at yale.edu Wed Jan 20 16:11:08 2016 From: andrew.hearin at yale.edu (Andrew Hearin) Date: Wed, 20 Jan 2016 16:11:08 -0500 Subject: [AstroPy] astropy.table.Table groups: aggregate over & combine multiple columns? In-Reply-To: References: <56994418-7032-4122-AB43-D942F79ED7D5@gmail.com> Message-ID: Hi Evert, I wrote up a gist of the Numpy solution to the problem that I made passing mention of in my first reply. I have not benchmarked this against Tom's solution, but the solution in the following gist is based on running np.unique on an Astropy Table sorted on the grouping key. It's pretty fast, and it seems to require a bit less code. https://gist.github.com/aphearin/1a125a7ee6cd370740ef I use this pattern all the time in my work; once you see how it works it's pretty straightforward to just copy-and-paste and adapt a couple of lines for your particular problem. When I can, I use the built-in Table aggregation features; when that doesn't work, I do this. The example shows how to compute the stellar mass-weighted average star formation rate on a group-by-group basis for a large fake galaxy table generated at the beginning of the gist. The trick to make it fast is to loop over the memory buffer of the data in the Table; otherwise looping over the table elements directly is orders of magnitude slower. Cheers, Andrew On Wed, Jan 20, 2016 at 2:34 PM, Aldcroft, Thomas < aldcroft at head.cfa.harvard.edu> wrote:
> [...] -------------- next part -------------- An HTML attachment was scrubbed... URL: From evert.rol at gmail.com Wed Jan 20 23:22:10 2016 From: evert.rol at gmail.com (Evert Rol) Date: Thu, 21 Jan 2016 15:22:10 +1100 Subject: [AstroPy] astropy.table.Table groups: aggregate over & combine multiple columns? In-Reply-To: References: <56994418-7032-4122-AB43-D942F79ED7D5@gmail.com> Message-ID: Hi, thanks for the great responses! In particular with a gist and a PR. Seems like there's certainly interest in this. I realise the example is overly simplistic and can be solved (better) differently, but it was indeed just meant as an example of the underlying issue. I hadn't thought about iterating over the groups (Andrew's gist); good to keep in mind. I had a look at Thomas's PR, but it feels slightly off somehow. I guess I'm rather partial to the way Pandas handles this, where the (row-sliced) dataframe gets passed to the user function.
It's then up to the user to return a new and correct (single-row) dataframe, but it easily allows fancy tricks with e.g. (boolean) indexing with yet another column, or even using the value of the key column(s) that have been aggregated over. Since I already had half a mind to make a PR (if there were interest and I hadn't overlooked the obvious), I've now implemented a PR[1] which somewhat mimics the Pandas one (in a more simplistic manner), for thoughts and comparison with Thomas's PR. Since the previous sentence steers this topic in the direction of the astropy development mailing list, I'll leave the discussion on an implementation to GitHub. In short, it seems there is no neat way to do this directly in astropy, but it might be coming soon. Thanks again, Evert [1] https://github.com/astropy/astropy/pull/4516 > [...]
> > Cheers, > Andrew > > On Wed, Jan 20, 2016 at 2:34 PM, Aldcroft, Thomas wrote: > There is now a working implementation of this if anyone wants to give it whirl or suggest improvements: > > https://github.com/astropy/astropy/pull/4513 > > - Tom > > > On Wed, Jan 20, 2016 at 12:40 PM, Aldcroft, Thomas wrote: > Excellent question indeed. > > The first quick comment I have is to always be aware that directly using the functions np.sum and np.mean in aggregation will be orders of magnitude faster than calling the `average()` function that was defined in the original post. That is because in those special cases the numpy `reduceat` method is called and everything gets done in the C layer. Thus Andrew's suggestion for a workaround in this case is the right way to go if the data tables are large. > > About the more generalized problem of getting access to the other table columns within the aggregation function, that is unfortunately not possible in the current release code. I have an idea for doing this, which is now an astropy issue (https://github.com/astropy/astropy/issues/4513). > > As for what to do right now with astropy 1.1, the following illustrates how to do generalized aggregation in the way that is needed for this example. It will be relatively slow and possibly memory intensive, but if the tables are not huge that won't be a problem: > > from __future__ import division, print_function > from astropy import table > from astropy.table import Table > from collections import OrderedDict > > t = Table([['a', 'a', 'a', 'b', 'b', 'c'], > [1, 2, 3, 4, 5, 6], > [2, 2, 1, 2, 1, 1]], > names=('name', 'value', 'weight')) > > grouped = t.group_by('name') > > > def transform_table(tbl): > """ > Generalized function that takes table ``tbl`` as input > and returns a new Table ``out``. Note that ``out`` does not > necessarily need to have the same types or columns. > > The example is just the identity transform. 
Be aware that > in-place operations will affect the input table. > """ > out = tbl > return out > > out_tables = [] > for group in grouped.groups: > out_tables.append(transform_table(group)) > result = table.vstack(out_tables) > print('transform_table') > print(result) > print() > > > def average_weighted(tbl, name): > col = tbl[name] > if name == 'weight': > value = col.sum() > else: > weight = tbl['weight'] > value = (col * weight).sum() / weight.sum() > > return value > > > def transform_table_to_row(tbl, func): > """ > Generalized function that takes table ``tbl`` as input > and returns a new table row as an OrderedDict. It applies > function ``func`` to each column. > > The example computes the weighted average of each field (where > possible) assuming the weights are in column ``weight``. > """ > out = OrderedDict() > for name in t.colnames: > try: > value = func(tbl, name) > except: > # If something went wrong just ignore (could not perform > # operation on this column). > pass > else: > out[name] = value > return out > > > out_rows = [] > for group in grouped.groups: > out_rows.append(transform_table_to_row(group, average_weighted)) > result = Table(rows=out_rows) > > print('transform_table_to_row') > print(result) > > Code also at https://gist.github.com/taldcroft/12249ad7eeacbec12f44. > > Cheers, > Tom > > On Wed, Jan 20, 2016 at 7:47 AM, Andrew Hearin wrote: > Hi Evert, > > Great question, I'm also really interested to hear the answer to this. I always use the built-in table aggregation functions when possible, but sometimes end up writing my own Numpy calculation for more complicated examples (using np.unique and/or np.searchsorted). > > For computing a group-wise weighted average, there is a way you can recast your problem that allows you to use the existing astropy built-in: just create a new column that is the product of your second and third columns, ' and then use aggregate(average) in the normal way on this new column. 
> > So I *think* that gives an answer to the specific example you gave, but it dodges the real question, which I am also interested to hear the experts weigh in on. > > Andrew > > On Tue, Jan 19, 2016 at 11:52 PM, Evert Rol wrote: > Is there a way in an astropy table to run the TableGroups aggregate function on multiple columns at once? > > In this specific case, I'd like to group by names in one column, and then average the second column weighted by values in the third column. > An example would be: > > from astropy.table import Table > > def average(col): > # Manipulate multiple columns at once? > return col.mean() > > t = Table([['a', 'a', 'a', 'b', 'b', 'c'], > [1, 2, 3, 4, 5, 6], > [2, 2, 1, 2, 1, 1]], > names=('name', 'value', 'weight')) > group = t.group_by('name') > result = group.groups.aggregate(average) > print(result) > > which gives > > name value weight > ---- ----- ------------- > a 2.0 1.66666666667 > b 4.5 1.5 > c 6.0 1.0 > > which is not what I want. > > > In Pandas, this can be done with apply() on a groupby object, since that passes the relevant subsection of the dataframe as input to the function. > So I can write: > > def average_pd(df): > weight = df['weight'] > total = weight.sum() > df['value'] *= weight / total > df['value'] = df['value'].sum() > df['weight'] = total # for info; not necessary > return df.iloc[0] # ignore other rows: they are the same anyway > > df = t.to_pandas() > result = df.groupby('name')[['value', 'weight']].apply(average_pd) > print(result) > > which gives: > > value weight > name > a 1.800000 5 > b 4.333333 3 > c 6.000000 1 > > and 'value' consists of weighted averages.
> > (code also on https://gist.github.com/evertrol/12955a5d98edf055a2f4) > > > Perhaps I overlooked some documentation, but I can't find if this can be done in astropy.table. Or do I just need to approach this differently? > Alternatively, should I convert & stick to Pandas for this type of functionality? > > > Evert > > > > > > > _______________________________________________ > AstroPy mailing list > AstroPy at scipy.org > https://mail.scipy.org/mailman/listinfo/astropy > > > _______________________________________________ > AstroPy mailing list > AstroPy at scipy.org > https://mail.scipy.org/mailman/listinfo/astropy > > > > > _______________________________________________ > AstroPy mailing list > AstroPy at scipy.org > https://mail.scipy.org/mailman/listinfo/astropy > > > _______________________________________________ > AstroPy mailing list > AstroPy at scipy.org > https://mail.scipy.org/mailman/listinfo/astropy From j.allen at physics.usyd.edu.au Thu Jan 21 00:13:24 2016 From: j.allen at physics.usyd.edu.au (James Allen) Date: Thu, 21 Jan 2016 16:13:24 +1100 Subject: [AstroPy] WCS exception due to header keyword PLATEID Message-ID: Hi, I'm having a problem with WCS() raising an exception for what looks to me like a perfectly valid FITS header.
The issue appears to be that the header includes a PLATEID, which in this case is completely unrelated to the coordinates. When I delete that card from the header, WCS() works fine. Is astropy interpreting PLATEID as being related to distortion in some way? I couldn't find it in the FITS WCS specifications anywhere. Is there any way to get it to ignore this card without deleting it? Obviously I could just pass the first N lines of the header to WCS(), but that seems pretty fragile. The header [with irrelevant lines replaced by ...] and exception are copied below. I'm using v1.1.1 of astropy in Python 2.7. Thanks, James In [52]: header = fits.getheader('10000089/10000089_red_4.fits') In [53]: header Out[53]: SIMPLE = T / conforms to FITS standard BITPIX = -64 / array data type NAXIS = 3 / number of array dimensions NAXIS1 = 50 NAXIS2 = 50 NAXIS3 = 2048 EXTEND = T WCSAXES = 3 / Number of coordinate axes CRPIX1 = 25.5 / Pixel coordinate of reference point CRPIX2 = 25.5 / Pixel coordinate of reference point CRPIX3 = 1024.0 / Pixel coordinate of reference point CDELT1 = -0.000138888888889 / [deg] Coordinate increment at reference point CDELT2 = 0.000138888888889 / [deg] Coordinate increment at reference point CDELT3 = 0.568812591597500 / [m] Coordinate increment at reference point CUNIT1 = 'deg ' / Units of coordinate increment and value CUNIT2 = 'deg ' / Units of coordinate increment and value CUNIT3 = 'Angstrom' / Units of coordinate increment and value CTYPE1 = 'RA---TAN' / Right ascension, gnomonic projection CTYPE2 = 'DEC--TAN' / Declination, gnomonic projection CTYPE3 = 'AWAV' / Air wavelength (linear) CRVAL1 = 175.104583000 / [deg] Coordinate value at reference point CRVAL2 = -0.828278000 / [deg] Coordinate value at reference point CRVAL3 = 6843.014421829 / [m] Coordinate value at reference point ... PLATEID = 'Y13SAR1_P015_12T014_15T023' / Plate ID (from config file) ... 
In [54]: WCS(header) ERROR: MemoryError: NAXES was not set (or bad) for distortion on axis 3 [astropy.wcs.wcs] --------------------------------------------------------------------------- MemoryError Traceback (most recent call last) in () ----> 1 WCS(header) /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/astropy/wcs/wcs.pyc in __init__(self, header, fobj, key, minerr, relax, naxis, keysel, colsel, fix, translate_units, _do_set) 419 tmp_wcsprm = _wcs.Wcsprm(header=tmp_header_bytes, key=key, 420 relax=relax, keysel=keysel_flags, --> 421 colsel=colsel, warnings=False) 422 except _wcs.NoWcsKeywordsFoundError: 423 est_naxis = 0 MemoryError: NAXES was not set (or bad) for distortion on axis 3 In [55]: del header['PLATEID'] In [56]: WCS(header) Out[56]: WCS Keywords Number of WCS axes: 3 CTYPE : 'RA---TAN' 'DEC--TAN' 'AWAV' CRVAL : 175.10458299999999 -0.82827799999999996 6.8430144218290003e-07 CRPIX : 25.5 25.5 1024.0 PC1_1 PC1_2 PC1_3 : 1.0 0.0 0.0 PC2_1 PC2_2 PC2_3 : 0.0 1.0 0.0 PC3_1 PC3_2 PC3_3 : 0.0 0.0 1.0 CDELT : -0.00013888888888899999 0.00013888888888899999 5.6881259159749997e-11 NAXIS : 50 50 -------------- next part -------------- An HTML attachment was scrubbed... URL: From demitri.muna at gmail.com Thu Jan 21 10:35:49 2016 From: demitri.muna at gmail.com (Demitri Muna) Date: Thu, 21 Jan 2016 10:35:49 -0500 Subject: [AstroPy] WCS exception due to header keyword PLATEID In-Reply-To: References: Message-ID: <88D4A0FA-596E-4A9A-94FE-FD4CF7D3E9CD@gmail.com> Hi James, On Jan 21, 2016, at 12:13 AM, James Allen wrote: > I'm having a problem with WCS() raising an exception for what looks to me like a perfectly valid FITS header. The issue appears to be that the header includes a PLATEID, which in this case is completely unrelated to the coordinates. When I delete that card from the header, WCS() works fine. I looked for 'plateid' in the Astropy sources. It does look like the keyword is interfering.
I'll leave it to you to poke around these hits further: % grep -ri plateid * cextern/wcslib/C/flexed/wcspih.c:18348: keyname = "PLATEID"; cextern/wcslib/C/flexed/wcspih.c:20945: sprintf(wcsp->wcsname, "DSS PLATEID %.4s", (char *)(dsstmp+13)); cextern/wcslib/C/wcspih.l:1020:^PLATEID" " { cextern/wcslib/C/wcspih.l:1028: keyname = "PLATEID"; cextern/wcslib/C/wcspih.l:2290: sprintf(wcsp->wcsname, "DSS PLATEID %.4s", (char *)(dsstmp+13)); This is in wcspih.l: ^PLATEID" " { /* DSS: plate identification. */ valtype = STRING; distype = SEQUENT; vptr = dsstmp+13; dssflag = 2; distran = DSS; keyname = "PLATEID"; BEGIN(VALUE); } I'd consider this a bug - as the plate designer for SDSS, I don't think it's reasonable to assume that any field that begins with 'PLATEID' indicates a DSS plate. It's likely that deleting that specific keyword is your best workaround for the time being (not selecting the first N rows as you note). Someone else can suggest how best to propagate this issue. You know what the FITS format needs? The concept of a namespace. Cheers, Demitri _________________________________________ Demitri Muna http://muna.com Department of Astronomy Il Ohio State University My Projects: http://nightlightapp.io http://trillianverse.org http://scicoder.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From pweilbacher at aip.de Thu Jan 21 10:43:55 2016 From: pweilbacher at aip.de (Peter Weilbacher) Date: Thu, 21 Jan 2016 16:43:55 +0100 (CET) Subject: [AstroPy] WCS exception due to header keyword PLATEID In-Reply-To: <88D4A0FA-596E-4A9A-94FE-FD4CF7D3E9CD@gmail.com> References: <88D4A0FA-596E-4A9A-94FE-FD4CF7D3E9CD@gmail.com> Message-ID: On Thu, 21 Jan 2016, Demitri Muna wrote: > You know what the FITS format needs? The concept of a namespace. You mean like the HIERARCH Keyword Convention (http://fits.gsfc.nasa.gov/registry/hierarch_keyword.html)? Peter. -- Dr. Peter M.
Weilbacher http://www.aip.de/People/PWeilbacher Phone +49 331 74 99-667 encryption key ID 7D6B4AA0 ------------------------------------------------------------------------ Leibniz-Institut für Astrophysik Potsdam (AIP) An der Sternwarte 16, D-14482 Potsdam Vorstand: Prof. Dr. Matthias Steinmetz, Matthias Winker Stiftung bürgerlichen Rechts, Stiftungsverz. Brandenburg: 26 742-00/7026 From demitri.muna at gmail.com Thu Jan 21 11:11:32 2016 From: demitri.muna at gmail.com (Demitri Muna) Date: Thu, 21 Jan 2016 11:11:32 -0500 Subject: [AstroPy] WCS exception due to header keyword PLATEID In-Reply-To: References: <88D4A0FA-596E-4A9A-94FE-FD4CF7D3E9CD@gmail.com> Message-ID: <85251363-2168-49B8-A6D8-3D81DCC2ADC7@gmail.com> On Jan 21, 2016, at 10:43 AM, Peter Weilbacher wrote: > You mean like the HIERARCH Keyword Convention > (http://fits.gsfc.nasa.gov/registry/hierarch_keyword.html)? I'm familiar with it. It's clearly a hack. I'm talking about first class support. _________________________________________ Demitri Muna http://muna.com Department of Astronomy Der Ohio State University My Projects: http://nightlightapp.io http://trillianverse.org http://scicoder.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From j.allen at physics.usyd.edu.au Thu Jan 21 21:58:21 2016 From: j.allen at physics.usyd.edu.au (James Allen) Date: Fri, 22 Jan 2016 13:58:21 +1100 Subject: [AstroPy] WCS exception due to header keyword PLATEID In-Reply-To: <6771cf512a7347928d9f9a69a8012213@EX-TPR-PRO-03.mcs.usyd.edu.au> References: <88D4A0FA-596E-4A9A-94FE-FD4CF7D3E9CD@gmail.com> <6771cf512a7347928d9f9a69a8012213@EX-TPR-PRO-03.mcs.usyd.edu.au> Message-ID: Thanks Demitri. I agree that it would be better to allow the PLATEID keyword, as it's not listed in the FITS WCS specification so you need some fairly obscure knowledge to know to avoid it.
But from my poking I couldn't be sure if this is an issue with astropy.wcs or wcslib, so I've emailed the maintainers of the packages directly for their opinions. Cheers, James On Fri, Jan 22, 2016 at 3:11 AM, Demitri Muna wrote: > > On Jan 21, 2016, at 10:43 AM, Peter Weilbacher wrote: > > You mean like the HIERARCH Keyword Convention > (http://fits.gsfc.nasa.gov/registry/hierarch_keyword.html)? > > > I'm familiar with it. It's clearly a hack. I'm talking about first class > support. > > _________________________________________ > Demitri Muna > http://muna.com > > Department of Astronomy > Der Ohio State University > > My Projects: > http://nightlightapp.io > http://trillianverse.org > http://scicoder.org > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hack at stsci.edu Fri Jan 22 11:44:16 2016 From: hack at stsci.edu (Warren Hack) Date: Fri, 22 Jan 2016 11:44:16 -0500 Subject: [AstroPy] Opportunities at STScI Message-ID: <56A25C60.9020109@stsci.edu> Hello, The Science Software Branch (SSB) at Space Telescope Science Institute (STScI) is currently looking for developers to help us develop astropy and other software we need to support the James Webb Space Telescope (JWST) and, in the near future, the WFIRST-AFTA space telescope. This work will focus primarily on developing the libraries SSB needs to use in creating the calibration and data analysis software for working with JWST data. Those libraries include expanding on many of the packages in astropy, as well as other Python and/or C packages such as matplotlib and numpy. The full official posting for the positions we are trying to fill can be found at: https://rn11.ultipro.com/SPA1004/JobBoard/JobDetails.aspx?__ID=*BC749502A69AE971 This posting will remain active until we fill all our open positions, so please ignore any dates specified in the posting and specify in your cover letter that you are applying for the SSB positions.
Please note that we are only able to hire developers who can meet ITAR requirements (primarily, US citizens or Green Card holders). SSB and STScI strongly encourage women, minorities, veterans and disabled individuals to apply for these opportunities. Sincerely, Warren Hack Science Software Branch Lead (Acting) From dkirkby at uci.edu Tue Jan 26 19:54:28 2016 From: dkirkby at uci.edu (David Kirkby) Date: Wed, 27 Jan 2016 00:54:28 +0000 Subject: [AstroPy] Enhanced CSV format file extension Message-ID: Is there a recommended file extension to use with the ECSV format: https://github.com/astropy/astropy-APEs/blob/master/APE6.rst I see that astropy.io.registry does not auto-identify this format, but it seems like .txt and .csv would both be reasonable choices. David -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephenbailey at lbl.gov Tue Jan 26 20:09:17 2016 From: stephenbailey at lbl.gov (Stephen Bailey) Date: Tue, 26 Jan 2016 17:09:17 -0800 Subject: [AstroPy] Enhanced CSV format file extension In-Reply-To: References: Message-ID: The code examples in APE6 use .ecsv, which seems good for distinguishing it from plain .csv or who-knows-what .txt . Stephen On Tue, Jan 26, 2016 at 4:54 PM, David Kirkby wrote: > Is there a recommended file extension to use with the ECSV format: > > https://github.com/astropy/astropy-APEs/blob/master/APE6.rst > > I see that astropy.io.registry does not auto-identify this format, but it > seems like .txt and .csv would both be reasonable choices. > > David > > > _______________________________________________ > AstroPy mailing list > AstroPy at scipy.org > https://mail.scipy.org/mailman/listinfo/astropy > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From aldcroft at head.cfa.harvard.edu Tue Jan 26 22:48:41 2016 From: aldcroft at head.cfa.harvard.edu (Aldcroft, Thomas) Date: Tue, 26 Jan 2016 22:48:41 -0500 Subject: [AstroPy] Enhanced CSV format file extension In-Reply-To: References: Message-ID: On Tue, Jan 26, 2016 at 8:09 PM, Stephen Bailey wrote: > The code examples in APE6 use .ecsv, which seems good for distinguishing > it from plain .csv or who-knows-what .txt . > Yes, .ecsv was informally settled on as the preferred extension. It's a good idea to not use .csv since that extension is special to astropy and will default to using the "ascii.csv" reader. - Tom > > Stephen > > On Tue, Jan 26, 2016 at 4:54 PM, David Kirkby wrote: > >> Is there a recommended file extension to use with the ECSV format: >> >> https://github.com/astropy/astropy-APEs/blob/master/APE6.rst >> >> I see that astropy.io.registry does not auto-identify this format, but it >> seems like .txt and .csv would both be reasonable choices. >> >> David >> >> >> _______________________________________________ >> AstroPy mailing list >> AstroPy at scipy.org >> https://mail.scipy.org/mailman/listinfo/astropy >> >> > > _______________________________________________ > AstroPy mailing list > AstroPy at scipy.org > https://mail.scipy.org/mailman/listinfo/astropy > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From embray at stsci.edu Wed Jan 27 10:55:05 2016 From: embray at stsci.edu (Erik Bray) Date: Wed, 27 Jan 2016 10:55:05 -0500 Subject: [AstroPy] Enhanced CSV format file extension In-Reply-To: References: Message-ID: <56A8E859.20804@stsci.edu> On 01/26/2016 10:48 PM, Aldcroft, Thomas wrote: > > > On Tue, Jan 26, 2016 at 8:09 PM, Stephen Bailey > wrote: > > The code examples in APE6 use .ecsv, which seems good for distinguishing it > from plain .csv or who-knows-what .txt . > > > Yes, .ecsv was informally settled on as the preferred extension. 
It's a good > idea to not use .csv since that extension is special to astropy and will default > to using the "ascii.csv" reader. That seems like it could be fixed though, right? It wouldn't be too hard to detect likely ECSV by checking if there are comments, and then looking at just the first comment with the ECSV header. I agree though better to use .ecsv to be explicit. Erik > > Stephen > > On Tue, Jan 26, 2016 at 4:54 PM, David Kirkby > wrote: > > Is there a recommended file extension to use with the ECSV format: > > https://github.com/astropy/astropy-APEs/blob/master/APE6.rst > > I see that astropy.io.registry does not auto-identify this format, but > it seems like .txt and .csv would both be reasonable choices. > > David > > > _______________________________________________ > AstroPy mailing list > AstroPy at scipy.org > https://mail.scipy.org/mailman/listinfo/astropy > > > > _______________________________________________ > AstroPy mailing list > AstroPy at scipy.org > https://mail.scipy.org/mailman/listinfo/astropy > > > > > _______________________________________________ > AstroPy mailing list > AstroPy at scipy.org > https://mail.scipy.org/mailman/listinfo/astropy > From dburke.gw at gmail.com Thu Jan 28 12:36:26 2016 From: dburke.gw at gmail.com (Doug Burke) Date: Thu, 28 Jan 2016 17:36:26 +0000 Subject: [AstroPy] ANN: Sherpa v4.8.0 released Message-ID: [It wasn't clear from the docs whether announcements like this are supported on the email list; apologies if not] Dear colleagues, We are very happy to announce the v4.8.0 release of Sherpa, a Python-based fitting and modelling system that has strong support for Astronomy data (in particular, X-ray data). Thanks to Zenodo and GitHub its DOI is http://dx.doi.org/10.5281/zenodo.45243 Sherpa has recently been moved to an open-development model - with development at https://github.com/sherpa/sherpa/ - and this release is provided to match the functionality in the CIAO 4.8.0 release. 
Release notes are available at https://github.com/sherpa/sherpa/releases/tag/4.8.0 and include improved functionality (e.g. the addition of the "wstat" statistic introduced by XSPEC for incorporating a background when using Poisson statistics and improvements to the interface to the XSPEC user models) as well as improvements to the build and organisation of the module (e.g. improved docstrings). We provide Linux and OS-X packages for users of the Anaconda Python Distribution, $ conda config --add channels https://conda.binstar.org/sherpa $ conda install sherpa Or source code can be downloaded from https://github.com/sherpa/sherpa/tags One known issue with this release (which is fixed in the master branch) is that it can not be used with version 1.5 of matplotlib. It is, as far as we know, compatible with recent releases of AstroPy! Further information can be found at http://cxc.harvard.edu/contrib/sherpa/ and we look forward to any input you have. Please feel free to forward this announcement to anyone who you feel would be interested in this release. For the Sherpa core development team, Doug Burke -------------- next part -------------- An HTML attachment was scrubbed... URL: From jslavin at cfa.harvard.edu Fri Jan 29 08:53:18 2016 From: jslavin at cfa.harvard.edu (Slavin, Jonathan) Date: Fri, 29 Jan 2016 08:53:18 -0500 Subject: [AstroPy] ANN: Sherpa v4.8.0 released Message-ID: Hi Doug, I'm happy to hear of the improvements to Sherpa, though I'm disappointed that it's not compatible with matplotlib 1.5 and Numpy 1.10. Do you expect to overcome those incompatibilities soon? I use anaconda, so to use Sherpa at this point, I suppose I'd need to create an environment for it with those different versions of matplotlib and numpy. Regards, Jon
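One way to set up the pinned environment Jon describes (an untested sketch; the channel URL comes from the announcement above, and the environment name `sherpa48` is arbitrary):

```shell
# Create an isolated environment with the older matplotlib that
# Sherpa 4.8.0 expects, then install sherpa from its channel.
conda create -n sherpa48 python=2.7 "matplotlib<1.5" numpy astropy
source activate sherpa48
conda config --add channels https://conda.binstar.org/sherpa
conda install sherpa
```

This keeps the pinned matplotlib out of the default environment, which is the usual reason for a separate env in the first place.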
On Fri, Jan 29, 2016 at 7:00 AM, wrote: > Date: Thu, 28 Jan 2016 17:36:26 +0000 > From: Doug Burke > To: astropy at scipy.org > Cc: Aneta Siemiginowska > Subject: [AstroPy] ANN: Sherpa v4.8.0 released > Message-ID: > vMhA at mail.gmail.com> > Content-Type: text/plain; charset="utf-8" > > [It wasn't clear from the docs whether announcements like this are > supported on the email list; apologies if not] > > Dear colleagues, > > We are very happy to announce the v4.8.0 release of Sherpa, a Python-based > fitting and modelling system that has strong support for Astronomy data (in > particular, X-ray data). Thanks to Zenodo and GitHub its DOI is > http://dx.doi.org/10.5281/zenodo.45243 > > Sherpa has recently been moved to an open-development model - with > development at https://github.com/sherpa/sherpa/ - and this release is > provided to match the functionality in the CIAO 4.8.0 release. > > Release notes are available at > https://github.com/sherpa/sherpa/releases/tag/4.8.0 and include improved > functionality (e.g. the addition of the "wstat" statistic introduced by > XSPEC for incorporating a background when using Poisson statistics and > improvements to the interface to the XSPEC user models) as well as > improvements to the build and organisation of the module (e.g. improved > docstrings). > > We provide Linux and OS-X packages for users of the Anaconda Python > Distribution, > > $ conda config --add channels https://conda.binstar.org/sherpa > $ conda install sherpa > > Or source code can be downloaded from > https://github.com/sherpa/sherpa/tags > > One known issue with this release (which is fixed in the master branch) is > that it can not be used with version 1.5 of matplotlib. It is, as far as we > know, compatible with recent releases of AstroPy! > > Further information can be found at http://cxc.harvard.edu/contrib/sherpa/ > and > we look forward to any input you have. 
Please feel free to forward this > announcement to anyone who you feel would be interested in this release. > > For the Sherpa core development team, > Doug Burke > -- ________________________________________________________ Jonathan D. Slavin Harvard-Smithsonian CfA jslavin at cfa.harvard.edu 60 Garden Street, MS 83 phone: (617) 496-7981 Cambridge, MA 02138-1516 cell: (781) 363-0035 USA ________________________________________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: From dburke.gw at gmail.com Fri Jan 29 09:04:27 2016 From: dburke.gw at gmail.com (Doug Burke) Date: Fri, 29 Jan 2016 14:04:27 +0000 Subject: [AstroPy] ANN: Sherpa v4.8.0 released In-Reply-To: References: Message-ID: Jon, I thought we were compatible with numpy 1.10 - or did I just forget something we said in the announcement? This does raise a question I've just asked Tom Aldcroft: what is the expectation in the Astronomical Python community about version support for numpy, matplotlib, gcc/gfortran, astropy, ... (we're aiming to improve our Travis-CI build matrix to cover a reasonable set of options). The master branch of sherpa supports matplotlib 1.5: this release was just to get something out that matched CIAO, so this fix didn't make it into the 4.8.0 code. The plan is to have a 4.8.1 release April-ish, which will have the matplotlib fix in. Or you could just use the master branch and compile from source - the changes in it since 4.8.0 are mainly clean up and improvements to the build/test infrastructure. There is a little-bit of improved functionality or bug fixes, but nothing likely to affect you (although, of course, standard disclaimer about using an unreleased branch). If neither of these are workable, then I'm afraid it is a case of creating an anaconda environment and then using % conda install sherpa 'matplotlib < 1.5' astropy ... 
Doug On Fri, Jan 29, 2016 at 8:53 AM Slavin, Jonathan wrote: > ?Hi Doug, > > I'm happy to hear of the improvements to Sherpa, though I'm disappointed > that it's not compatible with matplotlib 1.5 and Numpy 1.10. Do you expect > to overcome those incompatibilities soon? I use anaconda, so to use Sherpa > at this point, I suppose I'd need to create an environment for it with > those different versions of matplotlib and numpy. > > Regards, > Jon? > > On Fri, Jan 29, 2016 at 7:00 AM, wrote: > >> Date: Thu, 28 Jan 2016 17:36:26 +0000 >> From: Doug Burke >> To: astropy at scipy.org >> Cc: Aneta Siemiginowska >> Subject: [AstroPy] ANN: Sherpa v4.8.0 released >> Message-ID: >> > vMhA at mail.gmail.com> >> Content-Type: text/plain; charset="utf-8" >> > >> >> [It wasn't clear from the docs whether announcements like this are >> supported on the email list; apologies if not] >> >> Dear colleagues, >> >> We are very happy to announce the v4.8.0 release of Sherpa, a Python-based >> fitting and modelling system that has strong support for Astronomy data >> (in >> particular, X-ray data). Thanks to Zenodo and GitHub its DOI is >> http://dx.doi.org/10.5281/zenodo.45243 >> >> Sherpa has recently been moved to an open-development model - with >> development at https://github.com/sherpa/sherpa/ - and this release is >> provided to match the functionality in the CIAO 4.8.0 release. >> >> Release notes are available at >> https://github.com/sherpa/sherpa/releases/tag/4.8.0 and include improved >> functionality (e.g. the addition of the "wstat" statistic introduced by >> XSPEC for incorporating a background when using Poisson statistics and >> improvements to the interface to the XSPEC user models) as well as >> improvements to the build and organisation of the module (e.g. improved >> docstrings). 
>> >> We provide Linux and OS-X packages for users of the Anaconda Python >> Distribution, >> >> $ conda config --add channels https://conda.binstar.org/sherpa >> $ conda install sherpa >> >> Or source code can be downloaded from >> https://github.com/sherpa/sherpa/tags >> >> One known issue with this release (which is fixed in the master branch) is >> that it can not be used with version 1.5 of matplotlib. It is, as far as >> we >> know, compatible with recent releases of AstroPy! >> >> Further information can be found at >> http://cxc.harvard.edu/contrib/sherpa/ and >> we look forward to any input you have. Please feel free to forward this >> announcement to anyone who you feel would be interested in this release. >> >> For the Sherpa core development team, >> Doug Burke >> > > > > > -- > ________________________________________________________ > Jonathan D. Slavin Harvard-Smithsonian CfA > jslavin at cfa.harvard.edu 60 Garden Street, MS 83 > phone: (617) 496-7981 Cambridge, MA 02138-1516 > cell: (781) 363-0035 USA > ________________________________________________________ > > _______________________________________________ > AstroPy mailing list > AstroPy at scipy.org > https://mail.scipy.org/mailman/listinfo/astropy > -------------- next part -------------- An HTML attachment was scrubbed... URL: From embray at stsci.edu Fri Jan 29 14:00:18 2016 From: embray at stsci.edu (Erik Bray) Date: Fri, 29 Jan 2016 14:00:18 -0500 Subject: [AstroPy] [ANN] PyFITS v3.4.0 released! Message-ID: <56ABB6C2.2020201@stsci.edu> Hi all, It has been a year and a half since the last stand-alone PyFITS release. It wasn't clear if there ever would be one, since PyFITS has been merged into Astropy as the astropy.io.fits module. 
However it was always my plan to do "one last release" of PyFITS before dropping support for it, and here is that release: https://pypi.python.org/pypi/pyfits/3.4 This release includes most enhancements and bug fixes from astropy.io.fits, up through astropy v1.1.2 (which is not yet released). This excludes a few enhancements specific to astropy that took advantage of other code in astropy. So this release of PyFITS should stand truly separate from Astropy. That said, using pyfits over astropy.io.fits is discouraged. To give a little more background on this, I'll quote from the changelog [1]: This will likely be the last stand-alone release of PyFITS that does not depend on Astropy. There are a few reasons for this: 1) Development resources for PyFITS are limited, and better focused on newer projects. 2) Astropy incorporates all features of PyFITS, and has many new features from which future development of the FITS reader/writer can benefit, such as a better table interface, units, and datetime types. Since the most beneficial future development in ``astropy.io.fits`` depends on other parts of Astropy there is less motivation to maintain an independent FITS module. Thanks to everyone who contributed to this release--there were several updates contributed by the community through astropy.io.fits--another advantage of being part of Astropy! Erik Bray P.S. On a personal note, for those who know me, today is my last day at STScI. I've taken a new position at Universite Paris Sud. My new position is still embedded in the open source scientific Python world, so I'm sure many of us will still be in touch, and I intend to remain involved in the Astropy project. [1] https://github.com/spacetelescope/PyFITS/blob/master/CHANGES.txt#L10 From npkuin at gmail.com Fri Jan 29 14:37:27 2016 From: npkuin at gmail.com (Paul Kuin) Date: Fri, 29 Jan 2016 19:37:27 +0000 Subject: [AstroPy] [ANN] PyFITS v3.4.0 released!
In-Reply-To: <56ABB6C2.2020201@stsci.edu> References: <56ABB6C2.2020201@stsci.edu> Message-ID: Hi Erik, P.S. On a personal note, for those who know me, today is my last day at > STScI. I've taken a new position at Universite Paris Sud. My new position > is still embedded in the open source scientific Python world, so I'm sure > many of us will still be in touch, and I intend to remain involved in the > Astropy project. > > All the best wishes with your new job! You've made a difference. Paul * * * * * * * * http://www.mssl.ucl.ac.uk/~npmk/ * * * * Dr. N.P.M. Kuin (n.kuin at ucl.ac.uk) phone +44-(0)1483 (prefix) -204211 (work) mobile +44(0)7806985366 skype ID: npkuin Mullard Space Science Laboratory, University College London, Holmbury St Mary, Dorking, Surrey RH5 6NT, U.K. -------------- next part -------------- An HTML attachment was scrubbed... URL: