From chummels at astro.columbia.edu Fri Sep 2 10:55:58 2011
From: chummels at astro.columbia.edu (Cameron Hummels)
Date: Fri, 02 Sep 2011 10:55:58 -0400
Subject: [AstroPy] yt-2.2 Release Announcement
Message-ID: <4E60EE7E.2090807@astro.columbia.edu>

Release Announcement

(Please feel encouraged to forward this message to any other interested parties.)

We are proud to announce the release of yt version 2.2. This release includes several new features, bug fixes, and numerous improvements to the code base and documentation. The new yt homepage, http://yt-project.org/, provides an installation script, a cookbook, documentation, and a guide to getting involved. We are particularly proud of the new GUI, entitled "Reason," which allows real-time exploration of datasets and which can be used (locally or remotely over SSH) with no dependencies other than a web browser. A basic demonstration of its usage can be found at

yt is a community-developed analysis and visualization toolkit for astrophysical simulation data. yt provides full support for the Enzo, Orion, Nyx, and FLASH codes, with preliminary support for the RAMSES, ART, and Maestro codes. It can be used to create many common types of data products, such as:

* Slices
* Projections
* Profiles
* Arbitrary Data Selection
* Cosmological Analysis
* Halo finding
* Parallel AMR Volume Rendering
* Gravitationally Bound Objects Analysis

There are a few major additions since yt-2.1 (released April 8, 2011), including:

* New web GUI "Reason," designed for efficient remote usage over SSH tunnels
* Command-line submission to the yt Hub (http://hub.yt-project.org/)
* Absorption line spectrum generator for cosmological simulations
* Support for the Nyx code
* An order of magnitude speed improvement in the RAMSES support
* Experimental interoperability with ParaView
* Quad-tree projections, speeding up the process of projecting by up to an order of magnitude and providing better load balancing
* "mapserver" for in-browser, Google Maps-style slice and projection visualization
* Many bug fixes and performance improvements

With this release, we also unveil the yt Hub, an astrophysical simulation-specific location for sharing scripts, analysis and visualization tools, documents, and repositories used to generate publications. The yt Hub has been designed to allow programmatic access from the command line, and we encourage you to browse the current offerings and contribute your own. The yt Hub can be found at http://hub.yt-project.org/.

Documentation: http://yt-project.org/doc/
Installation: http://yt-project.org/doc/advanced/installing.html
Cookbook: http://yt-project.org/doc/cookbook/recipes.html
Get Involved: http://yt-project.org/doc/advanced/developing.html

If you can't wait to get started, install with:

$ wget http://hg.yt-project.org/yt/raw/stable/doc/install_script.sh
$ bash install_script.sh

Development has been sponsored by the NSF, the DOE, and university funding. We invite you to get involved with developing and using yt! Please forward this announcement to interested parties.

Sincerely,

The yt development team

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nicolas-gaudin at laposte.net Tue Sep 6 11:33:05 2011
From: nicolas-gaudin at laposte.net (Nicolas Gaudin)
Date: Tue, 6 Sep 2011 17:33:05 +0200
Subject: [AstroPy] Curious bug in Pywcs?
Message-ID: <201109061733.06135.nicolas-gaudin@laposte.net>

Hi,

I think there is a bug in Pywcs (package pywcs-1.10-4.7.tar.gz, python2.7.1-0ubuntu5, matplotlib0.99.3-1ubuntu4).
A minimal example:

--
import pyfits
import pywcs
from matplotlib import pyplot

fits = pyfits.open('/home/gaudin/thesis/data/things/DDO154_NA_MOM0_THINGS.FITS')
head = fits[0].header
pyplot.figure()
wcs = pywcs.WCS(head)
print wcs.all_pix2sky([[0., 0., 0., 0.],], 0)
--

The .FITS file can be found here: http://www.mpia.de/THINGS/Data.html. It prints (always?) the wrong result:

[[ 17.74922799 -14.30123514 1. 29. ]]

If I comment out the line pyplot.figure(), I get the correct result:

[[ 1.93759669e+02 2.69403525e+01 1.00000000e+00 1.44255606e+09]]

I don't understand why the call pyplot.figure(), only *before* the call to pywcs.WCS(head), produces a bug in pywcs.all_pix2sky.

Could you confirm this strange behaviour? Is it a bug?

From embray at stsci.edu Tue Sep 6 11:54:54 2011
From: embray at stsci.edu (Erik Bray)
Date: Tue, 6 Sep 2011 11:54:54 -0400
Subject: [AstroPy] Curious bug in Pywcs?
In-Reply-To: <201109061733.06135.nicolas-gaudin@laposte.net>
References: <201109061733.06135.nicolas-gaudin@laposte.net>
Message-ID: <4E66424E.1060403@stsci.edu>

On 09/06/2011 11:33 AM, Nicolas Gaudin wrote:
> Hi,
>
> I think there is a bug in Pywcs (package pywcs-1.10-4.7.tar.gz,
> python2.7.1-0ubuntu5, matplotlib0.99.3-1ubuntu4). A minimal example:
>
> --
> import pyfits
> import pywcs
> from matplotlib import pyplot
>
> fits = pyfits.open('/home/gaudin/thesis/data/things/DDO154_NA_MOM0_THINGS.FITS')
> head = fits[0].header
> pyplot.figure()
> wcs = pywcs.WCS(head)
> print wcs.all_pix2sky([[0., 0., 0., 0.],], 0)
> --
>
> The .FITS file can be found here: http://www.mpia.de/THINGS/Data.html. It prints
> (always?) the wrong result:
> [[ 17.74922799 -14.30123514 1. 29. ]]
>
> If I comment out the line pyplot.figure(), I get the correct result:
> [[ 1.93759669e+02 2.69403525e+01 1.00000000e+00 1.44255606e+09]]
>
> I don't understand why the call pyplot.figure(), only *before* the call to
> pywcs.WCS(head), produces a bug in pywcs.all_pix2sky.
>
> Could you confirm this strange behaviour? Is it a bug?

I can't offer any answers on this, unfortunately. But for what it's worth I can't reproduce this behavior. I tried it with PyFITS 2.3.2 and PyFITS from trunk, pywcs 1.9 and 1.10, and matplotlib 1.0.

Erik B.

From mdroe at stsci.edu Tue Sep 6 20:09:19 2011
From: mdroe at stsci.edu (Michael Droettboom)
Date: Tue, 6 Sep 2011 20:09:19 -0400
Subject: [AstroPy] Curious bug in Pywcs?
In-Reply-To: <4E66424E.1060403@stsci.edu>
References: <201109061733.06135.nicolas-gaudin@laposte.net> <4E66424E.1060403@stsci.edu>
Message-ID: <4E66B62F.9050205@stsci.edu>

I'm also having trouble reproducing this, even with the exact versions you specify. What version of numpy are you running? Which matplotlib backend are you using?

One thing that might be useful for getting to the bottom of this is the output from valgrind:

  valgrind --tool=memcheck --leak-check=yes python script.py

(where script.py is the name of the minimal example script you provided).

Mike

On 09/06/2011 11:54 AM, Erik Bray wrote:
> On 09/06/2011 11:33 AM, Nicolas Gaudin wrote:
>> Hi,
>>
>> I think there is a bug in Pywcs (package pywcs-1.10-4.7.tar.gz,
>> python2.7.1-0ubuntu5, matplotlib0.99.3-1ubuntu4).
A minimal exemple : >> >> -- >> import pyfits >> import pywcs >> from matplotlib import pyplot >> >> fits = pyfits.open('/home/gaudin/thesis/data/things/DDO154_NA_MOM0_THINGS.FITS') >> head = fits[0].header >> pyplot.figure() >> wcs = pywcs.WCS(head) >> print wcs.all_pix2sky([[0., 0., 0., 0.],], 0) >> -- >> >> The .FITS can be found here: http://www.mpia.de/THINGS/Data.html, it prints >> (always? the bad result): >> [[ 17.74922799 -14.30123514 1. 29. ]] >> >> If I comment the line pyplot.figure() I have the good result: >> [[ 1.93759669e+02 2.69403525e+01 1.00000000e+00 1.44255606e+09]] >> >> I don't understand why the call pyplot.figure(), only *before* the call to >> pywcs.WCS(head), produces a bug in pywcs.all_pix2sky. >> >> Could you confirm this strange behaviour? Is it a bug? > I can't offer any answers on this, unfortunately. But for what it's > worth I can't reproduce this behavior. I tried it with PyFITS 2.3.2 and > PyFITS from trunk, pywcs 1.9 and 1.10, and matplotlib 1.0. > > Erik B. > _______________________________________________ > AstroPy mailing list > AstroPy at scipy.org > http://mail.scipy.org/mailman/listinfo/astropy From nicolas-gaudin at laposte.net Wed Sep 7 04:19:42 2011 From: nicolas-gaudin at laposte.net (Nicolas Gaudin) Date: Wed, 7 Sep 2011 10:19:42 +0200 Subject: [AstroPy] Curious bug in Pywcs? In-Reply-To: <4E66B62F.9050205@stsci.edu> References: <201109061733.06135.nicolas-gaudin@laposte.net> <4E66424E.1060403@stsci.edu> <4E66B62F.9050205@stsci.edu> Message-ID: <201109071019.42138.nicolas-gaudin@laposte.net> Thank you for your tests. I was at home, with packages from debian testing (python 2.6 and matplotlib-1.0.1-3) and I reproduce the bug. I use Qt4Agg. Indeed, GTKAgg reproduces the bug but Agg and TkAgg don't. So, I will switch to another backend. The output of Valgrind is verbose, I will print only the last lines. I've used the option --leak-check=full. For a working backend (TkAgg): [...] ==15463== 786,432 bytes in 3 blocks are still reachable in loss record 2,167 of 2,169 ==15463== at 0x4025018: malloc (in /usr/lib/valgrind/vgpreload_memcheck- x86-linux.so) ==15463== by 0x8096176: PyObject_Malloc (in /usr/bin/python2.6) ==15463== by 0x809652F: PyObject_Realloc (in /usr/bin/python2.6) ==15463== by 0x8157AA4: PyNode_AddChild (in /usr/bin/python2.6) ==15463== by 0x8157EBE: PyParser_AddToken (in /usr/bin/python2.6) ==15463== by 0x805E81E: ??? (in /usr/bin/python2.6) ==15463== by 0x80FAE9E: PyParser_ASTFromFile (in /usr/bin/python2.6) ==15463== by 0x80F0321: ??? (in /usr/bin/python2.6) ==15463== by 0x80F1520: ??? (in /usr/bin/python2.6) ==15463== by 0x80F179F: ??? (in /usr/bin/python2.6) ==15463== by 0x80F1D75: ??? (in /usr/bin/python2.6) ==15463== by 0x8269003: ??? (in /usr/bin/python2.6) ==15463== ==15463== 786,432 bytes in 3 blocks are still reachable in loss record 2,168 of 2,169 ==15463== at 0x4025018: malloc (in /usr/lib/valgrind/vgpreload_memcheck- x86-linux.so) ==15463== by 0x8096176: PyObject_Malloc (in /usr/bin/python2.6) ==15463== by 0x805E7CA: ??? (in /usr/bin/python2.6) ==15463== by 0x80FAE9E: PyParser_ASTFromFile (in /usr/bin/python2.6) ==15463== by 0x80F0321: ??? (in /usr/bin/python2.6) ==15463== by 0x80F1520: ??? (in /usr/bin/python2.6) ==15463== by 0x80F179F: ??? (in /usr/bin/python2.6) ==15463== by 0x80F1D75: ??? (in /usr/bin/python2.6) ==15463== by 0x8269003: ??? 
(in /usr/bin/python2.6) ==15463== ==15463== 1,048,576 bytes in 4 blocks are still reachable in loss record 2,169 of 2,169 ==15463== at 0x4025018: malloc (in /usr/lib/valgrind/vgpreload_memcheck- x86-linux.so) ==15463== by 0x8096176: PyObject_Malloc (in /usr/bin/python2.6) ==15463== by 0x8157AA4: PyNode_AddChild (in /usr/bin/python2.6) ==15463== by 0x8157EBE: PyParser_AddToken (in /usr/bin/python2.6) ==15463== by 0x805E81E: ??? (in /usr/bin/python2.6) ==15463== by 0x80FAE9E: PyParser_ASTFromFile (in /usr/bin/python2.6) ==15463== by 0x80F0321: ??? (in /usr/bin/python2.6) ==15463== by 0x80F1520: ??? (in /usr/bin/python2.6) ==15463== by 0x80F179F: ??? (in /usr/bin/python2.6) ==15463== by 0x80F1D75: ??? (in /usr/bin/python2.6) ==15463== by 0x8269003: ??? (in /usr/bin/python2.6) ==15463== ==15463== LEAK SUMMARY: ==15463== definitely lost: 772 bytes in 10 blocks ==15463== indirectly lost: 860 bytes in 46 blocks ==15463== possibly lost: 1,524,749 bytes in 676 blocks ==15463== still reachable: 11,457,411 bytes in 9,550 blocks ==15463== suppressed: 0 bytes in 0 blocks ==15463== ==15463== For counts of detected and suppressed errors, rerun with: -v ==15463== Use --track-origins=yes to see where uninitialised values come from ==15463== ERROR SUMMARY: 13488 errors from 605 contexts (suppressed: 266 from 12) And with Qt4Agg: [...] ==15478== 786,432 bytes in 3 blocks are still reachable in loss record 2,585 of 2,588 ==15478== at 0x4025018: malloc (in /usr/lib/valgrind/vgpreload_memcheck- x86-linux.so) ==15478== by 0x8096176: PyObject_Malloc (in /usr/bin/python2.6) ==15478== by 0x805E7CA: ??? (in /usr/bin/python2.6) ==15478== by 0x80FAE9E: PyParser_ASTFromFile (in /usr/bin/python2.6) ==15478== by 0x80F0321: ??? (in /usr/bin/python2.6) ==15478== by 0x80F1520: ??? (in /usr/bin/python2.6) ==15478== by 0x80F179F: ??? (in /usr/bin/python2.6) ==15478== by 0x80F1D75: ??? (in /usr/bin/python2.6) ==15478== by 0x8269003: ??? (in /usr/bin/python2.6) ==15478== ==15478== 1,001,096 bytes in 1 blocks are possibly lost in loss record 2,586 of 2,588 ==15478== at 0x4024604: operator new[](unsigned int) (in /usr/lib/valgrind/vgpreload_memcheck-x86-linux.so) ==15478== by 0x7FED313: RendererAgg::RendererAgg(unsigned int, unsigned int, double, int) (in /usr/lib/pyshared/python2.6/matplotlib/backends/_backend_agg.so) ==15478== by 0x3801573F: ??? (in /usr/lib/valgrind/memcheck-x86-linux) ==15478== ==15478== 1,048,576 bytes in 4 blocks are still reachable in loss record 2,587 of 2,588 ==15478== at 0x4025018: malloc (in /usr/lib/valgrind/vgpreload_memcheck- x86-linux.so) ==15478== by 0x8096176: PyObject_Malloc (in /usr/bin/python2.6) ==15478== by 0x8157AA4: PyNode_AddChild (in /usr/bin/python2.6) ==15478== by 0x8157EBE: PyParser_AddToken (in /usr/bin/python2.6) ==15478== by 0x805E81E: ??? (in /usr/bin/python2.6) ==15478== by 0x80FAE9E: PyParser_ASTFromFile (in /usr/bin/python2.6) ==15478== by 0x80F0321: ??? (in /usr/bin/python2.6) ==15478== by 0x80F1520: ??? (in /usr/bin/python2.6) ==15478== by 0x80F179F: ??? (in /usr/bin/python2.6) ==15478== by 0x80F1D75: ??? (in /usr/bin/python2.6) ==15478== by 0x8269003: ??? (in /usr/bin/python2.6) ==15478== ==15478== 1,572,864 bytes in 1 blocks are still reachable in loss record 2,588 of 2,588 ==15478== at 0x4025018: malloc (in /usr/lib/valgrind/vgpreload_memcheck- x86-linux.so) ==15478== by 0x808E8E3: ??? 
(in /usr/bin/python2.6) ==15478== by 0x809E61A: PyString_InternInPlace (in /usr/bin/python2.6) ==15478== by 0x80921C2: PyDict_SetItemString (in /usr/bin/python2.6) ==15478== by 0xA2A8ACD: ??? (in /usr/lib/python2.6/dist-packages/sip.so) ==15478== by 0xA2A8E9E: ??? (in /usr/lib/python2.6/dist-packages/sip.so) ==15478== by 0xA2A8FF4: ??? (in /usr/lib/python2.6/dist-packages/sip.so) ==15478== by 0x80D7035: PyEval_EvalFrameEx (in /usr/bin/python2.6) ==15478== by 0x80DBB26: PyEval_EvalCodeEx (in /usr/bin/python2.6) ==15478== by 0x80DBC36: PyEval_EvalCode (in /usr/bin/python2.6) ==15478== by 0x80F0128: PyImport_ExecCodeModuleEx (in /usr/bin/python2.6) ==15478== by 0x80F047B: ??? (in /usr/bin/python2.6) ==15478== ==15478== LEAK SUMMARY: ==15478== definitely lost: 700 bytes in 12 blocks ==15478== indirectly lost: 766 bytes in 34 blocks ==15478== possibly lost: 1,368,637 bytes in 644 blocks ==15478== still reachable: 12,846,296 bytes in 12,562 blocks ==15478== suppressed: 0 bytes in 0 blocks ==15478== ==15478== For counts of detected and suppressed errors, rerun with: -v ==15478== Use --track-origins=yes to see where uninitialised values come from ==15478== ERROR SUMMARY: 13064 errors from 570 contexts (suppressed: 686 from 12) From mdroe at stsci.edu Wed Sep 7 10:21:51 2011 From: mdroe at stsci.edu (Michael Droettboom) Date: Wed, 7 Sep 2011 10:21:51 -0400 Subject: [AstroPy] Curious bug in Pywcs? In-Reply-To: <201109071019.42138.nicolas-gaudin@laposte.net> References: <201109061733.06135.nicolas-gaudin@laposte.net> <4E66424E.1060403@stsci.edu> <4E66B62F.9050205@stsci.edu> <201109071019.42138.nicolas-gaudin@laposte.net> Message-ID: <4E677DFF.7050904@stsci.edu> It's a helpful data point that some backends work while others don't, but frustratingly, I am still unable to reproduce this myself using the Qt4Agg or GtkAgg backends. Can you send the full valgrind log to me (off-list)? There's nothing particularly special about the last entries -- it's more than likely the memory corruption occurs earlier than that. Mike On 09/07/2011 04:19 AM, Nicolas Gaudin wrote: > Thank you for your tests. > > I was at home, with packages from debian testing (python 2.6 and > matplotlib-1.0.1-3) and I reproduce the bug. I use Qt4Agg. Indeed, GTKAgg > reproduces the bug but Agg and TkAgg don't. So, I will switch to another > backend. > > The output of Valgrind is verbose, I will print only the last lines. I've used > the option --leak-check=full. > > For a working backend (TkAgg): > > [...] > ==15463== 786,432 bytes in 3 blocks are still reachable in loss record 2,167 > of 2,169 > ==15463== at 0x4025018: malloc (in /usr/lib/valgrind/vgpreload_memcheck- > x86-linux.so) > ==15463== by 0x8096176: PyObject_Malloc (in /usr/bin/python2.6) > ==15463== by 0x809652F: PyObject_Realloc (in /usr/bin/python2.6) > ==15463== by 0x8157AA4: PyNode_AddChild (in /usr/bin/python2.6) > ==15463== by 0x8157EBE: PyParser_AddToken (in /usr/bin/python2.6) > ==15463== by 0x805E81E: ??? (in /usr/bin/python2.6) > ==15463== by 0x80FAE9E: PyParser_ASTFromFile (in /usr/bin/python2.6) > ==15463== by 0x80F0321: ??? (in /usr/bin/python2.6) > ==15463== by 0x80F1520: ??? (in /usr/bin/python2.6) > ==15463== by 0x80F179F: ??? (in /usr/bin/python2.6) > ==15463== by 0x80F1D75: ??? (in /usr/bin/python2.6) > ==15463== by 0x8269003: ??? 
(in /usr/bin/python2.6) > ==15463== > ==15463== 786,432 bytes in 3 blocks are still reachable in loss record 2,168 > of 2,169 > ==15463== at 0x4025018: malloc (in /usr/lib/valgrind/vgpreload_memcheck- > x86-linux.so) > ==15463== by 0x8096176: PyObject_Malloc (in /usr/bin/python2.6) > ==15463== by 0x805E7CA: ??? (in /usr/bin/python2.6) > ==15463== by 0x80FAE9E: PyParser_ASTFromFile (in /usr/bin/python2.6) > ==15463== by 0x80F0321: ??? (in /usr/bin/python2.6) > ==15463== by 0x80F1520: ??? (in /usr/bin/python2.6) > ==15463== by 0x80F179F: ??? (in /usr/bin/python2.6) > ==15463== by 0x80F1D75: ??? (in /usr/bin/python2.6) > ==15463== by 0x8269003: ??? (in /usr/bin/python2.6) > ==15463== > ==15463== 1,048,576 bytes in 4 blocks are still reachable in loss record 2,169 > of 2,169 > ==15463== at 0x4025018: malloc (in /usr/lib/valgrind/vgpreload_memcheck- > x86-linux.so) > ==15463== by 0x8096176: PyObject_Malloc (in /usr/bin/python2.6) > ==15463== by 0x8157AA4: PyNode_AddChild (in /usr/bin/python2.6) > ==15463== by 0x8157EBE: PyParser_AddToken (in /usr/bin/python2.6) > ==15463== by 0x805E81E: ??? (in /usr/bin/python2.6) > ==15463== by 0x80FAE9E: PyParser_ASTFromFile (in /usr/bin/python2.6) > ==15463== by 0x80F0321: ??? (in /usr/bin/python2.6) > ==15463== by 0x80F1520: ??? (in /usr/bin/python2.6) > ==15463== by 0x80F179F: ??? (in /usr/bin/python2.6) > ==15463== by 0x80F1D75: ??? (in /usr/bin/python2.6) > ==15463== by 0x8269003: ??? (in /usr/bin/python2.6) > ==15463== > ==15463== LEAK SUMMARY: > ==15463== definitely lost: 772 bytes in 10 blocks > ==15463== indirectly lost: 860 bytes in 46 blocks > ==15463== possibly lost: 1,524,749 bytes in 676 blocks > ==15463== still reachable: 11,457,411 bytes in 9,550 blocks > ==15463== suppressed: 0 bytes in 0 blocks > ==15463== > ==15463== For counts of detected and suppressed errors, rerun with: -v > ==15463== Use --track-origins=yes to see where uninitialised values come from > ==15463== ERROR SUMMARY: 13488 errors from 605 contexts (suppressed: 266 from > 12) > > And with Qt4Agg: > > [...] > ==15478== 786,432 bytes in 3 blocks are still reachable in loss record 2,585 > of 2,588 > ==15478== at 0x4025018: malloc (in /usr/lib/valgrind/vgpreload_memcheck- > x86-linux.so) > ==15478== by 0x8096176: PyObject_Malloc (in /usr/bin/python2.6) > ==15478== by 0x805E7CA: ??? (in /usr/bin/python2.6) > ==15478== by 0x80FAE9E: PyParser_ASTFromFile (in /usr/bin/python2.6) > ==15478== by 0x80F0321: ??? (in /usr/bin/python2.6) > ==15478== by 0x80F1520: ??? (in /usr/bin/python2.6) > ==15478== by 0x80F179F: ??? (in /usr/bin/python2.6) > ==15478== by 0x80F1D75: ??? (in /usr/bin/python2.6) > ==15478== by 0x8269003: ??? (in /usr/bin/python2.6) > ==15478== > ==15478== 1,001,096 bytes in 1 blocks are possibly lost in loss record 2,586 > of 2,588 > ==15478== at 0x4024604: operator new[](unsigned int) (in > /usr/lib/valgrind/vgpreload_memcheck-x86-linux.so) > ==15478== by 0x7FED313: RendererAgg::RendererAgg(unsigned int, unsigned > int, double, int) (in > /usr/lib/pyshared/python2.6/matplotlib/backends/_backend_agg.so) > ==15478== by 0x3801573F: ??? 
(in /usr/lib/valgrind/memcheck-x86-linux) > ==15478== > ==15478== 1,048,576 bytes in 4 blocks are still reachable in loss record 2,587 > of 2,588 > ==15478== at 0x4025018: malloc (in /usr/lib/valgrind/vgpreload_memcheck- > x86-linux.so) > ==15478== by 0x8096176: PyObject_Malloc (in /usr/bin/python2.6) > ==15478== by 0x8157AA4: PyNode_AddChild (in /usr/bin/python2.6) > ==15478== by 0x8157EBE: PyParser_AddToken (in /usr/bin/python2.6) > ==15478== by 0x805E81E: ??? (in /usr/bin/python2.6) > ==15478== by 0x80FAE9E: PyParser_ASTFromFile (in /usr/bin/python2.6) > ==15478== by 0x80F0321: ??? (in /usr/bin/python2.6) > ==15478== by 0x80F1520: ??? (in /usr/bin/python2.6) > ==15478== by 0x80F179F: ??? (in /usr/bin/python2.6) > ==15478== by 0x80F1D75: ??? (in /usr/bin/python2.6) > ==15478== by 0x8269003: ??? (in /usr/bin/python2.6) > ==15478== > ==15478== 1,572,864 bytes in 1 blocks are still reachable in loss record 2,588 > of 2,588 > ==15478== at 0x4025018: malloc (in /usr/lib/valgrind/vgpreload_memcheck- > x86-linux.so) > ==15478== by 0x808E8E3: ??? (in /usr/bin/python2.6) > ==15478== by 0x809E61A: PyString_InternInPlace (in /usr/bin/python2.6) > ==15478== by 0x80921C2: PyDict_SetItemString (in /usr/bin/python2.6) > ==15478== by 0xA2A8ACD: ??? (in /usr/lib/python2.6/dist-packages/sip.so) > ==15478== by 0xA2A8E9E: ??? (in /usr/lib/python2.6/dist-packages/sip.so) > ==15478== by 0xA2A8FF4: ??? (in /usr/lib/python2.6/dist-packages/sip.so) > ==15478== by 0x80D7035: PyEval_EvalFrameEx (in /usr/bin/python2.6) > ==15478== by 0x80DBB26: PyEval_EvalCodeEx (in /usr/bin/python2.6) > ==15478== by 0x80DBC36: PyEval_EvalCode (in /usr/bin/python2.6) > ==15478== by 0x80F0128: PyImport_ExecCodeModuleEx (in /usr/bin/python2.6) > ==15478== by 0x80F047B: ??? (in /usr/bin/python2.6) > ==15478== > ==15478== LEAK SUMMARY: > ==15478== definitely lost: 700 bytes in 12 blocks > ==15478== indirectly lost: 766 bytes in 34 blocks > ==15478== possibly lost: 1,368,637 bytes in 644 blocks > ==15478== still reachable: 12,846,296 bytes in 12,562 blocks > ==15478== suppressed: 0 bytes in 0 blocks > ==15478== > ==15478== For counts of detected and suppressed errors, rerun with: -v > ==15478== Use --track-origins=yes to see where uninitialised values come from > ==15478== ERROR SUMMARY: 13064 errors from 570 contexts (suppressed: 686 from > 12) > _______________________________________________ > AstroPy mailing list > AstroPy at scipy.org > http://mail.scipy.org/mailman/listinfo/astropy From mdroe at stsci.edu Wed Sep 7 17:12:49 2011 From: mdroe at stsci.edu (Michael Droettboom) Date: Wed, 7 Sep 2011 17:12:49 -0400 Subject: [AstroPy] Curious bug in Pywcs? In-Reply-To: <4E677DFF.7050904@stsci.edu> References: <201109061733.06135.nicolas-gaudin@laposte.net> <4E66424E.1060403@stsci.edu> <4E66B62F.9050205@stsci.edu> <201109071019.42138.nicolas-gaudin@laposte.net> <4E677DFF.7050904@stsci.edu> Message-ID: <4E67DE51.3060606@stsci.edu> Curious. Thanks for sending me the full valgrind log. There is some curiousness -- it's trying to refree and already free'd Python object. But it's not clear to me why. And unfortunately, I'm still not able to reproduce here, so I'm rather stumped. I have not tried python-2.6, as you're using here. But I believe earlier you said you also experience this on python-2.7, so I don't want to go through the trouble of setting up a full Python 2.6 environment if that turns out to not be the missing variable. 
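A minimal way to isolate the backend as the variable (a sketch only, not taken from the thread; it assumes the same DDO154 file as in the original report and the pyfits/pywcs versions discussed above) is to force a backend explicitly before pyplot is imported and to compare the transform before and after the figure is created:

import numpy as np
import pyfits
import pywcs

import matplotlib
matplotlib.use('Qt4Agg')  # must be set before pyplot is imported; try 'TkAgg' or 'Agg' to compare
from matplotlib import pyplot

head = pyfits.open('DDO154_NA_MOM0_THINGS.FITS')[0].header

# Transform the same pixel before and after the figure is created.
before = pywcs.WCS(head).all_pix2sky([[0., 0., 0., 0.]], 0)
pyplot.figure()
after = pywcs.WCS(head).all_pix2sky([[0., 0., 0., 0.]], 0)

print before
print after
print "consistent:", np.allclose(before, after)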
I will think on this some more -- in the meantime I'm glad you've found a workaround, but it would also be nice to get to the bottom of the cause. Mike On 09/07/2011 10:21 AM, Michael Droettboom wrote: > It's a helpful data point that some backends work while others don't, > but frustratingly, I am still unable to reproduce this myself using the > Qt4Agg or GtkAgg backends. > > Can you send the full valgrind log to me (off-list)? There's nothing > particularly special about the last entries -- it's more than likely the > memory corruption occurs earlier than that. > > Mike > > On 09/07/2011 04:19 AM, Nicolas Gaudin wrote: >> Thank you for your tests. >> >> I was at home, with packages from debian testing (python 2.6 and >> matplotlib-1.0.1-3) and I reproduce the bug. I use Qt4Agg. Indeed, GTKAgg >> reproduces the bug but Agg and TkAgg don't. So, I will switch to another >> backend. >> >> The output of Valgrind is verbose, I will print only the last lines. I've used >> the option --leak-check=full. >> >> For a working backend (TkAgg): >> >> [...] >> ==15463== 786,432 bytes in 3 blocks are still reachable in loss record 2,167 >> of 2,169 >> ==15463== at 0x4025018: malloc (in /usr/lib/valgrind/vgpreload_memcheck- >> x86-linux.so) >> ==15463== by 0x8096176: PyObject_Malloc (in /usr/bin/python2.6) >> ==15463== by 0x809652F: PyObject_Realloc (in /usr/bin/python2.6) >> ==15463== by 0x8157AA4: PyNode_AddChild (in /usr/bin/python2.6) >> ==15463== by 0x8157EBE: PyParser_AddToken (in /usr/bin/python2.6) >> ==15463== by 0x805E81E: ??? (in /usr/bin/python2.6) >> ==15463== by 0x80FAE9E: PyParser_ASTFromFile (in /usr/bin/python2.6) >> ==15463== by 0x80F0321: ??? (in /usr/bin/python2.6) >> ==15463== by 0x80F1520: ??? (in /usr/bin/python2.6) >> ==15463== by 0x80F179F: ??? (in /usr/bin/python2.6) >> ==15463== by 0x80F1D75: ??? (in /usr/bin/python2.6) >> ==15463== by 0x8269003: ??? (in /usr/bin/python2.6) >> ==15463== >> ==15463== 786,432 bytes in 3 blocks are still reachable in loss record 2,168 >> of 2,169 >> ==15463== at 0x4025018: malloc (in /usr/lib/valgrind/vgpreload_memcheck- >> x86-linux.so) >> ==15463== by 0x8096176: PyObject_Malloc (in /usr/bin/python2.6) >> ==15463== by 0x805E7CA: ??? (in /usr/bin/python2.6) >> ==15463== by 0x80FAE9E: PyParser_ASTFromFile (in /usr/bin/python2.6) >> ==15463== by 0x80F0321: ??? (in /usr/bin/python2.6) >> ==15463== by 0x80F1520: ??? (in /usr/bin/python2.6) >> ==15463== by 0x80F179F: ??? (in /usr/bin/python2.6) >> ==15463== by 0x80F1D75: ??? (in /usr/bin/python2.6) >> ==15463== by 0x8269003: ??? (in /usr/bin/python2.6) >> ==15463== >> ==15463== 1,048,576 bytes in 4 blocks are still reachable in loss record 2,169 >> of 2,169 >> ==15463== at 0x4025018: malloc (in /usr/lib/valgrind/vgpreload_memcheck- >> x86-linux.so) >> ==15463== by 0x8096176: PyObject_Malloc (in /usr/bin/python2.6) >> ==15463== by 0x8157AA4: PyNode_AddChild (in /usr/bin/python2.6) >> ==15463== by 0x8157EBE: PyParser_AddToken (in /usr/bin/python2.6) >> ==15463== by 0x805E81E: ??? (in /usr/bin/python2.6) >> ==15463== by 0x80FAE9E: PyParser_ASTFromFile (in /usr/bin/python2.6) >> ==15463== by 0x80F0321: ??? (in /usr/bin/python2.6) >> ==15463== by 0x80F1520: ??? (in /usr/bin/python2.6) >> ==15463== by 0x80F179F: ??? (in /usr/bin/python2.6) >> ==15463== by 0x80F1D75: ??? (in /usr/bin/python2.6) >> ==15463== by 0x8269003: ??? 
(in /usr/bin/python2.6) >> ==15463== >> ==15463== LEAK SUMMARY: >> ==15463== definitely lost: 772 bytes in 10 blocks >> ==15463== indirectly lost: 860 bytes in 46 blocks >> ==15463== possibly lost: 1,524,749 bytes in 676 blocks >> ==15463== still reachable: 11,457,411 bytes in 9,550 blocks >> ==15463== suppressed: 0 bytes in 0 blocks >> ==15463== >> ==15463== For counts of detected and suppressed errors, rerun with: -v >> ==15463== Use --track-origins=yes to see where uninitialised values come from >> ==15463== ERROR SUMMARY: 13488 errors from 605 contexts (suppressed: 266 from >> 12) >> >> And with Qt4Agg: >> >> [...] >> ==15478== 786,432 bytes in 3 blocks are still reachable in loss record 2,585 >> of 2,588 >> ==15478== at 0x4025018: malloc (in /usr/lib/valgrind/vgpreload_memcheck- >> x86-linux.so) >> ==15478== by 0x8096176: PyObject_Malloc (in /usr/bin/python2.6) >> ==15478== by 0x805E7CA: ??? (in /usr/bin/python2.6) >> ==15478== by 0x80FAE9E: PyParser_ASTFromFile (in /usr/bin/python2.6) >> ==15478== by 0x80F0321: ??? (in /usr/bin/python2.6) >> ==15478== by 0x80F1520: ??? (in /usr/bin/python2.6) >> ==15478== by 0x80F179F: ??? (in /usr/bin/python2.6) >> ==15478== by 0x80F1D75: ??? (in /usr/bin/python2.6) >> ==15478== by 0x8269003: ??? (in /usr/bin/python2.6) >> ==15478== >> ==15478== 1,001,096 bytes in 1 blocks are possibly lost in loss record 2,586 >> of 2,588 >> ==15478== at 0x4024604: operator new[](unsigned int) (in >> /usr/lib/valgrind/vgpreload_memcheck-x86-linux.so) >> ==15478== by 0x7FED313: RendererAgg::RendererAgg(unsigned int, unsigned >> int, double, int) (in >> /usr/lib/pyshared/python2.6/matplotlib/backends/_backend_agg.so) >> ==15478== by 0x3801573F: ??? (in /usr/lib/valgrind/memcheck-x86-linux) >> ==15478== >> ==15478== 1,048,576 bytes in 4 blocks are still reachable in loss record 2,587 >> of 2,588 >> ==15478== at 0x4025018: malloc (in /usr/lib/valgrind/vgpreload_memcheck- >> x86-linux.so) >> ==15478== by 0x8096176: PyObject_Malloc (in /usr/bin/python2.6) >> ==15478== by 0x8157AA4: PyNode_AddChild (in /usr/bin/python2.6) >> ==15478== by 0x8157EBE: PyParser_AddToken (in /usr/bin/python2.6) >> ==15478== by 0x805E81E: ??? (in /usr/bin/python2.6) >> ==15478== by 0x80FAE9E: PyParser_ASTFromFile (in /usr/bin/python2.6) >> ==15478== by 0x80F0321: ??? (in /usr/bin/python2.6) >> ==15478== by 0x80F1520: ??? (in /usr/bin/python2.6) >> ==15478== by 0x80F179F: ??? (in /usr/bin/python2.6) >> ==15478== by 0x80F1D75: ??? (in /usr/bin/python2.6) >> ==15478== by 0x8269003: ??? (in /usr/bin/python2.6) >> ==15478== >> ==15478== 1,572,864 bytes in 1 blocks are still reachable in loss record 2,588 >> of 2,588 >> ==15478== at 0x4025018: malloc (in /usr/lib/valgrind/vgpreload_memcheck- >> x86-linux.so) >> ==15478== by 0x808E8E3: ??? (in /usr/bin/python2.6) >> ==15478== by 0x809E61A: PyString_InternInPlace (in /usr/bin/python2.6) >> ==15478== by 0x80921C2: PyDict_SetItemString (in /usr/bin/python2.6) >> ==15478== by 0xA2A8ACD: ??? (in /usr/lib/python2.6/dist-packages/sip.so) >> ==15478== by 0xA2A8E9E: ??? (in /usr/lib/python2.6/dist-packages/sip.so) >> ==15478== by 0xA2A8FF4: ??? (in /usr/lib/python2.6/dist-packages/sip.so) >> ==15478== by 0x80D7035: PyEval_EvalFrameEx (in /usr/bin/python2.6) >> ==15478== by 0x80DBB26: PyEval_EvalCodeEx (in /usr/bin/python2.6) >> ==15478== by 0x80DBC36: PyEval_EvalCode (in /usr/bin/python2.6) >> ==15478== by 0x80F0128: PyImport_ExecCodeModuleEx (in /usr/bin/python2.6) >> ==15478== by 0x80F047B: ??? 
(in /usr/bin/python2.6) >> ==15478== >> ==15478== LEAK SUMMARY: >> ==15478== definitely lost: 700 bytes in 12 blocks >> ==15478== indirectly lost: 766 bytes in 34 blocks >> ==15478== possibly lost: 1,368,637 bytes in 644 blocks >> ==15478== still reachable: 12,846,296 bytes in 12,562 blocks >> ==15478== suppressed: 0 bytes in 0 blocks >> ==15478== >> ==15478== For counts of detected and suppressed errors, rerun with: -v >> ==15478== Use --track-origins=yes to see where uninitialised values come from >> ==15478== ERROR SUMMARY: 13064 errors from 570 contexts (suppressed: 686 from >> 12) >> _______________________________________________ >> AstroPy mailing list >> AstroPy at scipy.org >> http://mail.scipy.org/mailman/listinfo/astropy > _______________________________________________ > AstroPy mailing list > AstroPy at scipy.org > http://mail.scipy.org/mailman/listinfo/astropy From nicolas-gaudin at laposte.net Thu Sep 8 04:58:44 2011 From: nicolas-gaudin at laposte.net (Nicolas Gaudin) Date: Thu, 8 Sep 2011 10:58:44 +0200 Subject: [AstroPy] Curious bug in Pywcs? In-Reply-To: <4E67DE51.3060606@stsci.edu> References: <201109061733.06135.nicolas-gaudin@laposte.net> <4E677DFF.7050904@stsci.edu> <4E67DE51.3060606@stsci.edu> Message-ID: <201109081058.44617.nicolas-gaudin@laposte.net> I've set up a virtual machine from scratch to see why and where my specific installation has the bug since you cannot reproduce it. I use Virtualbox, install debian-6.0.2.1-i386-netinst.iso (options set to default), the numpy, scipy, matplotlib (+dvipng for my matplotlibrc), & pyfits packages from repository, and build & install current release PyWCS from website trac6.assembla.com/astrolib. No bug. All work fine. :/ I will ?build? my full working environment until the bug comes back (or not?). This will take time. I will post my results later. From eebanado at uc.cl Tue Sep 13 09:38:22 2011 From: eebanado at uc.cl (=?ISO-8859-1?Q?Eduardo_Ba=F1ados_Torres?=) Date: Tue, 13 Sep 2011 15:38:22 +0200 Subject: [AstroPy] Problem creating all-sky projection with matplotlib Message-ID: Hi all, I posted this question in Stackoverflow ( http://stackoverflow.com/questions/7355497/curious-bad-behavior-creating-all-sky-projections-with-matplotlib) but I haven't get any answer so far, so I hope some of you can help me :-) In short, I am plotting a density all-sky plot using the molloweide projection. I create objects with coordinates ranging from 0 to 360 deg in RA and from -45 to 90 deg in DEC, but the output I get is the following: image1.png -> http://i56.tinypic.com/24mu96s.png A plot which is OK in RA (0-360) but in DEC ranges only between -35 to 90, so I am missing 10 degrees in the south. But I would expect this image: image2.png -> http://oi53.tinypic.com/2yl1nch.jpg A plot ranging between 0 to360 and -45 to 90 as it was defined I attach the self-contained code to produce these images, I hope someone can tell me if I am doing something wrong that I can't notice now or misunderstanding something in the code or if there is a curious bug in matplotlib?? ############the self-contained example################ import numpy as np import matplotlib.pyplot as plt import matplotlib.backends.backend_agg from math import pi #array between 0 and 360 deg RA = np.random.random(10000)*360 #array between -45 and 90 degrees. By construction! 
DEC= np.random.random(10000)*135-45 fig = plt.Figure((10, 5)) ax = fig.add_subplot(111,projection='mollweide') ax.grid(True) ax.set_xlabel('RA') ax.set_ylabel('DEC') ax.set_xticklabels(np.arange(30,331,30)) hist,xedges,yedges = np.histogram2d(DEC,RA,bins=[90,180],range=[[-90,90],[0,360]]) #TO RECOVER THE EXPECTED BEHAVIOUR (image2.png), I HAVE TO CHANGE -90 FOR -80 IN THE PREVIOUS LINE: #hist,xedges,yedges = np.histogram2d(DEC,RA,bins=[90,180],range=[[-80,90],[0,360]]) #I DO NOT WHY! extent = (-pi,pi,-pi/2.,pi/2.) image = ax.imshow(hist,extent=extent,clip_on=False,aspect=0.5,origin='lower') cb = fig.colorbar(image, orientation='horizontal') canvas = matplotlib.backends.backend_agg.FigureCanvasAgg(fig) fig.canvas.print_figure("image1.png") ###################################################### Thanks, -- Eduardo Ba?ados -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrdavis at stsci.edu Tue Sep 13 14:18:50 2011 From: mrdavis at stsci.edu (Matt Davis) Date: Tue, 13 Sep 2011 14:18:50 -0400 Subject: [AstroPy] Problem creating all-sky projection with matplotlib In-Reply-To: References: Message-ID: Hi Eduardo, You are seeing this behavior because imshow works just by filling in rows of the image with rows of your array. In your case, your array is zeros in the bottom quarter and non-zero in the top three-quarters, so that's what you see in the resulting plot. Imshow *does not* account for the projection. I think you would prefer to use pcolor so that your data is mapped to the projection. See: http://matplotlib.sourceforge.net/api/axes_api.html#matplotlib.axes.Axes.pcolor Best, Matt Davis On Sep 13, 2011, at 9:38 AM, Eduardo Ba?ados Torres wrote: > Hi all, > > I posted this question in Stackoverflow (http://stackoverflow.com/questions/7355497/curious-bad-behavior-creating-all-sky-projections-with-matplotlib) but I haven't get any answer so far, so I hope some of you can help me :-) > > In short, I am plotting a density all-sky plot using the molloweide projection. I create objects with coordinates ranging from 0 to 360 deg in RA and from -45 to 90 deg in DEC, but the output I get is the following: > > image1.png -> http://i56.tinypic.com/24mu96s.png > > A plot which is OK in RA (0-360) but in DEC ranges only between -35 to 90, so I am missing 10 degrees in the south. > > But I would expect this image: > > image2.png -> http://oi53.tinypic.com/2yl1nch.jpg > A plot ranging between 0 to360 and -45 to 90 as it was defined > > I attach the self-contained code to produce these images, I hope someone can tell me if I am doing something wrong that I can't notice now or misunderstanding something in the code or if there is a curious bug in matplotlib?? > > > ############the self-contained example################ > import numpy as np > > import matplotlib.pyplot as plt > import matplotlib.backends.backend_agg > > from math import pi > > #array between 0 and 360 deg > RA = np.random.random(10000)*360 > > #array between -45 and 90 degrees. By construction! 
> DEC= np.random.random(10000)*135-45
>
> fig = plt.Figure((10, 5))
>
> ax = fig.add_subplot(111,projection='mollweide')
>
> ax.grid(True)
> ax.set_xlabel('RA')
> ax.set_ylabel('DEC')
> ax.set_xticklabels(np.arange(30,331,30))
>
> hist,xedges,yedges = np.histogram2d(DEC,RA,bins=[90,180],range=[[-90,90],[0,360]])
> #TO RECOVER THE EXPECTED BEHAVIOUR (image2.png), I HAVE TO CHANGE -90 FOR -80 IN THE PREVIOUS LINE:
> #hist,xedges,yedges = np.histogram2d(DEC,RA,bins=[90,180],range=[[-80,90],[0,360]])
> #I DO NOT KNOW WHY!
>
> extent = (-pi,pi,-pi/2.,pi/2.)
>
> image = ax.imshow(hist,extent=extent,clip_on=False,aspect=0.5,origin='lower')
>
> cb = fig.colorbar(image, orientation='horizontal')
>
> canvas = matplotlib.backends.backend_agg.FigureCanvasAgg(fig)
>
> fig.canvas.print_figure("image1.png")
>
> ######################################################
>
> Thanks,
>
> --
> Eduardo Bañados
>
> _______________________________________________
> AstroPy mailing list
> AstroPy at scipy.org
> http://mail.scipy.org/mailman/listinfo/astropy

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From eebanado at uc.cl Tue Sep 13 16:30:48 2011
From: eebanado at uc.cl (=?ISO-8859-1?Q?Eduardo_Ba=F1ados_Torres?=)
Date: Tue, 13 Sep 2011 22:30:48 +0200
Subject: [AstroPy] Problem creating all-sky projection with matplotlib
In-Reply-To:
References:
Message-ID:

Hi Matt,

Thanks for your response!! Your explanation of why I was getting this behavior is very clear. So, I finally solved the problem (I think) thanks to your answer and the one posted by Joe Kington on Stackoverflow.

This time I used pcolormesh instead of imshow and I got this beautiful (at least for me) image:

http://i53.tinypic.com/2r6l5k6.png

The only "problem" is that now I cannot show the "grid" (ax.grid(True) is not working), so if someone knows how to add the grid, please let me know ;) (although this is just a detail)

If this is useful for someone else, I attach the "corrected version" of my code (which produces the image above).

Thanks again!

=========================corrected version========================

import numpy as np
import matplotlib.pyplot as plt
import matplotlib.backends.backend_agg

#array between 0 and 360 deg
#CAVEAT: it seems that an array from -180 to 180 is needed, so this is just a
#shift in the coordinates
RA = np.random.random(10000)*360-180
#array between -45 and 90 degrees
DEC= np.random.random(10000)*135-45

fig = plt.Figure((10, 5))
ax = fig.add_subplot(111,projection='mollweide')

ax.set_xlabel('RA')
ax.set_ylabel('DEC')

ax.set_xticklabels(np.arange(30,331,30))

#The ax.grid is not working though =/
ax.grid(color='r', linestyle='-', linewidth=2)

hist,xedges,yedges = np.histogram2d(DEC,RA,bins=[60,40],range=[[-90,90],[-180,180]])

X,Y = np.meshgrid(np.radians(yedges),np.radians(xedges))

image = ax.pcolormesh(X,Y,hist)

cb = fig.colorbar(image, orientation='horizontal')
canvas = matplotlib.backends.backend_agg.FigureCanvasAgg(fig)
fig.canvas.print_figure("image4.png")

2011/9/13 Matt Davis

> Hi Eduardo,
>
> You are seeing this behavior because imshow works just by filling in rows
> of the image with rows of your array. In your case, your array is zeros in
> the bottom quarter and non-zero in the top three-quarters, so that's what
> you see in the resulting plot. Imshow *does not* account for the projection.
>
> I think you would prefer to use pcolor so that your data is mapped to the
> projection.
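A compact, self-contained variant of that pcolormesh approach (a sketch only; the bin counts, colours, and file name are illustrative, and the grid is simply drawn after the mesh so it stays visible, as discussed further below in the thread):

import numpy as np
import matplotlib.pyplot as plt

# Illustrative random coordinates, already converted to radians for the mollweide axes.
RA = np.radians(np.random.random(10000)*360 - 180)
DEC = np.radians(np.random.random(10000)*135 - 45)

fig = plt.figure(figsize=(10, 5))
ax = fig.add_subplot(111, projection='mollweide')

hist, xedges, yedges = np.histogram2d(DEC, RA, bins=[60, 40],
                                      range=[[-np.pi/2, np.pi/2], [-np.pi, np.pi]])
X, Y = np.meshgrid(yedges, xedges)
image = ax.pcolormesh(X, Y, hist)

# Drawing the grid after the mesh keeps it on top; a large zorder would do the same.
ax.grid(color='r', linestyle='-', linewidth=1)

fig.colorbar(image, orientation='horizontal')
fig.savefig("image_pcolormesh.png")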
See: > http://matplotlib.sourceforge.net/api/axes_api.html#matplotlib.axes.Axes.pcolor > > Best, > > Matt Davis > > On Sep 13, 2011, at 9:38 AM, Eduardo Ba?ados Torres wrote: > > Hi all, > > I posted this question in Stackoverflow ( > http://stackoverflow.com/questions/7355497/curious-bad-behavior-creating-all-sky-projections-with-matplotlib) > but I haven't get any answer so far, so I hope some of you can help me :-) > > In short, I am plotting a density all-sky plot using the molloweide > projection. I create objects with coordinates ranging from 0 to 360 deg in > RA and from -45 to 90 deg in DEC, but the output I get is the following: > > image1.png -> http://i56.tinypic.com/24mu96s.png > > A plot which is OK in RA (0-360) but in DEC ranges only between -35 to 90, > so I am missing 10 degrees in the south. > > But I would expect this image: > > image2.png -> http://oi53.tinypic.com/2yl1nch.jpg > A plot ranging between 0 to360 and -45 to 90 as it was defined > > I attach the self-contained code to produce these images, I hope someone > can tell me if I am doing something wrong that I can't notice now or > misunderstanding something in the code or if there is a curious bug in > matplotlib?? > > > ############the self-contained example################ > > import numpy as np > import matplotlib.pyplot as plt > import matplotlib.backends.backend_agg > from math import pi > > #array between 0 and 360 deg > RA = np.random.random(10000)*360 > #array between -45 and 90 degrees. By construction! > DEC= np.random.random(10000)*135-45 > > fig = plt.Figure((10, 5)) > > ax = fig.add_subplot(111,projection='mollweide') > > ax.grid(True) > ax.set_xlabel('RA') > > ax.set_ylabel('DEC') > > ax.set_xticklabels(np.arange(30,331,30)) > > hist,xedges,yedges = np.histogram2d(DEC,RA,bins=[90,180],range=[[-90,90],[0,360]]) > #TO RECOVER THE EXPECTED BEHAVIOUR (image2.png), I HAVE TO CHANGE -90 FOR -80 IN THE PREVIOUS LINE: > #hist,xedges,yedges = np.histogram2d(DEC,RA,bins=[90,180],range=[[-80,90],[0,360]]) > #I DO NOT WHY! > > extent = (-pi,pi,-pi/2.,pi/2.) > > image = ax.imshow(hist,extent=extent,clip_on=False,aspect=0.5,origin='lower') > > cb = fig.colorbar(image, orientation='horizontal') > > canvas = matplotlib.backends.backend_agg.FigureCanvasAgg(fig) > > fig.canvas.print_figure("image1.png") > > ###################################################### > > Thanks, > > > > -- > Eduardo Ba?ados > > _______________________________________________ > AstroPy mailing list > AstroPy at scipy.org > http://mail.scipy.org/mailman/listinfo/astropy > > > -- Eduardo Ba?ados -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrdavis at stsci.edu Tue Sep 13 16:40:23 2011 From: mrdavis at stsci.edu (Matt Davis) Date: Tue, 13 Sep 2011 16:40:23 -0400 Subject: [AstroPy] Problem creating all-sky projection with matplotlib In-Reply-To: References: Message-ID: <32B58AF5-BC22-416C-8B47-0E958F1854B4@stsci.edu> Hi Eduardo, About your grid issue: just move the grid statement further down in the code. I think matplotlib often just plots things in the order given in the code, so putting the grid at the end makes it come out on top. Another option is to use the zorder keyword. Objects created with the largest zorder keyword are plotted last. I tried your code with the ax.grid call on line 27, just before making the color bar, and I get the grids. Best, Matt Davis On Sep 13, 2011, at 4:30 PM, Eduardo Ba?ados Torres wrote: > Hi Matt, > > Thanks for your response!! 
your explanation of why I was getting this behavior is very clear. > So, I finally solved the problem (I think) thanks to your answer and the one posted by Joe Kington in Stackoverflow. > > This time I used pcolormesh instead of imshow and I got this beautiful (at least for me) image: > > http://i53.tinypic.com/2r6l5k6.png > > The only "problem" is that now I cannot show the "grid" (ax.grid(True) is not working), so if someone know how to add the grid, please let me know ;) (although this is just a detail) > > If this is useful for someone else, I attach the "corrected version" of my code (which produces the image above). > > Thanks again! > > =========================corrected version======================== > > import numpy as np > import matplotlib.pyplot as plt > import matplotlib.backends.backend_agg > > #array between 0 and 360 deg > #CAVEAT: it seems that is needed an array from -180 to 180, so is just a > #shift in the coordinates > RA = np.random.random(10000)*360-180 > #array between -45 and 90 degrees > DEC= np.random.random(10000)*135-45 > > fig = plt.Figure((10, 5)) > ax = fig.add_subplot(111,projection='mollweide') > > ax.set_xlabel('RA') > ax.set_ylabel('DEC') > > ax.set_xticklabels(np.arange(30,331,30)) > > #The ax.grid is not working though =/ > ax.grid(color='r', linestyle='-', linewidth=2) > > hist,xedges,yedges = np.histogram2d(DEC,RA,bins=[60,40],range=[[-90,90],[-180,180]]) > > X,Y = np.meshgrid(np.radians(yedges),np.radians(xedges)) > > image = ax.pcolormesh(X,Y,hist) > > cb = fig.colorbar(image, orientation='horizontal') > canvas = matplotlib.backends.backend_agg.FigureCanvasAgg(fig) > fig.canvas.print_figure("image4.png") > > > > 2011/9/13 Matt Davis > Hi Eduardo, > > You are seeing this behavior because imshow works just by filling in rows of the image with rows of your array. In your case, your array is zeros in the bottom quarter and non-zero in the top three-quarters, so that's what you see in the resulting plot. Imshow *does not* account for the projection. > > I think you would prefer to use pcolor so that your data is mapped to the projection. See: http://matplotlib.sourceforge.net/api/axes_api.html#matplotlib.axes.Axes.pcolor > > Best, > > Matt Davis > > On Sep 13, 2011, at 9:38 AM, Eduardo Ba?ados Torres wrote: > >> Hi all, >> >> I posted this question in Stackoverflow (http://stackoverflow.com/questions/7355497/curious-bad-behavior-creating-all-sky-projections-with-matplotlib) but I haven't get any answer so far, so I hope some of you can help me :-) >> >> In short, I am plotting a density all-sky plot using the molloweide projection. I create objects with coordinates ranging from 0 to 360 deg in RA and from -45 to 90 deg in DEC, but the output I get is the following: >> >> image1.png -> http://i56.tinypic.com/24mu96s.png >> >> A plot which is OK in RA (0-360) but in DEC ranges only between -35 to 90, so I am missing 10 degrees in the south. >> >> But I would expect this image: >> >> image2.png -> http://oi53.tinypic.com/2yl1nch.jpg >> A plot ranging between 0 to360 and -45 to 90 as it was defined >> >> I attach the self-contained code to produce these images, I hope someone can tell me if I am doing something wrong that I can't notice now or misunderstanding something in the code or if there is a curious bug in matplotlib?? 
>> >> >> ############the self-contained example################ >> import numpy as np >> >> import matplotlib.pyplot as plt >> import matplotlib.backends.backend_agg >> >> >> from math import pi >> >> #array between 0 and 360 deg >> RA = np.random.random(10000)*360 >> >> >> #array between -45 and 90 degrees. By construction! >> DEC= np.random.random(10000)*135-45 >> >> >> >> fig = plt.Figure((10, 5)) >> >> ax = fig.add_subplot(111,projection='mollweide') >> >> ax.grid(True) >> ax.set_xlabel('RA') >> >> ax.set_ylabel('DEC') >> >> ax.set_xticklabels(np.arange(30,331,30)) >> >> >> hist,xedges,yedges = np.histogram2d(DEC,RA,bins=[90,180],range=[[-90,90],[0,360]]) >> >> >> #TO RECOVER THE EXPECTED BEHAVIOUR (image2.png), I HAVE TO CHANGE -90 FOR -80 IN THE PREVIOUS LINE: >> #hist,xedges,yedges = np.histogram2d(DEC,RA,bins=[90,180],range=[[-80,90],[0,360]]) >> >> >> #I DO NOT WHY! >> >> extent = (-pi,pi,-pi/2.,pi/2.) >> >> >> image = ax.imshow(hist,extent=extent,clip_on=False,aspect=0.5,origin='lower') >> >> >> >> cb = fig.colorbar(image, orientation='horizontal') >> >> >> canvas = matplotlib.backends.backend_agg.FigureCanvasAgg(fig) >> >> >> >> fig.canvas.print_figure("image1.png") >> >> ###################################################### >> >> Thanks, >> >> >> >> -- >> Eduardo Ba?ados >> >> _______________________________________________ >> AstroPy mailing list >> AstroPy at scipy.org >> http://mail.scipy.org/mailman/listinfo/astropy > > > > > -- > Eduardo Ba?ados > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eebanado at uc.cl Tue Sep 13 16:59:57 2011 From: eebanado at uc.cl (=?ISO-8859-1?Q?Eduardo_Ba=F1ados_Torres?=) Date: Tue, 13 Sep 2011 22:59:57 +0200 Subject: [AstroPy] Problem creating all-sky projection with matplotlib In-Reply-To: <32B58AF5-BC22-416C-8B47-0E958F1854B4@stsci.edu> References: <32B58AF5-BC22-416C-8B47-0E958F1854B4@stsci.edu> Message-ID: Thanks you very much again Matt, All the best, 2011/9/13 Matt Davis > Hi Eduardo, > > About your grid issue: just move the grid statement further down in the > code. I think matplotlib often just plots things in the order given in the > code, so putting the grid at the end makes it come out on top. Another > option is to use the zorder keyword. Objects created with the largest zorder > keyword are plotted last. > > I tried your code with the ax.grid call on line 27, just before making the > color bar, and I get the grids. > > Best, > > Matt Davis > > On Sep 13, 2011, at 4:30 PM, Eduardo Ba?ados Torres wrote: > > Hi Matt, > > Thanks for your response!! your explanation of why I was getting this > behavior is very clear. > So, I finally solved the problem (I think) thanks to your answer and the > one posted by Joe Kington in Stackoverflow. > > This time I used pcolormesh instead of imshow and I got this beautiful (at > least for me) image: > > http://i53.tinypic.com/2r6l5k6.png > > The only "problem" is that now I cannot show the "grid" (ax.grid(True) is > not working), so if someone know how to add the grid, please let me know ;) > (although this is just a detail) > > If this is useful for someone else, I attach the "corrected version" of my > code (which produces the image above). > > Thanks again! 
> > =========================corrected version======================== > > import numpy as np > import matplotlib.pyplot as plt > import matplotlib.backends.backend_agg > > #array between 0 and 360 deg > #CAVEAT: it seems that is needed an array from -180 to 180, so is just a > #shift in the coordinates > RA = np.random.random(10000)*360-180 > #array between -45 and 90 degrees > DEC= np.random.random(10000)*135-45 > > fig = plt.Figure((10, 5)) > ax = fig.add_subplot(111,projection='mollweide') > > ax.set_xlabel('RA') > ax.set_ylabel('DEC') > > ax.set_xticklabels(np.arange(30,331,30)) > > #The ax.grid is not working though =/ > ax.grid(color='r', linestyle='-', linewidth=2) > > hist,xedges,yedges = > np.histogram2d(DEC,RA,bins=[60,40],range=[[-90,90],[-180,180]]) > > X,Y = np.meshgrid(np.radians(yedges),np.radians(xedges)) > > image = ax.pcolormesh(X,Y,hist) > > cb = fig.colorbar(image, orientation='horizontal') > canvas = matplotlib.backends.backend_agg.FigureCanvasAgg(fig) > fig.canvas.print_figure("image4.png") > > > > 2011/9/13 Matt Davis > >> Hi Eduardo, >> >> You are seeing this behavior because imshow works just by filling in rows >> of the image with rows of your array. In your case, your array is zeros in >> the bottom quarter and non-zero in the top three-quarters, so that's what >> you see in the resulting plot. Imshow *does not* account for the projection. >> >> I think you would prefer to use pcolor so that your data is mapped to the >> projection. See: >> http://matplotlib.sourceforge.net/api/axes_api.html#matplotlib.axes.Axes.pcolor >> >> Best, >> >> Matt Davis >> >> On Sep 13, 2011, at 9:38 AM, Eduardo Ba?ados Torres wrote: >> >> Hi all, >> >> I posted this question in Stackoverflow ( >> http://stackoverflow.com/questions/7355497/curious-bad-behavior-creating-all-sky-projections-with-matplotlib) >> but I haven't get any answer so far, so I hope some of you can help me :-) >> >> In short, I am plotting a density all-sky plot using the molloweide >> projection. I create objects with coordinates ranging from 0 to 360 deg in >> RA and from -45 to 90 deg in DEC, but the output I get is the following: >> >> image1.png -> http://i56.tinypic.com/24mu96s.png >> >> A plot which is OK in RA (0-360) but in DEC ranges only between -35 to 90, >> so I am missing 10 degrees in the south. >> >> But I would expect this image: >> >> image2.png -> http://oi53.tinypic.com/2yl1nch.jpg >> A plot ranging between 0 to360 and -45 to 90 as it was defined >> >> I attach the self-contained code to produce these images, I hope someone >> can tell me if I am doing something wrong that I can't notice now or >> misunderstanding something in the code or if there is a curious bug in >> matplotlib?? >> >> >> ############the self-contained example################ >> >> import numpy as np >> import matplotlib.pyplot as plt >> import matplotlib.backends.backend_agg >> >> from math import pi >> >> #array between 0 and 360 deg >> RA = np.random.random(10000)*360 >> >> #array between -45 and 90 degrees. By construction! 
>> DEC= np.random.random(10000)*135-45 >> >> >> fig = plt.Figure((10, 5)) >> >> ax = fig.add_subplot(111,projection='mollweide') >> >> ax.grid(True) >> ax.set_xlabel('RA') >> >> ax.set_ylabel('DEC') >> >> ax.set_xticklabels(np.arange(30,331,30)) >> >> >> hist,xedges,yedges = np.histogram2d(DEC,RA,bins=[90,180],range=[[-90,90],[0,360]]) >> >> #TO RECOVER THE EXPECTED BEHAVIOUR (image2.png), I HAVE TO CHANGE -90 FOR -80 IN THE PREVIOUS LINE: >> #hist,xedges,yedges = np.histogram2d(DEC,RA,bins=[90,180],range=[[-80,90],[0,360]]) >> >> #I DO NOT WHY! >> >> extent = (-pi,pi,-pi/2.,pi/2.) >> >> >> image = ax.imshow(hist,extent=extent,clip_on=False,aspect=0.5,origin='lower') >> >> >> cb = fig.colorbar(image, orientation='horizontal') >> >> >> canvas = matplotlib.backends.backend_agg.FigureCanvasAgg(fig) >> >> >> fig.canvas.print_figure("image1.png") >> >> ###################################################### >> >> Thanks, >> >> >> >> -- >> Eduardo Ba?ados >> >> _______________________________________________ >> AstroPy mailing list >> AstroPy at scipy.org >> http://mail.scipy.org/mailman/listinfo/astropy >> >> >> > > > -- > Eduardo Ba?ados > > > -- Eduardo -------------- next part -------------- An HTML attachment was scrubbed... URL: From zonca at deepspace.ucsb.edu Tue Sep 13 17:21:47 2011 From: zonca at deepspace.ucsb.edu (Andrea Zonca) Date: Tue, 13 Sep 2011 14:21:47 -0700 Subject: [AstroPy] Problem creating all-sky projection with matplotlib In-Reply-To: References: Message-ID: hi Eduardo, you might want also to check healpy: https://github.com/healpy/healpy which provide tools for dealing with healpix spere pixelization: http://healpix.jpl.nasa.gov/html/intro.htm I posted an example script on stackoverflow: http://stackoverflow.com/q/7408515/597609 cheers, andrea From eebanado at uc.cl Tue Sep 13 17:40:29 2011 From: eebanado at uc.cl (=?ISO-8859-1?Q?Eduardo_Ba=F1ados_Torres?=) Date: Tue, 13 Sep 2011 23:40:29 +0200 Subject: [AstroPy] Problem creating all-sky projection with matplotlib In-Reply-To: References: Message-ID: Hi Andrea, I will check it out ;) Thanks, Eduardo 2011/9/13 Andrea Zonca > hi Eduardo, > you might want also to check healpy: > https://github.com/healpy/healpy > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From oneaufs at gmail.com Fri Sep 16 11:02:43 2011 From: oneaufs at gmail.com (Prasanth) Date: Fri, 16 Sep 2011 20:32:43 +0530 Subject: [AstroPy] equivalent routine to IDL Astronomy Library lineid_plot In-Reply-To: <1314199146.26304.16.camel@shevek> References: <1314029910.24674.24.camel@shevek> <4E549D5A.5090801@gmail.com> <1314196443.26304.9.camel@shevek> <1314199146.26304.16.camel@shevek> Message-ID: Hello, My attempt at implementing the lineid_plot.pro algorithm is available at http://github.com/phn/lineid_plot . The automatic layout calculation is identical to that in the IDL procedure. But I have tried to make use of the annotate feature provided by Matplotlib for the rest of the code. I don't have access to IDL and hence couldn't properly explore the working of the IDL code. GDL can't run the code. After implementing the layout calculations, the rest of the code, surprisingly, turns out to be not difficult to write at all. This probably means that I am blind to some obvious problems! Feedback will be greatly appreciated. 
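The core idea -- placing each label with matplotlib's annotate and a plain connecting line above the spectrum -- can be sketched in a few lines (a toy example only; it does not use the lineid_plot API, and the wavelengths, labels, and flux values are made up):

import numpy as np
import matplotlib.pyplot as plt

wave = np.linspace(1150., 1300., 500)
flux = np.random.normal(1.0, 0.05, wave.size)          # flat toy "spectrum"
line_wave = [1215.67, 1260.42, 1277.25]
line_label = ['Ly-alpha', 'Si II', 'C I']

fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(wave, flux)

for w, lab in zip(line_wave, line_label):
    # Label above the spectrum, connected to the line position by a simple bar.
    ax.annotate(lab, xy=(w, 1.1), xytext=(w, 1.3), rotation=90,
                ha='center', va='bottom',
                arrowprops=dict(arrowstyle='-'))

ax.set_ylim(0.7, 1.6)
fig.savefig('toy_line_ids.png')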
Thanks, Prasanth On Wed, Aug 24, 2011 at 8:49 PM, Jonathan Slavin wrote: > If I had the time to do that adaptation, I'd certainly do it -- and may > yet -- but it does depend on a lot of things that are very IDL specific > such as the character size and plot region, etc. I think that there are > some good hints in the discussion on automatically creating enough room > for tick labels (http://matplotlib.sourceforge.net/faq/howto_faq.html), > but there's quite a bit more needed than that. > > Jon > > On Wed, 2011-08-24 at 11:48 -0300, Taro Sato wrote: > > I have my own custom routine to display line identifications at given > > redshift but it's not smart enough to avoid overlapping; it only > > alternates the offsets so that adjacent labels won't always overlap. > > What you have in your example plot is certainly doable with MPL... > > It's tricky to ensure that labels are readable most of the time but > > since you know how to approach the problem, why don't you create one > > and make it available publicly! If the desired algorithm needed is > > already coded in the IDL script it shouldn't be too painful. :D > -- > ______________________________________________________________ > Jonathan D. Slavin Harvard-Smithsonian CfA > jslavin at cfa.harvard.edu 60 Garden Street, MS 83 > phone: (617) 496-7981 Cambridge, MA 02138-1516 > cell: (781) 363-0035 USA > ______________________________________________________________ > > _______________________________________________ > AstroPy mailing list > AstroPy at scipy.org > http://mail.scipy.org/mailman/listinfo/astropy > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jslavin at cfa.harvard.edu Fri Sep 16 13:33:37 2011 From: jslavin at cfa.harvard.edu (Jonathan Slavin) Date: Fri, 16 Sep 2011 13:33:37 -0400 Subject: [AstroPy] equivalent routine to IDL Astronomy Library lineid_plot In-Reply-To: References: <1314029910.24674.24.camel@shevek> <4E549D5A.5090801@gmail.com> <1314196443.26304.9.camel@shevek> <1314199146.26304.16.camel@shevek> Message-ID: <1316194417.16090.15.camel@shevek> Hi Prasanth, Nice work! The images of plots look good. Unfortunately I'm running into a problem when running the example: /export/slavin/python/phn-lineid_plot-a88d5f0/lineid_plot.py in plot_line_ids(wave, flux, line_wave, line_label1, label1_size, extend, **kwargs) 182 # coordinates and extract the width. 183 for box in ax.texts: --> 184 b_ext = box.get_window_extent() 185 box_widths.append(b_ext.transformed(ax_inv_trans).width) 186 /usr/local/lib/python2.6/site-packages/matplotlib/text.pyc in get_window_extent(self, renderer, dpi) 736 self._renderer = renderer 737 if self._renderer is None: --> 738 raise RuntimeError('Cannot get window extent w/o renderer') 739 740 bbox, info = self._get_layout(self._renderer) RuntimeError: Cannot get window extent w/o renderer I haven't been able to get it to work from the command line -- i.e. the __name__ = "__main__", doing python lineid_plot -- though I have gotten it to work in an interactive session. I think an Axes instance is necessary, so you might want to add ax=None as an explicit keyword and add: if ax == None: ax = plt.gca() For my part, I'd prefer not to have arrow heads on the lines, so I would change the arrowstyle in arrowprops to "-". I'll look over the code some more and may offer more suggestions. Thanks, Jon On Fri, 2011-09-16 at 20:32 +0530, Prasanth wrote: > Hello, > > My attempt at implementing the lineid_plot.pro algorithm is available > at > http://github.com/phn/lineid_plot. 
> > The automatic layout calculation is identical to that in the IDL > procedure. But I > have tried to make use of the annotate feature provided by Matplotlib > for the rest of the > code. > > I don't have access to IDL and hence couldn't properly explore the > working of > the IDL code. GDL can't run the code. > > After implementing the layout calculations, the rest of the code, > surprisingly, turns out > to be not difficult to write at all. This probably means that I am > blind to some > obvious problems! > > Feedback will be greatly appreciated. > > Thanks, > Prasanth > > On Wed, Aug 24, 2011 at 8:49 PM, Jonathan Slavin > wrote: > If I had the time to do that adaptation, I'd certainly do it > -- and may > yet -- but it does depend on a lot of things that are very IDL > specific > such as the character size and plot region, etc. I think that > there are > some good hints in the discussion on automatically creating > enough room > for tick labels > (http://matplotlib.sourceforge.net/faq/howto_faq.html), > but there's quite a bit more needed than that. > > Jon > > On Wed, 2011-08-24 at 11:48 -0300, Taro Sato wrote: > > I have my own custom routine to display line identifications > at given > > redshift but it's not smart enough to avoid overlapping; it > only > > alternates the offsets so that adjacent labels won't always > overlap. > > What you have in your example plot is certainly doable with > MPL... > > It's tricky to ensure that labels are readable most of the > time but > > since you know how to approach the problem, why don't you > create one > > and make it available publicly! If the desired algorithm > needed is > > already coded in the IDL script it shouldn't be too > painful. :D > -- > > ______________________________________________________________ > Jonathan D. Slavin Harvard-Smithsonian CfA > jslavin at cfa.harvard.edu 60 Garden Street, MS 83 > phone: (617) 496-7981 Cambridge, MA 02138-1516 > cell: (781) 363-0035 USA > ______________________________________________________________ > > > > _______________________________________________ > AstroPy mailing list > AstroPy at scipy.org > http://mail.scipy.org/mailman/listinfo/astropy > > -- ______________________________________________________________ Jonathan D. Slavin Harvard-Smithsonian CfA jslavin at cfa.harvard.edu 60 Garden Street, MS 83 phone: (617) 496-7981 Cambridge, MA 02138-1516 cell: (781) 363-0035 USA ______________________________________________________________ From oneaufs at gmail.com Sat Sep 17 01:46:23 2011 From: oneaufs at gmail.com (Prasanth) Date: Sat, 17 Sep 2011 11:16:23 +0530 Subject: [AstroPy] equivalent routine to IDL Astronomy Library lineid_plot In-Reply-To: <1316194417.16090.15.camel@shevek> References: <1314029910.24674.24.camel@shevek> <4E549D5A.5090801@gmail.com> <1314196443.26304.9.camel@shevek> <1314199146.26304.16.camel@shevek> <1316194417.16090.15.camel@shevek> Message-ID: Hello, Thanks for trying it out. I can't reproduce the error you are getting while running from the command line. I tried it in Ubuntu 11.04 using Python 2.6 with Matplotlib 0.99.3 and 1.0.1, and Python 2.7 with Matplotlib 1.0.1. I could re-create the exception by using the following code, though: >>> fig = plt.figure() >>> ax = fig.add_subplot(111) >>> # No error here. >>> lineid_plot.plot_line_ids(wave, flux, line_wave, line_label1, ax=ax) (, ) >>> plt.clf() >>> # Try using the Axes that was cleared by plt.clf(). Raises exception. 
>>> lineid_plot.plot_line_ids(wave, flux, line_wave, line_label1, ax=ax) But if I don't pass the Axes that was cleared up by plt.clf(), then I don't run into any problem. >>> lineid_plot.plot_line_ids(wave, flux, line_wave, line_label1) I was going to remove the arrow head, over the next few commits. In any case, the function plot_line_ids() returns both the Figure and the Axes. So one can easily customize the Annotation boxes and arrows by looking through the ax.texts list. One thing I want to do is to assign meaningful unique labels to the label attribute of these boxes. They can then be easily found and customized, say using fig.findobj(). Thanks, Prasanth On Fri, Sep 16, 2011 at 11:03 PM, Jonathan Slavin wrote: > Hi Prasanth, > > Nice work! The images of plots look good. Unfortunately I'm running > into a problem when running the example: > > /export/slavin/python/phn-lineid_plot-a88d5f0/lineid_plot.py in > plot_line_ids(wave, flux, line_wave, line_label1, label1_size, extend, > **kwargs) > 182 # coordinates and extract the width. > > 183 for box in ax.texts: > --> 184 b_ext = box.get_window_extent() > 185 box_widths.append(b_ext.transformed(ax_inv_trans).width) > 186 > > /usr/local/lib/python2.6/site-packages/matplotlib/text.pyc in > get_window_extent(self, renderer, dpi) > 736 self._renderer = renderer > 737 if self._renderer is None: > --> 738 raise RuntimeError('Cannot get window extent w/o > renderer') > 739 > 740 bbox, info = self._get_layout(self._renderer) > > RuntimeError: Cannot get window extent w/o renderer > > I haven't been able to get it to work from the command line -- i.e. the > __name__ = "__main__", doing python lineid_plot -- though I have gotten > it to work in an interactive session. I think an Axes instance is > necessary, so you might want to add ax=None as an explicit keyword and > add: > if ax == None: > ax = plt.gca() > > For my part, I'd prefer not to have arrow heads on the lines, so I would > change the arrowstyle in arrowprops to "-". I'll look over the code > some more and may offer more suggestions. > > Thanks, > Jon > > On Fri, 2011-09-16 at 20:32 +0530, Prasanth wrote: > > Hello, > > > > My attempt at implementing the lineid_plot.pro algorithm is available > > at > > http://github.com/phn/lineid_plot. > > > > The automatic layout calculation is identical to that in the IDL > > procedure. But I > > have tried to make use of the annotate feature provided by Matplotlib > > for the rest of the > > code. > > > > I don't have access to IDL and hence couldn't properly explore the > > working of > > the IDL code. GDL can't run the code. > > > > After implementing the layout calculations, the rest of the code, > > surprisingly, turns out > > to be not difficult to write at all. This probably means that I am > > blind to some > > obvious problems! > > > > Feedback will be greatly appreciated. > > > > Thanks, > > Prasanth > > > > On Wed, Aug 24, 2011 at 8:49 PM, Jonathan Slavin > > wrote: > > If I had the time to do that adaptation, I'd certainly do it > > -- and may > > yet -- but it does depend on a lot of things that are very IDL > > specific > > such as the character size and plot region, etc. I think that > > there are > > some good hints in the discussion on automatically creating > > enough room > > for tick labels > > (http://matplotlib.sourceforge.net/faq/howto_faq.html), > > but there's quite a bit more needed than that. 
> > > > Jon > > > > On Wed, 2011-08-24 at 11:48 -0300, Taro Sato wrote: > > > I have my own custom routine to display line identifications > > at given > > > redshift but it's not smart enough to avoid overlapping; it > > only > > > alternates the offsets so that adjacent labels won't always > > overlap. > > > What you have in your example plot is certainly doable with > > MPL... > > > It's tricky to ensure that labels are readable most of the > > time but > > > since you know how to approach the problem, why don't you > > create one > > > and make it available publicly! If the desired algorithm > > needed is > > > already coded in the IDL script it shouldn't be too > > painful. :D > > -- > > > > ______________________________________________________________ > > Jonathan D. Slavin Harvard-Smithsonian CfA > > jslavin at cfa.harvard.edu 60 Garden Street, MS 83 > > phone: (617) 496-7981 Cambridge, MA 02138-1516 > > cell: (781) 363-0035 USA > > ______________________________________________________________ > > > > > > > > _______________________________________________ > > AstroPy mailing list > > AstroPy at scipy.org > > http://mail.scipy.org/mailman/listinfo/astropy > > > > > -- > ______________________________________________________________ > Jonathan D. Slavin Harvard-Smithsonian CfA > jslavin at cfa.harvard.edu 60 Garden Street, MS 83 > phone: (617) 496-7981 Cambridge, MA 02138-1516 > cell: (781) 363-0035 USA > ______________________________________________________________ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aldcroft at head.cfa.harvard.edu Tue Sep 20 10:10:57 2011 From: aldcroft at head.cfa.harvard.edu (Tom Aldcroft) Date: Tue, 20 Sep 2011 10:10:57 -0400 Subject: [AstroPy] asciitable 0.7.1 Message-ID: I'd like to announce the release of version 0.7.1 of asciitable, an extensible module for reading and writing ASCII tables: http://cxc.harvard.edu/contrib/asciitable/ This is a minor feature and bug-fix release: - Add a method inconsistent_handler() to the BaseReader class as a hook to handle rows with an inconsistent number of data columns (contributed by Erik Tollerud). - Output a more informative error message when guessing fails. - Fix issues in column type handling, mostly related to the MemoryReader class which is used for writing tables. - Fix a problem in guessing where user-supplied args were not filtering the guess possibilities correctly. - Fix problem reading a single column, string-only table with MemoryReader on MacOS. Regards, Tom Aldcroft From dave31415 at gmail.com Wed Sep 21 11:01:30 2011 From: dave31415 at gmail.com (David Johnston) Date: Wed, 21 Sep 2011 10:01:30 -0500 Subject: [AstroPy] Installing scisoft on MacOS 10.6.7 Message-ID: I have been trying to install the usual set of numerical packages on my Mac laptop, numpy, scipy, matplotlib etc. I also need to be able to read jpegs and other images so I see that I probably need PIL. My system is Mac OSX 10.6.7 (Snow Leopard). This is intel core duo hardware. After much work, I have been able to install numpy, scipy, matplotlib but not PIL. This is under python 2.6.6. numpy 1.5.1, scipy 0.9.0, matplotlib 1.0.0. PIL.Image.VERSION=1.1.7 I can import PIL but it fails to work properly. I think I got it to read jpeg files but the Image.show() method failed with raise ImportError("The _imaging C module is not installed") I also tried installing scisoft. I tried versions 2011.17.1 and 2011.1.1. Neither were able to import scipy. 
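For what it's worth, the PIL failure above can be checked directly: the "The _imaging C module is not installed" error means PIL could not load its compiled extension, and that extension can be imported on its own. A minimal sketch of that check (not specific to scisoft):

--
# Minimal check for the "_imaging C module is not installed" error above.
try:
    import _imaging  # PIL's C extension; installed as a top-level module in PIL 1.1.7
    print "PIL C extension found:", _imaging.__file__
except ImportError, err:
    print "PIL C extension missing:", err
--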
Anyone have any advise for installing scisoft on 10.6.7 or some alternative? Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From rowen at uw.edu Wed Sep 21 12:04:31 2011 From: rowen at uw.edu (Russell Owen) Date: Wed, 21 Sep 2011 09:04:31 -0700 Subject: [AstroPy] Installing scisoft on MacOS 10.6.7 In-Reply-To: References: Message-ID: You might try my unofficial binary -- which (like most binary installers) is meant to be used with python.org's 32-bit python: -- Russell On Sep 21, 2011, at 8:01 AM, David Johnston wrote: > I have been trying to install the usual set of numerical packages on my Mac laptop, numpy, scipy, matplotlib etc. I also need to be able to read jpegs and other images so I see that I probably need PIL. > > My system is Mac OSX 10.6.7 (Snow Leopard). This is intel core duo hardware. > > After much work, I have been able to install numpy, scipy, matplotlib but not PIL. This is under python 2.6.6. numpy 1.5.1, scipy 0.9.0, matplotlib 1.0.0. PIL.Image.VERSION=1.1.7 > > I can import PIL but it fails to work properly. I think I got it to read jpeg files but the Image.show() method failed with > raise ImportError("The _imaging C module is not installed") > > I also tried installing scisoft. I tried versions 2011.17.1 and 2011.1.1. Neither were able to import scipy. > > Anyone have any advise for installing scisoft on 10.6.7 or some alternative? > Dave > _______________________________________________ > AstroPy mailing list > AstroPy at scipy.org > http://mail.scipy.org/mailman/listinfo/astropy From shupe at ipac.caltech.edu Wed Sep 21 12:29:26 2011 From: shupe at ipac.caltech.edu (David Shupe) Date: Wed, 21 Sep 2011 09:29:26 -0700 Subject: [AstroPy] Installing scisoft on MacOS 10.6.7 In-Reply-To: References: Message-ID: <28CF582C-B401-4FB2-9C67-B921FE345900@ipac.caltech.edu> I can endorse the instructions provided by Thomas Robitaille for installing Python using MacPorts. These are at http://astrofrog.github.com/macports-python/ His post to this list is at http://mail.scipy.org/pipermail/astropy/2011-July/001666.html I have used these instructions to install Python with all the scientific packages I wanted on two Macs running Snow Leopard (10.6.8). It looks like PIL can be installed using MacPorts too. Regards, David On Sep 21, 2011, at 8:01 AM, David Johnston wrote: > I have been trying to install the usual set of numerical packages on my Mac laptop, numpy, scipy, matplotlib etc. I also need to be able to read jpegs and other images so I see that I probably need PIL. > > My system is Mac OSX 10.6.7 (Snow Leopard). This is intel core duo hardware. > > After much work, I have been able to install numpy, scipy, matplotlib but not PIL. This is under python 2.6.6. numpy 1.5.1, scipy 0.9.0, matplotlib 1.0.0. PIL.Image.VERSION=1.1.7 > > I can import PIL but it fails to work properly. I think I got it to read jpeg files but the Image.show() method failed with > raise ImportError("The _imaging C module is not installed") > > I also tried installing scisoft. I tried versions 2011.17.1 and 2011.1.1. Neither were able to import scipy. > > Anyone have any advise for installing scisoft on 10.6.7 or some alternative? > Dave > _______________________________________________ > AstroPy mailing list > AstroPy at scipy.org > http://mail.scipy.org/mailman/listinfo/astropy -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dave31415 at gmail.com Wed Sep 21 14:30:52 2011 From: dave31415 at gmail.com (David Johnston) Date: Wed, 21 Sep 2011 13:30:52 -0500 Subject: [AstroPy] Installing scisoft on MacOS 10.6.7 In-Reply-To: <28CF582C-B401-4FB2-9C67-B921FE345900@ipac.caltech.edu> References: <28CF582C-B401-4FB2-9C67-B921FE345900@ipac.caltech.edu> Message-ID: Thanks David. I followed those instructions and everything appears to work. On Wed, Sep 21, 2011 at 11:29 AM, David Shupe wrote: > I can endorse the instructions provided by Thomas Robitaille for installing > Python using MacPorts. These are at > http://astrofrog.github.com/macports-python/ > > His post to this list is at > http://mail.scipy.org/pipermail/astropy/2011-July/001666.html > > I have used these instructions to install Python with all the scientific > packages I wanted on two Macs running Snow Leopard (10.6.8). It looks like > PIL can be installed using MacPorts too. > > Regards, > David > > On Sep 21, 2011, at 8:01 AM, David Johnston wrote: > > I have been trying to install the usual set of numerical packages on my Mac > laptop, numpy, scipy, matplotlib etc. I also need to be able to read jpegs > and other images so I see that I probably need PIL. > > My system is Mac OSX 10.6.7 (Snow Leopard). This is intel core duo > hardware. > > After much work, I have been able to install numpy, scipy, matplotlib but > not PIL. This is under python 2.6.6. numpy 1.5.1, scipy 0.9.0, matplotlib > 1.0.0. PIL.Image.VERSION=1.1.7 > > I can import PIL but it fails to work properly. I think I got it to read > jpeg files but the Image.show() method failed with > raise ImportError("The _imaging C module is not installed") > > I also tried installing scisoft. I tried versions 2011.17.1 and 2011.1.1. > Neither were able to import scipy. > > Anyone have any advise for installing scisoft on 10.6.7 or some > alternative? > Dave > _______________________________________________ > AstroPy mailing list > AstroPy at scipy.org > http://mail.scipy.org/mailman/listinfo/astropy > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cygnusxlist at mac.com Wed Sep 21 17:17:40 2011 From: cygnusxlist at mac.com (cygnusxlist at mac.com) Date: Wed, 21 Sep 2011 17:17:40 -0400 Subject: [AstroPy] Installing scisoft on MacOS 10.6.7 In-Reply-To: References: <28CF582C-B401-4FB2-9C67-B921FE345900@ipac.caltech.edu> Message-ID: If you're interested in a 64-bit configuration, I have a description of the process I've used for configuring two (desktop & laptop) 10.6 systems here: http://dealingwithcreationisminastronomy.blogspot.com/p/building-scientific-toolbox-on-macos-x.html Tom On Sep 21, 2011, at 2:30 PM, David Johnston wrote: > Thanks David. I followed those instructions and everything appears to work. > > On Wed, Sep 21, 2011 at 11:29 AM, David Shupe wrote: > I can endorse the instructions provided by Thomas Robitaille for installing Python using MacPorts. These are at http://astrofrog.github.com/macports-python/ > > His post to this list is at http://mail.scipy.org/pipermail/astropy/2011-July/001666.html > > I have used these instructions to install Python with all the scientific packages I wanted on two Macs running Snow Leopard (10.6.8). It looks like PIL can be installed using MacPorts too. > > Regards, > David > > On Sep 21, 2011, at 8:01 AM, David Johnston wrote: > >> I have been trying to install the usual set of numerical packages on my Mac laptop, numpy, scipy, matplotlib etc. 
I also need to be able to read jpegs and other images so I see that I probably need PIL. >> >> My system is Mac OSX 10.6.7 (Snow Leopard). This is intel core duo hardware. >> >> After much work, I have been able to install numpy, scipy, matplotlib but not PIL. This is under python 2.6.6. numpy 1.5.1, scipy 0.9.0, matplotlib 1.0.0. PIL.Image.VERSION=1.1.7 >> >> I can import PIL but it fails to work properly. I think I got it to read jpeg files but the Image.show() method failed with >> raise ImportError("The _imaging C module is not installed") >> >> I also tried installing scisoft. I tried versions 2011.17.1 and 2011.1.1. Neither were able to import scipy. >> >> Anyone have any advise for installing scisoft on 10.6.7 or some alternative? >> Dave >> _______________________________________________ >> AstroPy mailing list >> AstroPy at scipy.org >> http://mail.scipy.org/mailman/listinfo/astropy > > > _______________________________________________ > AstroPy mailing list > AstroPy at scipy.org > http://mail.scipy.org/mailman/listinfo/astropy -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas.robitaille at gmail.com Thu Sep 22 04:13:14 2011 From: thomas.robitaille at gmail.com (Thomas Robitaille) Date: Thu, 22 Sep 2011 10:13:14 +0200 Subject: [AstroPy] Installing scisoft on MacOS 10.6.7 In-Reply-To: References: <28CF582C-B401-4FB2-9C67-B921FE345900@ipac.caltech.edu> Message-ID: Just wanted to add that MacPorts does also install a 64-bit version of Python by default. Tom On Wednesday, 21 September 2011, cygnusxlist at mac.com wrote: > If you're interested in a 64-bit configuration, I have a description of the > process I've used for configuring two (desktop & laptop) 10.6 systems here: > > > http://dealingwithcreationisminastronomy.blogspot.com/p/building-scientific-toolbox-on-macos-x.html > > Tom > > On Sep 21, 2011, at 2:30 PM, David Johnston wrote: > > Thanks David. I followed those instructions and everything appears to > work. > > On Wed, Sep 21, 2011 at 11:29 AM, David Shupe > > wrote: > >> I can endorse the instructions provided by Thomas Robitaille for >> installing Python using MacPorts. These are at >> http://astrofrog.github.com/macports-python/ >> >> His post to this list is at >> http://mail.scipy.org/pipermail/astropy/2011-July/001666.html >> >> I have used these instructions to install Python with all the scientific >> packages I wanted on two Macs running Snow Leopard (10.6.8). It looks like >> PIL can be installed using MacPorts too. >> >> Regards, >> David >> >> On Sep 21, 2011, at 8:01 AM, David Johnston wrote: >> >> I have been trying to install the usual set of numerical packages on my >> Mac laptop, numpy, scipy, matplotlib etc. I also need to be able to read >> jpegs and other images so I see that I probably need PIL. >> >> My system is Mac OSX 10.6.7 (Snow Leopard). This is intel core duo >> hardware. >> >> After much work, I have been able to install numpy, scipy, matplotlib but >> not PIL. This is under python 2.6.6. numpy 1.5.1, scipy 0.9.0, matplotlib >> 1.0.0. PIL.Image.VERSION=1.1.7 >> >> I can import PIL but it fails to work properly. I think I got it to read >> jpeg files but the Image.show() method failed with >> raise ImportError("The _imaging C module is not installed") >> >> I also tried installing scisoft. I tried versions 2011.17.1 and 2011.1.1. >> Neither were able to import scipy. >> >> Anyone have any advise for installing scisoft on 10.6.7 or some >> alternative? 
>> Dave >> _______________________________________________ >> AstroPy mailing list >> AstroPy at scipy.org >> http://mail.scipy.org/mailman/listinfo/astropy >> >> >> > _______________________________________________ > AstroPy mailing list > AstroPy at scipy.org > http://mail.scipy.org/mailman/listinfo/astropy > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From embray at stsci.edu Thu Sep 22 12:21:52 2011 From: embray at stsci.edu (Erik Bray) Date: Thu, 22 Sep 2011 12:21:52 -0400 Subject: [AstroPy] PyFITS and mmap Message-ID: <4E7B60A0.60203@stsci.edu> Hi all, Every now and then PyFITS gets support requests from people trying to work with very large FITS files (>4GB; I've seen as high as 50 GB) and having trouble when they run out of memory. Normally I point them to the memmap=True option to pyfits.open(), and that works for them. On 64-bit systems in particular there's more than enough virtual address space to mmap very large files. And I got to thinking that while most FITS files I encounter are not many gigabytes in size, they are still over 100 MB. And there are only so many operations that actually require having an entire array in memory at once. So maybe it would make sense to have PyFITS use mmap by default. There could be some slight performance implications here: For example, when reading the data a little bit a time mmap is a little a bit slower, unsurprisingly. But in practice I don't think it's a very noticeable difference, and the benefit--far less memory usage and more transparent support for large files--outweigh any drawbacks I can think of. I'm just putting this out there because I wonder if there are any other downsides to this that I'm not thinking of. Thanks, Erik From jturner at gemini.edu Thu Sep 22 22:39:28 2011 From: jturner at gemini.edu (James Turner) Date: Thu, 22 Sep 2011 23:39:28 -0300 Subject: [AstroPy] PyFITS and mmap In-Reply-To: <4E7B60A0.60203@stsci.edu> References: <4E7B60A0.60203@stsci.edu> Message-ID: <4E7BF160.1080103@gemini.edu> Hi Erik, This probably depends on the details, but if data arrays are mapped fairly transparently and operations are just a "little bit slower", without the danger of exhausting memory and/or making the OS swap, that certainly sounds like a net gain to me. I assume there will be cases where it's not quite so simple and things have to be kept in memory for specific performance reasons or the working directory isn't writeable or whatever, but it seems like a reasonable default. I don't have enough practical experience with memory mapping to answer your question about downsides you haven't thought of, but since you're testing the waters (and no-one has commented yet) I thought I'd throw out my initial user reaction. For what it's worth, we HAVE recently run into situations at Gemini where we have exhausted 4Gb of RAM, typical of an end user machine, and started discussing memory mapping. We're also not dealing with files larger than 200Mb or so. AFAICT, PyFITS doesn't do this by default just because not that long ago it was running mainly on 32-bit systems (I remember discussing it at the time and was told it would be more useful in future, which is now). Seems like some limited user testing would be in order first? Cheers, James. > Hi all, > > Every now and then PyFITS gets support requests from people trying to > work with very large FITS files (>4GB; I've seen as high as 50 GB) and > having trouble when they run out of memory. 
> > Normally I point them to the memmap=True option to pyfits.open(), and > that works for them. On 64-bit systems in particular there's more than > enough virtual address space to mmap very large files. > > And I got to thinking that while most FITS files I encounter are not > many gigabytes in size, they are still over 100 MB. And there are only > so many operations that actually require having an entire array in > memory at once. So maybe it would make sense to have PyFITS use mmap by > default. > > There could be some slight performance implications here: For example, > when reading the data a little bit a time mmap is a little a bit slower, > unsurprisingly. But in practice I don't think it's a very noticeable > difference, and the benefit--far less memory usage and more transparent > support for large files--outweigh any drawbacks I can think of. > > I'm just putting this out there because I wonder if there are any other > downsides to this that I'm not thinking of. > > Thanks, > Erik > _______________________________________________ > AstroPy mailing list > AstroPy at scipy.org > http://mail.scipy.org/mailman/listinfo/astropy From pebarrett at gmail.com Fri Sep 23 08:37:07 2011 From: pebarrett at gmail.com (Paul Barrett) Date: Fri, 23 Sep 2011 08:37:07 -0400 Subject: [AstroPy] PyFITS and mmap In-Reply-To: <4E7B60A0.60203@stsci.edu> References: <4E7B60A0.60203@stsci.edu> Message-ID: Erik, The performance impact can be greater than you might think. As an example, I have some Python code that uses subprocesses to divide the processing among eight or more processors. The data is shared between the parent and child processes using memory-mapping. The calculations take about 5 minutes per subprocess and then another 7 minutes or so to write the data to disk before the subprocess ends. I would therefore prefer that memory-mapped files be an option instead of the default to avoid such a possible performance hit. If it is the default, there may be situations where the performance is poor and the novice user would not know why PyFITS is performing so poorly. This adverse behavior may discourage users from using FITS files and instead use HDF5 files (i.e., the tables package), which, when I think about it, would be a good thing. -- Paul On Thu, Sep 22, 2011 at 12:21 PM, Erik Bray wrote: > Hi all, > > Every now and then PyFITS gets support requests from people trying to > work with very large FITS files (>4GB; I've seen as high as 50 GB) and > having trouble when they run out of memory. > > Normally I point them to the memmap=True option to pyfits.open(), and > that works for them. ?On 64-bit systems in particular there's more than > enough virtual address space to mmap very large files. > > And I got to thinking that while most FITS files I encounter are not > many gigabytes in size, they are still over 100 MB. ?And there are only > so many operations that actually require having an entire array in > memory at once. ?So maybe it would make sense to have PyFITS use mmap by > default. > > There could be some slight performance implications here: For example, > when reading the data a little bit a time mmap is a little a bit slower, > unsurprisingly. ?But in practice I don't think it's a very noticeable > difference, and the benefit--far less memory usage and more transparent > support for large files--outweigh any drawbacks I can think of. > > I'm just putting this out there because I wonder if there are any other > downsides to this that I'm not thinking of. 
> > Thanks, > Erik > _______________________________________________ > AstroPy mailing list > AstroPy at scipy.org > http://mail.scipy.org/mailman/listinfo/astropy > From aldcroft at head.cfa.harvard.edu Fri Sep 23 11:08:16 2011 From: aldcroft at head.cfa.harvard.edu (Tom Aldcroft) Date: Fri, 23 Sep 2011 11:08:16 -0400 Subject: [AstroPy] PyFITS and mmap In-Reply-To: References: <4E7B60A0.60203@stsci.edu> Message-ID: On Fri, Sep 23, 2011 at 8:37 AM, Paul Barrett wrote: > Erik, > > The performance impact can be greater than you might think. ?As an > example, I have some Python code that uses subprocesses to divide the > processing among eight or more processors. ?The data is shared between > the parent and child processes using memory-mapping. ?The calculations > take about 5 minutes per subprocess and then another 7 minutes or so > to write the data to disk before the subprocess ends. ?I would > therefore prefer that memory-mapped files be an option instead of the > default to avoid such a possible performance hit. If it is the > default, there may be situations where the performance is poor and the > novice user would not know why PyFITS is performing so poorly. ?This > adverse behavior may discourage users from using FITS files and > instead use HDF5 files (i.e., the tables package), which, when I think > about it, would be a good thing. I'm not sure many novice users will be knowingly creating subprocesses in their Python scripts. I would say the case of a novice user deciding to open a 20 Gb FITS file (and complaining about performance) is more likely. But I agree that you need to be pretty careful about making a default change like this and consider (and test) a wide variety of use cases. +1 on HDF5 for big datasets. - Tom A > On Thu, Sep 22, 2011 at 12:21 PM, Erik Bray wrote: >> Hi all, >> >> Every now and then PyFITS gets support requests from people trying to >> work with very large FITS files (>4GB; I've seen as high as 50 GB) and >> having trouble when they run out of memory. >> >> Normally I point them to the memmap=True option to pyfits.open(), and >> that works for them. ?On 64-bit systems in particular there's more than >> enough virtual address space to mmap very large files. >> >> And I got to thinking that while most FITS files I encounter are not >> many gigabytes in size, they are still over 100 MB. ?And there are only >> so many operations that actually require having an entire array in >> memory at once. ?So maybe it would make sense to have PyFITS use mmap by >> default. >> >> There could be some slight performance implications here: For example, >> when reading the data a little bit a time mmap is a little a bit slower, >> unsurprisingly. ?But in practice I don't think it's a very noticeable >> difference, and the benefit--far less memory usage and more transparent >> support for large files--outweigh any drawbacks I can think of. >> >> I'm just putting this out there because I wonder if there are any other >> downsides to this that I'm not thinking of. 
>> >> Thanks, >> Erik >> _______________________________________________ >> AstroPy mailing list >> AstroPy at scipy.org >> http://mail.scipy.org/mailman/listinfo/astropy >> > _______________________________________________ > AstroPy mailing list > AstroPy at scipy.org > http://mail.scipy.org/mailman/listinfo/astropy > > From embray at stsci.edu Fri Sep 23 11:36:46 2011 From: embray at stsci.edu (Erik Bray) Date: Fri, 23 Sep 2011 11:36:46 -0400 Subject: [AstroPy] PyFITS and mmap In-Reply-To: <4E7BF160.1080103@gemini.edu> References: <4E7B60A0.60203@stsci.edu> <4E7BF160.1080103@gemini.edu> Message-ID: <4E7CA78E.3040205@stsci.edu> On 09/22/2011 10:39 PM, James Turner wrote: > This probably depends on the details, but if data arrays are mapped > fairly transparently and operations are just a "little bit slower", > without the danger of exhausting memory and/or making the OS swap, > that certainly sounds like a net gain to me. Technically, when reading pieces of a mmap'd file into physical RAM, swapping is *exactly* what's going on, just not to/from your OS's main pagefile :) > I assume there will be cases where it's not quite so simple and > things have to be kept in memory for specific performance reasons > or the working directory isn't writeable or whatever, but it seems > like a reasonable default. I don't have enough practical experience > with memory mapping to answer your question about downsides you > haven't thought of, but since you're testing the waters (and no-one > has commented yet) I thought I'd throw out my initial user reaction. > For what it's worth, we HAVE recently run into situations at Gemini > where we have exhausted 4Gb of RAM, typical of an end user machine, > and started discussing memory mapping. We're also not dealing with > files larger than 200Mb or so. Right--on large programs on 32-bit systems even smaller files can be problematic to mmap since it requires a contiguous address space, which may not be possible to find if the memory is fairly fragmented. On 64-bit systems (just about everything anymore, though my laptop is still 32-bit :) this is much less likely to be a problem. > AFAICT, PyFITS doesn't do this by default just because not that > long ago it was running mainly on 32-bit systems (I remember > discussing it at the time and was told it would be more useful in > future, which is now). > > Seems like some limited user testing would be in order first? > > Cheers, > > James. I could try turning it on here at STScI and see if any problems arise. Warren and I also discussed adding a global default--something like pyfits.USE_MEMMAP--that can be used to easily control the default for all pyfits.open() calls. Thanks, Erik > >> Hi all, >> >> Every now and then PyFITS gets support requests from people trying to >> work with very large FITS files (>4GB; I've seen as high as 50 GB) and >> having trouble when they run out of memory. >> >> Normally I point them to the memmap=True option to pyfits.open(), and >> that works for them. On 64-bit systems in particular there's more than >> enough virtual address space to mmap very large files. >> >> And I got to thinking that while most FITS files I encounter are not >> many gigabytes in size, they are still over 100 MB. And there are only >> so many operations that actually require having an entire array in >> memory at once. So maybe it would make sense to have PyFITS use mmap by >> default. 
>> >> There could be some slight performance implications here: For example, >> when reading the data a little bit a time mmap is a little a bit slower, >> unsurprisingly. But in practice I don't think it's a very noticeable >> difference, and the benefit--far less memory usage and more transparent >> support for large files--outweigh any drawbacks I can think of. >> >> I'm just putting this out there because I wonder if there are any other >> downsides to this that I'm not thinking of. >> >> Thanks, >> Erik >> _______________________________________________ >> AstroPy mailing list >> AstroPy at scipy.org >> http://mail.scipy.org/mailman/listinfo/astropy > _______________________________________________ > AstroPy mailing list > AstroPy at scipy.org > http://mail.scipy.org/mailman/listinfo/astropy From jturner at gemini.edu Fri Sep 23 11:40:37 2011 From: jturner at gemini.edu (James Turner) Date: Fri, 23 Sep 2011 12:40:37 -0300 Subject: [AstroPy] PyFITS and mmap In-Reply-To: <4E7CA78E.3040205@stsci.edu> References: <4E7B60A0.60203@stsci.edu> <4E7BF160.1080103@gemini.edu> <4E7CA78E.3040205@stsci.edu> Message-ID: <4E7CA875.9000808@gemini.edu> > Technically, when reading pieces of a mmap'd file into physical RAM, > swapping is *exactly* what's going on, just not to/from your OS's main > pagefile :) Right, but I was talking about forcing/encouraging the OS to swap other stuff out, which is often a nuisance for the user. Thanks! James. From embray at stsci.edu Fri Sep 23 11:49:22 2011 From: embray at stsci.edu (Erik Bray) Date: Fri, 23 Sep 2011 11:49:22 -0400 Subject: [AstroPy] PyFITS and mmap In-Reply-To: References: <4E7B60A0.60203@stsci.edu> Message-ID: <4E7CAA82.1060807@stsci.edu> On 09/23/2011 08:37 AM, Paul Barrett wrote: > Erik, > > The performance impact can be greater than you might think. As an > example, I have some Python code that uses subprocesses to divide the > processing among eight or more processors. The data is shared between > the parent and child processes using memory-mapping. The calculations > take about 5 minutes per subprocess and then another 7 minutes or so > to write the data to disk before the subprocess ends. I would > therefore prefer that memory-mapped files be an option instead of the > default to avoid such a possible performance hit. If it is the > default, there may be situations where the performance is poor and the > novice user would not know why PyFITS is performing so poorly. This > adverse behavior may discourage users from using FITS files and > instead use HDF5 files (i.e., the tables package), which, when I think > about it, would be a good thing. > Like Tom wrote, this hardly seems like a novice use-case. I mentioned in my previous e-mail the possibility of adding a pyfits.USE_MEMMAP variable to control the default behavior from one place (rather than having to change the arguments to all pyfits.open() calls). In your case, you would want to set pyfits.USE_MEMMAP = False. Still, this is valuable input. I don't have strong opinions either way about what the default should be, which is why I asked. We also had a few use cases come up here of heavily I/O-bound use of PyFITS where mmap might not be appropriate. I think it mostly comes down to what would work best for the average user. And yes, for these large datasets they really should be using HDF5 and PyTables. I've done quite a bit to improve the performance of PyFITS, but the FITS format was never designed for such large datasets, and there's only so much that can be done. 
It would be foolish to try to recreate PyTables on top of FITS and without libhdf5 :) Thanks, Erik
From erik.tollerud at gmail.com Fri Sep 23 21:44:52 2011 From: erik.tollerud at gmail.com (Erik Tollerud) Date: Fri, 23 Sep 2011 18:44:52 -0700 Subject: [AstroPy] PyFITS and mmap In-Reply-To: <4E7CA78E.3040205@stsci.edu> References: <4E7B60A0.60203@stsci.edu> <4E7BF160.1080103@gemini.edu> <4E7CA78E.3040205@stsci.edu> Message-ID: > I could try turning it on here at STScI and see if any problems arise. > Warren and I also discussed adding a global default--something like > pyfits.USE_MEMMAP--that can be used to easily control the default for > all pyfits.open() calls. I think having changeable defaults like this is a great idea... but I think even better than a global variable would be to adopt some sort of very simple configuration file akin to matplotlibrc. You could stick a few other options in there too. In particular, I would love to be able to also set pyfits.core.EXTENSION_NAME_CASE_SENSITIVE to default to True. I usually end up changing it by hand in pyfits once I install it, but that's annoying to do at every version update (and it's a bit hacky). Even with that, though, I think it's a decent idea to have memmap be the default - I certainly have a number of scripts where I force pyfits to memmap, and I've never been distressed by performance problems - but the limited user testing is definitely a good idea in general. >+1 on HDF5 for big datasets. +1 here as well... but we're still stuck with plenty of other peoples' gigantic FITS tables for the foreseeable future... -- Erik Tollerud
From embray at stsci.edu Mon Sep 26 10:30:17 2011 From: embray at stsci.edu (Erik Bray) Date: Mon, 26 Sep 2011 10:30:17 -0400 Subject: [AstroPy] PyFITS and mmap In-Reply-To: References: <4E7B60A0.60203@stsci.edu> <4E7BF160.1080103@gemini.edu> <4E7CA78E.3040205@stsci.edu> Message-ID: <4E808C79.9050609@stsci.edu> On 09/23/2011 09:44 PM, Erik Tollerud wrote: >> I could try turning it on here at STScI and see if any problems arise. >> Warren and I also discussed adding a global default--something like >> pyfits.USE_MEMMAP--that can be used to easily control the default for >> all pyfits.open() calls. > > I think having changeable defaults like this is a great idea... > but I think even better than a global variable would be to adopt some > sort of very simple configuration file akin to matplotlibrc. You > could stick a few other options in there too. In particular, I would > love to be able to also set pyfits.core.EXTENSION_NAME_CASE_SENSITIVE > to default to True. I usually end up changing it by hand in pyfits once > I install it, but that's annoying to do at every version update (and > it's a bit hacky). I'm not too huge on adding an rc file for PyFITS if only because there are only so many 'global' options it has to tweak. EXTENSION_NAME_CASE_SENSITIVE is the only one right now. On the other hand, adding such a file would open the door to adding more options and configurable defaults, so it could be worth considering (in addition to environment variables, for people who prefer to use them). Erik B.
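To make the options discussed in this thread concrete, here is a short sketch. The memmap=True keyword to pyfits.open() is the existing per-call option; the file name below is a placeholder, and pyfits.USE_MEMMAP is only a proposal at this point, so it appears only in a comment.

--
import pyfits

# Existing option: map the data on disk instead of reading it all into memory.
hdulist = pyfits.open('big_image.fits', memmap=True)
data = hdulist[0].data                    # a numpy memmap; pages load on demand
print data[1000:1010, 1000:1010].mean()   # only the touched region is read from disk
hdulist.close()

# Proposed, not implemented at the time of this thread:
# pyfits.USE_MEMMAP = True   # would make memmap the default for all pyfits.open() calls
--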