From apalomba at austin.rr.com Tue Nov 1 11:12:45 2011
From: apalomba at austin.rr.com (Anthony Palomba)
Date: Tue, 1 Nov 2011 10:12:45 -0500
Subject: [SciPy-User] SciPy for Computational Geometry
In-Reply-To: References: <4EAF0F99.7080407@gmail.com>
Message-ID:

I am very interested in finding a good python package that does computational geometry.

I was looking at CGAL for a while, but the Python bindings do not seem to work and the examples are pretty limited.

If you do find something that works, please be sure to inform us as well.

Thanks,
Anthony

On Mon, Oct 31, 2011 at 4:58 PM, wrote:
> Maybe have a look at "microsphere interpolation":
> http://www.dudziak.com/how_microsphere_projection_works.php
> (Perhaps just looking at the diagram at the bottom of that page would
> suffice for a start.)
> This is not a Python implementation, but it might give you some ideas.
>
> --
> Cameron Hayne
> macdev at hayne.net
>
> On 31-Oct-11, at 5:14 PM, Lorenzo Isella wrote:
> > This is admittedly a bit off topic, but I wonder if anybody on the
> > list
> > is familiar with this problem (which should belong to computational
> > geometry) and is able to point me to an implementation (possibly
> > relying
> > on scipy).
> > Imagine that you are sitting at the origin (0,0,0) of a 3D coordinate
> > system and that you are looking at a set of (non-overlapping) spheres
> > (all the spheres are identical and with radius R=1).
> > You ask yourself how many spheres you can see overall.
> > The result is in general a (positive) real number as one sphere may
> > partially eclipse another sphere for an observer in the origin (e.g.
> > if
> > one sphere is located at (0,0,5) and the other (0,0.3,10)).
> > Does anybody know an algorithm to calculate this quantity efficiently?
> > I have in mind (for now at least) configurations of less than 100
> > spheres, so hopefully this should not be too demanding.
> > I had a look at
> > http://www.qhull.org/
> > but I am not 100% sure that this is the way to go.
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From josef.pktd at gmail.com Tue Nov 1 11:28:59 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Tue, 1 Nov 2011 11:28:59 -0400
Subject: [SciPy-User] Lerch transcendent function
Message-ID:

Is the Lerch transcendent function available in scipy or python?

http://en.wikipedia.org/wiki/Lerch_zeta_function#Definition capital phi
http://mathworld.wolfram.com/LerchTranscendent.html

(used for moments of Weibull-geometric distribution)

Josef

From friedrichromstedt at gmail.com Tue Nov 1 11:32:55 2011
From: friedrichromstedt at gmail.com (Friedrich Romstedt)
Date: Tue, 1 Nov 2011 16:32:55 +0100
Subject: [SciPy-User] applications for tukeylambda distribution ?
In-Reply-To: References: Message-ID:

2011/10/30 :
> Are there any applications for the Tukey Lambda distribution
> http://en.wikipedia.org/wiki/Tukey_lambda_distribution ?
>
> I just stumbled over it looking at PPCC plots (
> http://www.itl.nist.gov/div898/handbook/eda/section3/ppccplot.htm and
> scipy.stats.morestats) and it looks quite useful covering or
> approximating a large range of distributions.

Hi Josef,

"The most common use of this distribution is to generate a Tukey
lambda PPCC plot of a data set."
(http://en.wikipedia.org/wiki/Tukey_lambda_distribution#Comments) "It is typically used to identify an appropriate distribution (see the comments below) and not used in statistical models directly." (http://en.wikipedia.org/wiki/Tukey_lambda_distribution) So I guess the mighty Wikipedia suggests explicitly that there are no applications in the sense of a statistical model? Your second reference agrees on this: "The Tukey-Lambda PPCC plot is used to suggest an appropriate distribution. You should follow-up with PPCC and probability plots of the appropriate alternatives." I would guess the "most common use" and "typically used" formulations are just backdoors to not claim what we're not sure about. Furthermore, "The probability density function (pdf) and cumulative distribution function (cdf) are both computed numerically, as the Tukey lambda distribution does not have a simple, closed form for any values of the parameters except ? = 0 (see Logistic function). However, the pdf can be expressed in parametric form, for all values of ?, in terms of the quantile function and the reciprocal of the quantile density function." (http://en.wikipedia.org/wiki/Tukey_lambda_distribution again); I have no idea off the cuff if it is useful to fit the quantile directly or not. At least it does not look like the common (aside of PPCC). I guess the reason why one would not like to model with this Tukey quantile is, that it is just a parametric model, and does not have a physical reason (at least there is none such given in what you gave). A quick googling "tukey modeling" gives this here: http://andrewgelman.com/2011/01/tukeys_philosop/ - but I find what's written there rather unclear and don't really understand what're the associations related to "model" and "method" in that post. It looks like if it comes down to "do we need a physical derivation of our distribution or not". To my belief, a pdf or cdf without a reason is missing something. It just feels wrong - it's not satisfying. The problem might be: Where does the endless circle of deriving and deriving stop? Maybe it never does. Maybe it's just a game we play and we pretend that it's of objective importance but it isn't - it's just about satisfaction and fun in the end. :-) AISI, Friedrich From fredrik.johansson at gmail.com Tue Nov 1 11:57:44 2011 From: fredrik.johansson at gmail.com (Fredrik Johansson) Date: Tue, 1 Nov 2011 16:57:44 +0100 Subject: [SciPy-User] Lerch transcendent function In-Reply-To: References: Message-ID: On Tue, Nov 1, 2011 at 4:28 PM, wrote: > Is the Lerch transcendent function available in scipy or python? > > http://en.wikipedia.org/wiki/Lerch_zeta_function#Definition ?capital phi > http://mathworld.wolfram.com/LerchTranscendent.html > > (used for moments of Weibull-geometric distribution) > > Josef mpmath has it: http://mpmath.googlecode.com/svn/trunk/doc/build/functions/zeta.html#lerch-transcendent Fredrik From josef.pktd at gmail.com Tue Nov 1 12:11:28 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 1 Nov 2011 12:11:28 -0400 Subject: [SciPy-User] applications for tukeylambda distribution ? In-Reply-To: References: Message-ID: On Tue, Nov 1, 2011 at 11:32 AM, Friedrich Romstedt wrote: > 2011/10/30 ?: >> Are there any applications for the Tukey Lambda distribution >> http://en.wikipedia.org/wiki/Tukey_lambda_distribution ? 
>>
>> I just stumbled over it looking at PPCC plots (
>> http://www.itl.nist.gov/div898/handbook/eda/section3/ppccplot.htm and
>> scipy.stats.morestats) and it looks quite useful covering or
>> approximating a large range of distributions.
>
> Hi Josef,
>
> "The most common use of this distribution is to generate a Tukey
> lambda PPCC plot of a data set."
> (http://en.wikipedia.org/wiki/Tukey_lambda_distribution#Comments)
>
> "It is typically used to identify an appropriate distribution (see the
> comments below) and not used in statistical models directly."
> (http://en.wikipedia.org/wiki/Tukey_lambda_distribution)
>
> So I guess the mighty Wikipedia suggests explicitly that there are no
> applications in the sense of a statistical model?
>
> Your second reference agrees on this: "The Tukey-Lambda PPCC plot is
> used to suggest an appropriate distribution. You should follow-up with
> PPCC and probability plots of the appropriate alternatives."
>
> I would guess the "most common use" and "typically used" formulations
> are just backdoors to not claim what we're not sure about.
>
> Furthermore, "The probability density function (pdf) and cumulative
> distribution function (cdf) are both computed numerically, as the
> Tukey lambda distribution does not have a simple, closed form for any
> values of the parameters except λ = 0 (see Logistic function).
> However, the pdf can be expressed in parametric form, for all values
> of λ, in terms of the quantile function and the reciprocal of the
> quantile density function."
> (http://en.wikipedia.org/wiki/Tukey_lambda_distribution again); I have
> no idea off the cuff if it is useful to fit the quantile directly or
> not. At least it does not look like the common use (aside from PPCC).

That's pretty much the impression that I also got. However, we do have the cdf and indirectly the pdf in scipy.special. It also has a section in Johnson, Kotz and Balakrishnan. So, I was wondering whether it's used in any field.

> I guess the reason why one would not like to model with this Tukey
> quantile is that it is just a parametric model, and does not have a
> physical reason (at least there is none given in what you cited).
>
> A quick googling of "tukey modeling" gives this here:
> http://andrewgelman.com/2011/01/tukeys_philosop/ - but I find what's
> written there rather unclear and don't really understand what the
> associations related to "model" and "method" in that post are. It looks
> like it comes down to "do we need a physical derivation of our
> distribution or not".
>
> To my belief, a pdf or cdf without a reason is missing something. It
> just feels wrong - it's not satisfying. The problem might be: where
> does the endless circle of deriving and deriving stop? Maybe it never
> does. Maybe it's just a game we play and we pretend that it's of
> objective importance but it isn't - it's just about satisfaction and
> fun in the end. :-)

The Gelman page is a bit too philosophical for my taste, especially the comments. To some extent I'm just a collector (of statistical functions instead of coins or movies or power tools), but I'd rather collect useful things (you never know when they come in handy). My impression is that in reliability they invent about ten (an underestimate) new distributions a year that all have a motivating introduction (that doesn't tell me much but looks relevant) (bath-tub shapes, anyone? :)

Thanks for looking into it.
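For anyone curious, here is a minimal sketch of the PPCC use with scipy.stats. The sample data are synthetic, and the reading of lambda ~ 0.14 as "approximately normal" follows the NIST handbook page linked above:

import numpy as np
from scipy import stats

np.random.seed(0)
x = stats.norm.rvs(size=500)      # stand-in for the data set at hand

# lambda that maximizes the probability plot correlation coefficient
lam = stats.ppcc_max(x, dist='tukeylambda')
print lam   # ~0.14 suggests an approximately normal distribution

stats.ppcc_plot(x, -1, 1, dist='tukeylambda') would draw the full PPCC curve over a range of lambda values, which is the plot the handbook describes.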
Josef too many brackets > > AISI, > Friedrich > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From ralf.gommers at googlemail.com Tue Nov 1 12:51:21 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Tue, 1 Nov 2011 17:51:21 +0100 Subject: [SciPy-User] Error in Numpy 1.6 timedelta() In-Reply-To: References: Message-ID: On Mon, Oct 31, 2011 at 11:24 PM, Charles R Harris < charlesr.harris at gmail.com> wrote: > > > On Mon, Oct 31, 2011 at 3:45 PM, Fernando Paolo wrote: > >> Hello, >> >> I receive the following error when trying: >> >> >>> import numpy as np >> >>> np.timedelta64(10, 's') >> >> TypeError: function takes at most 1 argument (2 given) >> >> I know this is probably fully implemented in Numpy 1.7, but what about >> 1.6.1? >> >> > Yep, it's working in current master. I'll have to check 1.6.1 later unless > someone who is running it can comment. > > Doesn't work in 1.6.1. Datetime is not in very good shape in that release, if you want to use it for real work it's best to run master. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From fspaolo at gmail.com Tue Nov 1 13:09:14 2011 From: fspaolo at gmail.com (Fernando Paolo) Date: Tue, 1 Nov 2011 10:09:14 -0700 Subject: [SciPy-User] Error in Numpy 1.6 timedelta() In-Reply-To: References: Message-ID: Okay thanks. Will use the python `datetime.timedelta` for now. -Fernando On Tue, Nov 1, 2011 at 9:51 AM, Ralf Gommers wrote: > > > On Mon, Oct 31, 2011 at 11:24 PM, Charles R Harris < > charlesr.harris at gmail.com> wrote: > >> >> >> On Mon, Oct 31, 2011 at 3:45 PM, Fernando Paolo wrote: >> >>> Hello, >>> >>> I receive the following error when trying: >>> >>> >>> import numpy as np >>> >>> np.timedelta64(10, 's') >>> >>> TypeError: function takes at most 1 argument (2 given) >>> >>> I know this is probably fully implemented in Numpy 1.7, but what about >>> 1.6.1? >>> >>> >> Yep, it's working in current master. I'll have to check 1.6.1 later >> unless someone who is running it can comment. >> >> Doesn't work in 1.6.1. Datetime is not in very good shape in that > release, if you want to use it for real work it's best to run master. > > Ralf > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- Fernando Paolo Institute of Geophysics & Planetary Physics Scripps Institution of Oceanography University of California, San Diego 9500 Gilman Drive La Jolla, CA 92093-0225 -------------- next part -------------- An HTML attachment was scrubbed... URL: From jkhilmer at chemistry.montana.edu Tue Nov 1 13:34:21 2011 From: jkhilmer at chemistry.montana.edu (jkhilmer at chemistry.montana.edu) Date: Tue, 1 Nov 2011 11:34:21 -0600 Subject: [SciPy-User] SciPy for Computational Geometry In-Reply-To: References: <4EAF0F99.7080407@gmail.com> Message-ID: Lorenzo, There is a very substantial body of research dedicated to your topic, since it is essential for real-time rendering of 3D environments: http://en.wikipedia.org/wiki/Binary_space_partitioning You can certainly adapt existing code for your need, although depending on your efficiency needs, it's probably easiest to just write something from scratch. 1. For each sphere, determine a plane perpendicular to the line-of-sight. 2. For each sphere, calculate a set of points on the radius. Granularity can be as fine as you need. 3. 
For each circumference point, find the intersection between the line (circumference to origin) and each plane.
4. Calculate whether the line/plane intersection point is within the radius for that circle (one circle per plane).
5. If the intersection is within the radius, that circumference point is occluded. Break out of plane calculations if you find an occlusion.
6. Break out of sphere point testing if you find a circumference point that is not occluded: that sphere is visible.

It's naive and not terribly efficient, but it is simple.

Jonathan

On Tue, Nov 1, 2011 at 9:12 AM, Anthony Palomba wrote:
> I am very interested in finding a good python package that does
> computational geometry.
>
> I was looking at CGAL for a while, but the Python bindings do
> not seem to work and examples are pretty limited.
>
> If you do find something that works, please be sure to inform
> us as well.
>
> Thanks,
> Anthony
>
> On Mon, Oct 31, 2011 at 4:58 PM, wrote:
>>
>> Maybe have a look at "microsphere interpolation":
>> http://www.dudziak.com/how_microsphere_projection_works.php
>> (Perhaps just looking at the diagram at the bottom of that page would
>> suffice for a start.)
>> This is not a Python implementation, but it might give you some ideas.
>>
>> --
>> Cameron Hayne
>> macdev at hayne.net
>>
>> On 31-Oct-11, at 5:14 PM, Lorenzo Isella wrote:
>> > This is admittedly a bit off topic, but I wonder if anybody on the
>> > list
>> > is familiar with this problem (which should belong to computational
>> > geometry) and is able to point me to an implementation (possibly
>> > relying
>> > on scipy).
>> > Imagine that you are sitting at the origin (0,0,0) of a 3D coordinate
>> > system and that you are looking at a set of (non-overlapping) spheres
>> > (all the spheres are identical and with radius R=1).
>> > You ask yourself how many spheres you can see overall.
>> > The result is in general a (positive) real number as one sphere may
>> > partially eclipse another sphere for an observer in the origin (e.g.
>> > if
>> > one sphere is located at (0,0,5) and the other (0,0.3,10)).
>> > Does anybody know an algorithm to calculate this quantity efficiently?
>> > I have in mind (for now at least) configurations of less than 100
>> > spheres, so hopefully this should not be too demanding.
>> > I had a look at
>> > http://www.qhull.org/
>> > but I am not 100% sure that this is the way to go.
>>
>> _______________________________________________
>> SciPy-User mailing list
>> SciPy-User at scipy.org
>> http://mail.scipy.org/mailman/listinfo/scipy-user
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

From johann.cohentanugi at gmail.com Tue Nov 1 14:10:54 2011
From: johann.cohentanugi at gmail.com (Johann Cohen-Tanugi)
Date: Tue, 01 Nov 2011 19:10:54 +0100
Subject: [SciPy-User] Lerch transcendent function
In-Reply-To: References: Message-ID: <4EB0362E.5010708@gmail.com>

Hi Josef, I was planning to work on something related (polylog), and I started some groundwork:
https://github.com/johannct/scipy/tree/polylog
But it recently stalled, due to overburden. Also I would like to start with a complete implementation of the zeta function (analytic continuation) as in mpmath.....
best,
Johann

On 11/01/2011 04:28 PM, josef.pktd at gmail.com wrote:
> Is the Lerch transcendent function available in scipy or python?
>
> http://en.wikipedia.org/wiki/Lerch_zeta_function#Definition capital phi
> http://mathworld.wolfram.com/LerchTranscendent.html
>
> (used for moments of Weibull-geometric distribution)
>
> Josef
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

From david_baddeley at yahoo.com.au Tue Nov 1 17:04:27 2011
From: david_baddeley at yahoo.com.au (David Baddeley)
Date: Tue, 1 Nov 2011 14:04:27 -0700 (PDT)
Subject: [SciPy-User] SciPy for Computational Geometry
In-Reply-To: <4EAF0F99.7080407@gmail.com>
References: <4EAF0F99.7080407@gmail.com>
Message-ID: <1320181467.53787.YahooMailNeo@web113415.mail.gq1.yahoo.com>

How about using OpenGL to render the problem, giving each sphere a different colour. You can then grab the rendered image (or potentially images from different camera angles) and count the number of discrete colours you have. It's probably not the most elegant of solutions, and would only be approximate (due to the pixelation), but python has good OpenGL bindings and it ought to be fast to compute.

It would probably also be pretty easy to code something up from scratch which did this, along these lines:
- generate a 2D array which is going to contain the index of the nearest sphere for any given angle (theta, phi) (initialised to zero)
- create a similar array which holds the distance to the nearest sphere for each of the pixels above (analogous to a zBuffer in normal 3D rendering) and initialise this to a very large number
- iterate over your spheres and draw a circle in your index and z buffer arrays where you change the index and z value if (and only if) the new z value is going to be smaller than the one currently in the z buffer (mapping the sphere to a circle in theta, phi space should be pretty simple).

Again this is only going to be approximate (due to the pixelation).

cheers,
David

________________________________
From: Lorenzo Isella
To: scipy-user at scipy.org
Sent: Tuesday, 1 November 2011 10:14 AM
Subject: [SciPy-User] SciPy for Computational Geometry

Dear All,
This is admittedly a bit off topic, but I wonder if anybody on the list is familiar with this problem (which should belong to computational geometry) and is able to point me to an implementation (possibly relying on scipy).
Imagine that you are sitting at the origin (0,0,0) of a 3D coordinate system and that you are looking at a set of (non-overlapping) spheres (all the spheres are identical and with radius R=1).
You ask yourself how many spheres you can see overall.
The result is in general a (positive) real number as one sphere may partially eclipse another sphere for an observer in the origin (e.g. if one sphere is located at (0,0,5) and the other (0,0.3,10)).
Does anybody know an algorithm to calculate this quantity efficiently?
I have in mind (for now at least) configurations of less than 100 spheres, so hopefully this should not be too demanding.
I had a look at
http://www.qhull.org/
but I am not 100% sure that this is the way to go.
Any suggestion is appreciated.
Many thanks

Lorenzo
_______________________________________________
SciPy-User mailing list
SciPy-User at scipy.org
http://mail.scipy.org/mailman/listinfo/scipy-user
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From david_baddeley at yahoo.com.au Tue Nov 1 17:19:08 2011
From: david_baddeley at yahoo.com.au (David Baddeley)
Date: Tue, 1 Nov 2011 14:19:08 -0700 (PDT)
Subject: [SciPy-User] SciPy for Computational Geometry
In-Reply-To: <1320181467.53787.YahooMailNeo@web113415.mail.gq1.yahoo.com>
References: <4EAF0F99.7080407@gmail.com> <1320181467.53787.YahooMailNeo@web113415.mail.gq1.yahoo.com>
Message-ID: <1320182348.94664.YahooMailNeo@web113407.mail.gq1.yahoo.com>

Actually I've just thought of how to find a more exact solution which avoids pixelation (using shapely [http://pypi.python.org/pypi/Shapely] which is a decent package for 2D computational geometry in python):
- map all your spheres to circles on theta, phi with an associated distance
- sort by distance (closest first)
- take the closest circle as a mask
- iterate over the other circles doing the following:
- check if they are contained in the mask (in which case they won't appear in the output and can be discarded)
- if not contained in mask, increment the number of visible spheres and update the mask to be the union of the previous mask and the current circle
(a rough sketch of this appears below)

________________________________
From: David Baddeley
To: SciPy Users List
Sent: Wednesday, 2 November 2011 10:04 AM
Subject: Re: [SciPy-User] SciPy for Computational Geometry

How about using OpenGL to render the problem, giving each sphere a different colour. You can then grab the rendered image (or potentially images from different camera angles) and count the number of discrete colours you have. It's probably not the most elegant of solutions, and would only be approximate (due to the pixelation), but python has good OpenGL bindings and it ought to be fast to compute.

It would probably also be pretty easy to code something up from scratch which did this, along these lines:
- generate a 2D array which is going to contain the index of the nearest sphere for any given angle (theta, phi) (initialised to zero)
- create a similar array which holds the distance to the nearest sphere for each of the pixels above (analogous to a zBuffer in normal 3D rendering) and initialise this to a very large number
- iterate over your spheres and draw a circle in your index and z buffer arrays where you change the index and z value if (and only if) the new z value is going to be smaller than the one currently in the z buffer (mapping the sphere to a circle in theta, phi space should be pretty simple).

Again this is only going to be approximate (due to the pixelation).

cheers,
David

________________________________
From: Lorenzo Isella
To: scipy-user at scipy.org
Sent: Tuesday, 1 November 2011 10:14 AM
Subject: [SciPy-User] SciPy for Computational Geometry

Dear All,
This is admittedly a bit off topic, but I wonder if anybody on the list is familiar with this problem (which should belong to computational geometry) and is able to point me to an implementation (possibly relying on scipy).
Imagine that you are sitting at the origin (0,0,0) of a 3D coordinate system and that you are looking at a set of (non-overlapping) spheres (all the spheres are identical and with radius R=1).
You ask yourself how many spheres you can see overall.
The result is in general a (positive) real number as one sphere may partially eclipse another sphere for an observer in the origin (e.g. if one sphere is located at (0,0,5) and the other (0,0.3,10)).
Does anybody know an algorithm to calculate this quantity efficiently?
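A rough sketch of the mask-union idea above using shapely. The projection of each sphere to a disc in (theta, phi) is idealized here (it distorts near the poles and ignores azimuth wrap-around), and the commented area ratio shows how the fractional count could be accumulated instead of a simple increment:

import math
from shapely.geometry import Point

spheres = [(0.0, 0.0, 5.0), (0.0, 0.3, 10.0)]   # unit-radius sphere centres

discs = []
for cx, cy, cz in spheres:
    d = math.sqrt(cx * cx + cy * cy + cz * cz)
    theta = math.atan2(cy, cx)        # azimuth of the centre direction
    phi = math.acos(cz / d)           # polar angle of the centre direction
    ang = math.asin(1.0 / d)          # angular radius of the projected disc
    discs.append((d, Point(theta, phi).buffer(ang)))

discs.sort(key=lambda t: t[0])        # closest sphere first
mask = None
visible = 0
for d, disc in discs:
    if mask is None:
        visible += 1
        mask = disc
    elif not mask.contains(disc):
        visible += 1                  # at least partly visible
        # fractional version: visible += disc.difference(mask).area / disc.area
        mask = mask.union(disc)
print visible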
I have in mind (for now at least) configurations of less than 100 spheres, so hopefully this should not be too demanding.
I had a look at
http://www.qhull.org/
but I am not 100% sure that this is the way to go.
Any suggestion is appreciated.
Many thanks

Lorenzo
_______________________________________________
SciPy-User mailing list
SciPy-User at scipy.org
http://mail.scipy.org/mailman/listinfo/scipy-user
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From evert at camarchitects.co.uk Wed Nov 2 06:04:22 2011
From: evert at camarchitects.co.uk (Evert Amador)
Date: Wed, 2 Nov 2011 10:04:22 +0000
Subject: [SciPy-User] Installing numpy on ironpython 2.7
Message-ID: <4FE177F0-6D6B-4F83-9665-3C53E37C91C0@camarchitects.co.uk>

Hello,

I'm following the instructions on this page http://www.enthought.com/repo/.iron/ to install SciPy on IronPython 2.7.

I'm stuck on step 3.) ironpkg

I downloaded the ironpkg-1.0.0.py (via the "save target file as" option) to the same IronPython directory, but when I type >ipy ironpkg-1.0.0.py --install the command prompt responds that the file doesn't exist.

Could you please explain to me how I can get the ironpkg command available to continue the installation?

Many thanks for your help.

regards

Evert Amador
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mark.pundurs at navteq.com Wed Nov 2 11:38:51 2011
From: mark.pundurs at navteq.com (Pundurs, Mark)
Date: Wed, 2 Nov 2011 10:38:51 -0500
Subject: [SciPy-User] ImportError: *.so: cannot open shared object file: No such file or directory
Message-ID: <8A18D8FA4293104C9A710494FD6C273CB6356692@hq-ex-mb03.ad.navteq.com>

I want to use the function stats.norm.isf, but no matter how I try to import it I end up with the error "ImportError: .so: cannot open shared object file: No such file or directory". The .so files cited do exist in /usr/lib (as symbolic links to other .so files that also exist in that directory). From what I've read, that's where they're supposed to be - but I think the Python installation is in a nonstandard location. Is that the problem? How can I work around it?

Is there some way I can stitch together pieces of Scipy code in my script to directly access stats.norm.isf functionality and bypass these problems?
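A self-contained pure-Python sketch is one possible stopgap here. This is not scipy code: it uses the Abramowitz & Stegun 26.2.17 polynomial approximation of the normal CDF (accurate to roughly 1e-7), inverted by bisection:

import math

def _norm_sf(x):
    # survival function 1 - Phi(x) via Abramowitz & Stegun 26.2.17
    if x < 0:
        return 1.0 - _norm_sf(-x)
    k = 1.0 / (1.0 + 0.2316419 * x)
    poly = k * (0.319381530 + k * (-0.356563782 + k * (1.781477937
               + k * (-1.821255978 + k * 1.330274429))))
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi) * poly

def norm_isf(q):
    # the survival function is decreasing, so bisect for _norm_sf(x) == q
    lo, hi = -40.0, 40.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if _norm_sf(mid) > q:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print norm_isf(0.05)   # ~1.6449, matching stats.norm.isf(0.05)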
I'm running Scipy 0.8.0/Python 2.6.3 under Red Hat Enterprise Linux ES release 4 (Nahant Update 5) on a Dell PowerEdge 1950. Traceback (most recent call last): File "/disk1/hadoop/tmp/taskTracker/jobcache/job_201109081055_6643/attempt_201109081055_6643_r_000000_0/work/./ExpandAndAverageFinal_ReducerB_pc.py", line 13, in from scipy.stats import norm File "/tools/python/2.6.3_3/linux_x86_64/lib/python2.6/site-packages/scipy/stats/__init__.py", line 7, in from stats import * File "/tools/python/2.6.3_3/linux_x86_64/lib/python2.6/site-packages/scipy/stats/stats.py", line 203, in import scipy.linalg as linalg File "/tools/python/2.6.3_3/linux_x86_64/lib/python2.6/site-packages/scipy/linalg/__init__.py", line 9, in from basic import * File "/tools/python/2.6.3_3/linux_x86_64/lib/python2.6/site-packages/scipy/linalg/basic.py", line 16, in from lapack import get_lapack_funcs File "/tools/python/2.6.3_3/linux_x86_64/lib/python2.6/site-packages/scipy/linalg/lapack.py", line 14, in from scipy.linalg import flapack ImportError: liblapack.so: cannot open shared object file: No such file or directory Traceback (most recent call last): File "/disk1/hadoop/tmp/taskTracker/jobcache/job_201109081055_6658/attempt_201109081055_6658_r_000000_0/work/./ExpandAndAverageFinal_ReducerB_pc.py", line 13, in from scipy.stats.norm import isf File "/tools/python/2.6.3_3/linux_x86_64/lib/python2.6/site-packages/scipy/stats/__init__.py", line 7, in from stats import * File "/tools/python/2.6.3_3/linux_x86_64/lib/python2.6/site-packages/scipy/stats/stats.py", line 202, in import scipy.special as special File "/tools/python/2.6.3_3/linux_x86_64/lib/python2.6/site-packages/scipy/special/__init__.py", line 8, in from basic import * File "/tools/python/2.6.3_3/linux_x86_64/lib/python2.6/site-packages/scipy/special/basic.py", line 6, in from _cephes import * ImportError: libgfortran.so.1: cannot open shared object file: No such file or directory Thanks, Mark Pundurs Data Analyst - Traffic Nokia Location & Commerce, Chicago The information contained in this communication may be CONFIDENTIAL and is intended only for the use of the recipient(s) named above. If you are not the intended recipient, you are hereby notified that any dissemination, distribution, or copying of this communication, or any of its contents, is strictly prohibited. If you have received this communication in error, please notify the sender and delete/destroy the original message and any copy of it from your computer or paper files. From josef.pktd at gmail.com Wed Nov 2 11:46:40 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 2 Nov 2011 11:46:40 -0400 Subject: [SciPy-User] ImportError: *.so: cannot open shared object file: No such file or directory In-Reply-To: <8A18D8FA4293104C9A710494FD6C273CB6356692@hq-ex-mb03.ad.navteq.com> References: <8A18D8FA4293104C9A710494FD6C273CB6356692@hq-ex-mb03.ad.navteq.com> Message-ID: On Wed, Nov 2, 2011 at 11:38 AM, Pundurs, Mark wrote: > I want to use the function stats.norm.isf, but no matter how I try to import it I end up with the error "ImportError: .so: cannot open shared object file: No such file or directory". The .so files cited do exist in /usr/lib (as symbolic links to other .so files that also exist in that directory). From what I've read, that's where they're supposed to be - but I think the Python installation is in a nonstandard location. Is that the problem? How can I work around it? 
>
> Is there some way I can stitch together pieces of Scipy code in my script to directly access stats.norm.isf functionality and bypass these problems?

No it won't be possible if you cannot run (Fortran) extensions.
norm.isf uses scipy.special, which uses _cephes.

Josef

>
> I'm running Scipy 0.8.0/Python 2.6.3 under Red Hat Enterprise Linux ES release 4 (Nahant Update 5) on a Dell PowerEdge 1950.
>
> Traceback (most recent call last):
>   File "/disk1/hadoop/tmp/taskTracker/jobcache/job_201109081055_6643/attempt_201109081055_6643_r_000000_0/work/./ExpandAndAverageFinal_ReducerB_pc.py", line 13, in
>     from scipy.stats import norm
>   File "/tools/python/2.6.3_3/linux_x86_64/lib/python2.6/site-packages/scipy/stats/__init__.py", line 7, in
>     from stats import *
>   File "/tools/python/2.6.3_3/linux_x86_64/lib/python2.6/site-packages/scipy/stats/stats.py", line 203, in
>     import scipy.linalg as linalg
>   File "/tools/python/2.6.3_3/linux_x86_64/lib/python2.6/site-packages/scipy/linalg/__init__.py", line 9, in
>     from basic import *
>   File "/tools/python/2.6.3_3/linux_x86_64/lib/python2.6/site-packages/scipy/linalg/basic.py", line 16, in
>     from lapack import get_lapack_funcs
>   File "/tools/python/2.6.3_3/linux_x86_64/lib/python2.6/site-packages/scipy/linalg/lapack.py", line 14, in
>     from scipy.linalg import flapack
> ImportError: liblapack.so: cannot open shared object file: No such file or directory
>
> Traceback (most recent call last):
>   File "/disk1/hadoop/tmp/taskTracker/jobcache/job_201109081055_6658/attempt_201109081055_6658_r_000000_0/work/./ExpandAndAverageFinal_ReducerB_pc.py", line 13, in
>     from scipy.stats.norm import isf
>   File "/tools/python/2.6.3_3/linux_x86_64/lib/python2.6/site-packages/scipy/stats/__init__.py", line 7, in
>     from stats import *
>   File "/tools/python/2.6.3_3/linux_x86_64/lib/python2.6/site-packages/scipy/stats/stats.py", line 202, in
>     import scipy.special as special
>   File "/tools/python/2.6.3_3/linux_x86_64/lib/python2.6/site-packages/scipy/special/__init__.py", line 8, in
>     from basic import *
>   File "/tools/python/2.6.3_3/linux_x86_64/lib/python2.6/site-packages/scipy/special/basic.py", line 6, in
>     from _cephes import *
> ImportError: libgfortran.so.1: cannot open shared object file: No such file or directory
>
> Thanks,
> Mark Pundurs
> Data Analyst - Traffic
> Nokia Location & Commerce, Chicago
>
> The information contained in this communication may be CONFIDENTIAL and is intended only for the use of the recipient(s) named above. If you are not the intended recipient, you are hereby notified that any dissemination, distribution, or copying of this communication, or any of its contents, is strictly prohibited. If you have received this communication in error, please notify the sender and delete/destroy the original message and any copy of it from your computer or paper files.
> _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From cournape at gmail.com Wed Nov 2 11:51:56 2011 From: cournape at gmail.com (David Cournapeau) Date: Wed, 2 Nov 2011 15:51:56 +0000 Subject: [SciPy-User] ImportError: *.so: cannot open shared object file: No such file or directory In-Reply-To: <8A18D8FA4293104C9A710494FD6C273CB6356692@hq-ex-mb03.ad.navteq.com> References: <8A18D8FA4293104C9A710494FD6C273CB6356692@hq-ex-mb03.ad.navteq.com> Message-ID: Hi Mark, On Wed, Nov 2, 2011 at 3:38 PM, Pundurs, Mark wrote: > I want to use the function stats.norm.isf, but no matter how I try to import it I end up with the error "ImportError: .so: cannot open shared object file: No such file or directory". The .so files cited do exist in /usr/lib (as symbolic links to other .so files that also exist in that directory). From what I've read, that's where they're supposed to be - but I think the Python installation is in a nonstandard location. Is that the problem? How can I work around it? I believe RHEL 4 uses g77 as its default fortran compiler, so you have a custom gfortran build somewhere, am I right ? If so, you need to add the paths where libgfortran.so and liblapack.so are to the environment variable LD_LIBRARY_PATH. Given that scipy has been built (by someone else for you ?), you may want to ask them about it for the exact locations of those libraries. cheers, David From mark.pundurs at navteq.com Wed Nov 2 13:58:45 2011 From: mark.pundurs at navteq.com (Pundurs, Mark) Date: Wed, 2 Nov 2011 12:58:45 -0500 Subject: [SciPy-User] ImportError: *.so: cannot open shared object file: No such file or directory In-Reply-To: References: Message-ID: <8A18D8FA4293104C9A710494FD6C273CB63568C1@hq-ex-mb03.ad.navteq.com> Thanks, David! How do I (a Linux newbie) add paths to environment variable LD_LIBRARY_PATH? ------------------------------ Date: Wed, 2 Nov 2011 15:51:56 +0000 From: David Cournapeau Hi Mark, On Wed, Nov 2, 2011 at 3:38 PM, Pundurs, Mark wrote: > I want to use the function stats.norm.isf, but no matter how I try to import it I end up with the error "ImportError: .so: cannot open shared object file: No such file or directory". The .so files cited do exist in /usr/lib (as symbolic links to other .so files that also exist in that directory). From what I've read, that's where they're supposed to be - but I think the Python installation is in a nonstandard location. Is that the problem? How can I work around it? I believe RHEL 4 uses g77 as its default fortran compiler, so you have a custom gfortran build somewhere, am I right ? If so, you need to add the paths where libgfortran.so and liblapack.so are to the environment variable LD_LIBRARY_PATH. Given that scipy has been built (by someone else for you ?), you may want to ask them about it for the exact locations of those libraries. cheers, David The information contained in this communication may be CONFIDENTIAL and is intended only for the use of the recipient(s) named above. If you are not the intended recipient, you are hereby notified that any dissemination, distribution, or copying of this communication, or any of its contents, is strictly prohibited. If you have received this communication in error, please notify the sender and delete/destroy the original message and any copy of it from your computer or paper files. 
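A quick way to see what the runtime linker can and cannot resolve, with the export line David describes shown in a comment. The directories are hypothetical placeholders; the real paths depend on where gfortran and LAPACK were installed on your system:

import ctypes

# probe the libraries the failing scipy extensions depend on
for lib in ('liblapack.so', 'libgfortran.so.1'):
    try:
        ctypes.CDLL(lib)
        print lib, 'found'
    except OSError, err:
        print lib, 'NOT found:', err

# If a library is not found, the directory containing it has to be added to
# LD_LIBRARY_PATH *before* Python starts, e.g. in a bash shell:
#   export LD_LIBRARY_PATH=/opt/gfortran/lib64:/opt/lapack/lib:$LD_LIBRARY_PATH
# (csh/tcsh: setenv LD_LIBRARY_PATH /opt/gfortran/lib64:/opt/lapack/lib)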
From mozhang84 at gmail.com Wed Nov 2 18:21:44 2011 From: mozhang84 at gmail.com (Mo Zhang) Date: Wed, 2 Nov 2011 18:21:44 -0400 Subject: [SciPy-User] SciPy version 0.9.0 In-Reply-To: References: Message-ID: Hello, I installed scipy version 0.9.0 with Intel compiler, however, when I import a module in scipy, I get the following error: File "/opt/../scipy/0.9.0/lib/python2.7/site-packages/scipy/stats/__init__.py", line 7, in from stats import * File "/opt/../scipy/0.9.0/lib/python2.7/site-packages/scipy/stats/stats.py", line 193, in import scipy.special as special File "/opt/../scipy/0.9.0/lib/python2.7/site-packages/scipy/special/__init__.py", line 9, in from _cephes import * ImportError: /opt/../scipy/0.9.0/lib/python2.7/site-packages/scipy/special/_cephes.so: undefined symbol: s_stop Please let me know how I can fix the problem. Thanks Mo Zhang -------------- next part -------------- An HTML attachment was scrubbed... URL: From questions.anon at gmail.com Wed Nov 2 22:25:37 2011 From: questions.anon at gmail.com (questions anon) Date: Thu, 3 Nov 2011 13:25:37 +1100 Subject: [SciPy-User] mask array by shapefile Message-ID: Hi All, Is there a way to select only the values within a particular shapefile to analyse. I would like to do something like: array=numpyarraycoveringtemperatureofwholestate shapefile=forestedregions.shp newarray=ma.masked_values(array, shapefile) meantemperatureofforestedregions=MA.mean(newarray) print meantemperatureofforestedregions Any ideas of functions I could use, examples I could follow? thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre.raybaut at gmail.com Thu Nov 3 05:36:00 2011 From: pierre.raybaut at gmail.com (Pierre Raybaut) Date: Thu, 3 Nov 2011 10:36:00 +0100 Subject: [SciPy-User] ANN: Spyder v2.1 Message-ID: Hi all, On the behalf of Spyder's development team (http://code.google.com/p/spyderlib/people/list), I'm pleased to announce that Spyder v2.1 has been released and is available for Windows XP/Vista/7, GNU/Linux and MacOS X: http://code.google.com/p/spyderlib/ Spyder is a free, open-source (MIT license) interactive development environment for the Python language with advanced editing, interactive testing, debugging and introspection features. Originally designed to provide MATLAB-like features (integrated help, interactive console, variable explorer with GUI-based editors for dictionaries, NumPy arrays, ...), it is strongly oriented towards scientific computing and software development. Thanks to the `spyderlib` library, Spyder also provides powerful ready-to-use widgets: embedded Python console (example: http://packages.python.org/guiqwt/_images/sift3.png), NumPy array editor (example: http://packages.python.org/guiqwt/_images/sift2.png), dictionary editor, source code editor, etc. Description of key features with tasty screenshots can be found at: http://code.google.com/p/spyderlib/wiki/Features This release represents a year of development since v2.0 and introduces major enhancements and new features: * Large performance and stability improvements * PySide support (PyQt is no longer exclusively required) * New profiler plugin (thanks to Santiago Jaramillo, a new contributor) * Experimental support for IPython v0.11+ * And many other changes: http://code.google.com/p/spyderlib/wiki/ChangeLog On Windows platforms, Spyder is also available as a stand-alone executable (don't forget to disable UAC on Vista/7). 
This all-in-one portable version is still experimental (for example, it does not embed sphinx -- meaning no rich text mode for the object inspector) but it should provide a working version of Spyder for Windows platforms without having to install anything else (except Python 2.x itself, of course). Don't forget to follow Spyder updates/news: * on the project website: http://code.google.com/p/spyderlib/ * and on our official blog: http://spyder-ide.blogspot.com/ Last, but not least, we welcome any contribution that helps making Spyder an efficient scientific development/computing environment. Join us to help creating your favourite environment! (http://code.google.com/p/spyderlib/wiki/NoteForContributors) Enjoy! -Pierre From akshar.bhosale at gmail.com Wed Nov 2 12:20:18 2011 From: akshar.bhosale at gmail.com (akshar bhosale) Date: Wed, 2 Nov 2011 21:50:18 +0530 Subject: [SciPy-User] numpy error with mkl 10.1 Message-ID: Hi, i am getting following error. python -c 'import numpy;numpy.matrix([[1, 5, 10], [1.0, 3j, 4]], numpy.complex128).T.I.H' MKL FATAL ERROR: Cannot load libmkl_lapack.so have installed numpy 1.6.0 with python 2.6. i have intel cluster toolkit installed on my system. (11/069 version and mlk=10.1). i have machine having intel xeon processor and rhel 5.2 x86_64 platform. Kindly help -------------- next part -------------- An HTML attachment was scrubbed... URL: From akshar.bhosale at gmail.com Wed Nov 2 13:52:26 2011 From: akshar.bhosale at gmail.com (akshar bhosale) Date: Wed, 2 Nov 2011 23:22:26 +0530 Subject: [SciPy-User] [Numpy-discussion] numpy error with mkl 10.1 In-Reply-To: References: Message-ID: Hi, ldd _dotblas.so libstdc++.so.6 => /usr/lib64/libstdc++.so.6 (0x00002b12f0692000) libmkl_def.so => /opt/intel/Compiler/11.0/069/mkl/lib/em64t/libmkl_def.so (0x00002b12f099c000) libmkl_intel_lp64.so => /opt/intel/Compiler/11.0/069/mkl/lib/em64t/libmkl_intel_lp64.so (0x00002b12f14f1000) libmkl_intel_thread.so => /opt/intel/Compiler/11.0/069/mkl/lib/em64t/libmkl_intel_thread.so (0x00002b12f184c000) libmkl_core.so => /opt/intel/Compiler/11.0/069/mkl/lib/em64t/libmkl_core.so (0x00002b12f2575000) libmkl_mc.so => /opt/intel/Compiler/11.0/069/mkl/lib/em64t/libmkl_mc.so (0x00002b12f2769000) libpthread.so.0 => /lib64/libpthread.so.0 (0x00002b12f34bf000) libimf.so => /opt/intel/Compiler/11.0/069/lib/intel64/libimf.so (0x00002b12f36db000) libsvml.so => /opt/intel/Compiler/11.0/069/lib/intel64/libsvml.so (0x00002b12f3a32000) libm.so.6 => /lib64/libm.so.6 (0x00002b12f3bef000) libiomp5.so => /opt/intel/Compiler/11.0/069/lib/intel64/libiomp5.so (0x00002b12f3e74000) libintlc.so.5 => /opt/intel/Compiler/11.0/069/lib/intel64/libintlc.so.5 (0x00002b12f4005000) libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00002b12f4142000) libc.so.6 => /lib64/libc.so.6 (0x00002b12f4350000) libdl.so.2 => /lib64/libdl.so.2 (0x00002b12f46c7000) /lib64/ld-linux-x86-64.so.2 (0x0000003fb8000000) On Wed, Nov 2, 2011 at 10:14 PM, Olivier Delalleau wrote: > Ok, can you print the output of ldd numpy/core/_dotblas.so? > > > -=- Olivier > > 2011/11/2 akshar bhosale > >> HI, >> It is already added in the LD_LIBRARY_PATH, thenalso it is generating the >> same error. >> >> >> On Wed, Nov 2, 2011 at 10:01 PM, Olivier Delalleau wrote: >> >>> Locate your libmkl_lapack.so and try to add the directory that contains >>> it to your LD_LIBRARY_PATH environment variable. >>> >>> -=- Olivier >>> >>> 2011/11/2 akshar bhosale >>> >>>> Hi, >>>> >>>> i am getting following error. 
>>>> python -c 'import numpy;numpy.matrix([[1, 5, 10], [1.0, 3j, 4]],
>>>> numpy.complex128).T.I.H'
>>>> MKL FATAL ERROR: Cannot load libmkl_lapack.so
>>>>
>>>> have installed numpy 1.6.0 with python 2.6.
>>>> i have intel cluster toolkit installed on my system. (11/069 version
>>>> and mlk=10.1). i have machine having intel xeon processor and rhel 5.2
>>>> x86_64
>>>> platform.
>>>> Kindly help
>>>>
>>>> _______________________________________________
>>>> NumPy-Discussion mailing list
>>>> NumPy-Discussion at scipy.org
>>>> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>>>>
>>>
>>> _______________________________________________
>>> NumPy-Discussion mailing list
>>> NumPy-Discussion at scipy.org
>>> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>>>
>>
>> _______________________________________________
>> NumPy-Discussion mailing list
>> NumPy-Discussion at scipy.org
>> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>>
>
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From akshar.bhosale at gmail.com Thu Nov 3 05:27:13 2011
From: akshar.bhosale at gmail.com (akshar bhosale)
Date: Thu, 3 Nov 2011 14:57:13 +0530
Subject: [SciPy-User] numpy with nose
Message-ID:

Hi,
i am using mkl 10.1, intel cluster toolkit 11/069, os rhel 5.2 x86_64, python 2.6, processor is intel xeon
numpy version is 1.6.0
my numpy.test hangs at the point below:

Test whether equivalent subarray dtypes hash the same. ... ok
Test whether different subarray dtypes hash differently. ... ok
Test some data types that are equal ... ok
Test some more complicated cases that shouldn't be equal ... ok
Test some simple cases that shouldn't be equal ... ok
test_single_subarray (test_dtype.TestSubarray) ... ok
test_einsum_errors (test_einsum.TestEinSum) ... ok
test_einsum_sums_cfloat128 (test_einsum.TestEinSum) ...

any pointers for this?
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nouiz at nouiz.org Thu Nov 3 10:41:35 2011
From: nouiz at nouiz.org (=?ISO-8859-1?Q?Fr=E9d=E9ric_Bastien?=)
Date: Thu, 3 Nov 2011 10:41:35 -0400
Subject: [SciPy-User] Derivative in scipy?
In-Reply-To: <4EAD71CE.8090506@gmail.com>
References: <4EAD71CE.8090506@gmail.com> <4EAEF6F1.30905@gmail.com>
Message-ID:

Hi,

There is the Theano software in the python scientific tools that can do symbolic differentiation. With it, you can compute the gradient, the Hessian and/or the Jacobian.
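For concreteness, a minimal sketch of taking a symbolic gradient with Theano (the function here is made up for illustration):

import theano
import theano.tensor as T

x = T.dvector('x')
y = T.sum(x ** 2 + T.sin(x))
g = T.grad(y, x)                 # symbolic gradient: 2*x + cos(x)
f = theano.function([x], g)      # compile it into a callable
print f([0.0, 1.0, 2.0])         # -> approximately [1.0, 2.540, 3.584]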
You can also compute Jacobian-times-vector products efficiently, without explicitly computing the Jacobian, using the R operation or L operation.

http://deeplearning.net/software/theano/tutorial/gradients.html

HTH

Fred

2011/10/31 François Boulogne :
>
> Le 30/10/2011 18:55, Warren Weckesser a écrit :
>>
>> Having said that, I think a module specifically for computing
> derivatives (with good docs and tests), as being discussed in the ticket
> #1510 (http://projects.scipy.org/scipy/ticket/1510) would be a nice
> addition.
>
> Thanks to all of you for your responses. I'll follow the discussion on
> the bug tracker.
>
> Regards,
>
> --
> François Boulogne.
>
> Membre de l'April - Promouvoir et défendre le logiciel libre
> http://www.april.org
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

From ralf.gommers at googlemail.com Thu Nov 3 15:39:21 2011
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Thu, 3 Nov 2011 20:39:21 +0100
Subject: [SciPy-User] Installing numpy on ironpython 2.7
In-Reply-To: <4FE177F0-6D6B-4F83-9665-3C53E37C91C0@camarchitects.co.uk>
References: <4FE177F0-6D6B-4F83-9665-3C53E37C91C0@camarchitects.co.uk>
Message-ID:

On Wed, Nov 2, 2011 at 11:04 AM, Evert Amador wrote:
> Hello,
>
> I'm following the instructions on this page
> http://www.enthought.com/repo/.iron/ to install SciPy on IronPython 2.7.
>
> I'm stuck on step 3.) ironpkg
>
> I downloaded the ironpkg-1.0.0.py (via the "save target file as" option) to the
> same IronPython directory, but when I type >ipy ironpkg-1.0.0.py --install the command prompt responds that the file doesn't exist.
>
> Could you please explain to me how I can get the ironpkg command available
> to continue the installation?
>
You should try asking this on enthought-dev at mail.enthought.com, or on an IronPython list.

Cheers,
Ralf

> Many thanks for your help.
>
> regards
>
> Evert Amador
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From fccoelho at gmail.com Fri Nov 4 08:35:01 2011
From: fccoelho at gmail.com (Flavio Coelho)
Date: Fri, 4 Nov 2011 10:35:01 -0200
Subject: [SciPy-User] possible bug with scipy.integrate.ode
Message-ID:

Hi,

I am a long time user of scipy.integrate.odeint for solving ODEs.
Today I decided to test the "other solver" in scipy: scipy.integrate.ode. I am getting strange results for the model below:

def fun(y,t):
    """
    Logistic model
    """
    a = .5
    k = 1000.0
    return a*(1-y/k)*y

r = ode(fun).set_integrator('vode',method='bdf', with_jacobian=False)
r.set_initial_value(1e-6,0)
res = np.zeros(10000)
i = 0
while r.successful() and r.t < 100:
    r.integrate(r.t+.01)
    res[i] = r.y
    i += 1

odeint solves this correctly and returns the characteristic logistic curve which maxes out at 1000. ode, however, keeps growing beyond 1000.

I may be doing something stupid, since I am not familiar with the usage of ode. Or there may be a bug in ode.

I'll just stay away from ode for now, but I thought it might be a good idea to report this.

--
Flávio Codeço Coelho
================
+55(21) 3799-5567
Professor
Escola de Matemática Aplicada
Fundação Getúlio Vargas
Rio de Janeiro - RJ
Brasil
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From warren.weckesser at enthought.com Fri Nov 4 09:01:07 2011
From: warren.weckesser at enthought.com (Warren Weckesser)
Date: Fri, 4 Nov 2011 08:01:07 -0500
Subject: [SciPy-User] possible bug with scipy.integrate.ode
In-Reply-To: References: Message-ID:

On Fri, Nov 4, 2011 at 7:35 AM, Flavio Coelho wrote:

> Hi,
>
> I am a long time user of scipy.integrate.odeint for solving ODEs. Today I
> decided to test the "other solver" in scipy: scipy.integrate.ode. I am
> getting strange results for the model below:
>
> def fun(y,t):
>     """
>     Logistic model
>     """
>     a = .5
>     k = 1000.0
>     return a*(1-y/k)*y
>
> r = ode(fun).set_integrator('vode',method='bdf', with_jacobian=False)
> r.set_initial_value(1e-6,0)
> res = np.zeros(10000)
> i = 0
> while r.successful() and r.t < 100:
>     r.integrate(r.t+.01)
>     res[i] = r.y
>     i += 1
>
> odeint solves this correctly and returns the characteristic logistic curve
> which maxes out at 1000. ode, however, keeps growing beyond 1000.
>
> I may be doing something stupid, since I am not familiar with the usage of
> ode. Or there may be a bug in ode.
>

Flávio,

It is unfortunate, but odeint and ode use different conventions for the order of the arguments of the function that defines the system of differential equations. If you change the signature of your definition of 'fun' to 'def fun(t, y):', your example works fine.

Warren

> I'll just stay away from ode for now, but I thought it might be a good
> idea to report this.
>
> --
> Flávio Codeço Coelho
> ================
> +55(21) 3799-5567
> Professor
> Escola de Matemática Aplicada
> Fundação Getúlio Vargas
> Rio de Janeiro - RJ
> Brasil
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From fccoelho at gmail.com Fri Nov 4 09:30:49 2011
From: fccoelho at gmail.com (Flavio Coelho)
Date: Fri, 4 Nov 2011 11:30:49 -0200
Subject: [SciPy-User] possible bug with scipy.integrate.ode
In-Reply-To: References: Message-ID:

The sad thing is... I knew about the order of arguments :-( but moving from one to the other I forgot to swap the arguments...

Thanks, I am very relieved to hear that there is no such bug!

Flávio

On Fri, Nov 4, 2011 at 11:01, Warren Weckesser < warren.weckesser at enthought.com> wrote:

>
> On Fri, Nov 4, 2011 at 7:35 AM, Flavio Coelho wrote:
>
>> Hi,
>>
>> I am a long time user of scipy.integrate.odeint for solving ODEs.
Today I
>> decided to test the "other solver" in scipy: scipy.integrate.ode. I am
>> getting strange results for the model below:
>>
>> def fun(y,t):
>>     """
>>     Logistic model
>>     """
>>     a = .5
>>     k = 1000.0
>>     return a*(1-y/k)*y
>>
>> r = ode(fun).set_integrator('vode',method='bdf', with_jacobian=False)
>> r.set_initial_value(1e-6,0)
>> res = np.zeros(10000)
>> i = 0
>> while r.successful() and r.t < 100:
>>     r.integrate(r.t+.01)
>>     res[i] = r.y
>>     i += 1
>>
>> odeint solves this correctly and returns the characteristic logistic curve
>> which maxes out at 1000. ode, however, keeps growing beyond 1000.
>>
>> I may be doing something stupid, since I am not familiar with the usage
>> of ode. Or there may be a bug in ode.
>>
>
> Flávio,
>
> It is unfortunate, but odeint and ode use different conventions for the
> order of the arguments of the function that defines the system of
> differential equations. If you change the signature of your definition of
> 'fun' to 'def fun(t, y):', your example works fine.
>
> Warren
>
>> I'll just stay away from ode for now, but I thought it might be a good
>> idea to report this.
>>
>> --
>> Flávio Codeço Coelho
>> ================
>> +55(21) 3799-5567
>> Professor
>> Escola de Matemática Aplicada
>> Fundação Getúlio Vargas
>> Rio de Janeiro - RJ
>> Brasil
>>
>> _______________________________________________
>> SciPy-User mailing list
>> SciPy-User at scipy.org
>> http://mail.scipy.org/mailman/listinfo/scipy-user
>>
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

--
Flávio Codeço Coelho
================
+55(21) 3799-5567
Professor
Escola de Matemática Aplicada
Fundação Getúlio Vargas
Rio de Janeiro - RJ
Brasil
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ralf.gommers at googlemail.com Sat Nov 5 14:29:46 2011
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Sat, 5 Nov 2011 19:29:46 +0100
Subject: [SciPy-User] ANN: scipy 0.10 release candidate 1
Message-ID:

Hi all,

I am pleased to announce the availability of the first release candidate of SciPy 0.10.0. For this release over 100 tickets and pull requests have been closed, and many new features have been added. Some of the highlights are:

- support for Bento as a build system for scipy
- generalized and shift-invert eigenvalue problems in sparse.linalg
- addition of discrete-time linear systems in the signal module

Sources and binaries can be found at http://sourceforge.net/projects/scipy/files/scipy/0.10.0rc1/, release notes are copied below.

Please try this release and report problems on the mailing list. Note: one problem with Python 2.5 (syntax) was discovered after tagging the release, it's fixed in the 0.10.x branch already so no need to report that one.

Cheers,
Ralf

==========================
SciPy 0.10.0 Release Notes
==========================

.. note:: Scipy 0.10.0 is not released yet!

.. contents::

SciPy 0.10.0 is the culmination of 8 months of hard work. It contains many new features, numerous bug-fixes, improved test coverage and better documentation. There have been a limited number of deprecations and backwards-incompatible changes in this release, which are documented below. All users are encouraged to upgrade to this release, as there are a large number of bug-fixes and optimizations.
Moreover, our development attention will now shift to bug-fix releases on the
0.10.x branch, and on adding new features on the development master branch.

Release highlights:

- Support for Bento as optional build system.
- Support for generalized eigenvalue problems, and all shift-invert modes
  available in ARPACK.

This release requires Python 2.4-2.7 or 3.1- and NumPy 1.5 or greater.

New features
============

Bento: new optional build system
--------------------------------

Scipy can now be built with `Bento `_. Bento has some nice features like
parallel builds and partial rebuilds that are not possible with the default
build system (distutils). For usage instructions see BENTO_BUILD.txt in the
scipy top-level directory.

Currently Scipy has three build systems: distutils, numscons and bento.
Numscons is deprecated and will likely be removed in the next release.

Generalized and shift-invert eigenvalue problems in ``scipy.sparse.linalg``
---------------------------------------------------------------------------

The sparse eigenvalue problem solver functions
``scipy.sparse.linalg.eigs/eigsh`` now support generalized eigenvalue
problems, and all shift-invert modes available in ARPACK.

Discrete-Time Linear Systems (``scipy.signal``)
-----------------------------------------------

Support for simulating discrete-time linear systems, including
``scipy.signal.dlsim``, ``scipy.signal.dimpulse``, and ``scipy.signal.dstep``,
has been added to SciPy. Conversion of linear systems from continuous-time to
discrete-time representations is also present via the
``scipy.signal.cont2discrete`` function. (A short usage sketch follows at the
end of this New features section.)

Enhancements to ``scipy.signal``
--------------------------------

A Lomb-Scargle periodogram can now be computed with the new function
``scipy.signal.lombscargle``.

The forward-backward filter function ``scipy.signal.filtfilt`` can now filter
the data in a given axis of an n-dimensional numpy array. (Previously it only
handled a 1-dimensional array.) Options have been added to allow more control
over how the data is extended before filtering.

FIR filter design with ``scipy.signal.firwin2`` now has options to create
filters of type III (zero at zero and Nyquist frequencies) and IV (zero at
zero frequency).

Additional decomposition options (``scipy.linalg``)
---------------------------------------------------

A sort keyword has been added to the Schur decomposition routine
(``scipy.linalg.schur``) to allow the sorting of eigenvalues in the resultant
Schur form.

Additional special matrices (``scipy.linalg``)
----------------------------------------------

The functions ``hilbert`` and ``invhilbert`` were added to ``scipy.linalg``.

Enhancements to ``scipy.stats``
-------------------------------

* The *one-sided form* of Fisher's exact test is now also implemented in
  ``stats.fisher_exact``.
* The function ``stats.chi2_contingency`` for computing the chi-square test
  of independence of factors in a contingency table has been added, along
  with the related utility functions ``stats.contingency.margins`` and
  ``stats.contingency.expected_freq``.

Basic support for Harwell-Boeing file format for sparse matrices
----------------------------------------------------------------

Both read and write are supported through a simple function-based API, as
well as a more complete API to control number format. The functions may be
found in scipy.sparse.io.

The following features are supported:

* Read and write sparse matrices in the CSC format
* Only real, symmetric, assembled matrices are supported (RUA format)
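As a quick illustration of the new discrete-time tools mentioned above, here
is a sketch written against the 0.10 API (the first-order low-pass system is
an arbitrary choice, not an excerpt from the release notes):

import numpy as np
from scipy import signal

# discretize an arbitrary continuous system H(s) = 1/(s + 1)
# with a zero-order hold at a sample time of dt = 0.1
dt = 0.1
numz, denz, dt = signal.cont2discrete(([1.0], [1.0, 1.0]), dt, method='zoh')

# step response of the resulting discrete-time system (50 samples)
t, y = signal.dstep((numz, denz, dt), n=50)

signal.dlsim accepts arbitrary input sequences for systems given in the same
(num, den, dt) or (A, B, C, D, dt) form, and signal.dimpulse computes impulse
responses analogously.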
Deprecated features
===================

``scipy.maxentropy``
--------------------

The maxentropy module is unmaintained, rarely used and has not been
functioning well for several releases. Therefore it has been deprecated for
this release, and will be removed for scipy 0.11. Logistic regression in
scikits.learn is a good alternative for this functionality. The
``scipy.maxentropy.logsumexp`` function has been moved to ``scipy.misc``.

``scipy.lib.blas``
------------------

There are similar BLAS wrappers in ``scipy.linalg`` and ``scipy.lib``. These
have now been consolidated as ``scipy.linalg.blas``, and ``scipy.lib.blas``
is deprecated.

Numscons build system
---------------------

The numscons build system is being replaced by Bento, and will be removed in
one of the next scipy releases.

Backwards-incompatible changes
==============================

The deprecated name `invnorm` was removed from ``scipy.stats.distributions``;
this distribution is available as `invgauss`.

The following deprecated nonlinear solvers from ``scipy.optimize`` have been
removed::

- ``broyden_modified`` (bad performance)
- ``broyden1_modified`` (bad performance)
- ``broyden_generalized`` (equivalent to ``anderson``)
- ``anderson2`` (equivalent to ``anderson``)
- ``broyden3`` (obsoleted by new limited-memory broyden methods)
- ``vackar`` (renamed to ``diagbroyden``)

Other changes
=============

``scipy.constants`` has been updated with the CODATA 2010 constants.

``__all__`` dicts have been added to all modules, which has cleaned up the
namespaces (particularly useful for interactive work).

An API section has been added to the documentation, giving recommended import
guidelines and specifying which submodules are public and which aren't.

Authors
=======

This release contains work by the following people (contributed at least one
patch to this release, names in alphabetical order):

* Jeff Armstrong +
* Matthew Brett
* Lars Buitinck +
* David Cournapeau
* FI$H 2000 +
* Michael McNeil Forbes +
* Matty G +
* Christoph Gohlke
* Ralf Gommers
* Yaroslav Halchenko
* Charles Harris
* Thouis (Ray) Jones +
* Chris Jordan-Squire +
* Robert Kern
* Chris Lasher +
* Wes McKinney +
* Travis Oliphant
* Fabian Pedregosa
* Josef Perktold
* Thomas Robitaille +
* Pim Schellart +
* Anthony Scopatz +
* Skipper Seabold +
* Fazlul Shahriar +
* David Simcha +
* Scott Sinclair +
* Andrey Smirnov +
* Collin RM Stocks +
* Martin Teichmann +
* Jake Vanderplas +
* Gaël Varoquaux +
* Pauli Virtanen
* Stefan van der Walt
* Warren Weckesser
* Mark Wiebe +

A total of 35 people contributed to this release. People with a "+" by their
names contributed a patch for the first time.
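To make the sparse.linalg highlight concrete, here is a minimal sketch of a
generalized symmetric problem A x = lambda M x solved in shift-invert mode
(the matrices are arbitrary illustrations chosen for this note):

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n = 100
# A: standard 1-D Laplacian; M: a positive diagonal "mass" matrix
off = -np.ones(n)
main = 2.0 * np.ones(n)
A = sp.spdiags([off, main, off], [-1, 0, 1], n, n).tocsc()
M = sp.spdiags([np.linspace(1.0, 2.0, n)], [0], n, n).tocsc()

# the four eigenpairs of A x = lambda M x closest to the shift sigma = 0.0
vals, vecs = eigsh(A, k=4, M=M, sigma=0.0, which='LM')

With sigma set, ARPACK works on the shift-inverted operator, so which='LM'
returns the eigenvalues nearest the shift rather than the largest ones.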
From mhwang4 at gmail.com  Sat Nov  5 15:59:24 2011
From: mhwang4 at gmail.com (Myunghwa Hwang)
Date: Sat, 5 Nov 2011 12:59:24 -0700
Subject: [SciPy-User] scipy import problem in apache-mod_wsgi environment
In-Reply-To:
References:
Message-ID:

Hello, list!

First of all, this will be a long message, due to the complexity of our
environment. So, please be patient with my question.

I am trying to run a simple django application in a cluster environment,
and my application hangs while it imports scipy.linalg; both scipy and
apache write out no error messages. When I run my application in my local
python shell, it imports scipy.linalg. But, somehow, it does not when it is
run by apache. So, after reading this message, please share any ideas about
how to debug this problem, or new solutions to address this issue or deploy
my application.

Now, let me explain our current setup.

1. OS
-- The server is a compute cluster where each node runs centos 6 that was
installed from a clean version of centos6 minimal.

2. Apache
-- Apache 2.2 was also installed manually; its source code came from one of
the default Linux repositories, and it was built together with httpd-dev.

3. Python
-- Python 2.7.2 was also installed from its source code across all nodes in
the cluster. Its source code was downloaded from python.org's ftp.
4. Python packages: nose, numpy, scipy
-- Nose 1.1.2 was downloaded from pypi.python.org and installed from its
source code.
-- numpy 1.6.1 was downloaded and installed from a linux repository. When
building numpy, the gnu95 Fortran compiler was used.
-- To install scipy, we installed atlas-3.8.4, lapack-3.3.1, and blas from
their source code.
----- atlas was from sourceforge's 3.8.4 stable version. To compile atlas,
gcc was used.
----- lapack and blas were obtained from netlib.org's repository. To compile
the package of lapack and blas, gfortran was used.
----- Finally, after exporting paths to blas, lapack, and atlas, scipy-0.9.0
was installed from its source code. scipy was obtained from
sourceforge.net's repository.

A note that contains the above information about software installation is
attached.

All of the above were installed in the same way across all nodes in our
cluster. Since I am the only user of the cluster who needs to run python
web applications, I installed the python virtualenv package in my local
directory. Within my virtual environment, django-1.3 and pysal-1.2 (our own
package) were installed. To deploy my web applications, we used mod_wsgi.
mod_wsgi was compiled with python-2.7.2 and loaded into apache-2.2.

My application is attached. Basically, it is a 'hello world' application
that tests if numpy, scipy, and pysal can be imported. In the attached
file, lines 4-9 just add paths to django and pysal so that apache knows
where to find these packages. Also, to let apache know where to find
atlas-related packages, the path to those packages was added to the
LD_LIBRARY_PATH environment variable in the /etc/sysconfig/httpd file.

When I first ran my application, it just hung and wrote no message. So,
across the scipy.linalg modules, I added print-out statements to figure out
at which point the import was broken. Here are the messages I got when I
imported scipy.linalg in my local python shell.

1. ########################
2. starting linalg.__init__
3. pre __init__.__doc__
4. pre __init__.__version__
5. pre __init__.misc
6. pre __init__.basic
7. #######################
8. Starting basic
9. pre basic.flinalg
10. pre basic.lapack
11. pre basic.misc
12. pre basic.scipy.linalg
13. pre basic.decomp_svd
14. pre __init__.decomp
15. ################
16. starting decomp
17. pre decomp.array et al.
18. pre decomp.calc_lwork
19. pre decomp.LinAlgError
20. pre decomp.get_lapack_funcs
21. pre decomp.get_blas_funcs
22. ####################
23. Starting blas
24. pre blas.scipy.linalg.fblas
25. pre blas.scipy.linalg.cblas
26. pre __init__.decomp_lu
27. pre __init__.decomp_cholesky
28. pre __init__.decomp_qr
29. #################
30. Starting special_matrices
31. pre special_matrices.math
32. pre special_matrices.np
33. pre __init__.decomp_svd
34. pre __init__.decomp_schur
35. ##################
36. starting schur...
37. pre decomp_schur.misc
38. pre decomp_schur.LinAlgError
39. pre decomp_schur.get_lapack_funcs
40. pre decomp_schur.eigvals:1320454147.23Fri Nov 4 17:49:07 2011
41. schur testing
42. pre __init__.matfuncs
43. #####################
44. Starting matfuncs
45. pre matfuncs. asarray et al
46. pre matfuncs.matrix
47. pre matfuncs.np
48. pre matfuncs.misc
49. pre matfuncs.basic
50. pre matfuncs.special_matrices
51. pre matfuncs.decomp
52. pre matfuncs.decomp_svd
53. pre matfuncs.decomp_schur
54. pre __init__.blas
55. pre __init__.special_matrices

When scipy.linalg is successfully imported, I should get these messages.
But, when my web application tried to import scipy.linalg, the output
messages stop at line 41. At line 41, decomp_schur.py tries to import
decomp.py. Since decomp.py was already imported at line 16, scipy ignores
it and continues to import other modules in my local shell. But, somehow,
in the apache-mod_wsgi environment, scipy failed to ignore or reload
decomp.py and seems to kill my web application. This is really odd, because
python does not give any message about this error and neither does apache.
apache just hangs without sending out any response. Since lapack and blas
functions were imported successfully, the problem seems not related to path
setup.

If anyone on the list has any insights into or experience with this kind of
symptom, please share your insights and experience. In particular,
debugging techniques or less-known installation/compilation problems would
be helpful. I feel like I am at a dead end. So, please help me.

Thanks for reading this post. I will look forward to your responses.

-- 
Myung-Hwa Hwang
GeoDa Center
School of Geographical Sciences and Urban Planning
Arizona State University
mhwang4 at gmail.com or Myunghwa.Hwang at asu.edu
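A note on the report above: importing C/Fortran extension modules inside a
mod_wsgi sub-interpreter is a known source of exactly this kind of silent
deadlock, and the commonly suggested workaround is to force the application
into the main interpreter with the Apache directive
WSGIApplicationGroup %{GLOBAL}. That is a plausible cause, not a confirmed
diagnosis of this particular setup. A stripped-down WSGI script (hypothetical
file name) that isolates the import from django and pysal could look like
this:

# scipy_import_test.wsgi -- hypothetical minimal app to isolate the hang
def application(environ, start_response):
    # if this import deadlocks in a mod_wsgi sub-interpreter, the request
    # simply hangs with no error message, exactly as described above
    import scipy.linalg
    body = 'scipy.linalg imported from %s' % scipy.linalg.__file__
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [body]

If this minimal app hangs too, the problem lies in the import environment
rather than in the django application itself.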
From raycores at gmail.com  Sat Nov  5 16:19:49 2011
From: raycores at gmail.com (Lynn Oliver)
Date: Sat, 5 Nov 2011 13:19:49 -0700
Subject: [SciPy-User] Symbol not found: _aswfa_
Message-ID: <39AFF30B-797E-412C-8EF3-A94E4D681DBA at gmail.com>

I followed these steps to build and install numpy and scipy on OS X 10.7.2
(from the "Installing SciPy/Mac OS X" instructions):

$ export CC=gcc-4.2
$ export CXX=g++-4.2
$ export FFLAGS=-ff2c
$ git clone https://github.com/numpy/numpy.git
$ git clone https://github.com/scipy/scipy.git
$ python setup.py build
$ python setup.py install

When I try:

from scipy.interpolate import interp1d

I get:

ImportError: dlopen(/Library/Python/2.7/site-packages/scipy/special/_cephes.so, 2): Symbol not found: _aswfa_

Any ideas on how to resolve the problem?

-Lynn

$ gcc --version
i686-apple-darwin11-llvm-gcc-4.2 (GCC) 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2335.15.00)

$ gfortran --version
GNU Fortran (GCC) 4.2.1 (Apple Inc. build 5666) (dot 3)
Copyright (C) 2007 Free Software Foundation, Inc.
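As a quick sanity check, the failing dlopen can be reproduced outside of
scipy with ctypes, which helps confirm whether the extension itself or
scipy's import machinery is at fault (a sketch; the .so path is the one from
the traceback above):

import ctypes

try:
    ctypes.CDLL('/Library/Python/2.7/site-packages/scipy/special/_cephes.so')
    print 'loaded OK'
except OSError as e:
    print 'dlopen failed:', e  # expected to show the missing _aswfa_ symbol

If the CDLL call fails with the same missing-symbol message, the problem is
in how _cephes.so was linked against its Fortran pieces rather than in
scipy's Python code.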
Here is the output from the build: $ python setup.py build blas_opt_info: FOUND: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] define_macros = [('NO_ATLAS_INFO', 3)] extra_compile_args = ['-faltivec', '-I/System/Library/Frameworks/vecLib.framework/Headers'] non-existing path in 'scipy/io': 'docs' lapack_opt_info: FOUND: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] define_macros = [('NO_ATLAS_INFO', 3)] extra_compile_args = ['-faltivec'] umfpack_info: libraries umfpack not found in /System/Library/Frameworks/Python.framework/Versions/2.7/lib libraries umfpack not found in /usr/local/lib libraries umfpack not found in /usr/lib /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/distutils/system_info.py:459: UserWarning: UMFPACK sparse solver (http://www.cise.ufl.edu/research/sparse/umfpack/) not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [umfpack]) or by setting the UMFPACK environment variable. warnings.warn(self.notfounderror.__doc__) NOT AVAILABLE running build running config_cc unifing config_cc, config, build_clib, build_ext, build commands --compiler options running config_fc unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options running build_src build_src building py_modules sources building library "dfftpack" sources building library "fftpack" sources building library "linpack_lite" sources building library "mach" sources building library "quadpack" sources building library "odepack" sources building library "dop" sources building library "fitpack" sources building library "odrpack" sources building library "minpack" sources building library "rootfind" sources building library "superlu_src" sources building library "arpack_scipy" sources building library "qhull" sources building library "sc_c_misc" sources building library "sc_cephes" sources building library "sc_mach" sources building library "sc_toms" sources building library "sc_amos" sources building library "sc_cdf" sources building library "sc_specfun" sources building library "statlib" sources building extension "scipy.cluster._vq" sources building extension "scipy.cluster._hierarchy_wrap" sources building extension "scipy.fftpack._fftpack" sources f2py options: [] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. building extension "scipy.fftpack.convolve" sources f2py options: [] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. building extension "scipy.integrate._quadpack" sources building extension "scipy.integrate._odepack" sources building extension "scipy.integrate.vode" sources f2py options: [] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. building extension "scipy.integrate._dop" sources f2py options: [] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. building extension "scipy.interpolate.interpnd" sources building extension "scipy.interpolate._fitpack" sources building extension "scipy.interpolate.dfitpack" sources f2py options: [] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. adding 'build/src.macosx-10.7-intel-2.7/scipy/interpolate/src/dfitpack-f2pywrappers.f' to sources. 
building extension "scipy.interpolate._interpolate" sources building extension "scipy.io.matlab.streams" sources building extension "scipy.io.matlab.mio_utils" sources building extension "scipy.io.matlab.mio5_utils" sources building extension "scipy.lib.blas.fblas" sources f2py options: ['skip:', ':'] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. adding 'build/src.macosx-10.7-intel-2.7/build/src.macosx-10.7-intel-2.7/scipy/lib/blas/fblas-f2pywrappers.f' to sources. building extension "scipy.lib.blas.cblas" sources adding 'build/src.macosx-10.7-intel-2.7/scipy/lib/blas/cblas.pyf' to sources. f2py options: ['skip:', ':'] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. building extension "scipy.lib.lapack.flapack" sources f2py options: ['skip:', ':'] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. building extension "scipy.lib.lapack.clapack" sources adding 'build/src.macosx-10.7-intel-2.7/scipy/lib/lapack/clapack.pyf' to sources. f2py options: ['skip:', ':'] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. building extension "scipy.lib.lapack.calc_lwork" sources f2py options: [] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. building extension "scipy.lib.lapack.atlas_version" sources building extension "scipy.linalg.fblas" sources f2py options: [] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. adding 'build/src.macosx-10.7-intel-2.7/build/src.macosx-10.7-intel-2.7/scipy/linalg/fblas-f2pywrappers.f' to sources. building extension "scipy.linalg.cblas" sources adding 'build/src.macosx-10.7-intel-2.7/scipy/linalg/cblas.pyf' to sources. f2py options: [] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. building extension "scipy.linalg.flapack" sources adding 'build/src.macosx-10.7-intel-2.7/scipy/linalg/flapack.pyf' to sources. f2py options: [] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. adding 'build/src.macosx-10.7-intel-2.7/build/src.macosx-10.7-intel-2.7/scipy/linalg/flapack-f2pywrappers.f' to sources. building extension "scipy.linalg.clapack" sources adding 'build/src.macosx-10.7-intel-2.7/scipy/linalg/clapack.pyf' to sources. f2py options: [] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. building extension "scipy.linalg._flinalg" sources f2py options: [] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. building extension "scipy.linalg.calc_lwork" sources f2py options: [] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. 
building extension "scipy.linalg.atlas_version" sources building extension "scipy.odr.__odrpack" sources building extension "scipy.optimize._minpack" sources building extension "scipy.optimize._zeros" sources building extension "scipy.optimize._lbfgsb" sources f2py options: [] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. building extension "scipy.optimize.moduleTNC" sources building extension "scipy.optimize._cobyla" sources f2py options: [] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. building extension "scipy.optimize.minpack2" sources f2py options: [] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. building extension "scipy.optimize._slsqp" sources f2py options: [] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. building extension "scipy.optimize._nnls" sources f2py options: [] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. building extension "scipy.signal.sigtools" sources building extension "scipy.signal.spectral" sources building extension "scipy.signal.spline" sources building extension "scipy.sparse.linalg.isolve._iterative" sources f2py options: [] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. building extension "scipy.sparse.linalg.dsolve._superlu" sources building extension "scipy.sparse.linalg.dsolve.umfpack.__umfpack" sources building extension "scipy.sparse.linalg.eigen.arpack._arpack" sources f2py options: [] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. adding 'build/src.macosx-10.7-intel-2.7/build/src.macosx-10.7-intel-2.7/scipy/sparse/linalg/eigen/arpack/_arpack-f2pywrappers.f' to sources. building extension "scipy.sparse.sparsetools._csr" sources building extension "scipy.sparse.sparsetools._csc" sources building extension "scipy.sparse.sparsetools._coo" sources building extension "scipy.sparse.sparsetools._bsr" sources building extension "scipy.sparse.sparsetools._dia" sources building extension "scipy.sparse.sparsetools._csgraph" sources building extension "scipy.spatial.qhull" sources building extension "scipy.spatial.ckdtree" sources building extension "scipy.spatial._distance_wrap" sources building extension "scipy.special._cephes" sources building extension "scipy.special.specfun" sources f2py options: ['--no-wrap-functions'] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. building extension "scipy.special.orthogonal_eval" sources building extension "scipy.special.lambertw" sources building extension "scipy.special._logit" sources building extension "scipy.stats.statlib" sources f2py options: ['--no-wrap-functions'] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. building extension "scipy.stats.vonmises_cython" sources building extension "scipy.stats.futil" sources f2py options: [] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. 
building extension "scipy.stats.mvn" sources f2py options: [] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. adding 'build/src.macosx-10.7-intel-2.7/scipy/stats/mvn-f2pywrappers.f' to sources. building extension "scipy.ndimage._nd_image" sources building data_files sources build_src: building npy-pkg config files running build_py copying scipy/version.py -> build/lib.macosx-10.7-intel-2.7/scipy copying build/src.macosx-10.7-intel-2.7/scipy/__config__.py -> build/lib.macosx-10.7-intel-2.7/scipy running build_clib customize UnixCCompiler customize UnixCCompiler using build_clib customize NAGFCompiler Could not locate executable f95 customize AbsoftFCompiler Could not locate executable f90 Could not locate executable f77 customize IBMFCompiler Could not locate executable xlf90 Could not locate executable xlf customize IntelFCompiler Could not locate executable ifort Could not locate executable ifc customize GnuFCompiler Could not locate executable g77 customize Gnu95FCompiler Found executable /usr/local/bin/gfortran customize Gnu95FCompiler customize Gnu95FCompiler using build_clib running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext extending extension 'scipy.sparse.linalg.dsolve._superlu' defined_macros with [('USE_VENDOR_BLAS', 1)] customize UnixCCompiler customize UnixCCompiler using build_ext customize NAGFCompiler customize AbsoftFCompiler customize IBMFCompiler customize IntelFCompiler customize GnuFCompiler customize Gnu95FCompiler customize Gnu95FCompiler customize Gnu95FCompiler using build_ext running scons Here is the output from install: $ python setup.py install blas_opt_info: FOUND: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] define_macros = [('NO_ATLAS_INFO', 3)] extra_compile_args = ['-faltivec', '-I/System/Library/Frameworks/vecLib.framework/Headers'] non-existing path in 'scipy/io': 'docs' lapack_opt_info: FOUND: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] define_macros = [('NO_ATLAS_INFO', 3)] extra_compile_args = ['-faltivec'] umfpack_info: libraries umfpack not found in /System/Library/Frameworks/Python.framework/Versions/2.7/lib libraries umfpack not found in /usr/local/lib libraries umfpack not found in /usr/lib /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/distutils/system_info.py:459: UserWarning: UMFPACK sparse solver (http://www.cise.ufl.edu/research/sparse/umfpack/) not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [umfpack]) or by setting the UMFPACK environment variable. 
warnings.warn(self.notfounderror.__doc__) NOT AVAILABLE running install running build running config_cc unifing config_cc, config, build_clib, build_ext, build commands --compiler options running config_fc unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options running build_src build_src building py_modules sources building library "dfftpack" sources building library "fftpack" sources building library "linpack_lite" sources building library "mach" sources building library "quadpack" sources building library "odepack" sources building library "dop" sources building library "fitpack" sources building library "odrpack" sources building library "minpack" sources building library "rootfind" sources building library "superlu_src" sources building library "arpack_scipy" sources building library "qhull" sources building library "sc_c_misc" sources building library "sc_cephes" sources building library "sc_mach" sources building library "sc_toms" sources building library "sc_amos" sources building library "sc_cdf" sources building library "sc_specfun" sources building library "statlib" sources building extension "scipy.cluster._vq" sources building extension "scipy.cluster._hierarchy_wrap" sources building extension "scipy.fftpack._fftpack" sources f2py options: [] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. building extension "scipy.fftpack.convolve" sources f2py options: [] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. building extension "scipy.integrate._quadpack" sources building extension "scipy.integrate._odepack" sources building extension "scipy.integrate.vode" sources f2py options: [] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. building extension "scipy.integrate._dop" sources f2py options: [] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. building extension "scipy.interpolate.interpnd" sources building extension "scipy.interpolate._fitpack" sources building extension "scipy.interpolate.dfitpack" sources f2py options: [] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. adding 'build/src.macosx-10.7-intel-2.7/scipy/interpolate/src/dfitpack-f2pywrappers.f' to sources. building extension "scipy.interpolate._interpolate" sources building extension "scipy.io.matlab.streams" sources building extension "scipy.io.matlab.mio_utils" sources building extension "scipy.io.matlab.mio5_utils" sources building extension "scipy.lib.blas.fblas" sources f2py options: ['skip:', ':'] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. adding 'build/src.macosx-10.7-intel-2.7/build/src.macosx-10.7-intel-2.7/scipy/lib/blas/fblas-f2pywrappers.f' to sources. building extension "scipy.lib.blas.cblas" sources adding 'build/src.macosx-10.7-intel-2.7/scipy/lib/blas/cblas.pyf' to sources. f2py options: ['skip:', ':'] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. building extension "scipy.lib.lapack.flapack" sources f2py options: ['skip:', ':'] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. 
adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. building extension "scipy.lib.lapack.clapack" sources adding 'build/src.macosx-10.7-intel-2.7/scipy/lib/lapack/clapack.pyf' to sources. f2py options: ['skip:', ':'] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. building extension "scipy.lib.lapack.calc_lwork" sources f2py options: [] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. building extension "scipy.lib.lapack.atlas_version" sources building extension "scipy.linalg.fblas" sources f2py options: [] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. adding 'build/src.macosx-10.7-intel-2.7/build/src.macosx-10.7-intel-2.7/scipy/linalg/fblas-f2pywrappers.f' to sources. building extension "scipy.linalg.cblas" sources adding 'build/src.macosx-10.7-intel-2.7/scipy/linalg/cblas.pyf' to sources. f2py options: [] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. building extension "scipy.linalg.flapack" sources adding 'build/src.macosx-10.7-intel-2.7/scipy/linalg/flapack.pyf' to sources. f2py options: [] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. adding 'build/src.macosx-10.7-intel-2.7/build/src.macosx-10.7-intel-2.7/scipy/linalg/flapack-f2pywrappers.f' to sources. building extension "scipy.linalg.clapack" sources adding 'build/src.macosx-10.7-intel-2.7/scipy/linalg/clapack.pyf' to sources. f2py options: [] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. building extension "scipy.linalg._flinalg" sources f2py options: [] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. building extension "scipy.linalg.calc_lwork" sources f2py options: [] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. building extension "scipy.linalg.atlas_version" sources building extension "scipy.odr.__odrpack" sources building extension "scipy.optimize._minpack" sources building extension "scipy.optimize._zeros" sources building extension "scipy.optimize._lbfgsb" sources f2py options: [] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. building extension "scipy.optimize.moduleTNC" sources building extension "scipy.optimize._cobyla" sources f2py options: [] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. building extension "scipy.optimize.minpack2" sources f2py options: [] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. building extension "scipy.optimize._slsqp" sources f2py options: [] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. building extension "scipy.optimize._nnls" sources f2py options: [] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. 
building extension "scipy.signal.sigtools" sources building extension "scipy.signal.spectral" sources building extension "scipy.signal.spline" sources building extension "scipy.sparse.linalg.isolve._iterative" sources f2py options: [] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. building extension "scipy.sparse.linalg.dsolve._superlu" sources building extension "scipy.sparse.linalg.dsolve.umfpack.__umfpack" sources building extension "scipy.sparse.linalg.eigen.arpack._arpack" sources f2py options: [] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. adding 'build/src.macosx-10.7-intel-2.7/build/src.macosx-10.7-intel-2.7/scipy/sparse/linalg/eigen/arpack/_arpack-f2pywrappers.f' to sources. building extension "scipy.sparse.sparsetools._csr" sources building extension "scipy.sparse.sparsetools._csc" sources building extension "scipy.sparse.sparsetools._coo" sources building extension "scipy.sparse.sparsetools._bsr" sources building extension "scipy.sparse.sparsetools._dia" sources building extension "scipy.sparse.sparsetools._csgraph" sources building extension "scipy.spatial.qhull" sources building extension "scipy.spatial.ckdtree" sources building extension "scipy.spatial._distance_wrap" sources building extension "scipy.special._cephes" sources building extension "scipy.special.specfun" sources f2py options: ['--no-wrap-functions'] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. building extension "scipy.special.orthogonal_eval" sources building extension "scipy.special.lambertw" sources building extension "scipy.special._logit" sources building extension "scipy.stats.statlib" sources f2py options: ['--no-wrap-functions'] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. building extension "scipy.stats.vonmises_cython" sources building extension "scipy.stats.futil" sources f2py options: [] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. building extension "scipy.stats.mvn" sources f2py options: [] adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. adding 'build/src.macosx-10.7-intel-2.7/scipy/stats/mvn-f2pywrappers.f' to sources. 
building extension "scipy.ndimage._nd_image" sources building data_files sources build_src: building npy-pkg config files running build_py copying scipy/version.py -> build/lib.macosx-10.7-intel-2.7/scipy copying build/src.macosx-10.7-intel-2.7/scipy/__config__.py -> build/lib.macosx-10.7-intel-2.7/scipy running build_clib customize UnixCCompiler customize UnixCCompiler using build_clib customize NAGFCompiler Could not locate executable f95 customize AbsoftFCompiler Could not locate executable f90 Could not locate executable f77 customize IBMFCompiler Could not locate executable xlf90 Could not locate executable xlf customize IntelFCompiler Could not locate executable ifort Could not locate executable ifc customize GnuFCompiler Could not locate executable g77 customize Gnu95FCompiler Found executable /usr/local/bin/gfortran customize Gnu95FCompiler customize Gnu95FCompiler using build_clib running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext extending extension 'scipy.sparse.linalg.dsolve._superlu' defined_macros with [('USE_VENDOR_BLAS', 1)] customize UnixCCompiler customize UnixCCompiler using build_ext customize NAGFCompiler customize AbsoftFCompiler customize IBMFCompiler customize IntelFCompiler customize GnuFCompiler customize Gnu95FCompiler customize Gnu95FCompiler customize Gnu95FCompiler using build_ext running scons running install_lib copying build/lib.macosx-10.7-intel-2.7/scipy/__config__.py -> /Library/Python/2.7/site-packages/scipy copying build/lib.macosx-10.7-intel-2.7/scipy/version.py -> /Library/Python/2.7/site-packages/scipy byte-compiling /Library/Python/2.7/site-packages/scipy/__config__.py to __config__.pyc byte-compiling /Library/Python/2.7/site-packages/scipy/version.py to version.pyc running install_data running install_egg_info Removing /Library/Python/2.7/site-packages/scipy-0.11.0.dev_04b8d87-py2.7.egg-info Writing /Library/Python/2.7/site-packages/scipy-0.11.0.dev_04b8d87-py2.7.egg-info running install_clib -------------- next part -------------- An HTML attachment was scrubbed... URL: From johann.cohentanugi at gmail.com Sat Nov 5 16:25:46 2011 From: johann.cohentanugi at gmail.com (Johann Cohen-Tanugi) Date: Sat, 05 Nov 2011 21:25:46 +0100 Subject: [SciPy-User] Symbol not found: _aswfa_ In-Reply-To: <39AFF30B-797E-412C-8EF3-A94E4D681DBA@gmail.com> References: <39AFF30B-797E-412C-8EF3-A94E4D681DBA@gmail.com> Message-ID: <4EB59BCA.1070201@gmail.com> Maybe this will be of some help? http://stackoverflow.com/questions/2155986/mac-10-6-universal-binary-scipy-cephes-specfun-aswfa-symbol-not-found best, JCT On 11/05/2011 09:19 PM, Lynn Oliver wrote: > I followed these steps to build and install numpy and scipy on OS X > 10.7.2 (from Installing SciPy/Mac OS X - > ) > > $ export CC=gcc-4.2 > $ export CXX=g++-4.2 > $ export FFLAGS=-ff2c > $ git clonehttps://github.com/numpy/numpy.git > $ git clonehttps://github.com/scipy/scipy.git > $ python setup.py build > $ python setup.py install > When I try: > from scipy.interpolate import interp1d > > I get: > ImportError: > dlopen(/Library/Python/2.7/site-packages/scipy/special/_cephes.so, 2): > Symbol not found: _aswfa_ > > Any ideas on how to resolve the problem? > > -Lynn > > $ gcc --version > i686-apple-darwin11-llvm-gcc-4.2 (GCC) 4.2.1 (Based on Apple Inc. > build 5658) (LLVM build 2335.15.00) > > $ gfortran --version > GNU Fortran (GCC) 4.2.1 (Apple Inc. build 5666) (dot 3) > Copyright (C) 2007 Free Software Foundation, Inc. 
> > > Here is the output from the build: > > $ python setup.py build > blas_opt_info: > FOUND: > extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] > define_macros = [('NO_ATLAS_INFO', 3)] > extra_compile_args = ['-faltivec', > '-I/System/Library/Frameworks/vecLib.framework/Headers'] > > non-existing path in 'scipy/io': 'docs' > lapack_opt_info: > FOUND: > extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] > define_macros = [('NO_ATLAS_INFO', 3)] > extra_compile_args = ['-faltivec'] > > umfpack_info: > libraries umfpack not found in > /System/Library/Frameworks/Python.framework/Versions/2.7/lib > libraries umfpack not found in /usr/local/lib > libraries umfpack not found in /usr/lib > /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/distutils/system_info.py:459: > UserWarning: > UMFPACK sparse solver > (http://www.cise.ufl.edu/research/sparse/umfpack/) > not found. Directories to search for the libraries can be > specified in the > numpy/distutils/site.cfg file (section [umfpack]) or by setting > the UMFPACK environment variable. > warnings.warn(self.notfounderror.__doc__) > NOT AVAILABLE > > running build > running config_cc > unifing config_cc, config, build_clib, build_ext, build commands > --compiler options > running config_fc > unifing config_fc, config, build_clib, build_ext, build commands > --fcompiler options > running build_src > build_src > building py_modules sources > building library "dfftpack" sources > building library "fftpack" sources > building library "linpack_lite" sources > building library "mach" sources > building library "quadpack" sources > building library "odepack" sources > building library "dop" sources > building library "fitpack" sources > building library "odrpack" sources > building library "minpack" sources > building library "rootfind" sources > building library "superlu_src" sources > building library "arpack_scipy" sources > building library "qhull" sources > building library "sc_c_misc" sources > building library "sc_cephes" sources > building library "sc_mach" sources > building library "sc_toms" sources > building library "sc_amos" sources > building library "sc_cdf" sources > building library "sc_specfun" sources > building library "statlib" sources > building extension "scipy.cluster._vq" sources > building extension "scipy.cluster._hierarchy_wrap" sources > building extension "scipy.fftpack._fftpack" sources > f2py options: [] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > building extension "scipy.fftpack.convolve" sources > f2py options: [] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > building extension "scipy.integrate._quadpack" sources > building extension "scipy.integrate._odepack" sources > building extension "scipy.integrate.vode" sources > f2py options: [] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > building extension "scipy.integrate._dop" sources > f2py options: [] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. 
> building extension "scipy.interpolate.interpnd" sources > building extension "scipy.interpolate._fitpack" sources > building extension "scipy.interpolate.dfitpack" sources > f2py options: [] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > adding > 'build/src.macosx-10.7-intel-2.7/scipy/interpolate/src/dfitpack-f2pywrappers.f' > to sources. > building extension "scipy.interpolate._interpolate" sources > building extension "scipy.io.matlab.streams" sources > building extension "scipy.io.matlab.mio_utils" sources > building extension "scipy.io.matlab.mio5_utils" sources > building extension "scipy.lib.blas.fblas" sources > f2py options: ['skip:', ':'] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > adding > 'build/src.macosx-10.7-intel-2.7/build/src.macosx-10.7-intel-2.7/scipy/lib/blas/fblas-f2pywrappers.f' > to sources. > building extension "scipy.lib.blas.cblas" sources > adding 'build/src.macosx-10.7-intel-2.7/scipy/lib/blas/cblas.pyf' to > sources. > f2py options: ['skip:', ':'] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > building extension "scipy.lib.lapack.flapack" sources > f2py options: ['skip:', ':'] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > building extension "scipy.lib.lapack.clapack" sources > adding > 'build/src.macosx-10.7-intel-2.7/scipy/lib/lapack/clapack.pyf' to sources. > f2py options: ['skip:', ':'] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > building extension "scipy.lib.lapack.calc_lwork" sources > f2py options: [] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > building extension "scipy.lib.lapack.atlas_version" sources > building extension "scipy.linalg.fblas" sources > f2py options: [] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > adding > 'build/src.macosx-10.7-intel-2.7/build/src.macosx-10.7-intel-2.7/scipy/linalg/fblas-f2pywrappers.f' > to sources. > building extension "scipy.linalg.cblas" sources > adding 'build/src.macosx-10.7-intel-2.7/scipy/linalg/cblas.pyf' to > sources. > f2py options: [] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > building extension "scipy.linalg.flapack" sources > adding 'build/src.macosx-10.7-intel-2.7/scipy/linalg/flapack.pyf' to > sources. > f2py options: [] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > adding > 'build/src.macosx-10.7-intel-2.7/build/src.macosx-10.7-intel-2.7/scipy/linalg/flapack-f2pywrappers.f' > to sources. > building extension "scipy.linalg.clapack" sources > adding 'build/src.macosx-10.7-intel-2.7/scipy/linalg/clapack.pyf' to > sources. > f2py options: [] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > building extension "scipy.linalg._flinalg" sources > f2py options: [] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. 
> adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > building extension "scipy.linalg.calc_lwork" sources > f2py options: [] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > building extension "scipy.linalg.atlas_version" sources > building extension "scipy.odr.__odrpack" sources > building extension "scipy.optimize._minpack" sources > building extension "scipy.optimize._zeros" sources > building extension "scipy.optimize._lbfgsb" sources > f2py options: [] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > building extension "scipy.optimize.moduleTNC" sources > building extension "scipy.optimize._cobyla" sources > f2py options: [] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > building extension "scipy.optimize.minpack2" sources > f2py options: [] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > building extension "scipy.optimize._slsqp" sources > f2py options: [] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > building extension "scipy.optimize._nnls" sources > f2py options: [] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > building extension "scipy.signal.sigtools" sources > building extension "scipy.signal.spectral" sources > building extension "scipy.signal.spline" sources > building extension "scipy.sparse.linalg.isolve._iterative" sources > f2py options: [] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > building extension "scipy.sparse.linalg.dsolve._superlu" sources > building extension "scipy.sparse.linalg.dsolve.umfpack.__umfpack" sources > building extension "scipy.sparse.linalg.eigen.arpack._arpack" sources > f2py options: [] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > adding > 'build/src.macosx-10.7-intel-2.7/build/src.macosx-10.7-intel-2.7/scipy/sparse/linalg/eigen/arpack/_arpack-f2pywrappers.f' > to sources. > building extension "scipy.sparse.sparsetools._csr" sources > building extension "scipy.sparse.sparsetools._csc" sources > building extension "scipy.sparse.sparsetools._coo" sources > building extension "scipy.sparse.sparsetools._bsr" sources > building extension "scipy.sparse.sparsetools._dia" sources > building extension "scipy.sparse.sparsetools._csgraph" sources > building extension "scipy.spatial.qhull" sources > building extension "scipy.spatial.ckdtree" sources > building extension "scipy.spatial._distance_wrap" sources > building extension "scipy.special._cephes" sources > building extension "scipy.special.specfun" sources > f2py options: ['--no-wrap-functions'] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. 
> building extension "scipy.special.orthogonal_eval" sources > building extension "scipy.special.lambertw" sources > building extension "scipy.special._logit" sources > building extension "scipy.stats.statlib" sources > f2py options: ['--no-wrap-functions'] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > building extension "scipy.stats.vonmises_cython" sources > building extension "scipy.stats.futil" sources > f2py options: [] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > building extension "scipy.stats.mvn" sources > f2py options: [] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > adding > 'build/src.macosx-10.7-intel-2.7/scipy/stats/mvn-f2pywrappers.f' to > sources. > building extension "scipy.ndimage._nd_image" sources > building data_files sources > build_src: building npy-pkg config files > running build_py > copying scipy/version.py -> build/lib.macosx-10.7-intel-2.7/scipy > copying build/src.macosx-10.7-intel-2.7/scipy/__config__.py -> > build/lib.macosx-10.7-intel-2.7/scipy > running build_clib > customize UnixCCompiler > customize UnixCCompiler using build_clib > customize NAGFCompiler > Could not locate executable f95 > customize AbsoftFCompiler > Could not locate executable f90 > Could not locate executable f77 > customize IBMFCompiler > Could not locate executable xlf90 > Could not locate executable xlf > customize IntelFCompiler > Could not locate executable ifort > Could not locate executable ifc > customize GnuFCompiler > Could not locate executable g77 > customize Gnu95FCompiler > Found executable /usr/local/bin/gfortran > customize Gnu95FCompiler > customize Gnu95FCompiler using build_clib > running build_ext > customize UnixCCompiler > customize UnixCCompiler using build_ext > extending extension 'scipy.sparse.linalg.dsolve._superlu' > defined_macros with [('USE_VENDOR_BLAS', 1)] > customize UnixCCompiler > customize UnixCCompiler using build_ext > customize NAGFCompiler > customize AbsoftFCompiler > customize IBMFCompiler > customize IntelFCompiler > customize GnuFCompiler > customize Gnu95FCompiler > customize Gnu95FCompiler > customize Gnu95FCompiler using build_ext > running scons > > Here is the output from install: > > $ python setup.py install > blas_opt_info: > FOUND: > extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] > define_macros = [('NO_ATLAS_INFO', 3)] > extra_compile_args = ['-faltivec', > '-I/System/Library/Frameworks/vecLib.framework/Headers'] > > non-existing path in 'scipy/io': 'docs' > lapack_opt_info: > FOUND: > extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] > define_macros = [('NO_ATLAS_INFO', 3)] > extra_compile_args = ['-faltivec'] > > umfpack_info: > libraries umfpack not found in > /System/Library/Frameworks/Python.framework/Versions/2.7/lib > libraries umfpack not found in /usr/local/lib > libraries umfpack not found in /usr/lib > /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/distutils/system_info.py:459: > UserWarning: > UMFPACK sparse solver > (http://www.cise.ufl.edu/research/sparse/umfpack/) > not found. Directories to search for the libraries can be > specified in the > numpy/distutils/site.cfg file (section [umfpack]) or by setting > the UMFPACK environment variable. 
> warnings.warn(self.notfounderror.__doc__) > NOT AVAILABLE > > running install > running build > running config_cc > unifing config_cc, config, build_clib, build_ext, build commands > --compiler options > running config_fc > unifing config_fc, config, build_clib, build_ext, build commands > --fcompiler options > running build_src > build_src > building py_modules sources > building library "dfftpack" sources > building library "fftpack" sources > building library "linpack_lite" sources > building library "mach" sources > building library "quadpack" sources > building library "odepack" sources > building library "dop" sources > building library "fitpack" sources > building library "odrpack" sources > building library "minpack" sources > building library "rootfind" sources > building library "superlu_src" sources > building library "arpack_scipy" sources > building library "qhull" sources > building library "sc_c_misc" sources > building library "sc_cephes" sources > building library "sc_mach" sources > building library "sc_toms" sources > building library "sc_amos" sources > building library "sc_cdf" sources > building library "sc_specfun" sources > building library "statlib" sources > building extension "scipy.cluster._vq" sources > building extension "scipy.cluster._hierarchy_wrap" sources > building extension "scipy.fftpack._fftpack" sources > f2py options: [] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > building extension "scipy.fftpack.convolve" sources > f2py options: [] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > building extension "scipy.integrate._quadpack" sources > building extension "scipy.integrate._odepack" sources > building extension "scipy.integrate.vode" sources > f2py options: [] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > building extension "scipy.integrate._dop" sources > f2py options: [] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > building extension "scipy.interpolate.interpnd" sources > building extension "scipy.interpolate._fitpack" sources > building extension "scipy.interpolate.dfitpack" sources > f2py options: [] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > adding > 'build/src.macosx-10.7-intel-2.7/scipy/interpolate/src/dfitpack-f2pywrappers.f' > to sources. > building extension "scipy.interpolate._interpolate" sources > building extension "scipy.io.matlab.streams" sources > building extension "scipy.io.matlab.mio_utils" sources > building extension "scipy.io.matlab.mio5_utils" sources > building extension "scipy.lib.blas.fblas" sources > f2py options: ['skip:', ':'] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > adding > 'build/src.macosx-10.7-intel-2.7/build/src.macosx-10.7-intel-2.7/scipy/lib/blas/fblas-f2pywrappers.f' > to sources. > building extension "scipy.lib.blas.cblas" sources > adding 'build/src.macosx-10.7-intel-2.7/scipy/lib/blas/cblas.pyf' to > sources. > f2py options: ['skip:', ':'] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. 
> building extension "scipy.lib.lapack.flapack" sources > f2py options: ['skip:', ':'] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > building extension "scipy.lib.lapack.clapack" sources > adding > 'build/src.macosx-10.7-intel-2.7/scipy/lib/lapack/clapack.pyf' to sources. > f2py options: ['skip:', ':'] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > building extension "scipy.lib.lapack.calc_lwork" sources > f2py options: [] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > building extension "scipy.lib.lapack.atlas_version" sources > building extension "scipy.linalg.fblas" sources > f2py options: [] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > adding > 'build/src.macosx-10.7-intel-2.7/build/src.macosx-10.7-intel-2.7/scipy/linalg/fblas-f2pywrappers.f' > to sources. > building extension "scipy.linalg.cblas" sources > adding 'build/src.macosx-10.7-intel-2.7/scipy/linalg/cblas.pyf' to > sources. > f2py options: [] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > building extension "scipy.linalg.flapack" sources > adding 'build/src.macosx-10.7-intel-2.7/scipy/linalg/flapack.pyf' to > sources. > f2py options: [] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > adding > 'build/src.macosx-10.7-intel-2.7/build/src.macosx-10.7-intel-2.7/scipy/linalg/flapack-f2pywrappers.f' > to sources. > building extension "scipy.linalg.clapack" sources > adding 'build/src.macosx-10.7-intel-2.7/scipy/linalg/clapack.pyf' to > sources. > f2py options: [] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > building extension "scipy.linalg._flinalg" sources > f2py options: [] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > building extension "scipy.linalg.calc_lwork" sources > f2py options: [] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > building extension "scipy.linalg.atlas_version" sources > building extension "scipy.odr.__odrpack" sources > building extension "scipy.optimize._minpack" sources > building extension "scipy.optimize._zeros" sources > building extension "scipy.optimize._lbfgsb" sources > f2py options: [] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > building extension "scipy.optimize.moduleTNC" sources > building extension "scipy.optimize._cobyla" sources > f2py options: [] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > building extension "scipy.optimize.minpack2" sources > f2py options: [] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > building extension "scipy.optimize._slsqp" sources > f2py options: [] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. 
> adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > building extension "scipy.optimize._nnls" sources > f2py options: [] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > building extension "scipy.signal.sigtools" sources > building extension "scipy.signal.spectral" sources > building extension "scipy.signal.spline" sources > building extension "scipy.sparse.linalg.isolve._iterative" sources > f2py options: [] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > building extension "scipy.sparse.linalg.dsolve._superlu" sources > building extension "scipy.sparse.linalg.dsolve.umfpack.__umfpack" sources > building extension "scipy.sparse.linalg.eigen.arpack._arpack" sources > f2py options: [] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > adding > 'build/src.macosx-10.7-intel-2.7/build/src.macosx-10.7-intel-2.7/scipy/sparse/linalg/eigen/arpack/_arpack-f2pywrappers.f' > to sources. > building extension "scipy.sparse.sparsetools._csr" sources > building extension "scipy.sparse.sparsetools._csc" sources > building extension "scipy.sparse.sparsetools._coo" sources > building extension "scipy.sparse.sparsetools._bsr" sources > building extension "scipy.sparse.sparsetools._dia" sources > building extension "scipy.sparse.sparsetools._csgraph" sources > building extension "scipy.spatial.qhull" sources > building extension "scipy.spatial.ckdtree" sources > building extension "scipy.spatial._distance_wrap" sources > building extension "scipy.special._cephes" sources > building extension "scipy.special.specfun" sources > f2py options: ['--no-wrap-functions'] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > building extension "scipy.special.orthogonal_eval" sources > building extension "scipy.special.lambertw" sources > building extension "scipy.special._logit" sources > building extension "scipy.stats.statlib" sources > f2py options: ['--no-wrap-functions'] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > building extension "scipy.stats.vonmises_cython" sources > building extension "scipy.stats.futil" sources > f2py options: [] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > building extension "scipy.stats.mvn" sources > f2py options: [] > adding 'build/src.macosx-10.7-intel-2.7/fortranobject.c' to sources. > adding 'build/src.macosx-10.7-intel-2.7' to include_dirs. > adding > 'build/src.macosx-10.7-intel-2.7/scipy/stats/mvn-f2pywrappers.f' to > sources. 
> building extension "scipy.ndimage._nd_image" sources > building data_files sources > build_src: building npy-pkg config files > running build_py > copying scipy/version.py -> build/lib.macosx-10.7-intel-2.7/scipy > copying build/src.macosx-10.7-intel-2.7/scipy/__config__.py -> > build/lib.macosx-10.7-intel-2.7/scipy > running build_clib > customize UnixCCompiler > customize UnixCCompiler using build_clib > customize NAGFCompiler > Could not locate executable f95 > customize AbsoftFCompiler > Could not locate executable f90 > Could not locate executable f77 > customize IBMFCompiler > Could not locate executable xlf90 > Could not locate executable xlf > customize IntelFCompiler > Could not locate executable ifort > Could not locate executable ifc > customize GnuFCompiler > Could not locate executable g77 > customize Gnu95FCompiler > Found executable /usr/local/bin/gfortran > customize Gnu95FCompiler > customize Gnu95FCompiler using build_clib > running build_ext > customize UnixCCompiler > customize UnixCCompiler using build_ext > extending extension 'scipy.sparse.linalg.dsolve._superlu' > defined_macros with [('USE_VENDOR_BLAS', 1)] > customize UnixCCompiler > customize UnixCCompiler using build_ext > customize NAGFCompiler > customize AbsoftFCompiler > customize IBMFCompiler > customize IntelFCompiler > customize GnuFCompiler > customize Gnu95FCompiler > customize Gnu95FCompiler > customize Gnu95FCompiler using build_ext > running scons > running install_lib > copying build/lib.macosx-10.7-intel-2.7/scipy/__config__.py -> > /Library/Python/2.7/site-packages/scipy > copying build/lib.macosx-10.7-intel-2.7/scipy/version.py -> > /Library/Python/2.7/site-packages/scipy > byte-compiling /Library/Python/2.7/site-packages/scipy/__config__.py > to __config__.pyc > byte-compiling /Library/Python/2.7/site-packages/scipy/version.py to > version.pyc > running install_data > running install_egg_info > Removing > /Library/Python/2.7/site-packages/scipy-0.11.0.dev_04b8d87-py2.7.egg-info > Writing > /Library/Python/2.7/site-packages/scipy-0.11.0.dev_04b8d87-py2.7.egg-info > running install_clib > > -- > This message has been scanned for viruses and > dangerous content by *MailScanner* , and is > believed to be clean. > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From raycores at gmail.com Sat Nov 5 17:30:23 2011 From: raycores at gmail.com (Lynn Oliver) Date: Sat, 5 Nov 2011 14:30:23 -0700 Subject: [SciPy-User] Symbol not found: _aswfa_ In-Reply-To: <4EB59BCA.1070201@gmail.com> References: <39AFF30B-797E-412C-8EF3-A94E4D681DBA@gmail.com> <4EB59BCA.1070201@gmail.com> Message-ID: <1104066C-257E-46B5-9837-CF4AEB7F5A53@gmail.com> I saw that, but I can't use macports builds without breaking something else. Sent from my iPad On Nov 5, 2011, at 1:25 PM, Johann Cohen-Tanugi wrote: > Maybe this will be of some help? > http://stackoverflow.com/questions/2155986/mac-10-6-universal-binary-scipy-cephes-specfun-aswfa-symbol-not-found > best, > JCT -------------- next part -------------- An HTML attachment was scrubbed... URL: From klonuo at gmail.com Sun Nov 6 05:44:10 2011 From: klonuo at gmail.com (klo uo) Date: Sun, 6 Nov 2011 11:44:10 +0100 Subject: [SciPy-User] How to handle GRIB datasets? 
Message-ID:

I got interested in this meteorological data, which is usually provided
in GRIB format [http://en.wikipedia.org/wiki/GRIB]

The first thing I did was install the GrADS package, which handles this
type of dataset in its own unique and superb way. It probably can't be
done better than that, but I also wanted a Python interface to this data
and to the actual numbers, so I installed Basemap (the matplotlib
toolkit).

scipy.io has a way of handling netCDF files, but I find it hard to
convert GRIB data to netCDF.

I found the 'pygrib' package and built/installed it. The provided
examples showed everything was fine, though there is a lot of typing to
get a result, and the returned (pygrib) object is not a numpy array and
is not easy to handle on a first try. Then I fed it my (NCEP-provided)
data and it crashed IPython.

So (after all my searching through unfamiliar terminology, which I won't
go into) I thought to ask whether someone on this mailing list has ever
used Python with GRIB datasets and could maybe help me with some tips.

Thanks

From klonuo at gmail.com  Sun Nov  6 06:35:28 2011
From: klonuo at gmail.com (klo uo)
Date: Sun, 6 Nov 2011 12:35:28 +0100
Subject: [SciPy-User] How to handle GRIB datasets?
In-Reply-To:
References:
Message-ID:

Just to correct myself: 'pygrib' did not crash IPython this time when
issuing the same command on the same data.

First, I found that there is a special module for some NCEP GRIB sets; I
tried it, and it reported that my dataset is not GRIB-2, so it can't
handle it. Then I tried again to read the dataset with
'g = pygrib.open('datafile').read()' and it was fine. Also, this 'g'
object is a list of the separate GRIB data variables, and each list
element can return its '.values' data as a numpy object.

I'm writing this because it seems I painted 'pygrib' black in my
previous message without reason (or the fault is on my side).

Cheers

From yosefmel at post.tau.ac.il  Sun Nov  6 09:41:29 2011
From: yosefmel at post.tau.ac.il (Yosef Meller)
Date: Sun, 06 Nov 2011 16:41:29 +0200
Subject: [SciPy-User] SciPy for Computational Geometry
In-Reply-To: <4EAF0F99.7080407@gmail.com>
References: <4EAF0F99.7080407@gmail.com>
Message-ID: <2949483.OeVS8h5zgn@yosef-pc>

On Monday 31 October 2011 22:14:01 Lorenzo Isella wrote:
> Dear All,
> Imagine that you are sitting at the origin (0,0,0) of a 3D coordinate
> system and that you are looking at a set of (non-overlapping) spheres
> (all the spheres are identical and with radius R=1).
> You ask yourself how many spheres you can see overall.

I'm not sure I understood your problem correctly, but if I got it right,
you are talking about a 2D problem of circle intersections. Project all
the spheres onto a screen behind the farthest sphere, then use the
inclusion-exclusion principle to find intersections between each subset
of intersecting spheres.

This is my take on a similar problem, although possibly simpler:
http://apps.webofknowledge.com/full_record.do?product=UA&search_mode=GeneralSearch&qid=1&SID=T1LjN5d2NKd9GGLMKgG&page=1&doc=1
Analytically calculating shading in regular arrays of sun-pointing
collectors, SOLAR ENERGY, Volume 84, Issue 11, Pages 1967-1974,
DOI: 10.1016/j.solener.2010.08.006

> The result is in general a (positive) real number as one sphere may
> partially eclipse another sphere for an observer at the origin (e.g.
> if one sphere is located at (0,0,5) and the other at (0,0.3,10)).
> Does anybody know an algorithm to calculate this quantity efficiently?
> I have in mind (for now at least) configurations of fewer than 100
> spheres, so hopefully this should not be too demanding.
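For what it's worth, this projection idea can be prototyped with plain
numpy before reaching for a computational-geometry library. The sketch
below is only an illustration of the approach, not the method from the
paper cited above: each sphere at distance d from the origin is treated
as a spherical cap of half-angle arcsin(R/d) on the unit sphere of view
directions, each cap is sampled by Monte Carlo, and a sampled direction
is assumed to be occluded whenever it falls inside the cap of a sphere
with a nearer center (a simplification that seems reasonable for
non-overlapping spheres). The function name and parameters are made up
for illustration:

    import numpy as np

    def visible_spheres(centers, R=1.0, nsamples=2000):
        # Estimate how many spheres an observer at the origin sees,
        # counting a partially eclipsed sphere as a fraction.
        centers = np.asarray(centers, dtype=float)
        d = np.sqrt((centers ** 2).sum(axis=1))   # center distances (> R)
        u = centers / d[:, None]                  # unit view directions
        alpha = np.arcsin(R / d)                  # angular radius of each cap
        total = 0.0
        for i in range(len(centers)):
            # build an orthonormal frame (e1, e2) perpendicular to u[i]
            a = (np.array([0.0, 1.0, 0.0]) if abs(u[i, 0]) > 0.9
                 else np.array([1.0, 0.0, 0.0]))
            e1 = np.cross(u[i], a)
            e1 /= np.sqrt(np.dot(e1, e1))
            e2 = np.cross(u[i], e1)
            # directions uniformly distributed over sphere i's cap:
            # cos(theta) uniform on [cos(alpha), 1], phi uniform on [0, 2pi)
            cosb = 1.0 - np.random.rand(nsamples) * (1.0 - np.cos(alpha[i]))
            sinb = np.sqrt(1.0 - cosb ** 2)
            phi = 2.0 * np.pi * np.random.rand(nsamples)
            v = (cosb[:, None] * u[i]
                 + sinb[:, None] * (np.cos(phi)[:, None] * e1
                                    + np.sin(phi)[:, None] * e2))
            # a direction counts as hidden when it falls inside the cap
            # of a sphere whose center is nearer to the observer
            hidden = np.zeros(nsamples, dtype=bool)
            for j in range(len(centers)):
                if j != i and d[j] < d[i]:
                    hidden |= np.dot(v, u[j]) > np.cos(alpha[j])
            total += 1.0 - hidden.mean()
        return total

For the two example spheres quoted above,
visible_spheres(np.array([[0, 0, 5], [0, 0.3, 10]])) should come out
very close to 1.0 rather than 2.0, because under this model the nearer
sphere's angular disk happens to cover the farther one's completely.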
> I had a look at > > http://www.qhull.org/ > > but I am not 100% sure that this is the way to go. > Any suggestion is appreciated. > Many thanks > > Lorenzo > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From ralf.gommers at googlemail.com Sun Nov 6 12:49:26 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sun, 6 Nov 2011 18:49:26 +0100 Subject: [SciPy-User] Symbol not found: _aswfa_ In-Reply-To: <39AFF30B-797E-412C-8EF3-A94E4D681DBA@gmail.com> References: <39AFF30B-797E-412C-8EF3-A94E4D681DBA@gmail.com> Message-ID: On Sat, Nov 5, 2011 at 9:19 PM, Lynn Oliver wrote: > I followed these steps to build and install numpy and scipy on OS X 10.7.2 > (from Installing SciPy/Mac OS X - > ) > > $ export CC=gcc-4.2 > $ export CXX=g++-4.2 > $ export FFLAGS=-ff2c > > $ git clone https://github.com/numpy/numpy.git > $ git clone https://github.com/scipy/scipy.git > > $ python setup.py build > $ python setup.py install > > When I try: > from scipy.interpolate import interp1d > > I get: > ImportError: > dlopen(/Library/Python/2.7/site-packages/scipy/special/_cephes.so, 2): > Symbol not found: _aswfa_ > > Any ideas on how to resolve the problem? > That stackoverflow post that Johann linked to says that the aswfa symbol is only missing in the 32-bit version. Are you running 32-bit Python 2.7 and if so, is that on purpose? If you grab the right installer from python.org, you should get 64-bit by default. Ralf > -Lynn > > $ gcc --version > i686-apple-darwin11-llvm-gcc-4.2 (GCC) 4.2.1 (Based on Apple Inc. build > 5658) (LLVM build 2335.15.00) > > $ gfortran --version > GNU Fortran (GCC) 4.2.1 (Apple Inc. build 5666) (dot 3) > Copyright (C) 2007 Free Software Foundation, Inc. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From paustin at eos.ubc.ca Sun Nov 6 14:27:56 2011 From: paustin at eos.ubc.ca (Phil Austin) Date: Sun, 06 Nov 2011 11:27:56 -0800 Subject: [SciPy-User] [xpyx] Re: How to handle GRIB datasets? In-Reply-To: References: Message-ID: <4EB6DFBC.3060708@eos.ubc.ca> On 11-11-06 03:35 AM, klo uo wrote: > Just to correct myself, 'pygrib' now didn't crash IPython issuing same > command on same data: There's also pynio and pyngl: http://www.pyngl.ucar.edu/NioFormats.shtml From raycores at gmail.com Sun Nov 6 14:35:05 2011 From: raycores at gmail.com (Lynn Oliver) Date: Sun, 6 Nov 2011 11:35:05 -0800 Subject: [SciPy-User] Symbol not found: _aswfa_ In-Reply-To: References: <39AFF30B-797E-412C-8EF3-A94E4D681DBA@gmail.com> Message-ID: <8425C25E-3229-4138-BEA4-D055631394B4@gmail.com> That thread is also at [SciPy-User] cephes library issues: Symbol not found: _aswfa_ [SOLVED], and it seems the issue turned out to be the version of the fortran compiler that was installed. According to Installing SciPy/Mac OS X -, I need to use gfortran-lion-5666-3.pkg at Tools - R for Mac OS X - developer's page - GNU Fortan for Xcode, which is listed as v4.2.4. I did the install, but as you can see below, it is showing the version as 4.2.1, although the build number (5666) is correct. I'm not sure if this means I am using the correct version or not. I have XCode 4.2 installed, while the package is listed as requiring 4.1. I'm running the Python version that ships with Lion, which so far has been the only way to get the correct versions of tk/tcl, tkinter, numpy, and matplotlib all working on Lion without going to a non open-source distribution. 
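As a quick aside, a standard way to confirm which mode a given
interpreter actually started in is to check the pointer size; nothing
below is specific to the Lion setup:

    $ python -c "import sys; print(sys.maxsize > 2**32)"   # True -> 64-bit, False -> 32-bit

On a universal binary, the arch(1) utility can in principle launch a
chosen slice, e.g. arch -i386 /usr/bin/python2.7, which would allow
testing the 32-bit half directly.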
I had the environment variable set to select 32-bit mode (export VERSIONER_PYTHON_PREFER_32_BIT_=TRUE), but when I remove that I'm still getting 32-bit for some reason. Until I figure that out I can't check to see if the 64-bit version has the issue. It wouldn't matter that much anyway, because pyinstaller only works with 32-bit python. Lynn On Nov 6, 2011, at 9:49 AM, Ralf Gommers wrote: > > > On Sat, Nov 5, 2011 at 9:19 PM, Lynn Oliver wrote: > I followed these steps to build and install numpy and scipy on OS X 10.7.2 (from Installing SciPy/Mac OS X -) > > $ export CC=gcc-4.2 > $ export CXX=g++-4.2 > $ export FFLAGS=-ff2c > $ git clone https://github.com/numpy/numpy.git > $ git clone https://github.com/scipy/scipy.git > $ python setup.py build > $ python setup.py install > When I try: > from scipy.interpolate import interp1d > > I get: > ImportError: dlopen(/Library/Python/2.7/site-packages/scipy/special/_cephes.so, 2): Symbol not found: _aswfa_ > > Any ideas on how to resolve the problem? > > That stackoverflow post that Johann linked to says that the aswfa symbol is only missing in the 32-bit version. Are you running 32-bit Python 2.7 and if so, is that on purpose? If you grab the right installer from python.org, you should get 64-bit by default. > > Ralf > > > -Lynn > > $ gcc --version > i686-apple-darwin11-llvm-gcc-4.2 (GCC) 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2335.15.00) > > $ gfortran --version > GNU Fortran (GCC) 4.2.1 (Apple Inc. build 5666) (dot 3) > Copyright (C) 2007 Free Software Foundation, Inc. > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Sun Nov 6 16:28:17 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sun, 6 Nov 2011 22:28:17 +0100 Subject: [SciPy-User] Symbol not found: _aswfa_ In-Reply-To: <8425C25E-3229-4138-BEA4-D055631394B4@gmail.com> References: <39AFF30B-797E-412C-8EF3-A94E4D681DBA@gmail.com> <8425C25E-3229-4138-BEA4-D055631394B4@gmail.com> Message-ID: On Sun, Nov 6, 2011 at 8:35 PM, Lynn Oliver wrote: > That thread is also at [SciPy-User] cephes library issues: Symbol not > found: _aswfa_ [SOLVED], > and it seems the issue turned out to be the version of the fortran compiler > that was installed. > > According to Installing SciPy/Mac OS X -, > I need to use gfortran-lion-5666-3.pkg at Tools - R for Mac OS X - > developer's page - GNU Fortan for Xcode , > which is listed as v4.2.4. I did the install, but as you can see below, it > is showing the version as 4.2.1, although the build number (5666) is > correct. I'm not sure if this means I am using the correct version or not. > I have XCode 4.2 installed, while the package is listed as requiring 4.1. > That should be the correct gfortran, the incorrect version number is a known problem. Newer XCode shouldn't matter. > > I'm running the Python version that ships with Lion, which so far has been > the only way to get the correct versions of tk/tcl, tkinter, numpy, and > matplotlib all working on Lion without going to a non open-source > distribution. > > I had the environment variable set to select 32-bit mode (export > VERSIONER_PYTHON_PREFER_32_BIT_=TRUE), but when I remove that I'm still > getting 32-bit for some reason. Until I figure that out I can't check to > see if the 64-bit version has the issue. 
It wouldn't matter that much > anyway, because pyinstaller only works with 32-bit python. > That's odd. But anyway you need the 32-bit version of cephes in the universal binary to work, and it doesn't. This could be a distutils bug or something in the scipy build. Picking the non-default part of a universal binary is hardly tested at all. Could you file a bug please? Ralf > Lynn > > On Nov 6, 2011, at 9:49 AM, Ralf Gommers wrote: > > > > On Sat, Nov 5, 2011 at 9:19 PM, Lynn Oliver wrote: > >> I followed these steps to build and install numpy and scipy on OS X >> 10.7.2 (from Installing SciPy/Mac OS X - >> ) >> >> $ export CC=gcc-4.2 >> $ export CXX=g++-4.2 >> $ export FFLAGS=-ff2c >> >> $ git clone https://github.com/numpy/numpy.git >> $ git clone https://github.com/scipy/scipy.git >> >> $ python setup.py build >> $ python setup.py install >> >> When I try: >> from scipy.interpolate import interp1d >> >> I get: >> ImportError: >> dlopen(/Library/Python/2.7/site-packages/scipy/special/_cephes.so, 2): >> Symbol not found: _aswfa_ >> >> Any ideas on how to resolve the problem? >> > > That stackoverflow post that Johann linked to says that the aswfa symbol > is only missing in the 32-bit version. Are you running 32-bit Python 2.7 > and if so, is that on purpose? If you grab the right installer from > python.org, you should get 64-bit by default. > > Ralf > > >> -Lynn >> >> $ gcc --version >> i686-apple-darwin11-llvm-gcc-4.2 (GCC) 4.2.1 (Based on Apple Inc. build >> 5658) (LLVM build 2335.15.00) >> >> $ gfortran --version >> GNU Fortran (GCC) 4.2.1 (Apple Inc. build 5666) (dot 3) >> Copyright (C) 2007 Free Software Foundation, Inc. >> >> >> _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Sun Nov 6 16:49:27 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sun, 6 Nov 2011 22:49:27 +0100 Subject: [SciPy-User] scipy import problem in apache-mod_wsgi environment In-Reply-To: References: Message-ID: On Sat, Nov 5, 2011 at 8:52 PM, Myunghwa Hwang wrote: > Hello, list! > > First of all, this will be a long message, due to the complexity of our > environment. > So, please be patient with my question. > > I am trying to run a simple django application in a cluster environment. > And, my application hangs while it imports scipy.linalg, and both scipy > and apache do not write out error messages. > When I run my application in my local python shell, it imports > scipy.linalg. But, somehow it does not when it is run by apache. > So, after reading this message, please share any ideas about how to debug > this problem or new solutions to address this issue or deploy my > application. > > Now, let me explain our current setup. > 1. OS > -- The server is a compute cluster where each node runs centos 6 that was > installed from a clean version of centos6 minimal. > 2. Apache > -- Apache 2.2 is also manually installed from one of default linux > repository. > To be specific, it is installed from source code together with httpd-dev. > 3. Python > -- Python 2.7.2 is also installed from source code across all nodes in the > cluster. Its source code is downloaded from python.org's ftp. > 4. 
Python packages: nose, numpy, scipy > -- Nose 1.1.2 was downloaded from pypi.python.org and installed from its > source code. > -- numpy 1.6.1 was downloaded and installed from a linux repository. When > building numpy, gnu95 fortran complier was used. > -- To install scipy, we installed atlas-3.8.4, lapack-3.3.1, and blas from > their source code. > ----- atlas was from sourceforge's 3.8.4 stable version. To compile altas, > gcc was used. > ----- lapack and blas was obtained from netlib.org's repository. To > compile the package of lapack and blas, gforan was used. > ----- Finally, after exporting paths to blas, lapack, and atlas, > scipy-0.9.0 was installed from its source code. > scipy was obtained from sourceforge.net's repository. > > All of the above were installed in the same way across all nodes in our > cluster. > Since I am the only user of the cluster who needs to run python web > applications, > I installed python virtualenv package in my local directory. > Within my virtual environment, django-1.3 and pysal-1.2 (our own package) > were installed. > To deploy my web applications, we used mod_wsgi. > mod-wsgi was compiled with python-2.7.2 and loaded into apache-2.2. > My application is attached. Basically, it is 'hello world' application > that tests if numpy, scipy, and pysal can be imported. > In the attached file, lines 4-9 are just adding paths to django and pysal > so that apache knows where to find these packages. > Also, to let apache know where to find atlas-related packages, the path to > those packages was added to the LD_LIBRARY_PATH environment variable in the > /etc/sysconfig/httpd file. > > When I first ran my application, it just hung and wrote no message. > So, across scipy.linalg modules, I added print out statements to figure > out at which point the import was broken. > Here is the messages I got when I imported scipy.linalg in my local python > shell. > > 1. ######################## > 2. starting linalg.__init__ > 3. pre __init__.__doc__ > 4. pre __init__.__version__ > 5. pre __init__.misc > 6. pre __init__.basic > 7. ####################### > 8. Starting basic > 9. pre basic.flinalg > 10. pre basic.lapack > 11. pre basic.misc > 12. pre basic.scipy.linalg > 13. pre basic.decomp_svd > 14. pre __init__.decomp > 15. ################ > 16. starting decomp > 17. pre decomp.array et al. > 18. pre decomp.calc_lwork > 19. pre decomp.LinAlgError > 20. pre decomp.get_lapack_funcs > 21. pre decomp.get_blas_funcs > 22. #################### > 23. Starting blas > 24. pre blas.scipy.linalg.fblas > 25. pre blas.scipy.linalg.cblas > 26. pre __init__.decomp_lu > 27. pre __init__.decomp_cholesky > 28. pre __init__.decomp_qr > 29. ################# > 30. Starting special_matrices > 31. pre special_matrices.math > 32. pre special_matrices.np > 33. pre __init__.decomp_svd > 34. pre __init__.decomp_schur > 35. ################## > 36. starting schur... > 37. pre decomp_schur.misc > 38. pre decomp_schur.LinAlgError > 39. pre decomp_schur.get_lapack_funcs > 40. pre decomp_schur.eigvals:1320454147.23Fri Nov 4 17:49:07 2011 > 41. schur testing > 42. pre __init__.matfuncs > 43. ##################### > 44. Starting matfuncs > 45. pre matfuncs. asarray et al > 46. pre matfuncs.matrix > 47. pre matfuncs.np > 48. pre matfuncs.misc > 49. pre matfuncs.basic > 50. pre matfuncs.special_matrices > 51. pre matfuncs.decomp > 52. pre matfuncs.decomp_svd > 53. pre matfuncs.decomp_schur > 54. pre __init__.blas > 55. 
pre __init__.special_matrices > > When scipy.linalg is successfully imported, I should get these messages. > But, when my web application tried to import scipy.linalg, the output > messages stop at line 41. > At line 41, decomp_schur.py tries to import decomp.py. Since decomp.py was > already imported at line 16, scipy ignores it and continues to import other > modules in my local shell. > But, somehow, in apache-mod_wsgi environment, scipy failed to ignore or > reload decomp.py and seems to kill my web application. > This is really odd, because python does not give any message about this > error and neither does apache. apache just hangs without sending out any > response. > Since lapack and blas functions were imported successfully, the problem > seems not related to path setup. > > If anyone in the list has any insights into or experience into this kind > of symptom, > please share your insights and experience. In particular, debugging > techniques or less-known installation/compilation problems would be helpful. > I feel like I am at a dead end. So, please help me. > > Thanks for reading this post. > I will look forward to yo > Looking at linalg/__init__.py the register_func calls are an obvious candidate for causing this strange issue. of the few functions left in decomp_schur.py, norm() and dot() have both gone through register_func, which messes with the call stack. Could you comment out all calls to that function and see if that helps? Replace functions that then become unavailable with numpy ones of the same name where needed. Cheers, Ralf > > > -- > Myung-Hwa Hwang > GeoDa Center > School of Geographical Sciences and Urban Planning > Arizona State University > mhwang4 at gmail.com or Myunghwa.Hwang at asu.edu > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jrocher at enthought.com Sun Nov 6 17:40:34 2011 From: jrocher at enthought.com (Jonathan Rocher) Date: Sun, 6 Nov 2011 16:40:34 -0600 Subject: [SciPy-User] [xpyx] Re: How to handle GRIB datasets? In-Reply-To: <4EB6DFBC.3060708@eos.ubc.ca> References: <4EB6DFBC.3060708@eos.ubc.ca> Message-ID: Hi all, If you can use GPL packages, I would also mention GribAPI, which is developed by the European Center For Medium Range weather Forecast: http://www.ecmwf.int/products/data/software/grib_api.html It is written in C with bindings for python and that's what they use all the time. HTH Jonathan On Sun, Nov 6, 2011 at 1:27 PM, Phil Austin wrote: > On 11-11-06 03:35 AM, klo uo wrote: > > Just to correct myself, 'pygrib' now didn't crash IPython issuing same > > command on same data: > There's also pynio and pyngl: > > http://www.pyngl.ucar.edu/NioFormats.shtml > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Jonathan Rocher, PhD Scientific software developer Enthought, Inc. jrocher at enthought.com 1-512-536-1057 http://www.enthought.com -------------- next part -------------- An HTML attachment was scrubbed... 
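A configuration detail that may also be relevant here: by default,
mod_wsgi runs each WSGI application in a Python sub-interpreter, and C
extension modules that use the simplified GIL state API (numpy among
them) are known to be able to deadlock there on import. A sketch of the
usual workaround, which forces the application into the main
interpreter, is below; the daemon-process name, paths, and counts are
placeholders, not taken from the poster's actual setup:

    # httpd.conf (sketch only; names and paths are hypothetical)
    WSGIDaemonProcess geodaweb processes=2 threads=15
    WSGIProcessGroup geodaweb
    # run in the first (main) interpreter rather than a sub-interpreter:
    WSGIApplicationGroup %{GLOBAL}
    WSGIScriptAlias /geodaweb /home/user/apps/geodaweb/app.wsgi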
From raycores at gmail.com  Sun Nov  6 18:56:59 2011
From: raycores at gmail.com (Lynn Oliver)
Date: Sun, 6 Nov 2011 15:56:59 -0800
Subject: [SciPy-User] Symbol not found: _aswfa_
In-Reply-To:
References: <39AFF30B-797E-412C-8EF3-A94E4D681DBA@gmail.com>
 <8425C25E-3229-4138-BEA4-D055631394B4@gmail.com>
Message-ID: <0D4F8968-0B55-4CC0-88B0-5E367865B25F@gmail.com>

Done. http://projects.scipy.org/scipy/ticket/1556

On Nov 6, 2011, at 1:28 PM, Ralf Gommers wrote:
> That's odd. But anyway you need the 32-bit version of cephes in the
> universal binary to work, and it doesn't. This could be a distutils
> bug or something in the scipy build. Picking the non-default part of
> a universal binary is hardly tested at all. Could you file a bug
> please?
>
> Ralf

From adam at lambdafoundry.com  Sun Nov  6 20:15:30 2011
From: adam at lambdafoundry.com (Adam Klein)
Date: Sun, 6 Nov 2011 20:15:30 -0500
Subject: [SciPy-User] Symbol not found: _aswfa_
In-Reply-To: <0D4F8968-0B55-4CC0-88B0-5E367865B25F@gmail.com>
References: <39AFF30B-797E-412C-8EF3-A94E4D681DBA@gmail.com>
 <8425C25E-3229-4138-BEA4-D055631394B4@gmail.com>
 <0D4F8968-0B55-4CC0-88B0-5E367865B25F@gmail.com>
Message-ID:

As an aside, I just wanted to link this blog entry, which helped me set
up my environment:
http://www.thisisthegreenroom.com/2011/installing-python-numpy-scipy-matplotlib-and-ipython-on-lion/

On Sun, Nov 6, 2011 at 6:56 PM, Lynn Oliver wrote:
> Done. http://projects.scipy.org/scipy/ticket/1556
>
> On Nov 6, 2011, at 1:28 PM, Ralf Gommers wrote:
> That's odd. But anyway you need the 32-bit version of cephes in the
> universal binary to work, and it doesn't. This could be a distutils
> bug or something in the scipy build. Picking the non-default part of
> a universal binary is hardly tested at all. Could you file a bug
> please?
>
> Ralf

From bsouthey at gmail.com  Sun Nov  6 21:56:31 2011
From: bsouthey at gmail.com (Bruce Southey)
Date: Sun, 6 Nov 2011 20:56:31 -0600
Subject: [SciPy-User] scipy import problem in apache-mod_wsgi environment
In-Reply-To:
References:
Message-ID:

On Sun, Nov 6, 2011 at 3:49 PM, Ralf Gommers wrote:
>
> On Sat, Nov 5, 2011 at 8:52 PM, Myunghwa Hwang wrote:
>>
>> Hello, list!
>> First of all, this will be a long message, due to the complexity of our
>> environment.
>> So, please be patient with my question.
>> I am trying to run a simple django application in a cluster environment.
>> And, my application hangs while it imports scipy.linalg, and both scipy
>> and apache do not write out error messages.
>> When I run my application in my local python shell, it imports
>> scipy.linalg. But, somehow it does not when it is run by apache.
>> So, after reading this message, please share any ideas about how to debug
>> this problem or new solutions to address this issue or deploy my
>> application.
>> Now, let me explain our current setup.
>> 1. OS
>> -- The server is a compute cluster where each node runs centos 6 that was
>> installed from a clean version of centos6 minimal.
>> 2. Apache
>> -- Apache 2.2 is also manually installed from one of default linux
>> repository.
>> To be specific, it is installed from source code together with
>> httpd-dev.
>> 3.
Python >> -- Python 2.7.2 is also installed from source code across all nodes in the >> cluster. Its source code is downloaded from python.org's ftp. >> 4. Python packages: nose, numpy, scipy >> -- Nose 1.1.2 was downloaded from pypi.python.org and installed from its >> source code. >> -- numpy 1.6.1 was downloaded and installed from a linux repository. When >> building numpy, gnu95 fortran complier was used. >> -- To install scipy, we installed atlas-3.8.4, lapack-3.3.1, and blas from >> their source code. >> ----- atlas was from sourceforge's 3.8.4 stable version. To compile altas, >> gcc was used. >> ----- lapack and blas was obtained from netlib.org's repository. To >> compile the package of lapack and blas, gforan was used. >> ----- Finally, after exporting paths to blas, lapack, and atlas, >> scipy-0.9.0 was installed from its source code. >> ? ? ? scipy was obtained from sourceforge.net's repository. >> All of the above were installed in the same way across all nodes in our >> cluster. >> Since I am the only user of the cluster who needs to run python web >> applications, >> I installed python virtualenv package in my local directory. >> Within my virtual environment, django-1.3 and pysal-1.2 (our own package) >> were installed. >> To deploy my web applications, we used mod_wsgi. >> mod-wsgi was compiled with python-2.7.2 and loaded into apache-2.2. >> My application is attached. Basically, it is 'hello world' application >> that tests if numpy, scipy, and pysal can be imported. >> In the attached file, lines 4-9 are just adding paths to django and pysal >> so that apache knows where to find these packages. >> Also, to let apache know where to find atlas-related packages, the path to >> those packages was added to the LD_LIBRARY_PATH environment variable in the >> /etc/sysconfig/httpd file. >> When I first ran my application, it just hung and wrote no message. >> So, across scipy.linalg modules, I added print out statements to figure >> out at which point the import was broken. >> Here is the messages I got when I imported scipy.linalg in my local python >> shell. >> >> ######################## >> starting linalg.__init__ >> pre __init__.__doc__ >> pre __init__.__version__ >> pre __init__.misc >> pre __init__.basic >> ####################### >> Starting basic >> pre basic.flinalg >> pre basic.lapack >> pre basic.misc >> pre basic.scipy.linalg >> pre basic.decomp_svd >> pre __init__.decomp >> ################ >> starting decomp >> pre decomp.array et al. >> pre decomp.calc_lwork >> pre decomp.LinAlgError >> pre decomp.get_lapack_funcs >> pre decomp.get_blas_funcs >> #################### >> Starting blas >> pre blas.scipy.linalg.fblas >> pre blas.scipy.linalg.cblas >> pre __init__.decomp_lu >> pre __init__.decomp_cholesky >> pre __init__.decomp_qr >> ################# >> Starting special_matrices >> pre special_matrices.math >> pre?special_matrices.np >> pre __init__.decomp_svd >> pre __init__.decomp_schur >> ################## >> starting schur... >> pre decomp_schur.misc >> pre decomp_schur.LinAlgError >> pre decomp_schur.get_lapack_funcs >> pre decomp_schur.eigvals:1320454147.23Fri Nov ?4 17:49:07 2011 >> schur testing >> pre __init__.matfuncs >> ##################### >> Starting matfuncs >> pre matfuncs. 
asarray et al >> pre matfuncs.matrix >> pre?matfuncs.np >> pre matfuncs.misc >> pre matfuncs.basic >> pre matfuncs.special_matrices >> pre matfuncs.decomp >> pre matfuncs.decomp_svd >> pre matfuncs.decomp_schur >> pre __init__.blas >> pre __init__.special_matrices >> >> When scipy.linalg is successfully imported, I should get these messages. >> But, when my web application tried to import scipy.linalg, the output >> messages stop at line 41. >> At line 41, decomp_schur.py tries to import decomp.py. Since decomp.py was >> already imported at line 16, scipy ignores it and continues to import other >> modules in my local shell. >> But, somehow, in apache-mod_wsgi environment, scipy failed to ignore or >> reload decomp.py and seems to kill my web application. >> This is really odd, because python does not give any message about this >> error and neither does apache. apache just hangs without sending out any >> response. >> Since lapack and blas functions were imported successfully, the problem >> seems not related to path setup. >> If anyone in the list has any insights into or experience into this kind >> of symptom, >> please share your insights and experience. In particular, debugging >> techniques or less-known installation/compilation problems would be helpful. >> I feel like I am at a dead end. So, please help me. >> Thanks for reading this post. >> I will look forward to yo > > Looking at linalg/__init__.py the register_func calls are an obvious > candidate for causing this strange issue. of the few functions left in > decomp_schur.py, norm() and dot() have both gone through register_func, > which messes with the call stack. Could you comment out all calls to that > function and see if that helps? Replace functions that then become > unavailable with numpy ones of the same name where needed. > > Cheers, > Ralf > > >> >> >> -- >> Myung-Hwa Hwang >> GeoDa Center >> School of Geographical Sciences and Urban Planning >> Arizona State University >> mhwang4 at gmail.com or Myunghwa.Hwang at asu.edu >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > Also, if you provide some installation instruction, I might try to run it. But I do wonder if this is due to your virtual environment and associated paths once called via Apache. Bruce From hayne at sympatico.ca Sat Nov 5 16:14:48 2011 From: hayne at sympatico.ca (hayne at sympatico.ca) Date: Sat, 5 Nov 2011 16:14:48 -0400 Subject: [SciPy-User] scipy import problem in apache-mod_wsgi environment In-Reply-To: References: Message-ID: I would try putting print statements inside "decomp_schur.py" since that is the module that you said is causing problems. Print out the contents of the dictionary sys.modules just before the import of decomp in "decomp_schur.py". Is 'decomp' in the dictionary? What happens if you comment-out the import of decomp in "decomp_schur.py" ? -- Cameron Hayne macdev at hayne.net On 5-Nov-11, at 3:59 PM, Myunghwa Hwang wrote: > I am trying to run a simple django application in a cluster > environment. > And, my application hangs while it imports scipy.linalg, and both > scipy and apache do not write out error messages. > When I run my application in my local python shell, it imports > scipy.linalg. But, somehow it does not when it is run by apache. > So, after reading this message, please share any ideas about how to > debug this problem or new solutions to address this issue or deploy > my application. 
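Concretely, the check Cameron suggests might look like the lines below,
added temporarily to scipy/linalg/decomp_schur.py just above its import
of decomp (the exact sys.modules key is an assumption; printing a
filtered list avoids dumping every loaded module):

    # temporary debugging lines for scipy/linalg/decomp_schur.py
    import sys
    print('scipy.linalg.decomp' in sys.modules)            # already imported?
    print(sorted(m for m in sys.modules if 'scipy.linalg' in m))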
> > Now, let me explain our current setup. > 1. OS > -- The server is a compute cluster where each node runs centos 6 > that was installed from a clean version of centos6 minimal. > 2. Apache > -- Apache 2.2 was also manually installed from one of default linux > repository. > To be specific, it was installed from its source code together > with httpd-dev. > 3. Python > -- Python 2.7.2 was also installed from its source code across all > nodes in the cluster. Its source code was downloaded from > python.org's ftp. > 4. Python packages: nose, numpy, scipy > -- Nose 1.1.2 was downloaded from pypi.python.org and installed from > its source code. > -- numpy 1.6.1 was downloaded and installed from a linux repository. > When building numpy, gnu95 fortran complier was used. > -- To install scipy, we installed atlas-3.8.4, lapack-3.3.1, and > blas from their source code. > ----- atlas was from sourceforge's 3.8.4 stable version. To compile > altas, gcc was used. > ----- lapack and blas was obtained from netlib.org's repository. To > compile the package of lapack and blas, gforan was used. > ----- Finally, after exporting paths to blas, lapack, and atlas, > scipy-0.9.0 was installed from its source code. > scipy was obtained from sourceforge.net's repository. > A note that contains the above information about software > installation is attached. > > All of the above were installed in the same way across all nodes in > our cluster. > Since I am the only user of the cluster who needs to run python web > applications, > I installed python virtualenv package in my local directory. > Within my virtual environment, django-1.3 and pysal-1.2 (our own > package) were installed. > To deploy my web applications, we used mod_wsgi. > mod-wsgi was compiled with python-2.7.2 and loaded into apache-2.2. > My application is attached. Basically, it is a 'hello world' > application that tests if numpy, scipy, and pysal can be imported. > In the attached file, lines 4-9 are just adding paths to django and > pysal so that apache knows where to find these packages. > Also, to let apache know where to find atlas-related packages, the > path to those packages was added to the LD_LIBRARY_PATH environment > variable in the /etc/sysconfig/httpd file. > > When I first ran my application, it just hung and wrote no message. > So, across scipy.linalg modules, I added print out statements to > figure out at which point the import was broken. > Here is the messages I got when I imported scipy.linalg in my local > python shell. > ? ######################## > ? starting linalg.__init__ > ? pre __init__.__doc__ > ? pre __init__.__version__ > ? pre __init__.misc > ? pre __init__.basic > ? ####################### > ? Starting basic > ? pre basic.flinalg > ? pre basic.lapack > ? pre basic.misc > ? pre basic.scipy.linalg > ? pre basic.decomp_svd > ? pre __init__.decomp > ? ################ > ? starting decomp > ? pre decomp.array et al. > ? pre decomp.calc_lwork > ? pre decomp.LinAlgError > ? pre decomp.get_lapack_funcs > ? pre decomp.get_blas_funcs > ? #################### > ? Starting blas > ? pre blas.scipy.linalg.fblas > ? pre blas.scipy.linalg.cblas > ? pre __init__.decomp_lu > ? pre __init__.decomp_cholesky > ? pre __init__.decomp_qr > ? ################# > ? Starting special_matrices > ? pre special_matrices.math > ? pre special_matrices.np > ? pre __init__.decomp_svd > ? pre __init__.decomp_schur > ? ################## > ? starting schur... > ? pre decomp_schur.misc > ? pre decomp_schur.LinAlgError > ? 
pre decomp_schur.get_lapack_funcs
>   pre decomp_schur.eigvals:1320454147.23Fri Nov  4 17:49:07 2011
>   schur testing
>   pre __init__.matfuncs
>   #####################
>   Starting matfuncs
>   pre matfuncs. asarray et al
>   pre matfuncs.matrix
>   pre matfuncs.np
>   pre matfuncs.misc
>   pre matfuncs.basic
>   pre matfuncs.special_matrices
>   pre matfuncs.decomp
>   pre matfuncs.decomp_svd
>   pre matfuncs.decomp_schur
>   pre __init__.blas
>   pre __init__.special_matrices
> When scipy.linalg is successfully imported, I should get these messages.
> But, when my web application tried to import scipy.linalg, the output
> messages stop at line 41.
> At line 41, decomp_schur.py tries to import decomp.py. Since
> decomp.py was already imported at line 16, scipy ignores it and
> continues to import other modules in my local shell.
> But, somehow, in apache-mod_wsgi environment, scipy failed to ignore
> or reload decomp.py and seems to kill my web application.
> This is really odd, because python does not give any message about
> this error and neither does apache. apache just hangs without
> sending out any response.
> Since lapack and blas functions were imported successfully, the
> problem seems not related to path setup.
>
> If anyone in the list has any insights into or experience into this
> kind of symptom,
> please share your insights and experience. In particular, debugging
> techniques or less-known installation/compilation problems would be
> helpful.
> I feel like I am at a dead end. So, please help me.
>
> Thanks for reading this post.
> I will look forward to your responses.
>
> -- Myung-Hwa Hwang
>
> --
> Myung-Hwa Hwang
> GeoDa Center
> School of Geographical Sciences and Urban Planning
> Arizona State University
> mhwang4 at gmail.com or Myunghwa.Hwang at asu.edu

From mhwang4 at gmail.com  Sun Nov  6 23:43:48 2011
From: mhwang4 at gmail.com (Myunghwa Hwang)
Date: Sun, 6 Nov 2011 21:43:48 -0700
Subject: [SciPy-User] scipy import problem in apache-mod_wsgi environment
In-Reply-To:
References:
Message-ID:

Hi, Hayne!

Thanks for your answer. After trying out what you suggested (that is,
commenting out the import of decomp), I found out that the import of
decomp was not the problem. In decomp_schur, there are two lines that
compute machine-specific rounding-error constants, as follows:

    eps = np.finfo(float).eps
    feps = numpy.finfo(single).eps

When scipy reaches the above lines, my application hangs. I found a web
document where the author encountered the same problem with these lines,
but in a different context:
http://stackoverflow.com/questions/7592565/when-embedding-cpython-in-java-why-does-this-hang
The discussion in that document is not applicable to my problem.

Also, the np.finfo statements seem to exist in multiple modules of
scipy. Without addressing all the related modules manually, would there
be any other solution?

Thanks!
--Myung-Hwa

On Sat, Nov 5, 2011 at 1:14 PM, wrote:
> I would try putting print statements inside "decomp_schur.py" since that
> is the module that you said is causing problems.
> Print out the contents of the dictionary sys.modules just before the
> import of decomp in "decomp_schur.py". Is 'decomp' in the dictionary?
> What happens if you comment-out the import of decomp in "decomp_schur.py"
> ?
> --
> Cameron Hayne
> macdev at hayne.net
>
> On 5-Nov-11, at 3:59 PM, Myunghwa Hwang wrote:
>> I am trying to run a simple django application in a cluster environment.
>> And, my application hangs while it imports scipy.linalg, and both scipy >> and apache do not write out error messages. >> When I run my application in my local python shell, it imports >> scipy.linalg. But, somehow it does not when it is run by apache. >> So, after reading this message, please share any ideas about how to debug >> this problem or new solutions to address this issue or deploy my >> application. >> >> Now, let me explain our current setup. >> 1. OS >> -- The server is a compute cluster where each node runs centos 6 that was >> installed from a clean version of centos6 minimal.2. Apache >> >> -- Apache 2.2 was also manually installed from one of default linux >> repository. To be specific, it was installed from its source code together >> with httpd-dev. >> 3. Python >> -- Python 2.7.2 was also installed from its source code across all nodes >> in the cluster. Its source code was downloaded from python.org's ftp. >> 4. Python packages: nose, numpy, scipy >> -- Nose 1.1.2 was downloaded from pypi.python.org and installed from its >> source code. >> -- numpy 1.6.1 was downloaded and installed from a linux repository. When >> building numpy, gnu95 fortran complier was used. >> -- To install scipy, we installed atlas-3.8.4, lapack-3.3.1, and blas >> from their source code.----- atlas was from sourceforge's 3.8.4 stable >> version. To compile altas, gcc was used. >> >> ----- lapack and blas was obtained from netlib.org's repository. To >> compile the package of lapack and blas, gforan was used. >> ----- Finally, after exporting paths to blas, lapack, and atlas, >> scipy-0.9.0 was installed from its source code. >> scipy was obtained from sourceforge.net's repository. >> A note that contains the above information about software installation is >> attached. >> >> All of the above were installed in the same way across all nodes in our >> cluster. >> Since I am the only user of the cluster who needs to run python web >> applications, >> I installed python virtualenv package in my local directory. >> Within my virtual environment, django-1.3 and pysal-1.2 (our own package) >> were installed. >> To deploy my web applications, we used mod_wsgi. >> mod-wsgi was compiled with python-2.7.2 and loaded into apache-2.2. >> My application is attached. Basically, it is a 'hello world' application >> that tests if numpy, scipy, and pysal can be imported. >> In the attached file, lines 4-9 are just adding paths to django and pysal >> so that apache knows where to find these packages. >> Also, to let apache know where to find atlas-related packages, the path >> to those packages was added to the LD_LIBRARY_PATH environment variable in >> the /etc/sysconfig/httpd file. >> >> When I first ran my application, it just hung and wrote no message. >> So, across scipy.linalg modules, I added print out statements to figure >> out at which point the import was broken. >> Here is the messages I got when I imported scipy.linalg in my local >> python shell. >> ? ######################## >> ? starting linalg.__init__ >> ? pre __init__.__doc__ >> ? pre __init__.__version__ >> ? pre __init__.misc >> ? pre __init__.basic >> ? ####################### >> ? Starting basic >> ? pre basic.flinalg >> ? pre basic.lapack >> ? pre basic.misc >> ? pre basic.scipy.linalg >> ? pre basic.decomp_svd >> ? pre __init__.decomp >> ? ################ >> ? starting decomp >> ? pre decomp.array et al. >> ? pre decomp.calc_lwork >> ? pre decomp.LinAlgError >> ? pre decomp.get_lapack_funcs >> ? pre decomp.get_blas_funcs >> ? 
#################### >> ? Starting blas >> ? pre blas.scipy.linalg.fblas >> ? pre blas.scipy.linalg.cblas >> ? pre __init__.decomp_lu >> ? pre __init__.decomp_cholesky >> ? pre __init__.decomp_qr >> ? ################# >> ? Starting special_matrices >> ? pre special_matrices.math >> ? pre special_matrices.np >> ? pre __init__.decomp_svd >> ? pre __init__.decomp_schur >> ? ################## >> ? starting schur... >> ? pre decomp_schur.misc >> ? pre decomp_schur.LinAlgError >> ? pre decomp_schur.get_lapack_funcs >> ? pre decomp_schur.eigvals:**1320454147.23Fri Nov 4 17:49:07 2011 >> ? schur testing >> ? pre __init__.matfuncs >> ? ##################### >> ? Starting matfuncs >> ? pre matfuncs. asarray et al >> ? pre matfuncs.matrix >> ? pre matfuncs.np >> ? pre matfuncs.misc >> ? pre matfuncs.basic >> ? pre matfuncs.special_matrices >> ? pre matfuncs.decomp >> ? pre matfuncs.decomp_svd >> ? pre matfuncs.decomp_schur >> ? pre __init__.blas >> ? pre __init__.special_matrices >> When scipy.linalg is successfully imported, I should get these messages. >> But, when my web application tried to import scipy.linalg, the output >> messages stop at line 41. >> At line 41, decomp_schur.py tries to import decomp.py. Since decomp.py >> was already imported at line 16, scipy ignores it and continues to import >> other modules in my local shell. >> But, somehow, in apache-mod_wsgi environment, scipy failed to ignore or >> reload decomp.py and seems to kill my web application. >> This is really odd, because python does not give any message about this >> error and neither does apache. apache just hangs without sending out any >> response. >> Since lapack and blas functions were imported successfully, the problem >> seems not related to path setup. >> >> If anyone in the list has any insights into or experience into this kind >> of symptom, >> please share your insights and experience. In particular, debugging >> techniques or less-known installation/compilation problems would be helpful. >> I feel like I am at a dead end. So, please help me. >> >> Thanks for reading this post. >> I will look forward to your responses. >> >> -- Myung-Hwa Hwang >> >> -- >> Myung-Hwa Hwang >> GeoDa Center >> School of Geographical Sciences and Urban Planning >> Arizona State University >> mhwang4 at gmail.com or Myunghwa.Hwang at asu.edu >> > > > > > -- Myung-Hwa Hwang GeoDa Center School of Geographical Sciences and Urban Planning Arizona State University mhwang4 at gmail.com or Myunghwa.Hwang at asu.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From mhwang4 at gmail.com Sun Nov 6 23:51:32 2011 From: mhwang4 at gmail.com (Myunghwa Hwang) Date: Sun, 6 Nov 2011 21:51:32 -0700 Subject: [SciPy-User] scipy import problem in apache-mod_wsgi environment In-Reply-To: References: Message-ID: Ralf and Bruce, Thanks for your answer. I just found that the problem area is the following lines: eps = np.finfo(float).eps feps = np.finfo(single).eps When scipy hits these lines, my app hangs. Do you have any ideas about why these lines raise the issue? Thanks, Myung-Hwa On Sun, Nov 6, 2011 at 2:49 PM, Ralf Gommers wrote: > > > On Sat, Nov 5, 2011 at 8:52 PM, Myunghwa Hwang wrote: > >> Hello, list! >> >> First of all, this will be a long message, due to the complexity of our >> environment. >> So, please be patient with my question. >> >> I am trying to run a simple django application in a cluster environment. 
>> And, my application hangs while it imports scipy.linalg, and both scipy >> and apache do not write out error messages. >> When I run my application in my local python shell, it imports >> scipy.linalg. But, somehow it does not when it is run by apache. >> So, after reading this message, please share any ideas about how to debug >> this problem or new solutions to address this issue or deploy my >> application. >> >> Now, let me explain our current setup. >> 1. OS >> -- The server is a compute cluster where each node runs centos 6 that was >> installed from a clean version of centos6 minimal. >> 2. Apache >> -- Apache 2.2 is also manually installed from one of default linux >> repository. >> To be specific, it is installed from source code together with >> httpd-dev. >> 3. Python >> -- Python 2.7.2 is also installed from source code across all nodes in >> the cluster. Its source code is downloaded from python.org's ftp. >> 4. Python packages: nose, numpy, scipy >> -- Nose 1.1.2 was downloaded from pypi.python.org and installed from its >> source code. >> -- numpy 1.6.1 was downloaded and installed from a linux repository. When >> building numpy, gnu95 fortran complier was used. >> -- To install scipy, we installed atlas-3.8.4, lapack-3.3.1, and blas >> from their source code. >> ----- atlas was from sourceforge's 3.8.4 stable version. To compile >> altas, gcc was used. >> ----- lapack and blas was obtained from netlib.org's repository. To >> compile the package of lapack and blas, gforan was used. >> ----- Finally, after exporting paths to blas, lapack, and atlas, >> scipy-0.9.0 was installed from its source code. >> scipy was obtained from sourceforge.net's repository. >> >> All of the above were installed in the same way across all nodes in our >> cluster. >> Since I am the only user of the cluster who needs to run python web >> applications, >> I installed python virtualenv package in my local directory. >> Within my virtual environment, django-1.3 and pysal-1.2 (our own package) >> were installed. >> To deploy my web applications, we used mod_wsgi. >> mod-wsgi was compiled with python-2.7.2 and loaded into apache-2.2. >> My application is attached. Basically, it is 'hello world' application >> that tests if numpy, scipy, and pysal can be imported. >> In the attached file, lines 4-9 are just adding paths to django and pysal >> so that apache knows where to find these packages. >> Also, to let apache know where to find atlas-related packages, the path >> to those packages was added to the LD_LIBRARY_PATH environment variable in >> the /etc/sysconfig/httpd file. >> >> When I first ran my application, it just hung and wrote no message. >> So, across scipy.linalg modules, I added print out statements to figure >> out at which point the import was broken. >> Here is the messages I got when I imported scipy.linalg in my local >> python shell. >> >> 1. ######################## >> 2. starting linalg.__init__ >> 3. pre __init__.__doc__ >> 4. pre __init__.__version__ >> 5. pre __init__.misc >> 6. pre __init__.basic >> 7. ####################### >> 8. Starting basic >> 9. pre basic.flinalg >> 10. pre basic.lapack >> 11. pre basic.misc >> 12. pre basic.scipy.linalg >> 13. pre basic.decomp_svd >> 14. pre __init__.decomp >> 15. ################ >> 16. starting decomp >> 17. pre decomp.array et al. >> 18. pre decomp.calc_lwork >> 19. pre decomp.LinAlgError >> 20. pre decomp.get_lapack_funcs >> 21. pre decomp.get_blas_funcs >> 22. #################### >> 23. Starting blas >> 24. 
pre blas.scipy.linalg.fblas >> 25. pre blas.scipy.linalg.cblas >> 26. pre __init__.decomp_lu >> 27. pre __init__.decomp_cholesky >> 28. pre __init__.decomp_qr >> 29. ################# >> 30. Starting special_matrices >> 31. pre special_matrices.math >> 32. pre special_matrices.np >> 33. pre __init__.decomp_svd >> 34. pre __init__.decomp_schur >> 35. ################## >> 36. starting schur... >> 37. pre decomp_schur.misc >> 38. pre decomp_schur.LinAlgError >> 39. pre decomp_schur.get_lapack_funcs >> 40. pre decomp_schur.eigvals:1320454147.23Fri Nov 4 17:49:07 2011 >> 41. schur testing >> 42. pre __init__.matfuncs >> 43. ##################### >> 44. Starting matfuncs >> 45. pre matfuncs. asarray et al >> 46. pre matfuncs.matrix >> 47. pre matfuncs.np >> 48. pre matfuncs.misc >> 49. pre matfuncs.basic >> 50. pre matfuncs.special_matrices >> 51. pre matfuncs.decomp >> 52. pre matfuncs.decomp_svd >> 53. pre matfuncs.decomp_schur >> 54. pre __init__.blas >> 55. pre __init__.special_matrices >> >> When scipy.linalg is successfully imported, I should get these messages. >> But, when my web application tried to import scipy.linalg, the output >> messages stop at line 41. >> At line 41, decomp_schur.py tries to import decomp.py. Since decomp.py >> was already imported at line 16, scipy ignores it and continues to import >> other modules in my local shell. >> But, somehow, in apache-mod_wsgi environment, scipy failed to ignore or >> reload decomp.py and seems to kill my web application. >> This is really odd, because python does not give any message about this >> error and neither does apache. apache just hangs without sending out any >> response. >> Since lapack and blas functions were imported successfully, the problem >> seems not related to path setup. >> >> If anyone in the list has any insights into or experience into this kind >> of symptom, >> please share your insights and experience. In particular, debugging >> techniques or less-known installation/compilation problems would be helpful. >> I feel like I am at a dead end. So, please help me. >> >> Thanks for reading this post. >> I will look forward to yo >> > > Looking at linalg/__init__.py the register_func calls are an obvious > candidate for causing this strange issue. of the few functions left in > decomp_schur.py, norm() and dot() have both gone through register_func, > which messes with the call stack. Could you comment out all calls to that > function and see if that helps? Replace functions that then become > unavailable with numpy ones of the same name where needed. > > Cheers, > Ralf > > > >> >> >> -- >> Myung-Hwa Hwang >> GeoDa Center >> School of Geographical Sciences and Urban Planning >> Arizona State University >> mhwang4 at gmail.com or Myunghwa.Hwang at asu.edu >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- Myung-Hwa Hwang GeoDa Center School of Geographical Sciences and Urban Planning Arizona State University mhwang4 at gmail.com or Myunghwa.Hwang at asu.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From klonuo at gmail.com Mon Nov 7 02:58:45 2011 From: klonuo at gmail.com (klo uo) Date: Mon, 7 Nov 2011 08:58:45 +0100 Subject: [SciPy-User] [xpyx] Re: How to handle GRIB datasets? 
In-Reply-To: References: <4EB6DFBC.3060708@eos.ubc.ca> Message-ID: 

'pygrib' [http://code.google.com/p/pygrib/] is an interface to that same API.

PyNIO seems more robust, with support for additional formats like
netCDF, HDF, shapefiles, ...

On Sun, Nov 6, 2011 at 11:40 PM, Jonathan Rocher wrote:
> Hi all,
>
> If you can use GPL packages, I would also mention GribAPI, which is
> developed by the European Center For Medium Range weather Forecast:
> http://www.ecmwf.int/products/data/software/grib_api.html
> It is written in C with bindings for python and that's what they use all the
> time.
>
> HTH
> Jonathan
>

From jrocher at enthought.com  Mon Nov  7 10:30:05 2011
From: jrocher at enthought.com (Jonathan Rocher)
Date: Mon, 7 Nov 2011 09:30:05 -0600
Subject: [SciPy-User] [xpyx] Re: How to handle GRIB datasets?
In-Reply-To: References: <4EB6DFBC.3060708@eos.ubc.ca> Message-ID: 

Interesting. Did you test the robustness on your data, or are you referring
to it being a one-stop solution for multiple file formats?

On Mon, Nov 7, 2011 at 1:58 AM, klo uo wrote:
> 'pygrib' [http://code.google.com/p/pygrib/] is an interface to that same API.
>
> PyNIO seems more robust, with support for additional formats like
> netCDF, HDF, shapefiles, ...
>
> On Sun, Nov 6, 2011 at 11:40 PM, Jonathan Rocher
> wrote:
> > Hi all,
> >
> > If you can use GPL packages, I would also mention GribAPI, which is
> > developed by the European Center For Medium Range weather Forecast:
> > http://www.ecmwf.int/products/data/software/grib_api.html
> > It is written in C with bindings for python and that's what they use all
> > the
> > time.
>> > >> > HTH >> > Jonathan >> > >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > > > -- > Jonathan Rocher, PhD > Scientific software developer > Enthought, Inc. > jrocher at enthought.com > 1-512-536-1057 > http://www.enthought.com > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From ralf.gommers at googlemail.com Mon Nov 7 13:47:28 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Mon, 7 Nov 2011 19:47:28 +0100 Subject: [SciPy-User] scipy import problem in apache-mod_wsgi environment In-Reply-To: References: Message-ID: On Mon, Nov 7, 2011 at 5:43 AM, Myunghwa Hwang wrote: > Hi, Hayne! > > Thanks for your answer. > After trying out what you suggested (that is, commenting out the import of > decomp), > I found out the import of decomp was not the problem. > In decomp_schur, there are two lines checking something related to > rounding errors specific to a single machine as follows: > eps = np.finfo(float).eps > feps = numpy.finfo(single).eps > > If you execute just the above lines in your application instead of importing scipy, does it hang too? Ralf > When scipy reaches the above lines, my application hangs. > I found a web document where the author encountered the same problem with > these lines but in different contexts: > > http://stackoverflow.com/questions/7592565/when-embedding-cpython-in-java-why-does-this-hang > > The discussion in the web document is not applicable to my problem. > Also, the np.finfo statements seem to exist in multiple modules of scipy. > Without addressing all related modules manually, > would it be any other solutions? > > Thanks! > > --Myung-Hwa > > > On Sat, Nov 5, 2011 at 1:14 PM, wrote: > >> I would try putting print statements inside "decomp_schur.py" since that >> is the module that you said is causing problems. >> Print out the contents of the dictionary sys.modules just before the >> import of decomp in "decomp_schur.py". Is 'decomp' in the dictionary? >> What happens if you comment-out the import of decomp in "decomp_schur.py" >> ? >> -- >> Cameron Hayne >> macdev at hayne.net >> >> >> >> On 5-Nov-11, at 3:59 PM, Myunghwa Hwang wrote: >> >>> I am trying to run a simple django application in a cluster environment. >>> And, my application hangs while it imports scipy.linalg, and both scipy >>> and apache do not write out error messages. >>> When I run my application in my local python shell, it imports >>> scipy.linalg. But, somehow it does not when it is run by apache. >>> So, after reading this message, please share any ideas about how to >>> debug this problem or new solutions to address this issue or deploy my >>> application. >>> >>> Now, let me explain our current setup. >>> 1. OS >>> -- The server is a compute cluster where each node runs centos 6 that >>> was installed from a clean version of centos6 minimal.2. Apache >>> >>> -- Apache 2.2 was also manually installed from one of default linux >>> repository. To be specific, it was installed from its source code together >>> with httpd-dev. >>> 3. Python >>> -- Python 2.7.2 was also installed from its source code across all nodes >>> in the cluster. Its source code was downloaded from python.org's ftp. >>> 4. Python packages: nose, numpy, scipy >>> -- Nose 1.1.2 was downloaded from pypi.python.org and installed from >>> its source code. 
>>> -- numpy 1.6.1 was downloaded and installed from a linux repository.
>>> When building numpy, the gnu95 fortran compiler was used.
>>> -- To install scipy, we installed atlas-3.8.4, lapack-3.3.1, and blas
>>> from their source code.
>>> ----- atlas was from sourceforge's 3.8.4 stable version. To compile
>>> atlas, gcc was used.
>>> ----- lapack and blas were obtained from netlib.org's repository. To
>>> compile the package of lapack and blas, gfortran was used.
>>> ----- Finally, after exporting paths to blas, lapack, and atlas,
>>> scipy-0.9.0 was installed from its source code.
>>> scipy was obtained from sourceforge.net's repository.
>>> A note that contains the above information about software installation
>>> is attached.
>>>
>>> All of the above were installed in the same way across all nodes in our
>>> cluster.
>>> Since I am the only user of the cluster who needs to run python web
>>> applications,
>>> I installed the python virtualenv package in my local directory.
>>> Within my virtual environment, django-1.3 and pysal-1.2 (our own
>>> package) were installed.
>>> To deploy my web applications, we used mod_wsgi.
>>> mod-wsgi was compiled with python-2.7.2 and loaded into apache-2.2.
>>> My application is attached. Basically, it is a 'hello world' application
>>> that tests if numpy, scipy, and pysal can be imported.
>>> In the attached file, lines 4-9 are just adding paths to django and
>>> pysal so that apache knows where to find these packages.
>>> Also, to let apache know where to find atlas-related packages, the path
>>> to those packages was added to the LD_LIBRARY_PATH environment variable in
>>> the /etc/sysconfig/httpd file.
>>>
>>> When I first ran my application, it just hung and wrote no message.
>>> So, across the scipy.linalg modules, I added print-out statements to
>>> figure out at which point the import was broken.
>>> Here are the messages I got when I imported scipy.linalg in my local
>>> python shell.
>>> 1. ########################
>>> 2. starting linalg.__init__
>>> 3. pre __init__.__doc__
>>> 4. pre __init__.__version__
>>> 5. pre __init__.misc
>>> 6. pre __init__.basic
>>> 7. #######################
>>> 8. Starting basic
>>> 9. pre basic.flinalg
>>> 10. pre basic.lapack
>>> 11. pre basic.misc
>>> 12. pre basic.scipy.linalg
>>> 13. pre basic.decomp_svd
>>> 14. pre __init__.decomp
>>> 15. ################
>>> 16. starting decomp
>>> 17. pre decomp.array et al.
>>> 18. pre decomp.calc_lwork
>>> 19. pre decomp.LinAlgError
>>> 20. pre decomp.get_lapack_funcs
>>> 21. pre decomp.get_blas_funcs
>>> 22. ####################
>>> 23. Starting blas
>>> 24. pre blas.scipy.linalg.fblas
>>> 25. pre blas.scipy.linalg.cblas
>>> 26. pre __init__.decomp_lu
>>> 27. pre __init__.decomp_cholesky
>>> 28. pre __init__.decomp_qr
>>> 29. #################
>>> 30. Starting special_matrices
>>> 31. pre special_matrices.math
>>> 32. pre special_matrices.np
>>> 33. pre __init__.decomp_svd
>>> 34. pre __init__.decomp_schur
>>> 35. ##################
>>> 36. starting schur...
>>> 37. pre decomp_schur.misc
>>> 38. pre decomp_schur.LinAlgError
>>> 39. pre decomp_schur.get_lapack_funcs
>>> 40. pre decomp_schur.eigvals:1320454147.23Fri Nov 4 17:49:07 2011
>>> 41. schur testing
>>> 42. pre __init__.matfuncs
>>> 43. #####################
>>> 44. Starting matfuncs
>>> 45. pre matfuncs. asarray et al
>>> 46. pre matfuncs.matrix
>>> 47. pre matfuncs.np
>>> 48. pre matfuncs.misc
>>> 49. pre matfuncs.basic
>>> 50. pre matfuncs.special_matrices
>>> 51. pre matfuncs.decomp
>>> 52. pre matfuncs.decomp_svd
>>> 53. pre matfuncs.decomp_schur
>>> 54. pre __init__.blas
>>> 55. pre __init__.special_matrices
>>> When scipy.linalg is successfully imported, I should get these messages.
>>> But, when my web application tried to import scipy.linalg, the output
>>> messages stop at line 41.
>>> At line 41, decomp_schur.py tries to import decomp.py. Since decomp.py
>>> was already imported at line 16, scipy ignores it and continues to import
>>> other modules in my local shell.
>>> But, somehow, in the apache-mod_wsgi environment, scipy fails to ignore
>>> or reload decomp.py and seems to kill my web application.
>>> This is really odd, because python does not give any message about this
>>> error and neither does apache. apache just hangs without sending out any
>>> response.
>>> Since lapack and blas functions were imported successfully, the problem
>>> does not seem to be related to path setup.
>>>
>>> If anyone on the list has any insights into or experience with this kind
>>> of symptom,
>>> please share your insights and experience. In particular, debugging
>>> techniques or less-known installation/compilation problems would be helpful.
>>> I feel like I am at a dead end. So, please help me.
>>>
>>> Thanks for reading this post.
>>> I will look forward to your responses.
>>>
>>> -- Myung-Hwa Hwang
>>>
>>> --
>>> Myung-Hwa Hwang
>>> GeoDa Center
>>> School of Geographical Sciences and Urban Planning
>>> Arizona State University
>>> mhwang4 at gmail.com or Myunghwa.Hwang at asu.edu
>>>
>
> --
> Myung-Hwa Hwang
> GeoDa Center
> School of Geographical Sciences and Urban Planning
> Arizona State University
> mhwang4 at gmail.com or Myunghwa.Hwang at asu.edu
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From charlesr.harris at gmail.com  Tue Nov  8 12:58:08 2011
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Tue, 8 Nov 2011 10:58:08 -0700
Subject: [SciPy-User] scipy import problem in apache-mod_wsgi environment
In-Reply-To: References: Message-ID: 

On Mon, Nov 7, 2011 at 11:47 AM, Ralf Gommers wrote:

> On Mon, Nov 7, 2011 at 5:43 AM, Myunghwa Hwang wrote:
>
>> Hi, Hayne!
>>
>> Thanks for your answer.
>> After trying out what you suggested (that is, commenting out the import
>> of decomp),
>> I found out the import of decomp was not the problem.
>> In decomp_schur, there are two lines checking something related to
>> rounding errors specific to a single machine, as follows:
>> eps = np.finfo(float).eps
>> feps = numpy.finfo(single).eps
>>
> If you execute just the above lines in your application instead of
> importing scipy, does it hang too?
>

The current finfo is kind of a mess and could use a rewrite. If we truly
stick to IEEE it could also be reduced to a set of known tables, with maybe
an exception thrown in for some of the PPC. I don't think it works for PPC
at the moment in any case.

Myunghwa, what hardware are you running on?

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From bsouthey at gmail.com Tue Nov 8 13:28:18 2011 From: bsouthey at gmail.com (Bruce Southey) Date: Tue, 08 Nov 2011 12:28:18 -0600 Subject: [SciPy-User] scipy import problem in apache-mod_wsgi environment In-Reply-To: References: Message-ID: <4EB974C2.4060106@gmail.com> On 11/08/2011 11:58 AM, Charles R Harris wrote: > > > On Mon, Nov 7, 2011 at 11:47 AM, Ralf Gommers > > wrote: > > > > On Mon, Nov 7, 2011 at 5:43 AM, Myunghwa Hwang > wrote: > > Hi, Hayne! > > Thanks for your answer. > After trying out what you suggested (that is, commenting out > the import of decomp), > I found out the import of decomp was not the problem. > In decomp_schur, there are two lines checking something > related to rounding errors specific to a single machine as > follows: > eps = np.finfo(float).eps > feps = numpy.finfo(single).eps > > If you execute just the above lines in your application instead of > importing scipy, does it hang too? > > > The current finfo is kind of a mess and could use a rewrite. If we > truly stick to ieee it could also be reduced to a set of known tables, > with maybe an exception thrown in for some of the PPC. I don't think > it works for PPC at the moment in any case. > > Myunghwa, what hardware are you running on? > > > > Chuck > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user Also, what is the state of selinux in your centos 6 server? There is also this rather common ctypes bug that you should see in the Apache error log file (which is really important to have access to). http://stackoverflow.com/questions/3762566/occasional-ctypes-error-importing-numpy-from-mod-wsgi-django-app http://bugs.python.org/issue5504 Bruce -------------- next part -------------- An HTML attachment was scrubbed... URL: From k0ala.gmane at augrime.net Tue Nov 8 16:52:21 2011 From: k0ala.gmane at augrime.net (k0ala) Date: Tue, 8 Nov 2011 21:52:21 +0000 (UTC) Subject: [SciPy-User] scipy 0.9 fails tests in windows XP Message-ID: Dear group, I am doing a clean install of Windows XP SP3, and I have setup Python 2.7, Numpy 1.6.1 and Scipy 0.9, all via the standard Windows installers. Python seems to work well enough. However, when I follow the recommendations on testing the installations: (http://www.scipy.org/FAQ#head-75a5d2cc3678224d8e72fb4f58aa0f0639428722) import numpy numpy.test(level=1) import scipy scipy.test(level=1) I run into problems. First, there is an error message that does not understand the "level=1" argument. When I run numpy.test() -- i.e. without arguments, there is a longer error: it says I have to install "nose" for tests. This I do via easy_install, and numpy.test() then terminates with an "OK" ruling. So far so good, but now there is a problem I don't know how to solve. Running scipy.test() results in two failures. The final lines are: Ran 4728 tests in 69.475s FAILED (KNOWNFAIL=12, SKIP=42, failures=2) I am attaching the output of the session below. Microsoft Windows XP [Version 5.1.2600] (C) Copyright 1985-2001 Microsoft Corp. C:\Documents and Settings\koa>python Python 2.7.2 (default, Jun 12 2011, 15:08:59) [MSC v.1500 32 bit (Intel)] on win 32 Type "help", "copyright", "credits" or "license" for more information. 
->>> import numpy ->>> numpy.test() Running unit tests for numpy Traceback (most recent call last): File "", line 1, in File "C:\Python27\lib\site-packages\numpy\testing\nosetester.py", line 318, in test self._show_system_info() File "C:\Python27\lib\site-packages\numpy\testing\nosetester.py", line 187, in _show_system_info nose = import_nose() File "C:\Python27\lib\site-packages\numpy\testing\nosetester.py", line 69, in import_nose raise ImportError(msg) ImportError: Need nose >= 0.10.0 for tests - see http://somethingaboutorange.com /mrl/projects/nose ->>> numpy.test(level=1) Traceback (most recent call last): File "", line 1, in TypeError: test() got an unexpected keyword argument 'level' ->>> quit() koa at cobila ~ $ easy_install nose Searching for nose Reading http://pypi.python.org/simple/nose/ Reading http://somethingaboutorange.com/mrl/projects/nose/ Reading http://readthedocs.org/docs/nose/ Best match: nose 1.1.2 Downloading http://pypi.python.org/packages/source/n/nose/nose-1.1.2.tar.gz#md5= 144f237b615e23f21f6a50b2183aa817 Processing nose-1.1.2.tar.gz Running nose-1.1.2\setup.py -q bdist_egg --dist-dir c:\cygwin\tmp\easy_install-x 0cka7\nose-1.1.2\egg-dist-tmp-rm8rsq Adding nose 1.1.2 to easy-install.pth file Installing nosetests-script.py script to C:\Python27\Scripts Installing nosetests.exe script to C:\Python27\Scripts Installing nosetests.exe.manifest script to C:\Python27\Scripts Installing nosetests-2.7-script.py script to C:\Python27\Scripts Installing nosetests-2.7.exe script to C:\Python27\Scripts Installing nosetests-2.7.exe.manifest script to C:\Python27\Scripts Installed c:\python27\lib\site-packages\nose-1.1.2-py2.7.egg Processing dependencies for nose Finished processing dependencies for nose koa at cobila ~ $ python Python 2.7.2 (default, Jun 12 2011, 15:08:59) [MSC v.1500 32 bit (Intel)] on win 32 Type "help", "copyright", "credits" or "license" for more information. ->>> import numpy ->>> numpy.test(level=1) Traceback (most recent call last): File "", line 1, in TypeError: test() got an unexpected keyword argument 'level' ->>> numpy.test() Running unit tests for numpy NumPy version 1.6.1 NumPy is installed in C:\Python27\lib\site-packages\numpy Python version 2.7.2 (default, Jun 12 2011, 15:08:59) [MSC v.1500 32 bit (Intel) ] nose version 1.1.2 ................................................................................ ................................................................................ ................................................................................ ................................................................................ ................................................................................ ................................................................................ ..........................................................................K..... ................................................................................ ................................................................................ ................................................................................ ................................................................................ ................................................................................ ......K...............................................................K..K...... ........................K...SK.S.......S........................................ ......................................S......................................... 
................................................................................ ................................................................................ ................................................................................ ................................................................................ ................................................................................ ......................K.........K............................................... ................................................................................ ................................................................................ ................................................................................ ................................................................................ ................................................................................ ..S............................................................................. ................................................................................ ................................................................................ ................................................................................ ................................................................................ ................................................................................ ................................................................................ ................................................................................ ................................................................................ ................................................................................ ................................................................................ ........................................ ---------------------------------------------------------------------- Ran 2999 tests in 22.263s OK (KNOWNFAIL=8, SKIP=5) ->>> import scipy ->>> scipy.test() Running unit tests for scipy NumPy version 1.6.1 NumPy is installed in C:\Python27\lib\site-packages\numpy SciPy version 0.9.0 SciPy is installed in C:\Python27\lib\site-packages\scipy Python version 2.7.2 (default, Jun 12 2011, 15:08:59) [MSC v.1500 32 bit (Intel) ] nose version 1.1.2 ................................................................................ ................................................................................ .............................................K.................................. ................................................................................ ....K..K........................................................................ ................................................................................ ................................................................................ ................................................................................ ................................................................................ ................................................................................ ................................................................................ ............................................................................SSSS SS......SSSSSS......SSSS........................................................ .........S.........F............................................................ ................................................................................ 
..........................................................K..................... ................................................................................ ................................................................................ ......SSSSS.........S........................................................... ................................................................................ ................................................................................ ................................................................................ ................................................................................ ................................................................................ ................................................................................ ................................................................................ ................................................................................ ..................................................SSSSSSSSSSS................... ................................................................................ .............................................................K.................. .............................................K.................................. ................................................................................ .........................................KK..................................... ................................................................................ ................................................................................ ................................................................................ ................................................................................ .................................C:\Python27\lib\site-packages\scipy\special\tes ts\test_basic.py:1589: RuntimeWarning: invalid value encountered in absolute assert_(np.abs(c2) >= 1e300, (v, z)) .........................K.K.................................................... ................................................................................ ................................................................................ ................................................................................ ................................................................................ ....K........K.........SSSSSSS.................................................. ................................................................................ ................................................................................ ................................................................................ ................................................................................ ................C:\Python27\lib\site-packages\scipy\stats\distributions.py:3546: RuntimeWarning: overflow encountered in exp return exp(c*x-exp(x)-gamln(c)) .................................C:\Python27\lib\site-packages\scipy\stats\distr ibutions.py:3955: RuntimeWarning: invalid value encountered in sqrt vals = 2*(bt+1.0)*sqrt(b-2.0)/((b-3.0)*sqrt(b)) ................................................................................ ................................................................................ ................................................................................ .........................................................S...................... ................................................................................ 
................................................................................
................................................................................
........F.......................................................................
................................................................................
................................................................................
................................................................................
................................................................................
......
======================================================================
FAIL: Test singular pair
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\Python27\lib\site-packages\nose-1.1.2-py2.7.egg\nose\case.py",
line 197, in runTest
    self.test(*self.arg)
  File "C:\Python27\lib\site-packages\scipy\linalg\tests\test_decomp.py",
line 202, in test_singular
    self._check_gen_eig(A, B)
  File "C:\Python27\lib\site-packages\scipy\linalg\tests\test_decomp.py",
line 189, in _check_gen_eig
    err_msg=msg)
  File "C:\Python27\lib\site-packages\numpy\testing\utils.py", line 800, in
assert_array_almost_equal
    header=('Arrays are not almost equal to %d decimals' % decimal))
  File "C:\Python27\lib\site-packages\numpy\testing\utils.py", line 636, in
assert_array_compare
    raise AssertionError(msg)
AssertionError:
Arrays are not almost equal to 6 decimals

array([[22, 34, 31, 31, 17],
       [45, 45, 42, 19, 29],
       [39, 47, 49, 26, 34],
       [27, 31, 26, 21, 15],
       [38, 44, 44, 24, 30]])
array([[13, 26, 25, 17, 24],
       [31, 46, 40, 26, 37],
       [26, 40, 19, 25, 25],
       [16, 25, 27, 14, 23],
       [24, 35, 18, 21, 22]])
(mismatch 25.0%)
 x: array([ -2.45037885e-01 +0.00000000e+00j,
        5.17637463e-16 -4.01120590e-08j,
        5.17637463e-16 +4.01120590e-08j,   2.00000000e+00 +0.00000000e+00j])
 y: array([ -3.74550285e-01 +0.00000000e+00j,
       -5.17716907e-17 -1.15230800e-08j,
       -5.17716907e-17 +1.15230800e-08j,   2.00000000e+00 +0.00000000e+00j])

======================================================================
FAIL: test_expon (test_morestats.TestAnderson)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\Python27\lib\site-packages\scipy\stats\tests\test_morestats.py",
line 72, in test_expon
    assert_array_less(crit[:-1], A)
  File "C:\Python27\lib\site-packages\numpy\testing\utils.py", line 869, in
assert_array_less
    header='Arrays are not less-ordered')
  File "C:\Python27\lib\site-packages\numpy\testing\utils.py", line 613, in
assert_array_compare
    chk_same_position(x_id, y_id, hasval='inf')
  File "C:\Python27\lib\site-packages\numpy\testing\utils.py", line 588, in
chk_same_position
    raise AssertionError(msg)
AssertionError:
Arrays are not less-ordered

x and y inf location mismatch:
 x: array([ 0.911,  1.065,  1.325,  1.587])
 y: array(inf)

----------------------------------------------------------------------
Ran 4728 tests in 69.475s

FAILED (KNOWNFAIL=12, SKIP=42, failures=2)


Is this going to be a problem for using scipy? Do you have any
recommendations for getting it to work, or for doing a cleaner install?

Thanks, best,
k0ala

P.S. I replaced the ">>>" by "->>>" because gmane thought I was "top-posting"...


From Wolfgang.Mader at fdm.uni-freiburg.de  Tue Nov  8 19:21:53 2011
From: Wolfgang.Mader at fdm.uni-freiburg.de (Wolfgang Mader)
Date: Tue, 08 Nov 2011 19:21:53 -0500
Subject: [SciPy-User] Matplotlib and Python 3
Message-ID: <1874104.9T19riNeDu@killbill>

Hello,

I have read that great effort is under way to port Matplotlib to Python 3.
There even is a branch in the git repo.
What I could not find: is there a plan for merging this into master? Is
there a release target date for Matplotlib using Python 3?

Thank you.
Wolfgang


From ralf.gommers at googlemail.com  Wed Nov  9 02:09:21 2011
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Wed, 9 Nov 2011 08:09:21 +0100
Subject: [SciPy-User] scipy 0.9 fails tests in windows XP
In-Reply-To: References: Message-ID: 

On Tue, Nov 8, 2011 at 10:52 PM, k0ala wrote:

> Dear group,
>
> I am doing a clean install of Windows XP SP3, and I have setup Python 2.7,
> Numpy 1.6.1 and Scipy 0.9, all via the standard Windows installers. Python
> seems to work well enough.
>
> However, when I follow the recommendations on testing the installations:
> (http://www.scipy.org/FAQ#head-75a5d2cc3678224d8e72fb4f58aa0f0639428722)
> import numpy
> numpy.test(level=1)
> import scipy
> scipy.test(level=1)
>

Those are old, fixed now.

> I run into problems. First, there is an error message that does not
> understand the "level=1" argument. When I run numpy.test() -- i.e. without
> arguments, there is a longer error: it says I have to install "nose" for
> tests. This I do via easy_install, and numpy.test() then terminates with
> an "OK" ruling.
>

You did the right things.

> So far so good, but now there is a problem I don't know how to solve.
> Running scipy.test() results in two failures. The final lines are:
>
> Ran 4728 tests in 69.475s
> FAILED (KNOWNFAIL=12, SKIP=42, failures=2)
>
>
> I am attaching the output of the session below.
>
>
> Microsoft Windows XP [Version 5.1.2600]
> (C) Copyright 1985-2001 Microsoft Corp.
>
> C:\Documents and Settings\koa>python
> Python 2.7.2 (default, Jun 12 2011, 15:08:59) [MSC v.1500 32 bit (Intel)]
> on win32
> Type "help", "copyright", "credits" or "license" for more information.
> ->>> import numpy > ->>> numpy.test() > Running unit tests for numpy > Traceback (most recent call last): > File "", line 1, in > File "C:\Python27\lib\site-packages\numpy\testing\nosetester.py", line > 318, in > test > self._show_system_info() > File "C:\Python27\lib\site-packages\numpy\testing\nosetester.py", line > 187, in > _show_system_info > nose = import_nose() > File "C:\Python27\lib\site-packages\numpy\testing\nosetester.py", line > 69, in > import_nose > raise ImportError(msg) > ImportError: Need nose >= 0.10.0 for tests - see > http://somethingaboutorange.com > /mrl/projects/nose > ->>> numpy.test(level=1) > Traceback (most recent call last): > File "", line 1, in > TypeError: test() got an unexpected keyword argument 'level' > ->>> quit() > > > > koa at cobila ~ > $ easy_install nose > Searching for nose > Reading http://pypi.python.org/simple/nose/ > Reading http://somethingaboutorange.com/mrl/projects/nose/ > Reading http://readthedocs.org/docs/nose/ > Best match: nose 1.1.2 > Downloading > http://pypi.python.org/packages/source/n/nose/nose-1.1.2.tar.gz#md5= > 144f237b615e23f21f6a50b2183aa817 > Processing nose-1.1.2.tar.gz > Running nose-1.1.2\setup.py -q bdist_egg --dist-dir > c:\cygwin\tmp\easy_install-x > 0cka7\nose-1.1.2\egg-dist-tmp-rm8rsq > Adding nose 1.1.2 to easy-install.pth file > Installing nosetests-script.py script to C:\Python27\Scripts > Installing nosetests.exe script to C:\Python27\Scripts > Installing nosetests.exe.manifest script to C:\Python27\Scripts > Installing nosetests-2.7-script.py script to C:\Python27\Scripts > Installing nosetests-2.7.exe script to C:\Python27\Scripts > Installing nosetests-2.7.exe.manifest script to C:\Python27\Scripts > > Installed c:\python27\lib\site-packages\nose-1.1.2-py2.7.egg > Processing dependencies for nose > Finished processing dependencies for nose > > > > koa at cobila ~ > $ python > Python 2.7.2 (default, Jun 12 2011, 15:08:59) [MSC v.1500 32 bit (Intel)] > on win > 32 > Type "help", "copyright", "credits" or "license" for more information. > ->>> import numpy > ->>> numpy.test(level=1) > Traceback (most recent call last): > File "", line 1, in > TypeError: test() got an unexpected keyword argument 'level' > ->>> numpy.test() > Running unit tests for numpy > NumPy version 1.6.1 > NumPy is installed in C:\Python27\lib\site-packages\numpy > Python version 2.7.2 (default, Jun 12 2011, 15:08:59) [MSC v.1500 32 bit > (Intel) > ] > nose version 1.1.2 > > ................................................................................ > > ................................................................................ > > ................................................................................ > > ................................................................................ > > ................................................................................ > > ................................................................................ > > ..........................................................................K..... > > ................................................................................ > > ................................................................................ > > ................................................................................ > > ................................................................................ > > ................................................................................ 
> > ......K...............................................................K..K...... > > ........................K...SK.S.......S........................................ > > ......................................S......................................... > > ................................................................................ > > ................................................................................ > > ................................................................................ > > ................................................................................ > > ................................................................................ > > ......................K.........K............................................... > > ................................................................................ > > ................................................................................ > > ................................................................................ > > ................................................................................ > > ................................................................................ > > ..S............................................................................. > > ................................................................................ > > ................................................................................ > > ................................................................................ > > ................................................................................ > > ................................................................................ > > ................................................................................ > > ................................................................................ > > ................................................................................ > > ................................................................................ > > ................................................................................ > ........................................ > ---------------------------------------------------------------------- > Ran 2999 tests in 22.263s > > OK (KNOWNFAIL=8, SKIP=5) > > ->>> import scipy > ->>> scipy.test() > Running unit tests for scipy > NumPy version 1.6.1 > NumPy is installed in C:\Python27\lib\site-packages\numpy > SciPy version 0.9.0 > SciPy is installed in C:\Python27\lib\site-packages\scipy > Python version 2.7.2 (default, Jun 12 2011, 15:08:59) [MSC v.1500 32 bit > (Intel) > ] > nose version 1.1.2 > > ................................................................................ > > ................................................................................ > > .............................................K.................................. > > ................................................................................ > > ....K..K........................................................................ > > ................................................................................ > > ................................................................................ > > ................................................................................ > > ................................................................................ > > ................................................................................ 
> > ................................................................................ > > ............................................................................SSSS > > SS......SSSSSS......SSSS........................................................ > > .........S.........F............................................................ > > ................................................................................ > > ..........................................................K..................... > > ................................................................................ > > ................................................................................ > > ......SSSSS.........S........................................................... > > ................................................................................ > > ................................................................................ > > ................................................................................ > > ................................................................................ > > ................................................................................ > > ................................................................................ > > ................................................................................ > > ................................................................................ > > ..................................................SSSSSSSSSSS................... > > ................................................................................ > > .............................................................K.................. > > .............................................K.................................. > > ................................................................................ > > .........................................KK..................................... > > ................................................................................ > > ................................................................................ > > ................................................................................ > > ................................................................................ > > .................................C:\Python27\lib\site-packages\scipy\special\tes > ts\test_basic.py:1589: RuntimeWarning: invalid value encountered in > absolute > assert_(np.abs(c2) >= 1e300, (v, z)) > > .........................K.K.................................................... > > ................................................................................ > > ................................................................................ > > ................................................................................ > > ................................................................................ > > ....K........K.........SSSSSSS.................................................. > > ................................................................................ > > ................................................................................ > > ................................................................................ > > ................................................................................ 
> > ................C:\Python27\lib\site-packages\scipy\stats\distributions.py:3546: > RuntimeWarning: overflow encountered in exp > return exp(c*x-exp(x)-gamln(c)) > > .................................C:\Python27\lib\site-packages\scipy\stats\distr > ibutions.py:3955: RuntimeWarning: invalid value encountered in sqrt > vals = 2*(bt+1.0)*sqrt(b-2.0)/((b-3.0)*sqrt(b)) > > ................................................................................ > > ................................................................................ > > ................................................................................ > > .........................................................S...................... > > ................................................................................ > > ................................................................................ > > ........F....................................................................... > > ................................................................................ > > ................................................................................ > > ................................................................................ > > ................................................................................ > ...... > ====================================================================== > FAIL: Test singular pair > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "C:\Python27\lib\site-packages\nose-1.1.2-py2.7.egg\nose\case.py", > line 1 > 97, in runTest > self.test(*self.arg) > File "C:\Python27\lib\site-packages\scipy\linalg\tests\test_decomp.py", > line 2 > 02, in test_singular > self._check_gen_eig(A, B) > File "C:\Python27\lib\site-packages\scipy\linalg\tests\test_decomp.py", > line 1 > 89, in _check_gen_eig > err_msg=msg) > File "C:\Python27\lib\site-packages\numpy\testing\utils.py", line 800, in > asse > rt_array_almost_equal > header=('Arrays are not almost equal to %d decimals' % decimal)) > File "C:\Python27\lib\site-packages\numpy\testing\utils.py", line 636, in > asse > rt_array_compare > raise AssertionError(msg) > AssertionError: > Arrays are not almost equal to 6 decimals > > array([[22, 34, 31, 31, 17], > [45, 45, 42, 19, 29], > [39, 47, 49, 26, 34], > [27, 31, 26, 21, 15], > [38, 44, 44, 24, 30]]) > array([[13, 26, 25, 17, 24], > [31, 46, 40, 26, 37], > [26, 40, 19, 25, 25], > [16, 25, 27, 14, 23], > [24, 35, 18, 21, 22]]) > (mismatch 25.0%) > x: array([ -2.45037885e-01 +0.00000000e+00j, > 5.17637463e-16 -4.01120590e-08j, > 5.17637463e-16 +4.01120590e-08j, 2.00000000e+00 > +0.00000000e+00j]) > y: array([ -3.74550285e-01 +0.00000000e+00j, > -5.17716907e-17 -1.15230800e-08j, > -5.17716907e-17 +1.15230800e-08j, 2.00000000e+00 > +0.00000000e+00j]) > > This one can't be reproduced on most other systems, it may have something to do with your ATLAS or hardware. 
See http://mail.scipy.org/pipermail/scipy-dev/2011-January/015868.html

> ======================================================================
> FAIL: test_expon (test_morestats.TestAnderson)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "C:\Python27\lib\site-packages\scipy\stats\tests\test_morestats.py",
> line 72, in test_expon
>     assert_array_less(crit[:-1], A)
>   File "C:\Python27\lib\site-packages\numpy\testing\utils.py", line 869, in
> assert_array_less
>     header='Arrays are not less-ordered')
>   File "C:\Python27\lib\site-packages\numpy\testing\utils.py", line 613, in
> assert_array_compare
>     chk_same_position(x_id, y_id, hasval='inf')
>   File "C:\Python27\lib\site-packages\numpy\testing\utils.py", line 588, in
> chk_same_position
>     raise AssertionError(msg)
> AssertionError:
> Arrays are not less-ordered
>
> x and y inf location mismatch:
>  x: array([ 0.911,  1.065,  1.325,  1.587])
>  y: array(inf)
>

This is only a problem in the test, not with the functionality. It has been
corrected for the next scipy release.

> Ran 4728 tests in 69.475s
>
> FAILED (KNOWNFAIL=12, SKIP=42, failures=2)
>
> Is this going to be a problem for using scipy? Do you have any
> recommendations for getting it to work, or for doing a cleaner install?
>

That one failure isn't likely to give you problems. If you want to
investigate anyway, you can start with checking the questions by Pauli in
the thread I linked to.

Cheers,
Ralf
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From _kfj at yahoo.com  Wed Nov  9 04:16:12 2011
From: _kfj at yahoo.com (Kay F. Jahnke)
Date: Wed, 9 Nov 2011 09:16:12 +0000 (UTC)
Subject: [SciPy-User] evaluating B-Splines made with scipy.signal.cspline2d?
Message-ID: 

Hi group!

in brief: I'm looking for an efficient way to evaluate B-splines generated by
scipy.signal.cspline2d() at arbitrary float coordinates to interpolate image
data.

I am doing image processing and need a method to interpolate an image at
arbitrary float coordinates. Scipy.signal kindly provides a fast and efficient
routine to calculate spline coefficients, but the module seems not to address
using this spline for interpolation.

I have written some code to calculate the interpolated value at arbitrary
positions, but this code is cumbersome and slow - I have to calculate the
value of the basis function 8 times (I suspect that takes a good while using
signal.bspline(), haven't done any measurements), plus a fair bit of matrix
manipulation to pick the relevant window of spline coefficients to multiply
with the basis function values - my code (without any frills or checks,
just to convey the basic idea) looks like this:

# assume the spline coefficients are in a 2D array 'cf'
# and (x,y) is the position to interpolate at

from numpy import floor, arange, outer
from scipy import signal

def cf_matrix ( cf , x , y ) :
    # pick the 4x4 window of coefficients around (x,y)
    xi = int ( floor ( x ) )
    yi = int ( floor ( y ) )
    return cf [ xi - 1 : xi + 3 , yi - 1 : yi + 3 ]

def base_matrix ( x , y ) :
    # basis function values matching that window
    x0 = x - floor ( x )
    y0 = y - floor ( y )
    rng = arange ( 1 , -3 , -1 )
    xv = rng + x0
    yv = rng + y0
    xb = signal.bspline ( xv , 3 )
    yb = signal.bspline ( yv , 3 )
    return outer ( xb , yb )

def interpolate ( cf , x , y ) :
    m = cf_matrix ( cf , x , y )
    b = base_matrix ( x , y )
    return ( m * b ).sum ()

obviously this is fine to use for a few hundred points or so, but for real
images it's not practical.
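The furthest I've got towards speeding this up is a batched variant - an
untested sketch, mind, with no handling of points near the image border -
which keeps the per-point work inside numpy and only loops over the 16
window offsets:

import numpy as np
from scipy import signal

def interpolate_many ( cf , x , y ) :
    # x, y: equal-length 1D float arrays of sample positions; every
    # point must lie at least one sample away from the border of cf
    xi = np.floor ( x ).astype ( int )
    yi = np.floor ( y ).astype ( int )
    # basis weights of the four neighbouring columns/rows per point
    wx = [ signal.bspline ( x - xi - i , 3 ) for i in range ( -1 , 3 ) ]
    wy = [ signal.bspline ( y - yi - j , 3 ) for j in range ( -1 , 3 ) ]
    out = np.zeros ( x.shape )
    for i in range ( -1 , 3 ) :
        for j in range ( -1 , 3 ) :
            # accumulate coefficient * separable basis weights
            out += cf [ xi + i , yi + j ] * wx [ i + 1 ] * wy [ j + 1 ]
    return out

But even so, I suspect I'm reinventing a wheel that must already exist
somewhere.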
In Scipy.interpolate, there is scipy.interpolate.bisplev() to evaluate the
splines generated by scipy.interpolate.bisplrep(), but those splines are of
the more general form with arbitrarily spaced knot points. The relevant data
structures therefore carry the knot vectors, and I assume that the
performance gains which follow from the equally-spaced samples used by the
signal package are inapplicable in the general case.

I'd appreciate suggestions on how to proceed or hints at what I'm missing.


From k0ala.gmane at augrime.net  Wed Nov  9 04:55:27 2011
From: k0ala.gmane at augrime.net (k0ala)
Date: Wed, 9 Nov 2011 09:55:27 +0000 (UTC)
Subject: [SciPy-User] scipy 0.9 fails tests in windows XP
References: Message-ID: 

Ralf Gommers googlemail.com> writes:
>
> This one can't be reproduced on most other systems, it may have something to
> do with your ATLAS or hardware. See
> http://mail.scipy.org/pipermail/scipy-dev/2011-January/015868.html
>
> > Ran 4728 tests in 69.475s
> > FAILED (KNOWNFAIL=12, SKIP=42, failures=2)
>
> > Is this going to be a problem for using scipy? Do you have any
> > recommendations for getting it to work, or for doing a cleaner install?
>
> That one failure isn't likely to give you problems. If you want to
> investigate anyway, you can start with checking the questions by Pauli
> in the thread I linked to. Cheers, Ralf
>

Hi Ralf,

Thanks a lot for the quick reply! I will follow up on it, because if I
understand correctly, if this problem were to occur while I am using the
software, it would not raise any flags or warnings, just output incorrect
results. I'm not really comfortable using the library knowing that sometimes
eigenvector computations are way off...

If I find any solutions, I will follow up on this post.

Best,
k0ala


From jdh2358 at gmail.com  Wed Nov  9 05:53:34 2011
From: jdh2358 at gmail.com (John Hunter)
Date: Wed, 9 Nov 2011 04:53:34 -0600
Subject: [SciPy-User] Matplotlib and Python 3
In-Reply-To: <1874104.9T19riNeDu@killbill>
References: <1874104.9T19riNeDu@killbill>
Message-ID: 

On Nov 8, 2011, at 6:21 PM, Wolfgang Mader wrote:

> Hello,
>
> I have read that great effort is under way to port Matplotlib to Python 3.
> There even is a branch in the git repo. What I could not find: is there a
> plan for merging this into master? Is there a release target date for
> Matplotlib using Python 3?

mpl questions are best directed to the mpl mailing lists. Michael Droettboom
has a pull request, "The MEGA Python 3.x Branch merge", so it's happening
soon.

https://github.com/matplotlib/matplotlib/pull/565

JDH

From zachary.pincus at yale.edu  Wed Nov  9 09:16:21 2011
From: zachary.pincus at yale.edu (Zachary Pincus)
Date: Wed, 9 Nov 2011 09:16:21 -0500
Subject: [SciPy-User] evaluating B-Splines made with scipy.signal.cspline2d?
In-Reply-To: References: Message-ID: 

Hi Kay,

This doesn't answer your specific question, but look at
scipy.ndimage.map_coordinates() for general-purpose spline interpolation of
regularly-spaced (e.g. image) data.

If you want to repeatedly interpolate the same data, you can get the spline
coefficients with scipy.ndimage.spline_filter() and pass them to
map_coordinates() with the "prefilter=False" option.

It is curious that while scipy.signal has cspline1d() and cspline1d_eval(),
there is no cspline2d_eval() function... hopefully someone else can weigh in
on what's going on here.

Zach

On Nov 9, 2011, at 4:16 AM, Kay F. Jahnke wrote:

> Hi group!
> > in brief: I'm looking for an efficient way to evaluate B-splines generated by > scipy.signal.cspline2d() at arbitrary float coordinates to interpolate image > data. > > I am doing image processing and need a method to interpolate an image at > arbitrary float coordinates. Scipy.signal kindly provides a fast and efficient > routine to calculate spline coefficients, but the module seems not to adress > using this spline for interpolation. > > I have written some code to calculate the interpolated value at arbitrary > positions, but this code is cumbersome and slow - I have to calculate the > value of the basis function 8 times (I suspect that takes a good while using > signal.bspline(), haven't done any measurements), plus a fair bit of matrix > manipulation to pick the relevant window of spline coefficients to multiply > with the basis function values - my code (without any frills or checks, > just to convey the basic idea) looks like this: > > # assume the spline coefficients are in a 2D array 'cf' > # and (x,y) is the position to interpolate at > > def cf_matrix ( cf , x , y ) : > return cf [ x - 1 : x + 3 , y - 1 : y + 3 ] > > def base_matrix ( x , y ) : > x0 = x - floor ( x ) > y0 = y - floor ( y ) > rng = arange ( 1 , -3 , -1 ) > xv = rng + x0 > yv = rng + y0 > xb = signal.bspline ( xv , 3 ) > yb = signal.bspline ( yv , 3 ) > xx , yy = meshgrid ( xb , yb ) > return xx * yy > > def interpolate ( cf , x , y ) : > m = cf_matrix ( cf , x , y ) > b = base_matrix ( x , y ) > return sum ( m * b ) > > obviously this is fine to use for a few hundred points or so, but for real > images it's not practical. > > In Scipy.interpolate, there is scipy.interpolate.bisplev() to evaluate the > splines generated by scipy.interpolate.bisplrep(), but these splines are of > the more general form with arbitraryly spaced knot points, therefore the > relevant data structures contain the knot vectors, and I assume that all sorts > of performance gains that could be derived from the fact that the splines from > the signal package are using equally-spaced samples are inapplicable in the > general case. > > I'd appreciate suggestions on how to proceed or hints at what I'm missing. > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From emmanuelle.gouillart at nsup.org Tue Nov 8 17:59:18 2011 From: emmanuelle.gouillart at nsup.org (Emmanuelle Gouillart) Date: Tue, 8 Nov 2011 23:59:18 +0100 Subject: [SciPy-User] [JOB] junior developer on 3-D image processing Message-ID: <20111108225918.GC17131@phare.normalesup.org> Dear all, I attach below the description of a 1-year junior developer position available in my research team. The goal of this project is to gather and improve existing Python and ImageJ pieces of code on 3-D image processing, and to build a user-friendly and good-quality package to be used by several research teams. Image processing algorithms of general interest (beyond the teams involved in the project) will be contributed to the scikits-image (http://skimage.org/) Python module. A previous experience with Python and/or open-source software development is a definite plus. Please spread the word! 
Cheers,
Emmanuelle

*******************

Hiring a junior developer on 3-D image processing in the joint lab CNRS/Saint-Gobain (Paris)

Description of the position
---------------------------

(see also http://www.svi.cnrs-bellevue.fr/wikimedia/index.php/EDDAM_developer_position)

The goal of this 1-year project is to build a user-friendly and good-quality 3-D image processing package from the miscellaneous codebases written by different research teams working together on X-ray tomography.

In a first phase, you will strongly interact with the researchers of the different teams in order to list and collect the different algorithms used by the teams (including a few state-of-the-art algorithms used by specialists in image processing collaborating with the teams). You will integrate these algorithms (starting with those of most common interest) into a common package, taking special care over usability (documentation, installation, ...) and robustness of the code (testing, handling of different file formats). The package will consist of a Python package and a set of ImageJ routines. For easier maintenance, some algorithms of general interest will be contributed instead to the open-source Python scikits-image ``skimage``.

Scientific context
------------------

The French EDDAM project (http://www.svi.cnrs-bellevue.fr/wikimedia/index.php/EDDAM) focusses on the ultrafast 3-D X-ray imaging of amorphous materials under mechanical load or thermal treatment. Thanks to state-of-the-art developments in synchrotron X-ray tomography, it is now possible to image the evolution of materials at timescales below 1s, giving unprecedented insights into the transformation of materials. Such experiments typically produce huge datasets, for which efficient automated image processing methods are required.

Requirements
------------

* Good team-working abilities. Objective-driven mindset.
* Knowledge of programming and some scientific computing. Knowledge of one of the languages/software tools used by the team (Python, ImageJ + Java, Matlab) is a plus.
* Interest in quality assurance and best practices in software development: documentation, testing, version control.
* Interest in and curiosity about image processing. Previous experience in image processing or a mathematically oriented mindset is a plus.

Practical aspects
-----------------

The position is available from February 2012. The salary depends on the level of experience of the candidate, from 1650 euros (net salary) per month (master's degree) to 1930 euros per month (PhD degree). If you are finishing a master's degree and you are looking for an internship, it is also possible to start with an internship before taking the 1-year position.

Your employer will be the CNRS (French National Research Center), and you will be located at the joint unit CNRS/Saint-Gobain (http://www.svi.cnrs-bellevue.fr), in Aubervilliers, close to Paris (France).

Candidates should send a CV with cover letter to Emmanuelle Gouillart at emmanuelle.gouillart at nsup.org.

From johnl at cs.wisc.edu Wed Nov 9 11:15:04 2011
From: johnl at cs.wisc.edu (J. David Lee)
Date: Wed, 09 Nov 2011 10:15:04 -0600
Subject: [SciPy-User] np_inline 0.3 released
Message-ID: <4EBAA708.60401@cs.wisc.edu>

Hello.

This morning I released version 0.3 of np_inline, a module for in-lining C code in python. np_inline is a simple alternative to weave.inline for embedding C code in python.
Its main selling points are: 1) simplicity - Implemented in a single file with less than 500 lines including the C-file template, comments, docstrings, and whitespace. 2) simplicity - Generated C files are human-readable and reasonably formatted. 3) multiprocessing support - Works properly with python threads or processes. I've been using this module daily for over a year without issue, and would be happy to hear any comments or criticisms. A source distribution and documentation is available on the web at: http://pages.cs.wisc.edu/~johnl/np_inline/ or on github: https://github.com/johnnylee/np_inline or using easy_install or pip. Thanks, David From danielstefanmader at googlemail.com Wed Nov 9 12:12:22 2011 From: danielstefanmader at googlemail.com (Daniel Mader) Date: Wed, 9 Nov 2011 18:12:22 +0100 Subject: [SciPy-User] multidimensional least squares fitting Message-ID: Hi everyone, I'd like to do some rather simple multidimensional curve fitting. Simple, because usually it's only a plane, or a weak 2nd order surface. Here's the same question which I tried to follow, but I have no clue how to feed the 2D arrays into leastsq(): http://stackoverflow.com/questions/529184/simple-multidimensional-curve-fitting Likely I am just missing a small piece of information, and I'd be happy to get a clue :) Thanks in advance, and here's some code to demonstrate what I want and to get started, Daniel import pylab import scipy import scipy.optimize from mpl_toolkits.mplot3d import Axes3D #import sys,os,platform #if platform.system() == 'Windows': # home = os.environ['HOMESHARE'] #elif platform.system() == 'Linux': # home = os.environ['HOME'] #sys.path.append(home + '/python') #sys.path.append(home + '/11_PythonWork') #import pylabSettings ##****************************************************************************** ##****************************************************************************** ''' f = p0 + p1*x + p2*y ''' ##------------------------------------------------------------------------------ def __residual(params, f, x, y): ''' Define fit function; Return residual error. ''' p0, p1, p2 = params return p0 + p1*x + p2*y - f ## load raw data (=create some dummy data): dataX = scipy.arange(0,11,1) dataY = dataX/10. dataZ = 0.5 + 1.1*dataX + 1.5*dataY dataXX, dataYY = scipy.meshgrid(dataX,dataY) dataZZ = 0.5 + 1.1*dataXX + 1.5*dataYY ## plot data pylab.close('all') fig = pylab.figure() ax = Axes3D(fig) ax.plot_wireframe(dataXX, dataYY, dataZZ) ax.set_xlabel('x') ax.set_ylabel('y') ax.set_zlabel('z') pylab.show() ## guess initial values for parameters p0 = [0., 1., 1.] print __residual(p0, dataZZ, dataXX, dataYY) ## works but is not 2D! p1, p_cov = scipy.optimize.leastsq(__residual, x0=p0, args=(dataZ, dataX, dataY)) print p1 ## doesn't work :() p1, p_cov = scipy.optimize.leastsq(__residual, x0=p0, args=(dataZZ, dataXX, dataYY)) print p1 From _kfj at yahoo.com Wed Nov 9 12:14:00 2011 From: _kfj at yahoo.com (Kay F. Jahnke) Date: Wed, 9 Nov 2011 17:14:00 +0000 (UTC) Subject: [SciPy-User] =?utf-8?q?evaluating_B-Splines_made_with=09scipy=2Es?= =?utf-8?q?ignal=2Ecspline2d=3F?= References: Message-ID: thank you very much for your helpful hint. I composed a longer reply, but the web interface insists I have lines longer than 80 chars (which I haven't) and bounces my reply. Kay From _kfj at yahoo.com Wed Nov 9 12:19:33 2011 From: _kfj at yahoo.com (Kay F. 
Jahnke) Date: Wed, 9 Nov 2011 17:19:33 +0000 (UTC) Subject: [SciPy-User] =?utf-8?q?evaluating_B-Splines_made_with=09scipy=2Es?= =?utf-8?q?ignal=2Ecspline2d=3F?= References: Message-ID: I did have lines over 80 chars after all... Zachary Pincus yale.edu> writes: > This doesn't answer your specific question, but look at > scipy.ndimage.map_coordinates() > for general-purpose spline interpolation of regularly-spaced (e.g. image) > data. If you want to repeatedly interpolate the same data, you can get the > spline coefficients with: > scipy.ndimage.spline_filter() and pass them to map_coordinates() > with the "prefilter=False" option. Thank you very much for your helpful hint. I tried out the code you suggested and it seems to do just what I want. It seems to me that I can also process the coefficient matrix I get from signal.cspline2d() with ndimage.map_coordinates() as well as being able to have ndimage.spline_filter() generate the coefficients, though the coefficient matrices the two routines provide aren't identical. The resulting program is fast; both the calculation of the coefficient matrix for a 512X512 image and the interpolation of an equally-sized output took under a second on my system. > It is curious that while scipy.signal has cspline1d() and cspline1d_eval(), > there is no cspline2d_eval() function... hopefully someone else can weight > in on what's going on here. curious indeed, but ndimage seems to solve the problem. Still signal.cspline2d_eval() should be put on the wish-list :-) From josef.pktd at gmail.com Wed Nov 9 12:21:57 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 9 Nov 2011 12:21:57 -0500 Subject: [SciPy-User] multidimensional least squares fitting In-Reply-To: References: Message-ID: On Wed, Nov 9, 2011 at 12:12 PM, Daniel Mader wrote: > Hi everyone, > > I'd like to do some rather simple multidimensional curve fitting. > Simple, because usually it's only a plane, or a weak 2nd order > surface. > > Here's the same question which I tried to follow, but I have no clue > how to feed the 2D arrays into leastsq(): > http://stackoverflow.com/questions/529184/simple-multidimensional-curve-fitting If you just want to estimate a multiple linear or polynomial function, then optimize.leastsq is overkill, linalg is enough. The premade solution is to use: statsmodels.sourceforge.net/generated/scikits.statsmodels.regression.linear_model.OLS.html Josef > > Likely I am just missing a small piece of information, and I'd be > happy to get a clue :) > > Thanks in advance, > and here's some code to demonstrate what I want and to get started, > Daniel > > > import pylab > import scipy > import scipy.optimize > > from mpl_toolkits.mplot3d import Axes3D > > #import sys,os,platform > #if platform.system() == 'Windows': > # ? ?home = os.environ['HOMESHARE'] > #elif platform.system() == 'Linux': > # ? ?home = os.environ['HOME'] > #sys.path.append(home + '/python') > #sys.path.append(home + '/11_PythonWork') > #import pylabSettings > > ##****************************************************************************** > ##****************************************************************************** > > ''' > f = p0 + p1*x + p2*y > ''' > > ##------------------------------------------------------------------------------ > def __residual(params, f, x, y): > ? ?''' > ? ?Define fit function; > ? ?Return residual error. > ? ?''' > ? ?p0, p1, p2 = params > ? 
?return p0 + p1*x + p2*y - f > > ## load raw data (=create some dummy data): > dataX = scipy.arange(0,11,1) > dataY = dataX/10. > dataZ = 0.5 + 1.1*dataX + 1.5*dataY > dataXX, dataYY = scipy.meshgrid(dataX,dataY) > dataZZ = 0.5 + 1.1*dataXX + 1.5*dataYY > > ## plot data > pylab.close('all') > fig = pylab.figure() > ax = Axes3D(fig) > ax.plot_wireframe(dataXX, dataYY, dataZZ) > ax.set_xlabel('x') > ax.set_ylabel('y') > ax.set_zlabel('z') > pylab.show() > > ## guess initial values for parameters > p0 = [0., 1., 1.] > print __residual(p0, dataZZ, dataXX, dataYY) > > ## works but is not 2D! > p1, p_cov = scipy.optimize.leastsq(__residual, x0=p0, args=(dataZ, > dataX, dataY)) > print p1 > > ## doesn't work :() > p1, p_cov = scipy.optimize.leastsq(__residual, x0=p0, args=(dataZZ, > dataXX, dataYY)) > print p1 > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From danielstefanmader at googlemail.com Wed Nov 9 13:39:29 2011 From: danielstefanmader at googlemail.com (Daniel Mader) Date: Wed, 9 Nov 2011 19:39:29 +0100 Subject: [SciPy-User] multidimensional least squares fitting In-Reply-To: References: Message-ID: Hi Josef, thanks a lot for this quick answer, I'll definitely have a look at your suggestion! However, I'd be very interested if this is possible with leastsq, too? Maybe I don't need it right now, but you never you what's next in life :) Thanks again, greetings, Daniel 2011/11/9 : > On Wed, Nov 9, 2011 at 12:12 PM, Daniel Mader > wrote: >> Hi everyone, >> >> I'd like to do some rather simple multidimensional curve fitting. >> Simple, because usually it's only a plane, or a weak 2nd order >> surface. >> >> Here's the same question which I tried to follow, but I have no clue >> how to feed the 2D arrays into leastsq(): >> http://stackoverflow.com/questions/529184/simple-multidimensional-curve-fitting > > If you just want to estimate a multiple linear or polynomial function, > then optimize.leastsq is overkill, linalg is enough. > > The premade solution is to use: > > statsmodels.sourceforge.net/generated/scikits.statsmodels.regression.linear_model.OLS.html > > Josef > >> >> Likely I am just missing a small piece of information, and I'd be >> happy to get a clue :) >> >> Thanks in advance, >> and here's some code to demonstrate what I want and to get started, >> Daniel >> >> >> import pylab >> import scipy >> import scipy.optimize >> >> from mpl_toolkits.mplot3d import Axes3D >> >> #import sys,os,platform >> #if platform.system() == 'Windows': >> # ? ?home = os.environ['HOMESHARE'] >> #elif platform.system() == 'Linux': >> # ? ?home = os.environ['HOME'] >> #sys.path.append(home + '/python') >> #sys.path.append(home + '/11_PythonWork') >> #import pylabSettings >> >> ##****************************************************************************** >> ##****************************************************************************** >> >> ''' >> f = p0 + p1*x + p2*y >> ''' >> >> ##------------------------------------------------------------------------------ >> def __residual(params, f, x, y): >> ? ?''' >> ? ?Define fit function; >> ? ?Return residual error. >> ? ?''' >> ? ?p0, p1, p2 = params >> ? ?return p0 + p1*x + p2*y - f >> >> ## load raw data (=create some dummy data): >> dataX = scipy.arange(0,11,1) >> dataY = dataX/10. 
>> dataZ = 0.5 + 1.1*dataX + 1.5*dataY >> dataXX, dataYY = scipy.meshgrid(dataX,dataY) >> dataZZ = 0.5 + 1.1*dataXX + 1.5*dataYY >> >> ## plot data >> pylab.close('all') >> fig = pylab.figure() >> ax = Axes3D(fig) >> ax.plot_wireframe(dataXX, dataYY, dataZZ) >> ax.set_xlabel('x') >> ax.set_ylabel('y') >> ax.set_zlabel('z') >> pylab.show() >> >> ## guess initial values for parameters >> p0 = [0., 1., 1.] >> print __residual(p0, dataZZ, dataXX, dataYY) >> >> ## works but is not 2D! >> p1, p_cov = scipy.optimize.leastsq(__residual, x0=p0, args=(dataZ, >> dataX, dataY)) >> print p1 >> >> ## doesn't work :() >> p1, p_cov = scipy.optimize.leastsq(__residual, x0=p0, args=(dataZZ, >> dataXX, dataYY)) >> print p1 >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From dhanjal at telecom-paristech.fr Wed Nov 9 13:55:10 2011 From: dhanjal at telecom-paristech.fr (Charanpal Dhanjal) Date: Wed, 09 Nov 2011 18:55:10 +0000 Subject: [SciPy-User] Incorrect eigenvectors/values from scipy.sparse.linalg.lobpcg Message-ID: <40b427a91667361fbf2a8b425c88148a@telecom-paristech.fr> Hi all, I wanted to use scipy.sparse.linalg.lobpcg and found that it does not seem to work as described. Here is some code demonstrating the problem: ============================================ import numpy import scipy.sparse.linalg numpy.random.seed(21) n = 20 A = numpy.random.rand(n, n) A = A.T.dot(A) s1, U1 = numpy.linalg.eig(A) print(s1) k = 5 X = numpy.random.rand(A.shape[0], k) w, V = scipy.sparse.linalg.lobpcg(A, X, tol=1e-30, largest=True, maxiter=100) print(w) print(numpy.sum(V**2, 0)) print(numpy.linalg.norm(A.dot(V) - V.dot(numpy.diag(w)))) ============================================ and the output on my machine is: [ 106.1436 4.8357 4.6157 3.6735 2.9786 2.7774 2.4734 1.8922 1.8146 1.5884 1.0942 0.985 0.7575 0.6286 0.4411 0.2275 0.0039 0.0501 0.0364 0.136 ] [ 0.2275+0.j 0.0039+0.j 0.0501+0.j 0.0364+0.j 0.1360+0.j] [ 2.7735 4.7396 3.8368 4.5054 14.7127] 406.94471445 So, it looks like lobpcg is finding the smallest eigenvalues even though largest=True. When I change largest=False, then lobpcg returns the 2nd to 5th largest eigenvectors, i.e. it excludes the very largest for some reason. Furthermore, the norms of the eigenvectors are not 1 and the last print statement shows the eigenvectors are just not correct. Any ideas about this? I tried playing around with tol and maxiter parameters to no avail. I read that some bugs in this function had been fixed in version 0.9 (the version I am using). Also, I am using python 2.7.2 and numpy 1.6.1 on Ubuntu 11.10, if that helps. Thanks in advance for any help, Charanpal From charlesr.harris at gmail.com Wed Nov 9 14:08:44 2011 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 9 Nov 2011 12:08:44 -0700 Subject: [SciPy-User] evaluating B-Splines made with scipy.signal.cspline2d? In-Reply-To: References: Message-ID: On Wed, Nov 9, 2011 at 10:19 AM, Kay F. Jahnke <_kfj at yahoo.com> wrote: > I did have lines over 80 chars after all... > > Zachary Pincus yale.edu> writes: > > > This doesn't answer your specific question, but look at > > scipy.ndimage.map_coordinates() > > for general-purpose spline interpolation of regularly-spaced (e.g. image) > > data. 
If you want to repeatedly interpolate the same data, you can get > the > > spline coefficients with: > > scipy.ndimage.spline_filter() and pass them to map_coordinates() > > with the "prefilter=False" option. > > Thank you very much for your helpful hint. I tried out the code you > suggested > and it seems to do just what I want. It seems to me that I can also > process the > coefficient matrix I get from signal.cspline2d() with > ndimage.map_coordinates() > as well as being able to have ndimage.spline_filter() generate the > coefficients, > though the coefficient matrices the two routines provide aren't identical. > > I believe they both use the same algorithm, i.e., uniform b-splines and prefiltering to get the coefficients, but the boundary conditions may be different. I think the one in signal uses reflection at the ends, which is the common case. Of course, one of the routines may also be buggy ;) Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Wed Nov 9 14:12:54 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 9 Nov 2011 14:12:54 -0500 Subject: [SciPy-User] multidimensional least squares fitting In-Reply-To: References: Message-ID: On Wed, Nov 9, 2011 at 1:39 PM, Daniel Mader wrote: > Hi Josef, > > thanks a lot for this quick answer, I'll definitely have a look at > your suggestion! > > However, I'd be very interested if this is possible with leastsq, too? > Maybe I don't need it right now, but you never you what's next in life > :) there is also optimize.curve_fit as wrapper for least_sq your error function returns 2d instead of 1d def __residual(params, f, x, y): ''' Define fit function; Return residual error. ''' p0, p1, p2 = params #x,y,f = map(np.ravel, (x,y,f)) return np.ravel(p0 + p1*x + p2*y - f) you need one ravel I didn't look at the rest Josef > > Thanks again, > greetings, > > Daniel > > > > 2011/11/9 ?: >> On Wed, Nov 9, 2011 at 12:12 PM, Daniel Mader >> wrote: >>> Hi everyone, >>> >>> I'd like to do some rather simple multidimensional curve fitting. >>> Simple, because usually it's only a plane, or a weak 2nd order >>> surface. >>> >>> Here's the same question which I tried to follow, but I have no clue >>> how to feed the 2D arrays into leastsq(): >>> http://stackoverflow.com/questions/529184/simple-multidimensional-curve-fitting >> >> If you just want to estimate a multiple linear or polynomial function, >> then optimize.leastsq is overkill, linalg is enough. >> >> The premade solution is to use: >> >> statsmodels.sourceforge.net/generated/scikits.statsmodels.regression.linear_model.OLS.html >> >> Josef >> >>> >>> Likely I am just missing a small piece of information, and I'd be >>> happy to get a clue :) >>> >>> Thanks in advance, >>> and here's some code to demonstrate what I want and to get started, >>> Daniel >>> >>> >>> import pylab >>> import scipy >>> import scipy.optimize >>> >>> from mpl_toolkits.mplot3d import Axes3D >>> >>> #import sys,os,platform >>> #if platform.system() == 'Windows': >>> # ? ?home = os.environ['HOMESHARE'] >>> #elif platform.system() == 'Linux': >>> # ? 
?home = os.environ['HOME'] >>> #sys.path.append(home + '/python') >>> #sys.path.append(home + '/11_PythonWork') >>> #import pylabSettings >>> >>> ##****************************************************************************** >>> ##****************************************************************************** >>> >>> ''' >>> f = p0 + p1*x + p2*y >>> ''' >>> >>> ##------------------------------------------------------------------------------ >>> def __residual(params, f, x, y): >>> ? ?''' >>> ? ?Define fit function; >>> ? ?Return residual error. >>> ? ?''' >>> ? ?p0, p1, p2 = params >>> ? ?return p0 + p1*x + p2*y - f >>> >>> ## load raw data (=create some dummy data): >>> dataX = scipy.arange(0,11,1) >>> dataY = dataX/10. >>> dataZ = 0.5 + 1.1*dataX + 1.5*dataY >>> dataXX, dataYY = scipy.meshgrid(dataX,dataY) >>> dataZZ = 0.5 + 1.1*dataXX + 1.5*dataYY >>> >>> ## plot data >>> pylab.close('all') >>> fig = pylab.figure() >>> ax = Axes3D(fig) >>> ax.plot_wireframe(dataXX, dataYY, dataZZ) >>> ax.set_xlabel('x') >>> ax.set_ylabel('y') >>> ax.set_zlabel('z') >>> pylab.show() >>> >>> ## guess initial values for parameters >>> p0 = [0., 1., 1.] >>> print __residual(p0, dataZZ, dataXX, dataYY) >>> >>> ## works but is not 2D! >>> p1, p_cov = scipy.optimize.leastsq(__residual, x0=p0, args=(dataZ, >>> dataX, dataY)) >>> print p1 >>> >>> ## doesn't work :() >>> p1, p_cov = scipy.optimize.leastsq(__residual, x0=p0, args=(dataZZ, >>> dataXX, dataYY)) >>> print p1 >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From danielstefanmader at googlemail.com Wed Nov 9 14:52:17 2011 From: danielstefanmader at googlemail.com (Daniel Mader) Date: Wed, 9 Nov 2011 20:52:17 +0100 Subject: [SciPy-User] multidimensional least squares fitting In-Reply-To: References: Message-ID: > there is also optimize.curve_fit as wrapper for least_sq I thought this would only work with y = f(x)? Will try! > your error function returns 2d instead of 1d > you need one ravel > return np.ravel(p0 + p1*x + p2*y - f) Thank you very much, that's it! Works nicely! From tsyu80 at gmail.com Wed Nov 9 17:02:06 2011 From: tsyu80 at gmail.com (Tony Yu) Date: Wed, 9 Nov 2011 17:02:06 -0500 Subject: [SciPy-User] Ticket #1187: ode crashes if rhs returns a tuple instead of a list Message-ID: I just want to draw attention to the bug report in http://projects.scipy.org/scipy/ticket/1187. Basically, scipy.integrate.ode takes a function as input, and the error occurs if that function returns a tuple (instead of, e.g., a list). If there isn't a simple fix (I can't tell b/c the error occurs within C-code, which I'm not at all proficient in), then I think this should print a more informative error message. Best, -Tony -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bsouthey at gmail.com Wed Nov 9 22:32:09 2011 From: bsouthey at gmail.com (Bruce Southey) Date: Wed, 9 Nov 2011 21:32:09 -0600 Subject: [SciPy-User] Ticket #1187: ode crashes if rhs returns a tuple instead of a list In-Reply-To: References: Message-ID: On Wed, Nov 9, 2011 at 4:02 PM, Tony Yu wrote: > I just want to draw attention to the bug report in > http://projects.scipy.org/scipy/ticket/1187. Basically, scipy.integrate.ode > takes a function as input, and the error occurs if that function returns a > tuple (instead of, e.g., a list). > > If there isn't a simple fix (I can't tell b/c the error occurs within > C-code, which I'm not at all proficient in), then I think this should print > a more informative error message. > > Best, > -Tony > There are 2 full releases and an release candidate since Scipy 0.7.2 was released (2010-04-22). So, could you please update your numpy and scipy installations accordingly? Works for scipy.10.0.rc1 (last part below) Bruce >>> while r.successful() and r.t < t1: r.integrate(r.t+dt) print r.t, r.y array([-0.71+0.237j, 0.40+0.j ]) 1.0 [-0.71+0.237j 0.40+0.j ] array([ 0.191-0.524j, 0.222+0.j ]) 2.0 [ 0.191-0.524j 0.222+0.j ] array([ 0.472+0.527j, 0.154+0.j ]) 3.0 [ 0.472+0.527j 0.154+0.j ] array([-0.619+0.307j, 0.118+0.j ]) 4.0 [-0.619+0.307j 0.118+0.j ] array([ 0.023-0.614j, 0.095+0.j ]) 5.0 [ 0.023-0.614j 0.095+0.j ] array([ 0.586+0.34j, 0.080+0.j ]) 6.0 [ 0.586+0.34j 0.080+0.j ] array([-0.521+0.445j, 0.069+0.j ]) 7.0 [-0.521+0.445j 0.069+0.j ] array([-0.160-0.612j, 0.061+0.j ]) 8.0 [-0.160-0.612j 0.061+0.j ] array([ 0.649+0.15j, 0.054+0.j ]) 9.0 [ 0.649+0.15j 0.054+0.j ] array([-0.384+0.564j, 0.049+0.j ]) 10.0 [-0.384+0.564j 0.049+0.j ] >>> From warren.weckesser at enthought.com Wed Nov 9 22:48:40 2011 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Wed, 9 Nov 2011 21:48:40 -0600 Subject: [SciPy-User] Ticket #1187: ode crashes if rhs returns a tuple instead of a list In-Reply-To: References: Message-ID: Bruce, did you change the return values of f and jac to be tuples instead of lists? It crashes when I run it. ticket1187demo.py: ----- import scipy from scipy.integrate import ode print print "scipy version:", scipy.__version__ print y0, t0 = [1.0j, 2.0], 0 def f(t, y, arg1): return (1j*arg1*y[0] + y[1], -arg1*y[1]**2) def jac(t, y, arg1): return ([1j*arg1, 1], [0, -arg1*2*y[1]]) r = ode(f, jac).set_integrator('zvode', method='bdf', with_jacobian=True) r.set_initial_value(y0, t0).set_f_params(2.0).set_jac_params(2.0) t1 = 10 dt = 1 while r.successful() and r.t < t1: r.integrate(r.t+dt) print r.t, r.y ----- Run it: $ python ticket1187demo.py scipy version: 0.11.0.dev-96e39ec 0-th dimension must be 2 but got 0 (not defined). rv_cb_arr is NULL Call-back cb_f_in_zvode__user__routines failed. Traceback (most recent call last): File "ticket1187demo.py", line 23, in r.integrate(r.t+dt) File "/Users/warren/local_scipy/lib/python2.7/site-packages/scipy/integrate/_ode.py", line 333, in integrate self.f_params, self.jac_params) File "/Users/warren/local_scipy/lib/python2.7/site-packages/scipy/integrate/_ode.py", line 760, in run args[5:])) SystemError: NULL result without error in PyObject_Call Warren On Wed, Nov 9, 2011 at 9:32 PM, Bruce Southey wrote: > On Wed, Nov 9, 2011 at 4:02 PM, Tony Yu wrote: > > I just want to draw attention to the bug report in > > http://projects.scipy.org/scipy/ticket/1187. 
Basically, > scipy.integrate.ode > > takes a function as input, and the error occurs if that function returns > a > > tuple (instead of, e.g., a list). > > > > If there isn't a simple fix (I can't tell b/c the error occurs within > > C-code, which I'm not at all proficient in), then I think this should > print > > a more informative error message. > > > > Best, > > -Tony > > > > There are 2 full releases and an release candidate since Scipy 0.7.2 > was released (2010-04-22). > So, could you please update your numpy and scipy installations accordingly? > > Works for scipy.10.0.rc1 (last part below) > > Bruce > > >>> while r.successful() and r.t < t1: > r.integrate(r.t+dt) > print r.t, r.y > > > array([-0.71+0.237j, 0.40+0.j ]) > 1.0 [-0.71+0.237j 0.40+0.j ] > array([ 0.191-0.524j, 0.222+0.j ]) > 2.0 [ 0.191-0.524j 0.222+0.j ] > array([ 0.472+0.527j, 0.154+0.j ]) > 3.0 [ 0.472+0.527j 0.154+0.j ] > array([-0.619+0.307j, 0.118+0.j ]) > 4.0 [-0.619+0.307j 0.118+0.j ] > array([ 0.023-0.614j, 0.095+0.j ]) > 5.0 [ 0.023-0.614j 0.095+0.j ] > array([ 0.586+0.34j, 0.080+0.j ]) > 6.0 [ 0.586+0.34j 0.080+0.j ] > array([-0.521+0.445j, 0.069+0.j ]) > 7.0 [-0.521+0.445j 0.069+0.j ] > array([-0.160-0.612j, 0.061+0.j ]) > 8.0 [-0.160-0.612j 0.061+0.j ] > array([ 0.649+0.15j, 0.054+0.j ]) > 9.0 [ 0.649+0.15j 0.054+0.j ] > array([-0.384+0.564j, 0.049+0.j ]) > 10.0 [-0.384+0.564j 0.049+0.j ] > >>> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tsyu80 at gmail.com Wed Nov 9 23:36:17 2011 From: tsyu80 at gmail.com (Tony Yu) Date: Wed, 9 Nov 2011 23:36:17 -0500 Subject: [SciPy-User] Ticket #1187: ode crashes if rhs returns a tuple instead of a list In-Reply-To: References: Message-ID: On Wed, Nov 9, 2011 at 10:48 PM, Warren Weckesser < warren.weckesser at enthought.com> wrote: > > Bruce, did you change the return values of f and jac to be tuples instead > of lists? It crashes when I run it. > > ticket1187demo.py: > ----- > import scipy > from scipy.integrate import ode > > print > print "scipy version:", scipy.__version__ > print > > y0, t0 = [1.0j, 2.0], 0 > > def f(t, y, arg1): > return (1j*arg1*y[0] + y[1], -arg1*y[1]**2) > > def jac(t, y, arg1): > return ([1j*arg1, 1], [0, -arg1*2*y[1]]) > > > r = ode(f, jac).set_integrator('zvode', method='bdf', with_jacobian=True) > r.set_initial_value(y0, t0).set_f_params(2.0).set_jac_params(2.0) > t1 = 10 > dt = 1 > > while r.successful() and r.t < t1: > r.integrate(r.t+dt) > print r.t, r.y > ----- > > Run it: > > $ python ticket1187demo.py > > scipy version: 0.11.0.dev-96e39ec > > 0-th dimension must be 2 but got 0 (not defined). > rv_cb_arr is NULL > Call-back cb_f_in_zvode__user__routines failed. > Traceback (most recent call last): > File "ticket1187demo.py", line 23, in > r.integrate(r.t+dt) > File > "/Users/warren/local_scipy/lib/python2.7/site-packages/scipy/integrate/_ode.py", > line 333, in integrate > self.f_params, self.jac_params) > File > "/Users/warren/local_scipy/lib/python2.7/site-packages/scipy/integrate/_ode.py", > line 760, in run > args[5:])) > SystemError: NULL result without error in PyObject_Call > > > Warren > > > > On Wed, Nov 9, 2011 at 9:32 PM, Bruce Southey wrote: > >> On Wed, Nov 9, 2011 at 4:02 PM, Tony Yu wrote: >> > I just want to draw attention to the bug report in >> > http://projects.scipy.org/scipy/ticket/1187. 
Basically, >> scipy.integrate.ode >> > takes a function as input, and the error occurs if that function >> returns a >> > tuple (instead of, e.g., a list). >> > >> > If there isn't a simple fix (I can't tell b/c the error occurs within >> > C-code, which I'm not at all proficient in), then I think this should >> print >> > a more informative error message. >> > >> > Best, >> > -Tony >> > >> >> There are 2 full releases and an release candidate since Scipy 0.7.2 >> was released (2010-04-22). >> So, could you please update your numpy and scipy installations >> accordingly? >> >> Works for scipy.10.0.rc1 (last part below) >> >> Bruce >> >> I forgot to mention it before, but like Warren, I'm using a recent version (0.11.0.dev-96e39ec) and still see the error. -Tony -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthen at gmail.com Thu Nov 10 06:36:12 2011 From: matthen at gmail.com (Matt Henderson) Date: Thu, 10 Nov 2011 11:36:12 +0000 Subject: [SciPy-User] Dotting two sparse rows Message-ID: Hi there, I need to dot two csr matrices with shapes (1, 25675), (1, 25675) together, giving a single number. These rows are extremely sparse- with around 20 non-zero entries each. What's the fastest way to do this? I know the following works: np.dot(a, b.T).todense()[0,0] Cheers, Matt -------------- next part -------------- An HTML attachment was scrubbed... URL: From warren.weckesser at enthought.com Thu Nov 10 08:01:28 2011 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Thu, 10 Nov 2011 07:01:28 -0600 Subject: [SciPy-User] Dotting two sparse rows In-Reply-To: References: Message-ID: On Thu, Nov 10, 2011 at 5:36 AM, Matt Henderson wrote: > Hi there, > > I need to dot two csr matrices with shapes (1, 25675), (1, 25675) > together, giving a single number. These rows are extremely sparse- with > around 20 non-zero entries each. What's the fastest way to do this? I know > the following works: > > np.dot(a, b.T).todense()[0,0] > > Looks like a.multiply(b).sum() is faster: In [248]: a Out[248]: <1x30000 sparse matrix of type '' with 81 stored elements in Compressed Sparse Row format> In [249]: b Out[249]: <1x30000 sparse matrix of type '' with 77 stored elements in Compressed Sparse Row format> In [250]: %timeit np.dot(a, b.T).todense()[0,0] 1000 loops, best of 3: 470 us per loop In [251]: %timeit a.dot(b.T).todense()[0,0] 1000 loops, best of 3: 408 us per loop In [252]: %timeit a.multiply(b).sum() 10000 loops, best of 3: 182 us per loop Warren Cheers, > Matt > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsouthey at gmail.com Thu Nov 10 09:39:37 2011 From: bsouthey at gmail.com (Bruce Southey) Date: Thu, 10 Nov 2011 08:39:37 -0600 Subject: [SciPy-User] Ticket #1187: ode crashes if rhs returns a tuple instead of a list In-Reply-To: References: Message-ID: <4EBBE229.105@gmail.com> On 11/09/2011 04:02 PM, Tony Yu wrote: > I just want to draw attention to the bug report in > http://projects.scipy.org/scipy/ticket/1187. Basically, > scipy.integrate.ode takes a function as input, and the error occurs if > that function returns a tuple (instead of, e.g., a list). > > If there isn't a simple fix (I can't tell b/c the error occurs within > C-code, which I'm not at all proficient in), then I think this should > print a more informative error message. 
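In the meantime, a minimal workaround sketch, reusing the right-hand side from the demo script earlier in this thread: have the function return a list (or ndarray) rather than a tuple.

def f(t, y, arg1):
    # returning a list instead of a tuple avoids the crash described above
    return [1j*arg1*y[0] + y[1], -arg1*y[1]**2]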
> > Best, > -Tony > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user Ah Now I understand, perhaps providing the actual code to the ticket would avoid confusion. Here you are replacing a mutable object with an immutable object so that may be an issue. Bruce -------------- next part -------------- An HTML attachment was scrubbed... URL: From tsyu80 at gmail.com Thu Nov 10 10:05:48 2011 From: tsyu80 at gmail.com (Tony Yu) Date: Thu, 10 Nov 2011 10:05:48 -0500 Subject: [SciPy-User] Ticket #1187: ode crashes if rhs returns a tuple instead of a list In-Reply-To: <4EBBE229.105@gmail.com> References: <4EBBE229.105@gmail.com> Message-ID: On Thu, Nov 10, 2011 at 9:39 AM, Bruce Southey wrote: > ** > On 11/09/2011 04:02 PM, Tony Yu wrote: > > I just want to draw attention to the bug report in > http://projects.scipy.org/scipy/ticket/1187. Basically, > scipy.integrate.ode takes a function as input, and the error occurs if that > function returns a tuple (instead of, e.g., a list). > > If there isn't a simple fix (I can't tell b/c the error occurs within > C-code, which I'm not at all proficient in), then I think this should print > a more informative error message. > > Best, > -Tony > > > _______________________________________________ > SciPy-User mailing listSciPy-User at scipy.orghttp://mail.scipy.org/mailman/listinfo/scipy-user > > Ah > Now I understand, perhaps providing the actual code to the ticket would > avoid confusion. > > Here you are replacing a mutable object with an immutable object so that > may be an issue. > I was thinking the same; for example, if the input function returns an array it also works fine. I'm not sure whether the correct behavior is to simply cast the tuple to a list or to simply raise a more informative error. Unfortunately, I don't understand the C-code well enough to submit a patch. Best, -Tony -------------- next part -------------- An HTML attachment was scrubbed... URL: From gokhansever at gmail.com Thu Nov 10 10:33:22 2011 From: gokhansever at gmail.com (=?UTF-8?Q?G=C3=B6khan_Sever?=) Date: Thu, 10 Nov 2011 08:33:22 -0700 Subject: [SciPy-User] Error using idlsave in IPython Message-ID: Hello, Could you help me to clarify the following idlsave data accessing issue? Quick update: This seems to be an issue with IPython. s1['dn'] call works fine within regular python shell. I1 from scipy import io I2 s1 = io.idl.readsav('test.sav') I3 s1? Type: AttrDict Base Class: String Form: {'dn': array([ 1.02282184e+07, 1.05383408e+07, 1.08758739e+07, 1.12449965e+07, 1. <...> (('r', 'R'), '>f8'), (('v', 'V'), '>f8')]), 'tfit': array([ 4.82394886e+02, 4.18176107e-01])} Namespace: Interactive Length: 11 File: /usr/lib64/python2.7/site-packages/scipy/io/idl.py Definition: s1(self, name) ### Why is the following call failing? 
### I4 s1['dn'] --------------------------------------------------------------------------- KeyError Traceback (most recent call last) /home/gsever/Desktop/python-repo/ipython/IPython/core/prefilter.pyc in prefilter_lines(self, lines, continue_prompt) 358 for lnum, line in enumerate(llines) ]) 359 else: --> 360 out = self.prefilter_line(llines[0], continue_prompt) 361 362 return out /home/gsever/Desktop/python-repo/ipython/IPython/core/prefilter.pyc in prefilter_line(self, line, continue_prompt) 333 return normal_handler.handle(line_info) 334 --> 335 prefiltered = self.prefilter_line_info(line_info) 336 # print "prefiltered line: %r" % prefiltered 337 return prefiltered /home/gsever/Desktop/python-repo/ipython/IPython/core/prefilter.pyc in prefilter_line_info(self, line_info) 273 # print "prefilter_line_info: ", line_info 274 handler = self.find_handler(line_info) --> 275 return handler.handle(line_info) 276 277 def find_handler(self, line_info): /home/gsever/Desktop/python-repo/ipython/IPython/core/prefilter.pyc in handle(self, line_info) 813 814 force_auto = isinstance(obj, IPyAutocall) --> 815 auto_rewrite = getattr(obj, 'rewrite', True) 816 817 if esc == ESC_QUOTE: /usr/lib64/python2.7/site-packages/scipy/io/idl.pyc in __getitem__(self, name) 657 658 def __getitem__(self, name): --> 659 return super(AttrDict, self).__getitem__(name.lower()) 660 661 def __setitem__(self, key, value): KeyError: 'rewrite' Back then I was accessing this key via (when idlsave was a separate module): s1.variables['dn'] but it is long gone in the scipy. The following works. Is this the correct way to access variables from this dictionary? I14 s1.get('dn') O14 array([ 1.02282184e+07, 1.05383408e+07, 1.08758739e+07, 1.12449965e+07, 1.16508267e+07, 1.20997100e+07, Thanks. -- G?khan -------------- next part -------------- An HTML attachment was scrubbed... URL: From gokhansever at gmail.com Thu Nov 10 11:26:32 2011 From: gokhansever at gmail.com (=?UTF-8?Q?G=C3=B6khan_Sever?=) Date: Thu, 10 Nov 2011 09:26:32 -0700 Subject: [SciPy-User] [IPython-User] Error using idlsave in IPython In-Reply-To: References: Message-ID: On Thu, Nov 10, 2011 at 9:09 AM, Thomas Kluyver wrote: > On 10 November 2011 15:33, G?khan Sever wrote: > >> Hello, >> >> Could you help me to clarify the following idlsave data accessing issue? >> Quick update: This seems to be an issue with IPython. s1['dn'] call works >> fine within regular python shell. >> >> > The AttrDict object tries to do item lookup from attribute access, but if > the name is not found, it doesn't translate the error to an AttributeError. > IPython looks for a particular attribute to control its input filtering > behaviour, but it's not prepared for getattr() to raise a KeyError. IPython > should probably catch arbitrary errors there. Feel free to file an issue so > we don't forget about it: > > https://github.com/ipython/ipython/issues > > Thomas > Thanks Thomas, created https://github.com/ipython/ipython/issues/988 -- G?khan -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From warren.weckesser at enthought.com Fri Nov 11 08:38:28 2011 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Fri, 11 Nov 2011 07:38:28 -0600 Subject: [SciPy-User] Ticket #1187: ode crashes if rhs returns a tuple instead of a list In-Reply-To: References: <4EBBE229.105@gmail.com> Message-ID: On Thu, Nov 10, 2011 at 9:05 AM, Tony Yu wrote: > > > On Thu, Nov 10, 2011 at 9:39 AM, Bruce Southey wrote: > >> ** >> On 11/09/2011 04:02 PM, Tony Yu wrote: >> >> I just want to draw attention to the bug report in >> http://projects.scipy.org/scipy/ticket/1187. Basically, >> scipy.integrate.ode takes a function as input, and the error occurs if that >> function returns a tuple (instead of, e.g., a list). >> >> If there isn't a simple fix (I can't tell b/c the error occurs within >> C-code, which I'm not at all proficient in), then I think this should print >> a more informative error message. >> >> >> Best, >> -Tony >> >> >> _______________________________________________ >> SciPy-User mailing listSciPy-User at scipy.orghttp://mail.scipy.org/mailman/listinfo/scipy-user >> >> Ah >> Now I understand, perhaps providing the actual code to the ticket would >> avoid confusion. >> >> Here you are replacing a mutable object with an immutable object so that >> may be an issue. >> > > I was thinking the same; for example, if the input function returns an > array it also works fine. I'm not sure whether the correct behavior is to > simply cast the tuple to a list or to simply raise a more informative > error. Unfortunately, I don't understand the C-code well enough to submit a > patch. > > Looks like a bug in f2py. I've added some comments to the ticket: http://projects.scipy.org/scipy/ticket/1187 Warren > Best, > -Tony > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From takowl at gmail.com Thu Nov 10 11:09:16 2011 From: takowl at gmail.com (Thomas Kluyver) Date: Thu, 10 Nov 2011 16:09:16 +0000 Subject: [SciPy-User] [IPython-User] Error using idlsave in IPython In-Reply-To: References: Message-ID: On 10 November 2011 15:33, G?khan Sever wrote: > Hello, > > Could you help me to clarify the following idlsave data accessing issue? > Quick update: This seems to be an issue with IPython. s1['dn'] call works > fine within regular python shell. > > The AttrDict object tries to do item lookup from attribute access, but if the name is not found, it doesn't translate the error to an AttributeError. IPython looks for a particular attribute to control its input filtering behaviour, but it's not prepared for getattr() to raise a KeyError. IPython should probably catch arbitrary errors there. Feel free to file an issue so we don't forget about it: https://github.com/ipython/ipython/issues Thomas -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Fri Nov 11 15:16:10 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Fri, 11 Nov 2011 21:16:10 +0100 Subject: [SciPy-User] np_inline 0.3 released In-Reply-To: <4EBAA708.60401@cs.wisc.edu> References: <4EBAA708.60401@cs.wisc.edu> Message-ID: On Wed, Nov 9, 2011 at 5:15 PM, J. David Lee wrote: > Hello. > > This morning I released version 0.3 of np_inline, a module for in-lining > C code in python. np_inline is a simple alternative to weave.inline for > embedding C code in python. 
Its main selling points are: > > 1) simplicity - Implemented in a single file with less than 500 lines > including the C-file template, comments, docstrings, and whitespace. > 2) simplicity - Generated C files are human-readable and reasonably > formatted. > 3) multiprocessing support - Works properly with python threads or > processes. > > I've been using this module daily for over a year without issue, and > would be happy to hear any comments or criticisms. > Python 3 support can also be a selling point. Weave doesn't (and may never) have it. Ralf > > A source distribution and documentation is available on the web at: > > http://pages.cs.wisc.edu/~johnl/np_inline/ > > or on github: > > https://github.com/johnnylee/np_inline > > or using easy_install or pip. > > Thanks, > > David > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From paul.anton.letnes at gmail.com Fri Nov 11 19:02:19 2011 From: paul.anton.letnes at gmail.com (Paul Anton Letnes) Date: Sat, 12 Nov 2011 01:02:19 +0100 Subject: [SciPy-User] ImportError: *.so: cannot open shared object file: No such file or directory In-Reply-To: <8A18D8FA4293104C9A710494FD6C273CB63568C1@hq-ex-mb03.ad.navteq.com> References: <8A18D8FA4293104C9A710494FD6C273CB63568C1@hq-ex-mb03.ad.navteq.com> Message-ID: Assuming bash, type this into your shell to export the variable for as long as you keep your shell running. If you want it to stick permanently, add the line to ~/.bashrc. export LD_LIBRARY_PATH=/folder/that/contains/libs:$LD_LIBRARY_PATH Cheers Paul On 2. nov. 2011, at 18:58, Pundurs, Mark wrote: > Thanks, David! How do I (a Linux newbie) add paths to environment variable LD_LIBRARY_PATH? > > ------------------------------ > > Date: Wed, 2 Nov 2011 15:51:56 +0000 > From: David Cournapeau > > Hi Mark, > > On Wed, Nov 2, 2011 at 3:38 PM, Pundurs, Mark wrote: >> I want to use the function stats.norm.isf, but no matter how I try to import it I end up with the error "ImportError: .so: cannot open shared object file: No such file or directory". The .so files cited do exist in /usr/lib (as symbolic links to other .so files that also exist in that directory). From what I've read, that's where they're supposed to be - but I think the Python installation is in a nonstandard location. Is that the problem? How can I work around it? > > I believe RHEL 4 uses g77 as its default fortran compiler, so you have > a custom gfortran build somewhere, am I right ? > > If so, you need to add the paths where libgfortran.so and liblapack.so > are to the environment variable LD_LIBRARY_PATH. Given that scipy has > been built (by someone else for you ?), you may want to ask them about > it for the exact locations of those libraries. > > cheers, > > David > > The information contained in this communication may be CONFIDENTIAL and is intended only for the use of the recipient(s) named above. If you are not the intended recipient, you are hereby notified that any dissemination, distribution, or copying of this communication, or any of its contents, is strictly prohibited. If you have received this communication in error, please notify the sender and delete/destroy the original message and any copy of it from your computer or paper files. 
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From ralf.gommers at googlemail.com Sun Nov 13 14:19:48 2011
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Sun, 13 Nov 2011 20:19:48 +0100
Subject: [SciPy-User] ANN: SciPy 0.10.0 released
Message-ID: 

Hi all,

I am pleased to announce the availability of SciPy 0.10.0. For this release over 100 tickets and pull requests have been closed, and many new features have been added. Some of the highlights are:

  - support for Bento as a build system for scipy
  - generalized and shift-invert eigenvalue problems in sparse.linalg
  - addition of discrete-time linear systems in the signal module

Sources and binaries can be found at , release notes are copied below.

Enjoy,
The SciPy developers

==========================
SciPy 0.10.0 Release Notes
==========================

.. contents::

SciPy 0.10.0 is the culmination of 8 months of hard work. It contains many new features, numerous bug-fixes, improved test coverage and better documentation. There have been a limited number of deprecations and backwards-incompatible changes in this release, which are documented below. All users are encouraged to upgrade to this release, as there are a large number of bug-fixes and optimizations. Moreover, our development attention will now shift to bug-fix releases on the 0.10.x branch, and on adding new features on the development master branch.

Release highlights:

  - Support for Bento as optional build system.
  - Support for generalized eigenvalue problems, and all shift-invert modes available in ARPACK.

This release requires Python 2.4-2.7 or 3.1- and NumPy 1.5 or greater.

New features
============

Bento: new optional build system
--------------------------------

Scipy can now be built with Bento. Bento has some nice features like parallel builds and partial rebuilds, that are not possible with the default build system (distutils). For usage instructions see BENTO_BUILD.txt in the scipy top-level directory.

Currently Scipy has three build systems: distutils, numscons and bento. Numscons is deprecated and will likely be removed in the next release.

Generalized and shift-invert eigenvalue problems in ``scipy.sparse.linalg``
---------------------------------------------------------------------------

The sparse eigenvalue problem solver functions ``scipy.sparse.linalg.eigs/eigsh`` now support generalized eigenvalue problems, and all shift-invert modes available in ARPACK.

Discrete-Time Linear Systems (``scipy.signal``)
-----------------------------------------------

Support for simulating discrete-time linear systems, including ``scipy.signal.dlsim``, ``scipy.signal.dimpulse``, and ``scipy.signal.dstep``, has been added to SciPy. Conversion of linear systems from continuous-time to discrete-time representations is also present via the ``scipy.signal.cont2discrete`` function.

Enhancements to ``scipy.signal``
--------------------------------

A Lomb-Scargle periodogram can now be computed with the new function ``scipy.signal.lombscargle``.

The forward-backward filter function ``scipy.signal.filtfilt`` can now filter the data in a given axis of an n-dimensional numpy array. (Previously it only handled a 1-dimensional array.) Options have been added to allow more control over how the data is extended before filtering.
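A minimal usage sketch of ``lombscargle`` (the signal and frequency grid below are invented for illustration; note that the function takes angular frequencies):

import numpy as np
from scipy import signal

# unevenly sampled 1.5 Hz sinusoid
rng = np.random.RandomState(0)
t = np.sort(10 * rng.rand(200))
y = np.sin(2 * np.pi * 1.5 * t)

# periodogram over a grid of angular frequencies
w = np.linspace(0.1, 20, 1000)
pgram = signal.lombscargle(t, y, w)
print w[pgram.argmax()] / (2 * np.pi)   # peak should sit near 1.5 Hz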
FIR filter design with ``scipy.signal.firwin2`` now has options to create filters of type III (zero at zero and Nyquist frequencies) and IV (zero at zero frequency).

Additional decomposition options (``scipy.linalg``)
---------------------------------------------------

A sort keyword has been added to the Schur decomposition routine (``scipy.linalg.schur``) to allow the sorting of eigenvalues in the resultant Schur form.

Additional special matrices (``scipy.linalg``)
----------------------------------------------

The functions ``hilbert`` and ``invhilbert`` were added to ``scipy.linalg``.

Enhancements to ``scipy.stats``
-------------------------------

* The *one-sided form* of Fisher's exact test is now also implemented in ``stats.fisher_exact``.
* The function ``stats.chi2_contingency`` for computing the chi-square test of independence of factors in a contingency table has been added, along with the related utility functions ``stats.contingency.margins`` and ``stats.contingency.expected_freq``.

Basic support for Harwell-Boeing file format for sparse matrices
----------------------------------------------------------------

Both read and write are supported through a simple function-based API, as well as a more complete API to control the number format. The functions may be found in scipy.sparse.io.

The following features are supported:

* Read and write sparse matrices in the CSC format
* Only real, symmetric, assembled matrices are supported (RUA format)

Deprecated features
===================

``scipy.maxentropy``
--------------------

The maxentropy module is unmaintained, rarely used and has not been functioning well for several releases. Therefore it has been deprecated for this release, and will be removed for scipy 0.11. Logistic regression in scikits.learn is a good alternative for this functionality. The ``scipy.maxentropy.logsumexp`` function has been moved to ``scipy.misc``.

``scipy.lib.blas``
------------------

There are similar BLAS wrappers in ``scipy.linalg`` and ``scipy.lib``. These have now been consolidated as ``scipy.linalg.blas``, and ``scipy.lib.blas`` is deprecated.

Numscons build system
---------------------

The numscons build system is being replaced by Bento, and will be removed in one of the next scipy releases.

Backwards-incompatible changes
==============================

The deprecated name `invnorm` was removed from ``scipy.stats.distributions``; this distribution is available as `invgauss`.

The following deprecated nonlinear solvers from ``scipy.optimize`` have been removed::

  - ``broyden_modified`` (bad performance)
  - ``broyden1_modified`` (bad performance)
  - ``broyden_generalized`` (equivalent to ``anderson``)
  - ``anderson2`` (equivalent to ``anderson``)
  - ``broyden3`` (obsoleted by new limited-memory broyden methods)
  - ``vackar`` (renamed to ``diagbroyden``)

Other changes
=============

``scipy.constants`` has been updated with the CODATA 2010 constants.

``__all__`` dicts have been added to all modules, which has cleaned up the namespaces (particularly useful for interactive work).

An API section has been added to the documentation, giving recommended import guidelines and specifying which submodules are public and which aren't.
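As a quick illustration of the new ``stats.chi2_contingency`` function mentioned above, a minimal usage sketch (the observed counts are invented):

import numpy as np
from scipy import stats

# observed counts for two factors: 2 rows x 3 columns, made-up numbers
obs = np.array([[10, 20, 30],
                [15, 15, 20]])
chi2, p, dof, expected = stats.chi2_contingency(obs)
print chi2, p, dof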
Authors
=======

This release contains work by the following people (contributed at least one patch to this release, names in alphabetical order):

* Jeff Armstrong +
* Matthew Brett
* Lars Buitinck +
* David Cournapeau
* FI$H 2000 +
* Michael McNeil Forbes +
* Matty G +
* Christoph Gohlke
* Ralf Gommers
* Yaroslav Halchenko
* Charles Harris
* Thouis (Ray) Jones +
* Chris Jordan-Squire +
* Robert Kern
* Chris Lasher +
* Wes McKinney +
* Travis Oliphant
* Fabian Pedregosa
* Josef Perktold
* Thomas Robitaille +
* Pim Schellart +
* Anthony Scopatz +
* Skipper Seabold +
* Fazlul Shahriar +
* David Simcha +
* Scott Sinclair +
* Andrey Smirnov +
* Collin RM Stocks +
* Martin Teichmann +
* Jake Vanderplas +
* Gaël Varoquaux +
* Pauli Virtanen
* Stefan van der Walt
* Warren Weckesser
* Mark Wiebe +

A total of 35 people contributed to this release. People with a "+" by their names contributed a patch for the first time.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gokhansever at gmail.com Sun Nov 13 14:40:18 2011
From: gokhansever at gmail.com (=?UTF-8?Q?G=C3=B6khan_Sever?=)
Date: Sun, 13 Nov 2011 12:40:18 -0700
Subject: [SciPy-User] MODIS data and true-color plotting
Message-ID: 

Hello groups,

I have two questions about working with MODIS data.

1-) Is there any light Pythonic HDF-EOS wrapper to handle HDF-EOS data other than PyNIO [http://www.pyngl.ucar.edu/Nio.shtml]? Although I have managed to install that package from its source, it took me many hours to figure out all the installation quirks. Is there something simpler to build, aimed mainly at HDF-EOS data?

2-) Another similar question: Has anybody attempted to create true-color MODIS images (like the ones shown at [http://rapidfire.sci.gsfc.nasa.gov/realtime/]) in Python? So far, I have seen one clear tutorial [ftp://ftp.ssec.wisc.edu/pub/IMAPP/MODIS/TrueColor/] on creating natural color images, but it uses ms2gt [http://nsidc.org/data/modis/ms2gt/], NDVI and IDL. Except for the reflectance correction via NDVI, the ms2gt and IDL parts seem to be implementable in Python.

So far, I have made some progress combining GOES imagery with aircraft data. My next task is to combine MODIS data with aircraft and radar data. I would be happy to get some guidance and code support if any previous work has been done using Python.

Thanks.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From arnaldorusso at gmail.com Sun Nov 13 15:27:28 2011
From: arnaldorusso at gmail.com (Arnaldo Russo)
Date: Sun, 13 Nov 2011 18:27:28 -0200
Subject: [SciPy-User] MODIS data and true-color plotting
In-Reply-To: 
References: 
Message-ID: 

Hi Gökhan,

Try to use "pyhdf":

wget -c http://ufpr.dl.sourceforge.net/project/pysclint/pyhdf/0.8.3/pyhdf-0.8.3.tar.gz
tar zxvf pyhdf-0.8.3.tar.gz
cd pyhdf-0.8.3/
sudo apt-get install libhdf4-dev
sudo apt-get install python2.6-dev
cd pyhdf
sudo apt-get install swig
swig -python hdfext.i
cd ..
export INCLUDE_DIRS=/usr/include/hdf
export LIBRARY_DIRS=/usr/lib
export NOSZIP=1
python setup.py install

Maybe you could use h4toh5 and PyTables.

2 - I'm trying to get ocean color variables through the pyhdf package, but I'm still working on it. If you make any progress, please post your feedback here.

Regards,
Arnaldo

----
*Arnaldo D'Amaral Pereira Granja Russo*
Lab. de Estudos dos Oceanos e Clima
Instituto de Oceanografia
Universidade Federal do Rio Grande
e-mail arnaldorusso [at] gmail [dot] com
tel (53) 3233-6855

2011/11/13 Gökhan Sever <gokhansever at gmail.com>

> Hello groups,
>
> I have two questions about working with MODIS data.
> > 1-) Is there any light Pythonic HDF-EOS wrapper to handle HDF-EOS data > other than PyNIO [http://www.pyngl.ucar.edu/Nio.shtml] Although, I have > managed to install that package from its source, it took me many hours to > figure out all the installation quirks. Something simpler to build and > mainly for HDFEOS data?? > > 2-) Another similar question: Has anybody attempted to create true-color > MODIS images (like the ones shown at [ > http://rapidfire.sci.gsfc.nasa.gov/realtime/]) in Python? So far, I have > seen one clear tutorial [ > ftp://ftp.ssec.wisc.edu/pub/IMAPP/MODIS/TrueColor/] to create natural > color images, but uses ms2gt [http://nsidc.org/data/modis/ms2gt/], NDVI > and IDL. Except the reflectance correction via NDVI, ms2gt and IDL parts > seem to be implemented in Python. > > Till now, I have some progress combining GOES imagery with aircraft data. > My next task is to combine MODIS data with aircraft and radar data. I would > be happy to get some guidance and code support if there is any previous > work been done using Python. > > Thanks. > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Sun Nov 13 15:35:07 2011 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 13 Nov 2011 13:35:07 -0700 Subject: [SciPy-User] MODIS data and true-color plotting In-Reply-To: References: Message-ID: On Sun, Nov 13, 2011 at 12:40 PM, G?khan Sever wrote: > Hello groups, > > I have two questions about working with MODIS data. > > 1-) Is there any light Pythonic HDF-EOS wrapper to handle HDF-EOS data > other than PyNIO [http://www.pyngl.ucar.edu/Nio.shtml] Although, I have > managed to install that package from its source, it took me many hours to > figure out all the installation quirks. Something simpler to build and > mainly for HDFEOS data?? > > 2-) Another similar question: Has anybody attempted to create true-color > MODIS images (like the ones shown at [ > http://rapidfire.sci.gsfc.nasa.gov/realtime/]) in Python? So far, I have > seen one clear tutorial [ > ftp://ftp.ssec.wisc.edu/pub/IMAPP/MODIS/TrueColor/] to create natural > color images, but uses ms2gt [http://nsidc.org/data/modis/ms2gt/], NDVI > and IDL. Except the reflectance correction via NDVI, ms2gt and IDL parts > seem to be implemented in Python. > > Till now, I have some progress combining GOES imagery with aircraft data. > My next task is to combine MODIS data with aircraft and radar data. I would > be happy to get some guidance and code support if there is any previous > work been done using Python. > > It's kind of a pain, no? I ended up using the java swath tool and a script. I'm not proud of it, but I've attached it in case you find parts of it is useful. I hope it was a working version of the script ;) The swath tool omits some of the useful projections. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: amwse.py
Type: text/x-python
Size: 15954 bytes
Desc: not available
URL: 

From gokhansever at gmail.com Sun Nov 13 16:42:20 2011
From: gokhansever at gmail.com (Gökhan Sever)
Date: Sun, 13 Nov 2011 14:42:20 -0700
Subject: Re: [SciPy-User] MODIS data and true-color plotting
In-Reply-To: 
References: 
Message-ID: 

On Sun, Nov 13, 2011 at 1:27 PM, Arnaldo Russo wrote:

> Hi Gökhan,
>
> Try to use "pyhdf":
>
> wget -c http://ufpr.dl.sourceforge.net/project/pysclint/pyhdf/0.8.3/pyhdf-0.8.3.tar.gz
> tar zxvf pyhdf-0.8.3.tar.gz
> cd pyhdf-0.8.3/
> sudo apt-get install libhdf4-dev
> sudo apt-get install python2.6-dev
> cd pyhdf
> sudo apt-get install swig
> swig -python hdfext.i
> cd ..
> export INCLUDE_DIRS=/usr/include/hdf
> export LIBRARY_DIRS=/usr/lib
> export NOSZIP=1
> python setup.py install
>

Hi,

I got pyhdf working. It has a slightly different way of accessing data content compared to PyNIO and netcdf4-python. Example:

In [43]: from pyhdf.SD import *

In [44]: aa = SD('MOD021KM.A2006180.1920.005.2010179032452.hdf')

In [45]: aa.select('Latitude')[:].shape
Out[45]: (406, 271)

Installation of pyhdf was much simpler. It just depends on hdf4.

> Maybe you could use h4toh5 and then PyTables.
>

Does this conversion have much advantage over pure command-based access to the data?

> 2 - I'm trying to get ocean color variables through the pyhdf package, but I'm
> still working on it.
> If you make any progress, please post your feedback here.
>

Which level of product are you working with? Once I figure out true-color plotting, I am going to bring in cloud effective radius data from MOD06 [http://modis.gsfc.nasa.gov/data/dataprod/dataproducts.php?MOD_NUMBER=06]

> Regards,
> Arnaldo
>
> ----
> *Arnaldo D'Amaral Pereira Granja Russo*
> Lab. de Estudos dos Oceanos e Clima
> Instituto de Oceanografia
> Universidade Federal do Rio Grande
> e-mail arnaldorusso [at] gmail [dot] com
> tel (53) 3233-6855
>
> 2011/11/13 Gökhan Sever
>
>> Hello groups,
>>
>> I have two questions about working with MODIS data.
>>
>> 1-) Is there any light Pythonic HDF-EOS wrapper to handle HDF-EOS data
>> other than PyNIO [http://www.pyngl.ucar.edu/Nio.shtml]? Although I have
>> managed to install that package from its source, it took me many hours to
>> figure out all the installation quirks. Is there something simpler to
>> build, aimed mainly at HDF-EOS data?
>>
>> 2-) Another similar question: has anybody attempted to create true-color
>> MODIS images (like the ones shown at
>> [http://rapidfire.sci.gsfc.nasa.gov/realtime/]) in Python? So far, I have
>> seen one clear tutorial [ftp://ftp.ssec.wisc.edu/pub/IMAPP/MODIS/TrueColor/]
>> to create natural color images, but it uses ms2gt
>> [http://nsidc.org/data/modis/ms2gt/], NDVI and IDL. Except for the
>> reflectance correction via NDVI, the ms2gt and IDL parts seem like they
>> could be implemented in Python.
>>
>> Till now, I have made some progress combining GOES imagery with aircraft
>> data. My next task is to combine MODIS data with aircraft and radar data.
>> I would be happy to get some guidance and code support if any previous
>> work has been done using Python.
>>
>> Thanks.
>>
>> _______________________________________________
>> SciPy-User mailing list
>> SciPy-User at scipy.org
>> http://mail.scipy.org/mailman/listinfo/scipy-user
>>
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

--
Gökhan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From gokhansever at gmail.com Sun Nov 13 16:49:02 2011 From: gokhansever at gmail.com (=?UTF-8?Q?G=C3=B6khan_Sever?=) Date: Sun, 13 Nov 2011 14:49:02 -0700 Subject: [SciPy-User] MODIS data and true-color plotting In-Reply-To: References: Message-ID: On Sun, Nov 13, 2011 at 1:35 PM, Charles R Harris wrote: > > > It's kind of a pain, no? I ended up using the java swath tool and a > script. I'm not proud of it, but I've attached it in case you find parts of > it is useful. I hope it was a working version of the script ;) The swath > tool omits some of the useful projections. > > Chuck > Hi, Could you provide a working example version of the script (with data and other scripts included)? I am still not clear about this swath2grid conversion action :) Can ms2gt have a Python equivalent? Isn't basemap doing a similar conversion? For a reference I also have a similar question asked over [ http://stackoverflow.com/questions/7802459/combined-atmospheric-data-visualization ] One notable tool is at ccplot.org for satellite data plotting. -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Sun Nov 13 18:39:28 2011 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 13 Nov 2011 16:39:28 -0700 Subject: [SciPy-User] MODIS data and true-color plotting In-Reply-To: References: Message-ID: On Sun, Nov 13, 2011 at 2:49 PM, G?khan Sever wrote: > > > On Sun, Nov 13, 2011 at 1:35 PM, Charles R Harris < > charlesr.harris at gmail.com> wrote: >> >> >> It's kind of a pain, no? I ended up using the java swath tool and a >> script. I'm not proud of it, but I've attached it in case you find parts of >> it is useful. I hope it was a working version of the script ;) The swath >> tool omits some of the useful projections. >> >> Chuck >> > > Hi, > > Could you provide a working example version of the script (with data and > other scripts included)? I am still not clear about this swath2grid > conversion action :) Can ms2gt have a Python equivalent? Isn't basemap > doing a similar conversion? > > I'll take a look, I think I still have both the geolocation file and the data file somewhere, but they are rather large so it might be easier if you just download the files from the NASA sight once I can recall exactly where that was. You need both for the level 2 products in order to disentangle the overlapping push broom swaths and the viewing geometry and interpolate the result for the projection. It's a non trivial problem that I left to the swath tool. Some of the other tools are supposed to be able to do that also, but the documentation wasn't good enough that I wanted to pursue that line. Also, installing the swath tool on linux leaves some needed info in the .bash_profile file that you might want to move to .bashrc, I don't know how things work on windows. I think the higher level data products are easier to deal with. I wasn't overjoyed with the state of the NASA public software ;) Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
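URL: 

One note on the "Isn't basemap doing a similar conversion?" question above: Basemap reprojects coordinates for plotting, but it does not resample the overlapping swath pixels onto a grid, so a swath-to-grid step (ms2gt, swath2grid or similar) is still needed first. Once the data sit on a regular lon/lat grid, the plotting step itself is short. A minimal sketch, assuming hypothetical 2-D lons/lats arrays and a matching data array from such a resampling step (none of these names come from the thread):

    from mpl_toolkits.basemap import Basemap
    import matplotlib.pyplot as plt

    # lons, lats, data: hypothetical 2-D arrays on a regular grid,
    # e.g. the output of an ms2gt/swath2grid resampling step.
    m = Basemap(projection='cyl',
                llcrnrlon=lons.min(), llcrnrlat=lats.min(),
                urcrnrlon=lons.max(), urcrnrlat=lats.max(),
                resolution='l')
    x, y = m(lons, lats)       # geographic -> map projection coordinates
    m.pcolormesh(x, y, data)   # data must already be free of swath overlap
    m.drawcoastlines()
    plt.show()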
From charlesr.harris at gmail.com Sun Nov 13 19:17:15 2011
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sun, 13 Nov 2011 17:17:15 -0700
Subject: Re: [SciPy-User] MODIS data and true-color plotting
In-Reply-To: 
References: 
Message-ID: 

On Sun, Nov 13, 2011 at 4:39 PM, Charles R Harris wrote:

> On Sun, Nov 13, 2011 at 2:49 PM, Gökhan Sever wrote:
>
>> On Sun, Nov 13, 2011 at 1:35 PM, Charles R Harris <
>> charlesr.harris at gmail.com> wrote:
>>>
>>> It's kind of a pain, no? I ended up using the java swath tool and a
>>> script. I'm not proud of it, but I've attached it in case you find parts of
>>> it is useful. I hope it was a working version of the script ;) The swath
>>> tool omits some of the useful projections.
>>>
>>> Chuck
>>>
>>
>> Hi,
>>
>> Could you provide a working example version of the script (with data and
>> other scripts included)? I am still not clear about this swath2grid
>> conversion action :) Can ms2gt have a Python equivalent? Isn't basemap
>> doing a similar conversion?
>>
>
> I'll take a look, I think I still have both the geolocation file and the
> data file somewhere, but they are rather large so it might be easier if you
> just download the files from the NASA sight once I can recall exactly where
> that was. You need both for the level 2 products in order to disentangle
> the overlapping push broom swaths and the viewing geometry and interpolate
> the result for the projection. It's a non trivial problem that I left to
> the swath tool. Some of the other tools are supposed to be able to do that
> also, but the documentation wasn't good enough that I wanted to pursue that
> line. Also, installing the swath tool on linux leaves some needed info in
> the .bash_profile file that you might want to move to .bashrc, I don't know
> how things work on windows. I think the higher level data products are
> easier to deal with.
>
> I wasn't overjoyed with the state of the NASA public software ;)
>

The data files can be found at ftp://ladsweb.nascom.nasa.gov/allData/5, with

*AQUA*
MYD021KM -- 1 km pixels
MYD02HKM -- half km pixels
MYD02QKM -- quarter km pixels
MYD02SSH -- 5 km subsampling of 1 km pixels.

*TERRA*
MOD021KM -- 1 km pixels
MOD02HKM -- half km pixels
MOD02QKM -- quarter km pixels
MOD02SSH -- 5 km subsampling of 1 km pixels.

The geolocation files start with M{Y,O}D03 with the year/day/file#/005 part the same as for the data files. For instance

MYD02QKM.A2006215.0330.005.2009251204640.hdf

is the data file and

MYD03.A2006215.0330.005.2009251122219.hdf

is the geo-location file. To run a conversion:

In [1]: import amwse
In [2]: amwse.granule_to_h5("MYD02QKM.A2006215.0330.005.2009251204640.hdf")
In [3]: amwse.make_cutouts("MYD02QKM.A2006215.0330.005.2009251204640.h5")

Note that I was only interested in the spectral bands with 250 m spatial resolution, which were just two. The lower resolution data sets contain more bands. If you get something good working I'd be interested.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From gokhansever at gmail.com Sun Nov 13 19:20:12 2011 From: gokhansever at gmail.com (=?UTF-8?Q?G=C3=B6khan_Sever?=) Date: Sun, 13 Nov 2011 17:20:12 -0700 Subject: [SciPy-User] MODIS data and true-color plotting In-Reply-To: References: Message-ID: On Sun, Nov 13, 2011 at 4:39 PM, Charles R Harris wrote: > >> I'll take a look, I think I still have both the geolocation file and the > data file somewhere, but they are rather large so it might be easier if you > just download the files from the NASA sight once I can recall exactly where > that was. You need both for the level 2 products in order to disentangle > the overlapping push broom swaths and the viewing geometry and interpolate > the result for the projection. It's a non trivial problem that I left to > the swath tool. Some of the other tools are supposed to be able to do that > also, but the documentation wasn't good enough that I wanted to pursue that > line. Also, installing the swath tool on linux leaves some needed info in > the .bash_profile file that you might want to move to .bashrc, I don't know > how things work on windows. I think the higher level data products are > easier to deal with. > > I wasn't overjoyed with the state of the NASA public software ;) > > Chuck > Thanks. One thing that is still not clear in my mind is the use of ms2gt or any other swath 2 grid converters (java tools for your example): are these mandatory tools to handle MODIS data? I understand that ms2gt does map projection specialized for MODIS data. So far so good. But what does it mean to plot already projected data in Basemap? What is the solution to this problem? Somehow the IDL True Color tutorial that I provided originally gives me guidance to construct the RGB array, but from that point onward I am not sure how to plot that data in basemap with/without using ms2gt. Any pointers for this map plotting? -- G?khan -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Sun Nov 13 19:39:46 2011 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 13 Nov 2011 17:39:46 -0700 Subject: [SciPy-User] MODIS data and true-color plotting In-Reply-To: References: Message-ID: On Sun, Nov 13, 2011 at 5:20 PM, G?khan Sever wrote: > > > On Sun, Nov 13, 2011 at 4:39 PM, Charles R Harris < > charlesr.harris at gmail.com> wrote: > >> >>> I'll take a look, I think I still have both the geolocation file and the >> data file somewhere, but they are rather large so it might be easier if you >> just download the files from the NASA sight once I can recall exactly where >> that was. You need both for the level 2 products in order to disentangle >> the overlapping push broom swaths and the viewing geometry and interpolate >> the result for the projection. It's a non trivial problem that I left to >> the swath tool. Some of the other tools are supposed to be able to do that >> also, but the documentation wasn't good enough that I wanted to pursue that >> line. Also, installing the swath tool on linux leaves some needed info in >> the .bash_profile file that you might want to move to .bashrc, I don't know >> how things work on windows. I think the higher level data products are >> easier to deal with. >> >> I wasn't overjoyed with the state of the NASA public software ;) >> >> Chuck >> > > Thanks. One thing that is still not clear in my mind is the use of ms2gt > or any other swath 2 grid converters (java tools for your example): are > these mandatory tools to handle MODIS data? 
I understand that ms2gt does > map projection specialized for MODIS data. So far so good. But what does it > mean to plot already projected data in Basemap? What is the solution to > this problem? Somehow the IDL True Color tutorial that I provided > originally gives me guidance to construct the RGB array, but from that > point onward I am not sure how to plot that data in basemap with/without > using ms2gt. Any pointers for this map plotting? > You need to use the same projection as basemap and I don't know what that is or what ancillary data is needed, matplotlib might be a better list for that. There are a ton of available projections in the swath tool, except the orthographic one that I wanted ;) The ms2gt tool is new to me, looks like it might be quite useful and easier to use than swath2grid. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Sun Nov 13 19:53:39 2011 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 13 Nov 2011 17:53:39 -0700 Subject: [SciPy-User] MODIS data and true-color plotting In-Reply-To: References: Message-ID: On Sun, Nov 13, 2011 at 5:39 PM, Charles R Harris wrote: > > > On Sun, Nov 13, 2011 at 5:20 PM, G?khan Sever wrote: > >> >> >> On Sun, Nov 13, 2011 at 4:39 PM, Charles R Harris < >> charlesr.harris at gmail.com> wrote: >> >>> >>>> I'll take a look, I think I still have both the geolocation file and >>> the data file somewhere, but they are rather large so it might be easier if >>> you just download the files from the NASA sight once I can recall exactly >>> where that was. You need both for the level 2 products in order to >>> disentangle the overlapping push broom swaths and the viewing geometry and >>> interpolate the result for the projection. It's a non trivial problem that >>> I left to the swath tool. Some of the other tools are supposed to be able >>> to do that also, but the documentation wasn't good enough that I wanted to >>> pursue that line. Also, installing the swath tool on linux leaves some >>> needed info in the .bash_profile file that you might want to move to >>> .bashrc, I don't know how things work on windows. I think the higher level >>> data products are easier to deal with. >>> >>> I wasn't overjoyed with the state of the NASA public software ;) >>> >>> Chuck >>> >> >> Thanks. One thing that is still not clear in my mind is the use of ms2gt >> or any other swath 2 grid converters (java tools for your example): are >> these mandatory tools to handle MODIS data? I understand that ms2gt does >> map projection specialized for MODIS data. So far so good. But what does it >> mean to plot already projected data in Basemap? What is the solution to >> this problem? Somehow the IDL True Color tutorial that I provided >> originally gives me guidance to construct the RGB array, but from that >> point onward I am not sure how to plot that data in basemap with/without >> using ms2gt. Any pointers for this map plotting? >> > > You need to use the same projection as basemap and I don't know what that > is or what ancillary data is needed, matplotlib might be a better list for > that. There are a ton of available projections in the swath tool, except > the orthographic one that I wanted ;) The ms2gt tool is new to me, looks > like it might be quite useful and easier to use than swath2grid. > > I should mention that if you just plot the raw data you will see double in spots. 
You need software that deals with the overlapping whisk broom swaths and puts them on the map. There are many available projections for the output of that process, so the final product isn't MODIS specific. There are also file formats that are standard, geotiff for example, that other software can use. So that might be a better output file type for you to use than the h5 file format I was using. The GDAL software suite might have something for that; it may even be possible to use it for the swath conversion, but if so it is a secret known only to the developers ;)

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From arnaldorusso at gmail.com Mon Nov 14 06:56:00 2011
From: arnaldorusso at gmail.com (Arnaldo Russo)
Date: Mon, 14 Nov 2011 09:56:00 -0200
Subject: Re: [SciPy-User] MODIS data and true-color plotting
In-Reply-To: 
References: 
Message-ID: 

Hi Gökhan,

> 2 - I'm trying to get ocean color variables through the pyhdf package, but I'm
>> still working on it.
>> If you make any progress, please post your feedback here.
>>
>
> Which level of product are you working with? Once I figure out true-color
> plotting, I am going to bring cloud effective radius data from MOD06 [
> http://modis.gsfc.nasa.gov/data/dataprod/dataproducts.php?MOD_NUMBER=06]
>

I have been working with L2 images processed by SeaDAS. It has many flags available from the bash shell and is quite functional. But the rest of the processing I am trying to run in Python instead of MATLAB: chlorophyll and Rrs match-ups, relations with empirical algorithms, and other products.

>
>>
>> Regards,
>> Arnaldo
>>
>> ----
>> *Arnaldo D'Amaral Pereira Granja Russo*
>> Lab. de Estudos dos Oceanos e Clima
>> Instituto de Oceanografia
>> Universidade Federal do Rio Grande
>> e-mail arnaldorusso [at] gmail [dot] com
>> tel (53) 3233-6855
>>
>> 2011/11/13 Gökhan Sever
>>
>>> Hello groups,
>>>
>>> I have two questions about working with MODIS data.
>>>
>>> 1-) Is there any light Pythonic HDF-EOS wrapper to handle HDF-EOS data
>>> other than PyNIO [http://www.pyngl.ucar.edu/Nio.shtml]? Although I have
>>> managed to install that package from its source, it took me many hours to
>>> figure out all the installation quirks. Is there something simpler to
>>> build, aimed mainly at HDF-EOS data?
>>>
>>> 2-) Another similar question: has anybody attempted to create true-color
>>> MODIS images (like the ones shown at
>>> [http://rapidfire.sci.gsfc.nasa.gov/realtime/]) in Python? So far, I have
>>> seen one clear tutorial [ftp://ftp.ssec.wisc.edu/pub/IMAPP/MODIS/TrueColor/]
>>> to create natural color images, but it uses ms2gt
>>> [http://nsidc.org/data/modis/ms2gt/], NDVI and IDL. Except for the
>>> reflectance correction via NDVI, the ms2gt and IDL parts seem like they
>>> could be implemented in Python.
>>>
>>> Till now, I have made some progress combining GOES imagery with aircraft
>>> data. My next task is to combine MODIS data with aircraft and radar data.
>>> I would be happy to get some guidance and code support if any previous
>>> work has been done using Python.
>>>
>>> Thanks.
>>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >>> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > > > -- > G?khan > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From paul.anton.letnes at gmail.com Mon Nov 14 09:20:30 2011 From: paul.anton.letnes at gmail.com (Paul Anton Letnes) Date: Mon, 14 Nov 2011 15:20:30 +0100 Subject: [SciPy-User] test issues with 0.10 Message-ID: <2999878D-E4FB-4044-98C4-E1DA48A980C1@gmail.com> Hi, when trying to build scipy 0.10 with python 2.7 (from homebrew/python.org), numpy 1.6.1 (built manually) on Mac OS X 10.7.2 and with gcc 4.6.0 (built from source), scipy.test() breaks. Any ideas? Cheers Paul Build and install with: i-courant ~/src/scipy-0.10.0 % ATLAS= BLAS=/System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib LAPACK=/System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libLAPACK.dylib CC=/usr/bin/gcc FC=/usr/local/bin/gfortran python setup.py build i-courant ~/src/scipy-0.10.0 % python setup.py install Test: i-courant ~/src % python -c 'import scipy; scipy.test(verbose=2)' Running unit tests for scipy NumPy version 1.6.1 NumPy is installed in /usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy SciPy version 0.10.0 SciPy is installed in /usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy Python version 2.7.2 (default, Oct 9 2011, 18:03:13) [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] nose version 1.1.2 Tests cophenet(Z) on tdist data set. ... ok Tests cophenet(Z, Y) on tdist data set. ... ok Tests correspond(Z, y) on linkage and CDMs over observation sets of different sizes. ... ok Tests correspond(Z, y) on linkage and CDMs over observation sets of different sizes. Correspondance should be false. ... ok Tests correspond(Z, y) on linkage and CDMs over observation sets of different sizes. Correspondance should be false. ... ok Tests correspond(Z, y) with empty linkage and condensed distance matrix. ... ok Tests num_obs_linkage with observation matrices of multiple sizes. ... ok Tests fcluster(Z, criterion='maxclust', t=2) on a random 3-cluster data set. ... ok Tests fcluster(Z, criterion='maxclust', t=3) on a random 3-cluster data set. ... ok Tests fcluster(Z, criterion='maxclust', t=4) on a random 3-cluster data set. ... ok Tests fclusterdata(X, criterion='maxclust', t=2) on a random 3-cluster data set. ... ok Tests fclusterdata(X, criterion='maxclust', t=3) on a random 3-cluster data set. ... ok Tests fclusterdata(X, criterion='maxclust', t=4) on a random 3-cluster data set. ... ok Tests from_mlab_linkage on empty linkage array. ... ok Tests from_mlab_linkage on linkage array with multiple rows. ... ok Tests from_mlab_linkage on linkage array with single row. ... ok Tests inconsistency matrix calculation (depth=1) on a complete linkage. ... ok Tests inconsistency matrix calculation (depth=2) on a complete linkage. ... ok Tests inconsistency matrix calculation (depth=3) on a complete linkage. ... ok Tests inconsistency matrix calculation (depth=4) on a complete linkage. ... 
ok Tests inconsistency matrix calculation (depth=1, dataset=Q) with single linkage. ... ok Tests inconsistency matrix calculation (depth=2, dataset=Q) with single linkage. ... ok Tests inconsistency matrix calculation (depth=3, dataset=Q) with single linkage. ... ok Tests inconsistency matrix calculation (depth=4, dataset=Q) with single linkage. ... ok Tests inconsistency matrix calculation (depth=1) on a single linkage. ... ok Tests inconsistency matrix calculation (depth=2) on a single linkage. ... ok Tests inconsistency matrix calculation (depth=3) on a single linkage. ... ok Tests inconsistency matrix calculation (depth=4) on a single linkage. ... ok Tests is_isomorphic on test case #1 (one flat cluster, different labellings) ... ok Tests is_isomorphic on test case #2 (two flat clusters, different labelings) ... ok Tests is_isomorphic on test case #3 (no flat clusters) ... ok Tests is_isomorphic on test case #4A (3 flat clusters, different labelings, isomorphic) ... ok Tests is_isomorphic on test case #4B (3 flat clusters, different labelings, nonisomorphic) ... ok Tests is_isomorphic on test case #4C (3 flat clusters, different labelings, isomorphic) ... ok Tests is_isomorphic on test case #5A (1000 observations, 2 random clusters, random permutation of the labeling). Run 3 times. ... ok Tests is_isomorphic on test case #5B (1000 observations, 3 random clusters, random permutation of the labeling). Run 3 times. ... ok Tests is_isomorphic on test case #5C (1000 observations, 5 random clusters, random permutation of the labeling). Run 3 times. ... ok Tests is_isomorphic on test case #5A (1000 observations, 2 random clusters, random permutation of the labeling, slightly nonisomorphic.) Run 3 times. ... ok Tests is_isomorphic on test case #5B (1000 observations, 3 random clusters, random permutation of the labeling, slightly nonisomorphic.) Run 3 times. ... ok Tests is_isomorphic on test case #5C (1000 observations, 5 random clusters, random permutation of the labeling, slightly non-isomorphic.) Run 3 times. ... ok Tests is_monotonic(Z) on 1x4 linkage. Expecting True. ... ok Tests is_monotonic(Z) on 2x4 linkage. Expecting False. ... ok Tests is_monotonic(Z) on 2x4 linkage. Expecting True. ... ok Tests is_monotonic(Z) on 3x4 linkage (case 1). Expecting False. ... ok Tests is_monotonic(Z) on 3x4 linkage (case 2). Expecting False. ... ok Tests is_monotonic(Z) on 3x4 linkage (case 3). Expecting False ... ok Tests is_monotonic(Z) on 3x4 linkage. Expecting True. ... ok Tests is_monotonic(Z) on an empty linkage. ... ok Tests is_monotonic(Z) on clustering generated by single linkage on Iris data set. Expecting True. ... ok Tests is_monotonic(Z) on clustering generated by single linkage on tdist data set. Expecting True. ... ok Tests is_monotonic(Z) on clustering generated by single linkage on tdist data set. Perturbing. Expecting False. ... ok Tests is_valid_im(R) on im over 2 observations. ... ok Tests is_valid_im(R) on im over 3 observations. ... ok Tests is_valid_im(R) with 3 columns. ... ok Tests is_valid_im(R) on im on observation sets between sizes 4 and 15 (step size 3). ... ok Tests is_valid_im(R) on im on observation sets between sizes 4 and 15 (step size 3) with negative link counts. ... ok Tests is_valid_im(R) on im on observation sets between sizes 4 and 15 (step size 3) with negative link height means. ... ok Tests is_valid_im(R) on im on observation sets between sizes 4 and 15 (step size 3) with negative link height standard deviations. ... ok Tests is_valid_im(R) with 5 columns. 
... ok Tests is_valid_im(R) with empty inconsistency matrix. ... ok Tests is_valid_im(R) with integer type. ... ok Tests is_valid_linkage(Z) on linkage over 2 observations. ... ok Tests is_valid_linkage(Z) on linkage over 3 observations. ... ok Tests is_valid_linkage(Z) with 3 columns. ... ok Tests is_valid_linkage(Z) on linkage on observation sets between sizes 4 and 15 (step size 3). ... ok Tests is_valid_linkage(Z) on linkage on observation sets between sizes 4 and 15 (step size 3) with negative counts. ... ok Tests is_valid_linkage(Z) on linkage on observation sets between sizes 4 and 15 (step size 3) with negative distances. ... ok Tests is_valid_linkage(Z) on linkage on observation sets between sizes 4 and 15 (step size 3) with negative indices (left). ... ok Tests is_valid_linkage(Z) on linkage on observation sets between sizes 4 and 15 (step size 3) with negative indices (right). ... ok Tests is_valid_linkage(Z) with 5 columns. ... ok Tests is_valid_linkage(Z) with empty linkage. ... ok Tests is_valid_linkage(Z) with integer type. ... ok Tests leaders using a flat clustering generated by single linkage. ... ok Tests leaves_list(Z) on a 1x4 linkage. ... ok Tests leaves_list(Z) on a 2x4 linkage. ... ok Tests leaves_list(Z) on the Iris data set using average linkage. ... ok Tests leaves_list(Z) on the Iris data set using centroid linkage. ... ok Tests leaves_list(Z) on the Iris data set using complete linkage. ... ok Tests leaves_list(Z) on the Iris data set using median linkage. ... ok Tests leaves_list(Z) on the Iris data set using single linkage. ... ok Tests leaves_list(Z) on the Iris data set using ward linkage. ... ok Tests linkage(Y, 'average') on the tdist data set. ... ok Tests linkage(Y, 'centroid') on the Q data set. ... ok Tests linkage(Y, 'complete') on the Q data set. ... ok Tests linkage(Y, 'complete') on the tdist data set. ... ok Tests linkage(Y) where Y is a 0x4 linkage matrix. Exception expected. ... ok Tests linkage(Y, 'single') on the Q data set. ... ok Tests linkage(Y, 'single') on the tdist data set. ... ok Tests linkage(Y, 'weighted') on the Q data set. ... ok Tests linkage(Y, 'weighted') on the tdist data set. ... ok Tests maxdists(Z) on the Q data set using centroid linkage. ... ok Tests maxdists(Z) on the Q data set using complete linkage. ... ok Tests maxdists(Z) on the Q data set using median linkage. ... ok Tests maxdists(Z) on the Q data set using single linkage. ... ok Tests maxdists(Z) on the Q data set using Ward linkage. ... ok Tests maxdists(Z) on empty linkage. Expecting exception. ... ok Tests maxdists(Z) on linkage with one cluster. ... ok Tests maxinconsts(Z, R) on the Q data set using centroid linkage. ... ok Tests maxinconsts(Z, R) on the Q data set using complete linkage. ... ok Tests maxinconsts(Z, R) on the Q data set using median linkage. ... ok Tests maxinconsts(Z, R) on the Q data set using single linkage. ... ok Tests maxinconsts(Z, R) on the Q data set using Ward linkage. ... ok Tests maxinconsts(Z, R) on linkage and inconsistency matrices with different numbers of clusters. Expecting exception. ... ok Tests maxinconsts(Z, R) on empty linkage. Expecting exception. ... ok Tests maxinconsts(Z, R) on linkage with one cluster. ... ok Tests maxRstat(Z, R, 0) on the Q data set using centroid linkage. ... ok Tests maxRstat(Z, R, 0) on the Q data set using complete linkage. ... ok Tests maxRstat(Z, R, 0) on the Q data set using median linkage. ... ok Tests maxRstat(Z, R, 0) on the Q data set using single linkage. ... 
ok Tests maxRstat(Z, R, 0) on the Q data set using Ward linkage. ... ok Tests maxRstat(Z, R, 0) on linkage and inconsistency matrices with different numbers of clusters. Expecting exception. ... ok Tests maxRstat(Z, R, 0) on empty linkage. Expecting exception. ... ok Tests maxRstat(Z, R, 0) on linkage with one cluster. ... ok Tests maxRstat(Z, R, 1) on the Q data set using centroid linkage. ... ok Tests maxRstat(Z, R, 1) on the Q data set using complete linkage. ... ok Tests maxRstat(Z, R, 1) on the Q data set using median linkage. ... ok Tests maxRstat(Z, R, 1) on the Q data set using single linkage. ... ok Tests maxRstat(Z, R, 1) on the Q data set using Ward linkage. ... ok Tests maxRstat(Z, R, 1) on linkage and inconsistency matrices with different numbers of clusters. Expecting exception. ... ok Tests maxRstat(Z, R, 1) on empty linkage. Expecting exception. ... ok Tests maxRstat(Z, R, 1) on linkage with one cluster. ... ok Tests maxRstat(Z, R, 2) on the Q data set using centroid linkage. ... ok Tests maxRstat(Z, R, 2) on the Q data set using complete linkage. ... ok Tests maxRstat(Z, R, 2) on the Q data set using median linkage. ... ok Tests maxRstat(Z, R, 2) on the Q data set using single linkage. ... ok Tests maxRstat(Z, R, 2) on the Q data set using Ward linkage. ... ok Tests maxRstat(Z, R, 2) on linkage and inconsistency matrices with different numbers of clusters. Expecting exception. ... ok Tests maxRstat(Z, R, 2) on empty linkage. Expecting exception. ... ok Tests maxRstat(Z, R, 2) on linkage with one cluster. ... ok Tests maxRstat(Z, R, 3) on the Q data set using centroid linkage. ... ok Tests maxRstat(Z, R, 3) on the Q data set using complete linkage. ... ok Tests maxRstat(Z, R, 3) on the Q data set using median linkage. ... ok Tests maxRstat(Z, R, 3) on the Q data set using single linkage. ... ok Tests maxRstat(Z, R, 3) on the Q data set using Ward linkage. ... ok Tests maxRstat(Z, R, 3) on linkage and inconsistency matrices with different numbers of clusters. Expecting exception. ... ok Tests maxRstat(Z, R, 3) on empty linkage. Expecting exception. ... ok Tests maxRstat(Z, R, 3) on linkage with one cluster. ... ok Tests maxRstat(Z, R, 3.3). Expecting exception. ... ok Tests maxRstat(Z, R, -1). Expecting exception. ... ok Tests maxRstat(Z, R, 4). Expecting exception. ... ok Tests num_obs_linkage(Z) on linkage over 2 observations. ... ok Tests num_obs_linkage(Z) on linkage over 3 observations. ... ok Tests num_obs_linkage(Z) on linkage on observation sets between sizes 4 and 15 (step size 3). ... ok Tests num_obs_linkage(Z) with empty linkage. ... ok Tests to_mlab_linkage on linkage array with multiple rows. ... ok Tests to_mlab_linkage on empty linkage array. ... ok Tests to_mlab_linkage on linkage array with single row. ... ok test_hierarchy.load_testing_files ... ok Ticket #505. ... ok Testing that kmeans2 init methods work. ... ok Testing simple call to kmeans2 with rank 1 data. ... ok Testing simple call to kmeans2 with rank 1 data. ... ok Testing simple call to kmeans2 and its results. ... ok Regression test for #546: fail when k arg is 0. ... ok This will cause kmean to have a cluster with no points. ... ok test_kmeans_simple (test_vq.TestKMean) ... ok test_large_features (test_vq.TestKMean) ... ok test_py_vq (test_vq.TestVq) ... ok test_py_vq2 (test_vq.TestVq) ... ok test_vq (test_vq.TestVq) ... ok Test special rank 1 vq algo, python implementation. ... ok test_codata.test_find ... ok test_codata.test_basic_table_parse ... ok test_codata.test_basic_lookup ... 
ok test_codata.test_find_all ... ok test_codata.test_find_single ... ok test_codata.test_2002_vs_2006 ... ok Check that updating stored values with exact ones worked. ... ok test_constants.test_fahrenheit_to_celcius ... ok test_constants.test_celcius_to_kelvin ... ok test_constants.test_kelvin_to_celcius ... ok test_constants.test_fahrenheit_to_kelvin ... ok test_constants.test_kelvin_to_fahrenheit ... ok test_constants.test_celcius_to_fahrenheit ... ok test_constants.test_lambda_to_nu ... ok test_constants.test_nu_to_lambda ... ok test_definition (test_basic.TestDoubleFFT) ... ok test_djbfft (test_basic.TestDoubleFFT) ... ok test_n_argument_real (test_basic.TestDoubleFFT) ... ok test_definition (test_basic.TestDoubleIFFT) ... FAIL test_definition_real (test_basic.TestDoubleIFFT) ... ok test_djbfft (test_basic.TestDoubleIFFT) ... FAIL test_random_complex (test_basic.TestDoubleIFFT) ... zsh: segmentation fault python -c 'import scipy; scipy.test(verbose=2)'

From guziy.sasha at gmail.com Mon Nov 14 12:05:36 2011
From: guziy.sasha at gmail.com (Oleksandr Huziy)
Date: Mon, 14 Nov 2011 12:05:36 -0500
Subject: [SciPy-User] kdtree, custom distance function
Message-ID: 

Hello,

I am trying to use scipy.spatial.kdtree to interpolate data from a lat/lon grid to a set of points (also with lat/lon coordinates).

Is it possible to specify a custom distance function for the kdtree that should be used for querying?

Also, is there a function that computes distance on a sphere in scipy.spatial?

thanks

--
Oleksandr Huziy
UQAM

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From kooleyba at gmail.com Mon Nov 14 12:11:04 2011
From: kooleyba at gmail.com (Denis Yershov)
Date: Mon, 14 Nov 2011 23:11:04 +0600
Subject: [SciPy-User] convolution of 2d arrays of exotic complex numbers (dual and split)
Message-ID: 

I used scipy.signal.convolve to work with complex-number arrays. Now I need to convolve arrays of dual numbers (i*i=0) and split-complex numbers (i*i=1), so I have a class (http://dl.dropbox.com/u/4988243/iComplex.py) which implements these numbers:

class PlainComplex(Complex):
    def __mul__(self, other):
        other = ToComplex(other)
        return PlainComplex(self.re*other.re - self.im*other.im,
                            self.im*other.re + self.re*other.im)

class DualComplex(Complex):
    def __mul__(self, other):
        other = ToComplex(other)
        return DualComplex(self.re*other.re,
                           self.im*other.re + self.re*other.im)

class SplitComplex(Complex):
    def __mul__(self, other):
        other = ToComplex(other)
        return SplitComplex(self.re*other.re + self.im*other.im,
                            self.im*other.re + self.re*other.im)

As you can see, only the multiplication function is being changed. The question: is it possible to feed arrays of such objects to scipy's convolution function? When I try to do it for the PlainComplex class it seems to work wrong (different from using python's complex numbers):

http://dl.dropbox.com/u/4988243/sandbox.py - my complex
http://dl.dropbox.com/u/4988243/sandbox2.py - python complex

Or maybe there is some alternative solution?
thanx!
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
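For the kd-tree question above: scipy's kd-trees only support axis-aligned (Minkowski-metric) queries, but a common workaround for lat/lon data is to map the points onto the unit sphere in 3-D and query with ordinary Euclidean (chord) distance, which increases monotonically with great-circle distance. A minimal sketch, assuming hypothetical grid_lon/grid_lat and pt_lon/pt_lat arrays in degrees (none of these names come from the thread):

    import numpy as np
    from scipy.spatial import cKDTree

    def lonlat_to_xyz(lon, lat):
        # Degrees -> 3-D coordinates on the unit sphere.
        lon, lat = np.radians(lon), np.radians(lat)
        return np.column_stack((np.cos(lat) * np.cos(lon),
                                np.cos(lat) * np.sin(lon),
                                np.sin(lat)))

    tree = cKDTree(lonlat_to_xyz(grid_lon.ravel(), grid_lat.ravel()))
    chord, idx = tree.query(lonlat_to_xyz(pt_lon, pt_lat))
    # Chord length -> great-circle distance (Earth radius ~6371 km assumed):
    dist_km = 2.0 * 6371.0 * np.arcsin(chord / 2.0)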
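And for the convolution question: one alternative that sidesteps object arrays entirely is to exploit the fact that convolution is bilinear, so each exotic product decomposes into ordinary real convolutions of the component arrays. A sketch under that observation (the function names are made up, not part of scipy):

    from scipy.signal import convolve

    def convolve_dual(xr, xi, yr, yi, mode='full'):
        # Dual numbers: (a + b*e)*(c + d*e) = a*c + (a*d + b*c)*e, since e*e = 0.
        re = convolve(xr, yr, mode=mode)
        im = convolve(xr, yi, mode=mode) + convolve(xi, yr, mode=mode)
        return re, im

    def convolve_split(xr, xi, yr, yi, mode='full'):
        # Split-complex: (a + b*j)*(c + d*j) = (a*c + b*d) + (a*d + b*c)*j,
        # since j*j = 1.
        re = convolve(xr, yr, mode=mode) + convolve(xi, yi, mode=mode)
        im = convolve(xr, yi, mode=mode) + convolve(xi, yr, mode=mode)
        return re, im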
From guziy.sasha at gmail.com Mon Nov 14 12:30:41 2011
From: guziy.sasha at gmail.com (Oleksandr Huziy)
Date: Mon, 14 Nov 2011 12:30:41 -0500
Subject: Re: [SciPy-User] convolution of 2d arrays of exotic complex numbers (dual and split)
In-Reply-To: 
References: 
Message-ID: 

Hi Denis,

I am not sure that this can work, since it is possible that the convolution is computed using a Fourier transform (a performance gain). So maybe you get something weird when it tries to multiply ordinary complex numbers with your special complex numbers during the Fourier transform. Though I am not a great specialist.

Have a nice evening, I suppose))

--
Oleksandr Huziy

2011/11/14 Denis Yershov

> I used scipy.signal.convolve to work with complex-number arrays. Now I
> need to convolve arrays of dual numbers (i*i=0) and split-complex numbers
> (i*i=1), so I have a class (http://dl.dropbox.com/u/4988243/iComplex.py)
> which implements these numbers:
>
> class PlainComplex(Complex):
>     def __mul__(self, other):
>         other = ToComplex(other)
>         return PlainComplex(self.re*other.re - self.im*other.im,
>                             self.im*other.re + self.re*other.im)
>
> class DualComplex(Complex):
>     def __mul__(self, other):
>         other = ToComplex(other)
>         return DualComplex(self.re*other.re,
>                            self.im*other.re + self.re*other.im)
>
> class SplitComplex(Complex):
>     def __mul__(self, other):
>         other = ToComplex(other)
>         return SplitComplex(self.re*other.re + self.im*other.im,
>                             self.im*other.re + self.re*other.im)
>
> As you can see, only the multiplication function is being changed.
> The question: is it possible to feed arrays of such objects to scipy's
> convolution function? When I try to do it for the PlainComplex class it
> seems to work wrong (different from using python's complex numbers):
> http://dl.dropbox.com/u/4988243/sandbox.py - my complex
> http://dl.dropbox.com/u/4988243/sandbox2.py - python complex
>
> Or maybe there is some alternative solution?
> thanx!
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ralf.gommers at googlemail.com Mon Nov 14 14:48:15 2011
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Mon, 14 Nov 2011 20:48:15 +0100
Subject: Re: [SciPy-User] test issues with 0.10
In-Reply-To: <2999878D-E4FB-4044-98C4-E1DA48A980C1@gmail.com>
References: <2999878D-E4FB-4044-98C4-E1DA48A980C1@gmail.com>
Message-ID: 

On Mon, Nov 14, 2011 at 3:20 PM, Paul Anton Letnes <paul.anton.letnes at gmail.com> wrote:

> Hi, when trying to build scipy 0.10 with python 2.7 (from homebrew/python.org),
> numpy 1.6.1 (built manually) on Mac OS X 10.7.2 and with gcc 4.6.0 (built
> from source), scipy.test() breaks. Any ideas?
>

Was everything including python itself built with gcc 4.6.0? What was the last version of scipy that did work for you?

Ralf
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From paul.anton.letnes at gmail.com Mon Nov 14 15:17:35 2011
From: paul.anton.letnes at gmail.com (Paul Anton Letnes)
Date: Mon, 14 Nov 2011 21:17:35 +0100
Subject: Re: [SciPy-User] test issues with 0.10
In-Reply-To: 
References: <2999878D-E4FB-4044-98C4-E1DA48A980C1@gmail.com>
Message-ID: <4496B335-79BC-4444-9A4D-78B38ACC0F8D@gmail.com>

On 14. nov.
2011, at 20:48, Ralf Gommers wrote: > > > On Mon, Nov 14, 2011 at 3:20 PM, Paul Anton Letnes wrote: > Hi, when trying to build scipy 0.10 with python 2.7 (from homebrew/python.org), numpy 1.6.1 (built manually) on Mac OS X 10.7.2 and with gcc 4.6.0 (built from source), scipy.test() breaks. Any ideas? > > Was everything including python itself built with gcc 4.6.0? No; I'm sorry about the screwup on version numbers. Here are the correct ones: i-courant ~/src/scipy-0.10.0 % python Python 2.7.2 (default, Oct 9 2011, 18:03:13) [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin Type "help", "copyright", "credits" or "license" for more information. so python was built with Apple GCC 4.2.1. The same goes for numpy. The reason is that numpy and scipy setup tries to send the -faltivec flag to the compiler, which only Apple's gcc accepts. Same gcc version with scipy. The gfortran version, however, is 4.6.0. Apple's gcc installation does not include gfortran, so that's why I installed a separate gfortran. (As a side question, what's a reliable way of getting all this (recent gfortran, numpy, scipy) installed? Homebrew is not very helpful with alternative gcc installations, neither is Apple?) > What was the last version of scipy that did work for you? 0.10.b2 was working fine, although not all tests passed. I figured it would be ironed out before release. (I did not use any of the problematic sub-modules anyway.) Cheers Paul From ralf.gommers at googlemail.com Mon Nov 14 15:56:36 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Mon, 14 Nov 2011 21:56:36 +0100 Subject: [SciPy-User] test issues with 0.10 In-Reply-To: <4496B335-79BC-4444-9A4D-78B38ACC0F8D@gmail.com> References: <2999878D-E4FB-4044-98C4-E1DA48A980C1@gmail.com> <4496B335-79BC-4444-9A4D-78B38ACC0F8D@gmail.com> Message-ID: On Mon, Nov 14, 2011 at 9:17 PM, Paul Anton Letnes < paul.anton.letnes at gmail.com> wrote: > > On 14. nov. 2011, at 20:48, Ralf Gommers wrote: > > > > > > > On Mon, Nov 14, 2011 at 3:20 PM, Paul Anton Letnes < > paul.anton.letnes at gmail.com> wrote: > > Hi, when trying to build scipy 0.10 with python 2.7 (from homebrew/ > python.org), numpy 1.6.1 (built manually) on Mac OS X 10.7.2 and with gcc > 4.6.0 (built from source), scipy.test() breaks. Any ideas? > > > > Was everything including python itself built with gcc 4.6.0? > > No; I'm sorry about the screwup on version numbers. Here are the correct > ones: > > i-courant ~/src/scipy-0.10.0 % python > Python 2.7.2 (default, Oct 9 2011, 18:03:13) > [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin > Type "help", "copyright", "credits" or "license" for more information. > so python was built with Apple GCC 4.2.1. The same goes for numpy. The > reason is that numpy and scipy setup tries to send the -faltivec flag to > the compiler, which only Apple's gcc accepts. Same gcc version with scipy. > The gfortran version, however, is 4.6.0. Apple's gcc installation does not > include gfortran, so that's why I installed a separate gfortran. > That's not the correct gfortran. Through homebrew you should be able to get the right one, or download it directly: ( http://r.research.att.com/gfortran-lion-5666-3.pkg). It is version 4.2.4, but may report itself as 4.2.1. The -faltivec thing was a numpy distutils issue that is fixed in numpy master. It shouldn't have given you any problems, it just caused you to miss out on SSE optimizations. > > (As a side question, what's a reliable way of getting all this (recent > gfortran, numpy, scipy) installed? 
Homebrew is not very helpful with
> alternative gcc installations, neither is Apple?)
>

On OS X 10.7 this is still a bit painful due to the switch to llvm-gcc as the default compiler. But if you get the right gfortran, linked also from http://scipy.org/Installing_SciPy/Mac_OS_X, and set gcc as the default compiler, you should be good to go.

Of course there are binary installers too that work on 10.7 if you don't want to fight with compilers. That's just a matter of downloading Python from python.org and numpy/scipy from Sourceforge.

>
> > What was the last version of scipy that did work for you?
>
> 0.10.b2 was working fine, although not all tests passed. I figured it
> would be ironed out before release. (I did not use any of the problematic
> sub-modules anyway.)
>

With the same compilers, that's odd. No changes to fftpack code or tests went in since beta 2.

Ralf
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rajesh at tryangles.in Mon Nov 14 21:05:12 2011
From: rajesh at tryangles.in (RAJESH J)
Date: Tue, 15 Nov 2011 07:35:12 +0530
Subject: [SciPy-User] Install Scipy in a TG environment
Message-ID: <1321322712.10458.5.camel@chandrahas>

Hello All,

I have installed tg2.1 in a virtual environment successfully. I would like to install scipy, numpy and matplotlib in the same environment. I have activated the environment, and when trying to install, the following error message appears:

    func(*targs, **kargs)
  File "/tmp/easy_install-BYfIWo/numpy-1.6.1/numpy/distutils/misc_util.py", line 252, in clean_up_temporary_directory
ImportError: No module named numpy.distutils
Error in sys.exitfunc:
Traceback (most recent call last):
  File "/usr/lib/python2.6/atexit.py", line 24, in _run_exitfuncs
    func(*targs, **kargs)
  File "/tmp/easy_install-BYfIWo/numpy-1.6.1/numpy/distutils/misc_util.py", line 252, in clean_up_temporary_directory
ImportError: No module named numpy.distutils

Please help me in this regard.

Regards,
Rajesh J

From gustavo.goretkin at gmail.com Tue Nov 15 00:37:25 2011
From: gustavo.goretkin at gmail.com (Gustavo Goretkin)
Date: Tue, 15 Nov 2011 00:37:25 -0500
Subject: Re: [SciPy-User] kdtree, custom distance function
In-Reply-To: 
References: 
Message-ID: 

The kdtree algorithm uses axis-aligned partitions of the space, so I do not think it can work with a general distance metric. I am not speaking from much experience, but you may want to consider "GNAT", described here [1] and implemented in this library [2], which contains python bindings for much of the library's functionality, but apparently not the data structures.

Gustavo

[1] http://infolab.stanford.edu/~sergey/near.html
[2] http://ompl.kavrakilab.org/classompl_1_1NearestNeighborsGNAT.html

On Mon, Nov 14, 2011 at 12:05 PM, Oleksandr Huziy wrote:
> Hello,
>
> I am trying to use scipy.spatial.kdtree to interpolate data from a lat/lon
> grid to a set of points (also with lat/lon coordinates).
>
> Is it possible to specify a custom distance function for the kdtree that
> should be used for querying?
>
> Also is there a function that computes distance on a sphere in
> scipy.spatial?
> > thanks > > -- > Oleksandr Huziy > UQAM > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From cournape at gmail.com Tue Nov 15 01:09:31 2011 From: cournape at gmail.com (David Cournapeau) Date: Tue, 15 Nov 2011 06:09:31 +0000 Subject: [SciPy-User] Install Scipy in a TG environment In-Reply-To: <1321322712.10458.5.camel@chandrahas> References: <1321322712.10458.5.camel@chandrahas> Message-ID: On Tue, Nov 15, 2011 at 2:05 AM, RAJESH J wrote: > Hello All, > I have installed tg2.1 on a virtual environment successfully. > > I would like to install scipy,numpy and matplotlib in the same > environment. > > I have activated the environment and when trying to install following > error message appears. > > ? func(*targs, **kargs) > ?File > "/tmp/easy_install-BYfIWo/numpy-1.6.1/numpy/distutils/misc_util.py", > line 252, in clean_up_temporary_directory > ImportError: No module named numpy.distutils > Error in sys.exitfunc: > Traceback (most recent call last): > ?File "/usr/lib/python2.6/atexit.py", line 24, in _run_exitfuncs > ? ?func(*targs, **kargs) > ?File > "/tmp/easy_install-BYfIWo/numpy-1.6.1/numpy/distutils/misc_util.py", > line 252, in clean_up_temporary_directory > ImportError: No module named numpy.distutils Don't use easy_install, but install numpy the standard way (python setup.py install from numpy sources). David From paul.anton.letnes at gmail.com Tue Nov 15 03:00:50 2011 From: paul.anton.letnes at gmail.com (Paul Anton Letnes) Date: Tue, 15 Nov 2011 09:00:50 +0100 Subject: [SciPy-User] test issues with 0.10 In-Reply-To: References: <2999878D-E4FB-4044-98C4-E1DA48A980C1@gmail.com> <4496B335-79BC-4444-9A4D-78B38ACC0F8D@gmail.com> Message-ID: <671546CF-1424-41E0-8AF5-5785425F2712@gmail.com> Trying again; to me it does not look like the compiler is the issue here. But what do I know :) OT: Does anyone know what the difficulty is with building a working gfortran on OS X? This is something I never understood properly. Build: i-courant ~/src/scipy-0.10.0 % ATLAS= BLAS=/System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib LAPACK=/System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libLAPACK.dylib CC=/usr/bin/gcc FC=/usr/local/bin/gfortran-4.2 CXX=/usr/bin/g++ python setup.py build Install: i-courant ~/src/scipy-0.10.0 % python setup.py install Test results: i-courant /tmp % python -c 'import scipy; scipy.test(verbose=1)' [8:56:33 on 11-11-15] Running unit tests for scipy NumPy version 1.6.1 NumPy is installed in /usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy SciPy version 0.10.0 SciPy is installed in /usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy Python version 2.7.2 (default, Oct 9 2011, 18:03:13) [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] nose version 1.1.2 ...................................................................................................................................................................................F.python(78903) malloc: *** error for object 0x104bf78e8: incorrect checksum for freed object - object was probably modified after being freed. 
*** set a breakpoint in malloc_error_break to debug zsh: abort python -c 'import scipy; scipy.test(verbose=1)' i-courant /tmp/paulanto % python -c 'import scipy; scipy.test(verbose=2)' [8:56:52 on 11-11-15] Running unit tests for scipy NumPy version 1.6.1 NumPy is installed in /usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy SciPy version 0.10.0 SciPy is installed in /usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy Python version 2.7.2 (default, Oct 9 2011, 18:03:13) [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] nose version 1.1.2 Tests cophenet(Z) on tdist data set. ... ok Tests cophenet(Z, Y) on tdist data set. ... ok Tests correspond(Z, y) on linkage and CDMs over observation sets of different sizes. ... ok Tests correspond(Z, y) on linkage and CDMs over observation sets of different sizes. Correspondance should be false. ... ok Tests correspond(Z, y) on linkage and CDMs over observation sets of different sizes. Correspondance should be false. ... ok Tests correspond(Z, y) with empty linkage and condensed distance matrix. ... ok Tests num_obs_linkage with observation matrices of multiple sizes. ... ok Tests fcluster(Z, criterion='maxclust', t=2) on a random 3-cluster data set. ... ok Tests fcluster(Z, criterion='maxclust', t=3) on a random 3-cluster data set. ... ok Tests fcluster(Z, criterion='maxclust', t=4) on a random 3-cluster data set. ... ok Tests fclusterdata(X, criterion='maxclust', t=2) on a random 3-cluster data set. ... ok Tests fclusterdata(X, criterion='maxclust', t=3) on a random 3-cluster data set. ... ok Tests fclusterdata(X, criterion='maxclust', t=4) on a random 3-cluster data set. ... ok Tests from_mlab_linkage on empty linkage array. ... ok Tests from_mlab_linkage on linkage array with multiple rows. ... ok Tests from_mlab_linkage on linkage array with single row. ... ok Tests inconsistency matrix calculation (depth=1) on a complete linkage. ... ok Tests inconsistency matrix calculation (depth=2) on a complete linkage. ... ok Tests inconsistency matrix calculation (depth=3) on a complete linkage. ... ok Tests inconsistency matrix calculation (depth=4) on a complete linkage. ... ok Tests inconsistency matrix calculation (depth=1, dataset=Q) with single linkage. ... ok Tests inconsistency matrix calculation (depth=2, dataset=Q) with single linkage. ... ok Tests inconsistency matrix calculation (depth=3, dataset=Q) with single linkage. ... ok Tests inconsistency matrix calculation (depth=4, dataset=Q) with single linkage. ... ok Tests inconsistency matrix calculation (depth=1) on a single linkage. ... ok Tests inconsistency matrix calculation (depth=2) on a single linkage. ... ok Tests inconsistency matrix calculation (depth=3) on a single linkage. ... ok Tests inconsistency matrix calculation (depth=4) on a single linkage. ... ok Tests is_isomorphic on test case #1 (one flat cluster, different labellings) ... ok Tests is_isomorphic on test case #2 (two flat clusters, different labelings) ... ok Tests is_isomorphic on test case #3 (no flat clusters) ... ok Tests is_isomorphic on test case #4A (3 flat clusters, different labelings, isomorphic) ... ok Tests is_isomorphic on test case #4B (3 flat clusters, different labelings, nonisomorphic) ... ok Tests is_isomorphic on test case #4C (3 flat clusters, different labelings, isomorphic) ... ok Tests is_isomorphic on test case #5A (1000 observations, 2 random clusters, random permutation of the labeling). Run 3 times. ... 
ok Tests is_isomorphic on test case #5B (1000 observations, 3 random clusters, random permutation of the labeling). Run 3 times. ... ok Tests is_isomorphic on test case #5C (1000 observations, 5 random clusters, random permutation of the labeling). Run 3 times. ... ok Tests is_isomorphic on test case #5A (1000 observations, 2 random clusters, random permutation of the labeling, slightly nonisomorphic.) Run 3 times. ... ok Tests is_isomorphic on test case #5B (1000 observations, 3 random clusters, random permutation of the labeling, slightly nonisomorphic.) Run 3 times. ... ok Tests is_isomorphic on test case #5C (1000 observations, 5 random clusters, random permutation of the labeling, slightly non-isomorphic.) Run 3 times. ... ok Tests is_monotonic(Z) on 1x4 linkage. Expecting True. ... ok Tests is_monotonic(Z) on 2x4 linkage. Expecting False. ... ok Tests is_monotonic(Z) on 2x4 linkage. Expecting True. ... ok Tests is_monotonic(Z) on 3x4 linkage (case 1). Expecting False. ... ok Tests is_monotonic(Z) on 3x4 linkage (case 2). Expecting False. ... ok Tests is_monotonic(Z) on 3x4 linkage (case 3). Expecting False ... ok Tests is_monotonic(Z) on 3x4 linkage. Expecting True. ... ok Tests is_monotonic(Z) on an empty linkage. ... ok Tests is_monotonic(Z) on clustering generated by single linkage on Iris data set. Expecting True. ... ok Tests is_monotonic(Z) on clustering generated by single linkage on tdist data set. Expecting True. ... ok Tests is_monotonic(Z) on clustering generated by single linkage on tdist data set. Perturbing. Expecting False. ... ok Tests is_valid_im(R) on im over 2 observations. ... ok Tests is_valid_im(R) on im over 3 observations. ... ok Tests is_valid_im(R) with 3 columns. ... ok Tests is_valid_im(R) on im on observation sets between sizes 4 and 15 (step size 3). ... ok Tests is_valid_im(R) on im on observation sets between sizes 4 and 15 (step size 3) with negative link counts. ... ok Tests is_valid_im(R) on im on observation sets between sizes 4 and 15 (step size 3) with negative link height means. ... ok Tests is_valid_im(R) on im on observation sets between sizes 4 and 15 (step size 3) with negative link height standard deviations. ... ok Tests is_valid_im(R) with 5 columns. ... ok Tests is_valid_im(R) with empty inconsistency matrix. ... ok Tests is_valid_im(R) with integer type. ... ok Tests is_valid_linkage(Z) on linkage over 2 observations. ... ok Tests is_valid_linkage(Z) on linkage over 3 observations. ... ok Tests is_valid_linkage(Z) with 3 columns. ... ok Tests is_valid_linkage(Z) on linkage on observation sets between sizes 4 and 15 (step size 3). ... ok Tests is_valid_linkage(Z) on linkage on observation sets between sizes 4 and 15 (step size 3) with negative counts. ... ok Tests is_valid_linkage(Z) on linkage on observation sets between sizes 4 and 15 (step size 3) with negative distances. ... ok Tests is_valid_linkage(Z) on linkage on observation sets between sizes 4 and 15 (step size 3) with negative indices (left). ... ok Tests is_valid_linkage(Z) on linkage on observation sets between sizes 4 and 15 (step size 3) with negative indices (right). ... ok Tests is_valid_linkage(Z) with 5 columns. ... ok Tests is_valid_linkage(Z) with empty linkage. ... ok Tests is_valid_linkage(Z) with integer type. ... ok Tests leaders using a flat clustering generated by single linkage. ... ok Tests leaves_list(Z) on a 1x4 linkage. ... ok Tests leaves_list(Z) on a 2x4 linkage. ... ok Tests leaves_list(Z) on the Iris data set using average linkage. ... 
ok Tests leaves_list(Z) on the Iris data set using centroid linkage. ... ok Tests leaves_list(Z) on the Iris data set using complete linkage. ... ok Tests leaves_list(Z) on the Iris data set using median linkage. ... ok Tests leaves_list(Z) on the Iris data set using single linkage. ... ok Tests leaves_list(Z) on the Iris data set using ward linkage. ... ok Tests linkage(Y, 'average') on the tdist data set. ... ok Tests linkage(Y, 'centroid') on the Q data set. ... ok Tests linkage(Y, 'complete') on the Q data set. ... ok Tests linkage(Y, 'complete') on the tdist data set. ... ok Tests linkage(Y) where Y is a 0x4 linkage matrix. Exception expected. ... ok Tests linkage(Y, 'single') on the Q data set. ... ok Tests linkage(Y, 'single') on the tdist data set. ... ok Tests linkage(Y, 'weighted') on the Q data set. ... ok Tests linkage(Y, 'weighted') on the tdist data set. ... ok Tests maxdists(Z) on the Q data set using centroid linkage. ... ok Tests maxdists(Z) on the Q data set using complete linkage. ... ok Tests maxdists(Z) on the Q data set using median linkage. ... ok Tests maxdists(Z) on the Q data set using single linkage. ... ok Tests maxdists(Z) on the Q data set using Ward linkage. ... ok Tests maxdists(Z) on empty linkage. Expecting exception. ... ok Tests maxdists(Z) on linkage with one cluster. ... ok Tests maxinconsts(Z, R) on the Q data set using centroid linkage. ... ok Tests maxinconsts(Z, R) on the Q data set using complete linkage. ... ok Tests maxinconsts(Z, R) on the Q data set using median linkage. ... ok Tests maxinconsts(Z, R) on the Q data set using single linkage. ... ok Tests maxinconsts(Z, R) on the Q data set using Ward linkage. ... ok Tests maxinconsts(Z, R) on linkage and inconsistency matrices with different numbers of clusters. Expecting exception. ... ok Tests maxinconsts(Z, R) on empty linkage. Expecting exception. ... ok Tests maxinconsts(Z, R) on linkage with one cluster. ... ok Tests maxRstat(Z, R, 0) on the Q data set using centroid linkage. ... ok Tests maxRstat(Z, R, 0) on the Q data set using complete linkage. ... ok Tests maxRstat(Z, R, 0) on the Q data set using median linkage. ... ok Tests maxRstat(Z, R, 0) on the Q data set using single linkage. ... ok Tests maxRstat(Z, R, 0) on the Q data set using Ward linkage. ... ok Tests maxRstat(Z, R, 0) on linkage and inconsistency matrices with different numbers of clusters. Expecting exception. ... ok Tests maxRstat(Z, R, 0) on empty linkage. Expecting exception. ... ok Tests maxRstat(Z, R, 0) on linkage with one cluster. ... ok Tests maxRstat(Z, R, 1) on the Q data set using centroid linkage. ... ok Tests maxRstat(Z, R, 1) on the Q data set using complete linkage. ... ok Tests maxRstat(Z, R, 1) on the Q data set using median linkage. ... ok Tests maxRstat(Z, R, 1) on the Q data set using single linkage. ... ok Tests maxRstat(Z, R, 1) on the Q data set using Ward linkage. ... ok Tests maxRstat(Z, R, 1) on linkage and inconsistency matrices with different numbers of clusters. Expecting exception. ... ok Tests maxRstat(Z, R, 1) on empty linkage. Expecting exception. ... ok Tests maxRstat(Z, R, 1) on linkage with one cluster. ... ok Tests maxRstat(Z, R, 2) on the Q data set using centroid linkage. ... ok Tests maxRstat(Z, R, 2) on the Q data set using complete linkage. ... ok Tests maxRstat(Z, R, 2) on the Q data set using median linkage. ... ok Tests maxRstat(Z, R, 2) on the Q data set using single linkage. ... ok Tests maxRstat(Z, R, 2) on the Q data set using Ward linkage. ... 
ok Tests maxRstat(Z, R, 2) on linkage and inconsistency matrices with different numbers of clusters. Expecting exception. ... ok Tests maxRstat(Z, R, 2) on empty linkage. Expecting exception. ... ok Tests maxRstat(Z, R, 2) on linkage with one cluster. ... ok Tests maxRstat(Z, R, 3) on the Q data set using centroid linkage. ... ok Tests maxRstat(Z, R, 3) on the Q data set using complete linkage. ... ok Tests maxRstat(Z, R, 3) on the Q data set using median linkage. ... ok Tests maxRstat(Z, R, 3) on the Q data set using single linkage. ... ok Tests maxRstat(Z, R, 3) on the Q data set using Ward linkage. ... ok Tests maxRstat(Z, R, 3) on linkage and inconsistency matrices with different numbers of clusters. Expecting exception. ... ok Tests maxRstat(Z, R, 3) on empty linkage. Expecting exception. ... ok Tests maxRstat(Z, R, 3) on linkage with one cluster. ... ok Tests maxRstat(Z, R, 3.3). Expecting exception. ... ok Tests maxRstat(Z, R, -1). Expecting exception. ... ok Tests maxRstat(Z, R, 4). Expecting exception. ... ok Tests num_obs_linkage(Z) on linkage over 2 observations. ... ok Tests num_obs_linkage(Z) on linkage over 3 observations. ... ok Tests num_obs_linkage(Z) on linkage on observation sets between sizes 4 and 15 (step size 3). ... ok Tests num_obs_linkage(Z) with empty linkage. ... ok Tests to_mlab_linkage on linkage array with multiple rows. ... ok Tests to_mlab_linkage on empty linkage array. ... ok Tests to_mlab_linkage on linkage array with single row. ... ok test_hierarchy.load_testing_files ... ok Ticket #505. ... ok Testing that kmeans2 init methods work. ... ok Testing simple call to kmeans2 with rank 1 data. ... ok Testing simple call to kmeans2 with rank 1 data. ... ok Testing simple call to kmeans2 and its results. ... ok Regression test for #546: fail when k arg is 0. ... ok This will cause kmean to have a cluster with no points. ... ok test_kmeans_simple (test_vq.TestKMean) ... ok test_large_features (test_vq.TestKMean) ... ok test_py_vq (test_vq.TestVq) ... ok test_py_vq2 (test_vq.TestVq) ... ok test_vq (test_vq.TestVq) ... ok Test special rank 1 vq algo, python implementation. ... ok test_codata.test_find ... ok test_codata.test_basic_table_parse ... ok test_codata.test_basic_lookup ... ok test_codata.test_find_all ... ok test_codata.test_find_single ... ok test_codata.test_2002_vs_2006 ... ok Check that updating stored values with exact ones worked. ... ok test_constants.test_fahrenheit_to_celcius ... ok test_constants.test_celcius_to_kelvin ... ok test_constants.test_kelvin_to_celcius ... ok test_constants.test_fahrenheit_to_kelvin ... ok test_constants.test_kelvin_to_fahrenheit ... ok test_constants.test_celcius_to_fahrenheit ... ok test_constants.test_lambda_to_nu ... ok test_constants.test_nu_to_lambda ... ok test_definition (test_basic.TestDoubleFFT) ... ok test_djbfft (test_basic.TestDoubleFFT) ... ok test_n_argument_real (test_basic.TestDoubleFFT) ... ok test_definition (test_basic.TestDoubleIFFT) ... FAIL test_definition_real (test_basic.TestDoubleIFFT) ... ok test_djbfft (test_basic.TestDoubleIFFT) ... zsh: segmentation fault python -c 'import scipy; scipy.test(verbose=2)' On 14. nov. 2011, at 21:56, Ralf Gommers wrote: > > > On Mon, Nov 14, 2011 at 9:17 PM, Paul Anton Letnes wrote: > > On 14. nov. 
2011, at 20:48, Ralf Gommers wrote: > > > > > > > On Mon, Nov 14, 2011 at 3:20 PM, Paul Anton Letnes wrote: > > Hi, when trying to build scipy 0.10 with python 2.7 (from homebrew/python.org), numpy 1.6.1 (built manually) on Mac OS X 10.7.2 and with gcc 4.6.0 (built from source), scipy.test() breaks. Any ideas? > > > > Was everything including python itself built with gcc 4.6.0? > > No; I'm sorry about the screwup on version numbers. Here are the correct ones: > > i-courant ~/src/scipy-0.10.0 % python > Python 2.7.2 (default, Oct 9 2011, 18:03:13) > [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin > Type "help", "copyright", "credits" or "license" for more information. > so python was built with Apple GCC 4.2.1. The same goes for numpy. The reason is that numpy and scipy setup tries to send the -faltivec flag to the compiler, which only Apple's gcc accepts. Same gcc version with scipy. The gfortran version, however, is 4.6.0. Apple's gcc installation does not include gfortran, so that's why I installed a separate gfortran. > > That's not the correct gfortran. Through homebrew you should be able to get the right one, or download it directly: (http://r.research.att.com/gfortran-lion-5666-3.pkg). It is version 4.2.4, but may report itself as 4.2.1. > > The -faltivec thing was a numpy distutils issue that is fixed in numpy master. It shouldn't have given you any problems, it just caused you to miss out on SSE optimizations. > > (As a side question, what's a reliable way of getting all this (recent gfortran, numpy, scipy) installed? Homebrew is not very helpful with alternative gcc installations, neither is Apple?) > > On OS X 10.7 this is still a bit painful due to the switch to llvm-gcc as default compiler. But if you get the right gfortran, linked also from http://scipy.org/Installing_SciPy/Mac_OS_X, and set gcc as default compiler you should be good to go. Of course there are binary installers too that work on 10.7 if you don't want to fight with compilers. That's just a matter of downloading Python from python.org and numpy/scipy from Sourceforge. > > > What was the last version of scipy that did work for you? > > 0.10.b2 was working fine, although not all tests passed. I figured it would be ironed out before release. (I did not use any of the problematic sub-modules anyway.) > > With the same compilers, that's odd. No changes to fftpack code or tests went in since beta 2. > > Ralf > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From paul.anton.letnes at gmail.com Tue Nov 15 05:04:23 2011 From: paul.anton.letnes at gmail.com (Paul Anton Letnes) Date: Tue, 15 Nov 2011 11:04:23 +0100 Subject: [SciPy-User] test issues with 0.10 In-Reply-To: <671546CF-1424-41E0-8AF5-5785425F2712@gmail.com> References: <2999878D-E4FB-4044-98C4-E1DA48A980C1@gmail.com> <4496B335-79BC-4444-9A4D-78B38ACC0F8D@gmail.com> <671546CF-1424-41E0-8AF5-5785425F2712@gmail.com> Message-ID: I have now built netlib BLAS+LAPACK (liblapack.a and libblas.a), and linked against these when building. Most tests pass, but there are still 16 failures. I'm not sure whether this is known, so I am posting it here in case it is of any interest. 
Paul i-courant ~/src % python -c 'import scipy; scipy.test(verbose=1)' Running unit tests for scipy NumPy version 1.6.1 NumPy is installed in /usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy SciPy version 0.10.0 SciPy is installed in /usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy Python version 2.7.2 (default, Oct 9 2011, 18:03:13) [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] nose version 1.1.2 ...................................................................................................................................................../usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/cluster/vq.py:588: UserWarning: One of the clusters is empty. Re-run kmean with a different initialization. warnings.warn("One of the clusters is empty. " .......................................................................K............................................................................................................/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/interpolate/fitpack2.py:674: UserWarning: The coefficients of the spline returned have been computed as the minimal norm least-squares solution of a (numerically) rank deficient system (deficiency=7). If deficiency is large, the results may be inaccurate. Deficiency may strongly depend on the value of eps. warnings.warn(message) ....../usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/interpolate/fitpack2.py:605: UserWarning: The required storage space exceeds the available storage space: nxest or nyest too small, or s too small. The weighted least-squares spline corresponds to the current set of knots. warnings.warn(message) ........................K..K....../usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy/core/numeric.py:1920: RuntimeWarning: invalid value encountered in absolute return all(less_equal(absolute(x-y), atol + rtol * absolute(y))) ............................................................................................................................................................................................................................................................................................................................................................................................................................................/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/wavfile.py:31: WavFileWarning: Unfamiliar format bytes warnings.warn("Unfamiliar format bytes", WavFileWarning) /usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/wavfile.py:121: WavFileWarning: chunk not understood warnings.warn("chunk not understood", WavFileWarning) 
....................................................................................F..FF......................................................................................................................................SSSSSS......SSSSSS......SSSS.....................FFF.........................................F....FF.......S............................................................................................................................................................................................................................................................K......................................................................................................................................................................................................SSSSS............S........................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................SSSSSSSSSSS.........../usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/sparse/linalg/eigen/arpack/arpack.py:63: UserWarning: Single-precision types in `eigs` and `eighs` are not supported on the OSX platform currently. Double precision routines are used instead. 
warnings.warn("Single-precision types in `eigs` and `eighs` " ....F.F.....................F...........F.F..............................................................................................F........................F.........................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................K...............................................................K...........................................................................................................................................................KK.............................................................................................................................................................................................................................................................................................................................................................................................................................................K.K.............................................................................................................................................................................................................................................................................................................................................................................................K........K..............SSSSSSS..........................................................................................................................................................S......................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................... 
====================================================================== FAIL: test_asum (test_blas.TestFBLAS1Simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/lib/blas/tests/test_blas.py", line 58, in test_asum assert_almost_equal(f([3,-4,5]),12) File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy/testing/utils.py", line 468, in assert_almost_equal raise AssertionError(msg) AssertionError: Arrays are not almost equal to 7 decimals ACTUAL: 0.0 DESIRED: 12 ====================================================================== FAIL: test_dot (test_blas.TestFBLAS1Simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/lib/blas/tests/test_blas.py", line 67, in test_dot assert_almost_equal(f([3,-4,5],[2,5,1]),-9) File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy/testing/utils.py", line 468, in assert_almost_equal raise AssertionError(msg) AssertionError: Arrays are not almost equal to 7 decimals ACTUAL: 0.0 DESIRED: -9 ====================================================================== FAIL: test_nrm2 (test_blas.TestFBLAS1Simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/lib/blas/tests/test_blas.py", line 78, in test_nrm2 assert_almost_equal(f([3,-4,5]),math.sqrt(50)) File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy/testing/utils.py", line 468, in assert_almost_equal raise AssertionError(msg) AssertionError: Arrays are not almost equal to 7 decimals ACTUAL: 0.0 DESIRED: 7.0710678118654755 ====================================================================== FAIL: test_basic.TestNorm.test_overflow ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/linalg/tests/test_basic.py", line 581, in test_overflow assert_almost_equal(norm(a), a) File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy/testing/utils.py", line 452, in assert_almost_equal return assert_array_almost_equal(actual, desired, decimal, err_msg) File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy/testing/utils.py", line 800, in assert_array_almost_equal header=('Arrays are not almost equal to %d decimals' % decimal)) File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy/testing/utils.py", line 636, in assert_array_compare raise AssertionError(msg) AssertionError: Arrays are not almost equal to 7 decimals (mismatch 100.0%) x: array(-0.0) y: array([ 1.00000002e+20], dtype=float32) ====================================================================== FAIL: test_basic.TestNorm.test_stable ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/linalg/tests/test_basic.py", line 586, in test_stable assert_almost_equal(norm(a) - 1e4, 0.5) File 
"/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy/testing/utils.py", line 468, in assert_almost_equal raise AssertionError(msg) AssertionError: Arrays are not almost equal to 7 decimals ACTUAL: -10000.0 DESIRED: 0.5 ====================================================================== FAIL: test_basic.TestNorm.test_types ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/linalg/tests/test_basic.py", line 568, in test_types assert_allclose(norm(x), np.sqrt(14), rtol=tol) File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy/testing/utils.py", line 1168, in assert_allclose verbose=verbose, header=header) File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy/testing/utils.py", line 636, in assert_array_compare raise AssertionError(msg) AssertionError: Not equal to tolerance rtol=2.38419e-06, atol=0 (mismatch 100.0%) x: array(1.0842021724855044e-19) y: array(3.7416573867739413) ====================================================================== FAIL: test_asum (test_blas.TestFBLAS1Simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/linalg/tests/test_blas.py", line 99, in test_asum assert_almost_equal(f([3,-4,5]),12) File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy/testing/utils.py", line 468, in assert_almost_equal raise AssertionError(msg) AssertionError: Arrays are not almost equal to 7 decimals ACTUAL: 0.0 DESIRED: 12 ====================================================================== FAIL: test_dot (test_blas.TestFBLAS1Simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/linalg/tests/test_blas.py", line 109, in test_dot assert_almost_equal(f([3,-4,5],[2,5,1]),-9) File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy/testing/utils.py", line 468, in assert_almost_equal raise AssertionError(msg) AssertionError: Arrays are not almost equal to 7 decimals ACTUAL: 0.0 DESIRED: -9 ====================================================================== FAIL: test_nrm2 (test_blas.TestFBLAS1Simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/linalg/tests/test_blas.py", line 127, in test_nrm2 assert_almost_equal(f([3,-4,5]),math.sqrt(50)) File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy/testing/utils.py", line 468, in assert_almost_equal raise AssertionError(msg) AssertionError: Arrays are not almost equal to 7 decimals ACTUAL: 0.0 DESIRED: 7.0710678118654755 ====================================================================== FAIL: test_arpack.test_symmetric_modes(True, , 'f', 2, 'LM', None, 0.5, , None, 'normal') ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File 
"/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 235, in eval_evec assert_allclose(LHS, RHS, rtol=rtol, atol=atol, err_msg=err) File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy/testing/utils.py", line 1168, in assert_allclose verbose=verbose, header=header) File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy/testing/utils.py", line 636, in assert_array_compare raise AssertionError(msg) AssertionError: Not equal to tolerance rtol=0.00178814, atol=0.000357628 error for eigsh:standard, typ=f, which=LM, sigma=0.5, mattype=aslinearoperator, OPpart=None, mode=normal (mismatch 100.0%) x: array([[ 0.23815642, 0.1763755 ], [-0.10785346, -0.32103487], [ 0.12468303, -0.11230416],... y: array([[ 0.23815642, 0.24814051], [-0.10785347, -0.15634772], [ 0.12468302, 0.05671416],... ====================================================================== FAIL: test_arpack.test_symmetric_modes(True, , 'f', 2, 'LM', None, 0.5, , None, 'cayley') ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 235, in eval_evec assert_allclose(LHS, RHS, rtol=rtol, atol=atol, err_msg=err) File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy/testing/utils.py", line 1168, in assert_allclose verbose=verbose, header=header) File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy/testing/utils.py", line 636, in assert_array_compare raise AssertionError(msg) AssertionError: Not equal to tolerance rtol=0.00178814, atol=0.000357628 error for eigsh:standard, typ=f, which=LM, sigma=0.5, mattype=aslinearoperator, OPpart=None, mode=cayley (mismatch 100.0%) x: array([[ 0.23815693, -0.33630507], [-0.10785286, 0.02168 ], [ 0.12468344, -0.11036437],... y: array([[ 0.23815643, -0.2405392 ], [-0.10785349, 0.14390968], [ 0.12468311, -0.04574991],... ====================================================================== FAIL: test_arpack.test_symmetric_modes(True, , 'f', 2, 'LA', None, 0.5, , None, 'normal') ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 235, in eval_evec assert_allclose(LHS, RHS, rtol=rtol, atol=atol, err_msg=err) File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy/testing/utils.py", line 1168, in assert_allclose verbose=verbose, header=header) File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy/testing/utils.py", line 636, in assert_array_compare raise AssertionError(msg) AssertionError: Not equal to tolerance rtol=0.00178814, atol=0.000357628 error for eigsh:standard, typ=f, which=LA, sigma=0.5, mattype=aslinearoperator, OPpart=None, mode=normal (mismatch 100.0%) x: array([[ 28.80129188, -0.6379945 ], [ 34.79312355, 0.27066791], [-270.23255444, 0.4851834 ],... y: array([[ 3.93467650e+03, -6.37994494e-01], [ 3.90913859e+03, 2.70667916e-01], [ -3.62176382e+04, 4.85183382e-01],... 
====================================================================== FAIL: test_arpack.test_symmetric_modes(True, , 'f', 2, 'SA', None, 0.5, , None, 'normal') ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 235, in eval_evec assert_allclose(LHS, RHS, rtol=rtol, atol=atol, err_msg=err) File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy/testing/utils.py", line 1168, in assert_allclose verbose=verbose, header=header) File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy/testing/utils.py", line 636, in assert_array_compare raise AssertionError(msg) AssertionError: Not equal to tolerance rtol=0.00178814, atol=0.000357628 error for eigsh:standard, typ=f, which=SA, sigma=0.5, mattype=aslinearoperator, OPpart=None, mode=normal (mismatch 100.0%) x: array([[ 0.26260981, 0.23815559], [-0.09760907, -0.10785484], [ 0.06149647, 0.12468203],... y: array([[ 0.23744165, 0.2381564 ], [-0.13633069, -0.10785359], [ 0.03132561, 0.12468301],... ====================================================================== FAIL: test_arpack.test_symmetric_modes(True, , 'f', 2, 'SA', None, 0.5, , None, 'cayley') ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 235, in eval_evec assert_allclose(LHS, RHS, rtol=rtol, atol=atol, err_msg=err) File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy/testing/utils.py", line 1168, in assert_allclose verbose=verbose, header=header) File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy/testing/utils.py", line 636, in assert_array_compare raise AssertionError(msg) AssertionError: Not equal to tolerance rtol=0.00178814, atol=0.000357628 error for eigsh:standard, typ=f, which=SA, sigma=0.5, mattype=aslinearoperator, OPpart=None, mode=cayley (mismatch 100.0%) x: array([[ 0.29524244, -0.2381569 ], [-0.08169955, 0.10785299], [ 0.06645597, -0.12468332],... y: array([[ 0.24180251, -0.23815646], [-0.14191195, 0.10785349], [ 0.03568392, -0.12468307],... 
====================================================================== FAIL: test_arpack.test_symmetric_modes(True, , 'f', 2, 'SM', None, 0.5, , None, 'buckling') ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 235, in eval_evec assert_allclose(LHS, RHS, rtol=rtol, atol=atol, err_msg=err) File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy/testing/utils.py", line 1168, in assert_allclose verbose=verbose, header=header) File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy/testing/utils.py", line 636, in assert_array_compare raise AssertionError(msg) AssertionError: Not equal to tolerance rtol=0.00178814, atol=0.000357628 error for eigsh:general, typ=f, which=SM, sigma=0.5, mattype=aslinearoperator, OPpart=None, mode=buckling (mismatch 100.0%) x: array([[-0.10940548, 0.01676016], [-0.07154097, 0.4628113 ], [ 0.06895222, 0.49206394],... y: array([[-0.10940547, 0.05459438], [-0.07154103, 0.31407543], [ 0.06895217, 0.37578294],... ====================================================================== FAIL: test_arpack.test_symmetric_modes(True, , 'f', 2, 'SA', None, 0.5, , None, 'cayley') ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 235, in eval_evec assert_allclose(LHS, RHS, rtol=rtol, atol=atol, err_msg=err) File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy/testing/utils.py", line 1168, in assert_allclose verbose=verbose, header=header) File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy/testing/utils.py", line 636, in assert_array_compare raise AssertionError(msg) AssertionError: Not equal to tolerance rtol=0.00178814, atol=0.000357628 error for eigsh:general, typ=f, which=SA, sigma=0.5, mattype=aslinearoperator, OPpart=None, mode=cayley (mismatch 100.0%) x: array([[-0.4404992 , -0.01935683], [-0.25650678, -0.11053132], [-0.36893024, -0.13223556],... y: array([[-0.44017013, -0.0193569 ], [-0.25525379, -0.11053158], [-0.36818443, -0.13223571],... ---------------------------------------------------------------------- Ran 5093 tests in 50.678s FAILED (KNOWNFAIL=12, SKIP=42, failures=16) From pav at iki.fi Tue Nov 15 06:38:18 2011 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 15 Nov 2011 12:38:18 +0100 Subject: [SciPy-User] test issues with 0.10 In-Reply-To: <4496B335-79BC-4444-9A4D-78B38ACC0F8D@gmail.com> References: <2999878D-E4FB-4044-98C4-E1DA48A980C1@gmail.com> <4496B335-79BC-4444-9A4D-78B38ACC0F8D@gmail.com> Message-ID: 14.11.2011 21:17, Paul Anton Letnes kirjoitti: [clip] >> What was the last version of scipy that did work for you? > > 0.10.b2 was working fine, although not all tests passed. Would be great if you can check which commit between v0.10.0b2 and v0.10.0 introduces the issue. However, I'm suspecting that it's a memory corruption issue, which can mean that the commit found to unveil the issue has nothing to do with the problem itself. 
(Git can help you with finding the faulty commit: http://book.git-scm.com/5_finding_issues_-_git_bisect.html
git bisect start
git bisect good v0.10.0b2
git bisect bad v0.10.0
...)
> I figured it would be ironed out before release. (I did not > use any of the problematic sub-modules anyway.) Well, those ARPACK failures which we could reproduce were addressed. I'm a bit surprised to see that you still get 'f' mode failures in the ARPACK tests --- it switches to double precision on OSX, so the corresponding 'd' mode tests should fail, too. I'd double-check this kind of result with a completely clean build, i.e., remove the scipy installation and the build directory, and rebuild. -- Pauli Virtanen
From laserson at mit.edu Tue Nov 15 15:19:48 2011 From: laserson at mit.edu (Uri Laserson) Date: Tue, 15 Nov 2011 15:19:48 -0500 Subject: [SciPy-User] Segmentation fault in scipy linkage function with large data set Message-ID: Hi all, I am trying to cluster a data set with almost 50,000 objects using hierarchical clustering. I generate a distance matrix like so: Y = pdist( unique_seqs, vdj.clusteringcore.levenshtein ) and then try to perform the linkage like so: Z = sp.cluster.hierarchy.linkage(Y,method=linkage) The distance matrix is computed fine (albeit after 10 hours or so), and the segfault occurs in the `linkage` function. However, I run the same script on many other inputs that are smaller, and it finishes successfully. Only this largest input is giving me problems. You can see the memory usage as a function of input size here: https://picasaweb.google.com/lh/photo/KjPHcosMKxrehK22tslr4A?feat=directlink and the CPU time here: https://picasaweb.google.com/lh/photo/ygS_njM80Olja04pRRP2vw?feat=directlink Each point is one execution of the script with a different set of input sequences. The vertical blue line shows the size of the current input, which is causing the segfaults. Does anyone have any ideas/suggestions as to what the problem is here? When I searched for other possible solutions, I found my own report of the same problem in the past: http://projects.scipy.org/scipy/ticket/967 However, in that case I was able to reduce the input size so that I don't segfault. I am running these on a large linux cluster running python 2.7.1 using numpy 1.5.0b1 and scipy 0.8.0. According to the cluster administrators, the process did *not* make any sudden large requests for resources that were unmet. Debugging here is especially hard because it takes 10 hours to get to the segfault...sigh... Thanks! Uri ....................................................................................... Uri Laserson Graduate Student | Biomedical Engineering | Church Lab Harvard-MIT Division of Health Sciences and Technology M +1 617 910 0447 laserson at mit.edu http://web.mit.edu/laserson/www/ -------------- next part -------------- An HTML attachment was scrubbed... URL:
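A back-of-the-envelope check of the sizes involved in the report above (a sketch, not a diagnosis; linkage may also allocate working copies on top of the condensed matrix):

>>> n = 50000
>>> m = n * (n - 1) // 2    # length of the condensed distance matrix from pdist
>>> m
1249975000
>>> 8 * m / 1e9             # float64 storage, in GB
9.9998

At n = 50,000 the condensed matrix alone is about 10 GB of float64, so an allocation failure inside the clustering C code is plausible even when computing the distances themselves succeeds.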
From deshpande.jaidev at gmail.com Tue Nov 15 19:15:32 2011 From: deshpande.jaidev at gmail.com (Jaidev Deshpande) Date: Wed, 16 Nov 2011 05:45:32 +0530 Subject: [SciPy-User] Cython vs Vectorized Numpy vs MATLAB Message-ID: Hi, I have two questions. 1. Why does the 'spline.m' function in MATLAB perform much faster than the same interpolation in NumPy? (In MATLAB the first function call takes time but the subsequent calls are much faster.) 2. I wrote a cubic spline interpolation algorithm with NumPy and I vectorized it. Is it surprising that the Cython-compiled version of the same function is no faster? I guess that shouldn't happen, as parts of the code would be compiled into C. Although, please note that I used Cython on the vectorized code *as it is*, without adding static types. I know, stupid thing to do, but shouldn't it have given me *some* speed-up?
In my problem, for the same data:
MATLAB takes ............................................................. 0.038 seconds
Vectorized NumPy takes................................................. 1.0342 seconds
The above, Cythonized ................................................... 0.997 seconds
Functions in the scipy.interpolate package take.................. well above 1 second
(I've tried out almost everything scipy has to offer)
Since MATLAB is the fastest, I am trying to get Python to be at least as fast, if not faster. Is that even possible? How? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL:
From hturesson at gmail.com Tue Nov 15 20:46:05 2011 From: hturesson at gmail.com (Hjalmar Turesson) Date: Tue, 15 Nov 2011 20:46:05 -0500 Subject: [SciPy-User] Cython vs Vectorized Numpy vs MATLAB In-Reply-To: References: Message-ID: Hi, splmake and spleval (in scipy.interpolate) appear to run as fast as spline in matlab. They are approximately 30 times faster than cspline1d and cspline1d_eval. Best, Hjalmar On Tue, Nov 15, 2011 at 7:15 PM, Jaidev Deshpande < deshpande.jaidev at gmail.com> wrote: > Hi, > > I have two questions. > > 1. Why does the 'spline.m' function in MATLAB perform much faster than the > same interpolation in NumPy? > (In MATLAB the first function call takes time but the subsequent calls are > much faster.) > > 2. I wrote a cubic spline interpolation algorithm with NumPy and I > vectorized it. Is it surprising that the Cython-compiled version of the > same function is no faster? I guess that shouldn't happen, as parts of the > code would be compiled into C. Although, please note that I used Cython on > the vectorized code *as it is*, without adding static types. I know, > stupid thing to do, but shouldn't it have given me *some* speed-up? > > In my problem, for the same data > > MATLAB takes ............................................................. > 0.038 seconds > Vectorized NumPy takes................................................. > 1.0342 seconds > The above, Cythonized ................................................... > 0.997 seconds > Functions in the scipy.interpolate package take.................. well > above 1 second > (I've tried out almost everything scipy has to offer) > > Since MATLAB is the fastest, I am trying to get Python to be at least as > fast, if not faster. Is that even possible? How? > > Thanks > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL:
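For a concrete starting point, a minimal interpolation sketch using the stable FITPACK wrappers in scipy.interpolate (splrep/splev with s=0 requests pure interpolation with knots at the data points, the closest analogue of MATLAB's spline.m; splmake/spleval as suggested above can be swapped in the same way, and actual timings will depend on machine and data):

import numpy as np
from scipy.interpolate import splrep, splev
from timeit import default_timer as timer

x = np.linspace(0, 10, 1000)
y = np.sin(x)
xnew = np.linspace(0, 10, 100000)

t0 = timer()
tck = splrep(x, y, s=0)   # build the cubic spline representation once
ynew = splev(xnew, tck)   # evaluate it at the new points
print(timer() - t0)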
From jonathan.alonso at gmail.com Wed Nov 16 00:04:02 2011 From: jonathan.alonso at gmail.com (jajabinker) Date: Tue, 15 Nov 2011 21:04:02 -0800 (PST) Subject: [SciPy-User] [SciPy-user] scipy.test() curious failure Message-ID: <32852072.post@talk.nabble.com> HI ALL! Hope you can help me! After I run scipy.test() I get the following output: http://pastebin.com/eQwnKDy4 That's the error. Has anyone seen it? I have installed numpy, and numpy.test() runs with no errors. In case you spot a bug in my ATLAS build, here is my configure line; the compilation itself was clean:
../configure -b 64 -Fa alg -fPIC --prefix=/home/$USER/numpyscipy/atlas --with-netlib-lapack=/home/$USER/numpyscipy/lapack-3.4.0/liblapack.a
THANKS! -- View this message in context: http://old.nabble.com/scipy.test%28%29-curious-failure-tp32852072p32852072.html Sent from the Scipy-User mailing list archive at Nabble.com.
From pierre.raybaut at gmail.com Wed Nov 16 04:43:00 2011 From: pierre.raybaut at gmail.com (Pierre Raybaut) Date: Wed, 16 Nov 2011 10:43:00 +0100 Subject: [SciPy-User] ANN: Spyder v2.1.2 Message-ID: Hi all, On behalf of Spyder's development team (http://code.google.com/p/spyderlib/people/list), I'm pleased to announce that Spyder v2.1.2 has been released and is available for Windows XP/Vista/7, GNU/Linux and MacOS X: http://code.google.com/p/spyderlib/ As this is mostly a maintenance release, a lot of bugs were fixed and some minor features were added: http://code.google.com/p/spyderlib/wiki/ChangeLog Spyder is a free, open-source (MIT license) interactive development environment for the Python language with advanced editing, interactive testing, debugging and introspection features. Originally designed to provide MATLAB-like features (integrated help, interactive console, variable explorer with GUI-based editors for dictionaries, NumPy arrays, ...), it is strongly oriented towards scientific computing and software development. Thanks to the `spyderlib` library, Spyder also provides powerful ready-to-use widgets: embedded Python console (example: http://packages.python.org/guiqwt/_images/sift3.png), NumPy array editor (example: http://packages.python.org/guiqwt/_images/sift2.png), dictionary editor, source code editor, etc. Description of key features with tasty screenshots can be found at: http://code.google.com/p/spyderlib/wiki/Features On Windows platforms, Spyder is also available as a stand-alone executable (don't forget to disable UAC on Vista/7). This all-in-one portable version is still experimental (for example, it does not embed sphinx -- meaning no rich text mode for the object inspector) but it should provide a working version of Spyder for Windows platforms without having to install anything else (except Python 2.x itself, of course). Don't forget to follow Spyder updates/news: * on the project website: http://code.google.com/p/spyderlib/ * and on our official blog: http://spyder-ide.blogspot.com/ Last, but not least, we welcome any contribution that helps make Spyder an efficient scientific development/computing environment. Join us to help creating your favourite environment! (http://code.google.com/p/spyderlib/wiki/NoteForContributors) Enjoy! -Pierre
From d.s.seljebotn at astro.uio.no Wed Nov 16 04:49:08 2011 From: d.s.seljebotn at astro.uio.no (Dag Sverre Seljebotn) Date: Wed, 16 Nov 2011 10:49:08 +0100 Subject: [SciPy-User] Cython vs Vectorized Numpy vs MATLAB In-Reply-To: References: Message-ID: <4EC38714.2020200@astro.uio.no> On 11/16/2011 01:15 AM, Jaidev Deshpande wrote: > Hi, > > I have two questions. > > 1. Why does the 'spline.m' function in MATLAB perform much faster than > the same interpolation in NumPy? > (In MATLAB the first function call takes time but the subsequent calls > are much faster.) > > 2. I wrote a cubic spline interpolation algorithm with NumPy and I > vectorized it. Is it surprising that the Cython-compiled version of the > same function is no faster? I guess that shouldn't happen, as parts of > the code would be compiled into C.
Although, please note that I used > Cython on the vectorized code *as it is*, without adding static types. I > know, stupid thing to do, but shouldn't it have given me *some* speed-up? Why do you think that it should? Dag Sverre
From pav at iki.fi Wed Nov 16 05:03:34 2011 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 16 Nov 2011 11:03:34 +0100 Subject: [SciPy-User] Cython vs Vectorized Numpy vs MATLAB In-Reply-To: References: Message-ID: On 16.11.2011 01:15, Jaidev Deshpande wrote: > 1. Why does the 'spline.m' function in MATLAB perform much faster than > the same interpolation in NumPy? > (In MATLAB the first function call takes time but the subsequent calls > are much faster.) It's a completely different algorithm. The routines in scipy.interpolate try to find the best knots for the spline, whereas the Matlab one is a simple cubic spline interpolation using the data points as the knots.
From franckkalala at googlemail.com Wed Nov 16 05:57:28 2011 From: franckkalala at googlemail.com (franck kalala) Date: Wed, 16 Nov 2011 10:57:28 +0000 Subject: [SciPy-User] how to generate very small random number Message-ID: Hello folks, I would like to generate very small random numbers in scipy. The command random() generates random numbers in (0,1). I am doing a simulation that involves very small numbers. For example, I am doing something like this:
>>> import random
>>> a = random.random()
>>> if a < 10**-6:
...     do something
But the 'do something' branch is never executed, because 10**-6 is very small and in most cases the generated random number is larger. I would then like to generate very small numbers, of order 10**-6 for example. Any idea on how to do that? Cheers F -------------- next part -------------- An HTML attachment was scrubbed... URL:
From robert.kern at gmail.com Wed Nov 16 05:58:59 2011 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 16 Nov 2011 10:58:59 +0000 Subject: [SciPy-User] how to generate very small random number In-Reply-To: References: Message-ID: On Wed, Nov 16, 2011 at 10:57, franck kalala wrote: > Hello folks, > > I would like to generate very small random numbers in scipy. > > The command random() generates random numbers in (0,1). > > I am doing a simulation that involves very small numbers. > > For example, I am doing something like this: > >>>> import random >>>> a = random.random() >>>> if a < 10**-6: > do something > > But the 'do something' branch is never executed, because 10**-6 is very small > and in most cases the generated random number is larger. > > I would then like to generate very small numbers, of order 10**-6 for > example. > > Any idea on how to do that? np.random.uniform(0.0, 1e-5) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
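As a footnote to the answer above: if the goal is numbers whose order of magnitude is around 10**-6 (rather than a uniform draw on a tiny interval), sampling the exponent is a common alternative - a sketch:

>>> import numpy as np
>>> a = 10 ** np.random.uniform(-7, -5)   # log-uniform: magnitude between 1e-7 and 1e-5

Both approaches produce small values by construction, instead of waiting for a uniform (0,1) draw to fall below 10**-6.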
From marquett at iap.fr Tue Nov 15 11:18:29 2011 From: marquett at iap.fr (Jean-Baptiste Marquette) Date: Tue, 15 Nov 2011 17:18:29 +0100 Subject: [SciPy-User] test issues with 0.10 In-Reply-To: References: Message-ID: On 14 Nov 2011, at 20:54, Ralf Gommers wrote: > I've run nosetests on my Mac (64-bit 10.7.2 build on EPD) which fails on the following test: > > test_definition (test_basic.TestDoubleIFFT) ... FAIL > test_definition_real (test_basic.TestDoubleIFFT) ... ok > test_djbfft (test_basic.TestDoubleIFFT) ... python(60968) malloc: *** error for object 0x105435b58: incorrect checksum for freed object - object was probably modified after being freed. > *** set a breakpoint in malloc_error_break to debug > Abort trap: 6 > > The exact same issue was just reported on scipy-user, thread "test issues with 0.10". Can you please move the follow-up over there? > > What compilers did you use? And do you know if EPD was built with the same compilers? Hi Ralf, My compiler is the one that comes with the latest version of Xcode:
macprojb:workdir marquett$ gcc -v
Using built-in specs. Target: i686-apple-darwin11 Configured with: /private/var/tmp/llvmgcc42/llvmgcc42-2336.1~1/src/configure --disable-checking --enable-werror --prefix=/Developer/usr/llvm-gcc-4.2 --mandir=/share/man --enable-languages=c,objc,c++,obj-c++ --program-prefix=llvm- --program-transform-name=/^[cg][^.-]*$/s/$/-4.2/ --with-slibdir=/usr/lib --build=i686-apple-darwin11 --enable-llvm=/private/var/tmp/llvmgcc42/llvmgcc42-2336.1~1/dst-llvmCore/Developer/usr/local --program-prefix=i686-apple-darwin11- --host=x86_64-apple-darwin11 --target=i686-apple-darwin11 --with-gxx-include-dir=/usr/include/c++/4.2.1 Thread model: posix gcc version 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.1.00)
gfortran comes from the HPC site:
macprojb:workdir marquett$ gfortran -v
Using built-in specs. COLLECT_GCC=gfortran COLLECT_LTO_WRAPPER=/usr/local/libexec/gcc/x86_64-apple-darwin10.7.0/4.6.0/lto-wrapper Target: x86_64-apple-darwin10.7.0 Configured with: ../gcc-4.6.0/configure --enable-languages=fortran Thread model: posix gcc version 4.6.0 (GCC)
Unfortunately I don't know very much about those used for EPD. I just got the 64-bit binary from the Enthought academic link. HTH, Cheers Jean-Baptiste -------------- next part -------------- An HTML attachment was scrubbed... URL:
From bill.janssen at gmail.com Tue Nov 15 18:05:47 2011 From: bill.janssen at gmail.com (Bill Janssen) Date: Tue, 15 Nov 2011 15:05:47 -0800 (PST) Subject: [SciPy-User] scipy.misc.lena returns wrong array type? Message-ID: <271a701f-a5ef-401a-a1d5-9b30712b8f1d@u24g2000pru.googlegroups.com> I've been trying to figure out why scipy.misc.lena() returns an array of 8-bit grayscale values but with the dtype as int32. Seems to me it should be an array of uint8? As an added bonus, this would then work with the PIL .fromarray() method, which it currently doesn't, because PIL thinks that arrays of int32 are RGB or RGBA images.
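For anyone hitting the PIL mismatch described above, a minimal workaround sketch (lena's pixel values all fit in 0..255, so the cast is lossless):

>>> import numpy as np
>>> import scipy.misc
>>> import Image                              # PIL
>>> img = scipy.misc.lena()                   # int32 array holding 8-bit gray values
>>> pil_img = Image.fromarray(img.astype(np.uint8))
>>> pil_img.mode                              # now recognized as 8-bit grayscale
'L'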
From bsouthey at gmail.com Wed Nov 16 09:35:53 2011 From: bsouthey at gmail.com (Bruce Southey) Date: Wed, 16 Nov 2011 08:35:53 -0600 Subject: [SciPy-User] [SciPy-user] scipy.test() curious failure In-Reply-To: <32852072.post@talk.nabble.com> References: <32852072.post@talk.nabble.com> Message-ID: <4EC3CA49.7010001@gmail.com> On 11/15/2011 11:04 PM, jajabinker wrote: > HI ALL! Hope you can help me! > > After I run scipy.test() I get the following output: > > http://pastebin.com/eQwnKDy4 > > That's the error. > Has anyone seen it? > I have installed numpy, and numpy.test() runs with no errors. > > In case you spot a bug in my ATLAS build, here is my configure line; > the compilation itself was clean: > > ../configure -b 64 -Fa alg -fPIC --prefix=/home/$USER/numpyscipy/atlas > --with-netlib-lapack=/home/$USER/numpyscipy/lapack-3.4.0/liblapack.a > > THANKS! >
I do not get it on Fedora, but you are using an old gcc (GCC 4.1.2 20080704). I presume that BLAS etc. were compiled with the same version as Python 2.7 - if not, then you should do so. What values do you get for the norm(a) in the test from scipy and numpy? If you get '1000.0' then perhaps something is not being called correctly, as line 13 of scipy/linalg/misc.py is failing on your system:
'ord' should be None because of the call
'a.ndim == 1' should be 'True'
'a.dtype.char' should be 'f'
Bruce
$ python
Python 2.7 (r27:82500, Sep 16 2010, 18:02:00)
[GCC 4.5.1 20100907 (Red Hat 4.5.1-3)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import scipy as sp
>>> import numpy as np
>>> import scipy.linalg
>>> a = np.array([1e4] + [1]*10000, dtype=np.float32)
>>> a
array([ 1.00000000e+04, 1.00000000e+00, 1.00000000e+00, ..., 1.00000000e+00, 1.00000000e+00, 1.00000000e+00], dtype=float32)
>>> sp.linalg.norm(a)
10000.5
>>> np.linalg.norm(a)
10000.0
>>> a.dtype.char
'f'
>>> a.ndim == 1
True
>>>
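A more direct check of the diagnosis above is to call the single-precision BLAS routine that scipy.linalg.norm dispatches to for float32 input (the fblas and snrm2 names are exactly the ones used in the misc.py code quoted in the reply below) - a sketch:

>>> import numpy as np
>>> from scipy.linalg import fblas
>>> x = np.array([3, -4, 5], dtype=np.float32)
>>> fblas.snrm2(x)   # sqrt(50) = 7.0710678... is correct; 0.0 points at a broken ATLAS snrm2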
> >>> import scipy as sp
> >>> import numpy as np
> >>> import scipy.linalg
> >>> a = np.array([1e4] + [1]*10000, dtype=np.float32)
> >>> a
> array([  1.00000000e+04,   1.00000000e+00,   1.00000000e+00, ...,
>          1.00000000e+00,   1.00000000e+00,   1.00000000e+00], dtype=float32)
> >>> sp.linalg.norm(a)
> 10000.5
> >>> np.linalg.norm(a)
> 10000.0
> >>> a.dtype.char
> 'f'
> >>> a.ndim == 1
> True
> >>>
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

Hi, thanks for taking the time to reply!

I compiled BLAS as a shared library, but I did not explicitly define the compiler for either, so it should be the same. I will retry ASAP though.

As predicted I get:

> >>> np.linalg.norm(a)
> 10000.0
>
> misc.py:
> def norm(a, ord=None):
>     # Differs from numpy only in non-finite handling and the use of
>     # blas
>     a = np.asarray_chkfinite(a)
>     if ord in (None, 2) and (a.ndim == 1) and (a.dtype.char in 'fdFD'):
>         # use blas for fast and stable euclidean norm
>         func_name = _nrm2_prefix.get(a.dtype.char, 'd') + 'nrm2'
>         nrm2 = getattr(fblas, func_name)
>         return nrm2(a)
>     return np.linalg.norm(a, ord=ord)

--
View this message in context: http://old.nabble.com/scipy.test%28%29-curious-failure-tp32852072p32855321.html
Sent from the Scipy-User mailing list archive at Nabble.com.

From questions.anon at gmail.com  Wed Nov 16 19:46:22 2011
From: questions.anon at gmail.com (questions anon)
Date: Thu, 17 Nov 2011 11:46:22 +1100
Subject: [SciPy-User] match extent of arrays
Message-ID:

Hello All,
I have an array that I created from a shapefile using gdal.RasterizeLayer and then ReadAsArray().
I would like the array I created from a shapefile to match my array from a netcdf file. I am not sure how I go about this.

The extents of the shapefile/raster/array are:
x_min, x_max, y_min, y_max
140.962408758  149.974994992  -39.1366533667  -33.9813898583

The extents and size for my array from netcdf files are:
min longitude: 139.8   max longitude: 150.0
min latitude: -39.2    max latitude: -33.6
LAT size 106   LON size 193

Any ideas on how I might achieve this?
The overall goal is to then use the array from a shapefile as a mask for my netcdf files.
Thanks
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gokhansever at gmail.com  Wed Nov 16 20:03:45 2011
From: gokhansever at gmail.com (=?UTF-8?Q?G=C3=B6khan_Sever?=)
Date: Wed, 16 Nov 2011 18:03:45 -0700
Subject: [SciPy-User] MODIS data and true-color plotting
In-Reply-To:
References:
Message-ID:

On Sun, Nov 13, 2011 at 12:40 PM, Gökhan Sever wrote:

> Hello groups,
>
> I have two questions about working with MODIS data.
>
> 1-) Is there any light Pythonic HDF-EOS wrapper to handle HDF-EOS data
> other than PyNIO [http://www.pyngl.ucar.edu/Nio.shtml]? Although I have
> managed to install that package from its source, it took me many hours to
> figure out all the installation quirks. Something simpler to build, mainly
> for HDF-EOS data?
>
> 2-) Another similar question: Has anybody attempted to create true-color
> MODIS images (like the ones shown at
> [http://rapidfire.sci.gsfc.nasa.gov/realtime/]) in Python? So far, I have
> seen one clear tutorial [ftp://ftp.ssec.wisc.edu/pub/IMAPP/MODIS/TrueColor/]
> to create natural color images, but it uses ms2gt
> [http://nsidc.org/data/modis/ms2gt/], NDVI and IDL. Except for the
> reflectance correction via NDVI, the ms2gt and IDL parts seem to be
> implemented in Python.
>
> Till now, I have some progress combining GOES imagery with aircraft data.
> My next task is to combine MODIS data with aircraft and radar data. I would
> be happy to get some guidance and code support if there is any previous
> work done using Python.
>
> Thanks.

Hello all,

Here is my answer to my 2nd question:
http://imageshack.us/photo/my-images/713/modistrue1km.png/

The Python script is at
http://code.google.com/p/ccnworks/source/browse/trunk/modis/true.py

This code is based on the TrueColor tutorial of Liam Gumley [ftp://ftp.ssec.wisc.edu/pub/IMAPP/MODIS/TrueColor/] and Peter Kuma's ccplot tool [http://ccplot.org/].

Currently it only plots true color images from Aqua or Terra using Level 1B data. For the example image I use the data provided via the TrueColor link.

Notes:
1-) Using PyNIO to open hdf-eos files, cctk from ccplot for 2D data interpolation, basemap for map plotting, and numpy/scipy for other essentials.
2-) crefl.1km.hdf is reflectance-corrected data produced using NDVI's crefl program. Set up the TrueColor tutorial and you should be able to get these corrected data by calling its main script, or set up another.
3-) MOD03.hdf is the geolocation data.
4-) No need to run ms2gt or any other swath-to-grid conversion tools, since the interpolation routine handles this step.
5-) The code is in 100 lines. Unlike the TrueColor tutorial, it only works at 1KM resolution. HKM and QKM resolution plotting requires additional steps. It takes about 2.5 seconds to get the plot on my screen.

Let me know if you have any comments or other suggestions.

Thanks.

--
Gökhan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mhwang4 at gmail.com  Wed Nov 16 20:39:39 2011
From: mhwang4 at gmail.com (Myunghwa Hwang)
Date: Wed, 16 Nov 2011 18:39:39 -0700
Subject: [SciPy-User] scipy import problem in apache-mod_wsgi environment
In-Reply-To:
References:
Message-ID:

Hi, Hayne, Charles, and Bruce!

Thanks for your response. I was out of town and got into other projects.

For Hayne's question: when I included only the two lines with the finfo calls, my application hung.
For Charles's question: all compute nodes in our server run the CentOS 6 Linux system. For hardware specifics, I only know that they are all 64-bit systems.
For Bruce's question: I am aware of the selinux problem. It was not the issue.

Actually, my problem seemed related to how mod_wsgi handles multiple python sub-interpreters. My problem was solved by including an apache directive related to mod_wsgi. Particularly, I added the following line to my apache configuration file:

WSGIApplicationGroup %{GLOBAL}

Here is the explanation of this apache directive:
The application group name will be set to the empty string. Any WSGI applications in the global application group will always be executed within the context of the first interpreter created by Python when it is initialised. Forcing a WSGI application to run within the first interpreter can be necessary when a third party C extension module for Python has used the simplified threading API for manipulation of the Python GIL and thus will not run correctly within any additional sub interpreters created by Python.
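For context, a minimal sketch of where that line sits in my apache configuration (all names and paths below are made up for illustration, not my real setup):

    <VirtualHost *:80>
        WSGIDaemonProcess myapp processes=2 threads=15
        WSGIProcessGroup myapp
        # force the app into the first interpreter created by Python, so C
        # extension modules that assume the main interpreter work correctly:
        WSGIApplicationGroup %{GLOBAL}
        WSGIScriptAlias / /path/to/myapp/django.wsgi
    </VirtualHost>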
Still, I am not quite sure about the real source that caused my problem. But it is gone now. This is all thanks to you and the scipy mailing list. Again, I really appreciate your help.

Myunghwa

On Mon, Nov 7, 2011 at 11:47 AM, Ralf Gommers wrote:

> On Mon, Nov 7, 2011 at 5:43 AM, Myunghwa Hwang wrote:
>
>> Hi, Hayne!
>>
>> Thanks for your answer.
>> After trying out what you suggested (that is, commenting out the import
>> of decomp), I found out the import of decomp was not the problem.
>> In decomp_schur, there are two lines checking something related to
>> rounding errors specific to a single machine, as follows:
>> eps = np.finfo(float).eps
>> feps = np.finfo(single).eps
>>
> If you execute just the above lines in your application instead of
> importing scipy, does it hang too?
>
> Ralf
>
>> When scipy reaches the above lines, my application hangs.
>> I found a web document where the author encountered the same problem with
>> these lines but in different contexts:
>> http://stackoverflow.com/questions/7592565/when-embedding-cpython-in-java-why-does-this-hang
>>
>> The discussion in the web document is not applicable to my problem.
>> Also, the np.finfo statements seem to exist in multiple modules of scipy.
>> Without addressing all related modules manually, would there be any other
>> solutions?
>>
>> Thanks!
>>
>> --Myung-Hwa
>>
>> On Sat, Nov 5, 2011 at 1:14 PM, wrote:
>>
>>> I would try putting print statements inside "decomp_schur.py" since that
>>> is the module that you said is causing problems.
>>> Print out the contents of the dictionary sys.modules just before the
>>> import of decomp in "decomp_schur.py". Is 'decomp' in the dictionary?
>>> What happens if you comment out the import of decomp in
>>> "decomp_schur.py"?
>>> --
>>> Cameron Hayne
>>> macdev at hayne.net
>>>
>>> On 5-Nov-11, at 3:59 PM, Myunghwa Hwang wrote:
>>>
>>>> I am trying to run a simple django application in a cluster environment.
>>>> And, my application hangs while it imports scipy.linalg, and both scipy
>>>> and apache do not write out error messages.
>>>> When I run my application in my local python shell, it imports
>>>> scipy.linalg. But, somehow, it does not when it is run by apache.
>>>> So, after reading this message, please share any ideas about how to
>>>> debug this problem, or new solutions to address this issue or deploy my
>>>> application.
>>>>
>>>> Now, let me explain our current setup.
>>>> 1. OS
>>>> -- The server is a compute cluster where each node runs CentOS 6 that
>>>> was installed from a clean version of the CentOS 6 minimal image.
>>>> 2. Apache
>>>> -- Apache 2.2 was also manually installed from one of the default linux
>>>> repositories. To be specific, it was installed from its source code
>>>> together with httpd-dev.
>>>> 3. Python
>>>> -- Python 2.7.2 was also installed from its source code across all
>>>> nodes in the cluster. Its source code was downloaded from python.org's ftp.
>>>> 4. Python packages: nose, numpy, scipy
>>>> -- Nose 1.1.2 was downloaded from pypi.python.org and installed from
>>>> its source code.
>>>> -- numpy 1.6.1 was downloaded and installed from a linux repository.
>>>> When building numpy, the gnu95 fortran compiler was used.
>>>> -- To install scipy, we installed atlas-3.8.4, lapack-3.3.1, and blas
>>>> from their source code.
>>>> ----- atlas was from sourceforge's 3.8.4 stable version. To compile
>>>> atlas, gcc was used.
>>>> ----- lapack and blas were obtained from netlib.org's repository. To
>>>> compile the package of lapack and blas, gfortran was used.
>>>> ----- Finally, after exporting paths to blas, lapack, and atlas,
>>>> scipy-0.9.0 was installed from its source code.
>>>> scipy was obtained from sourceforge.net's repository.
>>>> A note that contains the above information about software installation
>>>> is attached.
>>>> All of the above were installed in the same way across all nodes in our
>>>> cluster.
>>>> Since I am the only user of the cluster who needs to run python web
>>>> applications, I installed the python virtualenv package in my local
>>>> directory.
>>>> Within my virtual environment, django-1.3 and pysal-1.2 (our own
>>>> package) were installed.
>>>> To deploy my web applications, we used mod_wsgi.
>>>> mod_wsgi was compiled with python-2.7.2 and loaded into apache-2.2.
>>>> My application is attached. Basically, it is a 'hello world'
>>>> application that tests if numpy, scipy, and pysal can be imported.
>>>> In the attached file, lines 4-9 just add paths to django and
>>>> pysal so that apache knows where to find these packages.
>>>> Also, to let apache know where to find atlas-related packages, the path
>>>> to those packages was added to the LD_LIBRARY_PATH environment variable in
>>>> the /etc/sysconfig/httpd file.
>>>>
>>>> When I first ran my application, it just hung and wrote no message.
>>>> So, across the scipy.linalg modules, I added print statements to figure
>>>> out at which point the import was broken.
>>>> Here are the messages I got when I imported scipy.linalg in my local
>>>> python shell:
>>>> ########################
>>>> starting linalg.__init__
>>>> pre __init__.__doc__
>>>> pre __init__.__version__
>>>> pre __init__.misc
>>>> pre __init__.basic
>>>> #######################
>>>> Starting basic
>>>> pre basic.flinalg
>>>> pre basic.lapack
>>>> pre basic.misc
>>>> pre basic.scipy.linalg
>>>> pre basic.decomp_svd
>>>> pre __init__.decomp
>>>> ################
>>>> starting decomp
>>>> pre decomp.array et al.
>>>> pre decomp.calc_lwork
>>>> pre decomp.LinAlgError
>>>> pre decomp.get_lapack_funcs
>>>> pre decomp.get_blas_funcs
>>>> ####################
>>>> Starting blas
>>>> pre blas.scipy.linalg.fblas
>>>> pre blas.scipy.linalg.cblas
>>>> pre __init__.decomp_lu
>>>> pre __init__.decomp_cholesky
>>>> pre __init__.decomp_qr
>>>> #################
>>>> Starting special_matrices
>>>> pre special_matrices.math
>>>> pre special_matrices.np
>>>> pre __init__.decomp_svd
>>>> pre __init__.decomp_schur
>>>> ##################
>>>> starting schur...
>>>> pre decomp_schur.misc
>>>> pre decomp_schur.LinAlgError
>>>> pre decomp_schur.get_lapack_funcs
>>>> pre decomp_schur.eigvals: **1320454147.23 Fri Nov 4 17:49:07 2011
>>>> schur testing
>>>> pre __init__.matfuncs
>>>> #####################
>>>> Starting matfuncs
>>>> pre matfuncs.asarray et al
>>>> pre matfuncs.matrix
>>>> pre matfuncs.np
>>>> pre matfuncs.misc
>>>> pre matfuncs.basic
>>>> pre matfuncs.special_matrices
>>>> pre matfuncs.decomp
>>>> pre matfuncs.decomp_svd
>>>> pre matfuncs.decomp_schur
>>>> pre __init__.blas
>>>> pre __init__.special_matrices
>>>> When scipy.linalg is successfully imported, I should get these messages.
>>>> But, when my web application tried to import scipy.linalg, the output
>>>> messages stop at line 41.
>>>> At line 41, decomp_schur.py tries to import decomp.py. Since decomp.py
>>>> was already imported at line 16, scipy ignores it and continues to import
>>>> other modules in my local shell.
>>>> But, somehow, in the apache-mod_wsgi environment, scipy failed to ignore
>>>> or reload decomp.py and seems to kill my web application.
>>>> This is really odd, because python does not give any message about this
>>>> error and neither does apache.
>>>> Apache just hangs without sending out any response.
>>>> Since lapack and blas functions were imported successfully, the problem
>>>> seems not to be related to path setup.
>>>>
>>>> If anyone on the list has any insights into or experience with this
>>>> kind of symptom, please share your insights and experience. In particular,
>>>> debugging techniques or less-known installation/compilation problems would
>>>> be helpful. I feel like I am at a dead end. So, please help me.
>>>>
>>>> Thanks for reading this post.
>>>> I will look forward to your responses.
>>>>
>>>> -- Myung-Hwa Hwang
>>>>
>>>> --
>>>> Myung-Hwa Hwang
>>>> GeoDa Center
>>>> School of Geographical Sciences and Urban Planning
>>>> Arizona State University
>>>> mhwang4 at gmail.com or Myunghwa.Hwang at asu.edu
>>>
>>
>> --
>> Myung-Hwa Hwang
>> GeoDa Center
>> School of Geographical Sciences and Urban Planning
>> Arizona State University
>> mhwang4 at gmail.com or Myunghwa.Hwang at asu.edu
>>
>> _______________________________________________
>> SciPy-User mailing list
>> SciPy-User at scipy.org
>> http://mail.scipy.org/mailman/listinfo/scipy-user
>>
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

--
Myung-Hwa Hwang
GeoDa Center
School of Geographical Sciences and Urban Planning
Arizona State University
mhwang4 at gmail.com or Myunghwa.Hwang at asu.edu
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From questions.anon at gmail.com  Wed Nov 16 21:56:58 2011
From: questions.anon at gmail.com (questions anon)
Date: Thu, 17 Nov 2011 13:56:58 +1100
Subject: [SciPy-User] match extent of arrays
In-Reply-To:
References:
Message-ID:

I have had part of this question answered elsewhere, so I thought I would post the result.

from osgeo import gdal, gdalnumeric, ogr, osr
import numpy as N
from netCDF4 import Dataset
from numpy import ma as MA

shapefile = r"E:/GIS_layers/test/Vic_dissolve.shp"
xmin, ymin, xmax, ymax = [139.8, -39.2, 150.0, -33.6]   # your extents as given above
ncols, nrows = [193, 106]                               # your rows/cols as given above
maskvalue = 1

xres = (xmax - xmin) / float(ncols)
yres = (ymax - ymin) / float(nrows)
geotransform = (xmin, xres, 0, ymax, 0, -yres)

src_ds = ogr.Open(shapefile)
src_lyr = src_ds.GetLayer()

# build an in-memory raster with the target grid, then burn the shapefile into it
dst_ds = gdal.GetDriverByName('MEM').Create('', ncols, nrows, 1, gdal.GDT_Byte)
dst_rb = dst_ds.GetRasterBand(1)
dst_rb.Fill(0)                  # initialise raster with zeros
dst_rb.SetNoDataValue(0)
dst_ds.SetGeoTransform(geotransform)

err = gdal.RasterizeLayer(dst_ds, [maskvalue], src_lyr)
dst_ds.FlushCache()

mask_arr = dst_ds.GetRasterBand(1).ReadAsArray()

On Thu, Nov 17, 2011 at 11:46 AM, questions anon wrote:

> Hello All,
> I have an array that I created from a shapefile using gdal.RasterizeLayer
> and then ReadAsArray().
> I would like the array I created from a shapefile to match my array from a
> netcdf file. I am not sure how I go about this.
>
> The extents of the shapefile/raster/array are:
>
> x_min, x_max, y_min, y_max
> 140.962408758  149.974994992  -39.1366533667  -33.9813898583
>
> The extents and size for my array from netcdf files are:
>
> min longitude: 139.8   max longitude: 150.0
> min latitude: -39.2    max latitude: -33.6
> LAT size 106   LON size 193
>
> Any ideas on how I might achieve this?
> The overall goal is to then use the array from a shapefile as a mask for my
> netcdf files.
> Thanks
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From yury at shurup.com  Fri Nov 18 05:15:12 2011
From: yury at shurup.com (Yury V. Zaytsev)
Date: Fri, 18 Nov 2011 11:15:12 +0100
Subject: [SciPy-User] Cython vs Vectorized Numpy vs MATLAB
In-Reply-To:
References:
Message-ID: <1321611312.2608.12.camel@newpride>

On Wed, 2011-11-16 at 05:45 +0530, Jaidev Deshpande wrote:
> Although, please note that I used Cython on the vectorized code *as it
> is*, without adding static types. I know, stupid thing to do, but
> shouldn't it have given me *some* speed-up?

> Vectorized NumPy takes................................................. 1.0342 second
> The above, Cythonized ................................................... 0.997 seconds

It did give you *some* speed-up, didn't it?

Why would you expect more, if all you are doing is calling Python primitives directly via Cython-compiled code instead of parsing the Python file and byte-compiling it first?

--
Sincerely yours,
Yury V. Zaytsev

From denis-bz-gg at t-online.de  Fri Nov 18 06:39:32 2011
From: denis-bz-gg at t-online.de (denis)
Date: Fri, 18 Nov 2011 03:39:32 -0800 (PST)
Subject: [SciPy-User] kdtree, custom distance function
In-Reply-To:
References:
Message-ID:

Oleksandr,

A general method is to map lat/long <-> x,y,z on the sphere, build a kdtree on the 3 coordinates, and map near neighbors in xyz back to lat/long. The Euclidean distance between two points xyz and x'y'z' is not the great-circle distance, but should be near enough. (This is from Bentley p. 92, http://cm.bell-labs.com/cm/cs/pearls.)
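A rough sketch of the mapping (untested; the grid and query values below are made up for illustration):

    import numpy as np
    from scipy.spatial import cKDTree

    def latlon_to_xyz(lat, lon):
        # degrees -> points on the unit sphere
        lat, lon = np.radians(lat), np.radians(lon)
        return np.column_stack([np.cos(lat) * np.cos(lon),
                                np.cos(lat) * np.sin(lon),
                                np.sin(lat)])

    # a flattened 1-degree lat/lon grid (161 x 360 nodes)
    grid_lat = np.repeat(np.arange(-80., 81.), 360)
    grid_lon = np.tile(np.arange(0., 360.), 161)
    tree = cKDTree(latlon_to_xyz(grid_lat, grid_lon))

    dist, ix = tree.query(latlon_to_xyz([45.5], [10.25]), k=4)  # 4 nearest grid nodes
    arc = 2.0 * np.arcsin(dist / 2.0)   # chord -> great-circle angle, if you need it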
cheers
-- denis

On Nov 14, 6:05 pm, Oleksandr Huziy wrote:
> Hello,
>
> I am trying to use scipy.spatial.kdtree to interpolate data from a lat/lon
> grid to a set of points (also with lat/lon coordinates).
>
> Is it possible to specify a custom distance function for the kdtree that
> should be used for querying?

From deshpande.jaidev at gmail.com  Fri Nov 18 11:39:53 2011
From: deshpande.jaidev at gmail.com (Jaidev Deshpande)
Date: Fri, 18 Nov 2011 22:09:53 +0530
Subject: [SciPy-User] Cython vs Vectorized Numpy vs MATLAB
In-Reply-To: <1321611312.2608.12.camel@newpride>
References: <1321611312.2608.12.camel@newpride>
Message-ID:

Haha, thanks :)

Yeah, I guess that's not how Cython works. I think I'll go and read a little bit more about NumPy and Cython.

Thanks again!

On Fri, Nov 18, 2011 at 3:45 PM, Yury V. Zaytsev wrote:

> On Wed, 2011-11-16 at 05:45 +0530, Jaidev Deshpande wrote:
> > Although, please note that I used Cython on the vectorized code *as it
> > is*, without adding static types. I know, stupid thing to do, but
> > shouldn't it have given me *some* speed-up?
>
> > Vectorized NumPy takes................................................. 1.0342 second
> > The above, Cythonized ................................................... 0.997 seconds
>
> It did give you *some* speed-up, didn't it?
>
> Why would you expect more, if all you are doing is calling Python
> primitives directly via Cython-compiled code instead of parsing the
> Python file and byte-compiling it first?
>
> --
> Sincerely yours,
> Yury V. Zaytsev
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sturla at molden.no  Fri Nov 18 12:33:27 2011
From: sturla at molden.no (Sturla Molden)
Date: Fri, 18 Nov 2011 18:33:27 +0100
Subject: [SciPy-User] Cython vs Vectorized Numpy vs MATLAB
In-Reply-To:
References: <1321611312.2608.12.camel@newpride>
Message-ID: <4EC696E7.50000@molden.no>

NumPy will still be NumPy. It does not care if you call it from Python or Cython.

In some circumstances, using loops in Cython instead of NumPy will help. That is, when the NumPy expression is memory-bound and the Cython loop is compute-bound. In all other circumstances, you should expect no significant difference.

Sturla

Den 18.11.2011 17:39, skrev Jaidev Deshpande:
> Haha, thanks :)
>
> Yeah, I guess that's not how Cython works. I think I'll go and read a
> little bit more about NumPy and Cython.
>
> Thanks again!
>
> On Fri, Nov 18, 2011 at 3:45 PM, Yury V. Zaytsev wrote:
>
>> On Wed, 2011-11-16 at 05:45 +0530, Jaidev Deshpande wrote:
>>> Although, please note that I used Cython on the vectorized code *as it
>>> is*, without adding static types. I know, stupid thing to do, but
>>> shouldn't it have given me *some* speed-up?
>>> Vectorized NumPy takes................................................. 1.0342 second
>>> The above, Cythonized ................................................... 0.997 seconds
>> It did give you *some* speed-up, didn't it?
>>
>> Why would you expect more, if all you are doing is calling Python
>> primitives directly via Cython-compiled code instead of parsing the
>> Python file and byte-compiling it first?
>>
>> --
>> Sincerely yours,
>> Yury V. Zaytsev
>>
>> _______________________________________________
>> SciPy-User mailing list
>> SciPy-User at scipy.org
>> http://mail.scipy.org/mailman/listinfo/scipy-user
>>
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ralf.gommers at googlemail.com  Sat Nov 19 09:19:12 2011
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Sat, 19 Nov 2011 06:19:12 -0800
Subject: [SciPy-User] test issues with 0.10
In-Reply-To:
References:
Message-ID:

On Tue, Nov 15, 2011 at 8:18 AM, Jean-Baptiste Marquette wrote:

> Le 14 nov. 2011 à 20:54, Ralf Gommers a écrit :
>
> I've run nosetests on my Mac (64-bit 10.7.2 build on EPD) which fails on
>> the following test:
>>
>> test_definition (test_basic.TestDoubleIFFT) ... FAIL
>> test_definition_real (test_basic.TestDoubleIFFT) ... ok
>> test_djbfft (test_basic.TestDoubleIFFT) ... python(60968) malloc: ***
>> error for object 0x105435b58: incorrect checksum for freed object - object
>> was probably modified after being freed.
>> *** set a breakpoint in malloc_error_break to debug
>> Abort trap: 6
>>
>> The exact same issue was just reported on scipy-user, thread "test issues
> with 0.10". Can you please move the follow-up over there?
>
> What compilers did you use? And do you know if EPD was built with the same
> compilers?
>
>
> Hi Ralf,
>
> My compiler is the one that comes with the latest version of Xcode:
>
> macprojb:workdir marquett$ gcc -v
> Using built-in specs.
> Target: i686-apple-darwin11
> Configured with:
> /private/var/tmp/llvmgcc42/llvmgcc42-2336.1~1/src/configure
> --disable-checking --enable-werror --prefix=/Developer/usr/llvm-gcc-4.2
> --mandir=/share/man --enable-languages=c,objc,c++,obj-c++
> --program-prefix=llvm- --program-transform-name=/^[cg][^.-]*$/s/$/-4.2/
> --with-slibdir=/usr/lib --build=i686-apple-darwin11
> --enable-llvm=/private/var/tmp/llvmgcc42/llvmgcc42-2336.1~1/dst-llvmCore/Developer/usr/local
> --program-prefix=i686-apple-darwin11- --host=x86_64-apple-darwin11
> --target=i686-apple-darwin11 --with-gxx-include-dir=/usr/include/c++/4.2.1
> Thread model: posix
> gcc version 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.1.00)
>
The default compiler on OS X 10.7 is giving problems; I don't think I've seen a report of someone using it successfully yet. You should switch to the normal (non-LLVM) gcc, which is also available by default on 10.7.

> gfortran comes from the HPC site:
>
HPC compilers are also known to be problematic. Please use the one that's linked to on http://scipy.org/Installing_SciPy/Mac_OS_X

Cheers,
Ralf

> macprojb:workdir marquett$ gfortran -v
> Utilisation des specs internes.
> COLLECT_GCC=gfortran
> COLLECT_LTO_WRAPPER=/usr/local/libexec/gcc/x86_64-apple-darwin10.7.0/4.6.0/lto-wrapper
> Target: x86_64-apple-darwin10.7.0
> Configuré avec: ../gcc-4.6.0/configure --enable-languages=fortran
> Modèle de thread: posix
> gcc version 4.6.0 (GCC)
>
> Unfortunately I don't know very much about those used for EPD. I just got
> the 64-bit binary from the Enthought academic link.
>
> HTH,
> Cheers
> Jean-Baptiste
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dan.stowell at eecs.qmul.ac.uk  Sat Nov 19 14:19:47 2011
From: dan.stowell at eecs.qmul.ac.uk (Dan Stowell)
Date: Sat, 19 Nov 2011 19:19:47 +0000
Subject: [SciPy-User] fmin_cg fmin_bfgs "Desired error not necessarily achieved due to precision loss"
Message-ID: <4EC80153.4060501@eecs.qmul.ac.uk>

Hi,

I'm translating a fairly straightforward optimisation code example from Octave. (Attached - it does a quadratic regression, with a tweaked regularisation function.)

Both fmin_cg and fmin_bfgs give me poor convergence and this warning:

"Desired error not necessarily achieved due to precision loss"

This is with various regularisation strengths, with normalised data, and with high-precision data (float128).

Is there something I can do to enable these to converge properly?

Thanks
Dan

(Using ubuntu 11.04, python 2.7.1, scipy 0.8)

--
Dan Stowell
Postdoctoral Research Assistant
Centre for Digital Music
Queen Mary, University of London
Mile End Road, London E1 4NS
http://www.elec.qmul.ac.uk/digitalmusic/people/dans.htm
http://www.mcld.co.uk/
-------------- next part --------------
A non-text attachment was scrubbed...
Name: arcsml.py
Type: text/x-python
Size: 3061 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: data.csv
Type: text/csv
Size: 198 bytes
Desc: not available
URL: 

From questions.anon at gmail.com  Sun Nov 20 01:40:03 2011
From: questions.anon at gmail.com (questions anon)
Date: Sun, 20 Nov 2011 17:40:03 +1100
Subject: [SciPy-User] mask an array using another array
Message-ID:

I am trying to mask one array using another array.
I have created a masked array using mask=MA.masked_equal(myarray,0), which looks something like:

[1 - - 1,
 1 1 - 1,
 1 1 1 1,
 - 1 - 1]

I have an array of values that I want to mask wherever my mask has a nan. How do I do this?
I have looked at http://www.cawcr.gov.au/bmrc/climdyn/staff/lih/pubs/docs/masks.pdf but the command:

d = array(a, mask=c.mask)

does not work. I basically want to do exactly what that article does in that equation.
Any feedback will be greatly appreciated.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From schneider.felipe.5 at gmail.com  Sun Nov 20 19:42:08 2011
From: schneider.felipe.5 at gmail.com (Felipe Schneider)
Date: Sun, 20 Nov 2011 22:42:08 -0200
Subject: [SciPy-User] Optimization, Matlab/Octave and Duplication
Message-ID:

Hi all,

I'm new to SciPy and even scientific computing in general, but I have some basic experience with Octave. Reading about Python and how it can be amazing but still very fast and all, I started to get some interest in SciPy/NumPy. I would like to ask some questions, hoping this is the right place to make them:

1. It seems Octave is really slow compared to the SciPy approach. I would like to know if it's due to some low-level coding, Cython, etc. What's the procedure when someone wants to, say, create a new routine for SciPy?

2. Is there anything similar to Matlab's Toolboxes or Octave's Octave-Forge? Or is it all a huge pack?

3. I was searching for an LP solver and it seems SciPy doesn't have it! But there's cvxopt, am I wrong? So, there are no future plans in this area, I presume, i.e., no LP solver for SciPy? I would like to have a general answer (i.e., "when should and shouldn't SciPy have this/that functionality?"), which leads to my fourth question...

4. It seems that there is way more than one implementation in the Python world for a lot of things, am I wrong? How come? Why reinvent the wheel? Where does SciPy stand on this matter?

Really thanks,
Felipe.

From briedel at wisc.edu  Sun Nov 20 23:37:06 2011
From: briedel at wisc.edu (Benedikt Riedel)
Date: Sun, 20 Nov 2011 22:37:06 -0600
Subject: [SciPy-User] Optimization, Matlab/Octave and Duplication
In-Reply-To:
References:
Message-ID:

On Sun, Nov 20, 2011 at 18:42, Felipe Schneider wrote:
> Hi all,
>
> I'm new to SciPy and even scientific computing in general, but I have
> some basic experience with Octave. Reading about Python and how it can
> be amazing but still very fast and all, I started to get some interest
> in SciPy/NumPy. I would like to ask some questions, hoping this is the
> right place to make them:
>
> 1. It seems Octave is really slow compared to the SciPy approach. I
> would like to know if it's due to some low-level coding, Cython, etc.
> What's the procedure when someone wants to, say, create a new routine
> for SciPy?

Octave is based on C. Scipy is based mostly on Fortran libraries (LAPACK, BLAS), which in numerical calculations are much faster than C.

> 2. Is there anything similar to Matlab's Toolboxes or Octave's
> Octave-Forge? Or is it all a huge pack?

Have you looked at Sage? It is basically Mathematica, but with Numpy, etc., as a backend.
http://www.sagemath.org/

> 3. I was searching for an LP solver and it seems SciPy doesn't have it!
> But there's cvxopt, am I wrong? So, there are no future plans in this
> area, I presume, i.e., no LP solver for SciPy? I would like to have a
> general answer (i.e., "when should and shouldn't SciPy have this/that
> functionality?"), which leads to my fourth question...
Sage has that as far as I recall.

> 4. It seems that there is way more than one implementation in the
> Python world for a lot of things, am I wrong? How come? Why
> reinvent the wheel? Where does SciPy stand on this matter?

Python is built around usability and readability of the code. There are many ways to do one thing, so many different people can use it, and many different paths lead to the same result. Stiff programming boundaries are a pain for new users to get into, but Python tries to eliminate that. Scipy, as far as I can tell, is a little stiffer than regular Python because of the specific functions it calls, but with Python being the language you use, it is much more flexible than, let's say, F77.

> Really thanks,
> Felipe.
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

--
Benedikt Riedel
Graduate Student University of Wisconsin-Madison
Department of Physics
Office: 4244C Chamberlin Hall
Tel: +1-608-301-5736

From cournape at gmail.com  Mon Nov 21 02:39:35 2011
From: cournape at gmail.com (David Cournapeau)
Date: Mon, 21 Nov 2011 07:39:35 +0000
Subject: [SciPy-User] Optimization, Matlab/Octave and Duplication
In-Reply-To:
References:
Message-ID:

On Mon, Nov 21, 2011 at 12:42 AM, Felipe Schneider wrote:
>
> 2. Is there anything similar to Matlab's Toolboxes or Octave's
> Octave-Forge? Or is it all a huge pack?

There is nothing like octave-forge, but there is pypi, which contains a lot of software (not restricted to scientific packages).

>
> 3. I was searching for an LP solver and it seems SciPy doesn't have it!
> But there's cvxopt, am I wrong? So, there are no future plans in this
> area, I presume, i.e., no LP solver for SciPy? I would like to have a
> general answer (i.e., "when should and shouldn't SciPy have this/that
> functionality?"), which leads to my fourth question...
>
> 4. It seems that there is way more than one implementation in the
> Python world for a lot of things, am I wrong? How come? Why
> reinvent the wheel? Where does SciPy stand on this matter?

Python is an open platform, which means the barrier to entry is much lower than on most platforms like e.g. matlab or mathematica. Hence the volume of packages is much larger than what you can see otherwise, which contributes to this "NIH" feeling. There is always a tension between openness and consistency.

That being said, when one package is significantly better than the others, it will usually "win".

David

From dan.stowell at eecs.qmul.ac.uk  Mon Nov 21 03:58:29 2011
From: dan.stowell at eecs.qmul.ac.uk (Dan Stowell)
Date: Mon, 21 Nov 2011 08:58:29 +0000
Subject: [SciPy-User] Optimization, Matlab/Octave and Duplication
In-Reply-To:
References:
Message-ID: <4ECA12B5.8@eecs.qmul.ac.uk>

On 21/11/11 00:42, Felipe Schneider wrote:
> Hi all,
>
> I'm new to SciPy and even scientific computing in general, but I have
> some basic experience with Octave. Reading about Python and how it can
> be amazing but still very fast and all, I started to get some interest
> in SciPy/NumPy. I would like to ask some questions, hoping this is the
> right place to make them:
>
> 1. It seems Octave is really slow compared to the SciPy approach. I
> would like to know if it's due to some low-level coding, Cython, etc.
> What's the procedure when someone wants to, say, create a new routine
> for SciPy?

One of the reasons for slowness in Octave might be the data copying, where Python often passes by reference.
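A tiny illustration of the no-copy behaviour I mean -- in NumPy, slicing returns a view onto the same memory rather than a copy:

    import numpy as np
    a = np.zeros(5)
    b = a[1:4]     # a view, not a copy: no data is copied here
    b[0] = 7.0
    print a        # -> [ 0.  7.  0.  0.  0.], the change is visible through a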
Matlab includes some 'tricks' to avoid too much data copying, but Octave doesn't have those tricks, so it loses out against both Matlab and Python.

> 2. Is there anything similar to Matlab's Toolboxes or Octave's
> Octave-Forge? Or is it all a huge pack?

Scikits is one collection that you might think of in that way:
http://scikits.appspot.com/

> 3. I was searching for an LP solver and it seems SciPy doesn't have it!
> But there's cvxopt, am I wrong? So, there are no future plans in this
> area, I presume, i.e., no LP solver for SciPy? I would like to have a
> general answer (i.e., "when should and shouldn't SciPy have this/that
> functionality?"), which leads to my fourth question...

You might find it in scikits, or http://openopt.org

HTH
Dan

> 4. It seems that there is way more than one implementation in the
> Python world for a lot of things, am I wrong? How come? Why
> reinvent the wheel? Where does SciPy stand on this matter?
>
> Really thanks,
> Felipe.
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

--
Dan Stowell
Postdoctoral Research Assistant
Centre for Digital Music
Queen Mary, University of London
Mile End Road, London E1 4NS
http://www.elec.qmul.ac.uk/digitalmusic/people/dans.htm
http://www.mcld.co.uk/

From blattnem at gmail.com  Mon Nov 21 04:40:12 2011
From: blattnem at gmail.com (Marcel Blattner)
Date: Mon, 21 Nov 2011 10:40:12 +0100
Subject: [SciPy-User] Optimization, Matlab/Octave and Duplication
In-Reply-To: <4ECA12B5.8@eecs.qmul.ac.uk>
References: <4ECA12B5.8@eecs.qmul.ac.uk>
Message-ID:

Is there somebody who has experience with R and sparse data structures, from a performance point of view (compared to scipy)?

Regards
Marcel

On Mon, Nov 21, 2011 at 9:58 AM, Dan Stowell wrote:

> On 21/11/11 00:42, Felipe Schneider wrote:
> > Hi all,
> >
> > I'm new to SciPy and even scientific computing in general, but I have
> > some basic experience with Octave. Reading about Python and how it can
> > be amazing but still very fast and all, I started to get some interest
> > in SciPy/NumPy. I would like to ask some questions, hoping this is the
> > right place to make them:
> >
> > 1. It seems Octave is really slow compared to the SciPy approach. I
> > would like to know if it's due to some low-level coding, Cython, etc.
> > What's the procedure when someone wants to, say, create a new routine
> > for SciPy?
>
> One of the reasons for slowness in Octave might be the data copying,
> where Python often passes by reference. Matlab includes some 'tricks' to
> avoid too much data copying, but Octave doesn't have those tricks, so it
> loses out against both Matlab and Python.
>
> > 2. Is there anything similar to Matlab's Toolboxes or Octave's
> > Octave-Forge? Or is it all a huge pack?
>
> Scikits is one collection that you might think of in that way:
> http://scikits.appspot.com/
>
> > 3. I was searching for an LP solver and it seems SciPy doesn't have it!
> > But there's cvxopt, am I wrong? So, there are no future plans in this
> > area, I presume, i.e., no LP solver for SciPy? I would like to have a
> > general answer (i.e., "when should and shouldn't SciPy have this/that
> > functionality?"), which leads to my fourth question...
>
> You might find it in scikits, or http://openopt.org
>
> HTH
> Dan
>
> > 4. It seems that there is way more than one implementation in the
> > Python world for a lot of things, am I wrong? How come? Why
> > reinvent the wheel?
> > Where does SciPy stand on this matter?
> >
> > Really thanks,
> > Felipe.
> > _______________________________________________
> > SciPy-User mailing list
> > SciPy-User at scipy.org
> > http://mail.scipy.org/mailman/listinfo/scipy-user
>
> --
> Dan Stowell
> Postdoctoral Research Assistant
> Centre for Digital Music
> Queen Mary, University of London
> Mile End Road, London E1 4NS
> http://www.elec.qmul.ac.uk/digitalmusic/people/dans.htm
> http://www.mcld.co.uk/
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jason-sage at creativetrax.com  Mon Nov 21 06:07:43 2011
From: jason-sage at creativetrax.com (Jason Grout)
Date: Mon, 21 Nov 2011 05:07:43 -0600
Subject: [SciPy-User] Optimization, Matlab/Octave and Duplication
In-Reply-To:
References:
Message-ID: <4ECA30FF.2030906@creativetrax.com>

On 11/20/11 10:37 PM, Benedikt Riedel wrote:
>> > 3. I was searching for an LP solver and it seems SciPy doesn't have it!
>> > But there's cvxopt, am I wrong? So, there are no future plans in this
>> > area, I presume, i.e., no LP solver for SciPy? I would like to have a
>> > general answer (i.e., "when should and shouldn't SciPy have this/that
>> > functionality?"), which leads to my fourth question...
> Sage has that as far as I recall.
>

Yes. See

http://www.sagemath.org/doc/thematic_tutorials/linear_programming.html

and

http://www.sagemath.org/doc/reference/sage/numerical/mip.html

Thanks,

Jason

From lou_boog2000 at yahoo.com  Mon Nov 21 07:18:54 2011
From: lou_boog2000 at yahoo.com (Lou Pecora)
Date: Mon, 21 Nov 2011 04:18:54 -0800 (PST)
Subject: [SciPy-User] Optimization, Matlab/Octave and Duplication
In-Reply-To:
References:
Message-ID: <1321877934.72412.YahooMailNeo@web34401.mail.mud.yahoo.com>

From: Benedikt Riedel
To: SciPy Users List
Sent: Sunday, November 20, 2011 11:37 PM
Subject: Re: [SciPy-User] Optimization, Matlab/Octave and Duplication

On Sun, Nov 20, 2011 at 18:42, Felipe Schneider

[cut]

> 2. Is there anything similar to Matlab's Toolboxes or Octave's
> Octave-Forge? Or is it all a huge pack?

Have you looked at Sage? It is basically Mathematica, but with Numpy, etc., as a backend.

http://www.sagemath.org/

---------

I will add a second recommendation for SAGE. It's a big package, but I've never had trouble installing it. It contains a *lot* of various libraries. Many are for symbolic math (like Mathematica), but there are a lot of Python modules, too, including ctypes and Cython, which are good for speeding up Python functions, and plotting modules. It's a self-contained package. It has a whole Python interpreter in it. Nothing else to install.

-- Lou Pecora, my views are my own.
________________________________
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From raycores at gmail.com  Mon Nov 21 11:29:36 2011
From: raycores at gmail.com (Lynn Oliver)
Date: Mon, 21 Nov 2011 08:29:36 -0800
Subject: [SciPy-User] interp1d results vs. MatLab interp1
Message-ID: <803D30DA-4E4B-4D7E-B26C-28D8C31911F8@gmail.com>

I'm converting a MatLab program to Python, and I'm having problems understanding why scipy.interpolate.interp1d is giving different results than MatLab interp1.
In MatLab the usage is slightly different:

    yi = interp1(x,Y,xi,'cubic')

While in SciPy it's like this:

    f = interp1d(x,Y,kind='cubic')
    yi = f(xi)

For a trivial example the results are the same:

MatLab:
    interp1([0 1 2 3 4], [0 1 2 3 4],[1.5 2.5 3.5],'cubic')
    1.5000  2.5000  3.5000

Python:
    interp1d([1,2,3,4],[1,2,3,4],kind='cubic')([1.5,2.5,3.5])
    array([ 1.5,  2.5,  3.5])

But for a real-world example they are not the same:

    x  = 0.0000e+000  2.1333e+001  3.2000e+001  1.6000e+004  2.1333e+004  2.3994e+004
    Y  = -6  -6  20  20  -6  -6
    xi = 0.00000  11.72161  23.44322  35.16484

    Matlab: -6.0000  -12.3303  -3.7384  22.7127
    Python: -6.  -15.63041012  -2.04908267  30.43054192

Any thoughts as to how I can get results that are consistent with MatLab?

Thanks-
Lynn
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From perry at stsci.edu  Mon Nov 21 11:51:18 2011
From: perry at stsci.edu (Perry Greenfield)
Date: Mon, 21 Nov 2011 11:51:18 -0500
Subject: [SciPy-User] [job] two STScI positions
Message-ID: <833ABCF7-8E82-4DC3-A764-EA230D5A901D@stsci.edu>

STScI has posted two positions in the Science Software Branch:

astronomical applications developer:
https://rn11.ultipro.com/SPA1004/jobboard/JobDetails.aspx?__ID=*B12AC582943DA7BA

software distribution and installation support:
https://rn11.ultipro.com/SPA1004/jobboard/JobDetails.aspx?__ID=*61BE073DB16951DE

From pav at iki.fi  Mon Nov 21 11:53:09 2011
From: pav at iki.fi (Pauli Virtanen)
Date: Mon, 21 Nov 2011 17:53:09 +0100
Subject: [SciPy-User] interp1d results vs. MatLab interp1
In-Reply-To: <803D30DA-4E4B-4D7E-B26C-28D8C31911F8@gmail.com>
References: <803D30DA-4E4B-4D7E-B26C-28D8C31911F8@gmail.com>
Message-ID:

21.11.2011 17:29, Lynn Oliver kirjoitti:
> I'm converting a MatLab program to Python, and I'm having problems
> understanding why scipy.interpolate.interp1d is giving different results
> than MatLab interp1.

With cubic splines, there is freedom in choosing the interpolants, so there are many different "cubic" spline interpolation schemes.

Matlab's interp1's 'cubic' mode apparently produces a C1 continuous spline that is monotonicity-preserving. I don't think such a mode is currently implemented in Scipy.

--
Pauli Virtanen

From charlesr.harris at gmail.com  Mon Nov 21 13:13:58 2011
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Mon, 21 Nov 2011 11:13:58 -0700
Subject: [SciPy-User] interp1d results vs. MatLab interp1
In-Reply-To:
References: <803D30DA-4E4B-4D7E-B26C-28D8C31911F8@gmail.com>
Message-ID:

On Mon, Nov 21, 2011 at 9:53 AM, Pauli Virtanen wrote:

> 21.11.2011 17:29, Lynn Oliver kirjoitti:
> > I'm converting a MatLab program to Python, and I'm having problems
> > understanding why scipy.interpolate.interp1d is giving different results
> > than MatLab interp1.
>
> With cubic splines, there is freedom in choosing the interpolants, so
> there are many different "cubic" spline interpolation schemes.
>
> Matlab's interp1's 'cubic' mode apparently produces a C1 continuous
> spline that is monotonicity-preserving. I don't think such a mode is
> currently implemented in Scipy.
>

The boundary conditions can make a difference. I expect, given De Boor's participation, that the Matlab spline uses not-a-knot boundary conditions when no other boundary conditions are specified. I'm not sure what interp1d does.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pav at iki.fi  Mon Nov 21 13:28:37 2011
From: pav at iki.fi (Pauli Virtanen)
Date: Mon, 21 Nov 2011 19:28:37 +0100
Subject: [SciPy-User] interp1d results vs. MatLab interp1
In-Reply-To:
References: <803D30DA-4E4B-4D7E-B26C-28D8C31911F8@gmail.com>
Message-ID:

21.11.2011 19:13, Charles R Harris kirjoitti:
[clip]
> The boundary conditions can make a difference. I expect, given De Boor's
> participation, that the Matlab spline uses not-a-knot boundary
> conditions when no other boundary conditions are specified. I'm not sure
> what interp1d does.

It's not only the boundary conditions: you can also make a choice whether you want C2 continuity, or if you stick with C1, which gives you more freedom to play around with other things such as monotonicity.

--
Pauli Virtanen

From charlesr.harris at gmail.com  Mon Nov 21 14:39:58 2011
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Mon, 21 Nov 2011 12:39:58 -0700
Subject: [SciPy-User] interp1d results vs. MatLab interp1
In-Reply-To:
References: <803D30DA-4E4B-4D7E-B26C-28D8C31911F8@gmail.com>
Message-ID:

On Mon, Nov 21, 2011 at 11:28 AM, Pauli Virtanen wrote:

> 21.11.2011 19:13, Charles R Harris kirjoitti:
> [clip]
> > The boundary conditions can make a difference. I expect, given De Boor's
> > participation, that the Matlab spline uses not-a-knot boundary
> > conditions when no other boundary conditions are specified. I'm not sure
> > what interp1d does.
>
> It's not only the boundary conditions: you can also make a choice
> whether you want C2 continuity, or if you stick with C1, which gives you
> more freedom to play around with other things such as monotonicity.
>

Is that an option in interp1d? That is usually done for b-splines by using repeated knot points. When the knot points are isolated, the spline and all derivatives except the last non-zero one are continuous. Each repeat of the knot point drops the number of continuity conditions by one, so that in the cubic spline case a knot point repeated four times allows the spline to be discontinuous at that point, whereas zero knot points, i.e., between knot points, require continuity to all orders.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From schneider.felipe.5 at gmail.com  Mon Nov 21 19:34:41 2011
From: schneider.felipe.5 at gmail.com (Felipe Schneider)
Date: Mon, 21 Nov 2011 22:34:41 -0200
Subject: [SciPy-User] Optimization, Matlab/Octave and Duplication
In-Reply-To: <1321877934.72412.YahooMailNeo@web34401.mail.mud.yahoo.com>
References: <1321877934.72412.YahooMailNeo@web34401.mail.mud.yahoo.com>
Message-ID:

Thank you all for the answers. I just have another one.

Is there any interest in having quadratic and linear programming solvers within SciPy? And does it have anything to do with the licensing of third-party software? My question is motivated by the fact that I searched for open-source packages for this kind of application and I found almost only (L)GPL'ed ones, which of course are not compatible with the license SciPy uses (I found Fortran simplex routines here [1], though, but I have no idea what kind of licensing is used there).

[1]: http://www.netlib.org/toms/

From strikerbot121 at gmail.com  Tue Nov 22 12:45:04 2011
From: strikerbot121 at gmail.com (Matthew Arceri)
Date: Tue, 22 Nov 2011 12:45:04 -0500
Subject: [SciPy-User] Basic bandpass filtering/fourier transform of live audio?
Message-ID:

I'm a high school senior working on an engineering project where I'm trying to make a 3D positioning system using microphones.
The problem I'm having is that I can't find any way to filter specific frequencies from a live microphone input. I've found a number of python libraries that seem able to achieve just that, but the examples I find are for far more complicated things. SciPy is the only one with a decent following that could possibly help me. So, to reiterate, I want to know how I can choose an audio port and look for a specific frequency being picked up on that port. (I'm using Ubuntu, Python 2.7.1, and the latest versions of Numpy + Scipy)

From j.l.anderson at phonecoop.coop  Tue Nov 22 13:43:32 2011
From: j.l.anderson at phonecoop.coop (Joseph Anderson)
Date: Tue, 22 Nov 2011 18:43:32 +0000
Subject: [SciPy-User] Basic bandpass filtering/fourier transform of live audio?
In-Reply-To:
References:
Message-ID:

Hello Matthew,

For this application I would strongly consider using a system designed and optimised for real-time DSP. SuperCollider would be my first choice:

http://supercollider.sourceforge.net/

My kind regards,

J Anderson

On 22 Nov 2011, at 5:45 pm, Matthew Arceri wrote:

> I'm a high school senior working on an engineering project where I'm trying to make a 3D positioning system using microphones. The problem I'm having is that I can't find any way to filter specific frequencies from a live microphone input. I've found a number of python libraries that seem able to achieve just that, but the examples I find are for far more complicated things. SciPy is the only one with a decent following that could possibly help me. So, to reiterate, I want to know how I can choose an audio port and look for a specific frequency being picked up on that port. (I'm using Ubuntu, Python 2.7.1, and the latest versions of Numpy + Scipy)
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From PSchmidt at watlow.com  Tue Nov 22 15:11:06 2011
From: PSchmidt at watlow.com (Schmidt, Phil)
Date: Tue, 22 Nov 2011 20:11:06 +0000
Subject: [SciPy-User] strange behavior calling odeint from brentq
Message-ID: <0782B99B7E1D1745B7A6567C05E892ED0F5839@WATEXC2010.Watlow.com>

Hello,

I am implementing the shooting method using optimize.brentq() and integrate.odeint(). The following is an outline of my code:

from scipy.integrate import odeint
from scipy.optimize import brentq

def objective(t2, *args):
    t1, x_init, x_target = args
    # integrate the state from t1 to the trial end time t2
    x = odeint(dxdt, x_init, [t1, t2])
    # odeint returns one row per time point; keep the final state and
    # return a scalar residual, since brentq needs a scalar
    return x[-1, 0] - x_target

t_target = brentq(objective, t1, t2, args=(t1, x_init, x_target))

I have observed that if I place do-nothing statements in the objective function (e.g., print statements or dummy assignments like t1=t1), sometimes I will get different answers for t_target. I have not identified a pattern for when this may or may not occur, but presumably there is some dependency between brentq() and odeint().

I am running Scipy 0.9.0rc3, Python 2.6.5, Windows XP.

Can anyone explain why this is happening, and point me to the "right" way to do what I'm attempting?

Thanks,
Phil
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dan.stowell at eecs.qmul.ac.uk  Wed Nov 23 04:41:57 2011
From: dan.stowell at eecs.qmul.ac.uk (Dan Stowell)
Date: Wed, 23 Nov 2011 09:41:57 +0000
Subject: [SciPy-User] fmin_cg fmin_bfgs "Desired error not necessarily achieved due to precision loss"
In-Reply-To: <4EC80153.4060501@eecs.qmul.ac.uk>
References: <4EC80153.4060501@eecs.qmul.ac.uk>
Message-ID: <4ECCBFE5.4050308@eecs.qmul.ac.uk>

(Bump)

Anyone got any suggestions about this "precision loss" issue, please?

I found this message from last year, suggesting that using dot instead of sum might help (yuck):
http://comments.gmane.org/gmane.comp.python.numeric.general/41268

- but no difference here, I still get the optimisation stopping after three iterations with that complaint.

Any tips welcome

Thanks
Dan

On 19/11/11 19:19, Dan Stowell wrote:
> Hi,
>
> I'm translating a fairly straightforward optimisation code example from
> Octave. (Attached - it does a quadratic regression, with a tweaked
> regularisation function.)
>
> Both fmin_cg and fmin_bfgs give me poor convergence and this warning:
>
> "Desired error not necessarily achieved due to precision loss"
>
> This is with various regularisation strengths, with normalised data, and
> with high-precision data (float128).
>
> Is there something I can do to enable these to converge properly?
>
> Thanks
> Dan
>
> (Using ubuntu 11.04, python 2.7.1, scipy 0.8)

--
Dan Stowell
Postdoctoral Research Assistant
Centre for Digital Music
Queen Mary, University of London
Mile End Road, London E1 4NS
http://www.elec.qmul.ac.uk/digitalmusic/people/dans.htm
http://www.mcld.co.uk/

From eraldo.pomponi at gmail.com  Wed Nov 23 07:23:17 2011
From: eraldo.pomponi at gmail.com (Eraldo Pomponi)
Date: Wed, 23 Nov 2011 13:23:17 +0100
Subject: [SciPy-User] Integration over Voronoi cells
Message-ID:

Hi folks,

I'm working on the integration of a function like:

K(r,t) = 1/(2 pi D r) exp[-r^2/Dt]          [1]

over Voronoi cells (r is the distance from the point with which the cell is associated).

I googled a lot and I found these two useful hints:

http://stackoverflow.com/questions/5941113/looking-for-python-package-for-numerical-integration-over-a-tessellated-domain
http://mathforum.org/kb/message.jspa?messageID=4963570&tstart=0

but I'm still not able to understand how I should do this integration. I have the function that returns the segments (a list of [(x_start,y_start),(x_stop,y_stop)]) necessary to construct the Voronoi cells associated with a set of points. Could someone suggest how to proceed?

There's also a numerical problem connected with the integration, due to the singularity at r == 0. Could you suggest a reasonably stable integration method available in scipy that could handle the function [1]?

Cheers,
Eraldo
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From deil.christoph at googlemail.com  Wed Nov 23 09:15:47 2011
From: deil.christoph at googlemail.com (Christoph Deil)
Date: Wed, 23 Nov 2011 15:15:47 +0100
Subject: [SciPy-User] Numpy / Scipy build / test errors on Mac OS X Lion with Macports
Message-ID:

Hi,

I would like to use numpy / scipy git master on Mac OS X Lion with XCode 4.2.

The recommendation at http://www.scipy.org/Installing_SciPy/Mac_OS_X is to use the official Python distribution, but I already have a ton of other libraries and python packages (e.g. ipython 0.11 with a working qtconsole) installed with the Macports python, and I guess it is not possible to mix packages installed with the Macports / official python?

Would it be helpful if I filed tickets with the build and test logs for numpy and scipy using the XCode and Macports compilers, or are these problems well-known and simply too hard to fix?

I frequently had build problems with numpy / scipy on other machines as well and would like to understand the issues better.
E.g. does numpy need a Fortran compiler, or is it all C/C++?
Why are there often problems with gfortran versions/builds other than http://r.research.att.com/gfortran-lion-5666-3.pkg? Is the problem that the fortran compilers or the numpy / scipy libraries are non-standard-compliant?
Does anyone have a reference that explains building C/C++/Fortran Python extensions in general, or specifically for numpy / scipy?

Thanks!
Christoph

From robert.kern at gmail.com  Wed Nov 23 09:26:40 2011
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 23 Nov 2011 14:26:40 +0000
Subject: [SciPy-User] Numpy / Scipy build / test errors on Mac OS X Lion with Macports
In-Reply-To:
References:
Message-ID:

On Wed, Nov 23, 2011 at 14:15, Christoph Deil wrote:
> Hi,
>
> I would like to use numpy / scipy git master on Mac OS X Lion with XCode 4.2.
>
> The recommendation at http://www.scipy.org/Installing_SciPy/Mac_OS_X is to use the official Python distribution, but I already have a ton of other libraries and python packages (e.g. ipython 0.11 with a working qtconsole) installed with the Macports python, and I guess it is not possible to mix packages installed with the Macports / official python?

Probably not.

> Would it be helpful if I filed tickets with the build and test logs for numpy and scipy using the XCode and Macports compilers, or are these problems well-known and simply too hard to fix?

I don't know what problems you are referring to, so yes, reporting them would help.

> I frequently had build problems with numpy / scipy on other machines as well and would like to understand the issues better.
> E.g. does numpy need a Fortran compiler, or is it all C/C++?

numpy is all C. The only time you need a Fortran compiler to build numpy is if you link against Fortran-compiled BLAS/LAPACK libraries. scipy does have Fortran code that it needs to compile.

> Why are there often problems with gfortran versions/builds other than http://r.research.att.com/gfortran-lion-5666-3.pkg? Is the problem that the fortran compilers or the numpy / scipy libraries are non-standard-compliant?

The "problem", such as it is, is that Apple extended gcc to add several flags for handling its multiple -arch flags and for implementing OS X's special brand of dynamic linking, both of which are necessary to build Python extensions for framework builds of Python on OS X. However, Apple does not provide similarly extended builds of gfortran. They leave that to third parties.
The R group that makes the gfortran binaries at
http://r.research.att.com has consistently made good builds of gfortran
that provide these flags.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From bala.biophysics at gmail.com Wed Nov 23 09:29:29 2011
From: bala.biophysics at gmail.com (Bala subramanian)
Date: Wed, 23 Nov 2011 15:29:29 +0100
Subject: [SciPy-User] meshgrid for 3D
Message-ID:

Friends,

I have a data file containing three vectors x, y, z and want to create
coordinate matrices from the three vectors. While I know that numpy's
meshgrid function can be used for two vectors, I don't know of any tool
I can use for three dimensions. Kindly suggest a solution.

Thanks,
Bala

[attachment: test.dat]

From josef.pktd at gmail.com Wed Nov 23 09:48:50 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Wed, 23 Nov 2011 09:48:50 -0500
Subject: [SciPy-User] fmin_cg fmin_bfgs "Desired error not necessarily achieveddue to precision loss"
In-Reply-To: <4ECCBFE5.4050308@eecs.qmul.ac.uk>
References: <4EC80153.4060501@eecs.qmul.ac.uk> <4ECCBFE5.4050308@eecs.qmul.ac.uk>
Message-ID:

On Wed, Nov 23, 2011 at 4:41 AM, Dan Stowell wrote:
[clip]
> - but no difference here; I still get the optimisation stopping after
> three iterations with that complaint.

Something is wrong with the gradient calculation.

If I drop fprime in the call to fmin_bfgs, then it converges after 11
to 14 iterations (600 in the last case).

fmin also doesn't have any problems with convergence.

(I'm using just float64)

Josef
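A note on the 3-D meshgrid question above: plain numpy broadcasting
already covers this case. A minimal sketch (x, y and z stand in for the
three vectors read from the data file):

import numpy as np

x = np.linspace(0.0, 1.0, 3)
y = np.linspace(0.0, 1.0, 4)
z = np.linspace(0.0, 1.0, 5)

# Views shaped (3,1,1), (1,4,1) and (1,1,5) broadcast against each
# other, which is often all a 3-D "meshgrid" is needed for.
X = x[:, None, None]
Y = y[None, :, None]
Z = z[None, None, :]

# If fully expanded (3,4,5) coordinate arrays are really required:
Xf, Yf, Zf = np.broadcast_arrays(X, Y, Z)
print Xf.shape, Yf.shape, Zf.shape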
From friedrichromstedt at gmail.com Wed Nov 23 09:49:59 2011
From: friedrichromstedt at gmail.com (Friedrich Romstedt)
Date: Wed, 23 Nov 2011 15:49:59 +0100
Subject: [SciPy-User] Integration over Voronoi cells
Message-ID:

2011/11/23 Eraldo Pomponi :
[clip]
> Could you suggest a reasonably stable integration method available in
> scipy that can handle the function [1]?

Well, AISI, your function is radially symmetric, so the integration
over 1/r can be done analytically within a small disk where the
Gaussian is approximately one, since

2 \pi r * [1 / (2 \pi D r)]

is just the constant 1/D. So you're left with [the integration up to]
R_0/D, where R_0 is the radius of that disk (within which the
integration is done analytically). [No more divergence, since you cut
out the centre region.] You could also just integrate with the full
Gaussian, since the kernel just integrates up the lengths of the disk
perimeters, if you understand.

All you need to do, AISI atm, is to calculate a function that gives
you the perimeter length of the circle which is inside your Voronoi
cell. This is zero at r = 0 and will stay bounded for all finite
radii. So integration should be fairly straightforward. You could even
just do a sum over a 1D grid for the radius; since the function varies
slowly, this should be fast and easy (both in runtime and in coding
time). You might not even need scipy for this.

Additionally, the Gaussian just introduces a suppression of radii which
are farther outside. It makes the complete function bounded even for an
infinitely large Voronoi cell :-)

The kernel 1/r makes things easy and convenient, instead of making
things troublesome:

1) because it exhibits rotational invariance;
2) because it converges nicely in 2D for an integral over a circle line.

I might be overlooking something obvious. Is the function you gave
really the full function, or only the "kernel", the weighting function?

Friedrich

P.S.: You divide the triangles resulting from the centre point and the
boundary lines into two parts, separated by the closest point on the
boundary, which always lies within the bounds of that boundary line.
Then you just have a perimeter length which is linear in r until you
touch the line, and after that it'll be a little more complicated. I
think you'll figure it out.

P.P.S.: You're really lucky that your kernel is 1/r and not 1/r^2, iirc ;-)

From dan.stowell at eecs.qmul.ac.uk Wed Nov 23 10:02:09 2011
From: dan.stowell at eecs.qmul.ac.uk (Dan Stowell)
Date: Wed, 23 Nov 2011 15:02:09 +0000
Subject: [SciPy-User] fmin_cg fmin_bfgs "Desired error not necessarily achieveddue to precision loss"
References: <4EC80153.4060501@eecs.qmul.ac.uk> <4ECCBFE5.4050308@eecs.qmul.ac.uk>
Message-ID: <4ECD0AF1.7020901@eecs.qmul.ac.uk>

On 23/11/2011 14:48, josef.pktd at gmail.com wrote:
[clip]
> Something is wrong with the gradient calculation.
>
> If I drop fprime in the call to fmin_bfgs, then it converges after 11
> to 14 iterations (600 in the last case).
>
> fmin also doesn't have any problems with convergence.

Thanks, you're absolutely right. (Also, plain 'fmin' converges easily.)

I've found the problem now. I was assuming that the function would
preserve the shape of my parameter vector (a column vector), whereas it
was feeding a row vector into my functions, causing wrong behaviour. A
bit of reshape fixed it.

Thanks
Dan

-- 
Dan Stowell
Postdoctoral Research Assistant
Centre for Digital Music
Queen Mary, University of London
Mile End Road, London E1 4NS
http://www.elec.qmul.ac.uk/digitalmusic/people/dans.htm
http://www.mcld.co.uk/

From pitch006 at umn.edu Wed Nov 23 12:11:58 2011
From: pitch006 at umn.edu (David Pitchford)
Date: Wed, 23 Nov 2011 11:11:58 -0600
Subject: [SciPy-User] Problems installing scipy 0.10.0 using a local installation of numpy 1.6.1
Message-ID:

I am trying to install scipy on a lab computer for research at my
university.
I do not have root access to these machines, so when I need to install
new python modules I usually build them locally in a folder I control
(using python setup.py build), then add that folder to my PYTHONPATH
variable. I got numpy 1.6.1 working this way, but when I try to install
scipy 0.10.0 with my PYTHONPATH variable pointing to
build/lib.linux-x86_64-2.6/ (relative to the top-level directory numpy
is in, numpy-1.6.1), I get some errors that appear to involve files and
directories that were expected but not found. Here is part of the
output from my build command:

umfpack_info:
  libraries umfpack not found in /usr/local/lib
  libraries umfpack not found in /usr/lib64
  libraries umfpack not found in /usr/lib
  libraries umfpack not found in /opt/local/lib
/export/scratch/pitch/trunk/pyserver/numpy-1.6.1/build/lib.linux-x86_64-2.6/numpy/distutils/system_info.py:460: UserWarning:
    UMFPACK sparse solver (http://www.cise.ufl.edu/research/sparse/umfpack/)
    not found. Directories to search for the libraries can be specified in the
    numpy/distutils/site.cfg file (section [umfpack]) or by setting
    the UMFPACK environment variable.
  warnings.warn(self.notfounderror.__doc__)
  NOT AVAILABLE

non-existing path in 'scipy/spatial': '/export/scratch/pitch/trunk/pyserver/numpy-1.6.1/build/lib.linux-x86_64-2.6/numpy/core/include'
non-existing path in 'scipy/special': '/export/scratch/pitch/trunk/pyserver/numpy-1.6.1/build/lib.linux-x86_64-2.6/numpy/core/include'

Traceback (most recent call last):
[Most of long stack trace omitted]
  File "/export/scratch/pitch/trunk/pyserver/numpy-1.6.1/build/lib.linux-x86_64-2.6/numpy/distutils/npy_pkg_config.py", line 309, in _read_config
    meta, vars, sections, reqs = parse_config(f, dirs)
  File "/export/scratch/pitch/trunk/pyserver/numpy-1.6.1/build/lib.linux-x86_64-2.6/numpy/distutils/npy_pkg_config.py", line 281, in parse_config
    raise PkgNotFound("Could not find file(s) %s" % str(filenames))
numpy.distutils.npy_pkg_config.PkgNotFound: Could not find file(s) ['/export/scratch/pitch/trunk/pyserver/numpy-1.6.1/build/lib.linux-x86_64-2.6/numpy/core/lib/npy-pkg-config/npymath.ini']

It seems like these files are supposed to be there, but they aren't,
and I have no idea why. Is it possible to install scipy locally this
way? If so, do I need to change something in my installation of numpy
or in PYTHONPATH so it finds the files? If not, I can ask my
administrator to install the packages for me (a last resort, as I
would expect it to take at least a week).

-- 
-David Pitchford

From bala.biophysics at gmail.com Wed Nov 23 12:12:21 2011
From: bala.biophysics at gmail.com (Bala subramanian)
Date: Wed, 23 Nov 2011 18:12:21 +0100
Subject: [SciPy-User] importing griddata
Message-ID:

Friends,

I have scipy version 0.8.0 installed on my system running Ubuntu. When
I do 'import scipy' or 'import scipy.interpolate' I don't get any
problem, but when I do 'from scipy.interpolate import griddata' I get
an import error. Kindly explain why this happens.

Python 2.7.1+ (r271:86832, Apr 11 2011, 18:05:24)
[GCC 4.5.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import scipy
>>> from scipy.interpolate import griddata
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: cannot import name griddata
>>> import scipy.interpolate

However, in the scipy cookbook
(http://www.scipy.org/Cookbook/Matplotlib/Gridding_irregularly_spaced_data)
importing griddata is possible.

Thanks,
Bala

From lists at hilboll.de Wed Nov 23 12:21:39 2011
From: lists at hilboll.de (Andreas H.)
Date: Wed, 23 Nov 2011 18:21:39 +0100
Subject: [SciPy-User] importing griddata
Message-ID: <4ECD2BA3.3090301@hilboll.de>

When you look in the docs at

http://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.griddata.html

you see the line

    New in version 0.9.

This should answer your question: your scipy 0.8.0 is simply too old to
have griddata.

Cheers,
Andreas.

Am 23.11.2011 18:12, schrieb Bala subramanian:
[clip]

From deil.christoph at googlemail.com Wed Nov 23 12:54:22 2011
From: deil.christoph at googlemail.com (Christoph Deil)
Date: Wed, 23 Nov 2011 18:54:22 +0100
Subject: [SciPy-User] Numpy / Scipy build / test errors on Mac OS X Lion with Macports
Message-ID: <73CE5D4C-B0C9-4AF9-8E38-46D6F726FE93@googlemail.com>

On Nov 23, 2011, at 3:26 PM, Robert Kern wrote:

>> Would it be helpful if I file tickets with the build and test logs
>> for numpy and scipy using the XCode and Macports compilers, or are
>> these problems well-known and simply too hard to fix?
>
> I don't know what problems you are referring to, so yes, reporting
> them would help.

I tried to install numpy 2.0.0.dev-7f302cc on
Mac OS X Lion 10.7.2 (11C74)
XCode 4.2.1 (4D502)

using these compilers / python:

$ which python; python
/opt/local/bin/python
Python 2.7.2 (default, Nov 23 2011, 11:40:08)
[GCC 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2335.15.00)] on darwin
$ which gcc; gcc --version
/usr/bin/gcc
i686-apple-darwin11-llvm-gcc-4.2 (GCC) 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.1.00)
$ which gfortran; gfortran --version
/opt/local/bin/gfortran
GNU Fortran (GCC) 4.4.6

The build log contains many errors such as:

_configtest.c:1:20: error: endian.h: No such file or directory
_configtest.c:5: error: size of array 'test_array' is negative
_configtest.c:7: error: 'SIZEOF_LONGDOUBLE' undeclared (first use in this function)
_configtest.c:8: error: 'HAVE_DECL_SIGNBIT' undeclared (first use in this function)
_configtest.c:7: error: 'Py_UNICODE_WIDE' undeclared (first use in this function)

but it all starts with this warning:

numpy/core/setup_common.py:86: MismatchCAPIWarning: API mismatch
detected, the C API version numbers have to be updated. Current C api
version is 6, with checksum eb54c77ff4149bab310324cd7c0cb176, but
recorded checksum for C API version 6 in codegen_dir/cversions.txt is
e61d5dc51fa1c6459328266e215d6987. If functions were added in the C API,
you have to update C_API_VERSION in numpy/core/setup_common.py.
  MismatchCAPIWarning)

I tried rm -r and a fresh git clone as suggested here:
http://comments.gmane.org/gmane.comp.python.numeric.general/38033
Same result.

Nevertheless I do get a numpy that works for the most part; running the
full test suite gives only one RuntimeWarning related to the power
function and one failure in polyfit:

/Users/deil/Library/Python/2.7/lib/python/site-packages/numpy/ma/core.py:4778: RuntimeWarning: invalid value encountered in power
  np.power(out, 0.5, out=out, casting='unsafe')

======================================================================
FAIL: Tests polyfit
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/deil/Library/Python/2.7/lib/python/site-packages/numpy/ma/tests/test_extras.py", line 622, in test_polyfit
    assert_almost_equal(a, a_)
  File "/Users/deil/Library/Python/2.7/lib/python/site-packages/numpy/ma/testutils.py", line 155, in assert_almost_equal
    err_msg=err_msg, verbose=verbose)
  File "/Users/deil/Library/Python/2.7/lib/python/site-packages/numpy/ma/testutils.py", line 221, in assert_array_almost_equal
    header='Arrays are not almost equal')
  File "/Users/deil/Library/Python/2.7/lib/python/site-packages/numpy/ma/testutils.py", line 186, in assert_array_compare
    verbose=verbose, header=header)
  File "/Users/deil/Library/Python/2.7/lib/python/site-packages/numpy/testing/utils.py", line 677, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Arrays are not almost equal

(mismatch 100.0%)
 x: array([ 4.25134878,  1.14131297,  0.20519666,  0.01701   ])
 y: array([ 1.9345248 ,  0.49711011,  0.10202554,  0.00928034])

What is the reason for the C API mismatch? I also see it with clang and
on a linux box, so I'm not sure if it's a problem with my installation
or with numpy HEAD. How can I resolve it? Is this the cause for the
following compile errors?

Christoph
From charlesr.harris at gmail.com Wed Nov 23 13:04:54 2011
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Wed, 23 Nov 2011 11:04:54 -0700
Subject: [SciPy-User] Numpy / Scipy build / test errors on Mac OS X Lion with Macports
In-Reply-To: <73CE5D4C-B0C9-4AF9-8E38-46D6F726FE93@googlemail.com>
References: <73CE5D4C-B0C9-4AF9-8E38-46D6F726FE93@googlemail.com>
Message-ID:

On Wed, Nov 23, 2011 at 10:54 AM, Christoph Deil wrote:
[clip]
> ======================================================================
> FAIL: Tests polyfit
> ----------------------------------------------------------------------
[clip]

Don't worry about this one, it comes from Travis changing the unmasked
version of polyfit, which changes the singular values, etc.

Chuck

From cournape at gmail.com Wed Nov 23 13:39:15 2011
From: cournape at gmail.com (David Cournapeau)
Date: Wed, 23 Nov 2011 18:39:15 +0000
Subject: [SciPy-User] Numpy / Scipy build / test errors on Mac OS X Lion with Macports
In-Reply-To: <73CE5D4C-B0C9-4AF9-8E38-46D6F726FE93@googlemail.com>
References: <73CE5D4C-B0C9-4AF9-8E38-46D6F726FE93@googlemail.com>
Message-ID:

On Wed, Nov 23, 2011 at 5:54 PM, Christoph Deil wrote:
[clip]
> The build log contains many errors such as:
> _configtest.c:1:20: error: endian.h: No such file or directory
> _configtest.c:5: error: size of array 'test_array' is negative
[clip]

Those are errors happening in configuration checks; most of them are
expected to fail, so this is nothing to worry about.

David

From cournape at gmail.com Wed Nov 23 16:31:45 2011
From: cournape at gmail.com (David Cournapeau)
Date: Wed, 23 Nov 2011 21:31:45 +0000
Subject: [SciPy-User] Problems installing scipy 0.10.0 using a local installation of numpy 1.6.1
Message-ID:

On Wed, Nov 23, 2011 at 5:11 PM, David Pitchford wrote:
> I am trying to install scipy on a lab computer for research at my
> university. I do not have root access to these machines, so when I
> need to install new python modules I usually build them locally in a
> folder I control (using python setup.py build), then add that folder
> to my PYTHONPATH variable.
[clip]

This may sometimes work, but it is not the right way to do it: you need
to actually install the packages. It is possible to install packages
inside your home directory: if you use the --user option, you don't
even have to set up anything; python will automatically look there:

    python setup.py install --user

cheers,

David

From deil.christoph at googlemail.com Wed Nov 23 18:02:20 2011
From: deil.christoph at googlemail.com (Christoph Deil)
Date: Thu, 24 Nov 2011 00:02:20 +0100
Subject: [SciPy-User] Numpy / Scipy build / test errors on Mac OS X Lion with Macports
References: <73CE5D4C-B0C9-4AF9-8E38-46D6F726FE93@googlemail.com>
Message-ID: <21DA75FE-6191-49EB-AB8E-D8EA68B84E90@googlemail.com>

On Nov 23, 2011, at 7:04 PM, Charles R Harris wrote:
[clip]

Thank you all for the info. I have opened a numpy and a scipy ticket to
keep track:

http://projects.scipy.org/numpy/ticket/1987
http://projects.scipy.org/scipy/ticket/1567
From josef.pktd at gmail.com Wed Nov 23 21:47:29 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Wed, 23 Nov 2011 21:47:29 -0500
Subject: [SciPy-User] rvs and broadcasting
Message-ID:

rvs in the scipy.stats distributions has some nasty broadcasting
behaviour if location or scale are arrays and size is not set to the
same shape:

http://projects.scipy.org/scipy/ticket/1544
(also https://groups.google.com/group/pystatsmodels/browse_thread/thread/e757d73b2a06b962?hl=en )

I was playing with two solutions while I was writing an rvs for the
truncated normal.

1) Broadcast shape parameters, loc and scale; if they are arrays,
produce rvs in that shape, and, if in this case size is not the same
or 1, then raise a ValueError. Essentially:

lower, upper, loc, scale = np.broadcast_arrays(lower, upper, loc, scale)
if (np.size(lower) > 1) and (size != (1,)) and (lower.shape != size):
    raise ValueError('Do you really want this? Then do it yourself.')

2) Broadcast shape parameters, loc and scale; for each of these, create
random variables given by size. The return shape is essentially the
broadcasted shape concatenated with size, for example:

assert_equal(truncnorm_rvs(lower*np.arange(4)[:,None], upper,
                           loc=np.arange(5), scale=1,
                           size=(2,3)).shape,
             (4, 5, 2, 3))

This version is attached.

Any opinions about which version should be preferred?

(As an aside, truncnorm and other distributions with parameter-dependent
support might also have other problems than just the broadcasting of
shape and scale.)

Josef

[attachment: random_truncnorm_1.py]

From mail.to.daniel.platz at googlemail.com Thu Nov 24 09:16:22 2011
From: mail.to.daniel.platz at googlemail.com (Daniel Platz)
Date: Thu, 24 Nov 2011 15:16:22 +0100
Subject: [SciPy-User] numerical integration with square root like singularity
Message-ID:

Hi,

I am stuck with a problem in scipy. I want to numerically integrate a
function with a square-root-like singularity at one end of the
integration interval. The integral has the form

\int_{0}^{1} f(x) * x / sqrt(1-x**2) dx

or alternatively

\int_{0}^{A} f(x) / sqrt(A-x) dx.

Can the quad function in scipy deal with this kind of singularity? I
tried to use the points argument of the quad function, but I still get
warning messages and do not know how much I can trust the results.

Alternatively, I was wondering if I could implement a Chebyshev-Gauss
quadrature myself to lift the singularity. Or is there a way to do this
elegantly using scipy?

I would be very glad about a short answer.

Best regards,
Daniel Platz

From pav at iki.fi Thu Nov 24 09:25:27 2011
From: pav at iki.fi (Pauli Virtanen)
Date: Thu, 24 Nov 2011 15:25:27 +0100
Subject: [SciPy-User] numerical integration with square root like singularity
Message-ID:

24.11.2011 15:16, Daniel Platz wrote:
[clip]
> \int_{0}^{A} f(x) / sqrt(A-x) dx.
>
> Can the quad function in scipy deal with this kind of singularity? I
> tried to use the points argument of the quad function, but I still get
> warning messages and do not know how much I can trust the results.
[clip]

Yes, `quad` supports weight functions of this form (see its `weight`
and `wvar` arguments).
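For the second form above, a call along the following lines should work
(a sketch; the cosine is an arbitrary stand-in for the smooth part f):

import numpy as np
from scipy.integrate import quad

A = 1.0
f = np.cos   # the smooth part of the integrand; an arbitrary example

# With weight='alg' and wvar=(alpha, beta), quad multiplies f by
# (x - a)**alpha * (b - x)**beta internally, so the end-point
# singularity is handled by the routine rather than sampled directly.
result, abserr = quad(f, 0, A, weight='alg', wvar=(0, -0.5))
print result, abserr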
Run scipy.integrate.quad_explain() to find out what to specify as these
arguments.

-- 
Pauli Virtanen

From eraldo.pomponi at gmail.com Thu Nov 24 09:46:30 2011
From: eraldo.pomponi at gmail.com (Eraldo Pomponi)
Date: Thu, 24 Nov 2011 15:46:30 +0100
Subject: [SciPy-User] Integration over Voronoi cells
Message-ID:

Dear Friedrich,

First of all let me thank you for your suggestions. It took a little
bit to get into them, but I still have some BIG doubts.

> Well, AISI, your function is radially symmetric, so the integration
> over 1/r can be done analytically within a small disk where the
> Gaussian is approximately one.

It is not so clear to me why the Gaussian is approximately one (it is
true just when r is small enough).

[clip]

Let me rewrite the function (it is just the kernel, not the full
function):

K(r,t) = 1/(2*pi*D*r) * exp[-r^2/(D*t)]          [1]

This function is rotationally invariant, so a clever way to integrate
it is to consider a disk of radius R0. The perimeter of this disk is

l = 2*pi*R0          [2]

The term 1/(2*pi*D*r) gives a constant contribution along the path of
eq. 2, equal to 1/D, so what is left to do is to integrate the term

exp[-r^2/(D*t)]

in the interval [R0/D, R0/D + delta], multiply by the length of the
chosen path, i.e. eq. 2, and sum on a 1D grid in the interval [0, R0].

------>>>>>   Is it correct?   <<<<<-----------

The term exp[-r^2/(D*t)] can be analytically integrated (removing t for
convenience):

g(r) = sqrt(pi)*erf(r*sqrt(1/D))/(2*sqrt(1/D))   [3]

Its definite integral over [R0/D, R0/D + delta], multiplied by the
length of the chosen path, gives us

g'(r) = 2*pi*r * [ g(r/D + delta) - g(r/D) ]     [4]

Summing on the grid k = 1..R0, we obtain the integral of eq. 1 over
[0, R0]:

integral([1]) = sum([g'(k) for k in np.arange(0, R0, 0.001)])

What do you think?

[clip]
> P.S.: You divide the triangles resulting from the centre point and the
> boundary lines into two parts, separated by the closest point on the
> boundary, which always lies within the bounds of that boundary line.

I didn't understand what you mean here. Sorry.

> Then you just have a perimeter length which is linear in r until you
> touch the line, and after that it'll be a little more complicated. I
> think you'll figure it out.

Yes, I hope so. When the path in eq. 2 is no longer circular, the
integration along it is more complicated. It is a bit difficult to
figure out how to do that programmatically, but that is the next
problem.

> P.P.S.: You're really lucky that your kernel is 1/r and not 1/r^2, iirc ;-)

I guess so, but I think that I will fully understand this sentence only
when my doubts have been cleared up. Thanks a lot for your help.

Cheers,
Eraldo

From charlesr.harris at gmail.com Thu Nov 24 09:48:58 2011
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Thu, 24 Nov 2011 07:48:58 -0700
Subject: [SciPy-User] numerical integration with square root like singularity
Message-ID:

On Thu, Nov 24, 2011 at 7:16 AM, Daniel Platz wrote:
[clip]
> \int_{0}^{A} f(x) / sqrt(A-x) dx.
[clip]

The substitution y^2 = A - x in the second form will eliminate the
singularity: with dx = -2y dy it becomes \int_{0}^{sqrt(A)} 2 f(A - y^2) dy,
whose integrand is smooth.

Chuck

From elofgren at email.unc.edu Thu Nov 24 02:56:36 2011
From: elofgren at email.unc.edu (Lofgren, Eric)
Date: Thu, 24 Nov 2011 07:56:36 +0000
Subject: [SciPy-User] Problem with ODEINT
Message-ID: <39807FBF-D746-42AF-A922-1AA5B1D35007@unc.edu>

I'm attempting to solve a fairly simple compartmental SIR model
(http://en.wikipedia.org/wiki/SIR_Model) using SciPy, and particularly
odeint. I've solved things like this before using this code, and it has
always worked, so I'm frankly quite puzzled. The code is as follows:

---

#Python implementation of continuous SIR model

#Import necessary modules
import numpy as np
import scipy.integrate as spi

#Solving the differential equation. Solves over t for initial conditions PopIn

def eq_system(startState,t):
    '''Defining SIR System of Equations'''
    beta = startState[3]
    gamma = startState[4]
    #Creating an array of equations
    Eqs = np.zeros((3))
    Eqs[0] = -beta * startState[0]*startState[1]
    Eqs[1] = beta * startState[0]*startState[1] - gamma*startState[1]
    Eqs[2] = gamma*startState[1]
    return Eqs

def model_solve(t):
    '''Stores all model parameters, runs ODE solver and tracks results'''
    #Setting up how long the simulation will run
    t_start = 1
    t_end = t
    t_step = 0.02
    t_interval = np.arange(t_start, t_end, t_step)
    n_steps = (t_end - t_start)/t_step
    #Setting up initial population state
    #n_params is the number of parameters (beta and gamma in this case)
    S0 = 0.99999
    I0 = 0.00001
    R0 = 0.0
    beta = 0.50
    gamma = 1/10.
    n_params = 2
    startPop = (S0, I0, R0, beta, gamma)
    #Create an array the size of the ODE solver output with the parameter values
    params = np.zeros((n_steps, n_params))
    params[:,0] = beta
    params[:,1] = gamma
    timer = np.arange(n_steps).reshape(n_steps, 1)
    SIR = spi.odeint(eq_system, startPop, t_interval)
    #Glue together ODE model output and parameter values in one big array
    output = np.hstack((timer, SIR, params))
    return output

def MonteCarlo_SIR(runs):
    holder = [ ]
    for i in range(runs):
        holder.append(model_solve(100))
    results = np.hstack(holder)
    return results

testing = MonteCarlo_SIR(10)

print testing
print testing.shape

---

Ignore for a moment that there's absolutely no reason for a Monte Carlo
simulation of a system made up of nothing but constants. And ignore the
poorly documented code - this is essentially a running musing right
now, and wasn't intended to see the light of day until I hit this
problem.

When I run this code, sometimes it just works, and I get a nice tidy
array of results. But sometimes, errors like the following crop up:

lsoda--  warning..internal t (=r1) and h (=r2) are
      such that in the machine, t + h = t on the next step
      (h = step size). solver will continue anyway
     In above,  R1 =  0.1000000000000E+01   R2 =  0.0000000000000E+00
intdy--  t (=r1) illegal
     In above message,  R1 =  0.1020000000000E+01
     t not in interval tcur - hu (= r1) to tcur (=r2)
     In above,  R1 =  0.1000000000000E+01   R2 =  0.1000000000000E+01
intdy--  t (=r1) illegal
     In above message,  R1 =  0.1040000000000E+01
     t not in interval tcur - hu (= r1) to tcur (=r2)
     In above,  R1 =  0.1000000000000E+01   R2 =  0.1000000000000E+01
lsoda--  trouble from intdy. itask = i1, tout = r1
     In above message,  I1 =  1
     In above message,  R1 =  0.1040000000000E+01
Illegal input detected (internal error).
Run with full_output = 1 to get quantitative information.

lsoda--  at t (=r1) and step size h (=r2), the
      corrector convergence failed repeatedly
      or with abs(h) = hmin
     In above,  R1 =  0.6556352055692E+01   R2 =  0.2169078563087E-05
Repeated convergence failures (perhaps bad Jacobian or tolerances).
Run with full_output = 1 to get quantitative information.

lsoda--  warning..internal t (=r1) and h (=r2) are
      such that in the machine, t + h = t on the next step
      (h = step size). solver will continue anyway
     In above,  R1 =  0.1000000000000E+01   R2 =  0.5798211925922E-81
lsoda--  warning..internal t (=r1) and h (=r2) are
      such that in the machine, t + h = t on the next step
      (h = step size). solver will continue anyway
     In above,  R1 =  0.1000000000000E+01   R2 =  0.8614947121323E-85
...
lsoda--  warning..internal t (=r1) and h (=r2) are
      such that in the machine, t + h = t on the next step
      (h = step size). solver will continue anyway
     In above,  R1 =  0.1000000000000E+01   R2 =  0.1722989424265E-83
lsoda--  above warning has been issued i1 times.
      it will not be issued again for this problem
     In above message,  I1 =  10

I believe those three are a full tour of the error messages that are
cropping up. This definitely occurs a minority of the time, but it's
common enough that 4 out of 10 runs of the code above produce at least
one error message like that. It seems that the problem is the step
size getting small enough that it's beyond the precision of the
machine to deal with, but passing an argument like hmin = 0.0000001,
which should be well within range, doesn't seem to help.

Any idea what's going on? The end goal of this is a somewhat more
complex set of equations, with the parameters (i.e.
beta and gamma) drawn from a distribution, but I'm somewhat concerned
about the trouble the solver seems to have with what should be a pretty
straightforward case. Any suggestions on how to fix this? Or is there
another type of ODE solver I should be using? I confess my knowledge of
how these things work is...limited.

Thanks in advance,

Eric

From fccoelho at gmail.com Thu Nov 24 10:33:57 2011
From: fccoelho at gmail.com (Flavio Coelho)
Date: Thu, 24 Nov 2011 13:33:57 -0200
Subject: [SciPy-User] Problem with ODEINT
In-Reply-To: <39807FBF-D746-42AF-A922-1AA5B1D35007@unc.edu>
References: <39807FBF-D746-42AF-A922-1AA5B1D35007@unc.edu>
Message-ID:

Hi,

I have seen this before with odeint. It is probably related to the
parameterization of your model, which is leading to very large
derivatives. This is causing the numerical solver to reduce the step
size (h) to try to resolve the dynamics. h is apparently hitting its
default lower limit (something in the vicinity of 1E-85). You can try
to set a lower hmin when calling odeint, but it would be best to go
over your parameter values and check that they are reasonable.

Disclaimer: I have not run your script nor checked your parameters,
just took a look at the error messages.

good luck,

Flávio

On Thu, Nov 24, 2011 at 05:56, Lofgren, Eric wrote:
[clip]

-- 
Flávio Codeço Coelho
================
+55(21) 3799-5567
Professor
Escola de Matemática Aplicada
Fundação Getúlio Vargas
Rio de Janeiro - RJ
Brasil

From friedrichromstedt at gmail.com Thu Nov 24 10:57:20 2011
From: friedrichromstedt at gmail.com (Friedrich Romstedt)
Date: Thu, 24 Nov 2011 16:57:20 +0100
Subject: [SciPy-User] Integration over Voronoi cells
Message-ID:

2011/11/24 Eraldo Pomponi :
> Dear Friedrich,
> First of all let me thank you for your suggestions. It took a little
> bit to get into them, but I still have some BIG doubts.

:-) Apparently I don't see any doubts on your side, rather a way to go.

> It is not so clear to me why the Gaussian is approximately one (it is
> true just when r is small enough).

"r is small enough" = "within a small disk" :-)

> Let me rewrite the function (it is just the kernel, not the full
> function):

That's a pity. Still you can average your operand undergoing the
kernel multiplication just on the path length. That's what your
integral does.

> The term 1/(2*pi*D*r) gives a constant contribution along the path of
> eq. 2, equal to 1/D, so what is left to do is to integrate the term
> exp[-r^2/(D*t)]

Exactly. You got this. :-)

> in the interval [R0/D, R0/D + delta], multiply by the length of the
> chosen path, i.e. eq. 2, and sum on a 1D grid in the interval [0, R0].
> ------>>>>>   Is it correct?   <<<<<-----------

Not quite. But nearly. Let's rewrite the remaining Gaussian like this:

G = exp(-r^2/(2 k^2))  where  (2 k^2) = Dt,

just to bring it into the well-known standard Gaussian form. The std
deviation of your Gaussian (the 1-sigma range) is then k. So what you
get is:

I(R_1) = \int_0^{R_1} G(r) (1/D) \mathrm{d}r .

The integral over the angle along the circle line is already consumed!
It walked into the 1/D term!

> The term exp[-r^2/(D*t)] can be analytically integrated (removing t
> for convenience):
> g(r) = sqrt(pi)*erf(r*sqrt(1/D))/(2*sqrt(1/D))   [3]

Well, afair, there is no closed form for the integral? Prove me wrong.
I think erf (the error function) is just the "name" for what the
integral is, no? When we cannot write it down in a "closed form" we
just invent a new primitive function to make it closed by definition.
As I said, a closed form for erf just does not pop into my mind. Of
course, you have some points of erf(x) defined, when you say erf is
the integral starting at -oo. In fact, I(R_1) "is" just erf; you can
read it off the equation.
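A quick numerical check of this (a sketch; D, t and R_1 are made-up
values; the closed form follows from the substitution u = r/sqrt(D*t)):

import numpy as np
from scipy.integrate import quad
from scipy.special import erf

D, t, R1 = 1.0, 1.0, 2.0   # made-up values

# I(R_1) = \int_0^{R_1} exp(-r**2/(D*t)) / D dr, written via erf:
closed = np.sqrt(np.pi * D * t) / (2.0 * D) * erf(R1 / np.sqrt(D * t))

# Same integral done numerically; the two should agree.
numeric, abserr = quad(lambda r: np.exp(-r**2 / (D * t)) / D, 0.0, R1)
print closed, numeric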
In general, one more comment, it's IMHO not a good idea to just suppress parameters like t, it simplifies too much at the cost of unit inconstency. A physicist speaking :-) Better introduce another quantity and avoid confusion. Then you can safely say "for lim k to 0" and stuff. ... I'm not sure on the erf Eq. (3), but it's a straightforward linear variable substitution, speaking algebra. Just pull out the 1/D, transform r s.t. k = 1, and I think you should be able to apply erf. > Its defined integral in [R0/D,R0/D+delta], ?multiplied by the length of the Still, I have no clue why you have a /D in the span. > choose path, > give us: > g'(r) = 2*pi*r * [ g(r/D + delta) - g(r/D)] ? ?[4] > summing in the grid k=1..R0, we obtain the integral of ?eq.1 in [0,R0]: > integral([1]) = ?sum([g'(k) for k in np.arange(0,R0,0.001)] ) > What do you think ? Looks rather wrong. There might be some kernel of truth aside of that you did some integration I don't have the inclination to find the error in. That's up to you. But the 2 pi r should be consumed by the angular integral, you just integrate a constant! (The constant is 1/D) (And you integrate it over a radius quantity.) I would consider the following approach to understand the problem: 1. Figure out how to just integrate the kernel, without the "kerneled" function. And without trinangular bounds, just on the full circle. 2. Then introduce an angular, but constant bound on the angular integral (i.e. not \int_0^{2 \pi} but \int_Q where Q is a subset of [0, 2 pi]). 3. Next, you will be able to vary the Q with r, i.e. Q(r). You will see that the full integral (withour kerneled function) is just the integral over the measure length of Q with r. 4. And then, you can safely introduce the kerneled function (is there a term for "kerneled"?). AISI, it's just the average of the kerneled function over Q, integrated with r. AISI, from (4.), the full integral *is just the average of the kerneled function, with a weighting s.t. all circles have in net the same weight.* It's stunningly simple. Is it true? And forget about the small disk. You don't need it at all, your kernel cancels nicely with the r factor from the integration (path length). >> All you need to do, AISI atm, is to calculate a function that gives >> you the perimeter length of the circle which is inside your Voronoi >> cell. ?This is zero at r = 0 and will stay bounded for all finite >> radii. ?So integration should be fairly straightforward. ?You could >> even just do a sum over a 1D grid for the radius, since the function >> varies slowly, this should be fast and easy (both in runtime as in >> coding time). ?You might not even need scipy for this. >> >> Additionally, the Gaussian just introduces a suppression of radii >> which are farther outside. ?It makes the complete function bounded >> even for an infinitely large Voronoi cell :-) >> >> The kernel 1/r makes things easy and convenient, instead of making >> things troublesome: >> >> 1) ?because it exhibits rotational invariance; >> 2) ?because it converges nicely in 2D for an integral over a circle line >> >> I might overlook something obvious. ?Is the function you gave really >> the full function or only the "kernel", the weighting function? >> >> Friedrich >> >> P.S.: You divide your tringles resulting from the centre point and the >> boundary lines into two parts, separated by the closest point on the >> boundary, which is always in the bounds of that boundary line. > > I didn't understood what you mean here. 
Sorry

>
>> you just have a perimeter length which is linear in r until you touch
>> the line, and then it'll be a little more complicated. I think you'll
>> figure it out.
>
> Yes, I hope so. When the path in eq. 2 is no longer circular, the
> integration along it is more complicated. A bit difficult to figure out
> how to do that programmatically, but this is the next problem.

See above. Skip for now, I'd say. :-)

>> P.P.S.: You're really lucky that your kernel is 1/r and not 1/r^2, iirc
>> ;-)
>
> I guess so, but I think that I will fully understand this sentence only
> when my doubts are cleared. Thanks a lot for your help. :-)

Good luck, and please don't believe anything I say, just believe what you
figure out yourself! Happy sciencing!

Friedrich :-)

P.S.: What's your kerneled function? I.e. does it have some symmetries (or
not)? And why integrate over Voronoi cells? Since the integral, thanks to
the Gaussian, converges also on the full domain, I believe if the Voronoi
cells are introduced artificially you can drop them and do the full
integral without the hassle of tracking the boundaries. Just a shot in the
dark.

From warren.weckesser at enthought.com Thu Nov 24 11:07:11 2011
From: warren.weckesser at enthought.com (Warren Weckesser)
Date: Thu, 24 Nov 2011 10:07:11 -0600
Subject: [SciPy-User] Problem with ODEINT
In-Reply-To: <39807FBF-D746-42AF-A922-1AA5B1D35007@unc.edu>
References: <39807FBF-D746-42AF-A922-1AA5B1D35007@unc.edu>
Message-ID:

On Thu, Nov 24, 2011 at 1:56 AM, Lofgren, Eric wrote:

> I'm attempting to solve a fairly simple compartmental SIR model (
> http://en.wikipedia.org/wiki/SIR_Model) using SciPy and particularly
> odeint. I've solved things like this before using this code, and they've
> always worked, so I'm frankly quite puzzled. The code is as follows:
>
> ---
>
> #Python implementation of continuous SIR model
>
> #Import necessary modules
> import numpy as np
> import scipy.integrate as spi
>
> #Solving the differential equation. Solves over t for initial conditions
> PopIn
>
> def eq_system(startState, t):
>     '''Defining SIR System of Equations'''
>     beta = startState[3]
>     gamma = startState[4]
>     #Creating an array of equations
>     Eqs = np.zeros((3))
>     Eqs[0] = -beta * startState[0]*startState[1]
>     Eqs[1] = beta * startState[0]*startState[1] - gamma*startState[1]
>     Eqs[2] = gamma*startState[1]
>     return Eqs
>
> def model_solve(t):
>     '''Stores all model parameters, runs ODE solver and tracks results'''
>     #Setting up how long the simulation will run
>     t_start = 1
>     t_end = t
>     t_step = 0.02
>     t_interval = np.arange(t_start, t_end, t_step)
>     n_steps = (t_end-t_start)/t_step
>     #Setting up initial population state
>     #n_params is the number of parameters (beta and gamma in this case)
>     S0 = 0.99999
>     I0 = 0.00001
>     R0 = 0.0
>     beta = 0.50
>     gamma = 1/10.
>     n_params = 2
>     startPop = (S0, I0, R0, beta, gamma)
>     #Create an array the size of the ODE solver output with the
>     #parameter values
>     params = np.zeros((n_steps, n_params))
>     params[:,0] = beta
>     params[:,1] = gamma
>     timer = np.arange(n_steps).reshape(n_steps, 1)
>     SIR = spi.odeint(eq_system, startPop, t_interval)
>     #Glue together ODE model output and parameter values in one big array
>     output = np.hstack((timer, SIR, params))
>     return output
>
> def MonteCarlo_SIR(runs):
>     holder = []
>     for i in range(runs):
>         holder.append(model_solve(100))
>     results = np.hstack(holder)
>     return results
>
> testing = MonteCarlo_SIR(10)
>
> print testing
> print testing.shape
>
> ---
>
> Ignore for a moment that there's absolutely no reason for a Monte Carlo
> simulation of a system made up of nothing but constants. And the poorly
> documented code - this is essentially a running musing right now, and
> wasn't intended to see the light of day until I hit this problem.
>
> When I run this code, sometimes it just works, and I get a nice tidy
> array of results. But sometimes, errors like the following crop up:
>
> lsoda-- warning..internal t (=r1) and h (=r2) are
> such that in the machine, t + h = t on the next step
> (h = step size). solver will continue anyway
> In above, R1 = 0.1000000000000E+01 R2 = 0.0000000000000E+00
> intdy-- t (=r1) illegal
> In above message, R1 = 0.1020000000000E+01
> t not in interval tcur - hu (= r1) to tcur (=r2)
> In above, R1 = 0.1000000000000E+01 R2 = 0.1000000000000E+01
> intdy-- t (=r1) illegal
> In above message, R1 = 0.1040000000000E+01
> t not in interval tcur - hu (= r1) to tcur (=r2)
> In above, R1 = 0.1000000000000E+01 R2 = 0.1000000000000E+01
> lsoda-- trouble from intdy. itask = i1, tout = r1
> In above message, I1 = 1
> In above message, R1 = 0.1040000000000E+01
> Illegal input detected (internal error).
> Run with full_output = 1 to get quantitative information.
>
>
> lsoda-- at t (=r1) and step size h (=r2), the
> corrector convergence failed repeatedly
> or with abs(h) = hmin
> In above, R1 = 0.6556352055692E+01 R2 = 0.2169078563087E-05
> Repeated convergence failures (perhaps bad Jacobian or tolerances).
> Run with full_output = 1 to get quantitative information.
>
>
> lsoda-- warning..internal t (=r1) and h (=r2) are
> such that in the machine, t + h = t on the next step
> (h = step size). solver will continue anyway
> In above, R1 = 0.1000000000000E+01 R2 = 0.5798211925922E-81
> lsoda-- warning..internal t (=r1) and h (=r2) are
> such that in the machine, t + h = t on the next step
> (h = step size). solver will continue anyway
> In above, R1 = 0.1000000000000E+01 R2 = 0.8614947121323E-85
> ...
> lsoda-- warning..internal t (=r1) and h (=r2) are
> such that in the machine, t + h = t on the next step
> (h = step size). solver will continue anyway
> In above, R1 = 0.1000000000000E+01 R2 = 0.1722989424265E-83
> lsoda-- above warning has been issued i1 times.
> it will not be issued again for this problem
> In above message, I1 = 10
>
> I believe those three are a full tour of the error messages that are
> cropping up. This definitely occurs a minority of the time, but it's
> common enough that 4 out of 10 runs of the code above produce at least
> one error message like that. It seems that the problem is the step size
> getting small enough that it's beyond the precision of the machine to
> deal with, but passing an argument of something like hmin = 0.0000001 or
> something that should be well within range doesn't seem to help.
> Any idea what's going on? The end goal of this is a somewhat more complex
> set of equations, and the parameters (i.e. beta and gamma) to be drawn
> from a distribution, but I'm somewhat concerned about the trouble the
> solver is seeming to have with what should be a pretty straightforward
> case. Any suggestions on how to fix this? Or is there another type of ODE
> solver I should be using? I confess my knowledge of how these things work
> is... limited.
>
> Thanks in advance,
>
> Eric

Eric,

You have given odeint an initial condition of length 5, but the function
that defines your system is returning a vector of only length 3. Don't do
that.

Instead of including the parameters beta and gamma in the initial
condition, you can make them explicit arguments. For example:

def eq_system(state, t, beta, gamma):
    '''Defining SIR System of Equations'''
    Eqs = np.zeros((3))
    Eqs[0] = -beta * state[0]*state[1]
    Eqs[1] = beta * state[0]*state[1] - gamma*state[1]
    Eqs[2] = gamma*state[1]
    return Eqs

Then call odeint like this:

startPop = (S0, I0, R0)
SIR = spi.odeint(eq_system, startPop, t_interval, args=(beta, gamma))

When I make these changes to your script, I don't get any errors, and it
runs much faster.

There are additional examples of using parameters with odeint in the SciPy
Cookbook:
http://www.scipy.org/Cookbook/CoupledSpringMassSystem
http://www.scipy.org/Cookbook/KdV

If you really must include the parameters in the state vector, then you
must also define differential equations for them in your system (i.e.
d(beta)/dt = 0, etc.):

def eq_system(state, t):
    '''Defining SIR System of Equations'''
    beta = state[3]
    gamma = state[4]
    #Creating an array of equations
    Eqs = np.zeros(5)
    Eqs[0] = -beta * state[0]*state[1]
    Eqs[1] = beta * state[0]*state[1] - gamma*state[1]
    Eqs[2] = gamma*state[1]
    Eqs[3] = 0
    Eqs[4] = 0
    return Eqs

Warren
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From friedrichromstedt at gmail.com Thu Nov 24 14:50:43 2011
From: friedrichromstedt at gmail.com (Friedrich Romstedt)
Date: Thu, 24 Nov 2011 20:50:43 +0100
Subject: [SciPy-User] rvs and broadcasting
In-Reply-To: References: Message-ID:

2011/11/24 :
> 1) broadcast shape parameters, loc and scale, if they are arrays
> produce rvs in that shape, and, if in this case size is not the same
> or 1, then raise a ValueError
> essentially
>     lower, upper, loc, scale = np.broadcast_arrays(lower, upper, loc, scale)
>     if (np.size(lower) > 1) and (size != (1,)) and (lower.shape != size):
>         raise ValueError('Do you really want this? Then do it yourself.')
>
> 2) broadcast shape parameters, loc and scale, for each of these
> create random variables given by size; the return shape is essentially
> the broadcasted shape concatenated with size, for example
>
> assert_equal(truncnorm_rvs(lower*np.arange(4)[:,None], upper,
>                            loc=np.arange(5), scale=1, size=(2,3)).shape,
>              (4, 5, 2, 3))
>
> this version is attached.
>
> Any opinions about which version should be preferred?
>
> (As an aside, truncnorm and other distributions with parameter-dependent
> support might also have other problems than just the broadcasting of
> shape and scale.)

I don't know the context but the first solution is *a lot* more readable.
I'm not even interested in understanding the cryptic second one.

I don't know if all of the checks done in the ``if`` scope are really
necessary, but I just believe you.
Alternatively, since you essentially want to check if things are
broadcastable AISI, why not just broadcast them with a result array of
shape ``size`` you need to generate anyway along the way, broadcasting all
three and catching the ``ValueError: shape mismatch`` exception?

Broadcasting is cheap since it plays stride tricks IIRC:

>>> x = numpy.asarray([1, 2])
>>> x2, y = numpy.broadcast_arrays(x, [[1, 2], [3, 4]])
>>> x2.strides
(0, 4)

Would maybe make the algorithm more general?

There might be some side effects I'm overlooking atm. I'm not paying too
much attention, since I already made my point that I would prefer the
first one, and my suggestion is not crucial since you'll test it anyway.

Friedrich

P.S.: Maybe there is a numpy function around to just test compatibility
instead of testing + broadcasting at the same time in direct succession?

From josef.pktd at gmail.com Thu Nov 24 15:21:52 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Thu, 24 Nov 2011 15:21:52 -0500
Subject: [SciPy-User] rvs and broadcasting
In-Reply-To: References: Message-ID:

On Thu, Nov 24, 2011 at 2:50 PM, Friedrich Romstedt wrote:
> 2011/11/24 :
>> 1) broadcast shape parameters, loc and scale, if they are arrays
>> produce rvs in that shape, and, if in this case size is not the same
>> or 1, then raise a ValueError
>> essentially
>>     lower, upper, loc, scale = np.broadcast_arrays(lower, upper, loc, scale)
>>     if (np.size(lower) > 1) and (size != (1,)) and (lower.shape != size):
>>         raise ValueError('Do you really want this? Then do it yourself.')
>>
>> 2) broadcast shape parameters, loc and scale, for each of these
>> create random variables given by size; the return shape is essentially
>> the broadcasted shape concatenated with size, for example
>>
>> assert_equal(truncnorm_rvs(lower*np.arange(4)[:,None], upper,
>>                            loc=np.arange(5), scale=1, size=(2,3)).shape,
>>              (4, 5, 2, 3))
>>
>> this version is attached.
>>
>> Any opinions about which version should be preferred?
>>
>> (As an aside, truncnorm and other distributions with parameter-dependent
>> support might also have other problems than just the broadcasting of
>> shape and scale.)
>
> I don't know the context but the first solution is *a lot* more
> readable. I'm not even interested in understanding the cryptic second
> one.

The first case is almost what numpy random does. When it allows array
parameters, then size has to correspond to the parameter shape. size is
just a nuisance parameter in this case, and is only used for double
checking. The simple fix to scipy's rvs would then be just to raise an
exception if size != broadcasted shape.

The second case is not so difficult to understand: for each parameter
vector, generate random variables given by size. size is not redundant,
and I have vectorized random sampling

>>> rvs = truncnorm_rvs(lower, upper, loc=np.arange(5), scale=1, size=4)
>>> rvs
array([[-0.37710312,  0.66820212,  1.00771998, -0.15534072],
       [ 0.81969069,  0.05458267,  0.57364918,  2.87949887],
       [ 3.40269956,  1.59968417,  2.56405538,  0.78657868],
       [ 2.06382416,  4.3537613 ,  5.0337632 ,  2.37801841],
       [ 4.32319984,  0.6052688 ,  3.72383011,  4.11273451]])
>>> rvs.mean(-1)
array([ 0.28586957,  1.08185535,  2.08825445,  3.45734177,  3.19125831])

> I don't know if all of the checks done in the ``if`` scope are really
> necessary, but I just believe you.
> Alternatively, since you essentially want to check if things are
> broadcastable AISI, why not just broadcast them with a result array
> of shape ``size`` you need to generate anyway along the way,
> broadcasting all three and catching the ``ValueError: shape mismatch``
> exception?

I would still have to broadcast twice, once to get the shape of the
parameters and once to include the additional "size" dimension.

To your PS: I don't know of a numpy function that just calculates the
broadcasted shape without actually doing the broadcasting.

> Broadcasting is cheap since it plays stride tricks IIRC:
>
>>>> x = numpy.asarray([1, 2])
>>>> x2, y = numpy.broadcast_arrays(x, [[1, 2], [3, 4]])
>>>> x2.strides
> (0, 4)
>
> Would maybe make the algorithm more general?

I don't see where it could be more general, but there might be ways to
make the shape handling more straightforward. size is given as a shape
tuple, so I cannot use broadcast_arrays for it without actually creating
the array.

thanks,

Josef

> There might be some side effects I'm overlooking atm. I'm not
> paying too much attention, since I already made my point that I would
> prefer the first one, and my suggestion is not crucial since you'll
> test it anyway.
>
> Friedrich
>
> P.S.: Maybe there is a numpy function around to just test
> compatibility instead of testing + broadcasting at the same time in
> direct succession?
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

From jeremy at jeremysanders.net Thu Nov 24 15:45:26 2011
From: jeremy at jeremysanders.net (Jeremy Sanders)
Date: Thu, 24 Nov 2011 20:45:26 +0000
Subject: [SciPy-User] ANN: Veusz 1.14, a scientific plotting package
Message-ID:

Veusz 1.14
----------
Velvet Ember Under Sky Zenith
-----------------------------
http://home.gna.org/veusz/

Copyright (C) 2003-2011 Jeremy Sanders and contributors.
Licenced under the GPL (version 2 or greater).

Veusz is a Qt4 based scientific plotting package. It is written in Python,
using PyQt4 for display and user-interfaces, and numpy for handling the
numeric data. Veusz is designed to produce publication-ready
Postscript/PDF/SVG output. The user interface aims to be simple,
consistent and powerful.

Veusz provides a GUI, command line, embedding and scripting interface
(based on Python) to its plotting facilities. It also allows for
manipulation and editing of datasets. Data can be captured from external
sources such as Internet sockets or other programs.
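As a quick, purely illustrative taste of the embedding interface (the
exact calls below are written from memory, so treat them as assumptions
and see the documentation for the real details):

import numpy
import veusz.embed as veusz

g = veusz.Embedded('example window')
g.SetData('x', numpy.arange(20))
g.SetData('y', numpy.arange(20)**2)
g.To(g.Add('page'))
g.To(g.Add('graph'))
g.Add('xy', xData='x', yData='y')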
Changes in 1.14: * Added interactive tutorial * Points in graphs can be colored depending on another dataset and the scale shown in a colorbar widget * Improved CSV import - better data type detection - locale-specific numeric and date formats - single/multiple/none header modes - option to skip lines at top of file - better handling of missing values * Data can be imported from clipboard * Substantially reduced size of output SVG files * In standard data import, descriptor can be left blank to generate dataset names colX * Axis plotting range can be interactively manipulated * If axis is in date-time format, show and allow the min and max values to be in date-time format * ImageFile widget can have image data embedded in document file * Fit widget can update the fit parameters and fit quality to a label widget * Allow editing of 2D datasets in data edit dialog * Add copy and paste dataset command to dataset browser context menu Minor and API changes: * Examples added to help menu * Picker shows date values as dates * Allow descriptor statement in standard data files after a comment character, e.g. "#descriptor x y" * Added some further color maps * Draw key symbols for vector field widget * Import plugin changes - Register classes rather than instances (backward compatibility is retained) - Plugins can return constants and functions (see Constant and Function types) - Add DatasetDateTime for returning date-time datasets * Custom definitions - Add RemoveCustom API to remove custom definitions - AddCustom API can specify order where custom definition is added * C++ code to speed up plotting points of different sizes / colors * Expand files by default in data navigator window * Select created datasets in data edit dialog * Tooltip wrapping used in data navigator window * Grid lines are dropped if they overlap with edge of graph Bug fixes * Fix initial extension in export dialog * Fix crash on hiding pages * Fixed validation for numeric values * Position of grid lines in perpendicular direction for non default positions * Catch errors in example import plugin * Fix crash for non existent key symbols * Fix crash when mismatch of dataset sizes when combining 1D datasets to make 2D dataset Features of package: * X-Y plots (with errorbars) * Line and function plots * Contour plots * Images (with colour mappings and colorbars) * Stepped plots (for histograms) * Bar graphs * Vector field plots * Box plots * Polar plots * Ternary plots * Plotting dates * Fitting functions to data * Stacked plots and arrays of plots * Plot keys * Plot labels * Shapes and arrows on plots * LaTeX-like formatting for text * EPS/PDF/PNG/SVG/EMF export * Scripting interface * Dataset creation/manipulation * Embed Veusz within other programs * Text, CSV, FITS, NPY/NPZ, QDP, binary and user-plugin importing * Data can be captured from external sources * User defined functions, constants and can import external Python functions * Plugin interface to allow user to write or load code to - import data using new formats - make new datasets, optionally linked to existing datasets - arbitrarily manipulate the document * Data picker * Interactive tutorial * Multithreaded rendering Requirements for source install: Python (2.4 or greater required) http://www.python.org/ Qt >= 4.4 (free edition) http://www.trolltech.com/products/qt/ PyQt >= 4.3 (SIP is required to be installed first) http://www.riverbankcomputing.co.uk/pyqt/ http://www.riverbankcomputing.co.uk/sip/ numpy >= 1.0 http://numpy.scipy.org/ Optional: Microsoft Core Fonts 
(recommended for nice output)
http://corefonts.sourceforge.net/
PyFITS >= 1.1 (optional for FITS import)
http://www.stsci.edu/resources/software_hardware/pyfits
pyemf >= 2.0.0 (optional for EMF export)
http://pyemf.sourceforge.net/
PyMinuit >= 1.1.2 (optional improved fitting)
http://code.google.com/p/pyminuit/
For EMF and better SVG export, PyQt >= 4.6 or better is required, to fix a
bug in the C++ wrapping

For documentation on using Veusz, see the "Documents" directory. The
manual is in PDF, HTML and text format (generated from docbook). The
examples are also useful documentation. Please also see and contribute to
the Veusz wiki: http://barmag.net/veusz-wiki/

Issues with the current version:

* Some recent versions of PyQt/SIP will cause crashes when exporting SVG
files. Update to 4.7.4 (if released) or a recent snapshot to solve this
problem.

If you enjoy using Veusz, we would love to hear from you. Please join the
mailing lists at https://gna.org/mail/?group=veusz to discuss new features
or if you'd like to contribute code. The latest code can always be found
in the Git repository at https://github.com/jeremysanders/veusz.git.

From lanceboyle at qwest.net Fri Nov 25 04:01:32 2011
From: lanceboyle at qwest.net (Jerry)
Date: Fri, 25 Nov 2011 02:01:32 -0700
Subject: [SciPy-User] ANN: Veusz 1.14, a scientific plotting package
In-Reply-To: References: Message-ID: <20A78005-D464-4B3F-97EE-F0B5728F2830@qwest.net>

On Nov 24, 2011, at 1:45 PM, Jeremy Sanders wrote:

> Veusz 1.14
>
> Requirements for source install:
> Python (2.4 or greater required)
> http://www.python.org/
> Qt >= 4.4 (free edition)
> http://www.trolltech.com/products/qt/
> PyQt >= 4.3 (SIP is required to be installed first)
> http://www.riverbankcomputing.co.uk/pyqt/
> http://www.riverbankcomputing.co.uk/sip/

Oops. "The page that you are trying to access is not available"

Jerry

From jeremy at jeremysanders.net Fri Nov 25 04:35:13 2011
From: jeremy at jeremysanders.net (Jeremy Sanders)
Date: Fri, 25 Nov 2011 09:35:13 +0000
Subject: [SciPy-User] ANN: Veusz 1.14, a scientific plotting package
References: <20A78005-D464-4B3F-97EE-F0B5728F2830@qwest.net>
Message-ID:

Jerry wrote:
> Oops. "The page that you are trying to access is not available"

Thanks - I'll fix those links.

Jeremy

From elofgren at email.unc.edu Thu Nov 24 11:54:34 2011
From: elofgren at email.unc.edu (Lofgren, Eric)
Date: Thu, 24 Nov 2011 16:54:34 +0000
Subject: [SciPy-User] Problem with ODEINT
In-Reply-To: References: Message-ID:

I checked the parameters, and they seem to both be reasonable values for
the system in question. What's especially confusing to me is that if I
change the code to something like:

SIR = spi.odeint(eq_system, startPop, t_interval, hmin=1e-20)

where I've set a smaller minimum step size, I still get the occasional
error like this:

lsoda-- warning..internal t (=r1) and h (=r2) are
such that in the machine, t + h = t on the next step
(h = step size). solver will continue anyway
In above, R1 = 0.1000000000000E+01 R2 = 0.5798216400482E-81
lsoda-- at t(=r1) and step size h(=r2), the error
test failed repeatedly or with abs(h) = hmin
In above, R1 = 0.1000000000000E+01 R2 = 0.5798216400482E-81
Repeated error test failures (internal error).
Run with full_output = 1 to get quantitative information.

Which suggests the solver is blazing right past the minimum step size I
just set.
Eric

On Nov 24, 2011, at 10:56 AM, wrote:

> Date: Thu, 24 Nov 2011 13:33:57 -0200
> From: Flavio Coelho
> Subject: Re: [SciPy-User] Problem with ODEINT
> To: SciPy Users List
> Message-ID:
> Content-Type: text/plain; charset="iso-8859-1"
>
> Hi,
>
> I have seen this before with odeint. It is probably related to the
> parameterization of your model, which is leading to very large
> derivatives. This is causing the numerical solver to try to reduce the
> step size (h) to try to resolve the dynamics. h is apparently hitting
> its default lower limit (something in the vicinity of 1E-85).
>
> You can try to set a lower hmin when calling odeint, but it would be
> best to go over your parameter values and check if they are reasonable.
>
> Disclaimer: I have not run your script nor checked your parameters, just
> took a look at the error messages.
>
> good luck,
>
> Flávio

From warren.weckesser at enthought.com Fri Nov 25 15:09:34 2011
From: warren.weckesser at enthought.com (Warren Weckesser)
Date: Fri, 25 Nov 2011 14:09:34 -0600
Subject: [SciPy-User] strange behavior calling odeint from brentq
In-Reply-To: <0782B99B7E1D1745B7A6567C05E892ED0F5839@WATEXC2010.Watlow.com>
References: <0782B99B7E1D1745B7A6567C05E892ED0F5839@WATEXC2010.Watlow.com>
Message-ID:

On Tue, Nov 22, 2011 at 2:11 PM, Schmidt, Phil wrote:

> Hello,
>
> I am implementing the shooting method using optimize.brentq() and
> integrate.odeint(). The following is an outline of my code:
>
> def objective(t2, *args):
>     t1 = args[0]
>     x_init = args[1]
>     x_target = args[2]
>     x = odeint(dxdt, x_init, [t1, t2])
>     return x - x_target
>
> t_target = brentq(objective, t1, t2, args=(t1, x_init, x_target))
>
> I have observed that if I place do-nothing statements in the objective
> function (e.g., print statements or dummy assignments like t1=t1),
> sometimes I will get different answers for t_target. I have not
> identified a pattern for when this may or may not occur, but presumably
> there is some dependency between brentq() and odeint().
>
> I am running Scipy 0.9.0rc3, Python 2.6.5, Windows XP.
>
> Can anyone explain why this is happening, and point me to the "right"
> way to do what I'm attempting?
>
> Thanks,
> Phil

Phil,

It is difficult to tell what might be happening based on just the outline
that you showed. Can you include a complete, minimal example that we can
run and possibly reproduce the problem?

Warren
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From njs at pobox.com Fri Nov 25 17:07:04 2011
From: njs at pobox.com (Nathaniel Smith)
Date: Fri, 25 Nov 2011 14:07:04 -0800
Subject: [SciPy-User] rvs and broadcasting
In-Reply-To: References: Message-ID:

On Wed, Nov 23, 2011 at 6:47 PM, wrote:
> rvs in scipy.stats distributions has nasty broadcasting behavior if
> location or scale are arrays and size is not defined for the same shape
> http://projects.scipy.org/scipy/ticket/1544
>
> (also
> https://groups.google.com/group/pystatsmodels/browse_thread/thread/e757d73b2a06b962?hl=en
> )
>
> I was playing with two solutions while I was writing an rvs for the
> truncated normal.
>
> 1) broadcast shape parameters, loc and scale, if they are arrays
> produce rvs in that shape, and, if in this case size is not the same
> or 1, then raise a ValueError
> essentially
>     lower, upper, loc, scale = np.broadcast_arrays(lower, upper, loc, scale)
>     if (np.size(lower) > 1) and (size != (1,)) and (lower.shape != size):
>         raise ValueError('Do you really want this?
Then do it yourself.')
>
> 2) broadcast shape parameters, loc and scale, for each of these
> create random variables given by size; the return shape is essentially
> the broadcasted shape concatenated with size, for example
>
> assert_equal(truncnorm_rvs(lower*np.arange(4)[:,None], upper,
>                            loc=np.arange(5), scale=1, size=(2,3)).shape,
>              (4, 5, 2, 3))
>
> this version is attached.
>
> Any opinions about which version should be preferred?

I'm strongly in favor of option 2. The additional functionality is a
little bit tricky to understand, but not much, and I can easily imagine
cases where it'd be both useful and natural. And, option 2 is a strict
superset of option 1 -- in option 1, the shape= parameter is useless when
passing in parameter vectors; one should just leave it off in all cases.
In option 2, you can still leave off the shape= parameter and get the same
functionality; plus, you have the option of getting additional useful
functionality by specifying it.

So that's my 2 cents...

-- Nathaniel

From gustavo.goretkin at gmail.com Fri Nov 25 17:14:14 2011
From: gustavo.goretkin at gmail.com (Gustavo Goretkin)
Date: Fri, 25 Nov 2011 17:14:14 -0500
Subject: [SciPy-User] array of matrices
Message-ID:

I want to create an array, call it R, of matrices [M1, M2, M3, ...]. So R
is 3D. If v is a (column) vector, then np.dot(M1,v) matrix-multiplies v by
M1, and produces another (column) vector. I want R to be shaped so that
np.dot(R,v) produces an array of column vectors [M1*v, M2*v, M3*v, ...]

Specifically, I want to define a function that resembles:

def rot_2d(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

that works even if theta itself is an array, so that

np.dot(rot_2d(np.array([0, np.pi/2, np.pi])), np.array([1, 0]))

produces an array like [ [1,0], [0,1], [-1,0] ].

Thanks!
Gustav

From wesmckinn at gmail.com Fri Nov 25 22:58:44 2011
From: wesmckinn at gmail.com (Wes McKinney)
Date: Fri, 25 Nov 2011 22:58:44 -0500
Subject: [SciPy-User] ANN: pandas 0.6.0 released
Message-ID:

I'm pleased to announce the pandas 0.6.0 major release. It's been about
one month since the last major release. It includes 155 commits and 16
pull requests closing 78 tickets on GitHub. Several new people contributed
code to the project for this release.

This upgrade is recommended for all users and should not cause any API
breakage for 0.5.0 users. There are a lot of miscellaneous new functions
and features, many performance enhancements, and fixes for a significant
number of bugs and corner cases encountered since the 0.5.0 release. See
the full release notes below and on GitHub.

Some features to look forward to (or help with!) in the next couple
releases:

- NumPy datetime64 type integration
- Enhanced GroupBy, especially for binning time series data
- Further performance enhancements to existing functionality

Many thanks to all the users who contributed code, bug reports, and
suggestions for new features.

best,
Wes

What is it
==========
pandas is a Python package providing fast, flexible, and expressive data
structures designed to make working with "relational" or "labeled" data
both easy and intuitive. It aims to be the fundamental high-level building
block for doing practical, real world data analysis in Python.
Additionally, it has the broader goal of becoming the most powerful and
flexible open source data analysis / manipulation tool available in any
language.
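As a quick flavor of what working with pandas looks like (an illustrative
sketch of mine, not taken from the release notes):

import numpy as np
from pandas import DataFrame

df = DataFrame({'key': ['a', 'a', 'b', 'b'],
                'value': np.arange(4.)})

# label-based boolean selection and group-wise aggregation
print df[df['value'] > 0]
print df.groupby('key')['value'].mean()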
Links ===== Release Notes: https://github.com/wesm/pandas/blob/master/RELEASE.rst Documentation: http://pandas.sourceforge.net Installers: http://pypi.python.org/pypi/pandas Code Repository: http://github.com/wesm/pandas Mailing List: http://groups.google.com/group/pystatsmodels Blog: http://blog.wesmckinney.com pandas 0.6.0 ============ **Release date:** 11/25/2011 **API Changes** - Arithmetic methods like `sum` will attempt to sum dtype=object values by default instead of excluding them (GH #382) **New features / modules** - Add `melt` function to `pandas.core.reshape` - Add `level` parameter to group by level in Series and DataFrame descriptive statistics (PR #313) - Add `head` and `tail` methods to Series, analogous to to DataFrame (PR #296) - Add `Series.isin` function which checks if each value is contained in a passed sequence (GH #289) - Add `float_format` option to `Series.to_string` - Add `skip_footer` (GH #291) and `converters` (GH #343) options to `read_csv` and `read_table` - Add proper, tested weighted least squares to standard and panel OLS (GH #303) - Add `drop_duplicates` and `duplicated` functions for removing duplicate DataFrame rows and checking for duplicate rows, respectively (GH #319) - Implement logical (boolean) operators &, |, ^ on DataFrame (GH #347) - Add `Series.mad`, mean absolute deviation, matching DataFrame - Add `QuarterEnd` DateOffset (PR #321) - Add matrix multiplication function `dot` to DataFrame (GH #65) - Add `orient` option to `Panel.from_dict` to ease creation of mixed-type Panels (GH #359, #301) - Add `DataFrame.from_dict` with similar `orient` option - Can now pass list of tuples or list of lists to `DataFrame.from_records` for fast conversion to DataFrame (GH #357) - Can pass multiple levels to groupby, e.g. 
`df.groupby(level=[0, 1])` (GH #103) - Can sort by multiple columns in `DataFrame.sort_index` (GH #92, PR #362) - Add fast `get_value` and `put_value` methods to DataFrame and micro-performance tweaks (GH #360) - Add `cov` instance methods to Series and DataFrame (GH #194, PR #362) - Add bar plot option to `DataFrame.plot` (PR #348) - Add `idxmin` and `idxmax` functions to Series and DataFrame for computing index labels achieving maximum and minimum values (PR #286) - Add `read_clipboard` function for parsing DataFrame from OS clipboard, should work across platforms (GH #300) - Add `nunique` function to Series for counting unique elements (GH #297) - DataFrame constructor will use Series name if no columns passed (GH #373) - Support regular expressions and longer delimiters in read_table/read_csv, but does not handle quoted strings yet (GH #364) - Add `DataFrame.to_html` for formatting DataFrame to HTML (PR #387) - MaskedArray can be passed to DataFrame constructor and masked values will be converted to NaN (PR #396) - Add `DataFrame.boxplot` function (GH #368, others) - Can pass extra args, kwds to DataFrame.apply (GH #376) **Improvements to existing features** - Raise more helpful exception if date parsing fails in DateRange (GH #298) - Vastly improved performance of GroupBy on axes with a MultiIndex (GH #299) - Print level names in hierarchical index in Series repr (GH #305) - Return DataFrame when performing GroupBy on selected column and as_index=False (GH #308) - Can pass vector to `on` argument in `DataFrame.join` (GH #312) - Don't show Series name if it's None in the repr, also omit length for short Series (GH #317) - Show legend by default in `DataFrame.plot`, add `legend` boolean flag (GH #324) - Significantly improved performance of `Series.order`, which also makes np.unique called on a Series faster (GH #327) - Faster cythonized count by level in Series and DataFrame (GH #341) - Raise exception if dateutil 2.0 installed on Python 2.x runtime (GH #346) - Significant GroupBy performance enhancement with multiple keys with many "empty" combinations - New Cython vectorized function `map_infer` speeds up `Series.apply` and `Series.map` significantly when passed elementwise Python function, motivated by PR #355 - Cythonized `cache_readonly`, resulting in substantial micro-performance enhancements throughout the codebase (GH #361) - Special Cython matrix iterator for applying arbitrary reduction operations with 3-5x better performance than `np.apply_along_axis` (GH #309) - Add `raw` option to `DataFrame.apply` for getting better performance when the passed function only requires an ndarray (GH #309) - Improve performance of `MultiIndex.from_tuples` - Can pass multiple levels to `stack` and `unstack` (GH #370) - Can pass multiple values columns to `pivot_table` (GH #381) - Can call `DataFrame.delevel` with standard Index with name set (GH #393) - Use Series name in GroupBy for result index (GH #363) - Refactor Series/DataFrame stat methods to use common set of NaN-friendly function - Handle NumPy scalar integers at C level in Cython conversion routines **Bug fixes** - Fix bug in `DataFrame.to_csv` when writing a DataFrame with an index name (GH #290) - DataFrame should clear its Series caches on consolidation, was causing "stale" Series to be returned in some corner cases (GH #304) - DataFrame constructor failed if a column had a list of tuples (GH #293) - Ensure that `Series.apply` always returns a Series and implement `Series.round` (GH #314) - Support boolean columns in Cythonized 
groupby functions (GH #315) - `DataFrame.describe` should not fail if there are no numeric columns, instead return categorical describe (GH #323) - Fixed bug which could cause columns to be printed in wrong order in `DataFrame.to_string` if specific list of columns passed (GH #325) - Fix legend plotting failure if DataFrame columns are integers (GH #326) - Shift start date back by one month for Yahoo! Finance API in pandas.io.data (GH #329) - Fix `DataFrame.join` failure on unconsolidated inputs (GH #331) - DataFrame.min/max will no longer fail on mixed-type DataFrame (GH #337) - Fix `read_csv` / `read_table` failure when passing list to index_col that is not in ascending order (GH #349) - Fix failure passing Int64Index to Index.union when both are monotonic - Fix error when passing SparseSeries to (dense) DataFrame constructor - Added missing bang at top of setup.py (GH #352) - Change `is_monotonic` on MultiIndex so it properly compares the tuples - Fix MultiIndex outer join logic (GH #351) - Set index name attribute with single-key groupby (GH #358) - Bug fix in reflexive binary addition in Series and DataFrame for non-commutative operations (like string concatenation) (GH #353) - setupegg.py will invoke Cython (GH #192) - Fix block consolidation bug after inserting column into MultiIndex (GH #366) - Fix bug in join operations between Index and Int64Index (GH #367) - Handle min_periods=0 case in moving window functions (GH #365) - Fixed corner cases in DataFrame.apply/pivot with empty DataFrame (GH #378) - Fixed repr exception when Series name is a tuple - Always return DateRange from `asfreq` (GH #390) - Pass level names to `swaplavel` (GH #379) - Don't lose index names in `MultiIndex.droplevel` (GH #394) - Infer more proper return type in `DataFrame.apply` when no columns or rows depending on whether the passed function is a reduction (GH #389) - Always return NA/NaN from Series.min/max and DataFrame.min/max when all of a row/column/values are NA (GH #384) - Enable partial setting with .ix / advanced indexing (GH #397) - Handle mixed-type DataFrames correctly in unstack, do not lose type information (GH #403) - Fix integer name formatting bug in Index.format and in Series.__repr__ - Handle label types other than string passed to groupby (GH #405) - Fix bug in .ix-based indexing with partial retrieval when a label is not contained in a level - Index name was not being pickled (GH #408) - Level name should be passed to result index in GroupBy.apply (GH #416) Thanks ------ - Craig Austin - Marius Cobzarenco - Joel Cross - Jeff Hammerbacher - Adam Klein - Thomas Kluyver - Jev Kuznetsov - Kieran O'Mahony - Wouter Overmeire - Nathan Pinger - Christian Prinoth - Skipper Seabold - Chang She - Ted Square - Aman Thakral - Chris Uga - Dieter Vandenbussche - carljv - rsamson From krastanov.stefan at gmail.com Sat Nov 26 09:32:47 2011 From: krastanov.stefan at gmail.com (Stefan Krastanov) Date: Sat, 26 Nov 2011 06:32:47 -0800 (PST) Subject: [SciPy-User] Power Spectral Density in SciPy, not pylab In-Reply-To: References: Message-ID: <17684978.61.1322317967157.JavaMail.geo-discussion-forums@yqcw10> A very old question but I had the same problem and google pointed me here. Use mlab. from matplotlib import mlab powers, freqs = mlab.psd(blah_blah) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tazz_ben at wsu.edu Sat Nov 26 11:52:46 2011 From: tazz_ben at wsu.edu (tazz_ben) Date: Sat, 26 Nov 2011 16:52:46 +0000 Subject: [SciPy-User] Confusion about lognormal distribution functions Message-ID: Hi Group - So, what I'm trying to do is draw a firm size from a lognormal distribution in a simulation (I'm using a fortuna RNG outside of the scope of this question -- why instead of twister deals with my research question, for this purposes it is just important to say using the built in random draw from a specific distribution wouldn't work). But when I do something like this: from scipy.stats import lognorm lognorm.ppf(.5,1,50,50) The numbers that come out make no sense (I'm right in believing "loc" = "mean" and "scale" = "standard deviation"?). I've tried logging the numbers, un-logging the numbers, etc. I'm very confused on what it is doing. From elofgren at email.unc.edu Sat Nov 26 20:41:08 2011 From: elofgren at email.unc.edu (Lofgren, Eric) Date: Sun, 27 Nov 2011 01:41:08 +0000 Subject: [SciPy-User] Problem with ODEINT In-Reply-To: References: Message-ID: <81CB87CA-D183-4399-8675-79BFEFDCC175@unc.edu> > Eric, > You have given odeint an initial condition of length 5, but the function > that defines your system is returning a vector of only length 3. Don't do > that. > ... > Warren Warren- This does indeed seem to solve the problem, I haven't hit any errors in 1,000 or so runs, and it does indeed make the code run considerably faster. Thank you for the advice and help. Eric From robert.kern at gmail.com Sun Nov 27 12:39:04 2011 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 27 Nov 2011 17:39:04 +0000 Subject: [SciPy-User] Confusion about lognormal distribution functions In-Reply-To: References: Message-ID: On Sat, Nov 26, 2011 at 16:52, tazz_ben wrote: > Hi Group - > > So, what I'm trying to do is draw a firm size from a lognormal > distribution in a simulation (I'm using a fortuna RNG outside of the scope > of this question -- why instead of twister deals with my research > question, for this purposes it is just important to say using the built in > random draw from a specific distribution wouldn't work). > > But when I do something like this: > > from scipy.stats import lognorm > > lognorm.ppf(.5,1,50,50) > > The numbers that come out make no sense (I'm right in believing "loc" = > "mean" and "scale" = "standard deviation"?). I've tried logging the > numbers, un-logging the numbers, etc. ?I'm very confused on what it is > doing. No, loc and scale mean exactly the same thing for every distribution. loc translates the distribution linearly and scale scales it. lognorm.pdf(x, s, loc=loc, scale=scale) == lognorm.pdf((x-loc)/scale, s)/scale They don't always map to particular parameters in standard parameterizations. However, they often do, so doing this lets us share the code for shifting and scaling in the base class rather than implementing it slightly differently for every distribution. In this case, you want to ignore the loc parameter entirely. The scale parameter corresponds to exp(mu) where mu is the mean of the underlying normal distribution. The shape parameter is the standard deviation of the underlying normal distribution. log(lognorm.ppf(p, s, scale=scale)) == norm.ppf(p, loc=log(scale), scale=s) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." ? 
-- Umberto Eco

From josef.pktd at gmail.com Sun Nov 27 12:49:24 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sun, 27 Nov 2011 12:49:24 -0500
Subject: [SciPy-User] Confusion about lognormal distribution functions
In-Reply-To: References: Message-ID:

On Sun, Nov 27, 2011 at 12:39 PM, Robert Kern wrote:
> On Sat, Nov 26, 2011 at 16:52, tazz_ben wrote:
>> Hi Group -
>>
>> So, what I'm trying to do is draw a firm size from a lognormal
>> distribution in a simulation (I'm using a fortuna RNG outside of the scope
>> of this question -- why instead of twister deals with my research
>> question, for this purposes it is just important to say using the built in
>> random draw from a specific distribution wouldn't work).
>>
>> But when I do something like this:
>>
>> from scipy.stats import lognorm
>>
>> lognorm.ppf(.5,1,50,50)
>>
>> The numbers that come out make no sense (I'm right in believing "loc" =
>> "mean" and "scale" = "standard deviation"?). I've tried logging the
>> numbers, un-logging the numbers, etc. I'm very confused on what it is
>> doing.
>
> No, loc and scale mean exactly the same thing for every distribution.
> loc translates the distribution linearly and scale scales it.
>
> lognorm.pdf(x, s, loc=loc, scale=scale) == lognorm.pdf((x-loc)/scale, s)/scale
>
> They don't always map to particular parameters in standard
> parameterizations. However, they often do, so doing this lets us share
> the code for shifting and scaling in the base class rather than
> implementing it slightly differently for every distribution.
>
> In this case, you want to ignore the loc parameter entirely. The scale
> parameter corresponds to exp(mu) where mu is the mean of the
> underlying normal distribution. The shape parameter is the standard
> deviation of the underlying normal distribution.
>
> log(lognorm.ppf(p, s, scale=scale)) == norm.ppf(p, loc=log(scale), scale=s)

Just as background: http://projects.scipy.org/scipy/ticket/1502 and
several mailing list threads. It's a FAQ. It might be a case for writing a
reparameterized wrapper class.

Josef

>
> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless
> enigma that is made terrible by our own mad attempt to interpret it as
> though it had an underlying truth."
>   -- Umberto Eco
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

From _kfj at yahoo.com Mon Nov 28 06:30:24 2011
From: _kfj at yahoo.com (Kay F. Jahnke)
Date: Mon, 28 Nov 2011 11:30:24 +0000 (UTC)
Subject: [SciPy-User] how can I create B-splines of multidimensional values?
Message-ID:

Hi group!

I have the following problem: I have multidimensional values which I have
sampled over a 2D grid. Now I want to interpolate the values (using
B-spline interpolation) for arbitrary (x,y) locations.

Obviously this can be done by creating a separate B-spline for each
dimension of the values, interpolating at (x,y) and putting the results of
the interpolations together, forming the multidimensional result.

For performance reasons, this approach isn't optimal, though. Since the
splines are evaluated at the same location, a fair deal of the
calculations would be identical. But I haven't found a way to create a
B-spline of the multidimensional values to exploit the fact that I'm
evaluating several splines at the same position.
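Just to make it concrete, the best I have found so far is the separated
version, roughly like this (an untested sketch; I picked
RectBivariateSpline merely as an example, and the data are made up):

import numpy as np
from scipy.interpolate import RectBivariateSpline

x = np.arange(10.)
y = np.arange(12.)
v1 = np.random.rand(10, 12)   # first component of the values
v2 = np.random.rand(10, 12)   # second component of the values

spline1 = RectBivariateSpline(x, y, v1)
spline2 = RectBivariateSpline(x, y, v2)

def interpolate(xi, yi):
    # the spline 'infrastructure' is evaluated twice here
    return spline1(xi, yi), spline2(xi, yi)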
I suppose my problem would be quite common - one typical case would be RGB
image data: it would be silly to have separate splines for the R, G and B
channels and triplicate the part of the calculation which only depends on
the position where a value is interpolated. What I'm missing is a
mechanism to calculate the spline with n-dimensional coefficients and
interpolation routines yielding multidimensional values when used with
these n-dimensional spline coefficients.

Am I missing something? I'm not very proficient with numpy/scipy, so maybe
I just can't see the obvious...

Helpful hints welcome.

Kay

From zachary.pincus at yale.edu Mon Nov 28 06:38:55 2011
From: zachary.pincus at yale.edu (Zachary Pincus)
Date: Mon, 28 Nov 2011 06:38:55 -0500
Subject: [SciPy-User] how can I create B-splines of multidimensional values?
In-Reply-To: References: Message-ID: <3409D06E-6AD0-4D16-97D2-8E8EB5D4B078@yale.edu>

scipy.ndimage.map_coordinates() performs b-spline interpolation of
regularly-spaced data (spline order 0-5, with several options for boundary
conditions). The syntax can seem a bit tricky at first, and you need to
watch out for ringing artifacts at sharp transitions (as these are
interpolating splines), but it should do the trick.

Zach

On Nov 28, 2011, at 6:30 AM, Kay F. Jahnke wrote:

> Hi group!
>
> I have the following problem: I have multidimensional values which I
> have sampled over a 2D grid. Now I want to interpolate the values (using
> B-spline interpolation) for arbitrary (x,y) locations.
>
> Obviously this can be done by creating a separate B-spline for each
> dimension of the values, interpolating at (x,y) and putting the results
> of the interpolations together, forming the multidimensional result.
>
> For performance reasons, this approach isn't optimal, though. Since the
> splines are evaluated at the same location, a fair deal of the
> calculations would be identical. But I haven't found a way to create a
> B-spline of the multidimensional values to exploit the fact that I'm
> evaluating several splines at the same position.
>
> I suppose my problem would be quite common - one typical case would be
> RGB image data: it would be silly to have separate splines for the R, G
> and B channels and triplicate the part of the calculation which only
> depends on the position where a value is interpolated. What I'm missing
> is a mechanism to calculate the spline with n-dimensional coefficients
> and interpolation routines yielding multidimensional values when used
> with these n-dimensional spline coefficients.
>
> Am I missing something? I'm not very proficient with numpy/scipy, so
> maybe I just can't see the obvious...
>
> Helpful hints welcome.
>
> Kay
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From _kfj at yahoo.com Mon Nov 28 08:03:20 2011
From: _kfj at yahoo.com (Kay F. Jahnke)
Date: Mon, 28 Nov 2011 13:03:20 +0000 (UTC)
Subject: [SciPy-User] how can I create B-splines of multidimensional values?
References: <3409D06E-6AD0-4D16-97D2-8E8EB5D4B078@yale.edu>
Message-ID:

Zachary Pincus yale.edu> writes:
>
> scipy.ndimage.map_coordinates() performs b-spline
> interpolation of regularly-spaced data

my data are pairs of numbers, like complex numbers.
I can't see a way of processing them.

> (spline
> order 0-5, with several options for boundary conditions).
> The syntax can seem a bit tricky at first, and
> you need to watch out for ringing artifacts at sharp
> transitions (as these are interpolating splines),
> but it should do the trick.
>
> Zach

Thanks, Zach, but I tried all the routines in interpolate, ndimage and
signal, and all of these only seem to use one-dimensional values. Using
the routines in ndimage, I can easily have a 3D array of floats and
interpolate at arbitrary 3D coordinates, but this is not what I want. I
want multidimensional values, not coordinates. My coordinates are plain 2D
x,y coordinates, but the values defined over them are pairs of numbers.

My data would look something like:

(V1,V2) (V1,V2) .... (V1,V2)
(V1,V2) (V1,V2) .... (V1,V2)
...
(V1,V2) (V1,V2) .... (V1,V2)

(a 2D matrix of pairs)

I'd expect a spline coefficient matrix of the same shape

(C1,C2) (C1,C2) ... (C1,C2)
(C1,C2) (C1,C2) ... (C1,C2)
...
(C1,C2) (C1,C2) ... (C1,C2)

and, when interpolating at (x,y) I'd like a result

(I0,I1)

(a single pair)

Kay

From zachary.pincus at yale.edu Mon Nov 28 08:30:17 2011
From: zachary.pincus at yale.edu (Zachary Pincus)
Date: Mon, 28 Nov 2011 08:30:17 -0500
Subject: [SciPy-User] how can I create B-splines of multidimensional values?
In-Reply-To: References: <3409D06E-6AD0-4D16-97D2-8E8EB5D4B078@yale.edu>
Message-ID: <4F619775-90DA-4B95-8B7D-F50161B3B0D0@yale.edu>

>> scipy.ndimage.map_coordinates() performs b-spline
>> interpolation of regularly-spaced data
>
> my data are pairs of numbers, like complex numbers.
> I can't see a way of processing them.

My apologies; I misread your email.

The traditional way of interpolating multivariate data is to do multiple
univariate interpolations, as far as I can tell. (E.g. the "thin plate
spline" and related literature for defining/manipulating image
deformations deals in sparse transforms of (x_old, y_old) -> (x_new,
y_new), not unlike what you describe. But all the operations are defined
separately for (x_old, y_old) -> x_new and (x_old, y_old) -> y_new,
simplifying matters.)

Are you thinking that doing the operations "together" (getting spline
coefficients simultaneously for the x and y mapping) would/should somehow
yield different coefficients than doing them separately? I've certainly
never seen anything like that, but I'm far from an expert on the matter.
But, as above, from everything I've seen, you can just do the
interpolations separately for x and y, and then knit the results together
at the end.

Zach

>> (spline
>> order 0-5, with several options for boundary conditions).
>> The syntax can seem a bit tricky at first, and
>> you need to watch out for ringing artifacts at sharp
>> transitions (as these are interpolating splines),
>> but it should do the trick.
>>
>> Zach
>
> Thanks, Zach, but I tried all the routines in interpolate,
> ndimage and signal, and all of these only seem to use
> one-dimensional values. Using the routines in ndimage, I
> can easily have a 3D array of floats and interpolate at
> arbitrary 3D coordinates, but this is not what I want.
> I want multidimensional values, not coordinates. My
> coordinates are plain 2D x,y coordinates, but the values
> defined over them are pairs of numbers.
>
> My data would look something like:
>
> (V1,V2) (V1,V2) .... (V1,V2)
> (V1,V2) (V1,V2) .... (V1,V2)
> ...
> (V1,V2) (V1,V2) .... (V1,V2)
>
> (a 2D matrix of pairs)
>
> I'd expect a spline coefficient matrix of the same shape
>
> (C1,C2) (C1,C2) ... (C1,C2)
> (C1,C2) (C1,C2) ...
(C1,C2)
> ...
> (C1,C2) (C1,C2) ... (C1,C2)
>
> and, when interpolating at (x,y) I'd like a result
>
> (I0,I1)
>
> (a single pair)
>
> Kay
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From zachary.pincus at yale.edu Mon Nov 28 09:28:27 2011
From: zachary.pincus at yale.edu (Zachary Pincus)
Date: Mon, 28 Nov 2011 09:28:27 -0500
Subject: [SciPy-User] how can I create B-splines of multidimensional values?
In-Reply-To: <4F619775-90DA-4B95-8B7D-F50161B3B0D0@yale.edu>
References: <3409D06E-6AD0-4D16-97D2-8E8EB5D4B078@yale.edu> <4F619775-90DA-4B95-8B7D-F50161B3B0D0@yale.edu>
Message-ID:

> Are you thinking that doing the operations "together" (getting spline
> coefficients simultaneously for the x and y mapping) would/should
> somehow yield different coefficients than doing them separately? I've
> certainly never seen anything like that, but I'm far from an expert on
> the matter. But, as above, from everything I've seen, you can just do
> the interpolations separately for x and y, and then knit the results
> together at the end.

Oh and PS, as far as performance concerns about doing things this way?
Don't worry about it until it becomes a bottleneck! Profile the code to
determine whether it is, and then if so, you might consider re-writing a
parallel coefficient-evaluation loop in Cython or something. But ndimage
is pretty zippy and it may well be that this won't turn out to be the
performance hit you fear.

Zach

From _kfj at yahoo.com Mon Nov 28 12:08:50 2011
From: _kfj at yahoo.com (Kay F. Jahnke)
Date: Mon, 28 Nov 2011 17:08:50 +0000 (UTC)
Subject: [SciPy-User] how can I create B-splines of multidimensional values?
References: <3409D06E-6AD0-4D16-97D2-8E8EB5D4B078@yale.edu> <4F619775-90DA-4B95-8B7D-F50161B3B0D0@yale.edu>
Message-ID:

Zachary Pincus yale.edu> writes:
>
> Are you thinking that doing the operations "together" (getting spline
> coefficients simultaneously for the x and y mapping) would/should
> somehow yield different coefficients than doing them separately? I've
> certainly never seen anything like that, but I'm far from an expert on
> the matter. But, as above, from everything I've seen, you can just do
> the interpolations separately for x and y, and then knit the results
> together at the end.

I certainly hope that doing the interpolations together and separately
should yield precisely the same values :) I really only have performance
concerns. I can explain this with a very simple example.

Let's suppose I just have two data points P0 and P1 with the values
(A0,B0) and (A1,B1). Instead of using splines, let's do a linear
interpolation, and let the location of Pi be determined by a single
coordinate. The underlying function would be f(x), and let f(x0)=P0 and
f(x1)=P1. If we interpolate f(x) at some arbitrary point x between x0 and
x1, we'd have to calculate

( A0 * (x1-x)/(x1-x0) + A1 * (x-x0)/(x1-x0) ,
  B0 * (x1-x)/(x1-x0) + B1 * (x-x0)/(x1-x0) )

obviously, the only difference between the calculations in the first and
second line is the A and B values; the remainder, what could be called the
interpolation infrastructure, is precisely the same. So calculating it
twice, as would be done if the problem were separated, would be
inefficient.
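In Python, for illustration only (a toy sketch of this 1D case, with
invented names):

def interp_pair(x, x0, x1, A0, A1, B0, B1):
    a = A0 * (x1 - x) / (x1 - x0) + A1 * (x - x0) / (x1 - x0)
    b = B0 * (x1 - x) / (x1 - x0) + B1 * (x - x0) / (x1 - x0)
    # the same weight expressions are evaluated once per component
    return (a, b)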
It would be a better strategy to calculate the weights first

W0 = (x1-x)/(x1-x0)
W1 = (x-x0)/(x1-x0)

and arrive at the result by

( A0 * W0 + A1 * W1 , B0 * W0 + B1 * W1 )

in short, the code lends itself to vectorization - along the vectors
making up individual values. If the values are N-tuples, the larger N is,
the more time can be saved.

Now I do not know if these concerns are relevant in my 'real-world' case,
but I assume they are. Naively I would assume that calculating a B-spline
at a given location would require determining weights for the coefficients
relevant to the calculation. This calculation would be the same for each
component value, just as in my simple 1D example. So there should be
saving potential here.

> Oh and PS, as far as performance concerns about doing things this way?
> Don't worry about it until it becomes a bottleneck! Profile the code to
> determine whether it is, and then if so, you might consider re-writing a
> parallel coefficient-evaluation loop in Cython or something.

maybe my worries are indeed needless. But I'm doing image analysis and
I've got real-time stuff at the back of my head - every millisecond I can
save is a good millisecond ;-)

> But ndimage is pretty zippy and it may well be that this won't turn out
> to be the performance hit you fear.

For now I'll separate the problem, but I might have to dig deeper. I just
had high hopes I could do without separation; after all, everything in
numpy/scipy is made to work together orthogonally, and when the software
informed me it won't even handle complex values (this would be all I need
currently) I couldn't quite believe it :(

Thanks again for your reply.

Kay

From zachary.pincus at yale.edu Mon Nov 28 12:14:08 2011
From: zachary.pincus at yale.edu (Zachary Pincus)
Date: Mon, 28 Nov 2011 12:14:08 -0500
Subject: [SciPy-User] how can I create B-splines of multidimensional values?
In-Reply-To: References: <3409D06E-6AD0-4D16-97D2-8E8EB5D4B078@yale.edu> <4F619775-90DA-4B95-8B7D-F50161B3B0D0@yale.edu>
Message-ID:

> maybe my worries are indeed needless. But I'm doing image analysis and
> I've got real-time stuff at the back of my head - every millisecond I
> can save is a good millisecond ;-)

Haha, fair enough. If you do need to write your own interpolator, note
that you can still use scipy.ndimage.spline_filter() to generate the
coefficients. You could then, as I mentioned, write a Cython module to
erect all the "infrastructure" in parallel to do the interpolation, or
perhaps you could even figure out how to vectorize it in pure numpy.
(Perhaps that's not hard to do...)

If you do either, definitely drop a line to the list because I'm sure
folks might be interested in the code.

Zach

From _kfj at yahoo.com Mon Nov 28 12:20:03 2011
From: _kfj at yahoo.com (Kay F. Jahnke)
Date: Mon, 28 Nov 2011 17:20:03 +0000 (UTC)
Subject: [SciPy-User] how can I create B-splines of multidimensional values?
References: <3409D06E-6AD0-4D16-97D2-8E8EB5D4B078@yale.edu> <4F619775-90DA-4B95-8B7D-F50161B3B0D0@yale.edu>
Message-ID:

Zachary Pincus yale.edu> writes:

> You could then, as I mentioned, write a Cython module to erect all the
> "infrastructure" in parallel to do the interpolation,

(gulp)

I may need another few years until I'm THAT advanced... I've just learnt a
bit of swig and I'm not really so keen on learning yet another interface
generator
Kay

From zachary.pincus at yale.edu  Mon Nov 28 13:42:44 2011
From: zachary.pincus at yale.edu (Zachary Pincus)
Date: Mon, 28 Nov 2011 13:42:44 -0500
Subject: [SciPy-User] how can I create B-splines of mutidimensional values?
References: <3409D06E-6AD0-4D16-97D2-8E8EB5D4B078@yale.edu> <4F619775-90DA-4B95-8B7D-F50161B3B0D0@yale.edu>

>> You could then, as I mentioned, write a Cython module to erect all the
>> "infrastructure" in parallel to do the interpolation,
>
> (gulp)
>
> I may need another few years until I'm THAT advanced...
> I've just learnt a bit of swig and I'm not really so keen on learning
> yet another interface generator

Haha... cython is actually less scary than all that. It's more a
python-like language that can be "compiled" to C, and which transparently
interfaces with normal python code. After adding a few type annotations,
it's really easy to write loops that operate on numpy arrays at C-ish
speed.

>> or perhaps you could even figure out how to vectorize it in pure numpy.
>> (Perhaps that's not hard to do...)
>
> My suspicion.
>
>> If you do either, definitely drop a line to the list because I'm sure
>> folks might be interested in the code.
>
> sure will. It should make a difference to all applications along these
> lines - I mean, how do they go about RGB images? do each channel
> separately? Someone must have thought of exploiting the obvious
> redundancies.

Probably, but they might not have been writing in Python! After all, 3x
slower doesn't change the big-O complexity, and for most tasks that's
often going to be fine. I mean, everything equal it's always better to
have faster, more general tools, but there's an API complexity cost, and
someone needs to write the code, etc. etc. etc...

Zach

From ralf.gommers at googlemail.com  Mon Nov 28 16:02:45 2011
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Mon, 28 Nov 2011 22:02:45 +0100
Subject: [SciPy-User] meshgrid for 3D

On Wed, Nov 23, 2011 at 3:29 PM, Bala subramanian wrote:

> Friends,
> I have a data file containing three vectors x,y,z and want to create
> coordinate matrices from the three vectors. While I know that numpy's
> meshgrid function can be used for two vectors, I don't know of any tool
> which I can use for three dimensions. Kindly suggest me some solution.

Try the enhanced meshgrid attached to
http://projects.scipy.org/numpy/ticket/966.
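If applying the patch is inconvenient, broadcasting builds the same
coordinate matrices in a few lines - an untested sketch (meshgrid3d is
just an illustrative name, not numpy API):

import numpy as np

def meshgrid3d(x, y, z):
    # expand broadcastable views to full (nx, ny, nz) coordinate arrays
    shape = (len(x), len(y), len(z))
    X = np.empty(shape); X[...] = x[:, None, None]
    Y = np.empty(shape); Y[...] = y[None, :, None]
    Z = np.empty(shape); Z[...] = z[None, None, :]
    return X, Y, Z

X, Y, Z = meshgrid3d(np.arange(2), np.arange(3), np.arange(4))
assert X.shape == (2, 3, 4)
assert X[1, 0, 0] == 1 and Y[0, 2, 0] == 2 and Z[0, 0, 3] == 3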
Cheers,
Ralf

From _kfj at yahoo.com  Mon Nov 28 17:00:42 2011
From: _kfj at yahoo.com (Kay F. Jahnke)
Date: Mon, 28 Nov 2011 22:00:42 +0000 (UTC)
Subject: [SciPy-User] how can I create B-splines of mutidimensional values?
References: <3409D06E-6AD0-4D16-97D2-8E8EB5D4B078@yale.edu> <4F619775-90DA-4B95-8B7D-F50161B3B0D0@yale.edu>

Zachary Pincus <zachary.pincus at yale.edu> writes:

>>> You could then, as I mentioned, write a Cython module
>
> Haha... cython is actually less scary than all that. It's more a
> python-like language that can be "compiled" to C, and which
> transparently interfaces with normal python code. After adding a few
> type annotations, it's really easy to write loops that operate on numpy
> arrays at C-ish speed.

I had a good look at cython before I opted to use swig for my project - I
had a large body of extant C++ code to deal with, and swig is better at
dealing with existing stuff (you basically include the original C++
headers and add a bit of collateral code). Cython is more for writing new
code. But I'd have to write the cython code to operate on numpy data and
fit into the scipy environment. That's where the gulp really came from.
I'd have to look at what the current code does and figure out how to write
a new piece of scipy to fit in.

>> how do they go about RGB images? do each channel separately? Someone
>> must have thought of exploiting the obvious redundancies.
>
> Probably, but they might not have been writing in Python! After all, 3x
> slower doesn't change the big-O complexity, and for most tasks that's
> often going to be fine. I mean, everything equal it's always better to
> have faster, more general tools, but there's an API complexity cost,
> and someone needs to write the code, etc. etc. etc...

Still I feel going that way is right. But maybe I can get lucky elsewhere
- I suppose if I look around among the various image processing packages
I might find just what I want. In hugin, which is the project I work
with, they use vigra, and since vigra uses 'generic' programming I
wouldn't be too surprised if the type of vectorization I anticipate is
already there. I was just hoping to be able to do it all inside scipy
rather than having to pull in yet another dependency.

Kay

From josef.pktd at gmail.com  Tue Nov 29 10:14:03 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Tue, 29 Nov 2011 10:14:03 -0500
Subject: [SciPy-User] creating sparse indicator arrays

Is there a simple or fast way to create a sparse indicator array, `a`
below, without going through the dense matrix first?

>>> from scipy import sparse
>>> g = np.array([0, 0, 1, 1])   #categories, integers,
>>> u = np.arange(2)   #unique's, range(number_categories)
>>> g[:,None] == u
array([[ True, False],
       [ True, False],
       [False,  True],
       [False,  True]], dtype=bool)

this is the one I want:

>>> a = sparse.csc_matrix((g[:,None] == u))
>>> a
<4x2 sparse matrix of type '<type 'numpy.int8'>'
        with 4 stored elements in Compressed Sparse Column format>
>>> a.todense()
matrix([[1, 0],
        [1, 0],
        [0, 1],
        [0, 1]], dtype=int8)

Thanks,

Josef

From njs at pobox.com  Tue Nov 29 14:07:32 2011
From: njs at pobox.com (Nathaniel Smith)
Date: Tue, 29 Nov 2011 11:07:32 -0800
Subject: [SciPy-User] creating sparse indicator arrays

On Tue, Nov 29, 2011 at 7:14 AM,  wrote:
> Is there a simple or fast way to create a sparse indicator array, `a`
> below, without going through the dense matrix first?

The standard way is to use the LIL or DOK sparse formats. If you want to
use them then you'll have to do your construction "by hand", though --
you can't do the nice broadcasting tricks you're using below.
Alternatively, constructing CSC or CSR format directly is not that hard,
though it may take some time to wrap your head around the definitions...

>>>> from scipy import sparse
>>>> g = np.array([0, 0, 1, 1])   #categories, integers,
>>>> u = np.arange(2)   #unique's, range(number_categories)

If 'u' is *always* going to be np.arange(number_categories), then
actually this is quite trivial (untested code):

data = np.ones(len(g), dtype=np.int8)
indices = g
indptr = np.arange(len(g))
a = np.csr_matrix((data, indices, indptr))

This gives you a CSR matrix, which you can either use as is or convert to
CSC.

If you want to build CSC directly, and want to support an arbitrary 'u'
vector, then you could do something like (untested code):

data = np.ones(len(g), dtype=np.int8)
indices = np.empty(len(g), dtype=int)
write_offset = 0
indptr = np.empty(len(u) + 1, dtype=int)
for col_i, category in enumerate(u):
    indptr[col_i] = write_offset
    rows = (g == category).nonzero()[0]
    indices[write_offset:write_offset + len(rows)] = rows
    write_offset += len(rows)
indptr[-1] = write_offset

Or you could just use a loop that fills in an LIL matrix :-)
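Spelled out, that loop is about as short as it gets - another untested
sketch (building element by element is slower than the direct CSR
construction above, though):

import numpy as np
from scipy import sparse

g = np.array([0, 0, 1, 1])
u = np.arange(2)
m = sparse.lil_matrix((len(g), len(u)), dtype=np.int8)
for row, cat in enumerate(g):
    m[row, cat] = 1            # one indicator entry per row
a = m.tocsc()
print a.todense()
# [[1 0]
#  [1 0]
#  [0 1]
#  [0 1]]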
-- Nathaniel

From josef.pktd at gmail.com  Tue Nov 29 14:50:34 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Tue, 29 Nov 2011 14:50:34 -0500
Subject: [SciPy-User] creating sparse indicator arrays

On Tue, Nov 29, 2011 at 2:07 PM, Nathaniel Smith wrote:

> If 'u' is *always* going to be np.arange(number_categories), then
> actually this is quite trivial (untested code):
>
> data = np.ones(len(g), dtype=np.int8)
> indices = g
> indptr = np.arange(len(g))
> a = np.csr_matrix((data, indices, indptr))

This works nicely (only "sparse" namespace)

u = np.arange(number_categories) will be a code requirement
(group or period labels are consecutive ints)

> This gives you a CSR matrix, which you can either use as is or convert
> to CSC.
>
> If you want to build CSC directly, and want to support an arbitrary 'u'
> vector, then you could do something like (untested code): <snip>

I still need to check this.

> Or you could just use a loop that fills in an LIL matrix :-)

I'm playing with panel data or general error component models. The main
point of using sparse is to have a compact, non-loop version.

In some previous attempts at sparse the cost of constructing the array
with loops removed much of the advantage of using them, and I could just
loop in the algorithm directly.

Thanks,

Josef
From josef.pktd at gmail.com  Tue Nov 29 15:25:02 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Tue, 29 Nov 2011 15:25:02 -0500
Subject: [SciPy-User] creating sparse indicator arrays

On Tue, Nov 29, 2011 at 2:50 PM, josef.pktd at gmail.com wrote:

>> data = np.ones(len(g), dtype=np.int8)
>> indices = g
>> indptr = np.arange(len(g))
>> a = np.csr_matrix((data, indices, indptr))
>
> This works nicely (only "sparse" namespace)

small correction:
indptr needs to be 1 longer or it drops the last row
(since I didn't RTFM, I don't know what it means, but it works)

>>> g = np.array([0, 0, 1, 2, 1, 1, 2, 0])
>>> data = np.ones(len(g), dtype=np.int8)
>>> indptr = np.arange(len(g)+1)   #add 1
>>> a = sparse.csr_matrix((data, g, indptr))
>>> a.todense()
matrix([[1, 0, 0],
        [1, 0, 0],
        [0, 1, 0],
        [0, 0, 1],
        [0, 1, 0],
        [0, 1, 0],
        [0, 0, 1],
        [1, 0, 0]], dtype=int8)
>>> np.all(a.todense() == (g[:,None] == np.arange(3)).astype(int))
True
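For the record, what indptr means in CSR: row i owns the slice
indices[indptr[i]:indptr[i+1]] (and the matching slice of data), so with
exactly one stored entry per row it has to be arange(len(g)+1). A quick
check along those lines (a sketch using the arrays above):

>>> [a.indices[a.indptr[i]:a.indptr[i+1]].item() for i in range(len(g))]
[0, 0, 1, 2, 1, 1, 2, 0]
>>> list(g)    # each row's single column index is just g again
[0, 0, 1, 2, 1, 1, 2, 0]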
Josef

From Wes.Barris at cobb-vantress.com  Mon Nov 28 12:19:30 2011
From: Wes.Barris at cobb-vantress.com (Barris, Wes)
Date: Mon, 28 Nov 2011 17:19:30 +0000
Subject: [SciPy-User] Help installing scipy
Message-ID: <3D297B5E71FC574581CB23485AFD2C6846117E@WHQWEXCH03.tyson.com>

I have a new CentOS 6 (64-bit) system. It is running the latest rpm
version of python:

python-2.6.5-3.el6_0.2.x86_64

I have also installed numpy:

rpm -qa | fgrep numpy
numpy-1.3.0-6.2.el6.x86_64

I have downloaded the latest source of scipy and am trying to install it.
When I run the first command I get an error:

python setup.py build
Traceback (most recent call last):
  File "setup.py", line 196, in <module>
    setup_package()
  File "setup.py", line 147, in setup_package
    from numpy.distutils.core import setup
ImportError: No module named distutils.core

I don't know anything about python so I am not sure what is supposed to
provide this module or where to look for it.

python -c 'from numpy.f2py.diagnose import run; run()'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ImportError: No module named f2py.diagnose

python -c 'import os,sys;print os.name,sys.platform'
posix linux2

uname -a
Linux cvirdlux01.tyson.com 2.6.32-71.29.1.el6.x86_64 #1 SMP Mon Jun 27
19:49:27 BST 2011 x86_64 x86_64 x86_64 GNU/Linux

--
Wes Barris

From ralf.gommers at googlemail.com  Wed Nov 30 00:09:34 2011
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Wed, 30 Nov 2011 06:09:34 +0100
Subject: [SciPy-User] Help installing scipy
In-Reply-To: <3D297B5E71FC574581CB23485AFD2C6846117E@WHQWEXCH03.tyson.com>
References: <3D297B5E71FC574581CB23485AFD2C6846117E@WHQWEXCH03.tyson.com>

On Mon, Nov 28, 2011 at 6:19 PM, Barris, Wes wrote:

> I have a new CentOS 6 (64-bit) system. It is running the latest rpm
> version of python:
>
> python-2.6.5-3.el6_0.2.x86_64
>
> I have also installed numpy:
>
> rpm -qa | fgrep numpy
> numpy-1.3.0-6.2.el6.x86_64
>
> I have downloaded the latest source of scipy and am trying to install
> it.

Latest scipy requires a more recent numpy, 1.5.1 or 1.6.x. Or you can try
to find a scipy rpm which matches your numpy version.

As for your error below, you should first check that numpy is installed
correctly and that Python knows where to find it:

$ python -c "import numpy; numpy.test('full')"
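Two more quick checks in the same spirit - the installed numpy version,
and whether the subpackages that setup.py needs are importable at all
(plain one-liners, nothing scipy-specific):

$ python -c "import numpy; print numpy.version.version"
$ python -c "import numpy.distutils.core; import numpy.f2py; print 'ok'"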
Ralf

From robert.kern at gmail.com  Wed Nov 30 01:53:32 2011
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 30 Nov 2011 06:53:32 +0000
Subject: [SciPy-User] Help installing scipy
In-Reply-To: <3D297B5E71FC574581CB23485AFD2C6846117E@WHQWEXCH03.tyson.com>
References: <3D297B5E71FC574581CB23485AFD2C6846117E@WHQWEXCH03.tyson.com>

On Mon, Nov 28, 2011 at 17:19, Barris, Wes wrote:

> python setup.py build
> Traceback (most recent call last):
>   File "setup.py", line 196, in <module>
>     setup_package()
>   File "setup.py", line 147, in setup_package
>     from numpy.distutils.core import setup
> ImportError: No module named distutils.core

It looks like the RPM packager of numpy may have decided (poorly, in our
opinion) to separate out the numpy.distutils and numpy.f2py packages into
a separate numpy-devel RPM. Debian did this for a while too, but we
managed to convince them otherwise. Please check to see if there is a
numpy-devel package and if installing it lets you import numpy.distutils
and numpy.f2py. However, as Ralf says, you will need a newer version of
numpy for the latest release of scipy, so this would just be to confirm
this theory for future reference.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From evilper at gmail.com  Wed Nov 30 04:01:27 2011
From: evilper at gmail.com (Per Nielsen)
Date: Wed, 30 Nov 2011 10:01:27 +0100
Subject: [SciPy-User] Subclassing scipy sparse matrix class

Hi all

I am trying to create a subclass of the sparse matrix class in scipy, to
add some extra methods I need. I have tried to follow the guide on:
http://www.scipy.org/Subclasses but without much luck; the view method
does not exist for the sparse matrix class.

Below is a script I have created

-----------------
#!/usr/bin/env python
from scipy.sparse.csr import csr_matrix as spmatrix

class sparsematrix_addons(spmatrix):
    """
    subclass for standard scipy sparse class to add missing functionality
    """
    def __new__(cls, matrix):
        obj = spmatrix.__init__(cls, matrix)
        return obj

    def square_spmat(self, M):
        return M ** 2

    def ravel_spmat(self, M):
        pass

if __name__ == '__main__':
    import numpy as np
    x = (np.random.rand(10, 10) * 2).astype(int).astype(float)
    xsp = sparsematrix_addons(x)
--------------------

However, this generates the following error:

TypeError: unbound method __init__() must be called with csr_matrix
instance as first argument (got type instance instead)

I am not strong in python OOP, or OOP in general, so I am sure this is a
rather trivial problem to solve. Anyone got any solutions or ideas to
point me in the right direction?
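One variant that at least avoids the TypeError is to skip __new__
entirely, let the inherited __init__ do the construction, and only add
methods - an untested sketch (note that results of arithmetic still come
back as plain csr_matrix, not the subclass):

import numpy as np
from scipy.sparse.csr import csr_matrix as spmatrix

class sparsematrix_addons2(spmatrix):
    # no __new__/__init__ needed: spmatrix.__init__ runs unchanged
    def square_spmat(self):
        return self ** 2

x = (np.random.rand(10, 10) * 2).astype(int).astype(float)
xsp = sparsematrix_addons2(x)
assert isinstance(xsp, sparsematrix_addons2)
assert xsp.square_spmat().shape == (10, 10)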
Thanks in advance,
Per

From josef.pktd at gmail.com  Wed Nov 30 13:52:42 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Wed, 30 Nov 2011 13:52:42 -0500
Subject: [SciPy-User] fastest linear interpolation ?

Does anyone know which linear interpolation function in numpy or scipy is
the fastest for evaluating a large number of points?

For mainly or only 1d cases I'd like to approximate a non-linear function
on a fine grid by linear interpolation. Number of points/segments should
be large. The setup can be expensive, and includes calculating all the
points, but I want fast evaluation.
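np.interp is the obvious baseline; on a uniform grid the search step can
be replaced by direct indexing, which may be worth timing against it - an
untested sketch of both (grid and function made up for illustration):

import numpy as np

xp = np.linspace(0.0, 10.0, 1001)          # fine grid, setup cost is fine
fp = np.exp(-xp)                           # precomputed function values
x = np.random.uniform(0.0, 10.0, 100000)   # many evaluation points

y = np.interp(x, xp, fp)                   # baseline

# uniform spacing: replace the search with integer division
dx = xp[1] - xp[0]
i = np.minimum(((x - xp[0]) / dx).astype(int), len(xp) - 2)
w = (x - xp[i]) / dx
y2 = (1.0 - w) * fp[i] + w * fp[i + 1]
assert np.allclose(y, y2)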
Thanks,

Josef

From alacast at gmail.com  Wed Nov 30 15:25:43 2011
From: alacast at gmail.com (Alacast)
Date: Wed, 30 Nov 2011 15:25:43 -0500
Subject: [SciPy-User] zscore axis functionality is borked

axis=0 (the default) works fine. axis=1, etc, is clearly wrong. Am I
misunderstanding how to use this, or is this a bug?

In [16]: i = rand(4,4)

In [17]: i
Out[17]:
array([[ 0.85367762,  0.25348857,  0.23572615,  0.50403358],
       [ 0.70199066,  0.81872151,  0.47357357,  0.20425537],
       [ 0.31042673,  0.25837984,  0.73550134,  0.57970176],
       [ 0.42828877,  0.60988596,  0.04059321,  0.73944219]])

In [18]: zscore(i, axis=0)
Out[18]:
array([[ 1.30128758, -0.96195723, -0.52119142, -0.01453907],
       [ 0.59653471,  1.38544585,  0.39284654, -1.55756529],
       [-1.22271057, -0.94164388,  1.39942427,  0.37494213],
       [-0.67511172,  0.51815526, -1.27107939,  1.19716222]])

In [19]: zscore(i[:,0])
Out[19]: array([ 1.30128758,  0.59653471, -1.22271057, -0.67511172])

In [20]: zscore(i[:,0])==zscore(i,axis=0)[:,0]
Out[20]: array([ True,  True,  True,  True], dtype=bool)

In [21]: zscore(i, axis=1)
Out[21]:
array([[-0.99378502, -1.59397407, -1.61173649, -1.34342906],
       [-1.6379836 , -1.52125275, -1.86640069, -2.13571889],
       [-2.09968257, -2.15172946, -1.67460796, -1.83040754],
       [-1.29796925, -1.11637205, -1.68566481, -0.98681582]])
#The above is obviously wrong, as everything has a negative z score

In [22]: zscore(i[0,:])
Out[22]: array([ 1.56824016, -0.83321371, -0.90428403,  0.16925757])

In [23]: zscore(i[0,:])==zscore(i,axis=1)[0,:]
Out[23]: array([False, False, False, False], dtype=bool)
#Using axis=1 produces different results from taking a row directly.

In [24]: zscore(i, axis=-1)
Out[24]:
array([[-0.99378502, -1.59397407, -1.61173649, -1.34342906],
       [-1.6379836 , -1.52125275, -1.86640069, -2.13571889],
       [-2.09968257, -2.15172946, -1.67460796, -1.83040754],
       [-1.29796925, -1.11637205, -1.68566481, -0.98681582]])
#Getting rows by using axis=-1 is no better (this is the same result as
axis=1)

From josef.pktd at gmail.com  Wed Nov 30 15:45:50 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Wed, 30 Nov 2011 15:45:50 -0500
Subject: [SciPy-User] zscore axis functionality is borked

On Wed, Nov 30, 2011 at 3:25 PM, Alacast wrote:

> axis=0 (the default) works fine. axis=1, etc, is clearly wrong. Am I
> misunderstanding how to use this, or is this a bug?
>
> In [21]: zscore(i, axis=1)
> Out[21]:
> array([[-0.99378502, -1.59397407, -1.61173649, -1.34342906],
>        [-1.6379836 , -1.52125275, -1.86640069, -2.13571889],
>        [-2.09968257, -2.15172946, -1.67460796, -1.83040754],
>        [-1.29796925, -1.11637205, -1.68566481, -0.98681582]])
> #The above is obviously wrong, as everything has a negative z score

This looks like a serious bug to me. I don't know what happened here (.

The docstring example also has negative numbers only.

???

I'm looking into it

Thanks for reporting

Josef
From josef.pktd at gmail.com  Wed Nov 30 15:54:54 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Wed, 30 Nov 2011 15:54:54 -0500
Subject: [SciPy-User] zscore axis functionality is borked

On Wed, Nov 30, 2011 at 3:45 PM, josef.pktd at gmail.com wrote:

> This looks like a serious bug to me. I don't know what happened here (.
>
> The docstring example also has negative numbers only.
>
> I'm looking into it

a misplaced axis: if axis>0
then it calculates x - mean/std instead of (x - mean) / std

now, how did this go through the testing ?
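The difference is just operator precedence - a two-line demonstration
(plain numpy, nothing zscore-specific):

import numpy as np

x = np.array([1.0, 2.0, 3.0])
m, s = x.mean(), x.std()
buggy = x - m / s      # parsed as x - (m / s): only shifts x
fixed = (x - m) / s    # proper z-scores: zero mean, unit std
assert abs(fixed.mean()) < 1e-12
assert abs(buggy.mean()) > 0.1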
Josef

From warren.weckesser at enthought.com  Wed Nov 30 15:57:14 2011
From: warren.weckesser at enthought.com (Warren Weckesser)
Date: Wed, 30 Nov 2011 14:57:14 -0600
Subject: [SciPy-User] zscore axis functionality is borked

On Wed, Nov 30, 2011 at 2:45 PM, josef.pktd at gmail.com wrote:

> This looks like a serious bug to me. I don't know what happened here (.
>
> I'm looking into it

This is a bug in zscore. There is a misplaced parenthesis in the code.
This

    return ((a - np.expand_dims(mns, axis=axis) /
             np.expand_dims(sstd,axis=axis)))

should be this

    return ((a - np.expand_dims(mns, axis=axis)) /
            np.expand_dims(sstd,axis=axis))

Warren
From warren.weckesser at enthought.com  Wed Nov 30 16:02:36 2011
From: warren.weckesser at enthought.com (Warren Weckesser)
Date: Wed, 30 Nov 2011 15:02:36 -0600
Subject: [SciPy-User] zscore axis functionality is borked

On Wed, Nov 30, 2011 at 2:54 PM, josef.pktd at gmail.com wrote:

> a misplaced axis: if axis>0
> then it calculates x - mean/std instead of (x - mean) / std
>
> now, how did this go through the testing ?

There is only one test for zscore, on a 1-d sample without the axis
keyword.
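The missing test is short to write - a sketch of an axis regression test
(the test name is made up, and against the buggy code it fails by
construction, which is the point):

import numpy as np
from numpy.testing import assert_array_almost_equal
from scipy import stats

def test_zscore_axis():
    x = np.random.rand(4, 5)
    z = stats.zscore(x, axis=1)
    # rows must come out with zero mean and unit std ...
    assert_array_almost_equal(z.mean(axis=1), np.zeros(4))
    assert_array_almost_equal(z.std(axis=1), np.ones(4))
    # ... and must agree with zscore applied to a single row
    assert_array_almost_equal(stats.zscore(x[0]), z[0])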
>> >> >> >> In [24]: zscore(i, axis=-1) >> >> Out[24]: >> >> array([[-0.99378502, -1.59397407, -1.61173649, -1.34342906], >> >> ? ? ? ?[-1.6379836 , -1.52125275, -1.86640069, -2.13571889], >> >> ? ? ? ?[-2.09968257, -2.15172946, -1.67460796, -1.83040754], >> >> ? ? ? ?[-1.29796925, -1.11637205, -1.68566481, -0.98681582]]) >> >> #Getting rows by using axis=-1 is no better (this is the same result as >> >> axis=1 >> > >> > This looks like a serious bug to me. I don't know what happened here (. >> > >> > The docstring example also has negative numbers only. >> > >> > ??? >> > >> > I'm looking into it >> > >> > Thanks for reporting >> >> a misplaced axis: if axis>0 >> then it calculates ? x - mean/std instead of (x - mean) / std >> >> now, how did this go through the testing ? > > > > > There is only one test for zscore, on a 1-d sample without the axis keyword. which just show that we shouldn't trust changesets that say "stats: rewrite of zscore functions, ticket:1083 regression tests pass, still need tests for enhancements" http://projects.scipy.org/scipy/changeset/6169 my mistake (maybe January 2nd wasn't a good day.) Josef > > Warren > > >> >> >> Josef >> >> > >> > Josef >> > >> > >> >> >> >> >> >> >> >> _______________________________________________ >> >> SciPy-User mailing list >> >> SciPy-User at scipy.org >> >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From warren.weckesser at enthought.com Wed Nov 30 16:10:26 2011 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Wed, 30 Nov 2011 15:10:26 -0600 Subject: [SciPy-User] zscore axis functionality is borked In-Reply-To: References: Message-ID: On Wed, Nov 30, 2011 at 3:05 PM, wrote: > On Wed, Nov 30, 2011 at 4:02 PM, Warren Weckesser > wrote: > > > > > > On Wed, Nov 30, 2011 at 2:54 PM, wrote: > >> > >> On Wed, Nov 30, 2011 at 3:45 PM, wrote: > >> > On Wed, Nov 30, 2011 at 3:25 PM, Alacast wrote: > >> >> axis=0 (the default) works fine. axis=1, etc, is clearly wrong. Am I > >> >> misunderstanding how to use this, or is this a bug? 
From warren.weckesser at enthought.com  Wed Nov 30 16:10:26 2011
From: warren.weckesser at enthought.com (Warren Weckesser)
Date: Wed, 30 Nov 2011 15:10:26 -0600
Subject: [SciPy-User] zscore axis functionality is borked

On Wed, Nov 30, 2011 at 3:05 PM, josef.pktd at gmail.com wrote:

> which just shows that we shouldn't trust changesets that say
>
> "stats: rewrite of zscore functions, ticket:1083 regression tests pass,
> still need tests for enhancements"
>
> http://projects.scipy.org/scipy/changeset/6169

Thanks for the link. Looks like zmap has the same bug. :(

Warren
From josef.pktd at gmail.com  Wed Nov 30 16:25:19 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Wed, 30 Nov 2011 16:25:19 -0500
Subject: [SciPy-User] zscore axis functionality is borked

On Wed, Nov 30, 2011 at 4:10 PM, Warren Weckesser wrote:

> Thanks for the link. Looks like zmap has the same bug. :(

copy paste errors?
I just don't know why I didn't do basic checks like this in the final
version

>>> assert_equal(zscore(x.T, axis=0).T, zscore(x, axis=1))
>>> a = zscore(x, axis=1)
>>> a.var(1)
array([ 1.,  1.,  1.,  1.])
>>> a.mean(1)
array([  0.00000000e+00,  -1.11022302e-16,   0.00000000e+00,
         1.94289029e-16])

Josef

From josef.pktd at gmail.com  Wed Nov 30 20:41:57 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Wed, 30 Nov 2011 20:41:57 -0500
Subject: [SciPy-User] [Numpy-discussion] what statistical module to use
 for python?

On Wed, Nov 30, 2011 at 1:16 PM, Chao YUE wrote:

> Hi all,

This is more a question for the scipy-user mailing list since that is for
more general questions. I would also like to know, since I have a biased
or selective view.

> I just want to broadly ask what statistical package are you guys using?
> I mean routine statistical functions like linear regression, GLM,
> ANOVA... etc.
>
> I know there are SciKits packages like statsmodels, but are there more
> general and complete ones?

(Not counting rpy2 since it's not available on Windows anymore.)

I think there are more complete packages on specific topics, but nothing
in python that is complete and general; that's where statsmodels tries to
be. sklearn is machine learning oriented but also covers a large area of
statistical methods.

Besides scipy.stats, statsmodels and sklearn, I don't know any that aim
to be general and not field specific. (scipy and numpy also have features
that make do-it-yourself easy.)

But there are many more field or topic specific packages ...... (Bayesian,
spatial, discrete choice (transport), and then by scientific field.)

http://www.scipy.org/Topical_Software doesn't include a statistics section

An overview or survey of packages and statistical methods (in a very broad
definition) would be useful.

Thanks,

Josef

> thanks to all,
>
> Chao
From josef.pktd at gmail.com  Wed Nov 30 21:04:18 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Wed, 30 Nov 2011 21:04:18 -0500
Subject: [SciPy-User] creating sparse indicator arrays

On Tue, Nov 29, 2011 at 3:25 PM, josef.pktd at gmail.com wrote:

> >>> g = np.array([0, 0, 1, 2, 1, 1, 2, 0])
> >>> data = np.ones(len(g), dtype=np.int8)
> >>> indptr = np.arange(len(g)+1)   #add 1
> >>> a = sparse.csr_matrix((data, g, indptr))
finally got started with some timing

groupsums: 200,000 observations, 20 variables, 500 groups
sparse looks good, even better than bincount

In [69]: %timeit a = sparse.csr_matrix((data, g, np.arange(len(g)+1)))
1000 loops, best of 3: 732 us per loop

In [64]: a
Out[64]:
<200000x500 sparse matrix of type '<type 'numpy.int8'>'
        with 200000 stored elements in Compressed Sparse Row format>

In [65]: %timeit x.T * a    #sparse
100 loops, best of 3: 6.28 ms per loop

In [66]: %timeit np.array([np.bincount(g, weights=x[:,col])
                           for col in range(x.shape[1])])
10 loops, best of 3: 57.5 ms per loop

In [67]: %timeit np.dot(xT, indi)
1 loops, best of 3: 1.29 s per loop

In [72]: %timeit for cat in u: x[g==cat].sum(0)
1 loops, best of 3: 635 ms per loop

In [68]: indi.shape
Out[68]: (200000, 500)

In [70]: x.shape
Out[70]: (200000, 20)

In [73]: res_dot = np.dot(xT, indi)

In [74]: res_sparse = x.T * a

In [75]: np.max(np.abs(res_dot - res_sparse))
Out[75]: 0.0

nice

Josef

From tloramus at gmail.com  Wed Nov 30 17:07:49 2011
From: tloramus at gmail.com (Miha Marolt)
Date: Wed, 30 Nov 2011 23:07:49 +0100
Subject: [SciPy-User] No Scipy 0.10 reference manual
Message-ID: <4ED6A935.4040506@gmail.com>

There is no link to the Scipy 0.10 reference guide on the documentation
page (http://docs.scipy.org/doc/).