From gregor.thalhammer at gmail.com Fri Jun 1 04:16:59 2012 From: gregor.thalhammer at gmail.com (Gregor Thalhammer) Date: Fri, 1 Jun 2012 10:16:59 +0200 Subject: [SciPy-User] leastsq - When to scale covariance matrix by reduced chi square for confidence interval estimation In-Reply-To: References: Message-ID: <8A8AD292-FD1D-46F1-B49C-4E6FEF357C37@gmail.com> Am 1.6.2012 um 05:50 schrieb Markus Baden: > Hi List, > > I'm trying to get my head around, when to scale the covariance matrix by the reduced chi square of the problem for getting an estimate of the error of a parameter obtained via fitting. I'm kinda stuck and would appreciate any pointers to an answer. From the documentation of scipy.optimize.leastsq and scipy.curve_fit, as well as from some old threads on this mailing list [1, 2] it seems the procedure in scipy is the following > > 1) Create estimate of the Hessian matrix based on the Jacobian at the final value, which is called the covariance matrix cov_x and is the curvature of the fitting parameters around the minimum > > 2) Calculate the reduced chi square, red_chi_2 = sum( (w_i *(y_i - f_i))**2 ) / dof, where w_i are the weights, y_i the data, f_i the fit and dof the degrees of freedom (number of knobs) > > 3) Get parameter estimates by calculating sqrt(diag(cov_x) * red_chi_2) > > 4) Scale confidence interval by appropriate value of student t distribution, e.g. when predicting the standard deviation of a single value just *= 1 > > So far, so good. However in the literature [3, 4] I often find that steps 2 and 3 are skipped when the data is weighted by errors in the individual observation. Obviously for a good fit with red_chi_2 = 1 both ways of getting an error are the same. [3] and [4] caution that the method they are using assume among others normality and a reduced chi square of about 1 and discourage the use of estimating the error in the fit for bad fits. However it seems that the method currently in scipy somehow is more robust. Take for example data similiar to the one I am currently working with [5]. The fit has a reduced chi square of about one, and hence the errors of both the scipy method and the literature method agree. If I make my reduced chi square worse by scaling the error bars, the method in the literature gives either very, very small errors or very, very large ones. The scipy method however always produces about the same error estimate. Here is the output of [5] If you have knowledge about the statistical errors of your data, then skipping step 2 and 3 is the recommended, and you can use the chi square to assess the validity of the fit and your assumptions about the errors. On the other hand, if you have insufficient knowledge about the errors, you can use the reduced chi square as an estimate for the variance of your data (at least under the assumption that the error is the same for all data points). This is the idea behind steps 2 and 3. > Now in the particular problem I am working at, we have a couple of fits like [5] and some of them have a slightly worse reduced chi square of say about 1.4 or 0.7. At this point the two methods start to deviate and I am wondering which would be the correct way of quoting the errors estimated from the fit. Even a basic reference to some text book that explains the method used in scipy would be very helpful. I didn't look at your data, but I guess that these values of the reduced chi square are still in range such that they are not a significant deviation from the expected value of one. 
The chi-squared distribution is rather broad. So I would omit steps 2 and 3. Only if you have good reasons not to trust your assumptions about the errors of the data, then apply steps 2 and 3. Gregor From markus.baden at gmail.com Fri Jun 1 05:21:53 2012 From: markus.baden at gmail.com (Markus Baden) Date: Fri, 1 Jun 2012 17:21:53 +0800 Subject: [SciPy-User] leastsq - When to scale covariance matrix by reduced chi square for confidence interval estimation In-Reply-To: <8A8AD292-FD1D-46F1-B49C-4E6FEF357C37@gmail.com> References: <8A8AD292-FD1D-46F1-B49C-4E6FEF357C37@gmail.com> Message-ID: Hi Gregor, Thanks for the fast reply. If you have knowledge about the statistical errors of your data, then > skipping step 2 and 3 is the recommended, and you can use the chi square to > assess the validity of the fit and your assumptions about the errors. On > the other hand, if you have insufficient knowledge about the errors, you > can use the reduced chi square as an estimate for the variance of your data > (at least under the assumption that the error is the same for all data > points). This is the idea behind steps 2 and 3. > I just want to get that straight: So basically in the case where I either don't have errors, or I don't trust them, multiplying the covariance by the reduced chi square amounts to "normalizing" the covariance such that the fit would have a chi square of one (?). Maybe your point could go into the docs for curve_fit... or there could be a comment about standard procedure a bit like in origin ( http://www.originlab.com/www/support/resultstech.aspx?ID=452&language=English&Version=All ) > > > Now in the particular problem I am working at, we have a couple of fits > like [5] and some of them have a slightly worse reduced chi square of say > about 1.4 or 0.7. At this point the two methods start to deviate and I am > wondering which would be the correct way of quoting the errors estimated > from the fit. Even a basic reference to some text book that explains the > method used in scipy would be very helpful. > > I didn't look at your data, but I guess that these values of the reduced > chi square are still in range such that they are not a significant > deviation from the expected value of one. The chi-squared distribution is > rather broad. So I would omit steps 2 and 3. Only if you have good reasons > not to trust your assumptions about the errors of the data, then apply > steps 2 and 3. > We looked at which part of the CDF these values are and they are still ok. And our errors are all inferred from measurements, so we trust them quite a bit. We use the fitting described to obtain a particular property of an ion via spectroscopy... that's also why we want to get our errors on that property correct :) In any case, thanks again, Markus -------------- next part -------------- An HTML attachment was scrubbed... URL: From gregor.thalhammer at gmail.com Fri Jun 1 08:21:15 2012 From: gregor.thalhammer at gmail.com (Gregor Thalhammer) Date: Fri, 1 Jun 2012 14:21:15 +0200 Subject: [SciPy-User] leastsq - When to scale covariance matrix by reduced chi square for confidence interval estimation In-Reply-To: References: <8A8AD292-FD1D-46F1-B49C-4E6FEF357C37@gmail.com> Message-ID: Am 1.6.2012 um 11:21 schrieb Markus Baden: > Hi Gregor, > > Thanks for the fast reply. > > If you have knowledge about the statistical errors of your data, then skipping step 2 and 3 is the recommended, and you can use the chi square to assess the validity of the fit and your assumptions about the errors. 
On the other hand, if you have insufficient knowledge about the errors, you can use the reduced chi square as an estimate for the variance of your data (at least under the assumption that the error is the same for all data points). This is the idea behind steps 2 and 3. > > I just want to get that straight: So basically in the case where I either don't have errors, or I don't trust them, multiplying the covariance by the reduced chi square amounts to "normalizing" the covariance such that the fit would have a chi square of one (?). Maybe your point could go into the docs for curve_fit... or there could be a comment about standard procedure a bit like in origin (http://www.originlab.com/www/support/resultstech.aspx?ID=452&language=English&Version=All) Yes, I think you correctly got the idea. > > > > Now in the particular problem I am working at, we have a couple of fits like [5] and some of them have a slightly worse reduced chi square of say about 1.4 or 0.7. At this point the two methods start to deviate and I am wondering which would be the correct way of quoting the errors estimated from the fit. Even a basic reference to some text book that explains the method used in scipy would be very helpful. > > I didn't look at your data, but I guess that these values of the reduced chi square are still in range such that they are not a significant deviation from the expected value of one. The chi-squared distribution is rather broad. So I would omit steps 2 and 3. Only if you have good reasons not to trust your assumptions about the errors of the data, then apply steps 2 and 3. > > We looked at which part of the CDF these values are and they are still ok. And our errors are all inferred from measurements, so we trust them quite a bit. We use the fitting described to obtain a particular property of an ion via spectroscopy... that's also why we want to get our errors on that property correct :) > As e.g. described nicely in http://mljohnson.pharm.virginia.edu/pdfs/174.pdf you have to be careful about the parameter error estimates obtained by this procedure. In general the obtained results are too optimistic. Gregor -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Fri Jun 1 12:02:49 2012 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 1 Jun 2012 12:02:49 -0400 Subject: [SciPy-User] leastsq - When to scale covariance matrix by reduced chi square for confidence interval estimation In-Reply-To: References: <8A8AD292-FD1D-46F1-B49C-4E6FEF357C37@gmail.com> Message-ID: On Fri, Jun 1, 2012 at 8:21 AM, Gregor Thalhammer wrote: > > Am 1.6.2012 um 11:21 schrieb Markus Baden: > > Hi Gregor, > > Thanks for the fast reply. > >> If you have knowledge about the statistical errors of your data, then >> skipping step 2 and 3 is the recommended, and you can use the chi square to >> assess the validity of the fit and your assumptions about the errors. On the >> other hand, if you have insufficient knowledge about the errors, you can use >> the reduced chi square as an estimate for the variance of your data (at >> least under the assumption that the error is the same for all data points). when you use the weighting and the estimated sigma, you still leave the relative weighting unchanged. So we don't really assume that the error is the same for all data points. >> This is the idea behind steps 2 and 3. 
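[Editor's note: a minimal sketch of steps 1-4 above, plus the chi-square sanity check under discussion. The exponential model, starting values and error bars are invented for illustration; only the recipe itself comes from the thread.]

----
import numpy as np
from scipy.optimize import leastsq
from scipy import stats

def model(p, x):
    return p[0] * np.exp(-p[1] * x) + p[2]

def residuals(p, x, y, yerr):
    # weighted residuals w_i * (y_i - f_i), with w_i = 1 / sigma_i
    return (y - model(p, x)) / yerr

x = np.linspace(0.0, 10.0, 50)
yerr = 0.1 * np.ones_like(x)
y = model([2.5, 0.6, 0.3], x) + yerr * np.random.randn(len(x))

popt, cov_x, info, mesg, ier = leastsq(residuals, [1.0, 1.0, 0.0],
                                       args=(x, y, yerr), full_output=True)
# cov_x is None if the Jacobian at the solution is singular

dof = len(x) - len(popt)                       # degrees of freedom
red_chi2 = np.sum(info['fvec'] ** 2) / dof     # step 2

perr_given_errors = np.sqrt(np.diag(cov_x))           # trust yerr as stated
perr_scaled = np.sqrt(np.diag(cov_x) * red_chi2)      # steps 2 + 3
# step 4 (the Student-t factor) is ~1 for one-sigma errors and many dof

# how broad the chi-square distribution is for this dof: the central 95%
# range of reduced chi square expected if the model and the errors are right
red_chi2_range = stats.chi2.ppf([0.025, 0.975], dof) / dof
----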
> > > I just want to get that straight: So basically in the case where I either > don't have errors, or I don't trust them, multiplying the covariance by the > reduced chi square amounts to "normalizing" the covariance such that the fit > would have a chi square of one (?). Maybe your point could go into the docs > for curve_fit... or there could be a comment about standard procedure a bit > like in origin > (http://www.originlab.com/www/support/resultstech.aspx?ID=452&language=English&Version=All) > > > Yes, I think you correctly got the idea. My interpretation is that you have to trust your error estimate more than the error estimate from the reduced chi square. In the two extreme scaled_x_errors cases, your belief about the errors is far from the model fit. So either the residual sum of squares is a bad measurement of the error (outliers, ....) or what you think the measurement errors should be doesn't agree with the data. In the example the measurement error is translated from x_errors to y_errors by a linear approximation (at least according to the comments). If this approximation is not good, then it introduces another type of error, that would create a discrepancy between your measurement errors and the reduced chisquare. > > >> >> >> > Now in the particular problem I am working at, we have a couple of fits >> > like [5] and some of them have a slightly worse reduced chi square of say >> > about 1.4 or 0.7. At this point the two methods start to deviate and I am >> > wondering which would be the correct way of quoting the errors estimated >> > from the fit. Even a basic reference to some text book that explains the >> > method used in scipy would be very helpful. >> >> I didn't look at your data, but I guess that these values of the reduced >> chi square are still in range such that they are not a significant deviation >> from the expected value of one. The chi-squared distribution is rather >> broad. So I would omit steps 2 and 3. Only if you have good reasons not to >> trust your assumptions about the errors of the data, then apply steps 2 and >> 3. > > > We looked at which part of the CDF these values are and they are still ok. > And our errors are all inferred from measurements, so we trust them quite a > bit. We use the fitting described to obtain a particular property of an ion > via spectroscopy... that's also why we want to get our errors on that > property correct :) > > > As e.g. described nicely in?http://mljohnson.pharm.virginia.edu/pdfs/174.pdf > you have to be careful about the parameter error estimates obtained by this > procedure. In general the obtained results are too optimistic. Thanks, helpful example and discussion. I've never seen an estimated curve that fits so well. extra: With the measurement error in x, the least squares estimator is usually biased. But I only know part of the theory for the linear case and don't have much idea about how this will affect the estimates in the non-linear case. Josef > > Gregor > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From josef.pktd at gmail.com Sun Jun 3 07:25:01 2012 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 3 Jun 2012 07:25:01 -0400 Subject: [SciPy-User] ttest returning NaN for 0/0 zero variance? Message-ID: Should we stop guessing what the ttest is with zero variance, and switch to returning NaN in these cases? 
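[Editor's note: for reference, a sketch of how the equal-variance two-sample t statistic degenerates in this case; the helper is illustrative, not scipy's implementation.]

----
import numpy as np

def pooled_t(a, b):
    # textbook equal-variance two-sample t statistic
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(sp2 * (1.0 / na + 1.0 / nb))

pooled_t([0, 0, 1e-15], [0, 0, 0])   # t = 1, as in the R output quoted below
pooled_t([0, 0, 0], [0, 0, 0])       # 0/0: numpy returns nan (with a warning)
----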
http://article.gmane.org/gmane.comp.python.scientific.devel/9622 Initially, IIRC, I tried to avoid the nan, because there was discussion somewhere, like pymvpa mailing list or code, that didn't want a NaN in the results. The question whether 0/0=0 or 0/0=1 in the ttests, comes up about once a year. Since I finally have R open again (reduced output): almost the same > t.test(c(0,0,1e-15), c(0,0,0),var.equal=TRUE) Two Sample t-test data: c(0, 0, 1e-15) and c(0, 0, 0) t = 1, df = 4, p-value = 0.3739 > t.test(c(0,0,1e-100), c(0,0,0),var.equal=TRUE) Two Sample t-test data: c(0, 0, 1e-100) and c(0, 0, 0) t = 1, df = 4, p-value = 0.3739 > t.test(c(0,0,1e-100), c(0,0,1e-50),var.equal=TRUE) Two Sample t-test data: c(0, 0, 1e-100) and c(0, 0, 1e-50) t = -1, df = 4, p-value = 0.3739 > t.test(c(0,0,0,0,0,0,0,0,0,0,0,0,0,0,1e-100), c(0,0,0,0,0,0,0,0,0,0,0,0,0,0,1e-100),var.equal=TRUE) Two Sample t-test data: c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-100) and c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-100) t = 0, df = 28, p-value = 1 > t.test(c(0,0,0,0,0,0,0,0,0,0,0,0,0,0,1e-50), c(0,0,0,0,0,0,0,0,0,0,0,0,0,0,1e-100),var.equal=TRUE) Two Sample t-test data: c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-50) and c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-100) t = 1, df = 28, p-value = 0.3259 > t.test(c(0,0,1e-100), c(0,0,1e-100),var.equal=TRUE) Two Sample t-test data: c(0, 0, 1e-100) and c(0, 0, 1e-100) t = 0, df = 4, p-value = 1 exactly the same > t.test(c(0,0,0), c(0,0,0),var.equal=TRUE) Two Sample t-test data: c(0, 0, 0) and c(0, 0, 0) t = NaN, df = 4, p-value = NA If we don't return NaN, then I'm still in favor of the 0/0=1 solution. Josef From d.sabinasz at googlemail.com Sun Jun 3 07:20:23 2012 From: d.sabinasz at googlemail.com (Daniel Sabinasz) Date: Sun, 03 Jun 2012 13:20:23 +0200 Subject: [SciPy-User] scipy.stats: Sampling from an arbitrary probability distribution Message-ID: <4FCB4877.60200@googlemail.com> Hi all, I need to sample a random number from a distribution whose probability density function I specify myself. Is that possible using scipy.stats? Here is what I have already: import scipy.stats as st class my_pdf(st.rv_continuous): def _pdf(self,x): return x*x/10.0 my_cv = my_pdf(name='my_pdf') Can I now somehow sample a random number from my_cv? Thanks in advance! Daniel From josef.pktd at gmail.com Sun Jun 3 10:07:52 2012 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 3 Jun 2012 10:07:52 -0400 Subject: [SciPy-User] scipy.stats: Sampling from an arbitrary probability distribution In-Reply-To: <4FCB4877.60200@googlemail.com> References: <4FCB4877.60200@googlemail.com> Message-ID: On Sun, Jun 3, 2012 at 7:20 AM, Daniel Sabinasz wrote: > Hi all, > > I need to sample a random number from a distribution whose probability > density function I specify myself. Is that possible using scipy.stats? > > Here is what I have already: > > import scipy.stats as st > > class my_pdf(st.rv_continuous): > ? ? def _pdf(self,x): > ? ? ? ? return x*x/10.0 > > my_cv = my_pdf(name='my_pdf') > > Can I now somehow sample a random number from my_cv? you can define your own distribution with the pdf, then the generic methods will calculate the rvs http://stackoverflow.com/questions/10678546/creating-new-distributions-in-scipy It will require a large number of calls to generate random numbers, numerically calculating ppf and cdf, so this won't be efficient. Using a linear interpolated ppf will be faster, and might be accurate enough. 
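[Editor's note: a small sketch along those lines. The x**2 density is assumed here to be restricted to the support [-1, 1] and normalized to 3*x**2/2 there; the class name is made up. Defining only _pdf already works (the generic machinery inverts the cdf numerically, which is slow); for this density the cdf and ppf have closed forms, which makes rvs fast.]

----
import numpy as np
import scipy.stats as st

class my_pdf_gen(st.rv_continuous):
    def _pdf(self, x):
        return 1.5 * x * x                    # normalized on [-1, 1]
    def _cdf(self, x):
        return 0.5 * (x ** 3 + 1.0)
    def _ppf(self, q):
        u = 2.0 * q - 1.0
        return np.sign(u) * np.abs(u) ** (1.0 / 3.0)   # real cube root

# a and b set the support; without them the generic methods integrate
# over (-inf, inf), where x**2 is not a proper density
my_cv = my_pdf_gen(a=-1.0, b=1.0, name='my_pdf')
sample = my_cv.rvs(size=10000)
----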
(I still need to write an example for trying out the ppf interpolation version.) Josef > > Thanks in advance! > > Daniel > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From njs at pobox.com Sun Jun 3 10:18:20 2012 From: njs at pobox.com (Nathaniel Smith) Date: Sun, 3 Jun 2012 15:18:20 +0100 Subject: [SciPy-User] scipy.stats: Sampling from an arbitrary probability distribution In-Reply-To: <4FCB4877.60200@googlemail.com> References: <4FCB4877.60200@googlemail.com> Message-ID: On Sun, Jun 3, 2012 at 12:20 PM, Daniel Sabinasz wrote: > Hi all, > > I need to sample a random number from a distribution whose probability > density function I specify myself. Is that possible using scipy.stats? > > Here is what I have already: > > import scipy.stats as st > > class my_pdf(st.rv_continuous): > ? ? def _pdf(self,x): > ? ? ? ? return x*x/10.0 > > my_cv = my_pdf(name='my_pdf') > > Can I now somehow sample a random number from my_cv? The easiest and fastest way to generate random variates from a given distribution is to calculate the quantile function and then feed it random samples from the uniform distribution[1]. The catch is that computing the quantile function requires that you be able to calculate the integral of your PDF (the CDF), and then invert the CDF, so this method only applies for PDFs for which this is possible. But if your PDF is really as simple as the one in your example then this is a good approach :-). [1] https://en.wikipedia.org/wiki/Inverse_transform_sampling -- Nathaniel From josef.pktd at gmail.com Sun Jun 3 11:21:04 2012 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 3 Jun 2012 11:21:04 -0400 Subject: [SciPy-User] scipy.stats: Sampling from an arbitrary probability distribution In-Reply-To: References: <4FCB4877.60200@googlemail.com> Message-ID: On Sun, Jun 3, 2012 at 10:07 AM, wrote: > On Sun, Jun 3, 2012 at 7:20 AM, Daniel Sabinasz > wrote: >> Hi all, >> >> I need to sample a random number from a distribution whose probability >> density function I specify myself. Is that possible using scipy.stats? >> >> Here is what I have already: >> >> import scipy.stats as st >> >> class my_pdf(st.rv_continuous): >> ? ? def _pdf(self,x): >> ? ? ? ? return x*x/10.0 >> >> my_cv = my_pdf(name='my_pdf') I didn't look at the example before x**2 makes a proper pdf only with a bounded support, so, when creating the instance, then the bounds .a and .b need to be given as keyword arguments. (or set as attributes) It also looks like ppf should have a closed form expression. (my algebra isn't good enough today to figure out bounds) Josef >> >> Can I now somehow sample a random number from my_cv? > > you can define your own distribution with the pdf, then the generic > methods will calculate the rvs > > http://stackoverflow.com/questions/10678546/creating-new-distributions-in-scipy > > It will require a large number of calls to generate random numbers, > numerically calculating ppf and cdf, so this won't be efficient. > Using a linear interpolated ppf will be faster, and might be accurate enough. > > (I still need to write an example for trying out the ppf interpolation version.) > > Josef > >> >> Thanks in advance! 
>> >> Daniel >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user From josef.pktd at gmail.com Sun Jun 3 13:51:49 2012 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 3 Jun 2012 13:51:49 -0400 Subject: [SciPy-User] scipy.stats: Sampling from an arbitrary probability distribution In-Reply-To: References: <4FCB4877.60200@googlemail.com> Message-ID: On Sun, Jun 3, 2012 at 11:21 AM, wrote: > On Sun, Jun 3, 2012 at 10:07 AM, ? wrote: >> On Sun, Jun 3, 2012 at 7:20 AM, Daniel Sabinasz >> wrote: >>> Hi all, >>> >>> I need to sample a random number from a distribution whose probability >>> density function I specify myself. Is that possible using scipy.stats? >>> >>> Here is what I have already: >>> >>> import scipy.stats as st >>> >>> class my_pdf(st.rv_continuous): >>> ? ? def _pdf(self,x): >>> ? ? ? ? return x*x/10.0 >>> >>> my_cv = my_pdf(name='my_pdf') > > I didn't look at the example before > > x**2 makes a proper pdf only with a bounded support, so, when creating > the instance, then the bounds .a and .b need to be given as keyword > arguments. (or set as attributes) > > It also looks like ppf should have a closed form expression. > > (my algebra isn't good enough today to figure out bounds) I picked [-1, 1] for the bounds of the distribution, example with explicit ppf is much faster https://gist.github.com/2864348 Josef > > Josef > > > >>> >>> Can I now somehow sample a random number from my_cv? >> >> you can define your own distribution with the pdf, then the generic >> methods will calculate the rvs >> >> http://stackoverflow.com/questions/10678546/creating-new-distributions-in-scipy >> >> It will require a large number of calls to generate random numbers, >> numerically calculating ppf and cdf, so this won't be efficient. >> Using a linear interpolated ppf will be faster, and might be accurate enough. >> >> (I still need to write an example for trying out the ppf interpolation version.) >> >> Josef >> >>> >>> Thanks in advance! >>> >>> Daniel >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user From d.sabinasz at googlemail.com Sun Jun 3 14:43:26 2012 From: d.sabinasz at googlemail.com (Daniel Sabinasz) Date: Sun, 03 Jun 2012 20:43:26 +0200 Subject: [SciPy-User] scipy.stats: Sampling from an arbitrary probability distribution In-Reply-To: References: <4FCB4877.60200@googlemail.com> Message-ID: <4FCBB04E.5020405@googlemail.com> Thank you Josef and Nathaniel. That really helped! From sturla at molden.no Mon Jun 4 09:21:53 2012 From: sturla at molden.no (Sturla Molden) Date: Mon, 04 Jun 2012 15:21:53 +0200 Subject: [SciPy-User] scipy.stats: Sampling from an arbitrary probability distribution In-Reply-To: <4FCB4877.60200@googlemail.com> References: <4FCB4877.60200@googlemail.com> Message-ID: <4FCCB671.5080705@molden.no> On 03.06.2012 13:20, Daniel Sabinasz wrote: > Hi all, > > I need to sample a random number from a distribution whose probability > density function I specify myself. Is that possible using scipy.stats? Sampling a general distribution is typically an MCMC problem, that e.g. can be solved with the Metropolis-Hastings sampler. http://en.wikipedia.org/wiki/Metropolis%E2%80%93Hastings_algorithm Because of its recursive nature, a Markov chain like this is better written in Cython, or you can use NumPy to run multiple chains in parallel. 
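[Editor's note: a minimal NumPy sketch of that idea, i.e. random-walk Metropolis with several independent chains updated in parallel, targeting the x**2 density on [-1, 1] from the original question (left unnormalized, since Metropolis only needs ratios). Step size, chain count and burn-in are arbitrary choices.]

----
import numpy as np

def metropolis(pdf, n_steps, n_chains=100, x0=0.5, step=0.5):
    # random-walk Metropolis for a 1-D (possibly unnormalized) density,
    # with n_chains independent chains advanced in lock-step
    x = x0 * np.ones(n_chains)
    px = pdf(x)
    out = np.empty((n_steps, n_chains))
    for i in range(n_steps):
        prop = x + step * np.random.randn(n_chains)
        pprop = pdf(prop)
        accept = np.random.rand(n_chains) * px < pprop   # u < pprop/px, no 0/0
        x[accept] = prop[accept]
        px[accept] = pprop[accept]
        out[i] = x
    return out

target = lambda x: np.where(np.abs(x) <= 1.0, x * x, 0.0)
samples = metropolis(target, 2000)[500:].ravel()   # drop burn-in, pool chains
----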
(I depends on how many samples you need, of course, anything below a million should be fast enough in Python.) You might also take a look at PyMCMC: https://github.com/rdenham/pymcmc Sturla From kwgoodman at gmail.com Mon Jun 4 13:09:05 2012 From: kwgoodman at gmail.com (Keith Goodman) Date: Mon, 4 Jun 2012 10:09:05 -0700 Subject: [SciPy-User] [ANN] Bottleneck 0.6.0 released Message-ID: Bottleneck is a collection of fast NumPy array functions written in Cython. It contains functions like median, nanargmax, move_median, rankdata, partsort, replace, anynan. The sixth release of bottleneck adds four new functions and, thanks to Dougal Sutherland, now supports Python 3.2. New functions: - replace(arr, old, new), e.g, replace(arr, np.nan, 0) - nn(arr, arr0, axis) nearest neighbor and its index of 1d arr0 in 2d arr - anynan(arr, axis) faster alternative to np.isnan(arr).any(axis) - allnan(arr, axis) faster alternative to np.isnan(arr).all(axis) Enhancements: - Python 3.2 support (may work on earlier versions of Python 3) - C files are now generated with Cython 0.16 instead of 0.14.1 - Upgrade numpydoc from 0.3.1 to 0.4 to support Sphinx 1.0.1 Breaks from 0.5.0: - Support for Python 2.5 dropped - Default axis for benchmark suite is now axis=1 (was 0) Bug fixes: - #31 Confusing error message in partsort and argpartsort - #32 Update path in MANIFEST.in - #35 Wrong output for very large (2**31) input arrays download ? http://pypi.python.org/pypi/Bottleneck docs ? http://berkeleyanalytics.com/bottleneck code ? http://github.com/kwgoodman/bottleneck mailing list ? http://groups.google.com/group/bottle-neck From vanforeest at gmail.com Mon Jun 4 16:07:11 2012 From: vanforeest at gmail.com (nicky van foreest) Date: Mon, 4 Jun 2012 22:07:11 +0200 Subject: [SciPy-User] scipy.stats: Sampling from an arbitrary probability distribution In-Reply-To: <4FCCB671.5080705@molden.no> References: <4FCB4877.60200@googlemail.com> <4FCCB671.5080705@molden.no> Message-ID: Hi Maybe this article is of help http://www.google.com/url?sa=t&rct=j&q=plehn%20bruns&source=web&cd=2&ved=0CFoQFjAB&url=http%3A%2F%2Fwww.logistics-journal.de%2Fnot-reviewed%2F2005%2F7%2Fapproximation%2FApproximation_and_Computation_of_Random_Variables_using_Finite_Elements.pdf&ei=CRXNT4eyC6ik4gSoqYWPDA&usg=AFQjCNFLF-Zr3w6opNHWsM3HYGUaTqK8vA Nicky On Jun 4, 2012 3:22 PM, "Sturla Molden" wrote: > On 03.06.2012 13:20, Daniel Sabinasz wrote: > > Hi all, > > > > I need to sample a random number from a distribution whose probability > > density function I specify myself. Is that possible using scipy.stats? > > Sampling a general distribution is typically an MCMC problem, that e.g. > can be solved with the Metropolis-Hastings sampler. > > http://en.wikipedia.org/wiki/Metropolis%E2%80%93Hastings_algorithm > > Because of its recursive nature, a Markov chain like this is better > written in Cython, or you can use NumPy to run multiple chains in > parallel. (I depends on how many samples you need, of course, anything > below a million should be fast enough in Python.) > > You might also take a look at PyMCMC: > https://github.com/rdenham/pymcmc > > > Sturla > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pybokeh at gmail.com Mon Jun 4 21:22:26 2012 From: pybokeh at gmail.com (pybokeh) Date: Mon, 4 Jun 2012 21:22:26 -0400 Subject: [SciPy-User] why does scipy.stats.t.ppf return negative values? Message-ID: I am reviewing sample problems in my statistics book from college. When I look up the percentage point for t-distribution in the appendix of the book, it lists positive values. When I use say for example: stats.t.ppf(0.025,15), it shows -2.131, whereas, my stat book shows 2.131. I tried other parameters and I get same values as what my stat book shows, but again the values it returns are negative. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsseabold at gmail.com Mon Jun 4 21:42:40 2012 From: jsseabold at gmail.com (Skipper Seabold) Date: Mon, 4 Jun 2012 21:42:40 -0400 Subject: [SciPy-User] why does scipy.stats.t.ppf return negative values? In-Reply-To: References: Message-ID: On Mon, Jun 4, 2012 at 9:22 PM, pybokeh wrote: > I am reviewing sample problems in my statistics book from college.? When I > look up the percentage point for t-distribution in the appendix of the book, > it lists positive values.? When I use say for example: > stats.t.ppf(0.025,15), it shows -2.131, whereas, my stat book shows 2.131. > I tried other parameters and I get same values as what my stat book shows, > but again the values it returns are negative. > You're in the left tail of the distribution. It's symmetric about zero. 2.5 % of zero mean, unit variance numbers that follow Student's t distribution are less than -2.131. 97.5% are less than 2.131. nobs = 100000. np.random.seed(12345) rvs = stats.t(15).rvs(nobs) print np.sum(rvs References: <4FCB4877.60200@googlemail.com> <4FCCB671.5080705@molden.no> Message-ID: <70A853FD-DE88-4DEA-A60D-635DDE4E747D@googlemail.com> On Jun 4, 2012, at 3:21 PM, Sturla Molden wrote: > On 03.06.2012 13:20, Daniel Sabinasz wrote: >> Hi all, >> >> I need to sample a random number from a distribution whose probability >> density function I specify myself. Is that possible using scipy.stats? > > Sampling a general distribution is typically an MCMC problem, that e.g. > can be solved with the Metropolis-Hastings sampler. > > http://en.wikipedia.org/wiki/Metropolis%E2%80%93Hastings_algorithm > > Because of its recursive nature, a Markov chain like this is better > written in Cython, or you can use NumPy to run multiple chains in > parallel. (I depends on how many samples you need, of course, anything > below a million should be fast enough in Python.) > > You might also take a look at PyMCMC: > https://github.com/rdenham/pymcmc > > > Sturla > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user If you are willing to look outside of scipy, there are nice methods to generate random numbers from arbitrary distributions in ROOT, a C++ physics data analysis package with python bindings: http://root.cern.ch/drupal/content/pyroot import ROOT # Define the function and limits you want: # TF1::TF1(const char* name, const char* formula, Double_t xmin = 0, Double_t xmax = 1) f = ROOT.TF1("my_pdf", "x * x / 10.", -1, 1) # Generate 100 random numbers from that distribution [f.GetRandom() for _ in range(100)] You can sample from an arbitrary 2D distribution as well: # TF2::TF2(const char* name, const char* formula, Double_t xmin = 0, Double_t xmax = 1, Double_t ymin = 0, Double_t ymax = 1) f2 = ROOT.TF2("my_pdf2", "x * x / 10. 
+ pow(y, 4)", -1, 1, 3, 4) x, y = ROOT.Double(), ROOT.Double() f2.GetRandom2(x, y) If you only want a histogram of values, not the array, you can avoid the python call overhead: # TH1D::TH1D(const char* name, const char* title, Int_t nbinsx, Double_t xlow, Double_t xup) h = ROOT.TH1D("my_hist", "my_hist", 1000, -1, 1) # void TH1::FillRandom(const char* fname, Int_t ntimes = 5000) In [49]: %timeit h.FillRandom("my_pdf", int(1e6)) 10 loops, best of 3: 171 ms per loop In [48]: %timeit [f.GetRandom() for _ in range(int(1e6))] 1 loops, best of 3: 2.62 s per loop Here you can see the method used (parabolic approximations): http://root.cern.ch/root/html/src/TF1.cxx.html#gYdi6C Even if most users don't want to install ROOT, it might be worth comparing the accuracy / speed to the method in scipy. ROOT also contains the UNURAN package, which implements several methods to sample from arbitrary one- or multi-dimensional distributions. http://root.cern.ch/root/html/MATH_UNURAN_Index.html http://statmath.wu.ac.at/unuran/ Unfortunately it's GPL and doesn't have python bindings itself as far as I know. Christoph From bouloumag at gmail.com Wed Jun 6 15:13:24 2012 From: bouloumag at gmail.com (Darcoux Christine) Date: Wed, 6 Jun 2012 15:13:24 -0400 Subject: [SciPy-User] 2-D data interpolation In-Reply-To: References: Message-ID: 2012/5/31 Nadav Horesh : > You can try also scipy.interpolate.griddata (not the pylab's griddata!) > > ? Nadav > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > I tried this function and it works very similar to the interp2 function of matlab. The only problem is that my data are very peaky and the cubic version of griddata gives me big over/under-shoots. The matlab function does not gives me such behavior. Is there another high 2D order interpolator that could be used in this case ? Octave has a nice "pchip" option for interp2 that use piecewise cubic Hermite interpolating polynomial, but unfortunatly griddata does not have such option. Thanks Christine From pav at iki.fi Wed Jun 6 18:10:55 2012 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 07 Jun 2012 00:10:55 +0200 Subject: [SciPy-User] 2-D data interpolation In-Reply-To: References: Message-ID: 06.06.2012 21:13, Darcoux Christine kirjoitti: [clip] > I tried this function and it works very similar to the interp2 > function of matlab. The only problem is that my data are very peaky > and the cubic version of griddata gives me big over/under-shoots. The > matlab function does not gives me such behavior. > > Is there another high 2D order interpolator that could be used in this case ? > > Octave has a nice "pchip" option for interp2 that use piecewise cubic > Hermite interpolating polynomial, but unfortunatly griddata does not > have such option. Ah, your data is actually on a regular grid rather than scattered. This means n-dim interpolation can be built up from 1-D interpolation as a tensor product. Scipy does have `pchip` for 1-D, but currently it's a bit clumsy to use for 2-D data. You can build up tensor product 2-D pchip interpolation like this: ---- import numpy as np from scipy.interpolate import pchip def pchip2(x, y, z, xi, yi): """P-chip interpolation on 2-D. 
x, y, xi, yi should be 1-D and z.shape == (len(x), len(y))""" z2 = np.empty([len(xi), len(y)], dtype=z.dtype) for k in xrange(len(y)): z2[:,k] = pchip(x, z[:,k])(xi) z3 = np.empty([len(xi), len(yi)], dtype=z.dtype) for k in xrange(len(xi)): z3[k,:] = pchip(y, z2[k,:])(yi) return z3 # Random test case import matplotlib.pyplot as plt x = 2*pi*np.random.rand(20) x.sort() y = 2*pi*np.random.rand(10) y.sort() z = np.cos(x[:,None]) * np.sin(y) plt.figure(1) plt.pcolor(x, y, z.T) xi = np.linspace(x.min(), x.max(), 200) yi = np.linspace(y.min(), y.max(), 300) zi = pchip2(x, y, z, xi, yi) plt.figure(2) plt.pcolor(xi, yi, zi.T) ---- It's not the fastest thing in the world, but maybe helps? The same idea generalizes to 3 and any higher dimensions. The pchip idea does not directly generalize to true *scattered* data interpolation, which is what `griddata` does. There, the problem is quite a bit more tricky, but likely there is a publication out there that discusses how to do monotonicity-preserving interpolation on an arbitrary network of triangles... -- Pauli Virtanen From hjgreenberg at gmail.com Wed Jun 6 13:15:21 2012 From: hjgreenberg at gmail.com (Harvey) Date: Wed, 6 Jun 2012 10:15:21 -0700 (PDT) Subject: [SciPy-User] mac os x installation Message-ID: <99a1282e-031d-467b-95f1-47b931618ad7@f8g2000pbf.googlegroups.com> I have python 2.6.1 on mac os x and it has numpy. I downloaded scipy...dmg and it wouldn't install, giving msg: scipy 0.10.1 can't be installed on this disk. scipy requires python.org Python 2.6 to install. Does this mean I must install a different version of python? I am concerned with conflicts with my current version, which I installed to run with other python code. It is not the std os x version, but it is not in the default location. The installer did not give me a chance to define its location. Any help would be appreciated. From boyd.blackwell at anu.edu.au Thu Jun 7 05:05:18 2012 From: boyd.blackwell at anu.edu.au (bdb112) Date: Thu, 7 Jun 2012 02:05:18 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] sinc interpolation In-Reply-To: References: <4B06FB5E.8070806@gmail.com> Message-ID: <33974580.post@talk.nabble.com> Corrected typo and improved code: typo: should be yy = resample(y,100) Improved code: from scipy.signal import resample from numpy import linspace, sin, pi from pylab import plot, show # replace 4*pi to see effect of non-periodic function (e.g. at either end) x = linspace(0,4*pi,10,endpoint=False) y = sin(x) # if you supply t, you get the interpolated t back as well. (yy,xx) = resample(y, 100, t = x) plot(x,y,'ro', xx, yy) show() Travis Oliphant-5 wrote: > > > > On Nov 20, 2009, at 2:26 PM, David Trem wrote: > >> Hello, >> >> Is sinc interpolation available in Scipy ? > > Yes, use scipy.signal.resample which uses a Fourier method to > downsample or upsample a signal: > > from scipy.signal import resample > from numpy import r_, sin > from pylab import plot > > x = r_[0:10] > y = sin(x) > yy = resample(x, 100) > > # This is a bit tricky to get the x-samples right > xx = r_[0:10:101j][:-1] > > plot(x,y,'ro', xx, yy) > > > -Travis > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- View this message in context: http://old.nabble.com/sinc-interpolation-tp26449760p33974580.html Sent from the Scipy-User mailing list archive at Nabble.com. 
From irina.nudelman at gmail.com Thu Jun 7 13:19:23 2012 From: irina.nudelman at gmail.com (Irina Nudelman) Date: Thu, 7 Jun 2012 13:19:23 -0400 Subject: [SciPy-User] scipy install on Mac Message-ID: Hello, I have trouble installing Scipy on my Mac. I downloaded this: scipy-0.10.1-py2.6-python.org-macosx10.3.dmg and clicked and tried to install, but it told me it needs python 2.6. However, that's the version I have already. When I do python, I see: Python 2.6.1 (r261:67515, Jun 24 2010, 21:47:49) [GCC 4.2.1 (Apple Inc. build 5646)] on darwin Type "help", "copyright", "credits" or "license" for more information. Can you help me figure out how to install it? Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Thu Jun 7 16:12:41 2012 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Thu, 7 Jun 2012 22:12:41 +0200 Subject: [SciPy-User] scipy install on Mac In-Reply-To: References: Message-ID: On Thu, Jun 7, 2012 at 7:19 PM, Irina Nudelman wrote: > Hello, > > I have trouble installing Scipy on my Mac. I downloaded this: > scipy-0.10.1-py2.6-python.org-macosx10.3.dmg and clicked and tried to > install, but it told me it needs python 2.6. However, that's the version I > have already. When I do python, I see: > > Python 2.6.1 (r261:67515, Jun 24 2010, 21:47:49) > [GCC 4.2.1 (Apple Inc. build 5646)] on darwin > Type "help", "copyright", "credits" or "license" for more information. > > Can you help me figure out how to install it? > Hi Irina, that is your system Python (installed by Apple, best to leave it alone). You should install Python 2.6 from http://www.python.org/download/releases/2.6.6/, or 2.7 (also from python.org). Then you can use the binary installers of numpy and scipy, in that order. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Thu Jun 7 16:26:36 2012 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Thu, 7 Jun 2012 22:26:36 +0200 Subject: [SciPy-User] mac os x installation In-Reply-To: <99a1282e-031d-467b-95f1-47b931618ad7@f8g2000pbf.googlegroups.com> References: <99a1282e-031d-467b-95f1-47b931618ad7@f8g2000pbf.googlegroups.com> Message-ID: On Wed, Jun 6, 2012 at 7:15 PM, Harvey wrote: > I have python 2.6.1 on mac os x and it has numpy. I downloaded > scipy...dmg and it wouldn't install, giving msg: scipy 0.10.1 can't be > installed on this disk. scipy requires python.org Python 2.6 to > install. Does this mean I must install a different version of > python? I am concerned with conflicts with my current version, which > I installed to run with other python code. It is not the std os x > version, but it is not in the default location. The installer did not > give me a chance to define its location. Any help would be > appreciated. > Your current 2.6.1 is the version installed by OS X itself, which is used by the OS for some applications. It's better to not modify anything about it. Installing a new Python from python.org won't interfere with your system Python, and is as standard as it gets. The numpy/scipy/matplotlib/... binary installers all work only with the Python from python.org. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From irina.nudelman at gmail.com Thu Jun 7 17:09:43 2012 From: irina.nudelman at gmail.com (Irina Nudelman) Date: Thu, 7 Jun 2012 17:09:43 -0400 Subject: [SciPy-User] scipy install on Mac In-Reply-To: References: Message-ID: Thanks Ralf. 
I just tried what you said - I installed Python2.6. Then I ran the scipy installation again from disc image - this time it finished with "install succesfull". However, I still get the error saying there's no module named scipy when I run my code! Also, I don't know where exactly scipy got installed - when I search I only see the dmg file. Probably my code is still using the apple python, but if I got installation successful, why can't I at least see where scipy went?! On Thu, Jun 7, 2012 at 4:12 PM, Ralf Gommers wrote: > > > On Thu, Jun 7, 2012 at 7:19 PM, Irina Nudelman wrote: > >> Hello, >> >> I have trouble installing Scipy on my Mac. I downloaded this: >> scipy-0.10.1-py2.6-python.org-macosx10.3.dmg and clicked and tried to >> install, but it told me it needs python 2.6. However, that's the version I >> have already. When I do python, I see: >> >> Python 2.6.1 (r261:67515, Jun 24 2010, 21:47:49) >> [GCC 4.2.1 (Apple Inc. build 5646)] on darwin >> Type "help", "copyright", "credits" or "license" for more information. >> >> Can you help me figure out how to install it? >> > > Hi Irina, that is your system Python (installed by Apple, best to leave it > alone). You should install Python 2.6 from > http://www.python.org/download/releases/2.6.6/, or 2.7 (also from > python.org). Then you can use the binary installers of numpy and scipy, > in that order. > > Cheers, > Ralf > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Thu Jun 7 17:16:29 2012 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Thu, 7 Jun 2012 23:16:29 +0200 Subject: [SciPy-User] scipy install on Mac In-Reply-To: References: Message-ID: On Thu, Jun 7, 2012 at 11:09 PM, Irina Nudelman wrote: > Thanks Ralf. > > I just tried what you said - I installed Python2.6. Then I ran the scipy > installation again from disc image - this time it finished with "install > succesfull". However, I still get the error saying there's no module named > scipy when I run my code! Also, I don't know where exactly scipy got > installed - when I search I only see the dmg file. Probably my code is > still using the apple python, but if I got installation successful, why > can't I at least see where scipy went?! > > It's installed to /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy. If you installed the Python from python.org, opening a new terminal and typing "python" should use the newly installed Python. You can check that by: >>> import os >>> os.__file__ '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/os.pyc' Ralf > On Thu, Jun 7, 2012 at 4:12 PM, Ralf Gommers wrote: > >> >> >> On Thu, Jun 7, 2012 at 7:19 PM, Irina Nudelman wrote: >> >>> Hello, >>> >>> I have trouble installing Scipy on my Mac. I downloaded this: >>> scipy-0.10.1-py2.6-python.org-macosx10.3.dmg and clicked and tried to >>> install, but it told me it needs python 2.6. However, that's the version I >>> have already. When I do python, I see: >>> >>> Python 2.6.1 (r261:67515, Jun 24 2010, 21:47:49) >>> [GCC 4.2.1 (Apple Inc. build 5646)] on darwin >>> Type "help", "copyright", "credits" or "license" for more information. >>> >>> Can you help me figure out how to install it? >>> >> >> Hi Irina, that is your system Python (installed by Apple, best to leave >> it alone). 
You should install Python 2.6 from >> http://www.python.org/download/releases/2.6.6/, or 2.7 (also from >> python.org). Then you can use the binary installers of numpy and scipy, >> in that order. >> >> Cheers, >> Ralf >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From irina.nudelman at gmail.com Thu Jun 7 17:26:04 2012 From: irina.nudelman at gmail.com (Irina Nudelman) Date: Thu, 7 Jun 2012 17:26:04 -0400 Subject: [SciPy-User] scipy install on Mac In-Reply-To: References: Message-ID: thanks. Ok, it works now for scipy, but as I feared - every other package is screwed up! Now I get no module found for every other import, such as networkx, pygraphviz, etc. How do I make them all work together?!! This is quite frustrating - I don't understand why scipy installation is so difficult? Every other package was fine, including numpy. I just installed and it all worked. your help is much appreciated to help me figure this out. On Thu, Jun 7, 2012 at 5:16 PM, Ralf Gommers wrote: > > > On Thu, Jun 7, 2012 at 11:09 PM, Irina Nudelman wrote: > >> Thanks Ralf. >> >> I just tried what you said - I installed Python2.6. Then I ran the scipy >> installation again from disc image - this time it finished with "install >> succesfull". However, I still get the error saying there's no module named >> scipy when I run my code! Also, I don't know where exactly scipy got >> installed - when I search I only see the dmg file. Probably my code is >> still using the apple python, but if I got installation successful, why >> can't I at least see where scipy went?! >> >> > It's installed to > /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy. > > If you installed the Python from python.org, opening a new terminal and > typing "python" should use the newly installed Python. You can check that > by: > > >>> import os > >>> os.__file__ > '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/os.pyc' > > Ralf > > > >> On Thu, Jun 7, 2012 at 4:12 PM, Ralf Gommers > > wrote: >> >>> >>> >>> On Thu, Jun 7, 2012 at 7:19 PM, Irina Nudelman >> > wrote: >>> >>>> Hello, >>>> >>>> I have trouble installing Scipy on my Mac. I downloaded this: >>>> scipy-0.10.1-py2.6-python.org-macosx10.3.dmg and clicked and tried to >>>> install, but it told me it needs python 2.6. However, that's the version I >>>> have already. When I do python, I see: >>>> >>>> Python 2.6.1 (r261:67515, Jun 24 2010, 21:47:49) >>>> [GCC 4.2.1 (Apple Inc. build 5646)] on darwin >>>> Type "help", "copyright", "credits" or "license" for more information. >>>> >>>> Can you help me figure out how to install it? >>>> >>> >>> Hi Irina, that is your system Python (installed by Apple, best to leave >>> it alone). You should install Python 2.6 from >>> http://www.python.org/download/releases/2.6.6/, or 2.7 (also from >>> python.org). Then you can use the binary installers of numpy and scipy, >>> in that order. 
>>> >>> Cheers, >>> Ralf >>> >>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >>> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cweisiger at msg.ucsf.edu Thu Jun 7 17:28:54 2012 From: cweisiger at msg.ucsf.edu (Chris Weisiger) Date: Thu, 7 Jun 2012 14:28:54 -0700 Subject: [SciPy-User] scipy install on Mac In-Reply-To: References: Message-ID: On Thu, Jun 7, 2012 at 2:26 PM, Irina Nudelman wrote: > thanks. Ok, it works now for scipy, but as I feared - every other package is > screwed up! Now I get no module found for every other import, such as > networkx, pygraphviz, etc. How do I make them all work together?!! This is > quite frustrating - I don't understand why scipy installation is so > difficult? Every other package was fine, including numpy. I just installed > and it all worked. > > your help is much appreciated to help me figure this out. Every Python install keeps a separate collection of libraries. It's not that the other libraries are screwed up; they're simply not there. You have to install them for the python.org Python. -Chris > > On Thu, Jun 7, 2012 at 5:16 PM, Ralf Gommers > wrote: >> >> >> >> On Thu, Jun 7, 2012 at 11:09 PM, Irina Nudelman >> wrote: >>> >>> Thanks Ralf. >>> >>> I just tried what you said - I installed Python2.6. Then I ran the scipy >>> installation again from disc image - this time it finished with "install >>> succesfull". However, I still get the error saying there's no module named >>> scipy when I run my code! Also, I don't know where exactly scipy got >>> installed - when I search I only see the dmg file. Probably my code is still >>> using the apple python, but if I got installation successful, why can't I at >>> least see where scipy went?! >>> >> >> It's installed to >> /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy. >> >> If you installed the Python from python.org, opening a new terminal and >> typing "python" should use the newly installed Python. You can check that >> by: >> >> >>> import os >> >>> os.__file__ >> '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/os.pyc' >> >> Ralf >> >> >>> >>> On Thu, Jun 7, 2012 at 4:12 PM, Ralf Gommers >>> wrote: >>>> >>>> >>>> >>>> On Thu, Jun 7, 2012 at 7:19 PM, Irina Nudelman >>>> wrote: >>>>> >>>>> Hello, >>>>> >>>>> I have trouble installing Scipy on my Mac. I downloaded this: >>>>> scipy-0.10.1-py2.6-python.org-macosx10.3.dmg and clicked and tried to >>>>> install, but it told me it needs python 2.6. However, that's the version I >>>>> have already. When I do python, I see: >>>>> >>>>> Python 2.6.1 (r261:67515, Jun 24 2010, 21:47:49) >>>>> [GCC 4.2.1 (Apple Inc. build 5646)] on darwin >>>>> Type "help", "copyright", "credits" or "license" for more information. >>>>> >>>>> Can you help me figure out how to install it? >>>> >>>> >>>> Hi Irina, that is your system Python (installed by Apple, best to leave >>>> it alone). You should install Python 2.6 from >>>> http://www.python.org/download/releases/2.6.6/, or 2.7 (also from >>>> python.org). 
Then you can use the binary installers of numpy and scipy, in >>>> that order. >>>> >>>> Cheers, >>>> Ralf >>>> >>>> >>>> _______________________________________________ >>>> SciPy-User mailing list >>>> SciPy-User at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>> >>> >>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From nadavh at visionsense.com Fri Jun 8 02:50:20 2012 From: nadavh at visionsense.com (Nadav Horesh) Date: Fri, 8 Jun 2012 06:50:20 +0000 Subject: [SciPy-User] SciPy-User Digest, Vol 106, Issue 6 In-Reply-To: References: Message-ID: The default interpolation method in linear. Does you case falls in between (cubic --- overshoot, linear --- too coarse)? ________________________________________ 2012/5/31 Nadav Horesh : > You can try also scipy.interpolate.griddata (not the pylab's griddata!) > > ? Nadav > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > I tried this function and it works very similar to the interp2 function of matlab. The only problem is that my data are very peaky and the cubic version of griddata gives me big over/under-shoots. The matlab function does not gives me such behavior. Is there another high 2D order interpolator that could be used in this case ? Octave has a nice "pchip" option for interp2 that use piecewise cubic Hermite interpolating polynomial, but unfortunatly griddata does not have such option. Thanks Christine From irina.nudelman at gmail.com Thu Jun 7 11:59:30 2012 From: irina.nudelman at gmail.com (Irina Nudelman) Date: Thu, 7 Jun 2012 11:59:30 -0400 Subject: [SciPy-User] scipy install on Mac Message-ID: Hello, I have trouble installing Scipy on my Mac. I downloaded this: scipy-0.10.1-py2.6-python.org-macosx10.3.dmg and clicked and tried to install, but it told me it needs python 2.6. However, that's the version I have already. When I do python, I see: Python 2.6.1 (r261:67515, Jun 24 2010, 21:47:49) [GCC 4.2.1 (Apple Inc. build 5646)] on darwin Type "help", "copyright", "credits" or "license" for more information. Can you help me figure out how to install it? Also, is it possible to install it with easy_install? It didn't work when I tried, giving me: non-existing path in '/private/var/folders/RO/ROU+Y3JnHwiY4x+4HGSFPk+++TI/-Tmp-/easy_install-6RBzym/scipy-0.10.1/scipy/io': 'docs' /System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/distutils/system_info.py:414: UserWarning: UMFPACK sparse solver (http://www.cise.ufl.edu/research/sparse/umfpack/) not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [umfpack]) or by setting the UMFPACK environment variable. 
warnings.warn(self.notfounderror.__doc__) Traceback (most recent call last): File "/usr/bin/easy_install-2.6", line 10, in load_entry_point('setuptools==0.6c9', 'console_scripts', 'easy_install')() File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/setuptools/command/easy_install.py", line 1671, in main with_ei_usage(lambda: File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/setuptools/command/easy_install.py", line 1659, in with_ei_usage return f() File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/setuptools/command/easy_install.py", line 1675, in distclass=DistributionWithoutHelpCommands, **kw File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/distutils/core.py", line 152, in setup dist.run_commands() File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/distutils/dist.py", line 975, in run_commands self.run_command(cmd) File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/distutils/dist.py", line 995, in run_command cmd_obj.run() File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/setuptools/command/easy_install.py", line 211, in run self.easy_install(spec, not self.no_deps) File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/setuptools/command/easy_install.py", line 446, in easy_install return self.install_item(spec, dist.location, tmpdir, deps) File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/setuptools/command/easy_install.py", line 476, in install_item dists = self.install_eggs(spec, download, tmpdir) File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/setuptools/command/easy_install.py", line 655, in install_eggs return self.build_and_install(setup_script, setup_base) File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/setuptools/command/easy_install.py", line 930, in build_and_install self.run_setup(setup_script, setup_base, args) File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/setuptools/command/easy_install.py", line 919, in run_setup run_setup(setup_script, args) File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/setuptools/sandbox.py", line 27, in run_setup lambda: execfile( File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/setuptools/sandbox.py", line 63, in run return func() File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/setuptools/sandbox.py", line 29, in {'__file__':setup_script, '__name__':'__main__'} File "setup.py", line 196, in File "setup.py", line 187, in setup_package File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/distutils/core.py", line 150, in setup config = configuration() File "setup.py", line 138, in configuration File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/distutils/misc_util.py", line 851, in add_subpackage caller_level = 2) File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/distutils/misc_util.py", line 834, in get_subpackage caller_level = caller_level + 1) File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/distutils/misc_util.py", line 781, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File "scipy/setup.py", line 20, in configuration File 
"/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/distutils/misc_util.py", line 851, in add_subpackage caller_level = 2) File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/distutils/misc_util.py", line 834, in get_subpackage caller_level = caller_level + 1) File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/distutils/misc_util.py", line 766, in _get_configuration_from_setup_py ('.py', 'U', 1)) File "/private/var/folders/RO/ROU+Y3JnHwiY4x+4HGSFPk+++TI/-Tmp-/easy_install-6RBzym/scipy-0.10.1/scipy/special/setup.py", line 14, in ValueError: numpy >= 1.4 is required (detected 1.2.1 from /System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/__init__.pyc) Thank you very much, Irina. -------------- next part -------------- An HTML attachment was scrubbed... URL: From blattnem at gmail.com Fri Jun 8 16:32:00 2012 From: blattnem at gmail.com (Marcel Blattner) Date: Fri, 8 Jun 2012 22:32:00 +0200 Subject: [SciPy-User] scipy install on Mac In-Reply-To: References: Message-ID: Hi Perhaps you should use port. I use it and never had any problems installing scipy, numpy, networkx etc. You just do port install py26-packagename. You find port here http://www.macports.org/ Cheers Marcel On 07.06.2012, at 17:59, Irina Nudelman wrote: > Hello, > > I have trouble installing Scipy on my Mac. I downloaded this: scipy-0.10.1-py2.6-python.org-macosx10.3.dmg and clicked and tried to install, but it told me it needs python 2.6. However, that's the version I have already. When I do python, I see: > > Python 2.6.1 (r261:67515, Jun 24 2010, 21:47:49) > [GCC 4.2.1 (Apple Inc. build 5646)] on darwin > Type "help", "copyright", "credits" or "license" for more information. > > Can you help me figure out how to install it? > > Also, is it possible to install it with easy_install? It didn't work when I tried, giving me: > > non-existing path in '/private/var/folders/RO/ROU+Y3JnHwiY4x+4HGSFPk+++TI/-Tmp-/easy_install-6RBzym/scipy-0.10.1/scipy/io': 'docs' > /System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/distutils/system_info.py:414: UserWarning: > UMFPACK sparse solver (http://www.cise.ufl.edu/research/sparse/umfpack/) > not found. Directories to search for the libraries can be specified in the > numpy/distutils/site.cfg file (section [umfpack]) or by setting > the UMFPACK environment variable. 
> warnings.warn(self.notfounderror.__doc__) > Traceback (most recent call last): > File "/usr/bin/easy_install-2.6", line 10, in > load_entry_point('setuptools==0.6c9', 'console_scripts', 'easy_install')() > File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/setuptools/command/easy_install.py", line 1671, in main > with_ei_usage(lambda: > File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/setuptools/command/easy_install.py", line 1659, in with_ei_usage > return f() > File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/setuptools/command/easy_install.py", line 1675, in > distclass=DistributionWithoutHelpCommands, **kw > File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/distutils/core.py", line 152, in setup > dist.run_commands() > File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/distutils/dist.py", line 975, in run_commands > self.run_command(cmd) > File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/distutils/dist.py", line 995, in run_command > cmd_obj.run() > File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/setuptools/command/easy_install.py", line 211, in run > self.easy_install(spec, not self.no_deps) > File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/setuptools/command/easy_install.py", line 446, in easy_install > return self.install_item(spec, dist.location, tmpdir, deps) > File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/setuptools/command/easy_install.py", line 476, in install_item > dists = self.install_eggs(spec, download, tmpdir) > File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/setuptools/command/easy_install.py", line 655, in install_eggs > return self.build_and_install(setup_script, setup_base) > File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/setuptools/command/easy_install.py", line 930, in build_and_install > self.run_setup(setup_script, setup_base, args) > File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/setuptools/command/easy_install.py", line 919, in run_setup > run_setup(setup_script, args) > File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/setuptools/sandbox.py", line 27, in run_setup > lambda: execfile( > File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/setuptools/sandbox.py", line 63, in run > return func() > File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/setuptools/sandbox.py", line 29, in > {'__file__':setup_script, '__name__':'__main__'} > File "setup.py", line 196, in > File "setup.py", line 187, in setup_package > File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/distutils/core.py", line 150, in setup > config = configuration() > File "setup.py", line 138, in configuration > File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/distutils/misc_util.py", line 851, in add_subpackage > caller_level = 2) > File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/distutils/misc_util.py", line 834, in get_subpackage > caller_level = caller_level + 1) > File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/distutils/misc_util.py", line 781, in _get_configuration_from_setup_py > config 
= setup_module.configuration(*args) > File "scipy/setup.py", line 20, in configuration > File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/distutils/misc_util.py", line 851, in add_subpackage > caller_level = 2) > File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/distutils/misc_util.py", line 834, in get_subpackage > caller_level = caller_level + 1) > File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/distutils/misc_util.py", line 766, in _get_configuration_from_setup_py > ('.py', 'U', 1)) > File "/private/var/folders/RO/ROU+Y3JnHwiY4x+4HGSFPk+++TI/-Tmp-/easy_install-6RBzym/scipy-0.10.1/scipy/special/setup.py", line 14, in > ValueError: numpy >= 1.4 is required (detected 1.2.1 from /System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/__init__.pyc) > > Thank you very much, > Irina. > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Fri Jun 8 16:45:37 2012 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Fri, 8 Jun 2012 22:45:37 +0200 Subject: [SciPy-User] scipy install on Mac In-Reply-To: References: Message-ID: On Thu, Jun 7, 2012 at 5:59 PM, Irina Nudelman wrote: > Hello, > > I have trouble installing Scipy on my Mac. I downloaded this: > scipy-0.10.1-py2.6-python.org-macosx10.3.dmg and clicked and tried to > install, but it told me it needs python 2.6. However, that's the version I > have already. When I do python, I see: > > Python 2.6.1 (r261:67515, Jun 24 2010, 21:47:49) > [GCC 4.2.1 (Apple Inc. build 5646)] on darwin > Type "help", "copyright", "credits" or "license" for more information. > > Can you help me figure out how to install it? > > Also, is it possible to install it with easy_install? It didn't work when > I tried, giving me: > > non-existing path in > '/private/var/folders/RO/ROU+Y3JnHwiY4x+4HGSFPk+++TI/-Tmp-/easy_install-6RBzym/scipy-0.10.1/scipy/io': > 'docs' > /System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/distutils/system_info.py:414: > UserWarning: > UMFPACK sparse solver ( > http://www.cise.ufl.edu/research/sparse/umfpack/) > not found. Directories to search for the libraries can be specified in > the > numpy/distutils/site.cfg file (section [umfpack]) or by setting > the UMFPACK environment variable. 
> warnings.warn(self.notfounderror.__doc__) > Traceback (most recent call last): > File "/usr/bin/easy_install-2.6", line 10, in > load_entry_point('setuptools==0.6c9', 'console_scripts', > 'easy_install')() > File > "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/setuptools/command/easy_install.py", > line 1671, in main > with_ei_usage(lambda: > File > "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/setuptools/command/easy_install.py", > line 1659, in with_ei_usage > return f() > File > "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/setuptools/command/easy_install.py", > line 1675, in > distclass=DistributionWithoutHelpCommands, **kw > File > "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/distutils/core.py", > line 152, in setup > dist.run_commands() > File > "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/distutils/dist.py", > line 975, in run_commands > self.run_command(cmd) > File > "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/distutils/dist.py", > line 995, in run_command > cmd_obj.run() > File > "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/setuptools/command/easy_install.py", > line 211, in run > self.easy_install(spec, not self.no_deps) > File > "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/setuptools/command/easy_install.py", > line 446, in easy_install > return self.install_item(spec, dist.location, tmpdir, deps) > File > "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/setuptools/command/easy_install.py", > line 476, in install_item > dists = self.install_eggs(spec, download, tmpdir) > File > "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/setuptools/command/easy_install.py", > line 655, in install_eggs > return self.build_and_install(setup_script, setup_base) > File > "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/setuptools/command/easy_install.py", > line 930, in build_and_install > self.run_setup(setup_script, setup_base, args) > File > "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/setuptools/command/easy_install.py", > line 919, in run_setup > run_setup(setup_script, args) > File > "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/setuptools/sandbox.py", > line 27, in run_setup > lambda: execfile( > File > "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/setuptools/sandbox.py", > line 63, in run > return func() > File > "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/setuptools/sandbox.py", > line 29, in > {'__file__':setup_script, '__name__':'__main__'} > File "setup.py", line 196, in > File "setup.py", line 187, in setup_package > File > "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/distutils/core.py", > line 150, in setup > config = configuration() > File "setup.py", line 138, in configuration > File > "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/distutils/misc_util.py", > line 851, in add_subpackage > caller_level = 2) > File > "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/distutils/misc_util.py", > line 834, in get_subpackage > caller_level = caller_level + 1) > File > 
"/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/distutils/misc_util.py", > line 781, in _get_configuration_from_setup_py > config = setup_module.configuration(*args) > File "scipy/setup.py", line 20, in configuration > File > "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/distutils/misc_util.py", > line 851, in add_subpackage > caller_level = 2) > File > "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/distutils/misc_util.py", > line 834, in get_subpackage > caller_level = caller_level + 1) > File > "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/distutils/misc_util.py", > line 766, in _get_configuration_from_setup_py > ('.py', 'U', 1)) > File > "/private/var/folders/RO/ROU+Y3JnHwiY4x+4HGSFPk+++TI/-Tmp-/easy_install-6RBzym/scipy-0.10.1/scipy/special/setup.py", > line 14, in > ValueError: numpy >= 1.4 is required (detected 1.2.1 from > /System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/__init__.pyc) > You're again using your system Python here, not the one from python.org. The former resides in /System, the latter in /Frameworks. You probably have to install setuptools for your new Python first, which gives you an easy_install that works with it. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From mglerner at gmail.com Sun Jun 10 11:34:16 2012 From: mglerner at gmail.com (Michael Lerner) Date: Sun, 10 Jun 2012 11:34:16 -0400 Subject: [SciPy-User] Combining distance matrices Message-ID: Hi all, I have a large number of objects and I want to calculate a distance matrix with a user-defined metric and scipy.spacial.pdist. The job is computationally expensive enough that I need to split it into parts and run it on a cluster. At the moment, I make a several lists of distance pairs to calculate, ship each list off to a different CPU/processor, gather the results and make a distance matrix by hand. I now have a situation where I have a new list of objects, and I'd like a full distance matrix for all of the old + new objects. Is there a built in way to handle such things, or do I need to explicitly figure out what new calculations need to be done, do them, and rebuild the distance matrix myself? It's not a deviously hard problem or anything, but it seems like someone must have run into the problem of wanting to combine distance matrices before, and I don't want to reinvent the wheel. Cheers, -Michael -- Michael Lerner Department of Physics and Astronomy Earlham College - Drawer 111 801 National Road West Richmond, IN 47374-4095 -------------- next part -------------- An HTML attachment was scrubbed... URL: From angel.yanguas at gmail.com Sat Jun 9 22:33:38 2012 From: angel.yanguas at gmail.com (Angel Yanguas-Gil) Date: Sat, 9 Jun 2012 21:33:38 -0500 Subject: [SciPy-User] how critical are testing errors in Scipy under Lion? Message-ID: Hi, My OS is Lion. I compiled from source following the instructions in Scipy docs under python.org's 2.7.3 and I got errors when running scipy.test(). I know there is a previous thread on this topic that lead to a ticket to look into the problem, but what I am trying to do is understand the impact of these errors on the day-to-day work with scipy. How do they affect Scipy's reliability under Lion? I know it is related to simple precision, but I don't know how big of an issue is. 
I've been using scipy for almost five years, and since moving to Lion last fall I have been reluctant to base my more serious work on Scipy+Lion for that reason, to the point that I am using Scipy mostly under Ubuntu these days. But it would be great to know if I am overreacting. Any clues? (relevant part from nose test results are included below). Thanks, ay Running unit tests for scipy NumPy version 1.6.1 NumPy is installed in /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy SciPy version 0.10.1 SciPy is installed in /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy Python version 2.7.3 (v2.7.3:70274d53c1dd, Apr 9 2012, 20:52:43) [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] nose version 1.1.2 ====================================================================== FAIL: test_asum (test_blas.TestFBLAS1Simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/lib/blas/tests/test_blas.py", line 58, in test_asum assert_almost_equal(f([3,-4,5]),12) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/testing/utils.py", line 468, in assert_almost_equal raise AssertionError(msg) AssertionError: Arrays are not almost equal to 7 decimals ACTUAL: 0.0 DESIRED: 12 ====================================================================== FAIL: test_dot (test_blas.TestFBLAS1Simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/lib/blas/tests/test_blas.py", line 67, in test_dot assert_almost_equal(f([3,-4,5],[2,5,1]),-9) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/testing/utils.py", line 468, in assert_almost_equal raise AssertionError(msg) AssertionError: Arrays are not almost equal to 7 decimals ACTUAL: 0.0 DESIRED: -9 ====================================================================== FAIL: test_nrm2 (test_blas.TestFBLAS1Simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/lib/blas/tests/test_blas.py", line 78, in test_nrm2 assert_almost_equal(f([3,-4,5]),math.sqrt(50)) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/testing/utils.py", line 468, in assert_almost_equal raise AssertionError(msg) AssertionError: Arrays are not almost equal to 7 decimals ACTUAL: 0.0 DESIRED: 7.0710678118654755 ====================================================================== FAIL: test_basic.TestNorm.test_overflow ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/linalg/tests/test_basic.py", line 581, in test_overflow assert_almost_equal(norm(a), a) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/testing/utils.py", line 452, in assert_almost_equal return assert_array_almost_equal(actual, desired, decimal, err_msg) File 
"/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/testing/utils.py", line 800, in assert_array_almost_equal header=('Arrays are not almost equal to %d decimals' % decimal)) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/testing/utils.py", line 636, in assert_array_compare raise AssertionError(msg) AssertionError: Arrays are not almost equal to 7 decimals (mismatch 100.0%) x: array(-0.0) y: array([ 1.00000002e+20], dtype=float32) ====================================================================== FAIL: test_basic.TestNorm.test_stable ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/linalg/tests/test_basic.py", line 586, in test_stable assert_almost_equal(norm(a) - 1e4, 0.5) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/testing/utils.py", line 468, in assert_almost_equal raise AssertionError(msg) AssertionError: Arrays are not almost equal to 7 decimals ACTUAL: -10000.0 DESIRED: 0.5 ====================================================================== FAIL: test_basic.TestNorm.test_types ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/linalg/tests/test_basic.py", line 568, in test_types assert_allclose(norm(x), np.sqrt(14), rtol=tol) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/testing/utils.py", line 1168, in assert_allclose verbose=verbose, header=header) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/testing/utils.py", line 636, in assert_array_compare raise AssertionError(msg) AssertionError: Not equal to tolerance rtol=2.38419e-06, atol=0 (mismatch 100.0%) x: array(1.0842021724855044e-19) y: array(3.7416573867739413) ====================================================================== FAIL: test_asum (test_blas.TestFBLAS1Simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/linalg/tests/test_blas.py", line 99, in test_asum assert_almost_equal(f([3,-4,5]),12) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/testing/utils.py", line 468, in assert_almost_equal raise AssertionError(msg) AssertionError: Arrays are not almost equal to 7 decimals ACTUAL: 0.0 DESIRED: 12 ====================================================================== FAIL: test_dot (test_blas.TestFBLAS1Simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/linalg/tests/test_blas.py", line 109, in test_dot assert_almost_equal(f([3,-4,5],[2,5,1]),-9) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/testing/utils.py", line 468, in assert_almost_equal raise 
AssertionError(msg) AssertionError: Arrays are not almost equal to 7 decimals ACTUAL: 0.0 DESIRED: -9 ====================================================================== FAIL: test_nrm2 (test_blas.TestFBLAS1Simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/linalg/tests/test_blas.py", line 127, in test_nrm2 assert_almost_equal(f([3,-4,5]),math.sqrt(50)) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/testing/utils.py", line 468, in assert_almost_equal raise AssertionError(msg) AssertionError: Arrays are not almost equal to 7 decimals ACTUAL: 0.0 DESIRED: 7.0710678118654755 Bummer. From zubair.anwar at gmail.com Mon Jun 11 03:19:45 2012 From: zubair.anwar at gmail.com (Zubair Anwar) Date: Mon, 11 Jun 2012 00:19:45 -0700 Subject: [SciPy-User] using PyCont.display() to plot a functional of the state variables Message-ID: First of all, thanks for creating such a wonderful tool! I am 2 days into working with PyDSTool and wanted to plot a function of the state variables using the display() method. Is this possible? Googling for this hasn't turned anything useful. As an example consider a single ODE for variable 'x' and parameter 'r'. With the display method I can only plot the bifurcation diagram for 'x'. How do I plot it for say x**2 or some other function of 'x'? -------------- next part -------------- An HTML attachment was scrubbed... URL: From rob.clewley at gmail.com Mon Jun 11 12:08:38 2012 From: rob.clewley at gmail.com (Rob Clewley) Date: Mon, 11 Jun 2012 12:08:38 -0400 Subject: [SciPy-User] using PyCont.display() to plot a functional of the state variables In-Reply-To: References: Message-ID: Hi Zubair, Thanks for writing. I'll send you an off-list reply, because this isn't the right list for this question, but for future reference I want to provide another chance to let list readers who use PyDSTool to know that the right place is http://sourceforge.net/projects/pydstool/forums/forum/472291 Also, in case anyone else is reading this reply, an article about PyDSTool was just accepted at PLoS Computational Biology, which is a prestigious journal if you didn't know. The article includes citations to numpy, scipy, matplotlib which hopefully will further promote python in science! Thanks, Rob On Mon, Jun 11, 2012 at 3:19 AM, Zubair Anwar wrote: > First of all, thanks for creating such a wonderful tool! > > I am 2 days into working with PyDSTool and wanted to plot a function of the > state variables using the display() method. Is this possible? Googling for > this hasn't turned anything useful. > > As an example consider a single ODE for variable 'x' and parameter 'r'. With > the display method I can only plot the bifurcation diagram for 'x'. How do I > plot it for say x**2 or some other function of 'x'? > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From ralf.gommers at googlemail.com Tue Jun 12 13:03:03 2012 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Tue, 12 Jun 2012 19:03:03 +0200 Subject: [SciPy-User] how critical are testing errors in Scipy under Lion? In-Reply-To: References: Message-ID: On Sun, Jun 10, 2012 at 4:33 AM, Angel Yanguas-Gil wrote: > Hi, My OS is Lion. 
I compiled from source following the instructions > in Scipy docs under python.org's 2.7.3 and I got errors when running > scipy.test(). > > I know there is a previous thread on this topic that lead to a ticket > to look into the problem, but what I am trying to do is understand the > impact of these errors on the day-to-day work with scipy. How do they > affect Scipy's reliability under Lion? I know it is related to simple > precision, but I don't know how big of an issue is. I've been using > scipy for almost five years, and since moving to Lion last fall I have > been reluctant to base my more serious work on Scipy+Lion for that > reason, to the point that I am using Scipy mostly under Ubuntu these > days. But it would be great to know if I am overreacting. > Unless you specifically want to use single precision for something, it shouldn't be a big issue. Normally you won't run into this; other parts of scipy also don't rely on these single precision BLAS functions. Ralf > > Any clues? (relevant part from nose test results are included below). > > Thanks, > > ay > > > Running unit tests for scipy > NumPy version 1.6.1 > NumPy is installed in > > /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy > SciPy version 0.10.1 > SciPy is installed in > > /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy > Python version 2.7.3 (v2.7.3:70274d53c1dd, Apr 9 2012, 20:52:43) [GCC > 4.2.1 (Apple Inc. build 5666) (dot 3)] > nose version 1.1.2 > > > ====================================================================== > FAIL: test_asum (test_blas.TestFBLAS1Simple) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/lib/blas/tests/test_blas.py", > line 58, in test_asum > assert_almost_equal(f([3,-4,5]),12) > File > "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/testing/utils.py", > line 468, in assert_almost_equal > raise AssertionError(msg) > AssertionError: > Arrays are not almost equal to 7 decimals > ACTUAL: 0.0 > DESIRED: 12 > > ====================================================================== > FAIL: test_dot (test_blas.TestFBLAS1Simple) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/lib/blas/tests/test_blas.py", > line 67, in test_dot > assert_almost_equal(f([3,-4,5],[2,5,1]),-9) > File > "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/testing/utils.py", > line 468, in assert_almost_equal > raise AssertionError(msg) > AssertionError: > Arrays are not almost equal to 7 decimals > ACTUAL: 0.0 > DESIRED: -9 > > ====================================================================== > FAIL: test_nrm2 (test_blas.TestFBLAS1Simple) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/lib/blas/tests/test_blas.py", > line 78, in test_nrm2 > assert_almost_equal(f([3,-4,5]),math.sqrt(50)) > File > "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/testing/utils.py", > line 468, in assert_almost_equal > raise AssertionError(msg) > AssertionError: > Arrays are not almost equal to 7 
decimals > ACTUAL: 0.0 > DESIRED: 7.0710678118654755 > > ====================================================================== > FAIL: test_basic.TestNorm.test_overflow > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nose/case.py", > line 197, in runTest > self.test(*self.arg) > File > "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/linalg/tests/test_basic.py", > line 581, in test_overflow > assert_almost_equal(norm(a), a) > File > "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/testing/utils.py", > line 452, in assert_almost_equal > return assert_array_almost_equal(actual, desired, decimal, err_msg) > File > "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/testing/utils.py", > line 800, in assert_array_almost_equal > header=('Arrays are not almost equal to %d decimals' % decimal)) > File > "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/testing/utils.py", > line 636, in assert_array_compare > raise AssertionError(msg) > AssertionError: > Arrays are not almost equal to 7 decimals > > (mismatch 100.0%) > x: array(-0.0) > y: array([ 1.00000002e+20], dtype=float32) > > ====================================================================== > FAIL: test_basic.TestNorm.test_stable > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nose/case.py", > line 197, in runTest > self.test(*self.arg) > File > "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/linalg/tests/test_basic.py", > line 586, in test_stable > assert_almost_equal(norm(a) - 1e4, 0.5) > File > "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/testing/utils.py", > line 468, in assert_almost_equal > raise AssertionError(msg) > AssertionError: > Arrays are not almost equal to 7 decimals > ACTUAL: -10000.0 > DESIRED: 0.5 > > ====================================================================== > FAIL: test_basic.TestNorm.test_types > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nose/case.py", > line 197, in runTest > self.test(*self.arg) > File > "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/linalg/tests/test_basic.py", > line 568, in test_types > assert_allclose(norm(x), np.sqrt(14), rtol=tol) > File > "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/testing/utils.py", > line 1168, in assert_allclose > verbose=verbose, header=header) > File > "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/testing/utils.py", > line 636, in assert_array_compare > raise AssertionError(msg) > AssertionError: > Not equal to tolerance rtol=2.38419e-06, atol=0 > > (mismatch 100.0%) > x: array(1.0842021724855044e-19) > y: array(3.7416573867739413) > > ====================================================================== > FAIL: test_asum (test_blas.TestFBLAS1Simple) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > 
"/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/linalg/tests/test_blas.py", > line 99, in test_asum > assert_almost_equal(f([3,-4,5]),12) > File > "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/testing/utils.py", > line 468, in assert_almost_equal > raise AssertionError(msg) > AssertionError: > Arrays are not almost equal to 7 decimals > ACTUAL: 0.0 > DESIRED: 12 > > > ====================================================================== > FAIL: test_dot (test_blas.TestFBLAS1Simple) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/linalg/tests/test_blas.py", > line 109, in test_dot > assert_almost_equal(f([3,-4,5],[2,5,1]),-9) > File > "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/testing/utils.py", > line 468, in assert_almost_equal > raise AssertionError(msg) > AssertionError: > Arrays are not almost equal to 7 decimals > ACTUAL: 0.0 > DESIRED: -9 > > ====================================================================== > FAIL: test_nrm2 (test_blas.TestFBLAS1Simple) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/linalg/tests/test_blas.py", > line 127, in test_nrm2 > assert_almost_equal(f([3,-4,5]),math.sqrt(50)) > File > "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/testing/utils.py", > line 468, in assert_almost_equal > raise AssertionError(msg) > AssertionError: > Arrays are not almost equal to 7 decimals > ACTUAL: 0.0 > DESIRED: 7.0710678118654755 > > Bummer. > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Wed Jun 13 13:56:30 2012 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Wed, 13 Jun 2012 19:56:30 +0200 Subject: [SciPy-User] ANN: SciPy 0.11.0 beta 1 released Message-ID: Hi, I am pleased to announce the availability of the first beta release of SciPy0.11.0. For this release over 120 tickets and pull requests have been closed, and many new features have been added. Also noteworthy is that the number of contributors for this release has risen to over 50. Some of the highlights are: - A new module, sparse.csgraph, has been added which provides a number of common sparse graph algorithms. - New unified interfaces to the existing optimization and root finding functions have been added. Sources and binaries can be found at http://sourceforge.net/projects/scipy/files/scipy/0.11.0b1/, release notes are copied below. Please try this release and report any problems on the mailing list. Cheers, Ralf ========================== SciPy 0.11.0 Release Notes ========================== .. note:: Scipy 0.11.0 is not released yet! .. contents:: SciPy 0.11.0 is the culmination of 8 months of hard work. It contains many new features, numerous bug-fixes, improved test coverage and better documentation. Highlights of this release are: - A new module has been added which provides a number of common sparse graph algorithms. 
- New unified interfaces to the existing optimization and root finding functions have been added. All users are encouraged to upgrade to this release, as there are a large number of bug-fixes and optimizations. Our development attention will now shift to bug-fix releases on the 0.11.x branch, and on adding new features on the master branch. This release requires Python 2.4-2.7 or 3.1-3.2 and NumPy 1.5.1 or greater. New features ============ Sparse Graph Submodule ---------------------- The new submodule :mod:`scipy.sparse.csgraph` implements a number of efficient graph algorithms for graphs stored as sparse adjacency matrices. Available routines are: - :func:`connected_components` - determine connected components of a graph - :func:`laplacian` - compute the laplacian of a graph - :func:`shortest_path` - compute the shortest path between points on a positive graph - :func:`dijkstra` - use Dijkstra's algorithm for shortest path - :func:`floyd_warshall` - use the Floyd-Warshall algorithm for shortest path - :func:`breadth_first_order` - compute a breadth-first order of nodes - :func:`depth_first_order` - compute a depth-first order of nodes - :func:`breadth_first_tree` - construct the breadth-first tree from a given node - :func:`depth_first_tree` - construct a depth-first tree from a given node - :func:`minimum_spanning_tree` - construct the minimum spanning tree of a graph ``scipy.optimize`` improvements ------------------------------- The optimize module has received a lot of attention this release. In addition to added tests, documentation improvements, bug fixes and code clean-up, the following improvements were made: - A unified interface to minimizers of univariate and multivariate functions has been added. - A unified interface to root finding algorithms for multivariate functions has been added. - The L-BFGS-B algorithm has been updated to version 3.0. Unified interfaces to minimizers ```````````````````````````````` Two new functions ``scipy.optimize.minimize`` and ``scipy.optimize.minimize_scalar`` were added to provide a common interface to minimizers of multivariate and univariate functions respectively. For multivariate functions, ``scipy.optimize.minimize`` provides an interface to methods for unconstrained optimization (`fmin`, `fmin_powell`, `fmin_cg`, `fmin_ncg`, `fmin_bfgs` and `anneal`) or constrained optimization (`fmin_l_bfgs_b`, `fmin_tnc`, `fmin_cobyla` and `fmin_slsqp`). For univariate functions, ``scipy.optimize.minimize_scalar`` provides an interface to methods for unconstrained and bounded optimization (`brent`, `golden`, `fminbound`). This allows for easier comparing and switching between solvers. Unified interface to root finding algorithms ```````````````````````````````````````````` The new function ``scipy.optimize.root`` provides a common interface to root finding algorithms for multivariate functions, embeding `fsolve`, `leastsq` and `nonlin` solvers. ``scipy.linalg`` improvements ----------------------------- New matrix equation solvers ``````````````````````````` Solvers for the Sylvester equation (``scipy.linalg.solve_sylvester``, discrete and continuous Lyapunov equations (``scipy.linalg.solve_lyapunov``, ``scipy.linalg.solve_discrete_lyapunov``) and discrete and continuous algebraic Riccati equations (``scipy.linalg.solve_continuous_are``, ``scipy.linalg.solve_discrete_are``) have been added to ``scipy.linalg``. These solvers are often used in the field of linear control theory. 
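[Illustration only, not part of the quoted release notes: a minimal sketch of the unified minimizer interface described above, assuming the documented 0.11.x ``scipy.optimize.minimize`` API and using the bundled Rosenbrock test function ``scipy.optimize.rosen``.]

# Editor's sketch: one entry point, several underlying solvers.
import numpy as np
from scipy.optimize import minimize, rosen

x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
res_nm = minimize(rosen, x0, method='Nelder-Mead')   # derivative-free simplex
res_bfgs = minimize(rosen, x0, method='BFGS')        # quasi-Newton
print(res_nm.x)    # both results should lie near the minimum at [1, 1, 1, 1, 1]
print(res_bfgs.x)

Switching between the wrapped solvers then only requires changing the ``method`` string, which is the point of the unified interface.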
QZ and QR Decomposition ```````````````````````` It is now possible to calculate the QZ, or Generalized Schur, decomposition using ``scipy.linalg.qz``. This function wraps the LAPACK routines sgges, dgges, cgges, and zgges. The function ``scipy.linalg.qr_multiply``, which allows efficient computation of the matrix product of Q (from a QR decompostion) and a vector, has been added. Pascal matrices ``````````````` A function for creating Pascal matrices, ``scipy.linalg.pascal``, was added. Sparse matrix construction and operations ----------------------------------------- Two new functions, ``scipy.sparse.diags`` and ``scipy.sparse.block_diag``, were added to easily construct diagonal and block-diagonal sparse matrices respectively. ``scipy.sparse.csc_matrix`` and ``csr_matrix`` now support the operations ``sin``, ``tan``, ``arcsin``, ``arctan``, ``sinh``, ``tanh``, ``arcsinh``, ``arctanh``, ``rint``, ``sign``, ``expm1``, ``log1p``, ``deg2rad``, ``rad2deg``, ``floor``, ``ceil`` and ``trunc``. Previously, these operations had to be performed by operating on the matrices' ``data`` attribute. LSMR iterative solver --------------------- LSMR, an iterative method for solving (sparse) linear and linear least-squares systems, was added as ``scipy.sparse.linalg.lsmr``. Discrete Sine Transform ----------------------- Bindings for the discrete sine transform functions have been added to ``scipy.fftpack``. ``scipy.interpolate`` improvements ---------------------------------- For interpolation in spherical coordinates, the three classes ``scipy.interpolate.SmoothSphereBivariateSpline``, ``scipy.interpolate.LSQSphereBivariateSpline``, and ``scipy.interpolate.RectSphereBivariateSpline`` have been added. Binned statistics (``scipy.stats``) ----------------------------------- The stats module has gained functions to do binned statistics, which are a generalization of histograms, in 1-D, 2-D and multiple dimensions: ``scipy.stats.binned_statistic``, ``scipy.stats.binned_statistic_2d`` and ``scipy.stats.binned_statistic_dd``. Deprecated features =================== ``scipy.sparse.cs_graph_components`` has been made a part of the sparse graph submodule, and renamed to ``scipy.sparse.csgraph.connected_components``. Calling the former routine will result in a deprecation warning. ``scipy.misc.radon`` has been deprecated. A more full-featured radon transform can be found in scikits-image. ``scipy.io.save_as_module`` has been deprecated. A better way to save multiple Numpy arrays is the ``numpy.savez`` function. The `xa` and `xb` parameters for all distributions in ``scipy.stats.distributions`` already weren't used; they have now been deprecated. Backwards incompatible changes ============================== Removal of ``scipy.maxentropy`` ------------------------------- The ``scipy.maxentropy`` module, which was deprecated in the 0.10.0 release, has been removed. Logistic regression in scikits.learn is a good and modern alternative for this functionality. alternative for this functionality. Minor change in behavior of ``splev`` ------------------------------------- The spline evaluation function now behaves similarly to ``interp1d`` for size-1 arrays. Previous behavior:: >>> from scipy.interpolate import splev, splrep, interp1d >>> x = [1,2,3,4,5] >>> y = [4,5,6,7,8] >>> tck = splrep(x, y) >>> splev([1], tck) 4. >>> splev(1, tck) 4. Corrected behavior:: >>> splev([1], tck) array([ 4.]) >>> splev(1, tck) array(4.) This affects also the ``UnivariateSpline`` classes. 
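[Illustration only, not part of the quoted release notes: a short sketch of two additions described above, ``scipy.sparse.diags`` and ``scipy.stats.binned_statistic``, assuming the documented 0.11.x signatures; the offsets and data below are invented for demonstration.]

# Editor's sketch: build a 4x4 tridiagonal sparse matrix from its diagonals.
import numpy as np
from scipy import sparse, stats

A = sparse.diags([[1, 2, 3], [4, 5, 6, 7], [8, 9, 10]], [-1, 0, 1])
print(A.toarray())

# Editor's sketch: binned statistics as a generalization of a histogram,
# here the mean of `values` within 5 equal-width bins of `x`.
x = np.random.rand(100)
values = x ** 2
means, edges, binnumber = stats.binned_statistic(x, values,
                                                 statistic='mean', bins=5)
print(means)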
Behavior of ``scipy.integrate.complex_ode`` ------------------------------------------- The behavior of the ``y`` attribute of ``complex_ode`` is changed. Previously, it expressed the complex-valued solution in the form:: z = ode.y[::2] + 1j * ode.y[1::2] Now, it is directly the complex-valued solution:: z = ode.y Minor change in behavior of T-tests ----------------------------------- The T-tests ``scipy.stats.ttest_ind``, ``scipy.stats.ttest_rel`` and ``scipy.stats.ttest_1samp`` have been changed so that 0 / 0 now returns NaN instead of 1. Other changes ============= The SuperLU sources in ``scipy.sparse.linalg`` have been updated to version 4.3 from upstream. The function ``scipy.signal.bode``, which calculates magnitude and phase data for a continuous-time system, has been added. The two-sample T-test ``scipy.stats.ttest_ind`` gained an option to compare samples with unequal variances, i.e. Welch's T-test. ``scipy.misc.logsumexp`` now takes an optional ``axis`` keyword argument. Authors ======= This release contains work by the following people (contributed at least one patch to this release, names in alphabetical order): * Jeff Armstrong * Chad Baker * Brandon Beacher + * behrisch + * borishim + * Matthew Brett * Lars Buitinck * Luis Pedro Coelho + * Johann Cohen-Tanugi * David Cournapeau * dougal + * Ali Ebrahim + * endolith + * Bj?rn Forsman + * Robert Gantner + * Sebastian Gassner + * Christoph Gohlke * Ralf Gommers * Yaroslav Halchenko * Charles Harris * Jonathan Helmus + * Andreas Hilboll + * Marc Honnorat + * Jonathan Hunt + * Maxim Ivanov + * Thouis (Ray) Jones * Christopher Kuster + * Denis Laxalde + * Travis Oliphant * Joonas Paalasmaa + * Fabian Pedregosa * Josef Perktold * Gavin Price + * Jim Radford + * Andrew Schein + * Skipper Seabold * Jacob Silterra + * Scott Sinclair * Alexis Tabary + * Martin Teichmann * Matt Terry + * Nicky van Foreest + * Jacob Vanderplas * Patrick Varilly + * Pauli Virtanen * Nils Wagner + * Darryl Wally + * Stefan van der Walt * Liming Wang + * David Warde-Farley + * Warren Weckesser * Sebastian Werk + * Mike Wimmer + * Tony S Yu + A total of 54 people contributed to this release. People with a "+" by their names contributed a patch for the first time. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nils106 at googlemail.com Tue Jun 12 15:11:21 2012 From: nils106 at googlemail.com (Nils Wagner) Date: Tue, 12 Jun 2012 20:11:21 +0100 Subject: [SciPy-User] scipy.test() failures Message-ID: NumPy version 1.7.0.dev-cf72f82 SciPy version 0.11.0.dev-5655bc9 Python version 2.7.2 (default, Aug 19 2011, 20:41:43) [GCC] openSUSE 12.1 x86_64 GNU/Linux ERROR: Failure: DeprecationWarning ( The scipy.maxentropy module is deprecated in scipy 0.10, and scheduled to be removed in 0.11. If you are using some of the functionality in this module and are of the opinion that it should be kept or moved somewhere - or you are even interested to maintain/improve this whole module - please ask on the scipy-dev mailing list. The logsumexp function has already been moved to scipy.misc.) 
---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/nose/loader.py", line 390, in loadTestsFromName addr.filename, addr.module) File "/usr/lib/python2.7/site-packages/nose/importer.py", line 39, in importFromPath return self.importFromDir(dir_path, fqname) File "/usr/lib/python2.7/site-packages/nose/importer.py", line 86, in importFromDir mod = load_module(part_fqname, fh, filename, desc) File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/maxentropy/__init__.py", line 19, in DeprecationWarning) DeprecationWarning: The scipy.maxentropy module is deprecated in scipy 0.10, and scheduled to be removed in 0.11. If you are using some of the functionality in this module and are of the opinion that it should be kept or moved somewhere - or you are even interested to maintain/improve this whole module - please ask on the scipy-dev mailing list. The logsumexp function has already been moved to scipy.misc. ====================================================================== FAIL: test_improvement (test_quadpack.TestCtypesQuad) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/testing/decorators.py", line 146, in skipper_func return f(*args, **kwargs) File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/integrate/tests/test_quadpack.py", line 53, in test_improvement assert_(fast < 0.5*slow, (fast, slow)) File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/testing/utils.py", line 34, in assert_ raise AssertionError(msg) AssertionError: (0.004730939865112305, 0.009073019027709961) ---------------------------------------------------------------------- Ran 5481 tests in 177.105s FAILED (KNOWNFAIL=12, SKIP=28, errors=1, failures=1) -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at sun.ac.za Tue Jun 12 21:46:57 2012 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Tue, 12 Jun 2012 18:46:57 -0700 Subject: [SciPy-User] SciPy2012 conference: Last week for early birds, poster submissions Message-ID: Hi everyone We're rapidly approaching SciPy2012 , which takes place in Austin, Texas from July 16th to 21st. This is a reminder that the *discounted early bird registration* closes on the 18th of this month. Also, we decided to keep the queue for *poster submissions* open until all slots are filled. So, whether you have a neat side project, a lightning talk gone rogue, or simply want to get the community talking about your latest and greatest idea--send in a poster abstract to 2012submissions at scipy.org. See you in Austin! St?fan -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Thu Jun 14 02:34:07 2012 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Thu, 14 Jun 2012 08:34:07 +0200 Subject: [SciPy-User] scipy.test() failures In-Reply-To: References: Message-ID: On Tue, Jun 12, 2012 at 9:11 PM, Nils Wagner wrote: > NumPy version 1.7.0.dev-cf72f82 > SciPy version 0.11.0.dev-5655bc9 > Python version 2.7.2 (default, Aug 19 2011, 20:41:43) [GCC] > openSUSE 12.1 x86_64 GNU/Linux > > ERROR: Failure: DeprecationWarning ( > The scipy.maxentropy module is deprecated in scipy 0.10, and scheduled to > be > removed in 0.11. > You should clean your install dir (rm -rf site-packages/scipy), this is from a file that doesn't exist in master anymore. 
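[Illustration only, not part of Ralf's reply: a quick check of which scipy build a given interpreter actually imports, and from where, before removing a stale install directory as suggested above; only standard module attributes are used.]

# Editor's sketch: confirm what "import scipy" really picks up.
import scipy
print(scipy.__version__)   # e.g. a leftover dev build from an earlier install
print(scipy.__file__)      # shows the site-packages path that would need cleaning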
> > If you are using some of the functionality in this module and are of the > opinion that it should be kept or moved somewhere - or you are even > interested > to maintain/improve this whole module - please ask on the scipy-dev mailing > list. > > The logsumexp function has already been moved to scipy.misc.) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/lib/python2.7/site-packages/nose/loader.py", line 390, in > loadTestsFromName > addr.filename, addr.module) > File "/usr/lib/python2.7/site-packages/nose/importer.py", line 39, in > importFromPath > return self.importFromDir(dir_path, fqname) > File "/usr/lib/python2.7/site-packages/nose/importer.py", line 86, in > importFromDir > mod = load_module(part_fqname, fh, filename, desc) > File > "/home/nwagner/local/lib64/python2.7/site-packages/scipy/maxentropy/__init__.py", > line 19, in > DeprecationWarning) > DeprecationWarning: > The scipy.maxentropy module is deprecated in scipy 0.10, and scheduled to > be > removed in 0.11. > > If you are using some of the functionality in this module and are of the > opinion that it should be kept or moved somewhere - or you are even > interested > to maintain/improve this whole module - please ask on the scipy-dev mailing > list. > > The logsumexp function has already been moved to scipy.misc. > > ====================================================================== > FAIL: test_improvement (test_quadpack.TestCtypesQuad) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/home/nwagner/local/lib64/python2.7/site-packages/numpy/testing/decorators.py", > line 146, in skipper_func > return f(*args, **kwargs) > File > "/home/nwagner/local/lib64/python2.7/site-packages/scipy/integrate/tests/test_quadpack.py", > line 53, in test_improvement > assert_(fast < 0.5*slow, (fast, slow)) > File > "/home/nwagner/local/lib64/python2.7/site-packages/numpy/testing/utils.py", > line 34, in assert_ > raise AssertionError(msg) > AssertionError: (0.004730939865112305, 0.009073019027709961) > This looks like a real issue, this test was recently added. Looks like it's not using ctypes. Travis, any idea why this can happen? Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From travis at continuum.io Thu Jun 14 05:59:09 2012 From: travis at continuum.io (Travis Oliphant) Date: Thu, 14 Jun 2012 04:59:09 -0500 Subject: [SciPy-User] scipy.test() failures In-Reply-To: References: Message-ID: On Jun 12, 2012, at 2:11 PM, Nils Wagner wrote: > NumPy version 1.7.0.dev-cf72f82 > SciPy version 0.11.0.dev-5655bc9 > Python version 2.7.2 (default, Aug 19 2011, 20:41:43) [GCC] > openSUSE 12.1 x86_64 GNU/Linux > > ERROR: Failure: DeprecationWarning ( > The scipy.maxentropy module is deprecated in scipy 0.10, and scheduled to be > removed in 0.11. > > If you are using some of the functionality in this module and are of the > opinion that it should be kept or moved somewhere - or you are even interested > to maintain/improve this whole module - please ask on the scipy-dev mailing > list. > > The logsumexp function has already been moved to scipy.misc.) 
> ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/lib/python2.7/site-packages/nose/loader.py", line 390, in loadTestsFromName > addr.filename, addr.module) > File "/usr/lib/python2.7/site-packages/nose/importer.py", line 39, in importFromPath > return self.importFromDir(dir_path, fqname) > File "/usr/lib/python2.7/site-packages/nose/importer.py", line 86, in importFromDir > mod = load_module(part_fqname, fh, filename, desc) > File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/maxentropy/__init__.py", line 19, in > DeprecationWarning) > DeprecationWarning: > The scipy.maxentropy module is deprecated in scipy 0.10, and scheduled to be > removed in 0.11. > > If you are using some of the functionality in this module and are of the > opinion that it should be kept or moved somewhere - or you are even interested > to maintain/improve this whole module - please ask on the scipy-dev mailing > list. > > The logsumexp function has already been moved to scipy.misc. > > ====================================================================== > FAIL: test_improvement (test_quadpack.TestCtypesQuad) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/testing/decorators.py", line 146, in skipper_func > return f(*args, **kwargs) > File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/integrate/tests/test_quadpack.py", line 53, in test_improvement > assert_(fast < 0.5*slow, (fast, slow)) > File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/testing/utils.py", line 34, in assert_ > raise AssertionError(msg) > AssertionError: (0.004730939865112305, 0.009073019027709961) > It looks like the test is too aggressive for this system. The code is faster, just not fast enough for the test.. I should change the test to just being perhaps 80% of slow or even just asking it to be < slow . Timing tests are a bit problematic as different run-times act so differently. Perhaps there is another way to figure out if the faster path is getting called. I would just deactivate the test for now. -Travis > ---------------------------------------------------------------------- > Ran 5481 tests in 177.105s > > FAILED (KNOWNFAIL=12, SKIP=28, errors=1, failures=1) > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From cgohlke at uci.edu Thu Jun 14 14:33:23 2012 From: cgohlke at uci.edu (Christoph Gohlke) Date: Thu, 14 Jun 2012 11:33:23 -0700 Subject: [SciPy-User] ANN: SciPy 0.11.0 beta 1 released In-Reply-To: References: Message-ID: <4FDA2E73.4010906@uci.edu> On 6/13/2012 10:56 AM, Ralf Gommers wrote: > Hi, > > I am pleased to announce the availability of the first beta release of > SciPy 0.11.0. For this release over 120 tickets and pull requests have > been closed, and many new features have been added. Also noteworthy is > that the number of contributors for this release has risen to over 50. > Some of the highlights are: > > - A new module, sparse.csgraph, has been added which provides a > number of common sparse graph algorithms. > - New unified interfaces to the existing optimization and root > finding functions have been added. > > Sources and binaries can be found at > http://sourceforge.net/projects/scipy/files/scipy/0.11.0b1/, release > notes are copied below. 
> > Please try this release and report any problems on the mailing list. > > Cheers, > Ralf > Hi Ralf, Using msvc9/MKL builds and numpy 1.6.2: All tests pass on win-amd64-py2.7 and win32-py2.7 (except for the known ndimage.test_datatypes.test_uint64_max failure on win32). test_smoke_bisplrep_bisplev fails on win-amd64-py3.2 and win32-py3.2. test_shortest_path.test_shortest_path_indices fails on win-amd64-py3.2. Christoph ====================================================================== ERROR: test_smoke_bisplrep_bisplev (test_fitpack.TestSmokeTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "X:\Python32-x32\lib\site-packages\scipy\interpolate\tests\test_fitpack.py", line 219, in test_smoke_bisplrep_bisplev self.check_5() File "X:\Python32-x32\lib\site-packages\scipy\interpolate\tests\test_fitpack.py", line 180, in check_5 xy=makepairs(x,y) File "X:\Python32-x32\lib\site-packages\scipy\interpolate\tests\test_fitpack.py", line 32, in makepairs xy.shape=sh[0]*sh[1],sh[2] IndexError: tuple index out of range ====================================================================== FAIL: test_shortest_path.test_shortest_path_indices(array([[ 0. , 3. , 3. , 1. , 2.858], ---------------------------------------------------------------------- Traceback (most recent call last): File "X:\Python32\lib\site-packages\nose\case.py", line 198, in runTest self.test(*self.arg) File "X:\Python32\lib\site-packages\numpy\testing\utils.py", line 800, in assert_array_almost_equal header=('Arrays are not almost equal to %d decimals' % decimal)) File "X:\Python32\lib\site-packages\numpy\testing\utils.py", line 636, in assert_array_compare raise AssertionError(msg) AssertionError: Arrays are not almost equal to 6 decimals (mismatch 15.0%) x: array([[ 0. , 3. , 3. , 1. , 2.85772861], [ 3. , 0. , 6. , 2. , 4. ], [ 3. , 6. , 0. , 4. , 5.85772861], [ 1. , 2. , 4. , 0. , 2.85772861]]) y: array([[ 0., 3., 3., 1., 2.], [ 3., 0., 6., 2., 4.], [ 3., 6., 0., 4., 5.], [ 1., 2., 4., 0., 2.]]) ---------------------------------------------------------------------- From matrajt at gmail.com Thu Jun 14 17:40:26 2012 From: matrajt at gmail.com (Laura Matrajt) Date: Thu, 14 Jun 2012 14:40:26 -0700 Subject: [SciPy-User] least squares producing different answers? Message-ID: Hi, I have an ODE with unknown parameters that I am trying to fit to data. In order to do this, I am trying to use scipy's least squares to evaluate different sets of parameters. Most of the time, this works fine, but I noticed that sometimes I get different answers even though I am starting with the same initial guess. This makes sense to me if least squares uses something random in the algorithm, say at the very beginning to initialize the search. Otherwise, this might be a bug in my part. So, I have been searching for a bug for the last two days and I haven't found it. Does anyone have any idea if it is indeed the case that leastsq uses some type of random initialization? Thank you in advance. -- Laura -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Thu Jun 14 18:12:58 2012 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 14 Jun 2012 18:12:58 -0400 Subject: [SciPy-User] least squares producing different answers? In-Reply-To: References: Message-ID: On Thu, Jun 14, 2012 at 5:40 PM, Laura Matrajt wrote: > Hi, > I have an ODE with unknown parameters that I am trying to fit to data. 
In > order to do this, I am trying to use scipy's least squares to evaluate > different sets of parameters. Most of the time, this works fine, but I > noticed? that sometimes I get different answers even though I am starting > with the same initial guess. > This makes sense to me if least squares uses something random in the > algorithm, say at the very beginning to initialize the search. > Otherwise, this might be a bug in my part. So, I have been searching for a > bug for the last two days and I haven't found it. > > Does anyone have any idea if it is indeed the case that leastsq uses some > type of random initialization? the algorithm of optimize.leastsq is not (supposed to be) random, and I don't think I've ever heard of random problems with it. I guess it could be that there are some rounding, floating point effects if the problem is not well behaved, like a singular jacobian, but I only know of cases in linalg for that. Josef > > Thank you in advance. > > > -- > Laura > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From pav at iki.fi Thu Jun 14 19:13:37 2012 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 15 Jun 2012 01:13:37 +0200 Subject: [SciPy-User] least squares producing different answers? In-Reply-To: References: Message-ID: 14.06.2012 23:40, Laura Matrajt kirjoitti: [clip] > Does anyone have any idea if it is indeed the case that leastsq uses > some type of random initialization? No, it does not use random numbers. However, be aware that floating point math with modern CPUs and compilers is often not reproducible, even across different runs on the same machine, using the same binary: http://blog.nag.com/2011/02/wandering-precision.html http://www.nccs.nasa.gov/images/FloatingPoint_consistency.pdf I assume the semi-random results in themselves are not a problem for your case? You can always request a stricter tolerance from the solver if needed. -- Pauli Virtanen From tmp50 at ukr.net Fri Jun 15 05:52:58 2012 From: tmp50 at ukr.net (Dmitrey) Date: Fri, 15 Jun 2012 12:52:58 +0300 Subject: [SciPy-User] [ANN} OpenOpt / FuncDesigner release 0.39 Message-ID: <92779.1339753978.799619080370585600@ffe16.ukr.net> Hi all, I'm glad to inform you about new OpenOpt Suite release 0.39 (2012-June-15): interalg: add categorical variables and general logical constraints, many other improvements Some improvements for automatic differentiation DerApproximator and some OpenOpt / FuncDesigner functionality now works with PyPy New solver lsmr for dense / sparse LLSP oovar constructors now can handle parameters lb and ub, e.g. a = oovar('a', lb=-1, ub=[1,2,3]) (this oovar should have size 3) or x = oovars(10, lb=-1, ub=1) New FuncDesigner function hstack, similar syntax to numpy.hstack, e.g. f = hstack((a,b,c,d)) Some bugfixes I have some progress toward solving in FuncDesigner linear DAE (differential algebraic equations, example) and Stochastic Opimization (example), but this is too premature yet to be released, there is 60-70% probability it will be properly implemented in next OpenOpt release. In our website you could vote for most required OpenOpt Suite development direction(s). -------------------- Regards, D. http://openopt.org/Dmitrey -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jgomezdans at gmail.com Fri Jun 15 12:03:06 2012 From: jgomezdans at gmail.com (Jose Gomez-Dans) Date: Fri, 15 Jun 2012 17:03:06 +0100 Subject: [SciPy-User] Issues with f2py/distutils Message-ID: Hi! I'm trying to package some fortran code and wrappers that use f2py. I have a pyf file, and I want a setup file that will compile and install the library. Also, I will need several versions of the same library available. In the past, we just compiled libraries with different names, and used that. I read that a wrapper trick can be used (see here: http://cens.ioc.ee/pipermail/f2py-users/2004-September/000921.html). My problem comes when trying to package and use the above. Our software requires a configuration file to specify a module name to load. This module will be the Fortran code above. In the past, we used different named versions (like my_module1, my_module2, etc) of the module, as the module has common statements that otherwise get trashed between calls. Following the "recipe" in the above link, and having my code organised as follows my_extension/ setup.py my_extension/ __init__.py *.f, *.f90 # fortran files I basically want my compiled fortran extension to appear in my_extension/my_extension/, and in __init__ I use the trick in the recipe to add a new class wraps the fortran library class MyExtension(object): def __init__(self, *args, **kwargs): myextension_mod = MExt('') # This is the so file self.myextension = myextension_mod._module You can then do this: import my_extension m = my_extension.MyExtension() m.myextension. However, I'd basically want to just do from my_extension import Is this possible? And if it is, how do you go about that? Also, my current setup.py compiles the code and puts it above the main package directory, not inside that directory (my_extension/my_extension). Here's what my setup.py file looks like from numpy.distutils.core import setup import sys, os def configuration ( parent_package='', top_path=None ): from numpy.distutils.misc_util import Configuration config = Configuration(parent_package=parent_package,top_path=top_path, ) config.add_extension ('extension_name', sources = [ 'blah.f', 'blahblah.f'] ) return config setup ( configuration = configuration, name="my_extensions" packages=['my_extension'], package_dir={'my_extension': 'my_extension'} ) Thanks! Jose From peter.eckert2 at arcor.de Sat Jun 16 09:46:54 2012 From: peter.eckert2 at arcor.de (Peter Eckert) Date: Sat, 16 Jun 2012 15:46:54 +0200 Subject: [SciPy-User] Scipy.io loadmat/savemat problem Message-ID: <4FDC8E4D.1040704@arcor.de> Hello Scipy-Experts, I am experiencing a problem with the scipy loadmat/savemat features. I have a matfile in mat-5 format which was created in Matlab that I want to manipulate. When I open the existing mat-file with loadmat and directly save the data to a new mat-file with savemat, the new file created actually differs from the original file (it seems to be less compressed). See the screenshots below. The new matfile can be read in Matlab without problems, however I actually want to pass the matlab to an executable created with Realtime Workshop with the rtsim option. Starting this with the new matfile creates an error "The values field in the parameters struct is invalid". Does anybody has experience with the difference of the savemat option compared to original Matlab matfiles and how I might change it? Thank you very much for your help! 
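For reference, a minimal sketch of the kind of loadmat/savemat round trip described above (the file names are only placeholders, not the actual files):

from scipy.io import loadmat, savemat

# load the Matlab-created file; loadmat adds bookkeeping entries such as
# '__header__' and '__version__' alongside the real variables
data = loadmat('original_model.mat')

# keep only the actual Matlab variables before re-saving
variables = dict((k, v) for k, v in data.items() if not k.startswith('__'))

savemat('resaved_model.mat', variables)
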
Regards, Peter Python: 2.7 Scipy: 0.9.0 Matlab: 2009SP2 Original Matlab file After loadmat and savemat: -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: cbjdfcee.png Type: image/png Size: 7795 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: dcajhbdh.png Type: image/png Size: 43165 bytes Desc: not available URL: From matthew.brett at gmail.com Sun Jun 17 01:39:46 2012 From: matthew.brett at gmail.com (Matthew Brett) Date: Sat, 16 Jun 2012 22:39:46 -0700 Subject: [SciPy-User] Scipy.io loadmat/savemat problem In-Reply-To: <4FDC8E4D.1040704@arcor.de> References: <4FDC8E4D.1040704@arcor.de> Message-ID: Hi, On Sat, Jun 16, 2012 at 6:46 AM, Peter Eckert wrote: > > Hello Scipy-Experts, > > I am experiencing a problem with the scipy loadmat/savemat features. > I have a matfile in mat-5 format which was created in Matlab that I want to manipulate. When I open the existing mat-file with loadmat and directly save the data to a new mat-file with savemat, the new file created actually differs from the original file (it seems to be less compressed). See the screenshots below. > The new matfile can be read in Matlab without problems, however I actually want to pass the matlab to an executable created with Realtime Workshop with the rtsim option. Starting this with the new matfile creates an error "The values field in the parameters struct is invalid". > Does anybody has experience with the difference of the savemat option compared to original Matlab matfiles and how I might change it? I think the difference you see there is due to the lack of default matrix compression in savemat. You can turn it on with the ``do_compression=True`` flag to savemat. I doubt that's the problem though. You could also try the ``matlab_compatible`` flag to loadmat, Best, Matthew From samuelandjw at gmail.com Mon Jun 18 08:03:03 2012 From: samuelandjw at gmail.com (Wu Degang) Date: Mon, 18 Jun 2012 20:03:03 +0800 Subject: [SciPy-User] possible bug in scipy 0.10.1 newton_krylon solver Message-ID: <4FDF18F7.4020809@ust.hk> Hi, I'm using scipy on my macbook pro with mac os x lion through macports (py27-scipy). I'm not sure whether it is a platform-specific problem. 
Here is a program that should reproduces the bug: import numpy as np import math from scipy.optimize import newton_krylov from numpy import zeros_like, zeros from math import log import pickle with open('bug_data','rb') as pkl_file: k_seq = pickle.load(pkl_file) N = len(k_seq) def func(ws): new_lhs = zeros_like(ws) for i in xrange(N): new_lhs[i] = k_seq[i] - (N-1) + np.sum(1/(1+np.exp(ws[i]+ws))) - 1/(1+np.exp(2*ws[i])) return new_lhs guess = zeros(N, float) sol = newton_krylov(func,guess,verbose=1) On my MBP, it produces the following output: 0: |F(x)| = 1308.14; step 1; tol 8.46824e-05 bug.py:17: RuntimeWarning: overflow encountered in exp new_lhs[i] = k_seq[i] - (N-1) + np.sum(1/(1+np.exp(ws[i]+ws))) - 1/(1+np.exp(2*ws[i])) 1: |F(x)| = 181529; step 1; tol 0.9999 2: |F(x)| = 128432; step 1; tol 0.89982 3: |F(x)| = 128000; step 1; tol 0.893968 4: |F(x)| = 89085; step 1; tol 0.719261 /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/sparse/linalg/isolve/lgmres.py:264: RuntimeWarning: invalid value encountered in divide outer_v.append((dx/nx, None)) Traceback (most recent call last): File "bug.py", line 22, in sol = newton_krylov(func,guess,verbose=1) File "", line 8, in newton_krylov File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/optimize/nonlin.py", line 297, in nonlin_solve raise ValueError("Jacobian inversion yielded zero vector. " ValueError: Jacobian inversion yielded zero vector. This indicates a bug in the Jacobian approximation. To data file (a pickle file actually) can be found here (I don't know whether it is possible to upload file, perhaps attachment?): https://dl.dropbox.com/u/7383429/bug_data. It contains a numpy array. I also have a five-year old laptop running xubuntu 12.04 with scipy 0.9. The same program and the same data gives a very different output: 0: |F(x)| = 32140.1; step 1; tol 0.0511187 1: |F(x)| = 11196.2; step 1; tol 0.109217 2: |F(x)| = 4105.06; step 1; tol 0.120988 3: |F(x)| = 1787.21; step 1; tol 0.170589 4: |F(x)| = 574.511; step 1; tol 0.0930013 5: |F(x)| = 209.088; step 1; tol 0.119207 6: |F(x)| = 68.6437; step 1; tol 0.0970035 7: |F(x)| = 20.731; step 1; tol 0.0820881 8: |F(x)| = 3.55224; step 1; tol 0.0264245 9: |F(x)| = 0.109991; step 1; tol 0.000862892 10: |F(x)| = 7.99967e-05; step 1; tol 4.76067e-07 11: |F(x)| = 4.1262e-08; step 1; tol 2.39441e-07 It runs successfully. Thanks. Wu Degang From nwagner at iam.uni-stuttgart.de Mon Jun 18 10:11:56 2012 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 18 Jun 2012 16:11:56 +0200 Subject: [SciPy-User] site.cfg Message-ID: Hi all, I would like to configure NumPy, SciPy such that my (optimized) BLAS/LAPACK libraries are not used. How can I do that by using site.cfg ? Nils From semi at bgu.ac.il Sun Jun 17 01:48:16 2012 From: semi at bgu.ac.il (Semi) Date: Sun, 17 Jun 2012 08:48:16 +0300 Subject: [SciPy-User] import scipy.stats undefined symbol: clapack_sgesv Message-ID: <4FDD6FA0.4050608@bgu.ac.il> Please help! Python 2.6.6 (r266:84292, May 20 2011, 16:42:11) [GCC 4.4.5 20110214 (Red Hat 4.4.5-6)] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
>>> *import scipy.stats* Traceback (most recent call last): File "", line 1, in File "/usr/lib/python2.6/site-packages/scipy-0.10.1-py2.6-linux-x86_64.egg/scipy/stats/__init__.py", line 322, in from stats import * File "/usr/lib/python2.6/site-packages/scipy-0.10.1-py2.6-linux-x86_64.egg/scipy/stats/stats.py", line 194, in import scipy.linalg as linalg File "/usr/lib/python2.6/site-packages/scipy-0.10.1-py2.6-linux-x86_64.egg/scipy/linalg/__init__.py", line 116, in from basic import * File "/usr/lib/python2.6/site-packages/scipy-0.10.1-py2.6-linux-x86_64.egg/scipy/linalg/basic.py", line 12, in from lapack import get_lapack_funcs File "/usr/lib/python2.6/site-packages/scipy-0.10.1-py2.6-linux-x86_64.egg/scipy/linalg/lapack.py", line 15, in from scipy.linalg import clapack ImportError: /usr/lib/python2.6/site-packages/scipy-0.10.1-py2.6-linux-x86_64.egg/scipy/linalg/clapack.so:*undefined symbol: clapack_sgesv* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nils106 at googlemail.com Mon Jun 18 09:25:18 2012 From: nils106 at googlemail.com (Nils Wagner) Date: Mon, 18 Jun 2012 15:25:18 +0200 Subject: [SciPy-User] site.cfg Message-ID: Hi all, I would like to configure NumPy, SciPy such that my (optimized) BLAS/LAPACK libraries are not used. How can I do that by using site.cfg ? Nils From pav at iki.fi Mon Jun 18 14:36:50 2012 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 18 Jun 2012 20:36:50 +0200 Subject: [SciPy-User] possible bug in scipy 0.10.1 newton_krylon solver In-Reply-To: <4FDF18F7.4020809@ust.hk> References: <4FDF18F7.4020809@ust.hk> Message-ID: Hi, 18.06.2012 14:03, Wu Degang kirjoitti: > I'm using scipy on my macbook pro with mac os x lion through macports > (py27-scipy). I'm not sure whether it is a platform-specific problem. > Here is a program that should reproduces the bug: [clip] Thanks for the report. Based on the output, it looks like you are actually running the code as: sol = newton_krylov(func,guess,line_search=False,verbose=1) Note the value of the `line_search` parameter. I see this only if the line search is disabled (regardless of Scipy version), as above, although then the code wants to stick with a small step size in the beginning and converges somewhat slowly. The cause is probably that when line search is disabled, the strong nonlinearity due to the exp functions causes the optimizer to get lost in a "flat" region of the function. The finite-difference Jacobian matrix estimate then likely is zero, which causes the Krylov solver to spit out a zero vector (and an error code, which is ignored by the Jacobian estimator). When the solver reaches such a state, it cannot easily recover, hence the failure. This may be a feature. I do not at the moment see useful heuristics that would help here. That it works differently on different machines is possibly that the problem is sensitive to rounding error, which differs on different machines. -- Pauli Virtanen From ralf.gommers at googlemail.com Mon Jun 18 16:46:56 2012 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Mon, 18 Jun 2012 22:46:56 +0200 Subject: [SciPy-User] import scipy.stats undefined symbol: clapack_sgesv In-Reply-To: <4FDD6FA0.4050608@bgu.ac.il> References: <4FDD6FA0.4050608@bgu.ac.il> Message-ID: On Sun, Jun 17, 2012 at 7:48 AM, Semi wrote: > Please help! > > Python 2.6.6 (r266:84292, May 20 2011, 16:42:11) > [GCC 4.4.5 20110214 (Red Hat 4.4.5-6)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. 
> >>> *import scipy.stats* > Traceback (most recent call last): > File "", line 1, in > File > "/usr/lib/python2.6/site-packages/scipy-0.10.1-py2.6-linux-x86_64.egg/scipy/stats/__init__.py", > line 322, in > from stats import * > File > "/usr/lib/python2.6/site-packages/scipy-0.10.1-py2.6-linux-x86_64.egg/scipy/stats/stats.py", > line 194, in > import scipy.linalg as linalg > File > "/usr/lib/python2.6/site-packages/scipy-0.10.1-py2.6-linux-x86_64.egg/scipy/linalg/__init__.py", > line 116, in > from basic import * > File > "/usr/lib/python2.6/site-packages/scipy-0.10.1-py2.6-linux-x86_64.egg/scipy/linalg/basic.py", > line 12, in > from lapack import get_lapack_funcs > File > "/usr/lib/python2.6/site-packages/scipy-0.10.1-py2.6-linux-x86_64.egg/scipy/linalg/lapack.py", > line 15, in > from scipy.linalg import clapack > ImportError: > /usr/lib/python2.6/site-packages/scipy-0.10.1-py2.6-linux-x86_64.egg/scipy/linalg/clapack.so: > * undefined symbol: clapack_sgesv* > This should be due to a broken LAPACK / ATLAS install. See for example http://projects.scipy.org/scipy/ticket/1031. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From sturla at molden.no Tue Jun 19 07:33:49 2012 From: sturla at molden.no (Sturla Molden) Date: Tue, 19 Jun 2012 13:33:49 +0200 Subject: [SciPy-User] import scipy.stats undefined symbol: clapack_sgesv In-Reply-To: <4FDD6FA0.4050608@bgu.ac.il> References: <4FDD6FA0.4050608@bgu.ac.il> Message-ID: <4FE0639D.9030209@molden.no> On 17.06.2012 07:48, Semi wrote: > /usr/lib/python2.6/site-packages/scipy-0.10.1-py2.6-linux-x86_64.egg/scipy/linalg/clapack.so:*undefined > symbol: clapack_sgesv* Does SciPy use clapack? Isn't clapack abandonware in favour of lapacke? Sturla From bala.biophysics at gmail.com Tue Jun 19 11:51:35 2012 From: bala.biophysics at gmail.com (Bala subramanian) Date: Tue, 19 Jun 2012 17:51:35 +0200 Subject: [SciPy-User] autocorrelation Message-ID: Friends, I need to calculate the autocorrelation of my data. How can i do the same in scipy. I want to make a plot similar to that shown in the following link. autocorrelation of the data for a user input time lag. http://www.itl.nist.gov/div898/handbook/eda/section3/autocopl.htm -- C. Balasubramanian -------------- next part -------------- An HTML attachment was scrubbed... URL: From deshpande.jaidev at gmail.com Tue Jun 19 12:03:33 2012 From: deshpande.jaidev at gmail.com (Jaidev Deshpande) Date: Tue, 19 Jun 2012 21:33:33 +0530 Subject: [SciPy-User] autocorrelation In-Reply-To: References: Message-ID: On Tue, Jun 19, 2012 at 9:21 PM, Bala subramanian wrote: > Friends, > I need to calculate the autocorrelation of my data. How can i do the same in > scipy. > > I want to make a plot similar to that shown in the following link. > autocorrelation of the data for a user input time lag. > > http://www.itl.nist.gov/div898/handbook/eda/section3/autocopl.htm > > -- > C. Balasubramanian > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > Hi, You can use the numpy.corrcoef function. Please refer to this http://docs.scipy.org/doc/numpy/reference/generated/numpy.corrcoef.html#numpy.corrcoef The plotting can be easily done with matplotlib. If you want the figure exactly as in the link you provided, you can use the xlabel and ylabel functions for labeling the axes, and the axis command to set the range of the X and Y axes. 
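A minimal sketch along those lines, treating the autocorrelation at lag k as the correlation between the series and a copy of itself shifted by k samples (the data file name is only a placeholder for your own data):

import numpy as np
import matplotlib.pyplot as plt

y = np.loadtxt('flicker.dat')  # placeholder data file
nlags = 250

# lag 0 is 1 by definition; higher lags via np.corrcoef on the shifted series
acf = [1.0] + [np.corrcoef(y[:-k], y[k:])[0, 1] for k in range(1, nlags + 1)]

plt.bar(np.arange(nlags + 1), acf, width=1.0)
plt.xlabel('Lag')
plt.ylabel('Autocorrelation')
plt.axis([0, nlags, -1.0, 1.0])
plt.show()
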
Regards From jsseabold at gmail.com Tue Jun 19 12:18:34 2012 From: jsseabold at gmail.com (Skipper Seabold) Date: Tue, 19 Jun 2012 12:18:34 -0400 Subject: [SciPy-User] autocorrelation In-Reply-To: References: Message-ID: On Tue, Jun 19, 2012 at 12:03 PM, Jaidev Deshpande < deshpande.jaidev at gmail.com> wrote: > On Tue, Jun 19, 2012 at 9:21 PM, Bala subramanian > wrote: > > Friends, > > I need to calculate the autocorrelation of my data. How can i do the > same in > > scipy. > > > > I want to make a plot similar to that shown in the following link. > > autocorrelation of the data for a user input time lag. > > > > http://www.itl.nist.gov/div898/handbook/eda/section3/autocopl.htm > > > > -- > > C. Balasubramanian > > > > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > Hi, > > You can use the numpy.corrcoef function. Please refer to this > > http://docs.scipy.org/doc/numpy/reference/generated/numpy.corrcoef.html#numpy.corrcoef > > The plotting can be easily done with matplotlib. If you want the > figure exactly as in the link you provided, you can use the xlabel and > ylabel functions for labeling the axes, and the axis command to set > the range of the X and Y axes. > You could also use statsmodels. I don't think we have a convenience function yet for plotting ACF (pull requests welcome), but you can do something like (using the FLICKER.DAT from NIST in that example) import statsmodels.api as sm import matplotlib.pyplot as plt plt.interactive(False) fig = plt.figure() ax = fig.add_subplot(111) acf = sm.tsa.acf(y, nlags=250) ax.bar(np.arange(251), acf) plt.show() -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Tue Jun 19 12:29:19 2012 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 19 Jun 2012 12:29:19 -0400 Subject: [SciPy-User] autocorrelation In-Reply-To: References: Message-ID: On Tue, Jun 19, 2012 at 12:18 PM, Skipper Seabold wrote: > On Tue, Jun 19, 2012 at 12:03 PM, Jaidev Deshpande > wrote: >> >> On Tue, Jun 19, 2012 at 9:21 PM, Bala subramanian >> wrote: >> > Friends, >> > I need to calculate the autocorrelation of my data. How can i do the >> > same in >> > scipy. >> > >> > I want to make a plot similar to that shown in the following link. >> > autocorrelation of the data for a user input time lag. >> > >> > http://www.itl.nist.gov/div898/handbook/eda/section3/autocopl.htm >> > >> > -- >> > C. Balasubramanian >> > >> > >> > _______________________________________________ >> > SciPy-User mailing list >> > SciPy-User at scipy.org >> > http://mail.scipy.org/mailman/listinfo/scipy-user >> > >> >> Hi, >> >> You can use the numpy.corrcoef function. Please refer to this >> >> http://docs.scipy.org/doc/numpy/reference/generated/numpy.corrcoef.html#numpy.corrcoef >> >> The plotting can be easily done with matplotlib. If you want the >> figure exactly as in the link you provided, you can use the xlabel and >> ylabel functions for labeling the axes, and the axis command to set >> the range of the X and Y axes. > > > You could also use statsmodels. I don't think we have a convenience function > yet for plotting ACF (pull requests welcome), but you can do something like > (using the FLICKER.DAT from NIST in that example) We have it somewhere but I cannot find it in the docs. Pandas also has it or will get it as part of GSOC. 
Josef > > import statsmodels.api as sm > import matplotlib.pyplot as plt > > plt.interactive(False) > fig = plt.figure() > ax = fig.add_subplot(111) > > acf = sm.tsa.acf(y, nlags=250) > > ax.bar(np.arange(251), acf) > plt.show() > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From bala.biophysics at gmail.com Tue Jun 19 12:58:16 2012 From: bala.biophysics at gmail.com (Bala subramanian) Date: Tue, 19 Jun 2012 18:58:16 +0200 Subject: [SciPy-User] autocorrelation In-Reply-To: References: Message-ID: Thanks a lot, that solved the problem. This is what i am trying to get. On Tue, Jun 19, 2012 at 6:29 PM, wrote: > On Tue, Jun 19, 2012 at 12:18 PM, Skipper Seabold > wrote: > > On Tue, Jun 19, 2012 at 12:03 PM, Jaidev Deshpande > > wrote: > >> > >> On Tue, Jun 19, 2012 at 9:21 PM, Bala subramanian > >> wrote: > >> > Friends, > >> > I need to calculate the autocorrelation of my data. How can i do the > >> > same in > >> > scipy. > >> > > >> > I want to make a plot similar to that shown in the following link. > >> > autocorrelation of the data for a user input time lag. > >> > > >> > http://www.itl.nist.gov/div898/handbook/eda/section3/autocopl.htm > >> > > >> > -- > >> > C. Balasubramanian > >> > > >> > > >> > _______________________________________________ > >> > SciPy-User mailing list > >> > SciPy-User at scipy.org > >> > http://mail.scipy.org/mailman/listinfo/scipy-user > >> > > >> > >> Hi, > >> > >> You can use the numpy.corrcoef function. Please refer to this > >> > >> > http://docs.scipy.org/doc/numpy/reference/generated/numpy.corrcoef.html#numpy.corrcoef > >> > >> The plotting can be easily done with matplotlib. If you want the > >> figure exactly as in the link you provided, you can use the xlabel and > >> ylabel functions for labeling the axes, and the axis command to set > >> the range of the X and Y axes. > > > > > > You could also use statsmodels. I don't think we have a convenience > function > > yet for plotting ACF (pull requests welcome), but you can do something > like > > (using the FLICKER.DAT from NIST in that example) > > We have it somewhere but I cannot find it in the docs. > Pandas also has it or will get it as part of GSOC. > > Josef > > > > > > import statsmodels.api as sm > > import matplotlib.pyplot as plt > > > > plt.interactive(False) > > fig = plt.figure() > > ax = fig.add_subplot(111) > > > > acf = sm.tsa.acf(y, nlags=250) > > > > ax.bar(np.arange(251), acf) > > plt.show() > > > > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- C. Balasubramanian -------------- next part -------------- An HTML attachment was scrubbed... URL: From david_baddeley at yahoo.com.au Tue Jun 19 14:53:22 2012 From: david_baddeley at yahoo.com.au (David Baddeley) Date: Tue, 19 Jun 2012 11:53:22 -0700 (PDT) Subject: [SciPy-User] (no subject) Message-ID: <1340132002.41558.YahooMailNeo@web113410.mail.gq1.yahoo.com> http://www.raffivartanian.com/avo/wp-content/themes/distinctive/googles.html -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From david_baddeley at yahoo.com.au Tue Jun 19 17:24:31 2012 From: david_baddeley at yahoo.com.au (David Baddeley) Date: Tue, 19 Jun 2012 14:24:31 -0700 (PDT) Subject: [SciPy-User] Apologies for the spam Message-ID: <1340141071.84781.YahooMailNeo@web113405.mail.gq1.yahoo.com> My account has been hacked and the last email was spam. The account should now be back under my control, many?apologies?for the inconvenience. cheers, David -------------- next part -------------- An HTML attachment was scrubbed... URL: From peter.eckert2 at arcor.de Mon Jun 18 13:27:46 2012 From: peter.eckert2 at arcor.de (peter.eckert2 at arcor.de) Date: Mon, 18 Jun 2012 19:27:46 +0200 (CEST) Subject: [SciPy-User] Scipy.io loadmat/savemat problem Message-ID: <1462769343.115417.1340040466823.JavaMail.ngmail@webmail13.arcor-online.net> Hi Matthew, the "matlab_compatible=True" flag corrected the problems. Thanks a lot! Regards Peter From samuelandjw at gmail.com Tue Jun 19 19:28:07 2012 From: samuelandjw at gmail.com (Wu Degang) Date: Wed, 20 Jun 2012 07:28:07 +0800 Subject: [SciPy-User] possible bug in scipy 0.10.1 newton_krylon solver Message-ID: <4FE10B07.4000207@ust.hk> Pauli Virtanen wrote: > Based on the output, it looks like you are actually running the code as: > > sol = newton_krylov(func,guess,line_search=False,verbose=1) > > Note the value of the `line_search` parameter. I have just checked the document (for 0.10.1) that comes with the release, and I found that " line_search : {None, 'armijo' (default), 'wolfe'}, optional Which type of a line search to use to determine the step size in the direction given by the Jacobian approximation. Defaults to 'armijo'." So the default is 'armijo'. I tried to use line_search=None, and line_search = 'wolfe' (also for 0.10.1). The output was exactly the same. bash-3.2$ python entropy_1_und_step1.py 0: |F(x)| = 1308.14; step 1; tol 8.46824e-05 entropy_1_und_step1.py:39: RuntimeWarning: overflow encountered in exp new_lhs[i] = k_seq[i] - (N-1) + np.sum(1/(1+np.exp(ws[i]+ws))) - 1/(1+np.exp(2*ws[i])) 1: |F(x)| = 181529; step 1; tol 0.9999 2: |F(x)| = 128432; step 1; tol 0.89982 3: |F(x)| = 128000; step 1; tol 0.893968 4: |F(x)| = 89085; step 1; tol 0.719261 /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/sparse/linalg/isolve/lgmres.py:264: RuntimeWarning: invalid value encountered in divide outer_v.append((dx/nx, None)) Traceback (most recent call last): File "entropy_1_und_step1.py", line 44, in sol = newton_krylov(func,guess,line_search=None,verbose=1) File "", line 8, in newton_krylov File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/optimize/nonlin.py", line 297, in nonlin_solve raise ValueError("Jacobian inversion yielded zero vector. " ValueError: Jacobian inversion yielded zero vector. This indicates a bug in the Jacobian approximation. 
bash-3.2$ python entropy_1_und_step1.py 0: |F(x)| = 1308.14; step 1; tol 8.46824e-05 entropy_1_und_step1.py:39: RuntimeWarning: overflow encountered in exp new_lhs[i] = k_seq[i] - (N-1) + np.sum(1/(1+np.exp(ws[i]+ws))) - 1/(1+np.exp(2*ws[i])) 1: |F(x)| = 181529; step 1; tol 0.9999 2: |F(x)| = 128432; step 1; tol 0.89982 3: |F(x)| = 128000; step 1; tol 0.893968 4: |F(x)| = 89085; step 1; tol 0.719261 /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/sparse/linalg/isolve/lgmres.py:264: RuntimeWarning: invalid value encountered in divide outer_v.append((dx/nx, None)) Traceback (most recent call last): File "entropy_1_und_step1.py", line 44, in sol = newton_krylov(func,guess,line_search='wolfe',verbose=1) File "", line 8, in newton_krylov File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/optimize/nonlin.py", line 297, in nonlin_solve raise ValueError("Jacobian inversion yielded zero vector. " ValueError: Jacobian inversion yielded zero vector. This indicates a bug in the Jacobian approximation. From russ at perspexis.com Tue Jun 19 23:36:19 2012 From: russ at perspexis.com (Russell Warren) Date: Tue, 19 Jun 2012 23:36:19 -0400 Subject: [SciPy-User] Problems compiling NumPy on linux with a compiled lapack/blas Message-ID: I'm trying to compile numpy from source with both lapack and BLAS compiled from source as well. I've got full details below, but my compilation method is based mostly on guidelines from these links: http://www.scipy.org/Installing_SciPy/BuildingGeneral http://www.scipy.org/Installing_SciPy/Linux I'm trying to compile in a portable way, mostly with the use of -march=i486 -mtune=i686, since I'm trying to make this as a portable extension for a minimal linux install based on tinycore linux. I don't expect/think that this is particularly relevant, but one outcome of it is that I'm doing everything from source... ie: no debian apt packages or other conveniences available. It also means I'm not using ATLAS, since I think it tunes to the local cpu by definition. I can compile numpy with no problems at all if I don't set the LAPACK environment variable, and numpy.test("full") ends up working perfectly fine with no errors. However, when I set the LAPACK environment variable to the path of my generated libflapack.a file, the numpy compilation fails. It is failing at a strange location (when gfortran is trying to generate lapack_lite.so) and in a strange way (it looks like the python header files are missing). Full details: gfortran 4.6.1 gcc 4.6.1 os.name: posix sys.platform: linux2 sys.version: '2.7.3 (default, Jun 19 2012, 16:53:07) \n[GCC 4.6.1]' numpy.version: 1.6.2 uname -a: Linux edms 3.0.21-tinycore #3021 SMP Sat Feb 18 11:54:11 EET 2012 i686 GNU/Linux ATLAS version: NOT USING ATLAS... is this a problem?? I'm compiling BLAS, LAPACK, and NumPy using this command chain: http://static.inky.ws/syn/502 And my failed setup.py output log is here: http://static.inky.ws/text/503 What is happening here? Do I need to use ATLAS? Any help on getting numpy working with lapack is greatly appreciated! -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cournape at gmail.com Tue Jun 19 23:57:35 2012 From: cournape at gmail.com (David Cournapeau) Date: Wed, 20 Jun 2012 04:57:35 +0100 Subject: [SciPy-User] Problems compiling NumPy on linux with a compiled lapack/blas In-Reply-To: References: Message-ID: Le 20 juin 2012 04:36, "Russell Warren" a ?crit : > > I'm trying to compile numpy from source with both lapack and BLAS compiled from source as well. I've got full details below, but my compilation method is based mostly on guidelines from these links: > http://www.scipy.org/Installing_SciPy/BuildingGeneral > http://www.scipy.org/Installing_SciPy/Linux > > I'm trying to compile in a portable way, mostly with the use of -march=i486 -mtune=i686, since I'm trying to make this as a portable extension for a minimal linux install based on tinycore linux. I don't expect/think that this is particularly relevant, but one outcome of it is that I'm doing everything from source... ie: no debian apt packages or other conveniences available. It also means I'm not using ATLAS, since I think it tunes to the local cpu by definition. > > I can compile numpy with no problems at all if I don't set the LAPACK environment variable, and numpy.test("full") ends up working perfectly fine with no errors. > > However, when I set the LAPACK environment variable to the path of my generated libflapack.a file, the numpy compilation fails. It is failing at a strange location (when gfortran is trying to generate lapack_lite.so) and in a strange way (it looks like the python header files are missing). > > Full details: > gfortran 4.6.1 > gcc 4.6.1 > os.name: posix > sys.platform: linux2 > sys.version: '2.7.3 (default, Jun 19 2012, 16:53:07) \n[GCC 4.6.1]' > numpy.version: 1.6.2 > uname -a: Linux edms 3.0.21-tinycore #3021 SMP Sat Feb 18 11:54:11 EET 2012 i686 GNU/Linux > ATLAS version: NOT USING ATLAS... is this a problem?? > > I'm compiling BLAS, LAPACK, and NumPy using this command chain: > http://static.inky.ws/syn/502 > > And my failed setup.py output log is here: > http://static.inky.ws/text/503 > > What is happening here? Do I need to use ATLAS? Any help on getting numpy working with lapack is greatly appreciated! Your log appears incomplete. You definitely don't have to use atlas to build numpy/scipy. David > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From russ at perspexis.com Wed Jun 20 09:55:24 2012 From: russ at perspexis.com (Russell Warren) Date: Wed, 20 Jun 2012 09:55:24 -0400 Subject: [SciPy-User] Problems compiling NumPy on linux with a compiled lapack/blas In-Reply-To: CAGY4rcW6T5c=ScSdcFr1=Oq63_-vD7QyP+6iCXv3grFT0qwR0A@mail.gmail.com Message-ID: <4FE1D64C.8080406@perspexis.com> > Your log appears incomplete. You definitely don't have to use atlas to > build numpy/scipy. Sorry about that... the pastebin clipped the output and I didn't notice. Here is the full dump: http://pastebin.com/raw.php?i=Jy5t1B6S -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cournape at gmail.com Wed Jun 20 12:57:07 2012 From: cournape at gmail.com (David Cournapeau) Date: Wed, 20 Jun 2012 17:57:07 +0100 Subject: [SciPy-User] Problems compiling NumPy on linux with a compiled lapack/blas In-Reply-To: <4FE1D64C.8080406@perspexis.com> References: <4FE1D64C.8080406@perspexis.com> Message-ID: On Wed, Jun 20, 2012 at 2:55 PM, Russell Warren wrote: > > Your log appears incomplete. You definitely don't have to use atlas to > > build numpy/scipy. > > Sorry about that... the pastebin clipped the output and I didn't notice. Here is the full dump:http://pastebin.com/raw.php?i=Jy5t1B6S > > Do you have LDFLAGS defined in your environment ? If so, you should unset it, as it overrides the link options used by distutils. If you want to override LDFLAGS, then you need to add the -shared flag. David -------------- next part -------------- An HTML attachment was scrubbed... URL: From russ at perspexis.com Wed Jun 20 13:49:48 2012 From: russ at perspexis.com (Russell Warren) Date: Wed, 20 Jun 2012 13:49:48 -0400 Subject: [SciPy-User] Problems compiling NumPy on linux with a compiled lapack/blas In-Reply-To: References: <4FE1D64C.8080406@perspexis.com> Message-ID: > > Do you have LDFLAGS defined in your environment ? If so, you should unset > it, as it overrides the link options used by distutils. If you want to > override LDFLAGS, then you need to add the -shared flag. > Thanks very much!! I did have LDFLAGS defined. Unsetting it solved the problem completely and numpy compiles perfectly, and numpy.test("full") passes. As a double-check... my understanding is that numpy.distutils.system_info is the best way to confirm that lapack and blas are, in fact, going to be used. Is that right? My output checking this is below: >>> import numpy.distutils.system_info as sysinfo >>> sysinfo.get_info("lapack") {'libraries': ['flapack'], 'library_dirs': ['/edms/src/lapack-3.4.1'], 'language': 'f77'} >>> sysinfo.get_info("blas") {'libraries': ['fblas'], 'library_dirs': ['/edms/src/BLAS'], 'language': 'f77'} This also seems to indicates that the lapack and blas libraries aren't statically linked, given those library_dirs. Is that correct? If so I should obviously move them to a more permanent/sensible location. Russ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Wed Jun 20 14:06:06 2012 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Wed, 20 Jun 2012 20:06:06 +0200 Subject: [SciPy-User] import scipy.stats undefined symbol: clapack_sgesv In-Reply-To: <4FE0639D.9030209@molden.no> References: <4FDD6FA0.4050608@bgu.ac.il> <4FE0639D.9030209@molden.no> Message-ID: On Tue, Jun 19, 2012 at 1:33 PM, Sturla Molden wrote: > On 17.06.2012 07:48, Semi wrote: > > > > /usr/lib/python2.6/site-packages/scipy-0.10.1-py2.6-linux-x86_64.egg/scipy/linalg/clapack.so:*undefined > > symbol: clapack_sgesv* > > Does SciPy use clapack? Isn't clapack abandonware in favour of lapacke? > Yes, and don't think so. Lapacke has only been available for 1.5 years and isn't widely used as far as I can tell. Clapack has been around for much longer and is more widely used. Lapacke now seems to be endorsed as the new standard on Netlib though, so maybe at some point there may be a reason (and someone who puts in the effort) to switch. Ralf -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From russ at perspexis.com Wed Jun 20 17:16:13 2012 From: russ at perspexis.com (Russell Warren) Date: Wed, 20 Jun 2012 17:16:13 -0400 Subject: [SciPy-User] Inconsistent scipy.test() failures Message-ID: I just tested what I thought was a successful scipy installation and it is failing. What's more, it is failing differently between the first and subsequent runs. FWIW, numpy.test("full") passes with no errors. Should I be concerned about any of these failures? Are scipy test suite failures typical/expected? Results of first scipy.test() run (1 failure, peppered with warnings): http://pastebin.com/raw.php?i=00BrCYNA Results of subsequent scipy.test() runs (1 error, 3 failures, no warnings): http://pastebin.com/raw.php?i=DHcLUjp9 System info: - NumPy version 1.6.2 - SciPy version 0.10.1 - Python version 2.7.3 (default, Jun 19 2012, 16:53:07) [GCC 4.6.1] - nose version 1.1.2 - lapack-3.4.1 (netlib version, compiled from http://www.netlib.org/lapack/lapack.tgz) - gfortran v4.6.1 - gcc v4.6.1 - uname -a = Linux edms 3.0.21-tinycore #3021 SMP Sat Feb 18 11:54:11 EET 2012 i686 GNU/Linux - - os.name: posix - sys.platform: linux2 -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at maths.lth.se Thu Jun 21 05:48:02 2012 From: tony at maths.lth.se (Tony Stillfjord) Date: Thu, 21 Jun 2012 11:48:02 +0200 Subject: [SciPy-User] Inconsistent scipy.test() failures In-Reply-To: References: Message-ID: Hi, Russell's post arrived just as I was in the process of writing a similar question. I get similar results, but my first run passes fine (OK (KNOWNFAIL=13, SKIP=18)) but with some warnings while the next run fails the test_mio.test_mat4_3d and the "Regression test for #651" tests. Subsequent runs seem to give the same 2 failures. I ran these tests in the process of trying to figure out why some wrapped FORTRAN code gives me different results on different computers, so I have also set up two VMs with 32-bit and 64-bit Ubuntu 10.10. (Since I believed the architecture had something to do with it.) On the 32-bit system I don't get the test_ndgriddata.TestGriddata.test_1d_unsorted fail but instead the test_singular in linalg/tests/test_decomp.py fails. I also get FAIL: test_datatypes.test_uint64_max, from ndimage/tests/test_datatypes.py (which I suppose could be expected on a 32-bit system?). On the 64-bit system I get no errors on the first run, then on subsequent runs the test_mio.test_mat4_3d and the regression test for #651 fails. System information: NumPy version 1.6.1 SciPy version 0.10.1 Python version 2.6.6 (r266:84292, Sep 15 2010, 16:22:56) [GCC 4.4.5] nose version 0.11.1 Lapack from package manager's liblapack-dev gfortran v4.4.5 gcc v.4.4.5 uname -a = Linux lambert 2.6.35-32-generic #67-Ubuntu SMP Mon Mar 5 19:39:49 UTC 2012 x86_64 GNU/Linux os.name: posix sys.platform: linux2 The VMs should be roughly the same thing, these are the different uname -a's: Linux Ubuntu101032 2.6.35-32-generic #67-Ubuntu SMP Mon Mar 5 19:35:26 UTC 2012 i686 GNU/Linux Linux Ubuntu101064 2.6.35-32-generic #67-Ubuntu SMP Mon Mar 5 19:39:49 UTC 2012 x86_64 GNU/Linux Best regards, Tony Stillfjord On Wed, Jun 20, 2012 at 11:16 PM, Russell Warren wrote: > I just tested what I thought was a successful scipy installation and it is > failing. What's more, it is failing differently between the first and > subsequent runs. FWIW, numpy.test("full") passes with no errors. > > Should I be concerned about any of these failures? Are scipy test suite > failures typical/expected? 
> > Results of first scipy.test() run (1 failure, peppered with warnings): > http://pastebin.com/raw.php?i=00BrCYNA > > Results of subsequent scipy.test() runs (1 error, 3 failures, no warnings): > http://pastebin.com/raw.php?i=DHcLUjp9 > > System info: > > - NumPy version 1.6.2 > - SciPy version 0.10.1 > - Python version 2.7.3 (default, Jun 19 2012, 16:53:07) [GCC 4.6.1] > - nose version 1.1.2 > - lapack-3.4.1 (netlib version, compiled from > http://www.netlib.org/lapack/lapack.tgz) > - gfortran v4.6.1 > - gcc v4.6.1 > - uname -a = Linux edms 3.0.21-tinycore #3021 SMP Sat Feb 18 11:54:11 > EET 2012 i686 GNU/Linux > - > - os.name: posix > - sys.platform: linux2 > > > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.sinclair.za at gmail.com Thu Jun 21 07:29:13 2012 From: scott.sinclair.za at gmail.com (Scott Sinclair) Date: Thu, 21 Jun 2012 13:29:13 +0200 Subject: [SciPy-User] Inconsistent scipy.test() failures In-Reply-To: References: Message-ID: On 20 June 2012 23:16, Russell Warren wrote: > I just tested what I thought was a successful scipy installation and it is > failing. ?What's more, it is failing differently between the first and > subsequent runs. ?FWIW,?numpy.test("full") passes with no errors. > > Should I be concerned about any of these failures? ?Are scipy test suite > failures typical/expected? > > Results of first scipy.test() run (1 failure, peppered with warnings): > http://pastebin.com/raw.php?i=00BrCYNA > > Results of subsequent scipy.test() runs (1 error, 3 failures, no warnings): > http://pastebin.com/raw.php?i=DHcLUjp9 > > System info: > > NumPy version 1.6.2 > SciPy version 0.10.1 > Python version 2.7.3 (default, Jun 19 2012, 16:53:07) [GCC 4.6.1] > nose version 1.1.2 > lapack-3.4.1 (netlib version, compiled from > http://www.netlib.org/lapack/lapack.tgz) > gfortran v4.6.1 > gcc v4.6.1 > uname -a =?Linux edms 3.0.21-tinycore #3021 SMP Sat Feb 18 11:54:11 EET 2012 > i686 GNU/Linux > os.name: posix > sys.platform: linux2 I don't think this is actually a problem. I only see this behaviour if I run scipy.test() multiple times in the *same* Python interpreter instance. As I understand it, the warnings that are being tested for in 'test_mio.test_mat4_3d' and 'Regression test for #651: better handling of badly conditioned' are only raised the first time they are encountered in a given Python session - so those two tests will obviously fail if you run scipy.test() multiple times. I don't see the other failures, which look like minor numerical precision problems (maybe due to cache effects?), I wouldn't worry too much about those either. Cheers, Scott From yury at shurup.com Thu Jun 21 09:18:38 2012 From: yury at shurup.com (Yury V. Zaytsev) Date: Thu, 21 Jun 2012 15:18:38 +0200 Subject: [SciPy-User] NumPy C API: where does casting pointer-to-object to pointer-to-function come from? Message-ID: <1340284718.15289.44.camel@newpride> Hi, I'm trying to get rid of warnings in my C++ code that extensively uses NumPy C API. Unfortunately, with gcc 4.6 / -Wall -pedantic, NumPy 1.6.1 on Fedora Core 16 I get lots of warnings of this kind: warning: ISO C++ forbids casting between pointer-to-function and pointer-to-object [enabled by default] It is generated by this kind of lines: PyObject* pObj; ... PyArray_Descr* typecode = PyArray_DescrFromScalar(pObj); // bang! ... 
int val; PyArray_ScalarAsCtype(pObj, &val); // bang! ... int size = PyArray_Size(pObj); // bang! ... and so on. This is highly surprising to me, because the signatures of the functions are as follows according to the documentation: http://docs.scipy.org/doc/numpy/reference/c-api.array.html PyArray_Descr* PyArray_DescrFromScalar(PyObject* scalar) void PyArray_ScalarAsCtype(PyObject* scalar, void* ctypeptr) npy_intp PyArray_Size(PyArrayObject* obj) I suspect that I can mute these warnings with re-interpret cast gcc extension as described here: http://www.mr-edd.co.uk/blog/supressing_gcc_warnings but I don't get it what to cast to what and where all this is coming from. As it seems to be from the function signatures there shouldn't be any problems with type conversion here?! Any hints would be highly appreciated, -- Sincerely yours, Yury V. Zaytsev From njs at pobox.com Thu Jun 21 11:06:01 2012 From: njs at pobox.com (Nathaniel Smith) Date: Thu, 21 Jun 2012 16:06:01 +0100 Subject: [SciPy-User] NumPy C API: where does casting pointer-to-object to pointer-to-function come from? In-Reply-To: <1340284718.15289.44.camel@newpride> References: <1340284718.15289.44.camel@newpride> Message-ID: On Thu, Jun 21, 2012 at 2:18 PM, Yury V. Zaytsev wrote: > Hi, > > I'm trying to get rid of warnings in my C++ code that extensively uses > NumPy C API. Unfortunately, with gcc 4.6 / -Wall -pedantic, NumPy 1.6.1 > on Fedora Core 16 I get lots of warnings of this kind: > > ? ?warning: ISO C++ forbids casting between pointer-to-function and pointer-to-object [enabled by default] > > It is generated by this kind of lines: > > ? ?PyObject* pObj; > ? ?... > ? ?PyArray_Descr* typecode = PyArray_DescrFromScalar(pObj); // bang! > ? ?... > ? ?int val; > ? ?PyArray_ScalarAsCtype(pObj, &val); // bang! > ? ?... > ? ?int size = PyArray_Size(pObj); // bang! > > ... and so on. > > This is highly surprising to me, because the signatures of the functions > are as follows according to the documentation: > > ? ?http://docs.scipy.org/doc/numpy/reference/c-api.array.html > > ? ?PyArray_Descr* PyArray_DescrFromScalar(PyObject* scalar) > ? ?void PyArray_ScalarAsCtype(PyObject* scalar, void* ctypeptr) > ? ?npy_intp PyArray_Size(PyArrayObject* obj) > > I suspect that I can mute these warnings with re-interpret cast gcc > extension as described here: > > http://www.mr-edd.co.uk/blog/supressing_gcc_warnings > > but I don't get it what to cast to what and where all this is coming > from. As it seems to be from the function signatures there shouldn't be > any problems with type conversion here?! I suspect that the offending cast here is the one that numpy is doing internally via a macro. If you look at __multiarray_api.h, you'll see that when you write PyArray_DescrFromScalar and friends you are not actually doing a simple function call: #define PyArray_DescrFromScalar \ (*(PyArray_Descr * (*)(PyObject *)) \ PyArray_API[57]) Here PyArray_API is an array of void* which gets populated when you call import_array(). The reason things work this way are too tiresome to explain in detail, but it's needed to make dynamic linking play well with Python dynamic module loading. Standard C++ apparently does not guarantee that you can cast a void* to a function pointer: http://stackoverflow.com/questions/1096341/function-pointers-casting-in-c I guess the solution would be to come up with some more elaborate cast that avoids this warning and works with both C and C++, then teach numpy/core/code_generators/* to produce it. 
-N From sjm.guzman at googlemail.com Thu Jun 21 11:20:27 2012 From: sjm.guzman at googlemail.com (Jose Guzman) Date: Thu, 21 Jun 2012 17:20:27 +0200 Subject: [SciPy-User] scipy.integrate.odeint returns zeros when changing function Message-ID: <4FE33BBB.1070407@gmail.com> Hello everybody! I am trying to solve a basic 2-state kinetic scheme of the form: # Two-state kinetic model # # alpha*[cc] # (1-n) ----------> n # <---------- # beta where [cc] is a function step function. To solve it numerically, I used scipy.integrate.odeint. Everything works fine if I define a step function that takes the value 1 between 2 and 3 . from numpy import linspace from scipy.integrate import odeint # define a square pulse with the concentration of Glu def pulse(t): """ t in ms """ if t<=2 or t>=3: return 0.0 else: return 1.0 # in mM # independent variable time = linspace(0,10, 1000) # 10 ms # define differential equation def diff(n, t, alpha, beta): """ alpha in 1/(mM*ms) beta in 1/ms dn/dt = alpha*cc*(1-n) - beta*n """ cc = pulse(t) # square pulse return alpha*cc*(1-n)-beta*n # and solve it for alpha = 1 and beta = 1 y = odeint(func = diff, y0=0, t= time, args=(1,1)) However, if i change the pulse function to return 1 between, say 5 and 6 the solution to the differential equation returns 0 for all the interval. I cannot figure out why. # new function def pulse(t): """ t in ms """ if t<=5 or t>=6: return 0.0 else: return 1.0 # in mM y = odeint(func = diff, y0=0, t= time, args=(1,1)) # returns all zeros I would greatly appreciate some help here Best Jose -------------- next part -------------- An HTML attachment was scrubbed... URL: From warren.weckesser at enthought.com Thu Jun 21 11:35:23 2012 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Thu, 21 Jun 2012 10:35:23 -0500 Subject: [SciPy-User] scipy.integrate.odeint returns zeros when changing function In-Reply-To: <4FE33BBB.1070407@gmail.com> References: <4FE33BBB.1070407@gmail.com> Message-ID: On Thu, Jun 21, 2012 at 10:20 AM, Jose Guzman wrote: > Hello everybody! > > I am trying to solve a basic 2-state kinetic scheme of the form: > > # Two-state kinetic model > # > #??????? alpha*[cc] > # (1-n) ----------> n > #?????? <---------- > #????????? beta > > > where [cc] is a function step function. To solve it numerically, I used > scipy.integrate.odeint. Everything works fine if I define a step function > that takes the value 1 between 2 and 3 . > > > from numpy import linspace > from scipy.integrate import odeint > > # define a square pulse with the concentration of Glu > > def pulse(t): > ??? """ t in ms """ > ??? if t<=2 or t>=3: > ??????? return 0.0 > ??? else: > ??????? return 1.0 # in mM > > # independent variable > time = linspace(0,10, 1000) # 10 ms > > > # define differential equation > def diff(n, t, alpha, beta): > ??? """ > ??? alpha in 1/(mM*ms) > ??? beta in 1/ms > ??? dn/dt = alpha*cc*(1-n) - beta*n > ??? """ > ??? cc = pulse(t) # square pulse > ??? return alpha*cc*(1-n)-beta*n > > # and solve it for alpha = 1 and beta = 1 > y = odeint(func = diff, y0=0, t= time, args=(1,1)) > > > However, if i change the pulse function to return 1 between, say 5 and 6 the > solution to the differential equation returns 0 for all the interval. I > cannot figure out why. > > # new function > def pulse(t): > ??? """ t in ms """ > ??? if t<=5 or t>=6: > ??????? return 0.0 > ??? else: > ??????? 
return 1.0 # in mM > y = odeint(func = diff, y0=0, t= time, args=(1,1)) # returns all zeros > > I would greatly appreciate some help here > Short answer: try adding the keyword argument hmax=0.1 in your call to odeint. Longer answer: If the pulse were not there, the exact solution to your problem (with initial condition 0) is y(t) = 0. The solver is adaptive, and with a solution this nice, it will quickly adapt to very large step sizes. It is likely that the solver has simply skipped right past your pulse by taking a step greater than 1 time unit. Setting hmax=0.1 tells the solver to never take a step larger than 0.1. (I'll refrain from my usual advice against using discontinuous functions in a solver that is based on certain smoothness assumptions...) Warren > Best > > Jose > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From sjm.guzman at googlemail.com Thu Jun 21 11:50:05 2012 From: sjm.guzman at googlemail.com (Jose Guzman) Date: Thu, 21 Jun 2012 17:50:05 +0200 Subject: [SciPy-User] scipy.integrate.odeint returns zeros when changing function In-Reply-To: References: <4FE33BBB.1070407@gmail.com> Message-ID: <4FE342AD.8090702@gmail.com> On 21/06/12 17:35, Warren Weckesser wrote: > On Thu, Jun 21, 2012 at 10:20 AM, Jose Guzman wrote: >> Hello everybody! >> >> I am trying to solve a basic 2-state kinetic scheme of the form: >> >> # Two-state kinetic model >> # >> # alpha*[cc] >> # (1-n) ----------> n >> #<---------- >> # beta >> >> >> where [cc] is a function step function. To solve it numerically, I used >> scipy.integrate.odeint. Everything works fine if I define a step function >> that takes the value 1 between 2 and 3 . >> >> >> from numpy import linspace >> from scipy.integrate import odeint >> >> # define a square pulse with the concentration of Glu >> >> def pulse(t): >> """ t in ms """ >> if t<=2 or t>=3: >> return 0.0 >> else: >> return 1.0 # in mM >> >> # independent variable >> time = linspace(0,10, 1000) # 10 ms >> >> >> # define differential equation >> def diff(n, t, alpha, beta): >> """ >> alpha in 1/(mM*ms) >> beta in 1/ms >> dn/dt = alpha*cc*(1-n) - beta*n >> """ >> cc = pulse(t) # square pulse >> return alpha*cc*(1-n)-beta*n >> >> # and solve it for alpha = 1 and beta = 1 >> y = odeint(func = diff, y0=0, t= time, args=(1,1)) >> >> >> However, if i change the pulse function to return 1 between, say 5 and 6 the >> solution to the differential equation returns 0 for all the interval. I >> cannot figure out why. >> >> # new function >> def pulse(t): >> """ t in ms """ >> if t<=5 or t>=6: >> return 0.0 >> else: >> return 1.0 # in mM >> y = odeint(func = diff, y0=0, t= time, args=(1,1)) # returns all zeros >> >> I would greatly appreciate some help here >> > > Short answer: try adding the keyword argument hmax=0.1 in your call to odeint. > > Longer answer: If the pulse were not there, the exact solution to > your problem (with initial condition 0) is y(t) = 0. The solver is > adaptive, and with a solution this nice, it will quickly adapt to very > large step sizes. It is likely that the solver has simply skipped > right past your pulse by taking a step greater than 1 time unit. > Setting hmax=0.1 tells the solver to never take a step larger than > 0.1. Wov, this was absolutely amazing! I was not aware that odeint uses an adaptive ODE solver. Just one more question. 
Now I am applying a complex pulse: def pulse(t): """ t in ms """ if t<=1 or t>=2: return 0.0 else: return 1.0 # in mM # construct a 10 times pulse step = np.linspace(0,10, 1000) p = list() # 10 pulses for i in range(10): p += [pulse(t) for t in step] time = np.linspace(0, 10*10, 1000*10) However, the solver only shows the first peak: y = odeint(func = diff_eq, y0=0, t= time, args=(4,1), hmax=0.01) you can have a look here: http://aleph.sagemath.org/?q=3e9bc8e9-dbb0-4a1d-b495-dbd93dd6e594 Is there any other option in odeint that I should be awared of? Should I use another ODE solver? Thanks a lot Warren! I really appreciate your help! Jose From yury at shurup.com Thu Jun 21 12:51:28 2012 From: yury at shurup.com (Yury V. Zaytsev) Date: Thu, 21 Jun 2012 18:51:28 +0200 Subject: [SciPy-User] NumPy C API: where does casting pointer-to-object to pointer-to-function come from? In-Reply-To: References: <1340284718.15289.44.camel@newpride> Message-ID: <1340297488.15289.87.camel@newpride> Hi Nathaniel, On Thu, 2012-06-21 at 16:06 +0100, Nathaniel Smith wrote: > I suspect that the offending cast here is the one that numpy is doing > internally via a macro. If you look at __multiarray_api.h, you'll see > that when you write PyArray_DescrFromScalar and friends you are not > actually doing a simple function call: Thanks a lot for your post, your explanation is very concise and helpful! So it seems that these warnings are technically not in *my* code, but rather come from macro expansions... > #define PyArray_DescrFromScalar \ > (*(PyArray_Descr * (*)(PyObject *)) \ > PyArray_API[57]) > > Here PyArray_API is an array of void* which gets populated when you > call import_array(). The reason things work this way are too tiresome > to explain in detail, but it's needed to make dynamic linking play > well with Python dynamic module loading. I know that my lack of knowledge about the details of NumPy internals mostly prevents me from making any suggestions that would make sense (if only by accident), but I wonder whether it could be possible to replace the array with individual variables of correct type like PyArray_API_57, etc. since the final API is anyways created by the code generator...? > Standard C++ apparently does not guarantee that you can cast a void* > to a function pointer Right, the problem kind of doesn't exist in C, because POSIX requires data and function pointers to be of the same size, otherwise the platform is just not POSIX-compatible, so you'll get no warnings there. In C++, however, it's not guaranteed, and so you get the warning, but AFAIK there is no clean workaround :-/ > I guess the solution would be to come up with some more elaborate cast > that avoids this warning and works with both C and C++, then teach > numpy/core/code_generators/* to produce it. For gcc there is an extension to reinterpret_cast that silences the warning if activated (see web page that I've mentioned earlier), so I guess it's a matter making it generate something along the lines of: #ifdef __cplusplus && __GNUC__ __extension__ #define PyArray_DescrFromScalar \ reinterpret_cast<*(PyArray_Descr * (*)(PyObject *))> \ (PyArray_API[57]) #else #define PyArray_DescrFromScalar \ (*(PyArray_Descr * (*)(PyObject *)) \ PyArray_API[57]) #endif Looks like if the above is not acceptable, we are stuck, and it's quite a bit of a hassle to silence all these warnings in my code. 
The union-based solutions only guarantee perfectly undefined behavior :-( I guess the only option will be then to filter warnings from the build logs at the CI level... -- Sincerely yours, Yury V. Zaytsev From sjm.guzman at googlemail.com Fri Jun 22 04:16:19 2012 From: sjm.guzman at googlemail.com (Jose Guzman) Date: Fri, 22 Jun 2012 10:16:19 +0200 Subject: [SciPy-User] scipy.integrate.odeint returns zeros when changing function In-Reply-To: <4FE342AD.8090702@gmail.com> References: <4FE33BBB.1070407@gmail.com> <4FE342AD.8090702@gmail.com> Message-ID: <4FE429D3.60109@gmail.com> On 21/06/12 17:50, Jose Guzman wrote: > On 21/06/12 17:35, Warren Weckesser wrote: >> On Thu, Jun 21, 2012 at 10:20 AM, Jose >> Guzman wrote: >>> Hello everybody! >>> >>> I am trying to solve a basic 2-state kinetic scheme of the form: >>> >>> # Two-state kinetic model >>> # >>> # alpha*[cc] >>> # (1-n) ----------> n >>> #<---------- >>> # beta >>> >>> >>> where [cc] is a function step function. To solve it numerically, I used >>> scipy.integrate.odeint. Everything works fine if I define a step >>> function >>> that takes the value 1 between 2 and 3 . >>> >>> >>> from numpy import linspace >>> from scipy.integrate import odeint >>> >>> # define a square pulse with the concentration of Glu >>> >>> def pulse(t): >>> """ t in ms """ >>> if t<=2 or t>=3: >>> return 0.0 >>> else: >>> return 1.0 # in mM >>> >>> # independent variable >>> time = linspace(0,10, 1000) # 10 ms >>> >>> >>> # define differential equation >>> def diff(n, t, alpha, beta): >>> """ >>> alpha in 1/(mM*ms) >>> beta in 1/ms >>> dn/dt = alpha*cc*(1-n) - beta*n >>> """ >>> cc = pulse(t) # square pulse >>> return alpha*cc*(1-n)-beta*n >>> >>> # and solve it for alpha = 1 and beta = 1 >>> y = odeint(func = diff, y0=0, t= time, args=(1,1)) >>> >>> >>> However, if i change the pulse function to return 1 between, say 5 >>> and 6 the >>> solution to the differential equation returns 0 for all the interval. I >>> cannot figure out why. >>> >>> # new function >>> def pulse(t): >>> """ t in ms """ >>> if t<=5 or t>=6: >>> return 0.0 >>> else: >>> return 1.0 # in mM >>> y = odeint(func = diff, y0=0, t= time, args=(1,1)) # returns all zeros >>> >>> I would greatly appreciate some help here >>> >> >> Short answer: try adding the keyword argument hmax=0.1 in your call >> to odeint. >> >> Longer answer: If the pulse were not there, the exact solution to >> your problem (with initial condition 0) is y(t) = 0. The solver is >> adaptive, and with a solution this nice, it will quickly adapt to very >> large step sizes. It is likely that the solver has simply skipped >> right past your pulse by taking a step greater than 1 time unit. >> Setting hmax=0.1 tells the solver to never take a step larger than >> 0.1. > Wov, this was absolutely amazing! I was not aware that odeint uses an > adaptive ODE solver. > > Just one more question. Now I am applying a complex pulse: > > def pulse(t): > """ t in ms """ > if t<=1 or t>=2: > return 0.0 > else: > return 1.0 # in mM > > # construct a 10 times pulse > step = np.linspace(0,10, 1000) > p = list() > # 10 pulses > for i in range(10): > p += [pulse(t) for t in step] > time = np.linspace(0, 10*10, 1000*10) > > However, the solver only shows the first peak: > y = odeint(func = diff_eq, y0=0, t= time, args=(4,1), hmax=0.01) > > you can have a look here: > http://aleph.sagemath.org/?q=3e9bc8e9-dbb0-4a1d-b495-dbd93dd6e594 > > Is there any other option in odeint that I should be awared of? Should > I use another ODE solver? 
> > Thanks a lot Warren! I really appreciate your help! > > Jose > > Obviously the way I defined the multiple pulse was erroneous and had nothing to do with the integration method. You can find the solution here: http://aleph.sagemath.org/?q=7d522c13-3579-40c1-a2ff-4f9a3941f2d6 -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjm.guzman at googlemail.com Thu Jun 21 11:14:40 2012 From: sjm.guzman at googlemail.com (Jose Guzman) Date: Thu, 21 Jun 2012 08:14:40 -0700 (PDT) Subject: [SciPy-User] scipy.integrate odeint Message-ID: Hello everybody! I am trying to solve a basic 2-state kinetic scheme of the form: # Two-state kinetic model # # alpha*[cc] # (1-n) ----------> n # <---------- # beta where [cc] is a function step function. To solve it numerically, I used scipy.integrate.odeint. Everything works fine if I define a step function that takes the value 1 between 2 and 3 . from numpy import linspace from scipy.integrate import odeint # define a square pulse with the concentration of Glu def pulse(t): """ t in ms """ if t<=2 or t>=3: return 0.0 else: return 1.0 # in mM # independent variable time = linspace(0,10, 1000) # 10 ms # define differential equation def diff(n, t, alpha, beta): """ alpha in 1/(mM*ms) beta in 1/ms dn/dt = alpha*cc*(1-n) - beta*n """ cc = pulse(t) # square pulse return alpha*cc*(1-n)-beta*n # and solve it for alpha = 1 and beta = 1 y = odeint(func = diff, y0=0, t= time, args=(1,1)) However, if i change the pulse function to return 1 between, say 5 and 6 the solution to the differential equation returns 0 for all the interval. I cannot figure out why. # new function def pulse(t): """ t in ms """ if t<=5 or t>=6: return 0.0 else: return 1.0 # in mM y = odeint(func = diff, y0=0, t= time, args=(1,1)) # returns all zeros I would greatly appreciate some help here Best Jose -------------- next part -------------- An HTML attachment was scrubbed... URL: From brandon at rascalmicro.com Fri Jun 22 21:40:46 2012 From: brandon at rascalmicro.com (Brandon Stafford) Date: Fri, 22 Jun 2012 21:40:46 -0400 Subject: [SciPy-User] Useful libraries beyond numpy and scipy? Message-ID: Hi all, I've been working on a small board for scientists (and artists) that runs Python. The idea is to make something like an Arduino, but beefed up to handle real web traffic. Some background on the board at http://www.engadget.com/2012/06/13/rascal-micro-hands-on-video/ and more at http://rascalmicro.com I'm shipping boards with scipy and numpy preinstalled, and I'm wondering what other libraries complement those two well. I'm aware of (and actually have a library of my own listed on) the list at http://www.scipy.org/Topical_Software but it's hard to tell what's both mature and widely used. What are the top 3 or 4 libraries you use, either for control during experiments or for data analysis? Cheers, Brandon -- Brandon Stafford Rascal Micro: small computers for art and science Somerville, MA, USA From alejandro.weinstein at gmail.com Fri Jun 22 22:28:16 2012 From: alejandro.weinstein at gmail.com (Alejandro Weinstein) Date: Fri, 22 Jun 2012 20:28:16 -0600 Subject: [SciPy-User] Useful libraries beyond numpy and scipy? In-Reply-To: References: Message-ID: > What are the top 3 or 4 libraries you use, either for control during > experiments or for data analysis? scikit-learn (http://scikit-learn.org/) matplotlib (http://matplotlib.sourceforge.net) networkx (http>//networkx.lanl.gov) Alejandro. 
From eckjoh2 at web.de Sat Jun 23 03:06:03 2012 From: eckjoh2 at web.de (Johannes Eckstein) Date: Sat, 23 Jun 2012 09:06:03 +0200 Subject: [SciPy-User] Useful libraries beyond numpy and scipy? In-Reply-To: References: Message-ID: <4FE56ADB.2010600@web.de> HI, I'm recently more interested in quick'n dirty GUI-designing, Therefore i use guidata, For data storing I use: h5py, xlrd, xlwt and xlutils, python modules written by Chris Withers. xlwt is a fork of pyExcelerator I there a good control library? very nice thing, indeed your rascal! Cheers, Johannes > Hi all, > > I've been working on a small board for scientists (and artists) that > runs Python. The idea is to make something like an Arduino, but beefed > up to handle real web traffic. Some background on the board at > http://www.engadget.com/2012/06/13/rascal-micro-hands-on-video/ and > more at http://rascalmicro.com > > I'm shipping boards with scipy and numpy preinstalled, and I'm > wondering what other libraries complement those two well. I'm aware of > (and actually have a library of my own listed on) the list at > http://www.scipy.org/Topical_Software but it's hard to tell what's > both mature and widely used. > > What are the top 3 or 4 libraries you use, either for control during > experiments or for data analysis? > > Cheers, > Brandon > -- > Brandon Stafford > Rascal Micro: small computers for art and science > Somerville, MA, USA > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -- Dipl.-Ing. Johannes Eckstein Automatisierungstechnik i. d. Produktion Universit?t Stuttgart From dtgml1+scipy at gmail.com Sun Jun 24 07:00:27 2012 From: dtgml1+scipy at gmail.com (daniel-s) Date: Sun, 24 Jun 2012 21:00:27 +1000 Subject: [SciPy-User] Using arbitrary object as datapoints for scipy.spatial.KDTree Message-ID: Hi, A KDTree is usually created with a list of tuples, where each tuple represents a datapoint (ie. [(3,4,5), (6,7,8)]) Is it possible to create one using a list of your own objects? ie. something like a list of "point" objects class point(object): self.__init__(self, location): self.location = location In this case self.location is what the tree would use in creating nodes and finding neighbours. I have tried to implement the __getitem__(self): method, but this does not work. Regards, -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Sun Jun 24 07:38:43 2012 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sun, 24 Jun 2012 13:38:43 +0200 Subject: [SciPy-User] Useful libraries beyond numpy and scipy? In-Reply-To: References: Message-ID: On Sat, Jun 23, 2012 at 3:40 AM, Brandon Stafford wrote: > Hi all, > > I've been working on a small board for scientists (and artists) that > runs Python. The idea is to make something like an Arduino, but beefed > up to handle real web traffic. Some background on the board at > http://www.engadget.com/2012/06/13/rascal-micro-hands-on-video/ and > more at http://rascalmicro.com > Looks awesome, thanks for sharing. > > I'm shipping boards with scipy and numpy preinstalled, and I'm > wondering what other libraries complement those two well. I'm aware of > (and actually have a library of my own listed on) the list at > http://www.scipy.org/Topical_Software but it's hard to tell what's > both mature and widely used. > > What are the top 3 or 4 libraries you use, either for control during > experiments or for data analysis? 
> For data analysis scikit-image, statsmodels and scikit-learn would be high on my list. Plus matplotlib, to not have to learn jQplot. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From david_baddeley at yahoo.com.au Sun Jun 24 23:42:31 2012 From: david_baddeley at yahoo.com.au (David Baddeley) Date: Sun, 24 Jun 2012 20:42:31 -0700 (PDT) Subject: [SciPy-User] Useful libraries beyond numpy and scipy? In-Reply-To: References: Message-ID: <1340595751.61197.YahooMailNeo@web113407.mail.gq1.yahoo.com> Sounds like an awesome platform! another emphatic vote for matplotlib and in no particular order: PIL pyfftw pytables django (and possibly?associated?modules like django-south) pyro (I use this for both control of remote devices, and as the basis of a distributed analysis backend) mysql pyserial (not sure how relevant this is on your platform) (Some of these have a bit of a web app type slant, but figure that this might be where you're looking to go anyway - I've got a neat temperature monitoring app which produces temperature history graphs with a web interface using numpy, scipy, matplotlib, django ?and an SQL database on a server. ?This is linked via pyro to a second computer which runs some custom driver code for an obscure ADC board that monitors the temperature sensors. ) ________________________________ From: Ralf Gommers To: SciPy Users List Sent: Sunday, 24 June 2012 11:38 PM Subject: Re: [SciPy-User] Useful libraries beyond numpy and scipy? On Sat, Jun 23, 2012 at 3:40 AM, Brandon Stafford wrote: Hi all, > >I've been working on a small board for scientists (and artists) that >runs Python. The idea is to make something like an Arduino, but beefed >up to handle real web traffic. Some background on the board at >http://www.engadget.com/2012/06/13/rascal-micro-hands-on-video/ and >more at http://rascalmicro.com > Looks awesome, thanks for sharing. ? >I'm shipping boards with scipy and numpy preinstalled, and I'm >wondering what other libraries complement those two well. I'm aware of >(and actually have a library of my own listed on) the list at >http://www.scipy.org/Topical_Software but it's hard to tell what's >both mature and widely used. > >What are the top 3 or 4 libraries you use, either for control during >experiments or for data analysis? > For data analysis scikit-image, statsmodels and scikit-learn would be high on my list. Plus matplotlib, to not have to learn jQplot. Ralf _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbasu at physics.cornell.edu Mon Jun 25 05:54:48 2012 From: sbasu at physics.cornell.edu (Sourish Basu) Date: Mon, 25 Jun 2012 10:54:48 +0100 Subject: [SciPy-User] Finding contour polygons without matplotlib? Message-ID: <4FE83568.5060900@physics.cornell.edu> Hi all, Is there a way in scipy+numpy to calculate the contour polygons of a field z=z(x,y) without invoking matplotlib? I would like to check whether two points lie within the same contour polygons, without invoking the contouring routine from matplotlib. I would like to avoid matplotlib for two reasons. Firstly the cluster on which I work does not have matplotlib, and secondly I would like to avoid the overhead of plotting that would inevitably accompany repeated calls to the matplotlib implementation of contour. 
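One matplotlib-free sketch of this, assuming the scikits-image package mentioned in the replies is available (skimage.measure.find_contours); the point-in-polygon test below is a hand-rolled ray-casting helper rather than a library routine, and the two-bump field is only a made-up example:

import numpy as np
from skimage import measure

def point_in_poly(y, x, poly):
    # poly is an (N, 2) array of (row, col) vertices; classic ray-casting test
    py, px = poly[:, 0], poly[:, 1]
    inside = False
    j = len(poly) - 1
    for i in range(len(poly)):
        if ((py[i] > y) != (py[j] > y)) and \
           (x < (px[j] - px[i]) * (y - py[i]) / (py[j] - py[i]) + px[i]):
            inside = not inside
        j = i
    return inside

yy, xx = np.mgrid[0:100, 0:100]
z = np.exp(-((xx - 30)**2 + (yy - 30)**2) / 200.0) + \
    np.exp(-((xx - 70)**2 + (yy - 70)**2) / 200.0)

# each contour comes back as an (N, 2) array of (row, col) coordinates
contours = measure.find_contours(z, 0.5)

# are two points (given as array indices) inside the same contour polygon?
p1, p2 = (30, 32), (70, 68)
same = any(point_in_poly(p1[0], p1[1], c) and point_in_poly(p2[0], p2[1], c)
           for c in contours)
print(same)

The points to test have to be expressed in the same (row, column) index coordinates that find_contours returns.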
Thanks, Sourish Basu From zachary.pincus at yale.edu Mon Jun 25 07:22:01 2012 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Mon, 25 Jun 2012 07:22:01 -0400 Subject: [SciPy-User] Finding contour polygons without matplotlib? In-Reply-To: <4FE83568.5060900@physics.cornell.edu> References: <4FE83568.5060900@physics.cornell.edu> Message-ID: I wrote some contour-finding code (2D only) a long time ago that has recently found a good home in the scikits-image package: http://scikits-image.org/docs/0.6/api/skimage.measure.html#find-contours On Jun 25, 2012, at 5:54 AM, Sourish Basu wrote: > Hi all, > > Is there a way in scipy+numpy to calculate the contour polygons of a > field z=z(x,y) without invoking matplotlib? I would like to check > whether two points lie within the same contour polygons, without > invoking the contouring routine from matplotlib. I would like to avoid > matplotlib for two reasons. Firstly the cluster on which I work does not > have matplotlib, and secondly I would like to avoid the overhead of > plotting that would inevitably accompany repeated calls to the > matplotlib implementation of contour. > > Thanks, > > Sourish Basu > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From zachary.pincus at yale.edu Mon Jun 25 08:57:19 2012 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Mon, 25 Jun 2012 08:57:19 -0400 Subject: [SciPy-User] Finding contour polygons without matplotlib? In-Reply-To: References: <4FE83568.5060900@physics.cornell.edu> Message-ID: > I wrote some contour-finding code (2D only) a long time ago that has recently found a good home in the scikits-image package: > http://scikits-image.org/docs/0.6/api/skimage.measure.html#find-contours Also, here's an example use: http://scikits-image.org/docs/dev/auto_examples/plot_contours.html > > On Jun 25, 2012, at 5:54 AM, Sourish Basu wrote: > >> Hi all, >> >> Is there a way in scipy+numpy to calculate the contour polygons of a >> field z=z(x,y) without invoking matplotlib? I would like to check >> whether two points lie within the same contour polygons, without >> invoking the contouring routine from matplotlib. I would like to avoid >> matplotlib for two reasons. Firstly the cluster on which I work does not >> have matplotlib, and secondly I would like to avoid the overhead of >> plotting that would inevitably accompany repeated calls to the >> matplotlib implementation of contour. >> >> Thanks, >> >> Sourish Basu >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From stefan at sun.ac.za Mon Jun 25 00:56:15 2012 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Sun, 24 Jun 2012 21:56:15 -0700 Subject: [SciPy-User] ANN: scikits-image 0.6 release Message-ID: Announcement: scikits-image 0.6 =============================== We're happy to announce the 6th version of scikits-image! Scikits-image is an image processing toolbox for SciPy that includes algorithms for segmentation, geometric transformations, color space manipulation, analysis, filtering, morphology, feature detection, and more. 
For more information, examples, and documentation, please visit our website http://skimage.org New Features ------------ - Packaged in Debian as ``python-skimage`` - Template matching - Fast user-defined image warping - Adaptive thresholding - Structural similarity index - Polygon, circle and ellipse drawing - Peak detection - Region properties - TiffFile I/O plugin ... along with some bug fixes and performance tweaks. Contributors to this release ---------------------------- - Vincent Albufera - David Cournapeau - Christoph Gohlke - Emmanuelle Gouillart - Pieter Holtzhausen - Zachary Pincus - Johannes Sch?nberger - Tom (tangofoxtrotmike) - James Turner - Stefan van der Walt - Tony S Yu From trive at astro.su.se Mon Jun 25 06:50:58 2012 From: trive at astro.su.se (=?ISO-8859-1?Q?Th=F8ger_Rivera-Thorsen?=) Date: Mon, 25 Jun 2012 12:50:58 +0200 Subject: [SciPy-User] Finding contour polygons without matplotlib? In-Reply-To: <4FE83568.5060900@physics.cornell.edu> References: <4FE83568.5060900@physics.cornell.edu> Message-ID: <4FE84292.3000003@astro.su.se> I have been wanting something similar to this for a while too: the ability to define polygons by data coordinates, not by plotting - they are really a means to define regions of interest, not necessarily meant to ever be visualized. On 06/25/2012 11:54 AM, Sourish Basu wrote: > Hi all, > > Is there a way in scipy+numpy to calculate the contour polygons of a > field z=z(x,y) without invoking matplotlib? I would like to check > whether two points lie within the same contour polygons, without > invoking the contouring routine from matplotlib. I would like to avoid > matplotlib for two reasons. Firstly the cluster on which I work does not > have matplotlib, and secondly I would like to avoid the overhead of > plotting that would inevitably accompany repeated calls to the > matplotlib implementation of contour. > > Thanks, > > Sourish Basu > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -- Th?ger Rivera-Thorsen Ph.D. Student of astrophysics Stockholms Universitet Office: C6-3022 From nicolas.pinto at gmail.com Mon Jun 25 13:14:25 2012 From: nicolas.pinto at gmail.com (Nicolas Pinto) Date: Mon, 25 Jun 2012 13:14:25 -0400 Subject: [SciPy-User] linalg.eigh hangs only after importing sparse module In-Reply-To: References: Message-ID: Update: the issue was related to atlas-3.8.0, upgrading to 3.8.2 or to the 3.9.* series fixed the issue. HTH n On Thu, Apr 5, 2012 at 3:00 PM, Pauli Virtanen wrote: > 05.04.2012 20:33, Nicolas Pinto kirjoitti: >> I tried (in a virtualenv) to compile with gcc-4.2.4, gcc-4.3.6 and >> gcc-4.4.6 and all failed. Any suggestion on what to try next? > > Suggestions > > - Check with different versions of ATLAS and Lapack. > > - Run the program under valgrind, with python-specific suppressions > ?enabled. Python's source distribution has a suppression file for > ?Valgrind. > > - Run the program in gdb, first having compiled everything with debug > ?symbols. (OPT="-ggdb" FOPT="-ggdb" python setup.py build ...) > > ?Check if the values passed down to the LAPACK routine are OK. > > Unfortunately, these take some work. > > ? ? ? 
?Pauli > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -- Nicolas Pinto http://web.mit.edu/pinto From pengkui.luo at gmail.com Mon Jun 25 14:34:11 2012 From: pengkui.luo at gmail.com (Pengkui Luo) Date: Mon, 25 Jun 2012 11:34:11 -0700 Subject: [SciPy-User] saving a sparse matrix to file Message-ID: Is there a unified API to save a sparse matrix of any type (dok, lil, csc, ...) to file, as simple as pickle dump/load? e.g. Can I do something like the following? import numpy as np from scipy.sparse import dok_matrix from scipy.io import save, load mat = dok_matrix (np.zeros ((80000, 80000)), np.int) save (file ('foobar.mat', 'wb'), mat) # or mat.save ('foobar.mat') mat2 = load (file ('foobar.mat')) It seems that there was a .save method for a sparse matrix several years ago, but was later deprecated. Why? http://projects.scipy.org/scipy/ticket/590 http://mail.scipy.org/pipermail/scipy-user/2012-January/031326.html I came across this email response stating that for csr_matrix one can save.data, .indices, and .indptr attributes individually to three different files and load them all to reassemble the matrix. Is it a hack only suitable for csr_matrix, or one that can be applied to all types of sparse matrices? Can I save a matrix to a single file instead of three? http://docs.scipy.org/doc/scipy/reference/generated/scipy.io.savemat.html I also came across the scipy.io.savemat API, but had no idea how to use it.... What does "mdict : Dictionary from which to save matfile variables." mean? What is this dict for? Can someone provide an example on how to use it? Thanks, Pengkui -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Mon Jun 25 16:34:39 2012 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 25 Jun 2012 21:34:39 +0100 Subject: [SciPy-User] saving a sparse matrix to file In-Reply-To: References: Message-ID: On Mon, Jun 25, 2012 at 7:34 PM, Pengkui Luo wrote: > Is there a unified API to save a sparse matrix of any type (dok, lil, csc, > ...) to file, as simple as pickle dump/load? If you want a pickle, have you tried using Python's generic pickle dump/load interface? import cPickle cPickle.dump(myobj, open("myfile.pickle", "w")) myobj2 = cPickle.load(open("myfile.pickle")) http://docs.python.org/library/pickle.html -n From josef.pktd at gmail.com Mon Jun 25 17:21:57 2012 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 25 Jun 2012 17:21:57 -0400 Subject: [SciPy-User] ANN: SciPy 0.11.0 beta 1 released In-Reply-To: <4FDA2E73.4010906@uci.edu> References: <4FDA2E73.4010906@uci.edu> Message-ID: On Thu, Jun 14, 2012 at 2:33 PM, Christoph Gohlke wrote: > > > On 6/13/2012 10:56 AM, Ralf Gommers wrote: >> Hi, >> >> I am pleased to announce the availability of the first beta release of >> SciPy 0.11.0. For this release over 120 tickets and pull requests have >> been closed, and many new features have been added. Also noteworthy is >> that the number of contributors for this release has risen to over 50. >> Some of the highlights are: >> >> ? ?- A new module, sparse.csgraph, has been added which provides a >> number of common sparse graph algorithms. >> ? ?- New unified interfaces to the existing optimization and root >> finding functions have been added. >> >> Sources and binaries can be found at >> http://sourceforge.net/projects/scipy/files/scipy/0.11.0b1/, release >> notes are copied below. 
>>
>> Please try this release and report any problems on the mailing list.
>>
>> Cheers,
>> Ralf
>>
>
> Hi Ralf,
>
> Using msvc9/MKL builds and numpy 1.6.2:
>
> All tests pass on win-amd64-py2.7 and win32-py2.7 (except for the known
> ndimage.test_datatypes.test_uint64_max failure on win32).
>
> test_smoke_bisplrep_bisplev fails on win-amd64-py3.2 and win32-py3.2.
>
> test_shortest_path.test_shortest_path_indices fails on win-amd64-py3.2.
>
> Christoph
>
>
> ======================================================================
> ERROR: test_smoke_bisplrep_bisplev (test_fitpack.TestSmokeTests)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "X:\Python32-x32\lib\site-packages\scipy\interpolate\tests\test_fitpack.py", line 219, in test_smoke_bisplrep_bisplev
>     self.check_5()
>   File "X:\Python32-x32\lib\site-packages\scipy\interpolate\tests\test_fitpack.py", line 180, in check_5
>     xy=makepairs(x,y)
>   File "X:\Python32-x32\lib\site-packages\scipy\interpolate\tests\test_fitpack.py", line 32, in makepairs
>     xy.shape=sh[0]*sh[1],sh[2]
> IndexError: tuple index out of range
>
> ======================================================================
> FAIL: test_shortest_path.test_shortest_path_indices(array([[ 0.   ,  3.   ,  3.   ,  1.   ,  2.858],
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "X:\Python32\lib\site-packages\nose\case.py", line 198, in runTest
>     self.test(*self.arg)
>   File "X:\Python32\lib\site-packages\numpy\testing\utils.py", line 800, in assert_array_almost_equal
>     header=('Arrays are not almost equal to %d decimals' % decimal))
>   File "X:\Python32\lib\site-packages\numpy\testing\utils.py", line 636, in assert_array_compare
>     raise AssertionError(msg)
> AssertionError:
> Arrays are not almost equal to 6 decimals
>
> (mismatch 15.0%)
>  x: array([[ 0.        ,  3.        ,  3.        ,  1.        ,  2.85772861],
>        [ 3.        ,  0.        ,  6.        ,  2.        ,  4.        ],
>        [ 3.        ,  6.        ,  0.        ,  4.        ,  5.85772861],
>        [ 1.        ,  2.        ,  4.        ,  0.        ,  2.85772861]])
>  y: array([[ 0.,  3.,  3.,  1.,  2.],
>        [ 3.,  0.,  6.,  2.,  4.],
>        [ 3.,  6.,  0.,  4.,  5.],
>        [ 1.,  2.,  4.,  0.,  2.]])
>
> ----------------------------------------------------------------------
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

A late bug:

During Debian testing of statsmodels on various machines, we found
http://projects.scipy.org/scipy/ticket/1680

From what I have found, dtypes like np.dtype('<f8') don't work
correctly on Big Endian machines.

The umfpack wrapper uses '<' at the two places in the ticket.

From fixing similar bugs in statsmodels, the change would be just to
drop the '<' from the dtype string.

My current solution for statsmodels is to use spsolve with
use_umfpack=False on Big Endian machines.

Would be good if this could still be changed for 0.11.

I don't know much about exotic machines, but dropping '<' looks
innocent enough to me.

I also don't understand the globals for useUmfpack enough to figure
out whether the tests in
scipy\sparse\linalg\dsolve\tests\test_linsolve.py actually test the
umfpack version.
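For reference, the workaround described above looks roughly like this (the small matrix and right-hand side are made-up example values; the byte-order check only decides whether the umfpack wrapper is safe to call):

import sys
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

A = sparse.csc_matrix(np.array([[3., 0., 1.],
                                [0., 2., 0.],
                                [1., 0., 4.]]))
b = np.array([1., 2., 3.])

# bypass the umfpack wrapper on big-endian hardware until the '<' dtype strings are fixed
x = spsolve(A, b, use_umfpack=(sys.byteorder == 'little'))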
Josef From ralf.gommers at googlemail.com Mon Jun 25 17:33:47 2012 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Mon, 25 Jun 2012 23:33:47 +0200 Subject: [SciPy-User] ANN: SciPy 0.11.0 beta 1 released In-Reply-To: References: <4FDA2E73.4010906@uci.edu> Message-ID: On Mon, Jun 25, 2012 at 11:21 PM, wrote: > On Thu, Jun 14, 2012 at 2:33 PM, Christoph Gohlke wrote: > > > > > > On 6/13/2012 10:56 AM, Ralf Gommers wrote: > >> Hi, > >> > >> I am pleased to announce the availability of the first beta release of > >> SciPy 0.11.0. For this release over 120 tickets and pull requests have > >> been closed, and many new features have been added. Also noteworthy is > >> that the number of contributors for this release has risen to over 50. > >> Some of the highlights are: > >> > >> - A new module, sparse.csgraph, has been added which provides a > >> number of common sparse graph algorithms. > >> - New unified interfaces to the existing optimization and root > >> finding functions have been added. > >> > >> Sources and binaries can be found at > >> http://sourceforge.net/projects/scipy/files/scipy/0.11.0b1/, release > >> notes are copied below. > >> > >> Please try this release and report any problems on the mailing list. > >> > >> Cheers, > >> Ralf > >> > > > > Hi Ralf, > > > > Using msvc9/MKL builds and numpy 1.6.2: > > > > All tests pass on win-amd64-py2.7 and win32-py2.7 (except for the known > > ndimage.test_datatypes.test_uint64_max failure on win32). > > > > test_smoke_bisplrep_bisplev fails on win-amd64-py3.2 and win32-py3.2. > > > > test_shortest_path.test_shortest_path_indices fails on win-amd64-py3.2. > > > > Christoph > > > > > > ====================================================================== > > ERROR: test_smoke_bisplrep_bisplev (test_fitpack.TestSmokeTests) > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File > > > "X:\Python32-x32\lib\site-packages\scipy\interpolate\tests\test_fitpack.py", > > line 219, in test_smoke_bisplrep_bisplev > > self.check_5() > > File > > > "X:\Python32-x32\lib\site-packages\scipy\interpolate\tests\test_fitpack.py", > > line 180, in check_5 > > xy=makepairs(x,y) > > File > > > "X:\Python32-x32\lib\site-packages\scipy\interpolate\tests\test_fitpack.py", > > line 32, in makepairs > > xy.shape=sh[0]*sh[1],sh[2] > > IndexError: tuple index out of range > > > > ====================================================================== > > FAIL: test_shortest_path.test_shortest_path_indices(array([[ 0. , 3. > > , 3. , 1. , 2.858], > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File "X:\Python32\lib\site-packages\nose\case.py", line 198, in runTest > > self.test(*self.arg) > > File "X:\Python32\lib\site-packages\numpy\testing\utils.py", line > > 800, in assert_array_almost_equal > > header=('Arrays are not almost equal to %d decimals' % decimal)) > > File "X:\Python32\lib\site-packages\numpy\testing\utils.py", line > > 636, in assert_array_compare > > raise AssertionError(msg) > > AssertionError: > > Arrays are not almost equal to 6 decimals > > > > (mismatch 15.0%) > > x: array([[ 0. , 3. , 3. , 1. , > > 2.85772861], > > [ 3. , 0. , 6. , 2. , 4. ], > > [ 3. , 6. , 0. , 4. , 5.85772861], > > [ 1. , 2. , 4. , 0. 
, > 2.85772861]]) > > y: array([[ 0., 3., 3., 1., 2.], > > [ 3., 0., 6., 2., 4.], > > [ 3., 6., 0., 4., 5.], > > [ 1., 2., 4., 0., 2.]]) > > > > ---------------------------------------------------------------------- > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > A late bug: > > During Debian testing of statsmodels on various machines, we found > http://projects.scipy.org/scipy/ticket/1680 > > >From what I have found, dtypes like np.dtype(' correctly on Big Endian machines. > > The umfpack wrapper uses '<' at the two places in the ticket. > > >From fixing similar bugs in statsmodels, the change would be just to > drop the '<' from the dtype string. > > My current solution for statsmodels is to use spsolve with > use_umfpack=False on Big Endian machines. > > Would be good if this could still be changed for 0.11. > > I don't know much about exotic machines, but dropping '<' looks > innocent enough to me. > Looks like the correct fix, no problem including that. > > I also don't understand the globals for useUmfpack enought to figure > out whether the tests in > scipy\sparse\linalg\dsolve\tests\test_linsolve.py actually test the > umfpack version. > Looks to me like Umfpack is tested if available. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsseabold at gmail.com Mon Jun 25 17:45:53 2012 From: jsseabold at gmail.com (Skipper Seabold) Date: Mon, 25 Jun 2012 17:45:53 -0400 Subject: [SciPy-User] ANN: SciPy 0.11.0 beta 1 released In-Reply-To: References: <4FDA2E73.4010906@uci.edu> Message-ID: Just uncovered a bug doingsome root finding This call signature for qr is (no longer?) valid https://github.com/scipy/scipy/blob/master/scipy/optimize/nonlin.py#L844 Should be easy to update the call signature, but I don't know how quickly and it's untested. http://projects.scipy.org/scipy/ticket/1682 Skipper -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Mon Jun 25 17:51:01 2012 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 25 Jun 2012 17:51:01 -0400 Subject: [SciPy-User] ANN: SciPy 0.11.0 beta 1 released In-Reply-To: References: <4FDA2E73.4010906@uci.edu> Message-ID: On Mon, Jun 25, 2012 at 5:33 PM, Ralf Gommers wrote: > > > On Mon, Jun 25, 2012 at 11:21 PM, wrote: >> >> On Thu, Jun 14, 2012 at 2:33 PM, Christoph Gohlke wrote: >> > >> > >> > On 6/13/2012 10:56 AM, Ralf Gommers wrote: >> >> Hi, >> >> >> >> I am pleased to announce the availability of the first beta release of >> >> SciPy 0.11.0. For this release over 120 tickets and pull requests have >> >> been closed, and many new features have been added. Also noteworthy is >> >> that the number of contributors for this release has risen to over 50. >> >> Some of the highlights are: >> >> >> >> ? ?- A new module, sparse.csgraph, has been added which provides a >> >> number of common sparse graph algorithms. >> >> ? ?- New unified interfaces to the existing optimization and root >> >> finding functions have been added. >> >> >> >> Sources and binaries can be found at >> >> http://sourceforge.net/projects/scipy/files/scipy/0.11.0b1/, release >> >> notes are copied below. >> >> >> >> Please try this release and report any problems on the mailing list. 
>> >> >> >> Cheers, >> >> Ralf >> >> >> > >> > Hi Ralf, >> > >> > Using msvc9/MKL builds and numpy 1.6.2: >> > >> > All tests pass on win-amd64-py2.7 and win32-py2.7 (except for the known >> > ndimage.test_datatypes.test_uint64_max failure on win32). >> > >> > test_smoke_bisplrep_bisplev fails on win-amd64-py3.2 and win32-py3.2. >> > >> > test_shortest_path.test_shortest_path_indices fails on win-amd64-py3.2. >> > >> > Christoph >> > >> > >> > ====================================================================== >> > ERROR: test_smoke_bisplrep_bisplev (test_fitpack.TestSmokeTests) >> > ---------------------------------------------------------------------- >> > Traceback (most recent call last): >> > ? File >> > >> > "X:\Python32-x32\lib\site-packages\scipy\interpolate\tests\test_fitpack.py", >> > line 219, in test_smoke_bisplrep_bisplev >> > ? ? self.check_5() >> > ? File >> > >> > "X:\Python32-x32\lib\site-packages\scipy\interpolate\tests\test_fitpack.py", >> > line 180, in check_5 >> > ? ? xy=makepairs(x,y) >> > ? File >> > >> > "X:\Python32-x32\lib\site-packages\scipy\interpolate\tests\test_fitpack.py", >> > line 32, in makepairs >> > ? ? xy.shape=sh[0]*sh[1],sh[2] >> > IndexError: tuple index out of range >> > >> > ====================================================================== >> > FAIL: test_shortest_path.test_shortest_path_indices(array([[ 0. ? , ?3. >> > ? , ?3. ? , ?1. ? , ?2.858], >> > ---------------------------------------------------------------------- >> > Traceback (most recent call last): >> > ? File "X:\Python32\lib\site-packages\nose\case.py", line 198, in >> > runTest >> > ? ? self.test(*self.arg) >> > ? File "X:\Python32\lib\site-packages\numpy\testing\utils.py", line >> > 800, in assert_array_almost_equal >> > ? ? header=('Arrays are not almost equal to %d decimals' % decimal)) >> > ? File "X:\Python32\lib\site-packages\numpy\testing\utils.py", line >> > 636, in assert_array_compare >> > ? ? raise AssertionError(msg) >> > AssertionError: >> > Arrays are not almost equal to 6 decimals >> > >> > (mismatch 15.0%) >> > ?x: array([[ 0. ? ? ? ?, ?3. ? ? ? ?, ?3. ? ? ? ?, ?1. ? ? ? ?, >> > 2.85772861], >> > ? ? ? ?[ 3. ? ? ? ?, ?0. ? ? ? ?, ?6. ? ? ? ?, ?2. ? ? ? ?, ?4. >> > ?], >> > ? ? ? ?[ 3. ? ? ? ?, ?6. ? ? ? ?, ?0. ? ? ? ?, ?4. ? ? ? ?, >> > ?5.85772861], >> > ? ? ? ?[ 1. ? ? ? ?, ?2. ? ? ? ?, ?4. ? ? ? ?, ?0. ? ? ? ?, >> > ?2.85772861]]) >> > ?y: array([[ 0., ?3., ?3., ?1., ?2.], >> > ? ? ? ?[ 3., ?0., ?6., ?2., ?4.], >> > ? ? ? ?[ 3., ?6., ?0., ?4., ?5.], >> > ? ? ? ?[ 1., ?2., ?4., ?0., ?2.]]) >> > >> > ---------------------------------------------------------------------- >> > _______________________________________________ >> > SciPy-User mailing list >> > SciPy-User at scipy.org >> > http://mail.scipy.org/mailman/listinfo/scipy-user >> >> A late bug: >> >> During Debian testing of statsmodels on various machines, we found >> http://projects.scipy.org/scipy/ticket/1680 >> >> >From what I have found, dtypes like np.dtype('> correctly on Big Endian machines. >> >> The umfpack wrapper uses '<' at the two places in the ticket. >> >> >From fixing similar bugs in statsmodels, the change would be just to >> drop the '<' from the dtype string. >> >> My current solution for statsmodels is to use spsolve with >> use_umfpack=False on Big Endian machines. >> >> Would be good if this could still be changed for 0.11. >> >> I don't know much about exotic machines, but dropping '<' looks >> innocent enough to me. > > > Looks like the correct fix, no problem including that. 
Thanks, >> >> >> I also don't understand the globals for useUmfpack enought to figure >> out whether the tests in >> scipy\sparse\linalg\dsolve\tests\test_linsolve.py ?actually test the >> umfpack version. > > > Looks to me like Umfpack is tested if available. the testfile sets a global: use_solver( useUmfpack = False ) but I don't see whether locals and global settings override each other in linsolve. and Yaroslav said the test didn't run on the Big Endian machine. Josef > > Ralf > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From stef.mientki at gmail.com Mon Jun 25 18:32:49 2012 From: stef.mientki at gmail.com (Stef Mientki) Date: Tue, 26 Jun 2012 00:32:49 +0200 Subject: [SciPy-User] Useful libraries beyond numpy and scipy? In-Reply-To: References: Message-ID: <4FE8E711.3080606@gmail.com> On 24-06-2012 13:38, Ralf Gommers wrote: > > > On Sat, Jun 23, 2012 at 3:40 AM, Brandon Stafford > wrote: > > Hi all, > > I've been working on a small board for scientists (and artists) that > runs Python. The idea is to make something like an Arduino, but beefed > up to handle real web traffic. Some background on the board at > http://www.engadget.com/2012/06/13/rascal-micro-hands-on-video/ and > more at http://rascalmicro.com > > > Looks awesome, thanks for sharing. certainly looks interesting, but how does it compare to Raspberry ? thanks, Stef -------------- next part -------------- An HTML attachment was scrubbed... URL: From brandon at rascalmicro.com Mon Jun 25 19:13:47 2012 From: brandon at rascalmicro.com (Brandon Stafford) Date: Mon, 25 Jun 2012 19:13:47 -0400 Subject: [SciPy-User] Useful libraries beyond numpy and scipy? In-Reply-To: <1340595751.61197.YahooMailNeo@web113407.mail.gq1.yahoo.com> References: <1340595751.61197.YahooMailNeo@web113407.mail.gq1.yahoo.com> Message-ID: Sounds like matplotlib and the scikit stuff top the list. About the comparison to the Raspberry Pi: both are embedded Linux boards, but the idea of the Rascal is to be preconfigured for making Python-backed web interfaces for Arduino-style hardware. My understanding is that the Raspberry Pi is a general-purpose board broadly intended to be a cheap platform for learning programming. Thanks for all the suggestions. The scientific Python ecosystem repeatedly surprises me with how fast it improves. Further suggestions and/or requests can be sent to me off-list at my first name @ rascalmicro.com. Cheers, Brandon On Sun, Jun 24, 2012 at 11:42 PM, David Baddeley wrote: > Sounds like an awesome platform! > > another emphatic vote for > matplotlib > > and in no particular order: > PIL > pyfftw > pytables > django (and possibly?associated?modules like django-south) > pyro (I use this for both control of remote devices, and as the basis of a > distributed analysis backend) > mysql > pyserial (not sure how relevant this is on your platform) > > (Some of these have a bit of a web app type slant, but figure that this > might be where you're looking to go anyway - I've got a neat temperature > monitoring app which produces temperature history graphs with a web > interface using numpy, scipy, matplotlib, django ?and an SQL database on a > server. ?This is linked via pyro to a second computer which runs some custom > driver code for an obscure ADC board that monitors the temperature sensors. 
> ) > > > ________________________________ > From: Ralf Gommers > To: SciPy Users List > Sent: Sunday, 24 June 2012 11:38 PM > Subject: Re: [SciPy-User] Useful libraries beyond numpy and scipy? > > > > On Sat, Jun 23, 2012 at 3:40 AM, Brandon Stafford > wrote: > > Hi all, > > I've been working on a small board for scientists (and artists) that > runs Python. The idea is to make something like an Arduino, but beefed > up to handle real web traffic. Some background on the board at > http://www.engadget.com/2012/06/13/rascal-micro-hands-on-video/ and > more at http://rascalmicro.com > > > Looks awesome, thanks for sharing. > > > > I'm shipping boards with scipy and numpy preinstalled, and I'm > wondering what other libraries complement those two well. I'm aware of > (and actually have a library of my own listed on) the list at > http://www.scipy.org/Topical_Software but it's hard to tell what's > both mature and widely used. > > What are the top 3 or 4 libraries you use, either for control during > experiments or for data analysis? > > > For data analysis scikit-image, statsmodels and scikit-learn would be high > on my list. Plus matplotlib, to not have to learn jQplot. > > Ralf > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Brandon Stafford Rascal Micro: small computers for art and science Somerville, MA, USA From andrew_giessel at hms.harvard.edu Mon Jun 25 20:18:59 2012 From: andrew_giessel at hms.harvard.edu (Andrew Giessel) Date: Mon, 25 Jun 2012 20:18:59 -0400 Subject: [SciPy-User] Useful libraries beyond numpy and scipy? In-Reply-To: References: <1340595751.61197.YahooMailNeo@web113407.mail.gq1.yahoo.com> Message-ID: I would vote for the Flask microframe work for web stuff (esp. over django- much ligher weight). flask + mongodb is a great combo. Andrew On Mon, Jun 25, 2012 at 7:13 PM, Brandon Stafford wrote: > Sounds like matplotlib and the scikit stuff top the list. > > About the comparison to the Raspberry Pi: both are embedded Linux > boards, but the idea of the Rascal is to be preconfigured for making > Python-backed web interfaces for Arduino-style hardware. My > understanding is that the Raspberry Pi is a general-purpose board > broadly intended to be a cheap platform for learning programming. > > Thanks for all the suggestions. The scientific Python ecosystem > repeatedly surprises me with how fast it improves. > > Further suggestions and/or requests can be sent to me off-list at my > first name @ rascalmicro.com. > > Cheers, > Brandon > > On Sun, Jun 24, 2012 at 11:42 PM, David Baddeley > wrote: > > Sounds like an awesome platform! > > > > another emphatic vote for > > matplotlib > > > > and in no particular order: > > PIL > > pyfftw > > pytables > > django (and possibly associated modules like django-south) > > pyro (I use this for both control of remote devices, and as the basis of > a > > distributed analysis backend) > > mysql > > pyserial (not sure how relevant this is on your platform) > > > > (Some of these have a bit of a web app type slant, but figure that this > > might be where you're looking to go anyway - I've got a neat temperature > > monitoring app which produces temperature history graphs with a web > > interface using numpy, scipy, matplotlib, django and an SQL database on > a > > server. 
This is linked via pyro to a second computer which runs some > custom > > driver code for an obscure ADC board that monitors the temperature > sensors. > > ) > > > > > > ________________________________ > > From: Ralf Gommers > > To: SciPy Users List > > Sent: Sunday, 24 June 2012 11:38 PM > > Subject: Re: [SciPy-User] Useful libraries beyond numpy and scipy? > > > > > > > > On Sat, Jun 23, 2012 at 3:40 AM, Brandon Stafford < > brandon at rascalmicro.com> > > wrote: > > > > Hi all, > > > > I've been working on a small board for scientists (and artists) that > > runs Python. The idea is to make something like an Arduino, but beefed > > up to handle real web traffic. Some background on the board at > > http://www.engadget.com/2012/06/13/rascal-micro-hands-on-video/ and > > more at http://rascalmicro.com > > > > > > Looks awesome, thanks for sharing. > > > > > > > > I'm shipping boards with scipy and numpy preinstalled, and I'm > > wondering what other libraries complement those two well. I'm aware of > > (and actually have a library of my own listed on) the list at > > http://www.scipy.org/Topical_Software but it's hard to tell what's > > both mature and widely used. > > > > What are the top 3 or 4 libraries you use, either for control during > > experiments or for data analysis? > > > > > > For data analysis scikit-image, statsmodels and scikit-learn would be > high > > on my list. Plus matplotlib, to not have to learn jQplot. > > > > Ralf > > > > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > > -- > Brandon Stafford > Rascal Micro: small computers for art and science > Somerville, MA, USA > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Andrew Giessel, PhD Department of Neurobiology, Harvard Medical School 220 Longwood Ave Boston, MA 02115 ph: 617.432.7971 email: andrew_giessel at hms.harvard.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From denis at laxalde.org Tue Jun 26 10:07:55 2012 From: denis at laxalde.org (Denis Laxalde) Date: Tue, 26 Jun 2012 10:07:55 -0400 Subject: [SciPy-User] ANN: SciPy 0.11.0 beta 1 released In-Reply-To: References: <4FDA2E73.4010906@uci.edu> Message-ID: <4FE9C23B.6050503@laxalde.org> Skipper Seabold a ?crit : > Just uncovered a bug doingsome root finding > > This call signature for qr is (no longer?) valid > > https://github.com/scipy/scipy/blob/master/scipy/optimize/nonlin.py#L844 > > Should be easy to update the call signature, but I don't know how > quickly and it's untested. > > http://projects.scipy.org/scipy/ticket/1682 Fixed in master. Should this be backported for 0.11 as well? -- Denis From pav at iki.fi Tue Jun 26 15:30:26 2012 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 26 Jun 2012 21:30:26 +0200 Subject: [SciPy-User] ANN: SciPy 0.11.0 beta 1 released In-Reply-To: <4FE9C23B.6050503@laxalde.org> References: <4FDA2E73.4010906@uci.edu> <4FE9C23B.6050503@laxalde.org> Message-ID: 26.06.2012 16:07, Denis Laxalde kirjoitti: > Skipper Seabold a ?crit : >> Just uncovered a bug doingsome root finding >> >> This call signature for qr is (no longer?) 
valid >> >> https://github.com/scipy/scipy/blob/master/scipy/optimize/nonlin.py#L844 >> >> Should be easy to update the call signature, but I don't know how >> quickly and it's untested. >> >> http://projects.scipy.org/scipy/ticket/1682 > > Fixed in master. > Should this be backported for 0.11 as well? I think yes. This points out that the test coverage is incomplete here. Pauli From jsseabold at gmail.com Tue Jun 26 15:37:27 2012 From: jsseabold at gmail.com (Skipper Seabold) Date: Tue, 26 Jun 2012 15:37:27 -0400 Subject: [SciPy-User] ANN: SciPy 0.11.0 beta 1 released In-Reply-To: References: <4FDA2E73.4010906@uci.edu> <4FE9C23B.6050503@laxalde.org> Message-ID: On Tue, Jun 26, 2012 at 3:30 PM, Pauli Virtanen wrote: > 26.06.2012 16:07, Denis Laxalde kirjoitti: >> Skipper Seabold a ?crit : >>> Just uncovered a bug doingsome root finding >>> >>> This call signature for qr is (no longer?) valid >>> >>> https://github.com/scipy/scipy/blob/master/scipy/optimize/nonlin.py#L844 >>> >>> Should be easy to update the call signature, but I don't know how >>> quickly and it's untested. >>> >>> http://projects.scipy.org/scipy/ticket/1682 >> >> Fixed in master. >> Should this be backported for 0.11 as well? > > I think yes. This points out that the test coverage is incomplete here. > I uncovered some more unusual behavior doing root finding on functions with complex numbers. Is this assumed to be supported? If so, I can file more bug reports. Skipper From denis at laxalde.org Tue Jun 26 15:49:57 2012 From: denis at laxalde.org (Denis Laxalde) Date: Tue, 26 Jun 2012 15:49:57 -0400 Subject: [SciPy-User] ANN: SciPy 0.11.0 beta 1 released In-Reply-To: References: <4FDA2E73.4010906@uci.edu> <4FE9C23B.6050503@laxalde.org> Message-ID: <4FEA1265.9020604@laxalde.org> Skipper Seabold a ?crit : >>>> http://projects.scipy.org/scipy/ticket/1682 >>> >>> Fixed in master. >>> Should this be backported for 0.11 as well? >> >> I think yes. This points out that the test coverage is incomplete here. Done. > I uncovered some more unusual behavior doing root finding on functions > with complex numbers. Is this assumed to be supported? If so, I can > file more bug reports. The nonlin module has tests with complex functions, so I guess it is supposed to be supported. Other modules don't apparently. -- Denis From jsseabold at gmail.com Tue Jun 26 16:04:00 2012 From: jsseabold at gmail.com (Skipper Seabold) Date: Tue, 26 Jun 2012 16:04:00 -0400 Subject: [SciPy-User] Broyden with complex numbers [Was Re: ANN: SciPy 0.11.0 beta 1 released] Message-ID: On Tue, Jun 26, 2012 at 3:49 PM, Denis Laxalde wrote: > Skipper Seabold a ?crit : >>>>> http://projects.scipy.org/scipy/ticket/1682 >>>> >>>> Fixed in master. Thanks for having a look, btw. >>>> Should this be backported for 0.11 as well? >>> >>> I think yes. This points out that the test coverage is incomplete here. > > Done. > >> I uncovered some more unusual behavior doing root finding on functions >> with complex numbers. Is this assumed to be supported? If so, I can >> file more bug reports. > > The nonlin module has tests with complex functions, so I guess it is > supposed to be supported. Other modules don't apparently. This is using broyden. Maybe it's my numpy? '1.8.0.dev-6a06466' I've since figured out there's a closed form solution to this function, but ... 
from scipy import optimize import numpy as np z = np.array([ 1.01839781-0.j, 0.29594979-0.9802577j, 0.29594979+0.9802577j, -1.47466178-0.j]) r = np.abs(z) func = lambda x : (r*np.exp(2j*np.pi*np.array(x))) - z # correct result optimize.broyden1(func, [0, -.203, .203, .5]) # raises an error optimize.broyden1(func, [0, -.203, .203, .5], line_search=None) TypeError: Cannot cast ufunc add output from dtype('complex128') to dtype('float64') with casting rule 'same_kind' Skipper From pav at iki.fi Tue Jun 26 16:53:13 2012 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 26 Jun 2012 22:53:13 +0200 Subject: [SciPy-User] Broyden with complex numbers [Was Re: ANN: SciPy 0.11.0 beta 1 released] In-Reply-To: References: Message-ID: 26.06.2012 22:04, Skipper Seabold kirjoitti: > This is using broyden. Maybe it's my numpy? > '1.8.0.dev-6a06466' [clip] > # raises an error > optimize.broyden1(func, [0, -.203, .203, .5], line_search=None) > > > > TypeError: Cannot cast ufunc add output from dtype('complex128') to > dtype('float64') with casting rule 'same_kind' I get "nonlin.py:314: ComplexWarning: Casting complex values to real discards the imaginary part" with Numpy 1.6.1. There's indeed a probable bug on line 314 --- should maybe do "x = x + d" instead of "x += d". And yes, I think the inexact Newton should work OK also on the complex plane. I'm not sure whether all the Jacobian approximations are fully complex-safe, though. Pauli From denis at laxalde.org Tue Jun 26 16:59:21 2012 From: denis at laxalde.org (Denis Laxalde) Date: Tue, 26 Jun 2012 16:59:21 -0400 Subject: [SciPy-User] Broyden with complex numbers [Was Re: ANN: SciPy 0.11.0 beta 1 released] In-Reply-To: References: Message-ID: <4FEA22A9.3000400@laxalde.org> Pauli Virtanen a ?crit : >> > TypeError: Cannot cast ufunc add output from dtype('complex128') to >> > dtype('float64') with casting rule 'same_kind' > I get "nonlin.py:314: ComplexWarning: Casting complex values to real > discards the imaginary part" with Numpy 1.6.1. > > There's indeed a probable bug on line 314 --- should maybe do > "x = x + d" instead of "x += d". I agree. That would be consistent with what's done with line_search. -- Denis From charlesr.harris at gmail.com Tue Jun 26 18:02:29 2012 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 26 Jun 2012 16:02:29 -0600 Subject: [SciPy-User] ANN: SciPy 0.11.0 beta 1 released In-Reply-To: References: <4FDA2E73.4010906@uci.edu> <4FE9C23B.6050503@laxalde.org> Message-ID: On Tue, Jun 26, 2012 at 1:37 PM, Skipper Seabold wrote: > On Tue, Jun 26, 2012 at 3:30 PM, Pauli Virtanen wrote: > > 26.06.2012 16:07, Denis Laxalde kirjoitti: > >> Skipper Seabold a ?crit : > >>> Just uncovered a bug doingsome root finding > >>> > >>> This call signature for qr is (no longer?) valid > >>> > >>> > https://github.com/scipy/scipy/blob/master/scipy/optimize/nonlin.py#L844 > >>> > >>> Should be easy to update the call signature, but I don't know how > >>> quickly and it's untested. > >>> > >>> http://projects.scipy.org/scipy/ticket/1682 > >> > >> Fixed in master. > >> Should this be backported for 0.11 as well? > > > > I think yes. This points out that the test coverage is incomplete here. > > > > I uncovered some more unusual behavior doing root finding on functions > with complex numbers. Is this assumed to be supported? If so, I can > file more bug reports. > What sort of problems? What sort of functions? Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jsseabold at gmail.com Tue Jun 26 18:05:25 2012 From: jsseabold at gmail.com (Skipper Seabold) Date: Tue, 26 Jun 2012 18:05:25 -0400 Subject: [SciPy-User] ANN: SciPy 0.11.0 beta 1 released In-Reply-To: References: <4FDA2E73.4010906@uci.edu> <4FE9C23B.6050503@laxalde.org> Message-ID: On Tue, Jun 26, 2012 at 6:02 PM, Charles R Harris wrote: > > > On Tue, Jun 26, 2012 at 1:37 PM, Skipper Seabold > wrote: >> >> On Tue, Jun 26, 2012 at 3:30 PM, Pauli Virtanen wrote: >> > 26.06.2012 16:07, Denis Laxalde kirjoitti: >> >> Skipper Seabold a ?crit : >> >>> Just uncovered a bug doingsome root finding >> >>> >> >>> This call signature for qr is (no longer?) valid >> >>> >> >>> >> >>> https://github.com/scipy/scipy/blob/master/scipy/optimize/nonlin.py#L844 >> >>> >> >>> Should be easy to update the call signature, but I don't know how >> >>> quickly and it's untested. >> >>> >> >>> http://projects.scipy.org/scipy/ticket/1682 >> >> >> >> Fixed in master. >> >> Should this be backported for 0.11 as well? >> > >> > I think yes. This points out that the test coverage is incomplete here. >> > >> >> I uncovered some more unusual behavior doing root finding on functions >> with complex numbers. Is this assumed to be supported? If so, I can >> file more bug reports. > > > What sort of problems? What sort of functions? > Started a new thread: "Broyden with complex numbers" Skipper From pmhobson at gmail.com Tue Jun 26 20:32:48 2012 From: pmhobson at gmail.com (Paul Hobson) Date: Tue, 26 Jun 2012 17:32:48 -0700 Subject: [SciPy-User] Useful libraries beyond numpy and scipy? In-Reply-To: References: Message-ID: On Fri, Jun 22, 2012 at 6:40 PM, Brandon Stafford wrote: > Hi all, > > I've been working on a small board for scientists (and artists) that > runs Python. The idea is to make something like an Arduino, but beefed > up to handle real web traffic. Some background on the board at > http://www.engadget.com/2012/06/13/rascal-micro-hands-on-video/ and > more at http://rascalmicro.com > > I'm shipping boards with scipy and numpy preinstalled, and I'm > wondering what other libraries complement those two well. I'm aware of > (and actually have a library of my own listed on) the list at > http://www.scipy.org/Topical_Software but it's hard to tell what's > both mature and widely used. > > What are the top 3 or 4 libraries you use, either for control during > experiments or for data analysis? I love these bare bones machines that are coming out! I mostly use (judging from my ipython history): matplotlib pandas pyodbc/MySQLdb/psycopg2 GDAL + other GIS-type stuff From mohanr at fss.co.in Wed Jun 27 05:40:38 2012 From: mohanr at fss.co.in (Mohan Radhakrishnan) Date: Wed, 27 Jun 2012 15:10:38 +0530 Subject: [SciPy-User] Simulate web requests Message-ID: <3BAA9108A76CF149A553864B23D47E13C334F3@fssbemail.fss.india> Hi, I am looking for some assistance with these requirements. We are trying to simulate web requests to hit a web applications. Let's assume I have a url to hit that requires a username and password. 1. These web requests should have exponential inter arrival times. 2. Generate self-similar traffic. I am not very sure about the properties of this traffic. Is Variance Time plot used for analyzing if the data received from this simulation is self-similar ? Are there examples using SciPY that can help in creating a program that can generate this traffic ? Recommended reading ? I am also trying to use R for this. 
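A minimal sketch of the building blocks for both requirements, using only numpy (the rate and sample count below are assumed values, and the actual HTTP calls are left out): exponential inter-arrival gaps give a Poisson arrival process, while heavy-tailed Pareto periods are one commonly cited ingredient of self-similar traffic.

import numpy as np

rate = 5.0                                   # assumed mean arrival rate, requests per second
n = 1000                                     # assumed number of requests to schedule

gaps = np.random.exponential(scale=1.0 / rate, size=n)   # exponential inter-arrival times
arrivals = np.cumsum(gaps)                   # absolute arrival instants, in seconds

# numpy's pareto is the Lomax form; shifting by 1 and scaling gives a classical
# Pareto with minimum 0.1 s, a heavy-tailed alternative for the gaps
heavy_gaps = 0.1 * (np.random.pareto(a=1.5, size=n) + 1.0)

Whether the aggregated traffic is actually self-similar still has to be checked, for example with the variance-time plot mentioned above.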
Thanks, Mohan DISCLAIMER: ==========================================================================================================================================================The information contained in this e-mail message may be privileged and/or confidential and protected from disclosure under applicable law. It is intended only for the individual to whom or entity to which it is addressed as shown at the beginning of the message. If the reader of this message is not the intended recipient, or if the employee or agent responsible for delivering the message is not an employee or agent of the intended recipient, you are hereby notified that any review, dissemination,distribution, use, or copying of this message is strictly prohibited. If you have received this message in error, please notify us immediately by return e-mail and permanently delete this message and your reply to the extent it includes this message. Any views or opinions presented in this message or attachments are those of the author and do not necessarily represent those of the Company. All e-mails and attachments sent and received are subject to monitoring, reading, and archival by the Company.========================================================================================================================================================== -------------- next part -------------- An HTML attachment was scrubbed... URL: From sturla at molden.no Wed Jun 27 05:45:02 2012 From: sturla at molden.no (Sturla Molden) Date: Wed, 27 Jun 2012 11:45:02 +0200 Subject: [SciPy-User] Useful libraries beyond numpy and scipy? In-Reply-To: References: Message-ID: <4FEAD61E.2080904@molden.no> On 23.06.2012 03:40, Brandon Stafford wrote: > What are the top 3 or 4 libraries you use, either for control during > experiments or for data analysis? ROOT from CERN (C++, Python and GUI frontends) IMSL (Python, C and Fortran frontends) NAG (call from ctypes, Cython, f2py) ACML (call from ctypes, Cython, f2py) Intel MKL (call from ctypes, Cython, f2py) VTK and Mayavi2 PyOpenGL OpenCV PyOpenCL, PyCUDA wxPython PySide PyTables From sergio.vera at alma3d.com Wed Jun 27 06:15:48 2012 From: sergio.vera at alma3d.com (Sergio Vera) Date: Wed, 27 Jun 2012 12:15:48 +0200 Subject: [SciPy-User] Useful libraries beyond numpy and scipy? In-Reply-To: <4FEAD61E.2080904@molden.no> References: <4FEAD61E.2080904@molden.no> Message-ID: http://scikit-learn.org/ http://scikits-image.org/ I would love to seer other image processing stuff, however, I'm not sure of the capabilities of the hardware dealing with the more ambitious OpenCV, OpenCL, Mayavi2.... On Wed, Jun 27, 2012 at 11:45 AM, Sturla Molden wrote: > On 23.06.2012 03:40, Brandon Stafford wrote: > > > What are the top 3 or 4 libraries you use, either for control during > > experiments or for data analysis? > > ROOT from CERN (C++, Python and GUI frontends) > IMSL (Python, C and Fortran frontends) > NAG (call from ctypes, Cython, f2py) > ACML (call from ctypes, Cython, f2py) > Intel MKL (call from ctypes, Cython, f2py) > VTK and Mayavi2 > PyOpenGL > OpenCV > PyOpenCL, PyCUDA > wxPython > PySide > PyTables > > > > > > > > > > > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Sergio Vera Alma IT Systems C/ Vilana, 4B, 4? 1? 08022 Barcelona T. (+34) 932 380 592 www.alma3d.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From denis at laxalde.org Wed Jun 27 09:57:46 2012 From: denis at laxalde.org (Denis Laxalde) Date: Wed, 27 Jun 2012 09:57:46 -0400 Subject: [SciPy-User] Broyden with complex numbers [Was Re: ANN: SciPy 0.11.0 beta 1 released] In-Reply-To: <4FEA22A9.3000400@laxalde.org> References: <4FEA22A9.3000400@laxalde.org> Message-ID: <4FEB115A.8030409@laxalde.org> Denis Laxalde a ?crit : >>>> TypeError: Cannot cast ufunc add output from dtype('complex128') to >>>> dtype('float64') with casting rule 'same_kind' >> I get "nonlin.py:314: ComplexWarning: Casting complex values to real >> discards the imaginary part" with Numpy 1.6.1. >> >> There's indeed a probable bug on line 314 --- should maybe do >> "x = x + d" instead of "x += d". > > I agree. That would be consistent with what's done with line_search. Fixed in master and maintenance/0.11.x. -- Denis From jsseabold at gmail.com Wed Jun 27 09:59:26 2012 From: jsseabold at gmail.com (Skipper Seabold) Date: Wed, 27 Jun 2012 09:59:26 -0400 Subject: [SciPy-User] Broyden with complex numbers [Was Re: ANN: SciPy 0.11.0 beta 1 released] In-Reply-To: <4FEB115A.8030409@laxalde.org> References: <4FEA22A9.3000400@laxalde.org> <4FEB115A.8030409@laxalde.org> Message-ID: On Wed, Jun 27, 2012 at 9:57 AM, Denis Laxalde wrote: > Denis Laxalde a ?crit : >>>>> ? TypeError: Cannot cast ufunc add output from dtype('complex128') to >>>>> ? dtype('float64') with casting rule 'same_kind' >>> I get "nonlin.py:314: ComplexWarning: Casting complex values to real >>> discards the imaginary part" with Numpy 1.6.1. >>> >>> There's indeed a probable bug on line 314 --- should maybe do >>> "x = x + d" instead of "x += d". >> >> I agree. That would be consistent with what's done with line_search. > > Fixed in master and maintenance/0.11.x. > Great, thanks for looking into this. Skipper From vanforeest at gmail.com Wed Jun 27 11:13:41 2012 From: vanforeest at gmail.com (nicky van foreest) Date: Wed, 27 Jun 2012 17:13:41 +0200 Subject: [SciPy-User] Simulate web requests In-Reply-To: <3BAA9108A76CF149A553864B23D47E13C334F3@fssbemail.fss.india> References: <3BAA9108A76CF149A553864B23D47E13C334F3@fssbemail.fss.india> Message-ID: Hi Mohan, You might consider using simpy for the simulations. Here you can also find a simple, straightforward, simulation of a generic single-server queue: http://nicky.vanforeest.com/queueing/simulateQueuingProcess/queue.html. It should not be too hard to modify to a multi (infinite?) server queue. Nicky On 27 June 2012 11:40, Mohan Radhakrishnan wrote: > Hi, > > ???????? I am looking for some assistance with these requirements. ?We are > trying to simulate web requests to hit a web applications. Let?s assume I > have a url to hit that requires a username and password. > > > > These web requests should have exponential inter arrival times. > Generate self-similar traffic. I am not very sure about the properties of > this traffic. Is Variance Time plot used for analyzing if the data received > from this simulation is self-similar ? > > > > Are there examples using SciPY that can help in creating a program that can > generate this traffic ? ?Recommended reading ?? I am also trying to use R > for this. > > > > > > Thanks, > > Mohan > > > > DISCLAIMER: > ========================================================================================================================================================== > The information contained in this e-mail message may be privileged and/or > confidential and protected from disclosure under applicable law. 
It is > intended only for the individual to whom or entity to which it is addressed > as shown at the beginning of the message. If the reader of this message is > not the intended recipient, or if the employee or agent responsible for > delivering the message is not an employee or agent of the intended > recipient, you are hereby notified that any review, > dissemination,distribution, use, or copying of this message is strictly > prohibited. If you have received this message in error, please notify us > immediately by return e-mail and permanently delete this message and your > reply to the extent it includes this message. Any views or opinions > presented in this message or attachments are those of the author and do not > necessarily represent those of the Company. All e-mails and attachments sent > and received are subject to monitoring, reading, and archival by the > Company. > ========================================================================================================================================================== > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From denis at laxalde.org Wed Jun 27 13:25:18 2012 From: denis at laxalde.org (Denis Laxalde) Date: Wed, 27 Jun 2012 13:25:18 -0400 Subject: [SciPy-User] Broyden with complex numbers [Was Re: ANN: SciPy 0.11.0 beta 1 released] In-Reply-To: References: <4FEA22A9.3000400@laxalde.org> <4FEB115A.8030409@laxalde.org> Message-ID: <4FEB41FE.70904@laxalde.org> Skipper Seabold a ?crit : >>>>>> TypeError: Cannot cast ufunc add output from dtype('complex128') to >>>>>> dtype('float64') with casting rule 'same_kind' >>>> I get "nonlin.py:314: ComplexWarning: Casting complex values to real >>>> discards the imaginary part" with Numpy 1.6.1. >>>> >>>> There's indeed a probable bug on line 314 --- should maybe do >>>> "x = x + d" instead of "x += d". >>> >>> I agree. That would be consistent with what's done with line_search. >> >> Fixed in master and maintenance/0.11.x. >> > > Great, thanks for looking into this. That being said, your example still fails for me because of NaNs... -- Denis From jsseabold at gmail.com Wed Jun 27 13:29:26 2012 From: jsseabold at gmail.com (Skipper Seabold) Date: Wed, 27 Jun 2012 13:29:26 -0400 Subject: [SciPy-User] Broyden with complex numbers [Was Re: ANN: SciPy 0.11.0 beta 1 released] In-Reply-To: <4FEB41FE.70904@laxalde.org> References: <4FEA22A9.3000400@laxalde.org> <4FEB115A.8030409@laxalde.org> <4FEB41FE.70904@laxalde.org> Message-ID: On Wed, Jun 27, 2012 at 1:25 PM, Denis Laxalde wrote: > Skipper Seabold a ?crit : > >>>>>> TypeError: Cannot cast ufunc add output from dtype('complex128') > to > >>>>>> dtype('float64') with casting rule 'same_kind' > >>>> I get "nonlin.py:314: ComplexWarning: Casting complex values to real > >>>> discards the imaginary part" with Numpy 1.6.1. > >>>> > >>>> There's indeed a probable bug on line 314 --- should maybe do > >>>> "x = x + d" instead of "x += d". > >>> > >>> I agree. That would be consistent with what's done with line_search. > >> > >> Fixed in master and maintenance/0.11.x. > >> > > > > Great, thanks for looking into this. > > That being said, your example still fails for me because of NaNs... > Hmm, I'm not sure if it's a 'typlcal' root finding problem, but I was definitely having problems with most of the solvers trying to get a solution when I don't know good starting values. It's why I ended up finding the closed form solution. 
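The casting problem Pauli points to above is easy to reproduce outside of nonlin.py; a standalone sketch with arbitrary values:

import numpy as np

x = np.zeros(3)                  # float64 iterate, as the solver stores it
d = np.array([0.1 + 1j, 0, 0])   # a complex update step

y = x + d                        # fine: the result is upcast to complex128
# x += d                         # not fine: the in-place add cannot store complex128
                                 # into the float64 buffer, so older numpy warns and
                                 # drops the imaginary part while newer numpy raises
                                 # the TypeError quoted earlier in the thread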
Skipper -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Wed Jun 27 17:47:27 2012 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Wed, 27 Jun 2012 23:47:27 +0200 Subject: [SciPy-User] ANN: SciPy 0.11.0 beta 1 released In-Reply-To: <4FDA2E73.4010906@uci.edu> References: <4FDA2E73.4010906@uci.edu> Message-ID: On Thu, Jun 14, 2012 at 8:33 PM, Christoph Gohlke wrote: > > > On 6/13/2012 10:56 AM, Ralf Gommers wrote: > > Hi, > > > > I am pleased to announce the availability of the first beta release of > > SciPy 0.11.0. For this release over 120 tickets and pull requests have > > been closed, and many new features have been added. Also noteworthy is > > that the number of contributors for this release has risen to over 50. > > Some of the highlights are: > > > > - A new module, sparse.csgraph, has been added which provides a > > number of common sparse graph algorithms. > > - New unified interfaces to the existing optimization and root > > finding functions have been added. > > > > Sources and binaries can be found at > > http://sourceforge.net/projects/scipy/files/scipy/0.11.0b1/, release > > notes are copied below. > > > > Please try this release and report any problems on the mailing list. > > > > Cheers, > > Ralf > > > > Hi Ralf, > > Using msvc9/MKL builds and numpy 1.6.2: > > Thanks Scott and Christoph for testing. > All tests pass on win-amd64-py2.7 and win32-py2.7 (except for the known > ndimage.test_datatypes.test_uint64_max failure on win32). > OK, will handle that the same way as for 0.10.x. > > test_smoke_bisplrep_bisplev fails on win-amd64-py3.2 and win32-py3.2. > This is due to a less than brilliant py3k change: map(None, x, y) doesn't work anymore and the recommended spelling is now this (??): list(map(lambda *a: a, zip(*itertools.zip_longest(x, y)))) > test_shortest_path.test_shortest_path_indices fails on win-amd64-py3.2. > > This still needs investigation. The weave error that Scott reported on 3.x should probably simply be marked as knownfail in master and 0.11.x. Ralf > Christoph > > > ====================================================================== > ERROR: test_smoke_bisplrep_bisplev (test_fitpack.TestSmokeTests) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > > "X:\Python32-x32\lib\site-packages\scipy\interpolate\tests\test_fitpack.py", > line 219, in test_smoke_bisplrep_bisplev > self.check_5() > File > > "X:\Python32-x32\lib\site-packages\scipy\interpolate\tests\test_fitpack.py", > line 180, in check_5 > xy=makepairs(x,y) > File > > "X:\Python32-x32\lib\site-packages\scipy\interpolate\tests\test_fitpack.py", > line 32, in makepairs > xy.shape=sh[0]*sh[1],sh[2] > IndexError: tuple index out of range > > ====================================================================== > FAIL: test_shortest_path.test_shortest_path_indices(array([[ 0. , 3. > , 3. , 1. 
, 2.858], > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "X:\Python32\lib\site-packages\nose\case.py", line 198, in runTest > self.test(*self.arg) > File "X:\Python32\lib\site-packages\numpy\testing\utils.py", line > 800, in assert_array_almost_equal > header=('Arrays are not almost equal to %d decimals' % decimal)) > File "X:\Python32\lib\site-packages\numpy\testing\utils.py", line > 636, in assert_array_compare > raise AssertionError(msg) > AssertionError: > Arrays are not almost equal to 6 decimals > > (mismatch 15.0%) > x: array([[ 0. , 3. , 3. , 1. , > 2.85772861], > [ 3. , 0. , 6. , 2. , 4. ], > [ 3. , 6. , 0. , 4. , 5.85772861], > [ 1. , 2. , 4. , 0. , 2.85772861]]) > y: array([[ 0., 3., 3., 1., 2.], > [ 3., 0., 6., 2., 4.], > [ 3., 6., 0., 4., 5.], > [ 1., 2., 4., 0., 2.]]) > > ---------------------------------------------------------------------- > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mohanr at fss.co.in Thu Jun 28 00:52:02 2012 From: mohanr at fss.co.in (Mohan Radhakrishnan) Date: Thu, 28 Jun 2012 10:22:02 +0530 Subject: [SciPy-User] Simulate web requests In-Reply-To: Message-ID: <3BAA9108A76CF149A553864B23D47E13C33731@fssbemail.fss.india> Hi, Good article. I was earlier planning to use PDQ for this. Actually I was trying to use a statistical library to hit by web url in such a way that certain distributions of these hits can be simulated. It looks like SciPY code could be used to do that. I use basic R but when I want to simulate traffic like this I use java. It looked like SciPY can help write such code. It is a web statistical traffic simulation test for measuring and modeling for capacity planning. I will queues also to simulate but that is a different form of simulation ? Thanks, Mohan -----Original Message----- From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On Behalf Of nicky van foreest Sent: Wednesday, June 27, 2012 8:44 PM To: SciPy Users List Subject: Re: [SciPy-User] Simulate web requests Hi Mohan, You might consider using simpy for the simulations. Here you can also find a simple, straightforward, simulation of a generic single-server queue: http://nicky.vanforeest.com/queueing/simulateQueuingProcess/queue.html. It should not be too hard to modify to a multi (infinite?) server queue. Nicky On 27 June 2012 11:40, Mohan Radhakrishnan wrote: > Hi, > > I am looking for some assistance with these requirements. We are > trying to simulate web requests to hit a web applications. Let's assume I > have a url to hit that requires a username and password. > > > > These web requests should have exponential inter arrival times. > Generate self-similar traffic. I am not very sure about the properties of > this traffic. Is Variance Time plot used for analyzing if the data received > from this simulation is self-similar ? > > > > Are there examples using SciPY that can help in creating a program that can > generate this traffic ? Recommended reading ? I am also trying to use R > for this. 
> > > > > Thanks, > Mohan > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user
From servant.mathieu at gmail.com Thu Jun 28 09:33:40 2012 From: servant.mathieu at gmail.com (servant mathieu) Date: Thu, 28 Jun 2012 15:33:40 +0200 Subject: [SciPy-User] Mean, variance, and parametrisation of an inverse Gaussian distribution Message-ID: Dear scipy users, The time for a diffusion process to reach a single evidence threshold a is often modeled as an inverse Gaussian distribution with mean (a/ν) and variance (a*σ²/ν³), where ν represents the mean drift rate and σ² the variance of the accumulation process. How could I reparametrise the scipy.stats.invgauss function to manipulate those parameters? Cheers, Mathieu -------------- next part -------------- An HTML attachment was scrubbed... URL:
From webmaster at hasenkopf2000.net Thu Jun 28 11:26:22 2012 From: webmaster at hasenkopf2000.net (Andreas Pritschet) Date: Thu, 28 Jun 2012 17:26:22 +0200 Subject: [SciPy-User] Bessel function of complex order Message-ID: <4FEC779E.2020009@hasenkopf2000.net> Hi, I have noticed in the docs and some "bug reports" that Bessel functions in SciPy support only real order. But for my work I require a modified Bessel function of the second kind of complex(!) order for complex values. Is there a way in SciPy to calculate something like scipy.special.kv(1j*k, 1j), where k is an array? Thanks and best regards Andi -- Andreas Pritschet Phone: +49 151 11728439 Homepage: http://www.pritschet.me GPG Pub Key: http://goo.gl/4mOsM -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 554 bytes Desc: OpenPGP digital signature URL:
From surfcast23 at gmail.com Thu Jun 28 11:32:47 2012 From: surfcast23 at gmail.com (surfcast23) Date: Thu, 28 Jun 2012 08:32:47 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] ValueError: arrays must have same number of dimensions Message-ID: <34086886.post@talk.nabble.com> Hi I am trying to translate a Matlab code from Trefethen's Spectral Methods in MATLAB to Python. I am running into a problem with numpy.concatenate wanting the arrays to have the same number of dimensions. Here is part of the Matlab code that I am rewriting N = 512; h = 2*pi/N; x = h*(1:N); t = 0; dt = h/4; a = .1; c = a + sin (x-1).^2; v = exp(-100*(x-1).^2); vold = exp(-100*(x-a*dt-1).^2); column = [0 .5*(-1).^(1:N-1).*cot((1:N-1)*h/2)]; D = toeplitz(column,-column); % Time-stepping by leap frog formula: tmax = 15; tplot = .15; clf, drawnow, set(gcf,'renderer','zbuffer') plotgap = round(tplot/dt); dt = tplot/plotgap; nplots = round(tmax/tplot); data = [v; zeros(nplots,N)]; tdata = t; What I am trying in Python for j in range(1,N): tmax = 15; tplot = .15; h = 2*pi/N; dt = h/4; nplots = around(tmax/tplot); plotgap = around(tplot/dt); dt = tplot/plotgap; x = h*arange(1,N); c = a+np.sin(x-1)**2; v = exp(-100*(x-1)**2); vold = exp(-100*(x-a*dt-1)**2); column = ((0.5*(-1)**arange(1,N-1))*cotdg(arange(1,N-1))*(h/2)); D = toeplitz(column,-column); k = np.zeros(((nplots,N))); data = np.concatenate((v, k),axis = 0); tdata = t; I have highlighted the vectors in question. v.shape gives (511,) k.shape gives (100, 512) I would appreciate any help with this. Thank you! -- View this message in context: http://old.nabble.com/ValueError%3A-arrays-must-have-same-number-of-dimensions-tp34086886p34086886.html Sent from the Scipy-User mailing list archive at Nabble.com.
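For the inverse Gaussian parametrisation question above, a minimal sketch of one way to do it. It assumes scipy.stats.invgauss's standard form, whose frozen distribution has mean mu*scale and variance mu**3*scale**2; the helper name is only illustrative:

import numpy as np
from scipy import stats

def invgauss_from_drift(a, drift, sigma2):
    # target first-passage moments: mean a/drift, variance a*sigma2/drift**3
    mean = a / drift
    var = a * sigma2 / drift**3
    mu = var / mean**2            # shape parameter of invgauss
    scale = mean**3 / var         # scale parameter
    return stats.invgauss(mu, scale=scale)

dist = invgauss_from_drift(a=1.0, drift=2.0, sigma2=0.5)
# sanity check against the target moments (0.5 and 0.0625 here)
assert np.allclose([dist.mean(), dist.var()], [0.5, 0.0625])

Inverting the two moment equations gives mu = sigma2/(a*drift) and scale = a**2/sigma2 directly; the intermediate mean/var step is only there to keep the mapping readable.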
From hnry2k at hotmail.com Thu Jun 28 11:48:17 2012 From: hnry2k at hotmail.com (=?iso-8859-1?B?Sm9yZ2UgRS4gtFNhbmNoZXogU2FuY2hleg==?=) Date: Thu, 28 Jun 2012 10:48:17 -0500 Subject: [SciPy-User] [SciPy-user] ValueError: arrays must have same number of dimensions In-Reply-To: <34086886.post@talk.nabble.com> References: <34086886.post@talk.nabble.com> Message-ID: Hi, it seems to me that you don't know that loops in python end before the last index value of the loop, you just need to do something like: Np1 = N+1 for j in range (1, Np1): HTH Best Regards Jorge > Date: Thu, 28 Jun 2012 08:32:47 -0700 > From: surfcast23 at gmail.com > To: scipy-user at scipy.org > Subject: [SciPy-User] [SciPy-user] ValueError: arrays must have same number of dimensions > > > Hi I am trying to translate a Matlab code from Trefethen's Spectral Methods > in MATLAB to Python. I am running into a problem with numpy.concatenate > wanting the arrays to have the same number of dimensions. > Here is part of the Matlab code that I am rewriting > > > N = 512; h = 2*pi/N; x = h*(1:N); t = 0; dt = h/4; > a = .1; > c = a + sin (x-1).^2; > v = exp(-100*(x-1).^2); vold = exp(-100*(x-a*dt-1).^2); > > column = [0 .5*(-1).^(1:N-1).*cot((1:N-1)*h/2)]; > D = toeplitz(column,-column); > > % Time-stepping by leap frog formula: > tmax = 15; tplot = .15; clf, drawnow, set(gcf,'renderer','zbuffer') > plotgap = round(tplot/dt); dt = tplot/plotgap; > nplots = round(tmax/tplot); > data = [v; zeros(nplots,N)]; tdata = t; > > > What I am trying in Python > > for j in range(1,N): > tmax = 15; > tplot = .15; > h = 2*pi/N; > dt = h/4; > nplots = around(tmax/tplot); > plotgap = around(tplot/dt); > dt = tplot/plotgap; > x = h*arange(1,N); > c = a+np.sin(x-1)**2; > v = exp(-100*(x-1)**2); > vold = exp(-100*(x-a*dt-1)**2); > > column = ((0.5*(-1)**arange(1,N-1))*cotdg(arange(1,N-1))*(h/2)); > D = toeplitz(column,-column); > > k = np.zeros(((nplots,N))); > data = np.concatenate((v, k),axis = 0); > tdata = t; > > > I have highlighted the vectors in question. > v.shape gives (511,) > > k.shape gives (100, 512) > > I would appreciate any help with this. Thank you! > > -- > View this message in context: http://old.nabble.com/ValueError%3A-arrays-must-have-same-number-of-dimensions-tp34086886p34086886.html > Sent from the Scipy-User mailing list archive at Nabble.com. > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From tsyu80 at gmail.com Thu Jun 28 12:06:11 2012 From: tsyu80 at gmail.com (Tony Yu) Date: Thu, 28 Jun 2012 12:06:11 -0400 Subject: [SciPy-User] [SciPy-user] ValueError: arrays must have same number of dimensions In-Reply-To: References: <34086886.post@talk.nabble.com> Message-ID: On Thu, Jun 28, 2012 at 11:48 AM, Jorge E. ?Sanchez Sanchez < hnry2k at hotmail.com> wrote: > Hi, > > it seems to me that you don't know that loops in python end before the > last index value of the loop, > you just need to do something like: > Np1 = N+1 > for j in range (1, Np1): > > HTH > Best Regards > Jorge > > > > Date: Thu, 28 Jun 2012 08:32:47 -0700 > > From: surfcast23 at gmail.com > > To: scipy-user at scipy.org > > Subject: [SciPy-User] [SciPy-user] ValueError: arrays must have same > number of dimensions > > > > > > > Hi I am trying to translate a Matlab code from Trefethen's Spectral > Methods > > in MATLAB to Python. 
I am running into a problem with numpy.concatenate > > wanting the arrays to have the same number of dimensions. > > Here is part of the Matlab code that I am rewriting > > > > > > N = 512; h = 2*pi/N; x = h*(1:N); t = 0; dt = h/4; > > a = .1; > > c = a + sin (x-1).^2; > > v = exp(-100*(x-1).^2); vold = exp(-100*(x-a*dt-1).^2); > > > > column = [0 .5*(-1).^(1:N-1).*cot((1:N-1)*h/2)]; > > D = toeplitz(column,-column); > > > > % Time-stepping by leap frog formula: > > tmax = 15; tplot = .15; clf, drawnow, set(gcf,'renderer','zbuffer') > > plotgap = round(tplot/dt); dt = tplot/plotgap; > > nplots = round(tmax/tplot); > > data = [v; zeros(nplots,N)]; tdata = t; > > > > > > What I am trying in Python > > > > for j in range(1,N): > [snip] You actually don't need to loop. Your original matlab code translates fairly naturally into numpy code: the main difference being the substitution of `np.arange(N) + 1` for `[1:N]`, where `np.arange(N)` gives values from 0 to N - 1, as Jorge suggests (although it's not specific to loops). Here's a rough translation (not tested against matlab output, so I could have made some errors in the translation). Cheers, -Tony #~~~ code import numpy as np from scipy.linalg import toeplitz from scipy.special import cotdg N = 512 h = 2 * np.pi/N x = h * (np.arange(N) + 1) t = 0 dt = h / 4 a = .1 c = a + np.sin(x - 1)**2 v = np.exp(-100 * (x - 1)**2) vold = np.exp(-100 * (x - a*dt - 1)**2) i = np.arange(1, N) column = np.hstack([0, .5 * (-1**i) * cotdg(i * h/2)]) D = toeplitz(column, -column) -------------- next part -------------- An HTML attachment was scrubbed... URL: From surfcast23 at gmail.com Thu Jun 28 12:25:48 2012 From: surfcast23 at gmail.com (surfcast23) Date: Thu, 28 Jun 2012 09:25:48 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] ValueError: arrays must have same number of dimensions In-Reply-To: References: <34086886.post@talk.nabble.com> Message-ID: <34087175.post@talk.nabble.com> Hi Jorgre, I totally forgot the fact that python loops go to N-1 thanks. I am now getting this for the array shapes v.shape (512,) k.shape (100, 512) and still the error data = np.concatenate((v, k),axis = 0); ValueError: arrays must have same number of dimensions do I need to some how make the first dimension of k 512? Jorge E. ?Sanchez Sanchez wrote: > > > Hi, > > it seems to me that you don't know that loops in python end before the > last index value of the loop, > you just need to do something like: > Np1 = N+1 > for j in range (1, Np1): > > HTH > Best Regards > Jorge > > >> Date: Thu, 28 Jun 2012 08:32:47 -0700 >> From: surfcast23 at gmail.com >> To: scipy-user at scipy.org >> Subject: [SciPy-User] [SciPy-user] ValueError: arrays must have same >> number of dimensions >> >> >> Hi I am trying to translate a Matlab code from Trefethen's Spectral >> Methods >> in MATLAB to Python. I am running into a problem with numpy.concatenate >> wanting the arrays to have the same number of dimensions. 
>> Here is part of the Matlab code that I am rewriting >> >> >> N = 512; h = 2*pi/N; x = h*(1:N); t = 0; dt = h/4; >> a = .1; >> c = a + sin (x-1).^2; >> v = exp(-100*(x-1).^2); vold = exp(-100*(x-a*dt-1).^2); >> >> column = [0 .5*(-1).^(1:N-1).*cot((1:N-1)*h/2)]; >> D = toeplitz(column,-column); >> >> % Time-stepping by leap frog formula: >> tmax = 15; tplot = .15; clf, drawnow, set(gcf,'renderer','zbuffer') >> plotgap = round(tplot/dt); dt = tplot/plotgap; >> nplots = round(tmax/tplot); >> data = [v; zeros(nplots,N)]; tdata = t; >> >> >> What I am trying in Python >> >> for j in range(1,N): >> tmax = 15; >> tplot = .15; >> h = 2*pi/N; >> dt = h/4; >> nplots = around(tmax/tplot); >> plotgap = around(tplot/dt); >> dt = tplot/plotgap; >> x = h*arange(1,N); >> c = a+np.sin(x-1)**2; >> v = exp(-100*(x-1)**2); >> vold = exp(-100*(x-a*dt-1)**2); >> >> column = ((0.5*(-1)**arange(1,N-1))*cotdg(arange(1,N-1))*(h/2)); >> D = toeplitz(column,-column); >> >> k = np.zeros(((nplots,N))); >> data = np.concatenate((v, k),axis = 0); >> tdata = t; >> >> >> I have highlighted the vectors in question. >> v.shape gives (511,) >> >> k.shape gives (100, 512) >> >> I would appreciate any help with this. Thank you! >> >> -- >> View this message in context: >> http://old.nabble.com/ValueError%3A-arrays-must-have-same-number-of-dimensions-tp34086886p34086886.html >> Sent from the Scipy-User mailing list archive at Nabble.com. >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- View this message in context: http://old.nabble.com/ValueError%3A-arrays-must-have-same-number-of-dimensions-tp34086886p34087175.html Sent from the Scipy-User mailing list archive at Nabble.com. From hnry2k at hotmail.com Thu Jun 28 12:31:09 2012 From: hnry2k at hotmail.com (=?iso-8859-1?B?Sm9yZ2UgRS4gtFNhbmNoZXogU2FuY2hleg==?=) Date: Thu, 28 Jun 2012 11:31:09 -0500 Subject: [SciPy-User] [SciPy-user] ValueError: arrays must have same number of dimensions In-Reply-To: <34087175.post@talk.nabble.com> References: <34086886.post@talk.nabble.com>, , <34087175.post@talk.nabble.com> Message-ID: Look at Tony Yu's nice translation, it should work Regards Jorge > Date: Thu, 28 Jun 2012 09:25:48 -0700 > From: surfcast23 at gmail.com > To: scipy-user at scipy.org > Subject: Re: [SciPy-User] [SciPy-user] ValueError: arrays must have same number of dimensions > > > Hi Jorgre, > > I totally forgot the fact that python loops go to N-1 thanks. I am now > getting this for the array shapes > > v.shape (512,) > k.shape (100, 512) > > and still the error > > data = np.concatenate((v, k),axis = 0); > ValueError: arrays must have same number of dimensions > > > do I need to some how make the first dimension of k 512? > > > Jorge E. 
?Sanchez Sanchez wrote: > > > > > > Hi, > > > > it seems to me that you don't know that loops in python end before the > > last index value of the loop, > > you just need to do something like: > > Np1 = N+1 > > for j in range (1, Np1): > > > > HTH > > Best Regards > > Jorge > > > > > >> Date: Thu, 28 Jun 2012 08:32:47 -0700 > >> From: surfcast23 at gmail.com > >> To: scipy-user at scipy.org > >> Subject: [SciPy-User] [SciPy-user] ValueError: arrays must have same > >> number of dimensions > >> > >> > >> Hi I am trying to translate a Matlab code from Trefethen's Spectral > >> Methods > >> in MATLAB to Python. I am running into a problem with numpy.concatenate > >> wanting the arrays to have the same number of dimensions. > >> Here is part of the Matlab code that I am rewriting > >> > >> > >> N = 512; h = 2*pi/N; x = h*(1:N); t = 0; dt = h/4; > >> a = .1; > >> c = a + sin (x-1).^2; > >> v = exp(-100*(x-1).^2); vold = exp(-100*(x-a*dt-1).^2); > >> > >> column = [0 .5*(-1).^(1:N-1).*cot((1:N-1)*h/2)]; > >> D = toeplitz(column,-column); > >> > >> % Time-stepping by leap frog formula: > >> tmax = 15; tplot = .15; clf, drawnow, set(gcf,'renderer','zbuffer') > >> plotgap = round(tplot/dt); dt = tplot/plotgap; > >> nplots = round(tmax/tplot); > >> data = [v; zeros(nplots,N)]; tdata = t; > >> > >> > >> What I am trying in Python > >> > >> for j in range(1,N): > >> tmax = 15; > >> tplot = .15; > >> h = 2*pi/N; > >> dt = h/4; > >> nplots = around(tmax/tplot); > >> plotgap = around(tplot/dt); > >> dt = tplot/plotgap; > >> x = h*arange(1,N); > >> c = a+np.sin(x-1)**2; > >> v = exp(-100*(x-1)**2); > >> vold = exp(-100*(x-a*dt-1)**2); > >> > >> column = ((0.5*(-1)**arange(1,N-1))*cotdg(arange(1,N-1))*(h/2)); > >> D = toeplitz(column,-column); > >> > >> k = np.zeros(((nplots,N))); > >> data = np.concatenate((v, k),axis = 0); > >> tdata = t; > >> > >> > >> I have highlighted the vectors in question. > >> v.shape gives (511,) > >> > >> k.shape gives (100, 512) > >> > >> I would appreciate any help with this. Thank you! > >> > >> -- > >> View this message in context: > >> http://old.nabble.com/ValueError%3A-arrays-must-have-same-number-of-dimensions-tp34086886p34086886.html > >> Sent from the Scipy-User mailing list archive at Nabble.com. > >> > >> _______________________________________________ > >> SciPy-User mailing list > >> SciPy-User at scipy.org > >> http://mail.scipy.org/mailman/listinfo/scipy-user > > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > > -- > View this message in context: http://old.nabble.com/ValueError%3A-arrays-must-have-same-number-of-dimensions-tp34086886p34087175.html > Sent from the Scipy-User mailing list archive at Nabble.com. > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From surfcast23 at gmail.com Thu Jun 28 12:32:07 2012 From: surfcast23 at gmail.com (surfcast23) Date: Thu, 28 Jun 2012 09:32:07 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] ValueError: arrays must have same number of dimensions In-Reply-To: References: <34086886.post@talk.nabble.com> Message-ID: <34087216.post@talk.nabble.com> Hi Tony, I tried you r translation and it works, but I still get the error line 50, in data = np.concatenate((v, k),axis = 0); ValueError: arrays must have same number of dimensions Thanks, Khary Tony Yu-3 wrote: > > On Thu, Jun 28, 2012 at 11:48 AM, Jorge E. ?Sanchez Sanchez < > hnry2k at hotmail.com> wrote: > >> Hi, >> >> it seems to me that you don't know that loops in python end before the >> last index value of the loop, >> you just need to do something like: >> Np1 = N+1 >> for j in range (1, Np1): >> >> HTH >> Best Regards >> Jorge >> > >> >> > Date: Thu, 28 Jun 2012 08:32:47 -0700 >> > From: surfcast23 at gmail.com >> > To: scipy-user at scipy.org >> > Subject: [SciPy-User] [SciPy-user] ValueError: arrays must have same >> number of dimensions >> >> > >> > >> > Hi I am trying to translate a Matlab code from Trefethen's Spectral >> Methods >> > in MATLAB to Python. I am running into a problem with numpy.concatenate >> > wanting the arrays to have the same number of dimensions. >> > Here is part of the Matlab code that I am rewriting >> > >> > >> > N = 512; h = 2*pi/N; x = h*(1:N); t = 0; dt = h/4; >> > a = .1; >> > c = a + sin (x-1).^2; >> > v = exp(-100*(x-1).^2); vold = exp(-100*(x-a*dt-1).^2); >> > >> > column = [0 .5*(-1).^(1:N-1).*cot((1:N-1)*h/2)]; >> > D = toeplitz(column,-column); >> > >> > % Time-stepping by leap frog formula: >> > tmax = 15; tplot = .15; clf, drawnow, set(gcf,'renderer','zbuffer') >> > plotgap = round(tplot/dt); dt = tplot/plotgap; >> > nplots = round(tmax/tplot); >> > data = [v; zeros(nplots,N)]; tdata = t; >> > >> > >> > What I am trying in Python >> > >> > for j in range(1,N): >> > > [snip] > > You actually don't need to loop. Your original matlab code translates > fairly naturally into numpy code: the main difference being the > substitution of `np.arange(N) + 1` for `[1:N]`, where `np.arange(N)` gives > values from 0 to N - 1, as Jorge suggests (although it's not specific to > loops). > > Here's a rough translation (not tested against matlab output, so I could > have made some errors in the translation). > > Cheers, > -Tony > > #~~~ code > import numpy as np > from scipy.linalg import toeplitz > from scipy.special import cotdg > > N = 512 > h = 2 * np.pi/N > x = h * (np.arange(N) + 1) > t = 0 > dt = h / 4 > a = .1 > c = a + np.sin(x - 1)**2 > v = np.exp(-100 * (x - 1)**2) > vold = np.exp(-100 * (x - a*dt - 1)**2) > > i = np.arange(1, N) > column = np.hstack([0, .5 * (-1**i) * cotdg(i * h/2)]) > D = toeplitz(column, -column) > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- View this message in context: http://old.nabble.com/ValueError%3A-arrays-must-have-same-number-of-dimensions-tp34086886p34087216.html Sent from the Scipy-User mailing list archive at Nabble.com. 
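The shape mismatch itself can be reproduced and fixed in a few lines; a condensed sketch of what the replies below arrive at, using only the shapes quoted in the thread:

import numpy as np

nplots, N = 100, 512
v = np.zeros(N)              # 1-D initial profile, shape (N,)
k = np.zeros((nplots, N))    # 2-D storage, shape (nplots, N)

# np.concatenate needs the same number of dimensions, so make v a 1 x N row first
data = np.concatenate((v[np.newaxis, :], k), axis=0)    # shape (nplots + 1, N)

# or let np.vstack do that step implicitly
assert np.vstack((v, k)).shape == (nplots + 1, N)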
From hnry2k at hotmail.com Thu Jun 28 12:44:47 2012 From: hnry2k at hotmail.com (=?iso-8859-1?B?Sm9yZ2UgRS4gtFNhbmNoZXogU2FuY2hleg==?=) Date: Thu, 28 Jun 2012 11:44:47 -0500 Subject: [SciPy-User] [SciPy-user] ValueError: arrays must have same number of dimensions In-Reply-To: <34087175.post@talk.nabble.com> References: <34086886.post@talk.nabble.com>, , <34087175.post@talk.nabble.com> Message-ID: I was looking at the numpy manual -http://docs.scipy.org/doc/numpy/reference/generated/numpy.concatenate.html from it I can assure you that you should concatenate along axis=1 not axis = 0. Jorge > Date: Thu, 28 Jun 2012 09:25:48 -0700 > From: surfcast23 at gmail.com > To: scipy-user at scipy.org > Subject: Re: [SciPy-User] [SciPy-user] ValueError: arrays must have same number of dimensions > > > Hi Jorgre, > > I totally forgot the fact that python loops go to N-1 thanks. I am now > getting this for the array shapes > > v.shape (512,) > k.shape (100, 512) > > and still the error > > data = np.concatenate((v, k),axis = 0); > ValueError: arrays must have same number of dimensions > > > do I need to some how make the first dimension of k 512? > > > Jorge E. ?Sanchez Sanchez wrote: > > > > > > Hi, > > > > it seems to me that you don't know that loops in python end before the > > last index value of the loop, > > you just need to do something like: > > Np1 = N+1 > > for j in range (1, Np1): > > > > HTH > > Best Regards > > Jorge > > > > > >> Date: Thu, 28 Jun 2012 08:32:47 -0700 > >> From: surfcast23 at gmail.com > >> To: scipy-user at scipy.org > >> Subject: [SciPy-User] [SciPy-user] ValueError: arrays must have same > >> number of dimensions > >> > >> > >> Hi I am trying to translate a Matlab code from Trefethen's Spectral > >> Methods > >> in MATLAB to Python. I am running into a problem with numpy.concatenate > >> wanting the arrays to have the same number of dimensions. > >> Here is part of the Matlab code that I am rewriting > >> > >> > >> N = 512; h = 2*pi/N; x = h*(1:N); t = 0; dt = h/4; > >> a = .1; > >> c = a + sin (x-1).^2; > >> v = exp(-100*(x-1).^2); vold = exp(-100*(x-a*dt-1).^2); > >> > >> column = [0 .5*(-1).^(1:N-1).*cot((1:N-1)*h/2)]; > >> D = toeplitz(column,-column); > >> > >> % Time-stepping by leap frog formula: > >> tmax = 15; tplot = .15; clf, drawnow, set(gcf,'renderer','zbuffer') > >> plotgap = round(tplot/dt); dt = tplot/plotgap; > >> nplots = round(tmax/tplot); > >> data = [v; zeros(nplots,N)]; tdata = t; > >> > >> > >> What I am trying in Python > >> > >> for j in range(1,N): > >> tmax = 15; > >> tplot = .15; > >> h = 2*pi/N; > >> dt = h/4; > >> nplots = around(tmax/tplot); > >> plotgap = around(tplot/dt); > >> dt = tplot/plotgap; > >> x = h*arange(1,N); > >> c = a+np.sin(x-1)**2; > >> v = exp(-100*(x-1)**2); > >> vold = exp(-100*(x-a*dt-1)**2); > >> > >> column = ((0.5*(-1)**arange(1,N-1))*cotdg(arange(1,N-1))*(h/2)); > >> D = toeplitz(column,-column); > >> > >> k = np.zeros(((nplots,N))); > >> data = np.concatenate((v, k),axis = 0); > >> tdata = t; > >> > >> > >> I have highlighted the vectors in question. > >> v.shape gives (511,) > >> > >> k.shape gives (100, 512) > >> > >> I would appreciate any help with this. Thank you! > >> > >> -- > >> View this message in context: > >> http://old.nabble.com/ValueError%3A-arrays-must-have-same-number-of-dimensions-tp34086886p34086886.html > >> Sent from the Scipy-User mailing list archive at Nabble.com. 
> >> > >> _______________________________________________ > >> SciPy-User mailing list > >> SciPy-User at scipy.org > >> http://mail.scipy.org/mailman/listinfo/scipy-user > > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > > -- > View this message in context: http://old.nabble.com/ValueError%3A-arrays-must-have-same-number-of-dimensions-tp34086886p34087175.html > Sent from the Scipy-User mailing list archive at Nabble.com. > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From guziy.sasha at gmail.com Thu Jun 28 12:44:51 2012 From: guziy.sasha at gmail.com (Oleksandr Huziy) Date: Thu, 28 Jun 2012 12:44:51 -0400 Subject: [SciPy-User] [SciPy-user] ValueError: arrays must have same number of dimensions In-Reply-To: <34087216.post@talk.nabble.com> References: <34086886.post@talk.nabble.com> <34087216.post@talk.nabble.com> Message-ID: Try this data = np.concatenate((v.reshape((512,1)).transpose(), k)) Cheers -- Oleksandr Huziy 2012/6/28 surfcast23 > > Hi Tony, > > I tried you r translation and it works, but I still get the error > > line 50, in data = np.concatenate((v, k),axis = 0); > ValueError: arrays must have same number of dimensions > > Thanks, > Khary > > > > Tony Yu-3 wrote: > > > > On Thu, Jun 28, 2012 at 11:48 AM, Jorge E. ?Sanchez Sanchez < > > hnry2k at hotmail.com> wrote: > > > >> Hi, > >> > >> it seems to me that you don't know that loops in python end before the > >> last index value of the loop, > >> you just need to do something like: > >> Np1 = N+1 > >> for j in range (1, Np1): > >> > >> HTH > >> Best Regards > >> Jorge > >> > > > >> > >> > Date: Thu, 28 Jun 2012 08:32:47 -0700 > >> > From: surfcast23 at gmail.com > >> > To: scipy-user at scipy.org > >> > Subject: [SciPy-User] [SciPy-user] ValueError: arrays must have same > >> number of dimensions > >> > >> > > >> > > >> > Hi I am trying to translate a Matlab code from Trefethen's Spectral > >> Methods > >> > in MATLAB to Python. I am running into a problem with > numpy.concatenate > >> > wanting the arrays to have the same number of dimensions. > >> > Here is part of the Matlab code that I am rewriting > >> > > >> > > >> > N = 512; h = 2*pi/N; x = h*(1:N); t = 0; dt = h/4; > >> > a = .1; > >> > c = a + sin (x-1).^2; > >> > v = exp(-100*(x-1).^2); vold = exp(-100*(x-a*dt-1).^2); > >> > > >> > column = [0 .5*(-1).^(1:N-1).*cot((1:N-1)*h/2)]; > >> > D = toeplitz(column,-column); > >> > > >> > % Time-stepping by leap frog formula: > >> > tmax = 15; tplot = .15; clf, drawnow, set(gcf,'renderer','zbuffer') > >> > plotgap = round(tplot/dt); dt = tplot/plotgap; > >> > nplots = round(tmax/tplot); > >> > data = [v; zeros(nplots,N)]; tdata = t; > >> > > >> > > >> > What I am trying in Python > >> > > >> > for j in range(1,N): > >> > > > > [snip] > > > > You actually don't need to loop. Your original matlab code translates > > fairly naturally into numpy code: the main difference being the > > substitution of `np.arange(N) + 1` for `[1:N]`, where `np.arange(N)` > gives > > values from 0 to N - 1, as Jorge suggests (although it's not specific to > > loops). > > > > Here's a rough translation (not tested against matlab output, so I could > > have made some errors in the translation). 
> > > > Cheers, > > -Tony > > > > #~~~ code > > import numpy as np > > from scipy.linalg import toeplitz > > from scipy.special import cotdg > > > > N = 512 > > h = 2 * np.pi/N > > x = h * (np.arange(N) + 1) > > t = 0 > > dt = h / 4 > > a = .1 > > c = a + np.sin(x - 1)**2 > > v = np.exp(-100 * (x - 1)**2) > > vold = np.exp(-100 * (x - a*dt - 1)**2) > > > > i = np.arange(1, N) > > column = np.hstack([0, .5 * (-1**i) * cotdg(i * h/2)]) > > D = toeplitz(column, -column) > > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > > -- > View this message in context: > http://old.nabble.com/ValueError%3A-arrays-must-have-same-number-of-dimensions-tp34086886p34087216.html > Sent from the Scipy-User mailing list archive at Nabble.com. > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From surfcast23 at gmail.com Thu Jun 28 12:49:41 2012 From: surfcast23 at gmail.com (surfcast23) Date: Thu, 28 Jun 2012 09:49:41 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] ValueError: arrays must have same number of dimensions In-Reply-To: References: <34086886.post@talk.nabble.com> <34087216.post@talk.nabble.com> Message-ID: <34087320.post@talk.nabble.com> Hi sanGuziy, That did it. Thank you! can you explain why it worked for future reference? Khary sanGuziy wrote: > > Try this > > data = np.concatenate((v.reshape((512,1)).transpose(), k)) > > Cheers > -- > Oleksandr Huziy > > 2012/6/28 surfcast23 > >> >> Hi Tony, >> >> I tried you r translation and it works, but I still get the error >> >> line 50, in data = np.concatenate((v, k),axis = 0); >> ValueError: arrays must have same number of dimensions >> >> Thanks, >> Khary >> >> >> >> Tony Yu-3 wrote: >> > >> > On Thu, Jun 28, 2012 at 11:48 AM, Jorge E. ?Sanchez Sanchez < >> > hnry2k at hotmail.com> wrote: >> > >> >> Hi, >> >> >> >> it seems to me that you don't know that loops in python end before the >> >> last index value of the loop, >> >> you just need to do something like: >> >> Np1 = N+1 >> >> for j in range (1, Np1): >> >> >> >> HTH >> >> Best Regards >> >> Jorge >> >> >> > >> >> >> >> > Date: Thu, 28 Jun 2012 08:32:47 -0700 >> >> > From: surfcast23 at gmail.com >> >> > To: scipy-user at scipy.org >> >> > Subject: [SciPy-User] [SciPy-user] ValueError: arrays must have same >> >> number of dimensions >> >> >> >> > >> >> > >> >> > Hi I am trying to translate a Matlab code from Trefethen's Spectral >> >> Methods >> >> > in MATLAB to Python. I am running into a problem with >> numpy.concatenate >> >> > wanting the arrays to have the same number of dimensions. 
>> >> > Here is part of the Matlab code that I am rewriting >> >> > >> >> > >> >> > N = 512; h = 2*pi/N; x = h*(1:N); t = 0; dt = h/4; >> >> > a = .1; >> >> > c = a + sin (x-1).^2; >> >> > v = exp(-100*(x-1).^2); vold = exp(-100*(x-a*dt-1).^2); >> >> > >> >> > column = [0 .5*(-1).^(1:N-1).*cot((1:N-1)*h/2)]; >> >> > D = toeplitz(column,-column); >> >> > >> >> > % Time-stepping by leap frog formula: >> >> > tmax = 15; tplot = .15; clf, drawnow, set(gcf,'renderer','zbuffer') >> >> > plotgap = round(tplot/dt); dt = tplot/plotgap; >> >> > nplots = round(tmax/tplot); >> >> > data = [v; zeros(nplots,N)]; tdata = t; >> >> > >> >> > >> >> > What I am trying in Python >> >> > >> >> > for j in range(1,N): >> >> >> > >> > [snip] >> > >> > You actually don't need to loop. Your original matlab code translates >> > fairly naturally into numpy code: the main difference being the >> > substitution of `np.arange(N) + 1` for `[1:N]`, where `np.arange(N)` >> gives >> > values from 0 to N - 1, as Jorge suggests (although it's not specific >> to >> > loops). >> > >> > Here's a rough translation (not tested against matlab output, so I >> could >> > have made some errors in the translation). >> > >> > Cheers, >> > -Tony >> > >> > #~~~ code >> > import numpy as np >> > from scipy.linalg import toeplitz >> > from scipy.special import cotdg >> > >> > N = 512 >> > h = 2 * np.pi/N >> > x = h * (np.arange(N) + 1) >> > t = 0 >> > dt = h / 4 >> > a = .1 >> > c = a + np.sin(x - 1)**2 >> > v = np.exp(-100 * (x - 1)**2) >> > vold = np.exp(-100 * (x - a*dt - 1)**2) >> > >> > i = np.arange(1, N) >> > column = np.hstack([0, .5 * (-1**i) * cotdg(i * h/2)]) >> > D = toeplitz(column, -column) >> > >> > _______________________________________________ >> > SciPy-User mailing list >> > SciPy-User at scipy.org >> > http://mail.scipy.org/mailman/listinfo/scipy-user >> > >> > >> >> -- >> View this message in context: >> http://old.nabble.com/ValueError%3A-arrays-must-have-same-number-of-dimensions-tp34086886p34087216.html >> Sent from the Scipy-User mailing list archive at Nabble.com. >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- View this message in context: http://old.nabble.com/ValueError%3A-arrays-must-have-same-number-of-dimensions-tp34086886p34087320.html Sent from the Scipy-User mailing list archive at Nabble.com. From guziy.sasha at gmail.com Thu Jun 28 12:59:51 2012 From: guziy.sasha at gmail.com (Oleksandr Huziy) Date: Thu, 28 Jun 2012 12:59:51 -0400 Subject: [SciPy-User] [SciPy-user] ValueError: arrays must have same number of dimensions In-Reply-To: <34087320.post@talk.nabble.com> References: <34086886.post@talk.nabble.com> <34087216.post@talk.nabble.com> <34087320.post@talk.nabble.com> Message-ID: It asked for the same number of dimensions so it needed to be 2, that is why reshape (you get 2 dimensions but the size of the second is 1), and transpose, because the dimensions, other than the one along which you are concatenating should be of the same size. Cheers __ Oleksandr Huziy 2012/6/28 surfcast23 > > Hi sanGuziy, > > That did it. Thank you! can you explain why it worked for future > reference? 
> > Khary > > > > sanGuziy wrote: > > > > Try this > > > > data = np.concatenate((v.reshape((512,1)).transpose(), k)) > > > > Cheers > > -- > > Oleksandr Huziy > > > > 2012/6/28 surfcast23 > > > >> > >> Hi Tony, > >> > >> I tried you r translation and it works, but I still get the error > >> > >> line 50, in data = np.concatenate((v, k),axis = 0); > >> ValueError: arrays must have same number of dimensions > >> > >> Thanks, > >> Khary > >> > >> > >> > >> Tony Yu-3 wrote: > >> > > >> > On Thu, Jun 28, 2012 at 11:48 AM, Jorge E. ?Sanchez Sanchez < > >> > hnry2k at hotmail.com> wrote: > >> > > >> >> Hi, > >> >> > >> >> it seems to me that you don't know that loops in python end before > the > >> >> last index value of the loop, > >> >> you just need to do something like: > >> >> Np1 = N+1 > >> >> for j in range (1, Np1): > >> >> > >> >> HTH > >> >> Best Regards > >> >> Jorge > >> >> > >> > > >> >> > >> >> > Date: Thu, 28 Jun 2012 08:32:47 -0700 > >> >> > From: surfcast23 at gmail.com > >> >> > To: scipy-user at scipy.org > >> >> > Subject: [SciPy-User] [SciPy-user] ValueError: arrays must have > same > >> >> number of dimensions > >> >> > >> >> > > >> >> > > >> >> > Hi I am trying to translate a Matlab code from Trefethen's Spectral > >> >> Methods > >> >> > in MATLAB to Python. I am running into a problem with > >> numpy.concatenate > >> >> > wanting the arrays to have the same number of dimensions. > >> >> > Here is part of the Matlab code that I am rewriting > >> >> > > >> >> > > >> >> > N = 512; h = 2*pi/N; x = h*(1:N); t = 0; dt = h/4; > >> >> > a = .1; > >> >> > c = a + sin (x-1).^2; > >> >> > v = exp(-100*(x-1).^2); vold = exp(-100*(x-a*dt-1).^2); > >> >> > > >> >> > column = [0 .5*(-1).^(1:N-1).*cot((1:N-1)*h/2)]; > >> >> > D = toeplitz(column,-column); > >> >> > > >> >> > % Time-stepping by leap frog formula: > >> >> > tmax = 15; tplot = .15; clf, drawnow, set(gcf,'renderer','zbuffer') > >> >> > plotgap = round(tplot/dt); dt = tplot/plotgap; > >> >> > nplots = round(tmax/tplot); > >> >> > data = [v; zeros(nplots,N)]; tdata = t; > >> >> > > >> >> > > >> >> > What I am trying in Python > >> >> > > >> >> > for j in range(1,N): > >> >> > >> > > >> > [snip] > >> > > >> > You actually don't need to loop. Your original matlab code translates > >> > fairly naturally into numpy code: the main difference being the > >> > substitution of `np.arange(N) + 1` for `[1:N]`, where `np.arange(N)` > >> gives > >> > values from 0 to N - 1, as Jorge suggests (although it's not specific > >> to > >> > loops). > >> > > >> > Here's a rough translation (not tested against matlab output, so I > >> could > >> > have made some errors in the translation). 
> >> > > >> > Cheers, > >> > -Tony > >> > > >> > #~~~ code > >> > import numpy as np > >> > from scipy.linalg import toeplitz > >> > from scipy.special import cotdg > >> > > >> > N = 512 > >> > h = 2 * np.pi/N > >> > x = h * (np.arange(N) + 1) > >> > t = 0 > >> > dt = h / 4 > >> > a = .1 > >> > c = a + np.sin(x - 1)**2 > >> > v = np.exp(-100 * (x - 1)**2) > >> > vold = np.exp(-100 * (x - a*dt - 1)**2) > >> > > >> > i = np.arange(1, N) > >> > column = np.hstack([0, .5 * (-1**i) * cotdg(i * h/2)]) > >> > D = toeplitz(column, -column) > >> > > >> > _______________________________________________ > >> > SciPy-User mailing list > >> > SciPy-User at scipy.org > >> > http://mail.scipy.org/mailman/listinfo/scipy-user > >> > > >> > > >> > >> -- > >> View this message in context: > >> > http://old.nabble.com/ValueError%3A-arrays-must-have-same-number-of-dimensions-tp34086886p34087216.html > >> Sent from the Scipy-User mailing list archive at Nabble.com. > >> > >> _______________________________________________ > >> SciPy-User mailing list > >> SciPy-User at scipy.org > >> http://mail.scipy.org/mailman/listinfo/scipy-user > >> > > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > > -- > View this message in context: > http://old.nabble.com/ValueError%3A-arrays-must-have-same-number-of-dimensions-tp34086886p34087320.html > Sent from the Scipy-User mailing list archive at Nabble.com. > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tsyu80 at gmail.com Thu Jun 28 13:00:23 2012 From: tsyu80 at gmail.com (Tony Yu) Date: Thu, 28 Jun 2012 13:00:23 -0400 Subject: [SciPy-User] [SciPy-user] ValueError: arrays must have same number of dimensions In-Reply-To: <34087320.post@talk.nabble.com> References: <34086886.post@talk.nabble.com> <34087216.post@talk.nabble.com> <34087320.post@talk.nabble.com> Message-ID: On Thu, Jun 28, 2012 at 12:49 PM, surfcast23 wrote: > > Hi sanGuziy, > > That did it. Thank you! can you explain why it worked for future > reference? > > Khary > Hi Khary, Two points: 1) `concatenate` doesn't know which direction you want to concatenate in (this is the `axis` argument Jorge metioned) 2) `concatenate` expects arrays of the same dimension (in your example, v is 1D and k is 2D), so if you're adding rows to an array, those arrays must have the same number of columns (v had no columns b/c it was an N-array, which is different than an N x 1, column array). Instead of `concatenate` you could use `vstack`: `np.vstack([v, k])` The "v" in `vstack` means "vertical" here; there's also `hstack` and `dstack` (horizontal and depth). `vstack` is nice because you don't have to broadcast your input `v` to a column vector (as you did with `concatenate`). -Tony -------------- next part -------------- An HTML attachment was scrubbed... URL: From surfcast23 at gmail.com Thu Jun 28 13:03:19 2012 From: surfcast23 at gmail.com (surfcast23) Date: Thu, 28 Jun 2012 10:03:19 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] ValueError: arrays must have same number of dimensions In-Reply-To: References: <34086886.post@talk.nabble.com> <34087216.post@talk.nabble.com> <34087320.post@talk.nabble.com> Message-ID: <34087396.post@talk.nabble.com> Thanks for the clarification Tony! 
Tony Yu-3 wrote: > > On Thu, Jun 28, 2012 at 12:49 PM, surfcast23 wrote: > >> >> Hi sanGuziy, >> >> That did it. Thank you! can you explain why it worked for future >> reference? >> >> Khary >> > > Hi Khary, > > Two points: > 1) `concatenate` doesn't know which direction you want to concatenate in > (this is the `axis` argument Jorge metioned) > 2) `concatenate` expects arrays of the same dimension (in your example, v > is 1D and k is 2D), so if you're adding rows to an array, those arrays > must > have the same number of columns (v had no columns b/c it was an N-array, > which is different than an N x 1, column array). > > Instead of `concatenate` you could use `vstack`: `np.vstack([v, k])` > The "v" in `vstack` means "vertical" here; there's also `hstack` and > `dstack` (horizontal and depth). `vstack` is nice because you don't have > to > broadcast your input `v` to a column vector (as you did with > `concatenate`). > > -Tony > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- View this message in context: http://old.nabble.com/ValueError%3A-arrays-must-have-same-number-of-dimensions-tp34086886p34087396.html Sent from the Scipy-User mailing list archive at Nabble.com. From surfcast23 at gmail.com Thu Jun 28 13:03:43 2012 From: surfcast23 at gmail.com (surfcast23) Date: Thu, 28 Jun 2012 10:03:43 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] ValueError: arrays must have same number of dimensions In-Reply-To: References: <34086886.post@talk.nabble.com> <34087216.post@talk.nabble.com> <34087320.post@talk.nabble.com> Message-ID: <34087399.post@talk.nabble.com> Got it Thank you sanGuziy wrote: > > It asked for the same number of dimensions so it needed to be 2, > that is why reshape (you get 2 dimensions but the size of the second is > 1), > and transpose, because > the dimensions, other than the one along which you are concatenating > should > be of the same size. > > Cheers > __ > Oleksandr Huziy > > 2012/6/28 surfcast23 > >> >> Hi sanGuziy, >> >> That did it. Thank you! can you explain why it worked for future >> reference? >> >> Khary >> >> >> >> sanGuziy wrote: >> > >> > Try this >> > >> > data = np.concatenate((v.reshape((512,1)).transpose(), k)) >> > >> > Cheers >> > -- >> > Oleksandr Huziy >> > >> > 2012/6/28 surfcast23 >> > >> >> >> >> Hi Tony, >> >> >> >> I tried you r translation and it works, but I still get the error >> >> >> >> line 50, in data = np.concatenate((v, k),axis = 0); >> >> ValueError: arrays must have same number of dimensions >> >> >> >> Thanks, >> >> Khary >> >> >> >> >> >> >> >> Tony Yu-3 wrote: >> >> > >> >> > On Thu, Jun 28, 2012 at 11:48 AM, Jorge E. ?Sanchez Sanchez < >> >> > hnry2k at hotmail.com> wrote: >> >> > >> >> >> Hi, >> >> >> >> >> >> it seems to me that you don't know that loops in python end before >> the >> >> >> last index value of the loop, >> >> >> you just need to do something like: >> >> >> Np1 = N+1 >> >> >> for j in range (1, Np1): >> >> >> >> >> >> HTH >> >> >> Best Regards >> >> >> Jorge >> >> >> >> >> > >> >> >> >> >> >> > Date: Thu, 28 Jun 2012 08:32:47 -0700 >> >> >> > From: surfcast23 at gmail.com >> >> >> > To: scipy-user at scipy.org >> >> >> > Subject: [SciPy-User] [SciPy-user] ValueError: arrays must have >> same >> >> >> number of dimensions >> >> >> >> >> >> > >> >> >> > >> >> >> > Hi I am trying to translate a Matlab code from Trefethen's >> Spectral >> >> >> Methods >> >> >> > in MATLAB to Python. 
I am running into a problem with >> >> numpy.concatenate >> >> >> > wanting the arrays to have the same number of dimensions. >> >> >> > Here is part of the Matlab code that I am rewriting >> >> >> > >> >> >> > >> >> >> > N = 512; h = 2*pi/N; x = h*(1:N); t = 0; dt = h/4; >> >> >> > a = .1; >> >> >> > c = a + sin (x-1).^2; >> >> >> > v = exp(-100*(x-1).^2); vold = exp(-100*(x-a*dt-1).^2); >> >> >> > >> >> >> > column = [0 .5*(-1).^(1:N-1).*cot((1:N-1)*h/2)]; >> >> >> > D = toeplitz(column,-column); >> >> >> > >> >> >> > % Time-stepping by leap frog formula: >> >> >> > tmax = 15; tplot = .15; clf, drawnow, >> set(gcf,'renderer','zbuffer') >> >> >> > plotgap = round(tplot/dt); dt = tplot/plotgap; >> >> >> > nplots = round(tmax/tplot); >> >> >> > data = [v; zeros(nplots,N)]; tdata = t; >> >> >> > >> >> >> > >> >> >> > What I am trying in Python >> >> >> > >> >> >> > for j in range(1,N): >> >> >> >> >> > >> >> > [snip] >> >> > >> >> > You actually don't need to loop. Your original matlab code >> translates >> >> > fairly naturally into numpy code: the main difference being the >> >> > substitution of `np.arange(N) + 1` for `[1:N]`, where `np.arange(N)` >> >> gives >> >> > values from 0 to N - 1, as Jorge suggests (although it's not >> specific >> >> to >> >> > loops). >> >> > >> >> > Here's a rough translation (not tested against matlab output, so I >> >> could >> >> > have made some errors in the translation). >> >> > >> >> > Cheers, >> >> > -Tony >> >> > >> >> > #~~~ code >> >> > import numpy as np >> >> > from scipy.linalg import toeplitz >> >> > from scipy.special import cotdg >> >> > >> >> > N = 512 >> >> > h = 2 * np.pi/N >> >> > x = h * (np.arange(N) + 1) >> >> > t = 0 >> >> > dt = h / 4 >> >> > a = .1 >> >> > c = a + np.sin(x - 1)**2 >> >> > v = np.exp(-100 * (x - 1)**2) >> >> > vold = np.exp(-100 * (x - a*dt - 1)**2) >> >> > >> >> > i = np.arange(1, N) >> >> > column = np.hstack([0, .5 * (-1**i) * cotdg(i * h/2)]) >> >> > D = toeplitz(column, -column) >> >> > >> >> > _______________________________________________ >> >> > SciPy-User mailing list >> >> > SciPy-User at scipy.org >> >> > http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > >> >> > >> >> >> >> -- >> >> View this message in context: >> >> >> http://old.nabble.com/ValueError%3A-arrays-must-have-same-number-of-dimensions-tp34086886p34087216.html >> >> Sent from the Scipy-User mailing list archive at Nabble.com. >> >> >> >> _______________________________________________ >> >> SciPy-User mailing list >> >> SciPy-User at scipy.org >> >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> >> > >> > _______________________________________________ >> > SciPy-User mailing list >> > SciPy-User at scipy.org >> > http://mail.scipy.org/mailman/listinfo/scipy-user >> > >> > >> >> -- >> View this message in context: >> http://old.nabble.com/ValueError%3A-arrays-must-have-same-number-of-dimensions-tp34086886p34087320.html >> Sent from the Scipy-User mailing list archive at Nabble.com. >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- View this message in context: http://old.nabble.com/ValueError%3A-arrays-must-have-same-number-of-dimensions-tp34086886p34087399.html Sent from the Scipy-User mailing list archive at Nabble.com. 
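[Editorial sketch, pulling this thread together] Below is a consolidated, hedged version of the translation discussed above. It follows Tony's rough draft (which he flagged as untested) plus the reshape/vstack points made by Oleksandr and Tony, with two small corrections to the quoted code: in Python, -1**i parses as -(1**i), so the alternating sign needs (-1.0)**i, and scipy.special.cotdg expects degrees, so for the radian argument i*h/2 it is safer to use 1/np.tan(i*h/2). The stacking line assumes the k in the original error message is the zeros(nplots, N) block from the Matlab line data = [v; zeros(nplots,N)]; constants are taken from the quoted Matlab snippet.

#~~~ code
import numpy as np
from scipy.linalg import toeplitz

N = 512
h = 2 * np.pi / N
x = h * (np.arange(N) + 1)          # Matlab's h*(1:N)
t = 0.0
dt = h / 4
a = 0.1
c = a + np.sin(x - 1) ** 2
v = np.exp(-100 * (x - 1) ** 2)
vold = np.exp(-100 * (x - a * dt - 1) ** 2)

# Matlab: column = [0 .5*(-1).^(1:N-1).*cot((1:N-1)*h/2)]
# (-1.0)**i gives the alternating sign; 1/np.tan is the cotangent in radians.
i = np.arange(1, N)
column = np.hstack([0.0, 0.5 * (-1.0) ** i / np.tan(i * h / 2)])
D = toeplitz(column, -column)

# Matlab: data = [v; zeros(nplots,N)]
tmax, tplot = 15.0, 0.15
plotgap = int(round(tplot / dt))
dt = tplot / plotgap
nplots = int(round(tmax / tplot))

# np.concatenate insists on matching ndim, so the 1-D v must first become a
# 1 x N row (v.reshape(1, N), or reshape((N, 1)).transpose() as suggested above):
data = np.concatenate((v.reshape(1, N), np.zeros((nplots, N))), axis=0)
# np.vstack promotes v to 2-D by itself, which is the closest analogue of [v; k]:
data = np.vstack([v, np.zeros((nplots, N))])
tdata = t

Both stacking lines produce the same (nplots + 1) x N array; vstack simply saves the explicit reshape.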
From vanforeest at gmail.com Thu Jun 28 13:23:12 2012 From: vanforeest at gmail.com (nicky van foreest) Date: Thu, 28 Jun 2012 19:23:12 +0200 Subject: [SciPy-User] Mean, variance, and parametrisation of an inverse Gaussian distribution In-Reply-To: References: Message-ID: As a first step, here is some example code: In [1]: from scipy.stats import invgauss In [2]: rv = invgauss(3, loc = 4) In [3]: rv.mean() Out[3]: 7.0 In [4]: rv = invgauss(3, loc = 0) In [5]: rv.mean() Out[5]: 3.0 In [6]: On 28 June 2012 19:22, nicky van foreest wrote: > Hi Mathieu, > > I just checked the wikipedia on this distribution. From this and the > info on the sicpy.stats on invgauss I think you should try to use the > loc, scale and shape parameters of invgauss to match your need. The > meaning of loc and scale can be found here: > > http://docs.scipy.org/doc/scipy/reference/tutorial/stats.html#shifting-and-scaling > > The paragraph below this explains how to use shape parameters. You can > tune these parameters such that the mean is a/\sigma and the variance > is also what you need. > > Hope this helps > > Nicky > > On 28 June 2012 15:33, servant mathieu wrote: >> Dear scipy users, >> >> The time for a diffusion process to reach a single evidence threshold?a?is >> often modeled as an inverse Gaussian distribution with mean (a/?) and >> variance (a*?2/?3 ), where??? represents the mean drift rate and???2? the >> variance of the accumlulation process. How could I reparametrise the >> scipy.stats.invgauss??function to manipulate those parameters? >> >> Cheers, >> Mathieu >> >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> From vanforeest at gmail.com Thu Jun 28 13:22:31 2012 From: vanforeest at gmail.com (nicky van foreest) Date: Thu, 28 Jun 2012 19:22:31 +0200 Subject: [SciPy-User] Mean, variance, and parametrisation of an inverse Gaussian distribution In-Reply-To: References: Message-ID: Hi Mathieu, I just checked the wikipedia on this distribution. From this and the info on the sicpy.stats on invgauss I think you should try to use the loc, scale and shape parameters of invgauss to match your need. The meaning of loc and scale can be found here: http://docs.scipy.org/doc/scipy/reference/tutorial/stats.html#shifting-and-scaling The paragraph below this explains how to use shape parameters. You can tune these parameters such that the mean is a/\sigma and the variance is also what you need. Hope this helps Nicky On 28 June 2012 15:33, servant mathieu wrote: > Dear scipy users, > > The time for a diffusion process to reach a single evidence threshold?a?is > often modeled as an inverse Gaussian distribution with mean (a/?) and > variance (a*?2/?3 ), where??? represents the mean drift rate and???2? the > variance of the accumlulation process. How could I reparametrise the > scipy.stats.invgauss??function to manipulate those parameters? 
> > Cheers, > Mathieu > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From josef.pktd at gmail.com Thu Jun 28 13:31:53 2012 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 28 Jun 2012 13:31:53 -0400 Subject: [SciPy-User] Mean, variance, and parametrisation of an inverse Gaussian distribution In-Reply-To: References: Message-ID: On Thu, Jun 28, 2012 at 1:23 PM, nicky van foreest wrote: > As a first step, here is some example code: > > In [1]: from scipy.stats import invgauss > > In [2]: rv = invgauss(3, loc = 4) > > In [3]: rv.mean() > Out[3]: 7.0 > > In [4]: rv = invgauss(3, loc = 0) > > In [5]: rv.mean() > Out[5]: 3.0 > > In [6]: > > > On 28 June 2012 19:22, nicky van foreest wrote: >> Hi Mathieu, >> >> I just checked the wikipedia on this distribution. From this and the >> info on the sicpy.stats on invgauss I think you should try to use the >> loc, scale and shape parameters of invgauss to match your need. The >> meaning of loc and scale can be found here: >> >> http://docs.scipy.org/doc/scipy/reference/tutorial/stats.html#shifting-and-scaling >> >> The paragraph below this explains how to use shape parameters. You can >> tune these parameters such that the mean is a/\sigma and the variance >> is also what you need. the source of invgaus has def _stats(self, mu): return mu, mu**3.0, 3*sqrt(mu), 15*mu the first two are mean and variance, loc and scale are added generically. I tried a bit to see how to map this to the mean variance in the question, but I wasn't successful (in the time I had for this). Josef >> >> Hope this helps >> >> Nicky >> >> On 28 June 2012 15:33, servant mathieu wrote: >>> Dear scipy users, >>> >>> The time for a diffusion process to reach a single evidence threshold?a?is >>> often modeled as an inverse Gaussian distribution with mean (a/?) and >>> variance (a*?2/?3 ), where??? represents the mean drift rate and???2? the >>> variance of the accumlulation process. How could I reparametrise the >>> scipy.stats.invgauss??function to manipulate those parameters? >>> >>> Cheers, >>> Mathieu >>> >>> >>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From surfcast23 at gmail.com Thu Jun 28 15:52:56 2012 From: surfcast23 at gmail.com (surfcast23) Date: Thu, 28 Jun 2012 12:52:56 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] ImportError: DLL load failed: The specified module could not be found.After update Message-ID: <34088172.post@talk.nabble.com> Hi I just updated to Numpy 1.6.2. When I tried to run a script that needs to import Numpy, Scipy, and Matplotlib I get the following error. Traceback (most recent call last): File "C:\Documents and Settings\Khary\My Documents\PHYSICS\Wave-eqn.py", line 4, in from scipy.linalg import toeplitz File "C:\Python32\lib\site-packages\scipy\linalg\__init__.py", line 115, in from .misc import * File "C:\Python32\lib\site-packages\scipy\linalg\misc.py", line 3, in from . import fblas ImportError: DLL load failed: The specified module could not be found. I tried reinstalling and nothing. Does anyone know of a fix? 
Thank you -- View this message in context: http://old.nabble.com/ImportError%3A-DLL-load-failed%3A-The-specified-module-could-not-be-found.After-update-tp34088172p34088172.html Sent from the Scipy-User mailing list archive at Nabble.com. From ralf.gommers at googlemail.com Thu Jun 28 16:19:24 2012 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Thu, 28 Jun 2012 22:19:24 +0200 Subject: [SciPy-User] [SciPy-user] ImportError: DLL load failed: The specified module could not be found.After update In-Reply-To: <34088172.post@talk.nabble.com> References: <34088172.post@talk.nabble.com> Message-ID: On Thu, Jun 28, 2012 at 9:52 PM, surfcast23 wrote: > > Hi I just updated to Numpy 1.6.2. When I tried to run a script that needs > to > import Numpy, Scipy, and Matplotlib I get the following error. > > Traceback (most recent call last): > File "C:\Documents and Settings\Khary\My Documents\PHYSICS\Wave-eqn.py", > line 4, in > from scipy.linalg import toeplitz > File "C:\Python32\lib\site-packages\scipy\linalg\__init__.py", line 115, > in > from .misc import * > File "C:\Python32\lib\site-packages\scipy\linalg\misc.py", line 3, in > > from . import fblas > ImportError: DLL load failed: The specified module could not be found. > > I tried reinstalling and nothing. Does anyone know of a fix? > How did you install numpy? If with a binary, please say where you got it. And does "C:\Python32\lib\site-packages\scipy\linalg\fblas.pyd" exist? Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From surfcast23 at gmail.com Thu Jun 28 16:28:13 2012 From: surfcast23 at gmail.com (surfcast23) Date: Thu, 28 Jun 2012 13:28:13 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] ImportError: DLL load failed: The specified module could not be found.After update In-Reply-To: References: <34088172.post@talk.nabble.com> Message-ID: <34088331.post@talk.nabble.com> Hi Ralf, I installed via binary that I got from the sourceforge site http://sourceforge.net/projects/numpy/files/NumPy/ Yes "C:\Python32\lib\site-packages\scipy\linalg\fblas.pyd" does exist Ralf Gommers-2 wrote: > > On Thu, Jun 28, 2012 at 9:52 PM, surfcast23 wrote: > >> >> Hi I just updated to Numpy 1.6.2. When I tried to run a script that >> needs >> to >> import Numpy, Scipy, and Matplotlib I get the following error. >> >> Traceback (most recent call last): >> File "C:\Documents and Settings\Khary\My Documents\PHYSICS\Wave-eqn.py", >> line 4, in >> from scipy.linalg import toeplitz >> File "C:\Python32\lib\site-packages\scipy\linalg\__init__.py", line 115, >> in >> from .misc import * >> File "C:\Python32\lib\site-packages\scipy\linalg\misc.py", line 3, in >> >> from . import fblas >> ImportError: DLL load failed: The specified module could not be found. >> >> I tried reinstalling and nothing. Does anyone know of a fix? >> > > How did you install numpy? If with a binary, please say where you got it. > And does "C:\Python32\lib\site-packages\scipy\linalg\fblas.pyd" exist? > > Ralf > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- View this message in context: http://old.nabble.com/ImportError%3A-DLL-load-failed%3A-The-specified-module-could-not-be-found.After-update-tp34088172p34088331.html Sent from the Scipy-User mailing list archive at Nabble.com. 
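[Editorial sketch for the inverse Gaussian thread above] A minimal example of one way to get the requested parametrisation out of scipy.stats.invgauss. It assumes the (partly garbled) question means the usual Wald first-passage moments, mean a/v and variance a*s2/v**3, for threshold a, drift rate v and accumulation variance s2; the numbers below are made up. As Josef quotes from _stats, the unscaled invgauss(mu) has mean mu and variance mu**3, and the generic scale multiplies the mean by scale and the variance by scale**2, which fixes mu and scale; loc is left at 0, since shifting the support is usually not wanted for a first-passage time.

#~~~ code
from scipy.stats import invgauss

# Made-up numbers for threshold a, drift rate v and accumulation variance s2.
a, v, s2 = 1.5, 0.8, 0.6
target_mean = a / v              # a/v       = 1.875
target_var = a * s2 / v ** 3     # a*s2/v**3 = 1.7578125

# mean = scale*mu and variance = scale**2 * mu**3, so matching both moments gives:
mu = s2 / (a * v)
scale = a ** 2 / s2

rv = invgauss(mu, scale=scale)
print(rv.mean(), rv.var())       # 1.875 1.7578125, matching the targets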
From ralf.gommers at googlemail.com Thu Jun 28 16:46:57 2012 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Thu, 28 Jun 2012 22:46:57 +0200 Subject: [SciPy-User] [SciPy-user] ImportError: DLL load failed: The specified module could not be found.After update In-Reply-To: <34088331.post@talk.nabble.com> References: <34088172.post@talk.nabble.com> <34088331.post@talk.nabble.com> Message-ID: On Thu, Jun 28, 2012 at 10:28 PM, surfcast23 wrote: > > Hi Ralf, > > I installed via binary that I got from the sourceforge site > > http://sourceforge.net/projects/numpy/files/NumPy/ > > Yes "C:\Python32\lib\site-packages\scipy\linalg\fblas.pyd" does exist > > What version of Windows do you have? How did you install Python itself, and is it 32-bit or 64-bit? Ralf > > Ralf Gommers-2 wrote: > > > > On Thu, Jun 28, 2012 at 9:52 PM, surfcast23 > wrote: > > > >> > >> Hi I just updated to Numpy 1.6.2. When I tried to run a script that > >> needs > >> to > >> import Numpy, Scipy, and Matplotlib I get the following error. > >> > >> Traceback (most recent call last): > >> File "C:\Documents and Settings\Khary\My > Documents\PHYSICS\Wave-eqn.py", > >> line 4, in > >> from scipy.linalg import toeplitz > >> File "C:\Python32\lib\site-packages\scipy\linalg\__init__.py", line > 115, > >> in > >> from .misc import * > >> File "C:\Python32\lib\site-packages\scipy\linalg\misc.py", line 3, in > >> > >> from . import fblas > >> ImportError: DLL load failed: The specified module could not be found. > >> > >> I tried reinstalling and nothing. Does anyone know of a fix? > >> > > > > How did you install numpy? If with a binary, please say where you got it. > > And does "C:\Python32\lib\site-packages\scipy\linalg\fblas.pyd" exist? > > > > Ralf > > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > > -- > View this message in context: > http://old.nabble.com/ImportError%3A-DLL-load-failed%3A-The-specified-module-could-not-be-found.After-update-tp34088172p34088331.html > Sent from the Scipy-User mailing list archive at Nabble.com. > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From surfcast23 at gmail.com Thu Jun 28 17:00:00 2012 From: surfcast23 at gmail.com (surfcast23) Date: Thu, 28 Jun 2012 14:00:00 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] ImportError: DLL load failed: The specified module could not be found.After update In-Reply-To: References: <34088172.post@talk.nabble.com> <34088331.post@talk.nabble.com> Message-ID: <34088483.post@talk.nabble.com> Hi Ralf, Sorry I should have included that information in my original message. Windows xp 32-bit. Python 3.2.3, installed via binary downloaded from the python site http://www.python.org/download/ Khary Ralf Gommers-2 wrote: > > On Thu, Jun 28, 2012 at 10:28 PM, surfcast23 wrote: > >> >> Hi Ralf, >> >> I installed via binary that I got from the sourceforge site >> >> http://sourceforge.net/projects/numpy/files/NumPy/ >> >> Yes "C:\Python32\lib\site-packages\scipy\linalg\fblas.pyd" does exist >> >> > What version of Windows do you have? How did you install Python itself, > and > is it 32-bit or 64-bit? > > Ralf > > >> >> Ralf Gommers-2 wrote: >> > >> > On Thu, Jun 28, 2012 at 9:52 PM, surfcast23 >> wrote: >> > >> >> >> >> Hi I just updated to Numpy 1.6.2. 
When I tried to run a script that >> >> needs >> >> to >> >> import Numpy, Scipy, and Matplotlib I get the following error. >> >> >> >> Traceback (most recent call last): >> >> File "C:\Documents and Settings\Khary\My >> Documents\PHYSICS\Wave-eqn.py", >> >> line 4, in >> >> from scipy.linalg import toeplitz >> >> File "C:\Python32\lib\site-packages\scipy\linalg\__init__.py", line >> 115, >> >> in >> >> from .misc import * >> >> File "C:\Python32\lib\site-packages\scipy\linalg\misc.py", line 3, in >> >> >> >> from . import fblas >> >> ImportError: DLL load failed: The specified module could not be found. >> >> >> >> I tried reinstalling and nothing. Does anyone know of a fix? >> >> >> > >> > How did you install numpy? If with a binary, please say where you got >> it. >> > And does "C:\Python32\lib\site-packages\scipy\linalg\fblas.pyd" exist? >> > >> > Ralf >> > >> > _______________________________________________ >> > SciPy-User mailing list >> > SciPy-User at scipy.org >> > http://mail.scipy.org/mailman/listinfo/scipy-user >> > >> > >> >> -- >> View this message in context: >> http://old.nabble.com/ImportError%3A-DLL-load-failed%3A-The-specified-module-could-not-be-found.After-update-tp34088172p34088331.html >> Sent from the Scipy-User mailing list archive at Nabble.com. >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- View this message in context: http://old.nabble.com/ImportError%3A-DLL-load-failed%3A-The-specified-module-could-not-be-found.After-update-tp34088172p34088483.html Sent from the Scipy-User mailing list archive at Nabble.com. From ralf.gommers at googlemail.com Thu Jun 28 17:18:46 2012 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Thu, 28 Jun 2012 23:18:46 +0200 Subject: [SciPy-User] [SciPy-user] ImportError: DLL load failed: The specified module could not be found.After update In-Reply-To: <34088483.post@talk.nabble.com> References: <34088172.post@talk.nabble.com> <34088331.post@talk.nabble.com> <34088483.post@talk.nabble.com> Message-ID: On Thu, Jun 28, 2012 at 11:00 PM, surfcast23 wrote: > > Hi Ralf, > > Sorry I should have included that information in my original message. > Windows xp 32-bit. Python 3.2.3, installed via binary downloaded from the > python site http://www.python.org/download/ > > Ahh, I'm half asleep so keep forgetting some questions. What scipy version and where did you get it? Do the numpy tests pass (run "numpy.test('full')")? Did scipy work before you upgraded numpy, and what numpy version did you use then? Ralf Khary > > > Ralf Gommers-2 wrote: > > > > On Thu, Jun 28, 2012 at 10:28 PM, surfcast23 > wrote: > > > >> > >> Hi Ralf, > >> > >> I installed via binary that I got from the sourceforge site > >> > >> http://sourceforge.net/projects/numpy/files/NumPy/ > >> > >> Yes "C:\Python32\lib\site-packages\scipy\linalg\fblas.pyd" does exist > >> > >> > > What version of Windows do you have? How did you install Python itself, > > and > > is it 32-bit or 64-bit? > > > > Ralf > > > > > >> > >> Ralf Gommers-2 wrote: > >> > > >> > On Thu, Jun 28, 2012 at 9:52 PM, surfcast23 > >> wrote: > >> > > >> >> > >> >> Hi I just updated to Numpy 1.6.2. When I tried to run a script that > >> >> needs > >> >> to > >> >> import Numpy, Scipy, and Matplotlib I get the following error. 
> >> >> > >> >> Traceback (most recent call last): > >> >> File "C:\Documents and Settings\Khary\My > >> Documents\PHYSICS\Wave-eqn.py", > >> >> line 4, in > >> >> from scipy.linalg import toeplitz > >> >> File "C:\Python32\lib\site-packages\scipy\linalg\__init__.py", line > >> 115, > >> >> in > >> >> from .misc import * > >> >> File "C:\Python32\lib\site-packages\scipy\linalg\misc.py", line 3, > in > >> >> > >> >> from . import fblas > >> >> ImportError: DLL load failed: The specified module could not be > found. > >> >> > >> >> I tried reinstalling and nothing. Does anyone know of a fix? > >> >> > >> > > >> > How did you install numpy? If with a binary, please say where you got > >> it. > >> > And does "C:\Python32\lib\site-packages\scipy\linalg\fblas.pyd" exist? > >> > > >> > Ralf > >> > > >> > _______________________________________________ > >> > SciPy-User mailing list > >> > SciPy-User at scipy.org > >> > http://mail.scipy.org/mailman/listinfo/scipy-user > >> > > >> > > >> > >> -- > >> View this message in context: > >> > http://old.nabble.com/ImportError%3A-DLL-load-failed%3A-The-specified-module-could-not-be-found.After-update-tp34088172p34088331.html > >> Sent from the Scipy-User mailing list archive at Nabble.com. > >> > >> _______________________________________________ > >> SciPy-User mailing list > >> SciPy-User at scipy.org > >> http://mail.scipy.org/mailman/listinfo/scipy-user > >> > > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > > -- > View this message in context: > http://old.nabble.com/ImportError%3A-DLL-load-failed%3A-The-specified-module-could-not-be-found.After-update-tp34088172p34088483.html > Sent from the Scipy-User mailing list archive at Nabble.com. > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From surfcast23 at gmail.com Thu Jun 28 17:50:33 2012 From: surfcast23 at gmail.com (surfcast23) Date: Thu, 28 Jun 2012 14:50:33 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] ImportError: DLL load failed: The specified module could not be found.After update In-Reply-To: References: <34088172.post@talk.nabble.com> <34088331.post@talk.nabble.com> <34088483.post@talk.nabble.com> Message-ID: <34088684.post@talk.nabble.com> Hi Ralf, Yes Scipy worked fine before the numpt upgrade. I think I am running version 0.10.1, but I am not sure. Is there a way to check the version? I also downloaded it from the sourceforge site. The my old version of numpy I believe was 1.6.1. The numpy.test('full') results are... Python 3.2.3 (default, Apr 11 2012, 07:15:24) [MSC v.1500 32 bit (Intel)] on win 32 Type "help", "copyright", "credits" or "license" for more information. 
>>> import numpy >>> numpy.test('full') Running unit tests for numpy Traceback (most recent call last): File "", line 1, in File "C:\Python32\lib\site-packages\numpy\testing\nosetester.py", line 318, in test self._show_system_info() File "C:\Python32\lib\site-packages\numpy\testing\nosetester.py", line 187, in _show_system_info nose = import_nose() File "C:\Python32\lib\site-packages\numpy\testing\nosetester.py", line 69, in import_nose raise ImportError(msg) ImportError: Need nose >= 0.10.0 for tests - see http://somethingaboutorange.com /mrl/projects/nose >>> Thanks Khary Ralf Gommers-2 wrote: > > On Thu, Jun 28, 2012 at 11:00 PM, surfcast23 wrote: > >> >> Hi Ralf, >> >> Sorry I should have included that information in my original message. >> Windows xp 32-bit. Python 3.2.3, installed via binary downloaded from the >> python site http://www.python.org/download/ >> >> Ahh, I'm half asleep so keep forgetting some questions. > > What scipy version and where did you get it? Do the numpy tests pass (run > "numpy.test('full')")? Did scipy work before you upgraded numpy, and what > numpy version did you use then? > > Ralf > > > > Khary >> >> >> Ralf Gommers-2 wrote: >> > >> > On Thu, Jun 28, 2012 at 10:28 PM, surfcast23 >> wrote: >> > >> >> >> >> Hi Ralf, >> >> >> >> I installed via binary that I got from the sourceforge site >> >> >> >> http://sourceforge.net/projects/numpy/files/NumPy/ >> >> >> >> Yes "C:\Python32\lib\site-packages\scipy\linalg\fblas.pyd" does exist >> >> >> >> >> > What version of Windows do you have? How did you install Python itself, >> > and >> > is it 32-bit or 64-bit? >> > >> > Ralf >> > >> > >> >> >> >> Ralf Gommers-2 wrote: >> >> > >> >> > On Thu, Jun 28, 2012 at 9:52 PM, surfcast23 >> >> wrote: >> >> > >> >> >> >> >> >> Hi I just updated to Numpy 1.6.2. When I tried to run a script that >> >> >> needs >> >> >> to >> >> >> import Numpy, Scipy, and Matplotlib I get the following error. >> >> >> >> >> >> Traceback (most recent call last): >> >> >> File "C:\Documents and Settings\Khary\My >> >> Documents\PHYSICS\Wave-eqn.py", >> >> >> line 4, in >> >> >> from scipy.linalg import toeplitz >> >> >> File "C:\Python32\lib\site-packages\scipy\linalg\__init__.py", >> line >> >> 115, >> >> >> in >> >> >> from .misc import * >> >> >> File "C:\Python32\lib\site-packages\scipy\linalg\misc.py", line 3, >> in >> >> >> >> >> >> from . import fblas >> >> >> ImportError: DLL load failed: The specified module could not be >> found. >> >> >> >> >> >> I tried reinstalling and nothing. Does anyone know of a fix? >> >> >> >> >> > >> >> > How did you install numpy? If with a binary, please say where you >> got >> >> it. >> >> > And does "C:\Python32\lib\site-packages\scipy\linalg\fblas.pyd" >> exist? >> >> > >> >> > Ralf >> >> > >> >> > _______________________________________________ >> >> > SciPy-User mailing list >> >> > SciPy-User at scipy.org >> >> > http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > >> >> > >> >> >> >> -- >> >> View this message in context: >> >> >> http://old.nabble.com/ImportError%3A-DLL-load-failed%3A-The-specified-module-could-not-be-found.After-update-tp34088172p34088331.html >> >> Sent from the Scipy-User mailing list archive at Nabble.com. 
>> >> >> >> _______________________________________________ >> >> SciPy-User mailing list >> >> SciPy-User at scipy.org >> >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> >> > >> > _______________________________________________ >> > SciPy-User mailing list >> > SciPy-User at scipy.org >> > http://mail.scipy.org/mailman/listinfo/scipy-user >> > >> > >> >> -- >> View this message in context: >> http://old.nabble.com/ImportError%3A-DLL-load-failed%3A-The-specified-module-could-not-be-found.After-update-tp34088172p34088483.html >> Sent from the Scipy-User mailing list archive at Nabble.com. >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- View this message in context: http://old.nabble.com/ImportError%3A-DLL-load-failed%3A-The-specified-module-could-not-be-found.After-update-tp34088172p34088684.html Sent from the Scipy-User mailing list archive at Nabble.com. From srean.list at gmail.com Thu Jun 28 20:41:16 2012 From: srean.list at gmail.com (srean) Date: Thu, 28 Jun 2012 19:41:16 -0500 Subject: [SciPy-User] saving a sparse matrix to file In-Reply-To: References: Message-ID: If you want to share sparse matrices across different applications a good bet is the matrix market file format. Most sparse matrix processing software will accept it. There are examples here http://www.scipy.org/SciPyPackages/Sparse#head-bb8f5e0195ae126e95a6be0c92de803887e9f300 But do note that its a ascii file format so the output file can be long and can take a while to read or write. We have used it and it worked great. On some of the larger files however the writing was taking close to an hour, so we sped it up by using multiprocessing to write different parts of it. But becuse we needed only limited subset of all features provided by the matrix market format our code only handles that restricted subset and wont be generally useful. Best srean On Mon, Jun 25, 2012 at 1:34 PM, Pengkui Luo wrote: > Is there a unified API to save a sparse matrix of any type (dok, lil, csc, > ...) to file, as simple as pickle dump/load? > > e.g. Can I do something like the following? > > import numpy as np > from scipy.sparse import dok_matrix > from scipy.io import save, load > mat = dok_matrix (np.zeros ((80000, 80000)), np.int) > save (file ('foobar.mat', 'wb'), mat) ?# or mat.save ('foobar.mat') > mat2 = load (file ('foobar.mat')) > > > It seems that there was a .save method for a sparse matrix several years > ago, but was later?deprecated. Why? > http://projects.scipy.org/scipy/ticket/590 > > http://mail.scipy.org/pipermail/scipy-user/2012-January/031326.html > I came across this email response stating that for csr_matrix one can > save.data, .indices, and .indptr attributes individually to three different > files and load them all to reassemble the matrix. Is it a hack only suitable > for csr_matrix, or one that can be applied to all types of sparse matrices? > Can I save a matrix to a single file instead of three? > > http://docs.scipy.org/doc/scipy/reference/generated/scipy.io.savemat.html > I also came across the scipy.io.savemat API, but had no idea how to use > it.... What does?"mdict : Dictionary from which to save matfile variables." > mean? What is this dict for? Can someone provide an example on how to use > it? 
> > Thanks, > Pengkui > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user >
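[Editorial sketch for the sparse-matrix thread above] A small example of the two routes discussed here: Matrix Market via scipy.io.mmwrite/mmread, and scipy.io.savemat, whose mdict argument is simply a dict mapping MATLAB variable names to values. The matrix below is a tiny made-up stand-in for the 80000 x 80000 one in the question; the file names follow the question.

#~~~ code
import numpy as np
from scipy import io, sparse

# Tiny stand-in for the 80000 x 80000 dok_matrix in the question.
mat = sparse.dok_matrix((5, 4), dtype=float)
mat[0, 1] = 2.5
mat[3, 2] = -1.0

# Matrix Market: plain ASCII, readable by most sparse-matrix software.
io.mmwrite('foobar.mtx', mat)
mat2 = io.mmread('foobar.mtx')                          # returned as a COO matrix
print(np.array_equal(mat.toarray(), mat2.toarray()))    # True

# savemat: the mdict argument maps MATLAB variable names to values.
io.savemat('foobar.mat', {'mat': mat.tocsc()})
mat3 = io.loadmat('foobar.mat')['mat']                  # loaded back as CSC
print(np.array_equal(mat.toarray(), mat3.toarray()))    # True

mmread always returns a COO matrix, so convert with .tocsr() or .todok() as needed; loadmat hands sparse variables back in CSC form.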
From ralf.gommers at googlemail.com Fri Jun 29 03:26:43 2012 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Fri, 29 Jun 2012 09:26:43 +0200 Subject: [SciPy-User] [SciPy-user] ImportError: DLL load failed: The specified module could not be found.After update In-Reply-To: <34088684.post@talk.nabble.com> References: <34088172.post@talk.nabble.com> <34088331.post@talk.nabble.com> <34088483.post@talk.nabble.com> <34088684.post@talk.nabble.com> Message-ID: On Thu, Jun 28, 2012 at 11:50 PM, surfcast23 wrote: > > Hi Ralf, > > Yes Scipy worked fine before the numpt upgrade. I think I am running > version 0.10.1, but I am not sure. Is there a way to check the version? "import scipy; scipy.__version__" That combination of Python/NumPy/SciPy should be correct and no problem to upgrade. Can you check that if you open your interpreter in the scipy\linalg\ dir, you can import fblas? You can also try using http://www.dependencywalker.com/ on fblas.pyd, to find out what the problem is with that dll. I > also downloaded it from the sourceforge site. The my old version of numpy I > believe was 1.6.1. The numpy.test('full') results are... > > Python 3.2.3 (default, Apr 11 2012, 07:15:24) [MSC v.1500 32 bit (Intel)] > on > win > 32 > Type "help", "copyright", "credits" or "license" for more information.
Python 3.2.3, installed via binary downloaded from > the > >> python site http://www.python.org/download/ > >> > >> Ahh, I'm half asleep so keep forgetting some questions. > > > > What scipy version and where did you get it? Do the numpy tests pass (run > > "numpy.test('full')")? Did scipy work before you upgraded numpy, and what > > numpy version did you use then? > > > > Ralf > > > > > > > > Khary > >> > >> > >> Ralf Gommers-2 wrote: > >> > > >> > On Thu, Jun 28, 2012 at 10:28 PM, surfcast23 > >> wrote: > >> > > >> >> > >> >> Hi Ralf, > >> >> > >> >> I installed via binary that I got from the sourceforge site > >> >> > >> >> http://sourceforge.net/projects/numpy/files/NumPy/ > >> >> > >> >> Yes "C:\Python32\lib\site-packages\scipy\linalg\fblas.pyd" does exist > >> >> > >> >> > >> > What version of Windows do you have? How did you install Python > itself, > >> > and > >> > is it 32-bit or 64-bit? > >> > > >> > Ralf > >> > > >> > > >> >> > >> >> Ralf Gommers-2 wrote: > >> >> > > >> >> > On Thu, Jun 28, 2012 at 9:52 PM, surfcast23 > >> >> wrote: > >> >> > > >> >> >> > >> >> >> Hi I just updated to Numpy 1.6.2. When I tried to run a script > that > >> >> >> needs > >> >> >> to > >> >> >> import Numpy, Scipy, and Matplotlib I get the following error. > >> >> >> > >> >> >> Traceback (most recent call last): > >> >> >> File "C:\Documents and Settings\Khary\My > >> >> Documents\PHYSICS\Wave-eqn.py", > >> >> >> line 4, in > >> >> >> from scipy.linalg import toeplitz > >> >> >> File "C:\Python32\lib\site-packages\scipy\linalg\__init__.py", > >> line > >> >> 115, > >> >> >> in > >> >> >> from .misc import * > >> >> >> File "C:\Python32\lib\site-packages\scipy\linalg\misc.py", line > 3, > >> in > >> >> >> > >> >> >> from . import fblas > >> >> >> ImportError: DLL load failed: The specified module could not be > >> found. > >> >> >> > >> >> >> I tried reinstalling and nothing. Does anyone know of a fix? > >> >> >> > >> >> > > >> >> > How did you install numpy? If with a binary, please say where you > >> got > >> >> it. > >> >> > And does "C:\Python32\lib\site-packages\scipy\linalg\fblas.pyd" > >> exist? > >> >> > > >> >> > Ralf > >> >> > > >> >> > _______________________________________________ > >> >> > SciPy-User mailing list > >> >> > SciPy-User at scipy.org > >> >> > http://mail.scipy.org/mailman/listinfo/scipy-user > >> >> > > >> >> > > >> >> > >> >> -- > >> >> View this message in context: > >> >> > >> > http://old.nabble.com/ImportError%3A-DLL-load-failed%3A-The-specified-module-could-not-be-found.After-update-tp34088172p34088331.html > >> >> Sent from the Scipy-User mailing list archive at Nabble.com. > >> >> > >> >> _______________________________________________ > >> >> SciPy-User mailing list > >> >> SciPy-User at scipy.org > >> >> http://mail.scipy.org/mailman/listinfo/scipy-user > >> >> > >> > > >> > _______________________________________________ > >> > SciPy-User mailing list > >> > SciPy-User at scipy.org > >> > http://mail.scipy.org/mailman/listinfo/scipy-user > >> > > >> > > >> > >> -- > >> View this message in context: > >> > http://old.nabble.com/ImportError%3A-DLL-load-failed%3A-The-specified-module-could-not-be-found.After-update-tp34088172p34088483.html > >> Sent from the Scipy-User mailing list archive at Nabble.com. 
> >> > >> _______________________________________________ > >> SciPy-User mailing list > >> SciPy-User at scipy.org > >> http://mail.scipy.org/mailman/listinfo/scipy-user > >> > > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > > -- > View this message in context: > http://old.nabble.com/ImportError%3A-DLL-load-failed%3A-The-specified-module-could-not-be-found.After-update-tp34088172p34088684.html > Sent from the Scipy-User mailing list archive at Nabble.com. > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From surfcast23 at gmail.com Fri Jun 29 07:42:19 2012 From: surfcast23 at gmail.com (surfcast23) Date: Fri, 29 Jun 2012 04:42:19 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] ImportError: DLL load failed: The specified module could not be found.After update In-Reply-To: References: <34088172.post@talk.nabble.com> <34088331.post@talk.nabble.com> <34088483.post@talk.nabble.com> <34088684.post@talk.nabble.com> Message-ID: <34090884.post@talk.nabble.com> Hi Ralf, I think I may have found the problem. After looking at a few posting with similar problems It seems as if there is a bug with PATH expanding in Environmental Variables on update. Problem is I don't know anything about environmental variables or setting PATH's http://bugs.activestate.com/show_bug.cgi?id=89474 http://community.activestate.com/forum/newbie-question-about-importing-modules Ralf Gommers-2 wrote: > > On Thu, Jun 28, 2012 at 11:50 PM, surfcast23 wrote: > >> >> Hi Ralf, >> >> Yes Scipy worked fine before the numpt upgrade. I think I am running >> version 0.10.1, but I am not sure. Is there a way to check the version? > > > "import scipy; scipy.__version__" > > That combination of Python/NumPy/SciPy should be correct and no problem to > upgrade. Can you check that if you open your interpreter in the > scipy\linalg\ dir, you can import fblas? You can also try using > http://www.dependencywalker.com/ on fblas.pyd, to find out what the > problem > is with that dll. > > > I >> also downloaded it from the sourceforge site. The my old version of numpy >> I >> believe was 1.6.1. The numpy.test('full') results are... >> >> Python 3.2.3 (default, Apr 11 2012, 07:15:24) [MSC v.1500 32 bit (Intel)] >> on >> win >> 32 >> Type "help", "copyright", "credits" or "license" for more information. 
>> >>> import numpy >> >>> numpy.test('full') >> Running unit tests for numpy >> Traceback (most recent call last): >> File "", line 1, in >> File "C:\Python32\lib\site-packages\numpy\testing\nosetester.py", line >> 318, in >> test >> self._show_system_info() >> File "C:\Python32\lib\site-packages\numpy\testing\nosetester.py", line >> 187, in >> _show_system_info >> nose = import_nose() >> File "C:\Python32\lib\site-packages\numpy\testing\nosetester.py", line >> 69, >> in >> import_nose >> raise ImportError(msg) >> ImportError: Need nose >= 0.10.0 for tests - see >> http://somethingaboutorange.com >> /mrl/projects/nose >> >>> >> > > This just means you haven't got nose installed, which is the package > needed > to run the tests: http://nose.readthedocs.org/en/latest/ > > Ralf > > > > >> Thanks >> Khary >> >> >> >> Ralf Gommers-2 wrote: >> > >> > On Thu, Jun 28, 2012 at 11:00 PM, surfcast23 >> wrote: >> > >> >> >> >> Hi Ralf, >> >> >> >> Sorry I should have included that information in my original message. >> >> Windows xp 32-bit. Python 3.2.3, installed via binary downloaded from >> the >> >> python site http://www.python.org/download/ >> >> >> >> Ahh, I'm half asleep so keep forgetting some questions. >> > >> > What scipy version and where did you get it? Do the numpy tests pass >> (run >> > "numpy.test('full')")? Did scipy work before you upgraded numpy, and >> what >> > numpy version did you use then? >> > >> > Ralf >> > >> > >> > >> > Khary >> >> >> >> >> >> Ralf Gommers-2 wrote: >> >> > >> >> > On Thu, Jun 28, 2012 at 10:28 PM, surfcast23 >> >> wrote: >> >> > >> >> >> >> >> >> Hi Ralf, >> >> >> >> >> >> I installed via binary that I got from the sourceforge site >> >> >> >> >> >> http://sourceforge.net/projects/numpy/files/NumPy/ >> >> >> >> >> >> Yes "C:\Python32\lib\site-packages\scipy\linalg\fblas.pyd" does >> exist >> >> >> >> >> >> >> >> > What version of Windows do you have? How did you install Python >> itself, >> >> > and >> >> > is it 32-bit or 64-bit? >> >> > >> >> > Ralf >> >> > >> >> > >> >> >> >> >> >> Ralf Gommers-2 wrote: >> >> >> > >> >> >> > On Thu, Jun 28, 2012 at 9:52 PM, surfcast23 >> >> >> >> wrote: >> >> >> > >> >> >> >> >> >> >> >> Hi I just updated to Numpy 1.6.2. When I tried to run a script >> that >> >> >> >> needs >> >> >> >> to >> >> >> >> import Numpy, Scipy, and Matplotlib I get the following error. >> >> >> >> >> >> >> >> Traceback (most recent call last): >> >> >> >> File "C:\Documents and Settings\Khary\My >> >> >> Documents\PHYSICS\Wave-eqn.py", >> >> >> >> line 4, in >> >> >> >> from scipy.linalg import toeplitz >> >> >> >> File "C:\Python32\lib\site-packages\scipy\linalg\__init__.py", >> >> line >> >> >> 115, >> >> >> >> in >> >> >> >> from .misc import * >> >> >> >> File "C:\Python32\lib\site-packages\scipy\linalg\misc.py", line >> 3, >> >> in >> >> >> >> >> >> >> >> from . import fblas >> >> >> >> ImportError: DLL load failed: The specified module could not be >> >> found. >> >> >> >> >> >> >> >> I tried reinstalling and nothing. Does anyone know of a fix? >> >> >> >> >> >> >> > >> >> >> > How did you install numpy? If with a binary, please say where you >> >> got >> >> >> it. >> >> >> > And does "C:\Python32\lib\site-packages\scipy\linalg\fblas.pyd" >> >> exist? 
>> >> >> > >> >> >> > Ralf >> >> >> > >> >> >> > _______________________________________________ >> >> >> > SciPy-User mailing list >> >> >> > SciPy-User at scipy.org >> >> >> > http://mail.scipy.org/mailman/listinfo/scipy-user >> >> >> > >> >> >> > >> >> >> >> >> >> -- >> >> >> View this message in context: >> >> >> >> >> >> http://old.nabble.com/ImportError%3A-DLL-load-failed%3A-The-specified-module-could-not-be-found.After-update-tp34088172p34088331.html >> >> >> Sent from the Scipy-User mailing list archive at Nabble.com. >> >> >> >> >> >> _______________________________________________ >> >> >> SciPy-User mailing list >> >> >> SciPy-User at scipy.org >> >> >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> >> >> >> > >> >> > _______________________________________________ >> >> > SciPy-User mailing list >> >> > SciPy-User at scipy.org >> >> > http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > >> >> > >> >> >> >> -- >> >> View this message in context: >> >> >> http://old.nabble.com/ImportError%3A-DLL-load-failed%3A-The-specified-module-could-not-be-found.After-update-tp34088172p34088483.html >> >> Sent from the Scipy-User mailing list archive at Nabble.com. >> >> >> >> _______________________________________________ >> >> SciPy-User mailing list >> >> SciPy-User at scipy.org >> >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> >> > >> > _______________________________________________ >> > SciPy-User mailing list >> > SciPy-User at scipy.org >> > http://mail.scipy.org/mailman/listinfo/scipy-user >> > >> > >> >> -- >> View this message in context: >> http://old.nabble.com/ImportError%3A-DLL-load-failed%3A-The-specified-module-could-not-be-found.After-update-tp34088172p34088684.html >> Sent from the Scipy-User mailing list archive at Nabble.com. >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- View this message in context: http://old.nabble.com/ImportError%3A-DLL-load-failed%3A-The-specified-module-could-not-be-found.After-update-tp34088172p34090884.html Sent from the Scipy-User mailing list archive at Nabble.com. From surfcast23 at gmail.com Fri Jun 29 08:36:46 2012 From: surfcast23 at gmail.com (surfcast23) Date: Fri, 29 Jun 2012 05:36:46 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] ImportError: DLL load failed: The specified module could not be found.After update In-Reply-To: References: <34088172.post@talk.nabble.com> <34088331.post@talk.nabble.com> <34088483.post@talk.nabble.com> <34088684.post@talk.nabble.com> Message-ID: <34091092.post@talk.nabble.com> I tried to import fblas from with in scip\linalg and got the same DLL error Ralf Gommers-2 wrote: > > On Thu, Jun 28, 2012 at 11:50 PM, surfcast23 wrote: > >> >> Hi Ralf, >> >> Yes Scipy worked fine before the numpt upgrade. I think I am running >> version 0.10.1, but I am not sure. Is there a way to check the version? > > > "import scipy; scipy.__version__" > > That combination of Python/NumPy/SciPy should be correct and no problem to > upgrade. Can you check that if you open your interpreter in the > fblas, you can import fblas? You can also try using > http://www.dependencywalker.com/ on fblas.pyd, to find out what the > problem > is with that dll. > > > I >> also downloaded it from the sourceforge site. 
The my old version of numpy >> I >> believe was 1.6.1. The numpy.test('full') results are... >> >> Python 3.2.3 (default, Apr 11 2012, 07:15:24) [MSC v.1500 32 bit (Intel)] >> on >> win >> 32 >> Type "help", "copyright", "credits" or "license" for more information. >> >>> import numpy >> >>> numpy.test('full') >> Running unit tests for numpy >> Traceback (most recent call last): >> File "", line 1, in >> File "C:\Python32\lib\site-packages\numpy\testing\nosetester.py", line >> 318, in >> test >> self._show_system_info() >> File "C:\Python32\lib\site-packages\numpy\testing\nosetester.py", line >> 187, in >> _show_system_info >> nose = import_nose() >> File "C:\Python32\lib\site-packages\numpy\testing\nosetester.py", line >> 69, >> in >> import_nose >> raise ImportError(msg) >> ImportError: Need nose >= 0.10.0 for tests - see >> http://somethingaboutorange.com >> /mrl/projects/nose >> >>> >> > > This just means you haven't got nose installed, which is the package > needed > to run the tests: http://nose.readthedocs.org/en/latest/ > > Ralf > > > > >> Thanks >> Khary >> >> >> >> Ralf Gommers-2 wrote: >> > >> > On Thu, Jun 28, 2012 at 11:00 PM, surfcast23 >> wrote: >> > >> >> >> >> Hi Ralf, >> >> >> >> Sorry I should have included that information in my original message. >> >> Windows xp 32-bit. Python 3.2.3, installed via binary downloaded from >> the >> >> python site http://www.python.org/download/ >> >> >> >> Ahh, I'm half asleep so keep forgetting some questions. >> > >> > What scipy version and where did you get it? Do the numpy tests pass >> (run >> > "numpy.test('full')")? Did scipy work before you upgraded numpy, and >> what >> > numpy version did you use then? >> > >> > Ralf >> > >> > >> > >> > Khary >> >> >> >> >> >> Ralf Gommers-2 wrote: >> >> > >> >> > On Thu, Jun 28, 2012 at 10:28 PM, surfcast23 >> >> wrote: >> >> > >> >> >> >> >> >> Hi Ralf, >> >> >> >> >> >> I installed via binary that I got from the sourceforge site >> >> >> >> >> >> http://sourceforge.net/projects/numpy/files/NumPy/ >> >> >> >> >> >> Yes "C:\Python32\lib\site-packages\scipy\linalg\fblas.pyd" does >> exist >> >> >> >> >> >> >> >> > What version of Windows do you have? How did you install Python >> itself, >> >> > and >> >> > is it 32-bit or 64-bit? >> >> > >> >> > Ralf >> >> > >> >> > >> >> >> >> >> >> Ralf Gommers-2 wrote: >> >> >> > >> >> >> > On Thu, Jun 28, 2012 at 9:52 PM, surfcast23 >> >> >> >> wrote: >> >> >> > >> >> >> >> >> >> >> >> Hi I just updated to Numpy 1.6.2. When I tried to run a script >> that >> >> >> >> needs >> >> >> >> to >> >> >> >> import Numpy, Scipy, and Matplotlib I get the following error. >> >> >> >> >> >> >> >> Traceback (most recent call last): >> >> >> >> File "C:\Documents and Settings\Khary\My >> >> >> Documents\PHYSICS\Wave-eqn.py", >> >> >> >> line 4, in >> >> >> >> from scipy.linalg import toeplitz >> >> >> >> File "C:\Python32\lib\site-packages\scipy\linalg\__init__.py", >> >> line >> >> >> 115, >> >> >> >> in >> >> >> >> from .misc import * >> >> >> >> File "C:\Python32\lib\site-packages\scipy\linalg\misc.py", line >> 3, >> >> in >> >> >> >> >> >> >> >> from . import fblas >> >> >> >> ImportError: DLL load failed: The specified module could not be >> >> found. >> >> >> >> >> >> >> >> I tried reinstalling and nothing. Does anyone know of a fix? >> >> >> >> >> >> >> > >> >> >> > How did you install numpy? If with a binary, please say where you >> >> got >> >> >> it. >> >> >> > And does "C:\Python32\lib\site-packages\scipy\linalg\fblas.pyd" >> >> exist? 
>> >> >> > >> >> >> > Ralf >> >> >> > >> >> >> > _______________________________________________ >> >> >> > SciPy-User mailing list >> >> >> > SciPy-User at scipy.org >> >> >> > http://mail.scipy.org/mailman/listinfo/scipy-user >> >> >> > >> >> >> > >> >> >> >> >> >> -- >> >> >> View this message in context: >> >> >> >> >> >> http://old.nabble.com/ImportError%3A-DLL-load-failed%3A-The-specified-module-could-not-be-found.After-update-tp34088172p34088331.html >> >> >> Sent from the Scipy-User mailing list archive at Nabble.com. >> >> >> >> >> >> _______________________________________________ >> >> >> SciPy-User mailing list >> >> >> SciPy-User at scipy.org >> >> >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> >> >> >> > >> >> > _______________________________________________ >> >> > SciPy-User mailing list >> >> > SciPy-User at scipy.org >> >> > http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > >> >> > >> >> >> >> -- >> >> View this message in context: >> >> >> http://old.nabble.com/ImportError%3A-DLL-load-failed%3A-The-specified-module-could-not-be-found.After-update-tp34088172p34088483.html >> >> Sent from the Scipy-User mailing list archive at Nabble.com. >> >> >> >> _______________________________________________ >> >> SciPy-User mailing list >> >> SciPy-User at scipy.org >> >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> >> > >> > _______________________________________________ >> > SciPy-User mailing list >> > SciPy-User at scipy.org >> > http://mail.scipy.org/mailman/listinfo/scipy-user >> > >> > >> >> -- >> View this message in context: >> http://old.nabble.com/ImportError%3A-DLL-load-failed%3A-The-specified-module-could-not-be-found.After-update-tp34088172p34088684.html >> Sent from the Scipy-User mailing list archive at Nabble.com. >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- View this message in context: http://old.nabble.com/ImportError%3A-DLL-load-failed%3A-The-specified-module-could-not-be-found.After-update-tp34088172p34091092.html Sent from the Scipy-User mailing list archive at Nabble.com. From surfcast23 at gmail.com Fri Jun 29 08:47:09 2012 From: surfcast23 at gmail.com (surfcast23) Date: Fri, 29 Jun 2012 05:47:09 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] ImportError: DLL load failed: The specified module could not be found.After update In-Reply-To: References: <34088172.post@talk.nabble.com> <34088331.post@talk.nabble.com> <34088483.post@talk.nabble.com> <34088684.post@talk.nabble.com> Message-ID: <34091143.post@talk.nabble.com> I ran dependency walker on fblas and got these errors, Error: At least one required implicit or forwarded dependency was not found. Warning: At least one delay-load dependency module was not found. Warning: At least one module has an unresolved import due to a missing export function in a delay-load dependent module. Thanks Khary Ralf Gommers-2 wrote: > > On Thu, Jun 28, 2012 at 11:50 PM, surfcast23 wrote: > >> >> Hi Ralf, >> >> Yes Scipy worked fine before the numpt upgrade. I think I am running >> version 0.10.1, but I am not sure. Is there a way to check the version? > > > "import scipy; scipy.__version__" > > That combination of Python/NumPy/SciPy should be correct and no problem to > upgrade. 
Can you check that if you open your interpreter in the > scipy\linalg\ dir, you can import fblas? You can also try using > http://www.dependencywalker.com/ on fblas.pyd, to find out what the > problem > is with that dll. > > > I >> also downloaded it from the sourceforge site. The my old version of numpy >> I >> believe was 1.6.1. The numpy.test('full') results are... >> >> Python 3.2.3 (default, Apr 11 2012, 07:15:24) [MSC v.1500 32 bit (Intel)] >> on >> win >> 32 >> Type "help", "copyright", "credits" or "license" for more information. >> >>> import numpy >> >>> numpy.test('full') >> Running unit tests for numpy >> Traceback (most recent call last): >> File "", line 1, in >> File "C:\Python32\lib\site-packages\numpy\testing\nosetester.py", line >> 318, in >> test >> self._show_system_info() >> File "C:\Python32\lib\site-packages\numpy\testing\nosetester.py", line >> 187, in >> _show_system_info >> nose = import_nose() >> File "C:\Python32\lib\site-packages\numpy\testing\nosetester.py", line >> 69, >> in >> import_nose >> raise ImportError(msg) >> ImportError: Need nose >= 0.10.0 for tests - see >> http://somethingaboutorange.com >> /mrl/projects/nose >> >>> >> > > This just means you haven't got nose installed, which is the package > needed > to run the tests: http://nose.readthedocs.org/en/latest/ > > Ralf > > > > >> Thanks >> Khary >> >> >> >> Ralf Gommers-2 wrote: >> > >> > On Thu, Jun 28, 2012 at 11:00 PM, surfcast23 >> wrote: >> > >> >> >> >> Hi Ralf, >> >> >> >> Sorry I should have included that information in my original message. >> >> Windows xp 32-bit. Python 3.2.3, installed via binary downloaded from >> the >> >> python site http://www.python.org/download/ >> >> >> >> Ahh, I'm half asleep so keep forgetting some questions. >> > >> > What scipy version and where did you get it? Do the numpy tests pass >> (run >> > "numpy.test('full')")? Did scipy work before you upgraded numpy, and >> what >> > numpy version did you use then? >> > >> > Ralf >> > >> > >> > >> > Khary >> >> >> >> >> >> Ralf Gommers-2 wrote: >> >> > >> >> > On Thu, Jun 28, 2012 at 10:28 PM, surfcast23 >> >> wrote: >> >> > >> >> >> >> >> >> Hi Ralf, >> >> >> >> >> >> I installed via binary that I got from the sourceforge site >> >> >> >> >> >> http://sourceforge.net/projects/numpy/files/NumPy/ >> >> >> >> >> >> Yes "C:\Python32\lib\site-packages\scipy\linalg\fblas.pyd" does >> exist >> >> >> >> >> >> >> >> > What version of Windows do you have? How did you install Python >> itself, >> >> > and >> >> > is it 32-bit or 64-bit? >> >> > >> >> > Ralf >> >> > >> >> > >> >> >> >> >> >> Ralf Gommers-2 wrote: >> >> >> > >> >> >> > On Thu, Jun 28, 2012 at 9:52 PM, surfcast23 >> >> >> >> wrote: >> >> >> > >> >> >> >> >> >> >> >> Hi I just updated to Numpy 1.6.2. When I tried to run a script >> that >> >> >> >> needs >> >> >> >> to >> >> >> >> import Numpy, Scipy, and Matplotlib I get the following error. >> >> >> >> >> >> >> >> Traceback (most recent call last): >> >> >> >> File "C:\Documents and Settings\Khary\My >> >> >> Documents\PHYSICS\Wave-eqn.py", >> >> >> >> line 4, in >> >> >> >> from scipy.linalg import toeplitz >> >> >> >> File "C:\Python32\lib\site-packages\scipy\linalg\__init__.py", >> >> line >> >> >> 115, >> >> >> >> in >> >> >> >> from .misc import * >> >> >> >> File "C:\Python32\lib\site-packages\scipy\linalg\misc.py", line >> 3, >> >> in >> >> >> >> >> >> >> >> from . import fblas >> >> >> >> ImportError: DLL load failed: The specified module could not be >> >> found. 
>> >> >> >> >> >> >> >> I tried reinstalling and nothing. Does anyone know of a fix? >> >> >> >> >> >> >> > >> >> >> > How did you install numpy? If with a binary, please say where you >> >> got >> >> >> it. >> >> >> > And does "C:\Python32\lib\site-packages\scipy\linalg\fblas.pyd" >> >> exist? >> >> >> > >> >> >> > Ralf >> >> >> > >> >> >> > _______________________________________________ >> >> >> > SciPy-User mailing list >> >> >> > SciPy-User at scipy.org >> >> >> > http://mail.scipy.org/mailman/listinfo/scipy-user >> >> >> > >> >> >> > >> >> >> >> >> >> -- >> >> >> View this message in context: >> >> >> >> >> >> http://old.nabble.com/ImportError%3A-DLL-load-failed%3A-The-specified-module-could-not-be-found.After-update-tp34088172p34088331.html >> >> >> Sent from the Scipy-User mailing list archive at Nabble.com. >> >> >> >> >> >> _______________________________________________ >> >> >> SciPy-User mailing list >> >> >> SciPy-User at scipy.org >> >> >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> >> >> >> > >> >> > _______________________________________________ >> >> > SciPy-User mailing list >> >> > SciPy-User at scipy.org >> >> > http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > >> >> > >> >> >> >> -- >> >> View this message in context: >> >> >> http://old.nabble.com/ImportError%3A-DLL-load-failed%3A-The-specified-module-could-not-be-found.After-update-tp34088172p34088483.html >> >> Sent from the Scipy-User mailing list archive at Nabble.com. >> >> >> >> _______________________________________________ >> >> SciPy-User mailing list >> >> SciPy-User at scipy.org >> >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> >> > >> > _______________________________________________ >> > SciPy-User mailing list >> > SciPy-User at scipy.org >> > http://mail.scipy.org/mailman/listinfo/scipy-user >> > >> > >> >> -- >> View this message in context: >> http://old.nabble.com/ImportError%3A-DLL-load-failed%3A-The-specified-module-could-not-be-found.After-update-tp34088172p34088684.html >> Sent from the Scipy-User mailing list archive at Nabble.com. >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- View this message in context: http://old.nabble.com/ImportError%3A-DLL-load-failed%3A-The-specified-module-could-not-be-found.After-update-tp34088172p34091143.html Sent from the Scipy-User mailing list archive at Nabble.com. From sjm.guzman at googlemail.com Fri Jun 29 11:36:01 2012 From: sjm.guzman at googlemail.com (Jose Guzman) Date: Fri, 29 Jun 2012 17:36:01 +0200 Subject: [SciPy-User] Sum of squared errors in leastsq Message-ID: <4FEDCB61.4080201@gmail.com> Dear Scipy users, I am trying to fit some data to different models with curve_fit. Does curve_fit or leastsq to return the sum of the squared errors or should I compute it manually*? Thanks a lot in advance! *) val = np.array([mymodel(i, param) for i in xdata]-np.array(ydata)) np.sum(val**2) -------------- next part -------------- An HTML attachment was scrubbed... 
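As an aside on the question above (see Skipper's reply below): neither leastsq nor curve_fit hands back the sum of squared errors directly, so it is usually recomputed from the residuals at the fitted parameters. A minimal sketch, assuming a made-up model mymodel and synthetic data rather than anything from this thread:

import numpy as np
from scipy.optimize import curve_fit, leastsq

def mymodel(x, a, b):
    # hypothetical model, used only for illustration
    return a * np.exp(-b * x)

xdata = np.linspace(0.0, 4.0, 50)
ydata = mymodel(xdata, 2.5, 1.3) + 0.05 * np.random.randn(xdata.size)

# curve_fit minimizes the sum of squared residuals internally but returns
# only the parameters and their covariance, so the SSE is recomputed here
popt, pcov = curve_fit(mymodel, xdata, ydata, p0=(1.0, 1.0))
sse_curvefit = np.sum((ydata - mymodel(xdata, *popt)) ** 2)

# leastsq is handed the residual function directly, so the SSE is just the
# squared norm of the residuals at the returned solution
residuals = lambda p: ydata - mymodel(xdata, *p)
p_fit, ier = leastsq(residuals, (1.0, 1.0))
sse_leastsq = np.sum(residuals(p_fit) ** 2)

Both numbers should agree to within the optimizer's tolerance, since curve_fit is a thin wrapper around leastsq fitting the same objective from the same starting values.
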
URL: From jsseabold at gmail.com Fri Jun 29 12:14:45 2012 From: jsseabold at gmail.com (Skipper Seabold) Date: Fri, 29 Jun 2012 12:14:45 -0400 Subject: [SciPy-User] Sum of squared errors in leastsq In-Reply-To: <4FEDCB61.4080201@gmail.com> References: <4FEDCB61.4080201@gmail.com> Message-ID: On Fri, Jun 29, 2012 at 11:36 AM, Jose Guzman wrote: > Dear Scipy users, > > I am trying to fit some data to different models with curve_fit. Does > curve_fit or leastsq to return the sum of the squared errors or should I > compute it manually*? > > Thanks a lot in advance! > I believe that curve_fit will compute the sum of squares of the residuals given a function, xdata, and ydata. Leastsq assumes your function returns the residuals. http://docs.scipy.org/scipy/docs/scipy.optimize.minpack.leastsq/#scipy-optimize-leastsq http://docs.scipy.org/scipy/docs/scipy.optimize.minpack.curve_fit/#scipy-optimize-curve-fit Skipper -------------- next part -------------- An HTML attachment was scrubbed... URL: From wesmckinn at gmail.com Fri Jun 29 12:57:29 2012 From: wesmckinn at gmail.com (Wes McKinney) Date: Fri, 29 Jun 2012 12:57:29 -0400 Subject: [SciPy-User] ANN: pandas 0.8.0 released Message-ID: hi all, (apologies for the cross-post) I'm very pleased to announce the pandas 0.8.0 release. This is a massive release introducing, among other things, a substantial overhaul of pandas's time series processing with substantially increased performance, decreased memory usage, and dozens of new features. It also incorporates portions of the inactive scikits.timeseries codebase, so scikits.timeseries users will be able to migrate. Since pandas now utilizes NumPy's datetime64 dtype, users will need to use NumPy 1.6 or higher from now on. New time series features include: - High performance resampling: upsampling and downsampling - Nanosecond-level Timestamp support - Frequency inference capabilities - Simplified frequency specification - Robust, high performance time zone localization and conversion - Enhanced date parsing - New Period and PeriodIndex objects derived partially from legacy scikits.timeseries Date and DateArray In addition to enhanced time series capabilities, pandas has also acquired many new plotting functions and features, which will continue as Vytautas Jancauskas, our GSoC 2012 student, continues implementing new features. There are too many other improvements and performance enhancements to mention, see the What's New page and full release notes for more: What's new: http://pandas.pydata.org/pandas-docs/stable/whatsnew.html Many thanks to all those who contributed to making this milestone release happen: $ git log v0.7.3..v0.8.0 --pretty=format:%aN | sort | uniq -c | sort -rn 499 Wes McKinney 257 Chang She 162 Adam Klein 17 Skipper Seabold 13 Vytautas Jancauskas 13 Kieran O'Mahony 10 Wouter Overmeire 8 Thomas Kluyver 8 Luca Beltrame 7 Takafumi Arakaki 5 Mark Wiebe 5 Marc Abramowitz 3 Yaroslav Halchenko 3 timmie 2 RuiDC 2 Roy Hyunjin Han 2 Paddy Mullen 2 Jacques Kvam 2 Eric Chlebek 1 thuske 1 Stefan van der Walt 1 Senthil Palanisami 1 Peng Yu 1 Lorenzo Bolla 1 Kelsey Jordahl 1 Kamil Kisiel 1 David Zaslavsky Happy data hacking! - Wes What is it ========== pandas is a Python package providing fast, flexible, and expressive data structures designed to make working with relational, time series, or any other kind of labeled data both easy and intuitive. It aims to be the fundamental high-level building block for doing practical, real world data analysis in Python. 
Links ===== Release Notes: http://github.com/pydata/pandas/blob/master/RELEASE.rst Documentation: http://pandas.pydata.org Installers: http://pypi.python.org/pypi/pandas Code Repository: http://github.com/pydata/pandas Mailing List: http://groups.google.com/group/pydata Blogs: http://blog.wesmckinney.com and http://blog.lambdafoundry.com From surfcast23 at gmail.com Fri Jun 29 22:44:43 2012 From: surfcast23 at gmail.com (surfcast23) Date: Fri, 29 Jun 2012 19:44:43 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] ImportError: DLL load failed: The specified module could not be found.After update In-Reply-To: References: <34088172.post@talk.nabble.com> <34088331.post@talk.nabble.com> <34088483.post@talk.nabble.com> <34088684.post@talk.nabble.com> Message-ID: <34094204.post@talk.nabble.com> Think I figured it out. When I tried from scipy.linalg import toeplitz from the command line I got the same error. I updated scipy and now I no longer get those errors. thanks for the help! Ralf Gommers-2 wrote: > > On Thu, Jun 28, 2012 at 11:50 PM, surfcast23 wrote: > >> >> Hi Ralf, >> >> Yes Scipy worked fine before the numpt upgrade. I think I am running >> version 0.10.1, but I am not sure. Is there a way to check the version? > > > "import scipy; scipy.__version__" > > That combination of Python/NumPy/SciPy should be correct and no problem to > upgrade. Can you check that if you open your interpreter in the > scipy\linalg\ dir, you can import fblas? You can also try using > http://www.dependencywalker.com/ on fblas.pyd, to find out what the > problem > is with that dll. > > > I >> also downloaded it from the sourceforge site. The my old version of numpy >> I >> believe was 1.6.1. The numpy.test('full') results are... >> >> Python 3.2.3 (default, Apr 11 2012, 07:15:24) [MSC v.1500 32 bit (Intel)] >> on >> win >> 32 >> Type "help", "copyright", "credits" or "license" for more information. >> >>> import numpy >> >>> numpy.test('full') >> Running unit tests for numpy >> Traceback (most recent call last): >> File "", line 1, in >> File "C:\Python32\lib\site-packages\numpy\testing\nosetester.py", line >> 318, in >> test >> self._show_system_info() >> File "C:\Python32\lib\site-packages\numpy\testing\nosetester.py", line >> 187, in >> _show_system_info >> nose = import_nose() >> File "C:\Python32\lib\site-packages\numpy\testing\nosetester.py", line >> 69, >> in >> import_nose >> raise ImportError(msg) >> ImportError: Need nose >= 0.10.0 for tests - see >> http://somethingaboutorange.com >> /mrl/projects/nose >> >>> >> > > This just means you haven't got nose installed, which is the package > needed > to run the tests: http://nose.readthedocs.org/en/latest/ > > Ralf > > > > >> Thanks >> Khary >> >> >> >> Ralf Gommers-2 wrote: >> > >> > On Thu, Jun 28, 2012 at 11:00 PM, surfcast23 >> wrote: >> > >> >> >> >> Hi Ralf, >> >> >> >> Sorry I should have included that information in my original message. >> >> Windows xp 32-bit. Python 3.2.3, installed via binary downloaded from >> the >> >> python site http://www.python.org/download/ >> >> >> >> Ahh, I'm half asleep so keep forgetting some questions. >> > >> > What scipy version and where did you get it? Do the numpy tests pass >> (run >> > "numpy.test('full')")? Did scipy work before you upgraded numpy, and >> what >> > numpy version did you use then? 
>> > >> > Ralf >> > >> > >> > >> > Khary >> >> >> >> >> >> Ralf Gommers-2 wrote: >> >> > >> >> > On Thu, Jun 28, 2012 at 10:28 PM, surfcast23 >> >> wrote: >> >> > >> >> >> >> >> >> Hi Ralf, >> >> >> >> >> >> I installed via binary that I got from the sourceforge site >> >> >> >> >> >> http://sourceforge.net/projects/numpy/files/NumPy/ >> >> >> >> >> >> Yes "C:\Python32\lib\site-packages\scipy\linalg\fblas.pyd" does >> exist >> >> >> >> >> >> >> >> > What version of Windows do you have? How did you install Python >> itself, >> >> > and >> >> > is it 32-bit or 64-bit? >> >> > >> >> > Ralf >> >> > >> >> > >> >> >> >> >> >> Ralf Gommers-2 wrote: >> >> >> > >> >> >> > On Thu, Jun 28, 2012 at 9:52 PM, surfcast23 >> >> >> >> wrote: >> >> >> > >> >> >> >> >> >> >> >> Hi I just updated to Numpy 1.6.2. When I tried to run a script >> that >> >> >> >> needs >> >> >> >> to >> >> >> >> import Numpy, Scipy, and Matplotlib I get the following error. >> >> >> >> >> >> >> >> Traceback (most recent call last): >> >> >> >> File "C:\Documents and Settings\Khary\My >> >> >> Documents\PHYSICS\Wave-eqn.py", >> >> >> >> line 4, in >> >> >> >> from scipy.linalg import toeplitz >> >> >> >> File "C:\Python32\lib\site-packages\scipy\linalg\__init__.py", >> >> line >> >> >> 115, >> >> >> >> in >> >> >> >> from .misc import * >> >> >> >> File "C:\Python32\lib\site-packages\scipy\linalg\misc.py", line >> 3, >> >> in >> >> >> >> >> >> >> >> from . import fblas >> >> >> >> ImportError: DLL load failed: The specified module could not be >> >> found. >> >> >> >> >> >> >> >> I tried reinstalling and nothing. Does anyone know of a fix? >> >> >> >> >> >> >> > >> >> >> > How did you install numpy? If with a binary, please say where you >> >> got >> >> >> it. >> >> >> > And does "C:\Python32\lib\site-packages\scipy\linalg\fblas.pyd" >> >> exist? >> >> >> > >> >> >> > Ralf >> >> >> > >> >> >> > _______________________________________________ >> >> >> > SciPy-User mailing list >> >> >> > SciPy-User at scipy.org >> >> >> > http://mail.scipy.org/mailman/listinfo/scipy-user >> >> >> > >> >> >> > >> >> >> >> >> >> -- >> >> >> View this message in context: >> >> >> >> >> >> http://old.nabble.com/ImportError%3A-DLL-load-failed%3A-The-specified-module-could-not-be-found.After-update-tp34088172p34088331.html >> >> >> Sent from the Scipy-User mailing list archive at Nabble.com. >> >> >> >> >> >> _______________________________________________ >> >> >> SciPy-User mailing list >> >> >> SciPy-User at scipy.org >> >> >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> >> >> >> > >> >> > _______________________________________________ >> >> > SciPy-User mailing list >> >> > SciPy-User at scipy.org >> >> > http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > >> >> > >> >> >> >> -- >> >> View this message in context: >> >> >> http://old.nabble.com/ImportError%3A-DLL-load-failed%3A-The-specified-module-could-not-be-found.After-update-tp34088172p34088483.html >> >> Sent from the Scipy-User mailing list archive at Nabble.com. 
>> >> >> >> _______________________________________________ >> >> SciPy-User mailing list >> >> SciPy-User at scipy.org >> >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> >> > >> > _______________________________________________ >> > SciPy-User mailing list >> > SciPy-User at scipy.org >> > http://mail.scipy.org/mailman/listinfo/scipy-user >> > >> > >> >> -- >> View this message in context: >> http://old.nabble.com/ImportError%3A-DLL-load-failed%3A-The-specified-module-could-not-be-found.After-update-tp34088172p34088684.html >> Sent from the Scipy-User mailing list archive at Nabble.com. >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- View this message in context: http://old.nabble.com/ImportError%3A-DLL-load-failed%3A-The-specified-module-could-not-be-found.After-update-tp34088172p34094204.html Sent from the Scipy-User mailing list archive at Nabble.com. From fperez.net at gmail.com Sat Jun 30 04:55:36 2012 From: fperez.net at gmail.com (Fernando Perez) Date: Sat, 30 Jun 2012 01:55:36 -0700 Subject: [SciPy-User] [ANN] IPython 0.13 is officially out! Message-ID: Hi all, on behalf of the IPython development team, and just in time for the imminent Debian freeze and SciPy 2012, I'm thrilled to announce, after an intense 6 months of work, the official release of IPython 0.13. This version contains several major new features, as well as a large amount of bug and regression fixes. The previous version (0.12) was released on December 19 2011, so in this development cycle we had: - ~6 months of work. - 373 pull requests merged. - 742 issues closed (non-pull requests). - contributions from 62 authors. - 1760 commits. - a diff of 114226 lines. This means that we closed a total of 1115 issues over 6 months, for a rate of almost 200 issues closed per month and almost 300 commits per month. We are very grateful to all of you who have contributed so enthusiastically to the project and have had the patience of pushing your contributions through our often lengthy review process. We've also welcomed several new members to the core IPython development group: J?rgen Stenarson (@jstenar - this really was an omission as J?rgen has been our Windows expert for a long time) and Matthias Bussonier (@Carreau), who has been very active on all fronts of the project. *Highlights* There is too much new work to write up here, so we refer you to our full What's New document (http://ipython.org/ipython-doc/rel-0.13/whatsnew/version0.13.html) for the full details. But the main highlights of this release are: * Brand new UI for the notebook, with major usability improvements (real menus, toolbar, and much more) * Manage all your parallel cluster configurations from the notebook with push-button simplicity (cluster start/stop with one button). * Cell magics: commands prefixed with %% apply to an entire cell. We ship with many cell magics by default, including timing, profiling, running cells under bash, Perl and Ruby as well as magics to interface seamlessly with Cython, R and Octave. * The IPython.parallel tools have received many fixes, optimizations, and a number of API improvements to make writing, profiling and debugging parallel codes with IPython much easier. 
* We have unified our interactive kernels (the basic ipython object you know and love) with the engines running in parallel, so that you can now use all IPython special tricks in parallel too. And you can connect a console or qtconsole to any parallel engine for direct, interactive execution, plotting and debugging in a cluster. *Downloads* Download links and instructions are at: http://ipython.org/download.html And IPython is also on PyPI: http://pypi.python.org/pypi/ipython Those contain a built version of the HTML docs; if you want pure source downloads with no docs, those are available on github: Tarball: https://github.com/ipython/ipython/tarball/rel-0.13 Zipball: https://github.com/ipython/ipython/zipball/rel-0.13 Please see our release notes for the full details on everything about this release: http://ipython.org/ipython-doc/rel-0.13/whatsnew/version0.13.html As usual, if you find any other problem, please file a ticket --or even better, a pull request fixing it-- on our github issues site (https://github.com/ipython/ipython/issues). Many thanks to all who contributed! Fernando, on behalf of the IPython development team. http://ipython.org From pav at iki.fi Sat Jun 30 17:57:19 2012 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 30 Jun 2012 21:57:19 +0000 (UTC) Subject: [SciPy-User] Bessel function of complex order References: <4FEC779E.2020009@hasenkopf2000.net> Message-ID: Andreas Pritschet hasenkopf2000.net> writes: > I have noticed in the docs and some "bug reports" that Bessel functions > in SciPy support only real order. But for my work I require a modified > Bessel function of second kind of complex(!) order for complex values. > > Is in SciPy a chance of calculating something like > scipy.special.kv(1j*k,1j), whereby k is an array? You can use `besselk` from the mpmath library. It's arbitrary-precision code, and will probably be slower roughly by a factor of 1000 than equivalent compiled floating point code. That may be fast enough. Adding complex-order functions present in Scipy requires either (i) someone to dig up a license-compatible special function library that has them, or (ii) someone with sufficient knowledge to write such a part of such a library. This is somewhat onerous and fiddly work, and I'm not going promise when (and if) I would work on it. BTW, if you know good special function libraries with BSD compatible licenses, please reply! Scipy is missing a complex domain implementation of some of the hypergeometric functions, which would be required for expressing the complex-order bessels. The implementation for hypergeometric functions in mpmath IIRC relies on dynamic adjustment of precision on the fly, and such code cannot be simply ported for floating point. -- Pauli Virtanen
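
Following up on the mpmath suggestion with a small sketch (not from the original exchange): scipy.special.kv only accepts real order, but mpmath.besselk evaluates K_nu(z) for complex order and complex argument. The values in k below are made up for illustration, and the point-by-point loop reflects the fact that mpmath works on scalars:

import numpy as np
import mpmath

mpmath.mp.dps = 15                    # working precision in decimal digits
k = np.linspace(0.1, 2.0, 5)          # hypothetical array of real k values
orders = 1j * k                       # complex orders i*k, as in kv(1j*k, 1j)
z = 1j                                # fixed complex argument

# evaluate K_{i*k}(z) one point at a time and convert back to Python complex
values = np.array([complex(mpmath.besselk(nu, z)) for nu in orders])

np.vectorize(mpmath.besselk) would tidy the loop but not change the per-point cost, which is consistent with the roughly factor-of-1000 slowdown compared to compiled code mentioned above.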