From zufryy at gmail.com  Sat Jan  1 06:12:37 2011
From: zufryy at gmail.com (Zufry Malik Ibrahim)
Date: Sat, 1 Jan 2011 18:12:37 +0700
Subject: [SciPy-User] How to get first-order optimality from scipy.optimize.leastsq module
Message-ID:

I am curious about how to get the "first-order optimality" when using the
scipy.optimize.leastsq module. I have no problem getting the minimum with
scipy.optimize.leastsq, but I get confused when I want to get the
first-order optimality...

This is a sample MATLAB script that gets the first-order optimality:

[x,resnorm,residual,exitflag,output,lambda] = lsqcurvefit(func,x0,xdata,tdata);
foo = output.firstorderopt  %get first-order optimality value

There is some reference for this value on the MathWorks site.

Thanks for your attention, Happy New Year 2011 :)

-- 
*Zufry Malik Ibrahim*
Physics Department
Bandung Institute of Technology, Indonesia

From jsseabold at gmail.com  Sat Jan  1 11:38:41 2011
From: jsseabold at gmail.com (Skipper Seabold)
Date: Sat, 1 Jan 2011 11:38:41 -0500
Subject: [SciPy-User] Sometimes fmin_l_bfgs_b tests NaN parameters and then fails to converge
In-Reply-To: <1293842375.17929.3.camel@mypride>
References: <1293731252.6936.47.camel@mypride> <1293842375.17929.3.camel@mypride>
Message-ID:

On Fri, Dec 31, 2010 at 7:39 PM, Yury V. Zaytsev wrote:
> On Fri, 2010-12-31 at 16:35 -0500, josef.pktd at gmail.com wrote:
>
>> But your function has a discontinuity, and I wouldn't expect a bfgs
>> method to produce anything useful since the method assumes smoothness,
>> as far as I know.
>
> You are perfectly right about the discontinuity, but that was not the
> point. I was rather interested if anyone else is seeing the optimizer
> trying out NaNs as function parameters as in my case or not...
>
> I have this problem with a completely different (smooth and
> differentiable) function, the test script is just something I came up
> with without thinking too much to illustrate the problem.
>

I don't see the NaNs (on 64-bit). But I have run into perhaps a
similar problem recently. I switched from fmin_l_bfgs_b to fmin_tnc
and was able to fine-tune the step size in the line search (eta in
tnc). Looking only very briefly, it looks like the step size for bfgs
is adaptive and determined in the code, but I don't see how to
sensibly change it.

For tnc, I start with eta = 1e-8 and when I get return code 4, I rerun
with eta *= 10.
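Roughly like this -- just a sketch, where func, grad, x0 and bounds
stand in for your own problem:

>>> from scipy.optimize import fmin_tnc
>>> eta = 1e-8
>>> x, nfeval, rc = fmin_tnc(func, x0, fprime=grad, bounds=bounds, eta=eta)
>>> while rc == 4:  # 'Linear search failed'
...     eta *= 10
...     x, nfeval, rc = fmin_tnc(func, x0, fprime=grad, bounds=bounds, eta=eta)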
The return codes for tnc are in

>>> from scipy.optimize.tnc import RCSTRINGS
>>> RCSTRINGS
{-1: 'Infeasible (low > up)',
 0: 'Local minima reach (|pg| ~= 0)',
 1: 'Converged (|f_n-f_(n-1)| ~= 0)',
 2: 'Converged (|x_n-x_(n-1)| ~= 0)',
 3: 'Max. number of function evaluations reach',
 4: 'Linear search failed',
 5: 'All lower bounds are equal to the upper bounds',
 6: 'Unable to progress',
 7: 'User requested end of minimization'}

Curious if this approach might work for you.

Skipper

From matwey.kornilov at gmail.com  Sun Jan  2 10:29:14 2011
From: matwey.kornilov at gmail.com (Matwey V. Kornilov)
Date: Sun, 02 Jan 2011 18:29:14 +0300
Subject: [SciPy-User] numpy I/O question
Message-ID:

Hi,

I need help with NumPy I/O. I have a specific array format in my input
text data. Due to a bug in the data-producing software, negative values
are concatenated to the previous values (i.e. "1.0-3.4 3.1"). It was not
critical for me because, oddly enough, this C++ code parses it for me:

#include <iostream>

int main(){
    double a;
    double b;

    std::cin >> a >> b;
    std::cout << "a=" << a << " b=" << b << std::endl;

    return 0;
}

This works because operator>>(double) stops before the "-" char and the
second operator>>(double) runs from this position.

The question is how to get the same behaviour for NumPy I/O?

From zachary.pincus at yale.edu  Sun Jan  2 10:41:33 2011
From: zachary.pincus at yale.edu (Zachary Pincus)
Date: Sun, 2 Jan 2011 10:41:33 -0500
Subject: [SciPy-User] numpy I/O question
In-Reply-To:
References:
Message-ID: <179562F0-64DD-45B1-A5B1-87CBC86C3CE3@yale.edu>

>> I need help with NumPy I/O. I have a specific array format in my input
>> text data. Due to a bug in the data-producing software, negative values
>> are concatenated to the previous values (i.e. "1.0-3.4 3.1").

Can you just run the text files through sed or something and replace
"-" with " -"?

From matwey.kornilov at gmail.com  Sun Jan  2 10:51:38 2011
From: matwey.kornilov at gmail.com (Matwey V. Kornilov)
Date: Sun, 02 Jan 2011 18:51:38 +0300
Subject: [SciPy-User] numpy I/O question
References: <179562F0-64DD-45B1-A5B1-87CBC86C3CE3@yale.edu>
Message-ID:

The input format is outside my responsibility. I have already written a
C++ tool that parses the data well enough. I will be asked 'why should we
use python which even can't parse as well as c++ does?' `sed` isn't a
solution.

Zachary Pincus wrote:

> Can you just run the text files through sed or something and replace
> "-" with " -"?

From yury at shurup.com  Sun Jan  2 10:59:11 2011
From: yury at shurup.com (Yury V. Zaytsev)
Date: Sun, 02 Jan 2011 16:59:11 +0100
Subject: [SciPy-User] numpy I/O question
In-Reply-To:
References: <179562F0-64DD-45B1-A5B1-87CBC86C3CE3@yale.edu>
Message-ID: <1293983951.6788.6.camel@mypride>

On Sun, 2011-01-02 at 18:51 +0300, Matwey V. Kornilov wrote:
>
> I will be asked 'why should we use python which even can't parse as
> well as c++ does?' `sed` isn't a solution.

How big are these files in question?

Why can't you just load them in memory and do the replacement before
feeding them into NumPy, if you don't want to pre-process the files
beforehand? This is just 2-3 lines of code.
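For example, something along these lines (untested; the file name is
made up, and the lookbehind assumes the data contains no exponents
like "1e-5"):

>>> import re
>>> import numpy as np
>>> from cStringIO import StringIO
>>> raw = open('dump.txt').read()               # or read from your pipe
>>> fixed = re.sub(r'(?<=[\d.])-', ' -', raw)   # "1.0-3.4" -> "1.0 -3.4"
>>> data = np.genfromtxt(StringIO(fixed))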
-- 
Sincerely yours,
Yury V. Zaytsev

From matwey.kornilov at gmail.com  Sun Jan  2 11:09:37 2011
From: matwey.kornilov at gmail.com (Matwey V. Kornilov)
Date: Sun, 02 Jan 2011 19:09:37 +0300
Subject: [SciPy-User] numpy I/O question
References: <179562F0-64DD-45B1-A5B1-87CBC86C3CE3@yale.edu> <1293983951.6788.6.camel@mypride>
Message-ID:

These files are pipe-streams, but when they are dumped they are about
50M.

The replacement that you described requires O(N) (where N is the line
length), but C++ operator>> requires O(1) for the same parsing.

I hoped there was a way to split data for numpy by a regexp instead of a
delimiter, i.e.

np.genfromtxt(StringIO(data), regexp=r"-?[\d\.]+")

instead of

np.genfromtxt(StringIO(data), delimiter=None)

Yury V. Zaytsev wrote:

> How big are these files in question?
>
> Why can't you just load them in memory and do the replacement before
> feeding them into NumPy, if you don't want to pre-process the files
> beforehand? This is just 2-3 lines of code.

From zachary.pincus at yale.edu  Sun Jan  2 11:21:05 2011
From: zachary.pincus at yale.edu (Zachary Pincus)
Date: Sun, 2 Jan 2011 11:21:05 -0500
Subject: [SciPy-User] numpy I/O question
In-Reply-To:
References: <179562F0-64DD-45B1-A5B1-87CBC86C3CE3@yale.edu> <1293983951.6788.6.camel@mypride>
Message-ID: <5ED1039A-47B7-4844-9E44-052272C23E24@yale.edu>

> These files are pipe-streams, but when they are dumped they are about
> 50M.
>
> The replacement that you described requires O(N) (where N is the line
> length), but C++ operator>> requires O(1) for the same parsing.

Reading the file into an array is still an O(N) operation, so if all
you care about is big-O complexity, there's no difference between
doing an O(N) search-and-replace followed by an O(N) load operation
versus an O(1) parsing followed by an O(N) load operation. O(2N) =
O(N), right?

But if you care about constant factors, why are you even proposing
regexp matching? Have you even tried writing up the simple-case
search-and-replace to determine whether it's too slow?

If you actually need to optimize the file reading (unlikely), perhaps
the fastest option will be to use the subprocess module to open a
pipeline to sed and then feed the stdout of that to numpy.loadtxt --
sed is well-optimized to have low constant factors. Indeed, these days
disks are such a bottleneck that it can be faster to read a gzipped
file from disk and decompress it on the fly and parse the contents
than just to read the plain file from disk. But as you say the input
format is out of your hands. (And again, if speed matters so much, why
are the files ASCII text and not binary? But if speed doesn't matter,
why the concern about asymptotic complexity?)

Anyway, if for religious reasons sed is unacceptable, another decent
option if the files are too large for memory (which 50M is emphatically
not) would be to open the text file in chunks, do the search-and-replace,
and then cough up those chunks within an iterator that acts as a
file-like object.

> I will be asked 'why should we use python which even can't parse as
> well as c++ does?' `sed` isn't a solution.

This sounds like a personal problem. Sed is a perfectly decent solution
for reformatting broken text files, as is reformatting the files
internally in python before passing them to a numpy routine designed to
be flexible and fast at handling *delimited* text. The fact that C++ has
a particular feature that happens to work well with your buggy input
files doesn't mean that "python can't parse as well as c++" -- but hey,
if you think c++ is in general a better tool than python or sed or perl
or whatever for processing text files, go for it.

From matwey.kornilov at gmail.com  Sun Jan  2 11:38:57 2011
From: matwey.kornilov at gmail.com (Matwey V. Kornilov)
Date: Sun, 02 Jan 2011 19:38:57 +0300
Subject: [SciPy-User] numpy I/O question
References: <179562F0-64DD-45B1-A5B1-87CBC86C3CE3@yale.edu> <1293983951.6788.6.camel@mypride> <5ED1039A-47B7-4844-9E44-052272C23E24@yale.edu>
Message-ID:

Zachary Pincus wrote:

>> The replacement that you described requires O(N) (where N is the line
>> length), but C++ operator>> requires O(1) for the same parsing.
> Reading the file into an array is still an O(N) operation, so if all
> you care about is big-O complexity, there's no difference between
> doing an O(N) search-and-replace followed by an O(N) load operation
> versus an O(1) parsing followed by an O(N) load operation. O(2N) =
> O(N), right?

Yes, you are right. I mixed up char-reading and char-inserting
operations.

> But if you care about constant factors, why are you even proposing
> regexp matching?

It was 'typing-before-thinking'

>> I will be asked 'why should we use python which even can't parse as
>> well as c++ does?' `sed` isn't a solution.
>
> This sounds like a personal problem. Sed is a perfectly decent
> solution for reformatting broken text files, as is reformatting the
> files internally in python before passing them to a numpy routine
> designed to be flexible and fast at handling *delimited* text.

It sounds quite reasonable, thank you.

From warren.weckesser at enthought.com  Sun Jan  2 11:43:50 2011
From: warren.weckesser at enthought.com (Warren Weckesser)
Date: Sun, 2 Jan 2011 10:43:50 -0600
Subject: [SciPy-User] numpy I/O question
In-Reply-To:
References:
Message-ID:

On Sun, Jan 2, 2011 at 9:29 AM, Matwey V. Kornilov wrote:
> I need help with NumPy I/O. I have a specific array format in my input
> text data. Due to a bug in the data-producing software, negative values
> are concatenated to the previous values (i.e. "1.0-3.4 3.1").
> [...]
> The question is how to get the same behaviour for NumPy I/O?

If the data file has fixed-width fields, you can use numpy.genfromtxt()
and give the field widths as the 'delimiter' argument:

In [50]: from cStringIO import StringIO

In [51]: f = StringIO(' 1.0-2.0 3.0\n-4.0 5.0-6.5\n')

In [52]: import numpy as np

In [53]: a = np.genfromtxt(f, delimiter=[4, 4, 4])

In [54]: a
Out[54]:
array([[ 1. , -2. ,  3. ],
       [-4. ,  5. , -6.5]])

Of course, that is not the same as the C++ behavior, and it won't work
if the field widths are variable.

Warren

From f.pollastri at inrim.it  Mon Jan  3 06:28:10 2011
From: f.pollastri at inrim.it (Fabrizio Pollastri)
Date: Mon, 3 Jan 2011 11:28:10 +0000 (UTC)
Subject: [SciPy-User] pandas: independent row sorting of data frame
Message-ID:

Hello,

I am trying to convert the following R code to python + pandas. The code
sorts each row of an xts multiseries (xs) independently, obtaining a
sorting index and a sorted copy of the original multiseries.

sort_index = t(apply(as.matrix(coredata(xs)),1,order,decreasing=TRUE))
sorted_xs = t(apply(as.matrix(coredata(xs)),1,sort,decreasing=TRUE))

Playing with the pandas data frame, I found the equivalent statement for
sort_index:

sort_index = df.tapply(argsort)

What is the equivalent to obtain a sorted copy?

TIA,
Fabrizio Pollastri

From yury at shurup.com  Mon Jan  3 08:57:32 2011
From: yury at shurup.com (Yury V. Zaytsev)
Date: Mon, 03 Jan 2011 14:57:32 +0100
Subject: [SciPy-User] Sometimes fmin_l_bfgs_b tests NaN parameters and then fails to converge
In-Reply-To:
References: <1293731252.6936.47.camel@mypride> <1293842375.17929.3.camel@mypride>
Message-ID: <1294063052.6782.32.camel@mypride>

Hi Skipper!

On Sat, 2011-01-01 at 11:38 -0500, Skipper Seabold wrote:

> I don't see the NaNs (on 64-bit).

Actually, I've finally got NaNs on 64-bit also, when trying tnc instead
of bfgs with my test script, which made me think that it's not so much
platform-specific. I could get rid of the NaNs on both platforms by
replacing np.inf with a sufficiently large number, such as 100.

So I think the fact that I had NaNs on one platform and not another is
probably due to subtle differences in function values, which might
depend on the version of the libraries, machine precision and what
not...

My conclusion is that NaNs come out when you have sharp jumps with
respect to some of the parameters which make the function
non-differentiable.

On the other hand, what else can I do if the values outside of the
parameter range go to infinity and I explicitly told the optimizer not
to go there?

I have a feeling (which needs more debugging in order to be confirmed)
that bfgs does not actually respect the boundaries that I have
specified. Did anyone else run into this issue?

> Curious if this approach might work for you.

I have replaced bfgs with tnc in my production code and tried to tweak
eta values as you suggested, but I just can't get it to converge at
all.

BFGS tries out different parameters, then goes to the very edge of the
defined boundaries, gets np.inf as a result, tries out NaNs and then
comes back and converges somewhere in the acceptable parameter range.

TNC does the same, but gets stuck constantly trying out NaNs, no matter
which eta values I take.

That's where I am now...

-- 
Sincerely yours,
Yury V. Zaytsev

From josef.pktd at gmail.com  Mon Jan  3 09:46:53 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Mon, 3 Jan 2011 09:46:53 -0500
Subject: [SciPy-User] Sometimes fmin_l_bfgs_b tests NaN parameters and then fails to converge
In-Reply-To: <1294063052.6782.32.camel@mypride>
References: <1293731252.6936.47.camel@mypride> <1293842375.17929.3.camel@mypride> <1294063052.6782.32.camel@mypride>
Message-ID:

On Mon, Jan 3, 2011 at 8:57 AM, Yury V. Zaytsev wrote:
> Hi Skipper!
>
> Actually, I've finally got NaNs on 64-bit also, when trying tnc instead
> of bfgs with my test script, which made me think that it's not so much
> platform-specific. I could get rid of the NaNs on both platforms by
> replacing np.inf with a sufficiently large number, such as 100.
> [...]
> I have a feeling (which needs more debugging in order to be confirmed)
> that bfgs does not actually respect the boundaries that I have
> specified. Did anyone else run into this issue?
>
>> Curious if this approach might work for you.
>
> I have replaced bfgs with tnc in my production code and tried to tweak
> eta values as you suggested, but I just can't get it to converge at
> all.
>
> BFGS tries out different parameters, then goes to the very edge of the
> defined boundaries, gets np.inf as a result, tries out NaNs and then
> comes back and converges somewhere in the acceptable parameter range.
>
> TNC does the same, but gets stuck constantly trying out NaNs, no matter
> which eta values I take.
>
> That's where I am now...

In cases like this I often use smooth penalization when the optimizer
gets close to the boundary, or reparameterize. Anne Archibald has
several times written on this on the mailing lists.

Another alternative is to use for example nlopt
http://ab-initio.mit.edu/wiki/index.php/NLopt_Algorithms
From the description, the optimizers have been modified to not
evaluate outside of the bounds, in contrast to the scipy optimizers.
I don't know what openopt does.

I have not used either of these two packages. I like fmin, it might be
slower but it is robust.

Josef

From wesmckinn at gmail.com  Mon Jan  3 11:20:02 2011
From: wesmckinn at gmail.com (Wes McKinney)
Date: Mon, 3 Jan 2011 11:20:02 -0500
Subject: [SciPy-User] pandas: independent row sorting of data frame
In-Reply-To:
References:
Message-ID:

On Mon, Jan 3, 2011 at 6:28 AM, Fabrizio Pollastri wrote:
> Hello,
>
> I am trying to convert the following R code to python + pandas. The code
> sorts each row of an xts multiseries (xs) independently, obtaining a
> sorting index and a sorted copy of the original multiseries.
>
> sort_index = t(apply(as.matrix(coredata(xs)),1,order,decreasing=TRUE))
> sorted_xs = t(apply(as.matrix(coredata(xs)),1,sort,decreasing=TRUE))
>
> Playing with the pandas data frame, I found the equivalent statement for
> sort_index:
>
> sort_index = df.tapply(argsort)
>
> What is the equivalent to obtain a sorted copy?

Hi Fabrizio,

I'm not that familiar with xts but I think you need only do:

sort_xs = df.apply(np.sort, axis=1)
sort_index = df.apply(np.argsort, axis=1)

Using the apply function with the axis argument is preferable to using
tapply -- that function is still around to support old client code (I
may add a deprecation warning in the future).

This will only be about as fast as the R counterpart -- it would be
easy to write a more optimized version, though.

NB many NumPy functions work using the array interface, e.g.:

np.argsort(df, axis=1)

But np.sort isn't one of them.

HTH,
Wes

From yury at shurup.com  Mon Jan  3 11:14:59 2011
From: yury at shurup.com (Yury V. Zaytsev)
Date: Mon, 03 Jan 2011 17:14:59 +0100
Subject: [SciPy-User] Sometimes fmin_l_bfgs_b tests NaN parameters and then fails to converge
In-Reply-To:
References: <1293731252.6936.47.camel@mypride> <1293842375.17929.3.camel@mypride> <1294063052.6782.32.camel@mypride>
Message-ID: <1294071299.6782.60.camel@mypride>

Hi!

On Mon, 2011-01-03 at 09:46 -0500, josef.pktd at gmail.com wrote:

> In cases like this I often use smooth penalization when the optimizer
> gets close to the boundary, or reparameterize. Anne Archibald has
> several times written on this on the mailing lists.
I will search the list for more specific examples, thanks!

> Another alternative is to use for example nlopt
> http://ab-initio.mit.edu/wiki/index.php/NLopt_Algorithms
> From the description, the optimizers have been modified to not
> evaluate outside of the bounds, in contrast to the scipy optimizers.

Many thanks, I will have a look. This is the first time I hear about
nlopt. Probably worth trying out!
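From a first glance at the docs, the Python usage seems to be roughly
as follows (just a sketch, I haven't actually tried it yet):

import nlopt

opt = nlopt.opt(nlopt.LN_BOBYQA, 2)   # derivative-free, bound-constrained
opt.set_lower_bounds([0.0, 0.0])
opt.set_upper_bounds([10.0, 10.0])
opt.set_min_objective(lambda x, grad: (x[0] - 1.0)**2 + (x[1] - 2.0)**2)
opt.set_xtol_rel(1e-6)
xopt = opt.optimize([5.0, 5.0])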
> I don't know what openopt does.

Well, openopt does provide the same methods as SciPy; otherwise there
are a few custom algorithms for bounded optimization (gsubg and ralg)
and connectors to a few more Fortran libraries, but the documentation
really leaves much to be desired. Also, the interface didn't seem very
convenient to me / installation is a bit complicated etc.

In particular, the descriptions of the optimizers all claim to perform
the same thing better than any other, but there is no comprehensive
comparison and no highlights of specific features, such as evaluation
outside of the bounds as you mentioned :-(

So, not knowing the specifics of the algorithms, their limits,
advantages and disadvantages, you are pretty much left in the dark,
just trying out stuff in the hope that something will finally work
out...

> I have not used either of these two packages. I like fmin, it might be
> slower but it is robust.

I don't have anything against fmin, but first, I am unaware of a
bounded version of fmin in SciPy, and second, when I tried out the
unbounded version it really felt slow as hell... And when you have
thousands of parameters and thousands of concurrent optimizations
running, it really doesn't feel inspiring :-(

Actually, I tried it again and it really does seem to be much more
robust than BFGS, just as you claim. At least it almost converged to
the same thing starting from two completely distinct points in
parameter space, whereas BFGS came up with completely different
results.

Also, there is no way I can find to set the maximum acceptable number
of iterations / function evaluations. For me it just stops after 20800
evaluations, and there is nothing on this page which seems to be of
help:

http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fmin.html

Maybe I should peek at the code...

If it just were not THAT slow :-( ... Maybe I should try BFGS and then
use the resulting vector as a starting point for DS. Why wouldn't some
optimization genius write a hybrid DS which would use gradients as
additional guidance?..

Thanks!

-- 
Sincerely yours,
Yury V. Zaytsev

From f.pollastri at inrim.it  Mon Jan  3 16:21:19 2011
From: f.pollastri at inrim.it (Fabrizio Pollastri)
Date: Mon, 3 Jan 2011 21:21:19 +0000 (UTC)
Subject: [SciPy-User] pandas: independent row sorting of data frame
References:
Message-ID:

Wes McKinney <wesmckinn at gmail.com> writes:
>
> I'm not that familiar with xts but I think you need only do:
>
> sort_xs = df.apply(np.sort, axis=1)
> sort_index = df.apply(np.argsort, axis=1)
> [...]

Hi Wes,

thanks for your hints, but I have some problems with sort.
Let's see the following code.

import numpy as np
from pandas import DataFrame

df = DataFrame({'a': [1,3,1], 'b':[2,2,3], 'c':[3,1,2]})
>>> df
   a  b  c
0  1  2  3
1  3  2  1
2  1  3  2

# sort index is ok.
sort_index = df.apply(np.argsort, axis=1)
>>> sort_index
   a  b  c
0  0  1  2
1  2  1  0
2  0  2  1

# sorted df is not as expected: it is equal to df.
sorted_df = df.apply(np.sort, axis=1)
>>> sorted_df
   a  b  c
0  1  2  3
1  3  2  1
2  1  3  2

Where is the trick?

TIA,
Fabrizio

From wesmckinn at gmail.com  Mon Jan  3 16:52:02 2011
From: wesmckinn at gmail.com (Wes McKinney)
Date: Mon, 3 Jan 2011 16:52:02 -0500
Subject: [SciPy-User] pandas: independent row sorting of data frame
In-Reply-To:
References:
Message-ID:

On Mon, Jan 3, 2011 at 4:21 PM, Fabrizio Pollastri wrote:
> [...]
> # sorted df is not as expected: it is equal to df.
> sorted_df = df.apply(np.sort, axis=1)
>
> Where is the trick?

Ah, you are quite right. Hmm, the issue is that Series.sort (overriding
ndarray.sort) preserves the link between the data labels and the data,
which is usually desirable. Try this:

df.apply(lambda x: np.sort(np.asarray(x)), axis=1)

this works too (but will be slower with larger arrays):

df.apply(sorted, axis=1)
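If it ever becomes a bottleneck, a more optimized version would look
something like this (a sketch -- note that after a row-wise sort the
column labels no longer line up with the original data, which is why
this isn't the default behavior):

sorted_df = DataFrame(np.sort(df.values, axis=1),
                      index=df.index, columns=df.columns)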
From josef.pktd at gmail.com  Mon Jan  3 23:04:33 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Mon, 3 Jan 2011 23:04:33 -0500
Subject: [SciPy-User] integrate.quad logical and ?
Message-ID:

I was trying to reduce the precision of integrate.quad, since I only
need 3 decimals, but it didn't work until I also reduced epsrel and not
just epsabs.

epsabs=1e-4, epsrel=1e-2, limit=150 seems to work well so far.
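For reference, this is the kind of call I mean (the integrand here is
just a stand-in):

>>> from scipy import integrate
>>> import numpy as np
>>> f = lambda x: np.exp(-x**2)
>>> val, err = integrate.quad(f, 0, np.inf, epsabs=1e-4, epsrel=1e-2, limit=150)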
It looks like they both have to be satisfied for quad to stop. Is this
correct? The docs don't say anything about it, and the recommended
detailed explanation doesn't print anything:

>>> scipy.integrate.quad_explain()
>>> # nothing here

The integrate.quadrature doc string explicitly says OR.

Josef

From tmp50 at ukr.net  Tue Jan  4 04:22:50 2011
From: tmp50 at ukr.net (Dmitrey)
Date: Tue, 04 Jan 2011 11:22:50 +0200
Subject: [SciPy-User] Sometimes fmin_l_bfgs_b tests NaN parameters andthen fails to converge
In-Reply-To: <1294073228.3694.0.camel@mypride>
References: <1294063052.6782.32.camel@mypride> <1293731252.6936.47.camel@mypride> <1294073228.3694.0.camel@mypride> <1293842375.17929.3.camel@mypride>
Message-ID:

hi,

> Well, openopt does provide the same methods as SciPy; otherwise there
> are a few custom algorithms for bounded optimization (gsubg and ralg)
> and connectors to a few more Fortran libraries, but the documentation
> really leaves much to be desired.

ralg (as well as gsubg) is adjusted very well to handle problems with
restricted domains.

> So, not knowing the specifics of the algorithms, their limits,
> advantages and disadvantages, you are pretty much left in the dark,
> just trying out stuff in the hope that something will finally work
> out...

For your problem (nonlinear local minimization), what matters for
solver efficiency is the number of variables, box constraints, linear
eq/ineq constraints and nonlinear eq/ineq constraints. Another very
essential issue is whether some gradients of the active constraints
form a linear system that is close to singular (then lots of NLP
solvers will fail to solve it). Are you capable of taking all these
parameters into account? I guess it's easier to just change the solver
and try what is better for the NLP involved.

Moreover, if I provided some comparison info in the way you would like,
then after each new release of any of the OpenOpt-connected solvers I
would have to perform the comparison over and over again. Thus I don't
see any reason to perform it.

D.

From Chris.Barker at noaa.gov  Tue Jan  4 12:26:11 2011
From: Chris.Barker at noaa.gov (Christopher Barker)
Date: Tue, 04 Jan 2011 09:26:11 -0800
Subject: [SciPy-User] OT: calling Java from Python
In-Reply-To: <4D13B85E.7090906@uci.edu>
References: <4D13B5DA.1020205@noaa.gov> <4D13B85E.7090906@uci.edu>
Message-ID: <4D235833.2070609@noaa.gov>

> On 12/23/2010 12:49 PM, Christopher Barker wrote:
>> Are there any active projects supporting calling Java from CPython?

I found a bit more on my own, plus got some helpful replies, so here's
a summary:

On 12/23/10 1:00 PM, Christoph Gohlke wrote:
> CellProfiler calls Java libraries (bioformats, ImageJ) via JNI. Take a
> look at .

That does look like a good starting point -- it kind of looks like
javabridge is its own project, but I couldn't find independent
references to it. Certainly a good place to start, if I wanted to do
that sort of thing.
GNU CNI: http://gcc.gnu.org/onlinedocs/gcj/About-CNI.html#About-CNI

This looks like a good way to call Java from C++, and therefore would
be pretty easy to leverage for calling Java from Cython, for example.
It would also provide the advantage of not having to rely on a
particular JVM being installed -- you could deliver the gcj runtime
along with your binaries.

On 12/23/10 2:00 PM, josef.pktd at gmail.com wrote:
> JCC 2.7: a C++ code generator for calling Java from C++/Python
> http://pypi.python.org/pypi/JCC/2.7

Ah -- very nice -- I should have thought to search PyPI directly! That
does look to be pretty much exactly what I'm looking for.

-Chris

-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov

From yury at shurup.com  Tue Jan  4 13:12:19 2011
From: yury at shurup.com (Yury V. Zaytsev)
Date: Tue, 04 Jan 2011 19:12:19 +0100
Subject: [SciPy-User] Sometimes fmin_l_bfgs_b tests NaN parameters andthen fails to converge
In-Reply-To:
References: <1294063052.6782.32.camel@mypride> <1293731252.6936.47.camel@mypride> <1294073228.3694.0.camel@mypride> <1293842375.17929.3.camel@mypride>
Message-ID: <1294164739.6880.75.camel@mypride>

Hi!

On Tue, 2011-01-04 at 11:22 +0200, Dmitrey wrote:

> ralg (as well as gsubg) is adjusted very well to handle problems with
> restricted domains.

gsubg is listed as "unconstrained" at [2], while [3] says that the
constraint-handling code is immature and will most likely fail.

I still have no excuse for not trying ralg out, which I will hopefully
do at some point, after exploring the performance of the nlopt solvers.

> For your problem (nonlinear local minimization), what matters for
> solver efficiency is the number of variables, box constraints, linear
> eq/ineq constraints and nonlinear eq/ineq constraints. Another very
> essential issue is whether some gradients of the active constraints
> form a linear system that is close to singular (then lots of NLP
> solvers will fail to solve it). Are you capable of taking all these
> parameters into account? I guess it's easier to just change the solver
> and try what is better for the NLP involved.

I am certainly not, because this is pretty much my first encounter with
optimization problems. Last week I didn't even have a clue about what
classes of problems there are, and which algorithms are mostly used to
solve these problems.

That's exactly why I would like to see descriptions along the lines of
[1], because random trials of the algorithms that are bundled with
SciPy (bfgs, tnc, ds, etc.) were largely unsuccessful.

At some point, BFGS was converging fast and nicely, until I found out
that for different starting points it converged to completely different
solutions. Then I tried DS, which proved to be much more robust, but
extremely slow, and it kept getting stuck when the simplex had to get
through a "needle's eye" in the parameter space.

So I assume that the procedure for choosing the right method should be
more of an educated guess than random trials. Maybe I should post a
message more specific to my problem to this list.

> Moreover, if I provided some comparison info in the way you would like,
> then after each new release of any of the OpenOpt-connected solvers I
> would have to perform the comparison over and over again. Thus I don't
> see any reason to perform it.

Writing reasonably good documentation is not fun.
Also, it takes a lot of time that you could spend on something else
which you might find more interesting or useful. On the other hand, it
makes users happy, especially those that are unfamiliar with the
domain.

What you find more important is entirely up to you, and please note
that I don't blame you for the decisions that you make. However, I find
that statements such as "I see no reason to perform it" are provocative
:-)

If you want to put yourself in your users' shoes, just compare [1] and
[2]. Do you honestly think that there is no reason for coming up with a
similar list (short description of the algorithm, paper references and
personal recommendations for initial orientation) for OpenOpt? I think
it would certainly benefit the project, if only you had time to do it.

[1]: http://ab-initio.mit.edu/wiki/index.php/NLopt_Algorithms
[2]: http://openopt.org/NLP
[3]: http://openopt.org/gsubg

-- 
Sincerely yours,
Yury V. Zaytsev

From josef.pktd at gmail.com  Tue Jan  4 15:23:01 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Tue, 4 Jan 2011 15:23:01 -0500
Subject: [SciPy-User] Sometimes fmin_l_bfgs_b tests NaN parameters andthen fails to converge
In-Reply-To: <1294164739.6880.75.camel@mypride>
References: <1294063052.6782.32.camel@mypride> <1293731252.6936.47.camel@mypride> <1294073228.3694.0.camel@mypride> <1293842375.17929.3.camel@mypride> <1294164739.6880.75.camel@mypride>
Message-ID:

On Tue, Jan 4, 2011 at 1:12 PM, Yury V. Zaytsev wrote:
> [...]
> At some point, BFGS was converging fast and nicely, until I found out
> that for different starting points it converged to completely different
> solutions. Then I tried DS, which proved to be much more robust, but
> extremely slow, and it kept getting stuck when the simplex had to get
> through a "needle's eye" in the parameter space.

"Welcome to the world of messy optimization problems"

If you have local optima, and your optimization problem is not well
behaved, then you might need a global optimizer. I think the algorithms
in nlopt will also get stuck at local optima, since all these
optimizers only use local information.
Josef

From ryanlists at gmail.com  Tue Jan  4 21:07:40 2011
From: ryanlists at gmail.com (Ryan Krauss)
Date: Tue, 4 Jan 2011 20:07:40 -0600
Subject: [SciPy-User] how to create a phase portrait for a nonlinear diff. eqn.
Message-ID:

I am teaching a nonlinear controls course this coming semester. I plan
to have the students write code to generate phase portraits as a
project. It is fairly easy to randomly (or intelligently?) create
initial condition seeds and run integrate.odeint. What isn't obvious to
me is how to put arrows on phase portrait lines to indicate the
direction of the evolution over time. For example, the attached code
creates a phase portrait for a fairly simple system. The graph is shown
in the attached png. But this equilibrium point is either stable or
unstable depending on whether the curve spirals in or out over time.
How do I write code to automatically determine the direction of
increasing time and indicate it by arrow heads on the graph?

I know x, xdot, and time as vectors for each point on the graph. I
guess I could numerically determine (d xdot)/dx for each point, but is
that the best route to go? And that leads to issues as dx gets
small....

Thanks,

Ryan

-------------- next part --------------
A non-text attachment was scrubbed...
Name: pendulum_phase_portrait.py
Type: text/x-python
Size: 349 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: phase_portrait.png
Type: image/png
Size: 35577 bytes
Desc: not available
URL:

From rob.clewley at gmail.com  Tue Jan  4 22:31:46 2011
From: rob.clewley at gmail.com (Rob Clewley)
Date: Tue, 4 Jan 2011 22:31:46 -0500
Subject: [SciPy-User] how to create a phase portrait for a nonlinear diff. eqn.
In-Reply-To:
References:
Message-ID:

On Tue, Jan 4, 2011 at 9:07 PM, Ryan Krauss wrote:
> I am teaching a nonlinear controls course this coming semester.
> I plan to have the students write code to generate phase portraits as
> a project. It is fairly easy to randomly (or intelligently?) create
> initial condition seeds and run integrate.odeint. What isn't obvious
> to me is how to put arrows on phase portrait lines to indicate the
> direction of the evolution over time. For example, the attached code
> creates a phase portrait for a fairly simple system. The graph is
> shown in the attached png. But this equilibrium point is either
> stable or unstable depending on whether the curve spirals in or out
> over time. How do I write code to automatically determine the
> direction of increasing time and indicate it by arrow heads on the
> graph?
>
> I know x, xdot, and time as vectors for each point on the graph. I
> guess I could numerically determine (d xdot)/dx for each point, but is
> that the best route to go? And that leads to issues as dx gets
> small....

Typically, in publications only a small number of arrows per curve is
used. You should have no "dx" problems provided you keep sufficiently
far from the equilibrium in the plane (e.g., using a radius distance
threshold).

If I understand your problem correctly, I'd pick no more than three
points sufficiently far from the equilibrium on the curve (maybe
approx equidistant in arc length), and then simply look at the next
point forward in time in the phaseplane. That already gives you a
linearized forward direction of a tangent line for your arrow and a
basepoint. If your time step is reasonably good then I'd expect this
to look presentable, at least for the purposes of a class project!
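A minimal sketch of the idea, where x and xdot are the trajectory
arrays from your script (indices and spacing chosen arbitrarily):

import numpy as np
import matplotlib.pyplot as plt

for i in np.linspace(0, len(x) - 10, 3).astype(int):
    j = i + 5  # a few samples forward in time, so the arrow is visible
    plt.annotate('', xy=(x[j], xdot[j]), xytext=(x[i], xdot[i]),
                 arrowprops=dict(arrowstyle='->'))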
-Rob

From ryanlists at gmail.com  Wed Jan  5 01:10:55 2011
From: ryanlists at gmail.com (Ryan Krauss)
Date: Wed, 5 Jan 2011 00:10:55 -0600
Subject: [SciPy-User] how to create a phase portrait for a nonlinear diff. eqn.
In-Reply-To:
References:
Message-ID:

It's good to know I am not completely crazy.... But then practically
what do you do to get nice-looking embedded arrows:

------>-------->-------

It feels like I am reinventing the wheel here a bit. It seems like I
would have to define a dy and dx for my arrow head and project them
along and perpendicular to the tangent.

On Tue, Jan 4, 2011 at 9:31 PM, Rob Clewley wrote:
> [...]
> If I understand your problem correctly, I'd pick no more than three
> points sufficiently far from the equilibrium on the curve (maybe
> approx equidistant in arc length), and then simply look at the next
> point forward in time in the phaseplane. That already gives you a
> linearized forward direction of a tangent line for your arrow and a
> basepoint. If your time step is reasonably good then I'd expect this
> to look presentable, at least for the purposes of a class project!
>
> -Rob

From alan.isaac at gmail.com  Wed Jan  5 09:39:36 2011
From: alan.isaac at gmail.com (Alan G Isaac)
Date: Wed, 05 Jan 2011 09:39:36 -0500
Subject: [SciPy-User] how to create a phase portrait for a nonlinear diff. eqn.
In-Reply-To:
References:
Message-ID: <4D2482A8.9020606@gmail.com>

On 1/4/2011 9:07 PM, Ryan Krauss wrote:
> how to put arrows on phase portrait lines to indicate the
> direction of the evolution over time.

http://matplotlib.sourceforge.net/examples/pylab_examples/quiver_demo.html

fwiw,
Alan Isaac

From jdh2358 at gmail.com  Wed Jan  5 10:08:28 2011
From: jdh2358 at gmail.com (John Hunter)
Date: Wed, 5 Jan 2011 09:08:28 -0600
Subject: [SciPy-User] how to create a phase portrait for a nonlinear diff. eqn.
In-Reply-To:
References:
Message-ID:

On Tue, Jan 4, 2011 at 8:07 PM, Ryan Krauss wrote:
> What isn't obvious to me is how to put arrows on phase portrait lines
> to indicate the direction of the evolution over time.
> [...]

here's one example

import numpy as np
import matplotlib.pyplot as plt
import scipy.integrate as integrate

def dr(r, f):
    """
    return the derivative of *r* (the rabbit population) evaluated as
    a function of *r* and *f*. The function should work whether *r*
    and *f* are scalars, 1D arrays or 2D arrays. The return value
    should have the same dimensionality (shape) as the inputs *r* and
    *f*.
    """
    return alpha*r - beta*r*f

def df(r, f):
    """
    return the derivative of *f* (the fox population) evaluated as a
    function of *r* and *f*. The function should work whether *r* and
    *f* are scalars, 1D arrays or 2D arrays. The return value should
    have the same dimensionality (shape) as the inputs *r* and *f*.
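    In this model, the fox derivative is gamma*r*f - delta*f: growth
    from predation on rabbits minus natural fox deaths.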
""" return gamma*r*f - delta*f def derivs(state, t): """ Return the derivatives of R and F, stored in the *state* vector:: state = [R, F] The return data should be [dR, dF] which are the derivatives of R and F at position state and time *t* """ R, F = state # and foxes deltar = dr(r, f) # in rabbits deltaf = df(r, f) # in foxes return deltar, deltaf # the parameters for rabbit and fox growth and interactions alpha, delta = 1, .25 beta, gamma = .2, .05 # the initial population of rabbits and foxes r0 = 20 f0 = 10 # create a time array from 0..100 sampled at 0.1 second steps t = np.arange(0.0, 100, 0.1) y0 = [r0, f0] # the initial [rabbits, foxes] state vector # integrate your ODE using scipy.integrate. Read the help to see what # is available HINT: see scipy.integrate.odeint y = integrate.odeint(derivs, y0, t) # the return value from the integration is a Nx2 array. Extract it # into two 1D arrays caled r and f using numpy slice indexing r = y[:,0] # extract the rabbits vector f = y[:,1] # extract the foxes vector # time series plot: plot the population of rabbits and foxes as a # funciton of time plt.figure() plt.plot(t, r, label='rabbits') plt.plot(t, f, label='foxes') plt.xlabel('time (years)') plt.ylabel('population') plt.title('population trajectories') plt.grid() plt.legend() plt.savefig('lotka_volterra.png', dpi=150) plt.savefig('lotka_volterra.eps') # phase-plane plot: plot the population of foxes versus rabbits # make sure you include and xlabel, ylabel and title plt.figure() plt.plot(r, f, color='red') plt.xlabel('rabbits') plt.ylabel('foxes') plt.title('phase plane') # Create 2D arrays for R and F to represent the entire phase plane -- # the point (R[i,j], F[i,j]) is a single (rabbit, fox) combinations. # pass these arrays to the functions dr and df above to get 2D arrays # of dR and dF evaluated at every point in the phase plance. rmax = 1.1 * r.max() fmax = 1.1 * f.max() R, F = np.meshgrid(np.arange(-1, rmax), np.arange(-1, fmax)) dR = dr(R, F) dF = df(R, F) plt.quiver(R, F, dR, dF) # Now find the nul-clines, for dR and dF respectively. These are the # points where dR=0 and dF=0 in the (R, F) phase plane. You can use # matplotlib's countour routine to find the zero level. See the # levels keyword to contour. You will need a fine mesh of R and F, # reevaluate dr and df on the finer grid, and use contour to find the # level curves R, F = np.meshgrid(np.arange(-1, rmax, 0.1), np.arange(-1, fmax, 0.1)) dR = dr(R, F) dF = df(R, F) plt.contour(R, F, dR, levels=[0], linewidths=3, colors='blue') plt.contour(R, F, dF, levels=[0], linewidths=3, colors='green') plt.ylabel('foxes') plt.title('trajectory, direction field and null clines') plt.savefig('lotka_volterra_pplane.png', dpi=150) plt.savefig('lotka_volterra_pplane.eps') plt.show() -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: lotka_volterra.py Type: application/octet-stream Size: 3613 bytes Desc: not available URL: From lorenzo.agostino at cern.ch Thu Jan 6 06:23:56 2011 From: lorenzo.agostino at cern.ch (lorenzo agostino) Date: Thu, 6 Jan 2011 12:23:56 +0100 Subject: [SciPy-User] question on refreshing window Message-ID: <7531BB99-0CFA-4E48-AC3D-EF62CB18FCAE@cern.ch> Hello, I am completely new to scipy and fairly new to python. I am playing around a bit with my code and I encountered a problem refreshing the name of a window. 
Essentially I would like to plot two distributions sequentially in the
same window, giving a new title to the window. The problem is that for
the second plot an extra window pops up (named Figure 1) instead of the
first one simply being refreshed. Here is the piece of code:

pylab.figure().canvas.set_window_title("Distribution 1")
pylab.hist(incr, 10)    <- first plot in the first window
pylab.show()

pylab.figure().canvas.set_window_title("Distribution 2")
pylab.hist(myvec, 10)   <- second plot in the first window
pylab.show()

This is the simple version I tried; I tried many different things with
no luck. Any idea?

Thanks!
l.

From JRadinger at gmx.at  Thu Jan  6 06:27:58 2011
From: JRadinger at gmx.at (Johannes Radinger)
Date: Thu, 06 Jan 2011 12:27:58 +0100
Subject: [SciPy-User] solving integration, density function
In-Reply-To:
References: <20101221110650.32180@gmx.net> <07515CE8-8F03-40C6-9A29-FA1AE7AE8AF1@gmail.com> <20101221124827.53380@gmx.net>
Message-ID: <20110106112758.65840@gmx.net>

Hey

Last time you helped me a lot with my normal probability density
function. My problem now is quite simple; I think it's just a problem
with the syntax (brackets):

There are two ways to calculate the pdf, with the stats function and
purely mathematically, but they give different results and I can't find
where I make the mistake:

func1 = stats.norm.pdf(x, loc=m, scale=(s1))
func2 = 1/((s1)*(math.sqrt(2*math.pi))))*(math.exp(((-0.5)*((x-m)/(s1)))**2)

Where is the problem?

thank you...

/j

-------- Original-Nachricht --------
> Datum: Tue, 21 Dec 2010 09:18:15 -0500
> Von: Skipper Seabold
> An: SciPy Users List
> Betreff: Re: [SciPy-User] solving integration, density function

> On Tue, Dec 21, 2010 at 7:48 AM, Johannes Radinger wrote:
>>
>> -------- Original-Nachricht --------
>>> Datum: Tue, 21 Dec 2010 13:20:47 +0100
>>> Von: Gregor Thalhammer
>>> An: SciPy Users List
>>> Betreff: Re: [SciPy-User] solving integration, density function
>>
>>> Am 21.12.2010 um 12:06 schrieb Johannes Radinger:
>>>
>>>> Hello,
>>>>
>>>> I am really new to python and Scipy.
>>>> I want to solve an integrated function with a python script
>>>> and I think Scipy should do that :)
>>>>
>>>> My task:
>>>>
>>>> I do have some variables (s, m, K) which are now absolutely set,
>>>> but in future I'll get the values via another process of python.
>>>>
>>>> s = 400
>>>> m = 0
>>>> K = 1
>>>>
>>>> And I have the following function:
>>>> (1/((s*K)*sqrt(2*pi)))*exp(-1/2*((x-m)/(s*K))^2), which is the
>>>> density function of the normal distribution, a symmetrical curve
>>>> with the mean (m) of 0.
>>>>
>>>> The total area under the curve is 1 (100%), which is for an
>>>> integration from -inf to +inf.
>>>> I want to know x in the case of 99%: meaning that the integral (-x
>>>> to +x) of the function is 0.99. Due to the symmetry of the curve
>>>> you can also set the integral from 0 to +x equal to (0.99/2):
>>>>
>>>> 0.99 = integral((1/((s*K)*sqrt(2*pi)))*exp(-1/2*((x-m)/(s*K))^2), -x, x)
>>>> resp.
>>>> (0.99/2) = integral((1/((s*K)*sqrt(2*pi)))*exp(-1/2*((x-m)/(s*K))^2), 0, x)
>>>>
>>>> How can I solve that question in Scipy/python
>>>> so that I get x in the end. I don't know how to write
>>>> the code...
>>>
>>> --->
>>> erf(x[, out])
>>>
>>>     y=erf(z) returns the error function of complex argument defined
From JRadinger at gmx.at Thu Jan 6 06:27:58 2011
From: JRadinger at gmx.at (Johannes Radinger)
Date: Thu, 06 Jan 2011 12:27:58 +0100
Subject: [SciPy-User] solving integration, density function
In-Reply-To:
References: <20101221110650.32180@gmx.net> <07515CE8-8F03-40C6-9A29-FA1AE7AE8AF1@gmail.com> <20101221124827.53380@gmx.net>
Message-ID: <20110106112758.65840@gmx.net>

Hey

Last time you helped me a lot with my normal probability density function. My problem now is quite simple, I think it's just a problem with the syntax (brackets):

There are two ways to calculate the pdf, with the stats function and purely mathematically, but they give different results and I can't find where I make the mistake:

func1 = stats.norm.pdf(x, loc=m, scale=(s1))
func2 = 1/((s1)*(math.sqrt(2*math.pi))))*(math.exp(((-0.5)*((x-m)/(s1)))**2)

Where is the problem?

thank you...

/j
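For reference, a balanced version of the two expressions (a sketch with made-up values: the func2 as posted has one closing bracket too many after math.pi, and its **2 also squares the -0.5 factor instead of only the standardized distance; float() guards against Python 2 integer division, which bites once x, m and s1 are integers):

import math
from scipy import stats

s1, m, x = 3.0, 0.0, 2.0   # illustrative values only

func1 = stats.norm.pdf(x, loc=m, scale=s1)

# exp(-0.5 * z**2) with z = (x - m)/s1; note where each bracket closes
z = (x - m) / float(s1)
func2 = 1.0 / (s1 * math.sqrt(2 * math.pi)) * math.exp(-0.5 * z ** 2)

print func1   # both print the same value, about 0.1065
print func2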
-------- Original-Nachricht --------
> Datum: Tue, 21 Dec 2010 09:18:15 -0500
> Von: Skipper Seabold
> An: SciPy Users List
> Betreff: Re: [SciPy-User] solving integration, density function

> On Tue, Dec 21, 2010 at 7:48 AM, Johannes Radinger wrote:
> >
> > -------- Original-Nachricht --------
> >> Datum: Tue, 21 Dec 2010 13:20:47 +0100
> >> Von: Gregor Thalhammer
> >> An: SciPy Users List
> >> Betreff: Re: [SciPy-User] solving integration, density function
> >
> >>
> >> Am 21.12.2010 um 12:06 schrieb Johannes Radinger:
> >>
> >> > Hello,
> >> >
> >> > I am really new to python and Scipy.
> >> > I want to solve an integrated function with a python script
> >> > and I think Scipy should do that :)
> >> >
> >> > My task:
> >> >
> >> > I do have some variables (s, m, K,) which are now absolutely set, but in
> >> future I'll get the values via another process of python.
> >> >
> >> > s = 400
> >> > m = 0
> >> > K = 1
> >> >
> >> > And I have the following function:
> >> > (1/((s*K)*sqrt(2*pi)))*exp((-1/2*(((x-m)/s*K))^2) which is the density
> >> function of the normal distribution, a symmetrical curve with the mean (m) of 0.
> >> >
> >> > The total area under the curve is 1 (100%) which is for an integration
> >> from -inf to +inf.
> >> > I want to know x in the case of 99%: meaning that the integral (-x to
> >> +x) of the function is 0.99. Due to the symmetry of the curve you can also set
> >> the integral from 0 to +x equal to (0.99/2):
> >> >
> >> > 0.99 = integral((1/((s*K)*sqrt(2*pi)))*exp((-1/2*(((x-m)/s*K))^2)), -x, x)
> >> > resp.
> >> > (0.99/2) = integral((1/((s*K)*sqrt(2*pi)))*exp((-1/2*(((x-m)/s*K))^2)), 0, x)
> >> >
> >> > How can I solve that question in Scipy/python
> >> > so that I get x in the end. I don't know how to write
> >> > the code...
> >>
> >>
> >> --->
> >> erf(x[, out])
> >>
> >>     y=erf(z) returns the error function of complex argument defined
> >>     as 2/sqrt(pi)*integral(exp(-t**2),t=0..z)
> >> ---
> >>
> >> from scipy.special import erf, erfinv
> >> erfinv(0.99)*sqrt(2)
> >>
> >>
> >> Gregor
> >>
> >
> >
> > Thank you Gregor,
> > I only understand a part of your answer... I know that the integral of
> the density function is an error function and I know that the argument "from
> scipy.special import erf, erfinv" is to load the module.
> >
> > But how do I write the code including my original function so that I can
> modify it (I have also another function I want to integrate). How do I
> start? I want to save the whole code to a python-script I can then load e.g.
> into ArcGIS where I want to use the value of x for further calculations.
> >
>
> Are you always integrating densities?  If so, you don't want to use
> integrals probably, but you could use scipy.stats
>
> erfinv(.99)*np.sqrt(2)
> 2.5758293035489004
>
> from scipy import stats
>
> stats.norm.ppf(.995)
> 2.5758293035489004
>
> Skipper
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

--
NEU: FreePhone - kostenlos mobil telefonieren und surfen! Jetzt informieren: http://www.gmx.net/de/go/freephone

From warren.weckesser at enthought.com Thu Jan 6 06:48:25 2011
From: warren.weckesser at enthought.com (Warren Weckesser)
Date: Thu, 6 Jan 2011 05:48:25 -0600
Subject: [SciPy-User] solving integration, density function
In-Reply-To: <20110106112758.65840@gmx.net>
References: <20101221110650.32180@gmx.net> <07515CE8-8F03-40C6-9A29-FA1AE7AE8AF1@gmail.com> <20101221124827.53380@gmx.net> <20110106112758.65840@gmx.net>
Message-ID:

On Thu, Jan 6, 2011 at 5:27 AM, Johannes Radinger wrote:

> Hey
>
> Last time you helped me a lot with my normal
> probability density function. My problem now is
> quite simple, I think it's just a problem with
> the syntax (brackets):
>
> There are two ways to calculate the pdf, with the
> stats function and purely mathematically, but
> they give different results and I can't find
> where I make the mistake:
>
>
> func1 = stats.norm.pdf(x, loc=m, scale=(s1))
> func2 = 1/((s1)*(math.sqrt(2*math.pi))))*(math.exp(((-0.5)*((x-m)/(s1)))**2)
>
> Where is the problem?
>

func2 = 1/(s1*math.sqrt(2*math.pi)) * math.exp(-0.5*((x-m)/s1)**2)

Warren

> thank you...
>
> /j
>
> -------- Original-Nachricht --------
> > Datum: Tue, 21 Dec 2010 09:18:15 -0500
> > Von: Skipper Seabold
> > An: SciPy Users List
> > Betreff: Re: [SciPy-User] solving integration, density function
>
> > On Tue, Dec 21, 2010 at 7:48 AM, Johannes Radinger wrote:
> > >
> > > -------- Original-Nachricht --------
> > >> Datum: Tue, 21 Dec 2010 13:20:47 +0100
> > >> Von: Gregor Thalhammer
> > >> An: SciPy Users List
> > >> Betreff: Re: [SciPy-User] solving integration, density function
> > >
> > >>
> > >> Am 21.12.2010 um 12:06 schrieb Johannes Radinger:
> > >>
> > >> > Hello,
> > >> >
> > >> > I am really new to python and Scipy.
> > >> > I want to solve an integrated function with a python script
> > >> > and I think Scipy should do that :)
> > >> >
> > >> > My task:
> > >> >
> > >> > I do have some variables (s, m, K,) which are now absolutely set, but in
> > >> future I'll get the values via another process of python.
> > >> >
> > >> > s = 400
> > >> > m = 0
> > >> > K = 1
> > >> >
> > >> > And I have the following function:
> > >> > (1/((s*K)*sqrt(2*pi)))*exp((-1/2*(((x-m)/s*K))^2) which is the density
> > >> function of the normal distribution, a symmetrical curve with the mean (m) of 0.
> > >> >
> > >> > The total area under the curve is 1 (100%) which is for an integration
> > >> from -inf to +inf.
> > >> > I want to know x in the case of 99%: meaning that the integral (-x to
> > >> +x) of the function is 0.99. Due to the symmetry of the curve you can also set
> > >> the integral from 0 to +x equal to (0.99/2):
> > >> >
> > >> > 0.99 = integral((1/((s*K)*sqrt(2*pi)))*exp((-1/2*(((x-m)/s*K))^2)), -x, x)
> > >> > resp.
> > >> > (0.99/2) = integral((1/((s*K)*sqrt(2*pi)))*exp((-1/2*(((x-m)/s*K))^2)), 0, x)
> > >> >
> > >> > How can I solve that question in Scipy/python
> > >> > so that I get x in the end. I don't know how to write
> > >> > the code...
> > >>
> > >>
> > >> --->
> > >> erf(x[, out])
> > >>
> > >>     y=erf(z) returns the error function of complex argument defined
> > >>     as 2/sqrt(pi)*integral(exp(-t**2),t=0..z)
> > >> ---
> > >>
> > >> from scipy.special import erf, erfinv
> > >> erfinv(0.99)*sqrt(2)
> > >>
> > >>
> > >> Gregor
> > >>
> > >
> > >
> > > Thank you Gregor,
> > > I only understand a part of your answer... I know that the integral of
> > the density function is an error function and I know that the argument "from
> > scipy.special import erf, erfinv" is to load the module.
> > >
> > > But how do I write the code including my original function so that I can
> > modify it (I have also another function I want to integrate). How do I
> > start? I want to save the whole code to a python-script I can then load e.g.
> > into ArcGIS where I want to use the value of x for further calculations.
> > >
> >
> > Are you always integrating densities?  If so, you don't want to use
> > integrals probably, but you could use scipy.stats
> >
> > erfinv(.99)*np.sqrt(2)
> > 2.5758293035489004
> >
> > from scipy import stats
> >
> > stats.norm.ppf(.995)
> > 2.5758293035489004
> >
> > Skipper
> > _______________________________________________
> > SciPy-User mailing list
> > SciPy-User at scipy.org
> > http://mail.scipy.org/mailman/listinfo/scipy-user
>
> --
> NEU: FreePhone - kostenlos mobil telefonieren und surfen! Jetzt informieren: http://www.gmx.net/de/go/freephone
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From yury at shurup.com Thu Jan 6 06:51:26 2011
From: yury at shurup.com (Yury V. Zaytsev)
Date: Thu, 06 Jan 2011 12:51:26 +0100
Subject: [SciPy-User] question on refreshing window
In-Reply-To: <7531BB99-0CFA-4E48-AC3D-EF62CB18FCAE@cern.ch>
References: <7531BB99-0CFA-4E48-AC3D-EF62CB18FCAE@cern.ch>
Message-ID: <1294314686.6871.13.camel@mypride>

On Thu, 2011-01-06 at 12:23 +0100, lorenzo agostino wrote:

> This is the simple version I tried, I tried many different things with no luck. Any idea?

I think you should explicitly specify the number of the figure, i.e.

pylab.figure(1).canvas.set_window_title("Distribution 2")

Or otherwise save and re-use the handle of the figure. Refer to matplotlib documentation for more details.

--
Sincerely yours,
Yury V.
Zaytsev From JRadinger at gmx.at Thu Jan 6 07:01:56 2011 From: JRadinger at gmx.at (Johannes Radinger) Date: Thu, 06 Jan 2011 13:01:56 +0100 Subject: [SciPy-User] solving integration, density function In-Reply-To: References: <20101221110650.32180@gmx.net> <07515CE8-8F03-40C6-9A29-FA1AE7AE8AF1@gmail.com> <20101221124827.53380@gmx.net> <20110106112758.65840@gmx.net> Message-ID: <20110106120156.165600@gmx.net> Thank you for the simplification of the formula, but I still get a different result in the case when x Datum: Thu, 6 Jan 2011 05:48:25 -0600 > Von: Warren Weckesser > An: SciPy Users List > Betreff: Re: [SciPy-User] solving integration, density function > On Thu, Jan 6, 2011 at 5:27 AM, Johannes Radinger > wrote: > > > Hey > > > > Last time you helped me a lot with my normal > > probabilty density function. My problem now is > > quite simple, I think it's just a problem with > > the syntax (brackets): > > > > There are two ways to calculate the pdf, with the > > stats-function and with pure mathematically, but > > the give different results and I can't find the > > where I make the mistake: > > > > > > func1 = stats.norm.pdf(x, loc=m, scale=(s1)) > > func2 = > > 1/((s1)*(math.sqrt(2*math.pi))))*(math.exp(((-0.5)*((x-m)/(s1)))**2) > > > > Where is the problem > > > > > func2 = 1/(s1*math.sqrt(2*math.pi)) * math.exp(-0.5*((x-m)/s1)**2) > > > Warren > > > > > thank you... > > > > /j > > > > -------- Original-Nachricht -------- > > > Datum: Tue, 21 Dec 2010 09:18:15 -0500 > > > Von: Skipper Seabold > > > An: SciPy Users List > > > Betreff: Re: [SciPy-User] solving integration, density function > > > > > On Tue, Dec 21, 2010 at 7:48 AM, Johannes Radinger > > > wrote: > > > > > > > > -------- Original-Nachricht -------- > > > >> Datum: Tue, 21 Dec 2010 13:20:47 +0100 > > > >> Von: Gregor Thalhammer > > > >> An: SciPy Users List > > > >> Betreff: Re: [SciPy-User] solving integration, density function > > > > > > > >> > > > >> Am 21.12.2010 um 12:06 schrieb Johannes Radinger: > > > >> > > > >> > Hello, > > > >> > > > > >> > I am really new to python and Scipy. > > > >> > I want to solve a integrated function with a python script > > > >> > and I think Scipy should do that :) > > > >> > > > > >> > My task: > > > >> > > > > >> > I do have some variables (s, m, K,) which are now absolutely set, > > but > > > in > > > >> future I'll get the values via another process of pyhton. > > > >> > > > > >> > s = 400 > > > >> > m = 0 > > > >> > K = 1 > > > >> > > > > >> > And have have following function: > > > >> > (1/((s*K)*sqrt(2*pi)))*exp((-1/2*(((x-m)/s*K))^2) which is the > > > density > > > >> function of the normal distribution a symetrical curve with the > mean > > > (m) of > > > >> 0. > > > >> > > > > >> > The total area under the curve is 1 (100%) which is for an > > > integration > > > >> from -inf to +inf. > > > >> > I want to know x in the case of 99%: meaning that the integral > (-x > > to > > > >> +x) of the function is 0.99. Due to the symetry of the curve you > can > > > also set > > > >> the integral from 0 to +x equal to (0.99/2): > > > >> > > > > >> > 0.99 = > integral((1/((s*K)*sqrt(2*pi)))*exp((-1/2*(((x-m)/s*K))^2)), > > > -x, > > > >> x) > > > >> > resp. > > > >> > (0.99/2) = > > > integral((1/((s*K)*sqrt(2*pi)))*exp((-1/2*(((x-m)/s*K))^2)), > > > >> 0, x) > > > >> > > > > >> > How can I solve that question in Scipy/python > > > >> > so that I get x in the end. I don't know how to write > > > >> > the code... 
> > > >> > > > >> > > > >> ---> > > > >> erf(x[, out]) > > > >> > > > >> y=erf(z) returns the error function of complex argument defined > > > as > > > >> as 2/sqrt(pi)*integral(exp(-t**2),t=0..z) > > > >> --- > > > >> > > > >> from scipy.special import erf, erfinv > > > >> erfinv(0.99)*sqrt(2) > > > >> > > > >> > > > >> Gregor > > > >> > > > > > > > > > > > > Thank you Gregor, > > > > I only understand a part of your answer... I know that the integral > of > > > the density function is a error function and I know that the argument > > "from > > > scipy.special import erf, erfinv" is to load the module. > > > > > > > > But how do I write the code including my orignial function so that I > > can > > > modify it (I have also another function I want to integrate). how do i > > > start? I want to save the whole code to a python-script I can then > load > > e.g. > > > into ArcGIS where I want to use the value of x for further > calculations. > > > > > > > > > > Are you always integrating densities? If so, you don't want to use > > > integrals probably, but you could use scipy.stats > > > > > > erfinv(.99)*np.sqrt(2) > > > 2.5758293035489004 > > > > > > from scipy import stats > > > > > > stats.norm.ppf(.995) > > > 2.5758293035489004 > > > > > > Skipper > > > _______________________________________________ > > > SciPy-User mailing list > > > SciPy-User at scipy.org > > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > -- > > NEU: FreePhone - kostenlos mobil telefonieren und surfen! > > Jetzt informieren: http://www.gmx.net/de/go/freephone > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- NEU: FreePhone - kostenlos mobil telefonieren und surfen! Jetzt informieren: http://www.gmx.net/de/go/freephone From warren.weckesser at enthought.com Thu Jan 6 07:10:14 2011 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Thu, 6 Jan 2011 06:10:14 -0600 Subject: [SciPy-User] solving integration, density function In-Reply-To: <20110106120156.165600@gmx.net> References: <20101221110650.32180@gmx.net> <07515CE8-8F03-40C6-9A29-FA1AE7AE8AF1@gmail.com> <20101221124827.53380@gmx.net> <20110106112758.65840@gmx.net> <20110106120156.165600@gmx.net> Message-ID: On Thu, Jan 6, 2011 at 6:01 AM, Johannes Radinger wrote: > Thank you for the simplification of the formula, > but I still get a different result in the case > when x > here a code to try: > > ******************** > import math > from scipy import stats > > s1 = 3 > m = 0 > p = 1 > x = 2 > > Your values are integers, so the division in the expression (x-m)/s1 in the code below uses integer division, and (x-m)/s1 will be 0. > func = stats.norm.pdf(x, loc=m, scale=(s1)) > func2 = (1/(s1*math.sqrt(2*math.pi)) * math.exp(-0.5*((x-m)/s1)**2)) > > Change that to: func2 = (1/(s1*math.sqrt(2*math.pi)) * math.exp(-0.5*((x-m)/float(s1))**2)) or change the behavior of the division operator by first executing from __future__ import division Warren print func > print func2 > ******************************** > > /j > > -------- Original-Nachricht -------- > > Datum: Thu, 6 Jan 2011 05:48:25 -0600 > > Von: Warren Weckesser > > An: SciPy Users List > > Betreff: Re: [SciPy-User] solving integration, density function > > > On Thu, Jan 6, 2011 at 5:27 AM, Johannes Radinger > > wrote: > > > > > Hey > > > > > > Last time you helped me a lot with my normal > > > probabilty density function. 
My problem now is > > > quite simple, I think it's just a problem with > > > the syntax (brackets): > > > > > > There are two ways to calculate the pdf, with the > > > stats-function and with pure mathematically, but > > > the give different results and I can't find the > > > where I make the mistake: > > > > > > > > > func1 = stats.norm.pdf(x, loc=m, scale=(s1)) > > > func2 = > > > 1/((s1)*(math.sqrt(2*math.pi))))*(math.exp(((-0.5)*((x-m)/(s1)))**2) > > > > > > Where is the problem > > > > > > > > > func2 = 1/(s1*math.sqrt(2*math.pi)) * math.exp(-0.5*((x-m)/s1)**2) > > > > > > Warren > > > > > > > > > thank you... > > > > > > /j > > > > > > -------- Original-Nachricht -------- > > > > Datum: Tue, 21 Dec 2010 09:18:15 -0500 > > > > Von: Skipper Seabold > > > > An: SciPy Users List > > > > Betreff: Re: [SciPy-User] solving integration, density function > > > > > > > On Tue, Dec 21, 2010 at 7:48 AM, Johannes Radinger > > > > > wrote: > > > > > > > > > > -------- Original-Nachricht -------- > > > > >> Datum: Tue, 21 Dec 2010 13:20:47 +0100 > > > > >> Von: Gregor Thalhammer > > > > >> An: SciPy Users List > > > > >> Betreff: Re: [SciPy-User] solving integration, density function > > > > > > > > > >> > > > > >> Am 21.12.2010 um 12:06 schrieb Johannes Radinger: > > > > >> > > > > >> > Hello, > > > > >> > > > > > >> > I am really new to python and Scipy. > > > > >> > I want to solve a integrated function with a python script > > > > >> > and I think Scipy should do that :) > > > > >> > > > > > >> > My task: > > > > >> > > > > > >> > I do have some variables (s, m, K,) which are now absolutely > set, > > > but > > > > in > > > > >> future I'll get the values via another process of pyhton. > > > > >> > > > > > >> > s = 400 > > > > >> > m = 0 > > > > >> > K = 1 > > > > >> > > > > > >> > And have have following function: > > > > >> > (1/((s*K)*sqrt(2*pi)))*exp((-1/2*(((x-m)/s*K))^2) which is the > > > > density > > > > >> function of the normal distribution a symetrical curve with the > > mean > > > > (m) of > > > > >> 0. > > > > >> > > > > > >> > The total area under the curve is 1 (100%) which is for an > > > > integration > > > > >> from -inf to +inf. > > > > >> > I want to know x in the case of 99%: meaning that the integral > > (-x > > > to > > > > >> +x) of the function is 0.99. Due to the symetry of the curve you > > can > > > > also set > > > > >> the integral from 0 to +x equal to (0.99/2): > > > > >> > > > > > >> > 0.99 = > > integral((1/((s*K)*sqrt(2*pi)))*exp((-1/2*(((x-m)/s*K))^2)), > > > > -x, > > > > >> x) > > > > >> > resp. > > > > >> > (0.99/2) = > > > > integral((1/((s*K)*sqrt(2*pi)))*exp((-1/2*(((x-m)/s*K))^2)), > > > > >> 0, x) > > > > >> > > > > > >> > How can I solve that question in Scipy/python > > > > >> > so that I get x in the end. I don't know how to write > > > > >> > the code... > > > > >> > > > > >> > > > > >> ---> > > > > >> erf(x[, out]) > > > > >> > > > > >> y=erf(z) returns the error function of complex argument > defined > > > > as > > > > >> as 2/sqrt(pi)*integral(exp(-t**2),t=0..z) > > > > >> --- > > > > >> > > > > >> from scipy.special import erf, erfinv > > > > >> erfinv(0.99)*sqrt(2) > > > > >> > > > > >> > > > > >> Gregor > > > > >> > > > > > > > > > > > > > > > Thank you Gregor, > > > > > I only understand a part of your answer... I know that the integral > > of > > > > the density function is a error function and I know that the argument > > > "from > > > > scipy.special import erf, erfinv" is to load the module. 
> > > > > > > > > > But how do I write the code including my orignial function so that > I > > > can > > > > modify it (I have also another function I want to integrate). how do > i > > > > start? I want to save the whole code to a python-script I can then > > load > > > e.g. > > > > into ArcGIS where I want to use the value of x for further > > calculations. > > > > > > > > > > > > > Are you always integrating densities? If so, you don't want to use > > > > integrals probably, but you could use scipy.stats > > > > > > > > erfinv(.99)*np.sqrt(2) > > > > 2.5758293035489004 > > > > > > > > from scipy import stats > > > > > > > > stats.norm.ppf(.995) > > > > 2.5758293035489004 > > > > > > > > Skipper > > > > _______________________________________________ > > > > SciPy-User mailing list > > > > SciPy-User at scipy.org > > > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > > -- > > > NEU: FreePhone - kostenlos mobil telefonieren und surfen! > > > Jetzt informieren: http://www.gmx.net/de/go/freephone > > > _______________________________________________ > > > SciPy-User mailing list > > > SciPy-User at scipy.org > > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > -- > NEU: FreePhone - kostenlos mobil telefonieren und surfen! > Jetzt informieren: http://www.gmx.net/de/go/freephone > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Thu Jan 6 07:10:58 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 6 Jan 2011 07:10:58 -0500 Subject: [SciPy-User] solving integration, density function In-Reply-To: <20110106120156.165600@gmx.net> References: <20101221110650.32180@gmx.net> <07515CE8-8F03-40C6-9A29-FA1AE7AE8AF1@gmail.com> <20101221124827.53380@gmx.net> <20110106112758.65840@gmx.net> <20110106120156.165600@gmx.net> Message-ID: On Thu, Jan 6, 2011 at 7:01 AM, Johannes Radinger wrote: > Thank you for the simplification of the formula, > but I still get a different result in the case > when x > here a code to try: > > ******************** > import math > from scipy import stats > > s1 = 3 > m = 0 > p = 1 > x = 2 > > func = stats.norm.pdf(x, loc=m, scale=(s1)) > func2 = (1/(s1*math.sqrt(2*math.pi)) * math.exp(-0.5*((x-m)/s1)**2)) > > print func > print func2 > ******************************** use floats, I think you just run into integer division (x-m)/s1 Josef > > /j > > -------- Original-Nachricht -------- >> Datum: Thu, 6 Jan 2011 05:48:25 -0600 >> Von: Warren Weckesser >> An: SciPy Users List >> Betreff: Re: [SciPy-User] solving integration, density function > >> On Thu, Jan 6, 2011 at 5:27 AM, Johannes Radinger >> wrote: >> >> > Hey >> > >> > Last time you helped me a lot with my normal >> > probabilty density function. My problem now is >> > quite simple, I think it's just a problem with >> > the syntax (brackets): >> > >> > There are two ways to calculate the pdf, with the >> > stats-function and with pure mathematically, but >> > the give different results and I can't find the >> > where I make the mistake: >> > >> > >> > func1 = stats.norm.pdf(x, loc=m, scale=(s1)) >> > func2 = >> > 1/((s1)*(math.sqrt(2*math.pi))))*(math.exp(((-0.5)*((x-m)/(s1)))**2) >> > >> > Where is the problem >> > >> >> >> func2 = 1/(s1*math.sqrt(2*math.pi)) * math.exp(-0.5*((x-m)/s1)**2) >> >> >> Warren >> >> >> >> > thank you... 
>> > >> > /j >> > >> > -------- Original-Nachricht -------- >> > > Datum: Tue, 21 Dec 2010 09:18:15 -0500 >> > > Von: Skipper Seabold >> > > An: SciPy Users List >> > > Betreff: Re: [SciPy-User] solving integration, density function >> > >> > > On Tue, Dec 21, 2010 at 7:48 AM, Johannes Radinger >> > > wrote: >> > > > >> > > > -------- Original-Nachricht -------- >> > > >> Datum: Tue, 21 Dec 2010 13:20:47 +0100 >> > > >> Von: Gregor Thalhammer >> > > >> An: SciPy Users List >> > > >> Betreff: Re: [SciPy-User] solving integration, density function >> > > > >> > > >> >> > > >> Am 21.12.2010 um 12:06 schrieb Johannes Radinger: >> > > >> >> > > >> > Hello, >> > > >> > >> > > >> > I am really new to python and Scipy. >> > > >> > I want to solve a integrated function with a python script >> > > >> > and I think Scipy should do that :) >> > > >> > >> > > >> > My task: >> > > >> > >> > > >> > I do have some variables (s, m, K,) which are now absolutely set, >> > but >> > > in >> > > >> future I'll get the values via another process of pyhton. >> > > >> > >> > > >> > s = 400 >> > > >> > m = 0 >> > > >> > K = 1 >> > > >> > >> > > >> > And have have following function: >> > > >> > (1/((s*K)*sqrt(2*pi)))*exp((-1/2*(((x-m)/s*K))^2) which is the >> > > density >> > > >> function of the normal distribution a symetrical curve with the >> mean >> > > (m) of >> > > >> 0. >> > > >> > >> > > >> > The total area under the curve is 1 (100%) which is for an >> > > integration >> > > >> from -inf to +inf. >> > > >> > I want to know x in the case of 99%: meaning that the integral >> (-x >> > to >> > > >> +x) of the function is 0.99. Due to the symetry of the curve you >> can >> > > also set >> > > >> the integral from 0 to +x equal to (0.99/2): >> > > >> > >> > > >> > 0.99 = >> integral((1/((s*K)*sqrt(2*pi)))*exp((-1/2*(((x-m)/s*K))^2)), >> > > -x, >> > > >> x) >> > > >> > resp. >> > > >> > (0.99/2) = >> > > integral((1/((s*K)*sqrt(2*pi)))*exp((-1/2*(((x-m)/s*K))^2)), >> > > >> 0, x) >> > > >> > >> > > >> > How can I solve that question in Scipy/python >> > > >> > so that I get x in the end. I don't know how to write >> > > >> > the code... >> > > >> >> > > >> >> > > >> ---> >> > > >> erf(x[, out]) >> > > >> >> > > >> ? ? y=erf(z) returns the error function of complex argument defined >> > > as >> > > >> ? ? as 2/sqrt(pi)*integral(exp(-t**2),t=0..z) >> > > >> --- >> > > >> >> > > >> from scipy.special import erf, erfinv >> > > >> erfinv(0.99)*sqrt(2) >> > > >> >> > > >> >> > > >> Gregor >> > > >> >> > > > >> > > > >> > > > Thank you Gregor, >> > > > I only understand a part of your answer... I know that the integral >> of >> > > the density function is a error function and I know that the argument >> > "from >> > > scipy.special import erf, erfinv" is to load the module. >> > > > >> > > > But how do I write the code including my orignial function so that I >> > can >> > > modify it (I have also another function I want to integrate). how do i >> > > start? I want to save the whole code to a python-script I can then >> load >> > e.g. >> > > into ArcGIS where I want to use the value of x for further >> calculations. >> > > > >> > > >> > > Are you always integrating densities? 
?If so, you don't want to use >> > > integrals probably, but you could use scipy.stats >> > > >> > > erfinv(.99)*np.sqrt(2) >> > > 2.5758293035489004 >> > > >> > > from scipy import stats >> > > >> > > stats.norm.ppf(.995) >> > > 2.5758293035489004 >> > > >> > > Skipper >> > > _______________________________________________ >> > > SciPy-User mailing list >> > > SciPy-User at scipy.org >> > > http://mail.scipy.org/mailman/listinfo/scipy-user >> > >> > -- >> > NEU: FreePhone - kostenlos mobil telefonieren und surfen! >> > Jetzt informieren: http://www.gmx.net/de/go/freephone >> > _______________________________________________ >> > SciPy-User mailing list >> > SciPy-User at scipy.org >> > http://mail.scipy.org/mailman/listinfo/scipy-user >> > > > -- > NEU: FreePhone - kostenlos mobil telefonieren und surfen! > Jetzt informieren: http://www.gmx.net/de/go/freephone > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From tonio+progs at xmon.net Thu Jan 6 09:11:59 2011 From: tonio+progs at xmon.net (tonio) Date: Thu, 06 Jan 2011 15:11:59 +0100 Subject: [SciPy-User] issue in ellipkinc Message-ID: <4D25CDAF.4090102@xmon.net> Hello, I'm a new to scipy and wanted to compute ellipse arc length using ellipkinc. For some values i get buggy results : >>> scipy.special.ellipkinc(2.167539474012470, 0.78698224852071) 3.3429097806972523 => ok >>> scipy.special.ellipkinc(2.1675394740124701, 0.78698224852071) 2.2286065204648353 => nok; btw that value is close to : >>> scipy.special.ellipkinc(math.pi/2, 0.7869822485207) 2.2286065204648144 I'm trying this on debian and ubuntu's version 0.7.2, 64bits +++ tonio From JRadinger at gmx.at Thu Jan 6 09:59:08 2011 From: JRadinger at gmx.at (Johannes Radinger) Date: Thu, 06 Jan 2011 15:59:08 +0100 Subject: [SciPy-User] GIS Raster Calculation: Combination of NumPy Arrays and SciPy Density function In-Reply-To: References: <20101221110650.32180@gmx.net> <07515CE8-8F03-40C6-9A29-FA1AE7AE8AF1@gmail.com> <20101221124827.53380@gmx.net> <20110106112758.65840@gmx.net> <20110106120156.165600@gmx.net> Message-ID: <20110106145908.118630@gmx.net> Hello... I am working mostly in ArcGIS but as it's Raster Calculator resp. Map Algebra is limited I want to use NumPy Arrays and perform calcultations in SciPy. I can get my Rasterfiles in ArcGIS quite easily transformed/exported into a NumPy Array with the function: newArray = arcpy.RasterToNumPyArray(inRaster) This Raster/Array contains distance values (ranging from around -20000 to + 20000 float). Some time before you helped with some distance-density functions which I want to apply now. New Values should be assigned to the cells/elements in the array by following function: DensityRaster = p * stats.norm.pdf(x, loc=m, scale=s) where p = 0.3, m=0 and s=200, the x-value is the distance value from the single cells in newArray. Can I just define x=newArray ? and use an array as input variable? 
thank you /j -------- Original-Nachricht -------- > Datum: Thu, 6 Jan 2011 07:10:58 -0500 > Von: josef.pktd at gmail.com > An: SciPy Users List > Betreff: Re: [SciPy-User] solving integration, density function > On Thu, Jan 6, 2011 at 7:01 AM, Johannes Radinger > wrote: > > Thank you for the simplification of the formula, > > but I still get a different result in the case > > when x > > > here a code to try: > > > > ******************** > > import math > > from scipy import stats > > > > s1 = 3 > > m = 0 > > p = 1 > > x = 2 > > > > func = stats.norm.pdf(x, loc=m, scale=(s1)) > > func2 = (1/(s1*math.sqrt(2*math.pi)) * math.exp(-0.5*((x-m)/s1)**2)) > > > > print func > > print func2 > > ******************************** > > use floats, I think you just run into integer division > > (x-m)/s1 > > Josef > > > > > /j > > > > -------- Original-Nachricht -------- > >> Datum: Thu, 6 Jan 2011 05:48:25 -0600 > >> Von: Warren Weckesser > >> An: SciPy Users List > >> Betreff: Re: [SciPy-User] solving integration, density function > > > >> On Thu, Jan 6, 2011 at 5:27 AM, Johannes Radinger > >> wrote: > >> > >> > Hey > >> > > >> > Last time you helped me a lot with my normal > >> > probabilty density function. My problem now is > >> > quite simple, I think it's just a problem with > >> > the syntax (brackets): > >> > > >> > There are two ways to calculate the pdf, with the > >> > stats-function and with pure mathematically, but > >> > the give different results and I can't find the > >> > where I make the mistake: > >> > > >> > > >> > func1 = stats.norm.pdf(x, loc=m, scale=(s1)) > >> > func2 = > >> > 1/((s1)*(math.sqrt(2*math.pi))))*(math.exp(((-0.5)*((x-m)/(s1)))**2) > >> > > >> > Where is the problem > >> > > >> > >> > >> func2 = 1/(s1*math.sqrt(2*math.pi)) * math.exp(-0.5*((x-m)/s1)**2) > >> > >> > >> Warren > >> > >> > >> > >> > thank you... > >> > > >> > /j > >> > > >> > -------- Original-Nachricht -------- > >> > > Datum: Tue, 21 Dec 2010 09:18:15 -0500 > >> > > Von: Skipper Seabold > >> > > An: SciPy Users List > >> > > Betreff: Re: [SciPy-User] solving integration, density function > >> > > >> > > On Tue, Dec 21, 2010 at 7:48 AM, Johannes Radinger > > >> > > wrote: > >> > > > > >> > > > -------- Original-Nachricht -------- > >> > > >> Datum: Tue, 21 Dec 2010 13:20:47 +0100 > >> > > >> Von: Gregor Thalhammer > >> > > >> An: SciPy Users List > >> > > >> Betreff: Re: [SciPy-User] solving integration, density function > >> > > > > >> > > >> > >> > > >> Am 21.12.2010 um 12:06 schrieb Johannes Radinger: > >> > > >> > >> > > >> > Hello, > >> > > >> > > >> > > >> > I am really new to python and Scipy. > >> > > >> > I want to solve a integrated function with a python script > >> > > >> > and I think Scipy should do that :) > >> > > >> > > >> > > >> > My task: > >> > > >> > > >> > > >> > I do have some variables (s, m, K,) which are now absolutely > set, > >> > but > >> > > in > >> > > >> future I'll get the values via another process of pyhton. > >> > > >> > > >> > > >> > s = 400 > >> > > >> > m = 0 > >> > > >> > K = 1 > >> > > >> > > >> > > >> > And have have following function: > >> > > >> > (1/((s*K)*sqrt(2*pi)))*exp((-1/2*(((x-m)/s*K))^2) which is the > >> > > density > >> > > >> function of the normal distribution a symetrical curve with the > >> mean > >> > > (m) of > >> > > >> 0. > >> > > >> > > >> > > >> > The total area under the curve is 1 (100%) which is for an > >> > > integration > >> > > >> from -inf to +inf. 
> >> > > >> > I want to know x in the case of 99%: meaning that the integral > >> (-x > >> > to > >> > > >> +x) of the function is 0.99. Due to the symetry of the curve you > >> can > >> > > also set > >> > > >> the integral from 0 to +x equal to (0.99/2): > >> > > >> > > >> > > >> > 0.99 = > >> integral((1/((s*K)*sqrt(2*pi)))*exp((-1/2*(((x-m)/s*K))^2)), > >> > > -x, > >> > > >> x) > >> > > >> > resp. > >> > > >> > (0.99/2) = > >> > > integral((1/((s*K)*sqrt(2*pi)))*exp((-1/2*(((x-m)/s*K))^2)), > >> > > >> 0, x) > >> > > >> > > >> > > >> > How can I solve that question in Scipy/python > >> > > >> > so that I get x in the end. I don't know how to write > >> > > >> > the code... > >> > > >> > >> > > >> > >> > > >> ---> > >> > > >> erf(x[, out]) > >> > > >> > >> > > >> ? ? y=erf(z) returns the error function of complex argument > defined > >> > > as > >> > > >> ? ? as 2/sqrt(pi)*integral(exp(-t**2),t=0..z) > >> > > >> --- > >> > > >> > >> > > >> from scipy.special import erf, erfinv > >> > > >> erfinv(0.99)*sqrt(2) > >> > > >> > >> > > >> > >> > > >> Gregor > >> > > >> > >> > > > > >> > > > > >> > > > Thank you Gregor, > >> > > > I only understand a part of your answer... I know that the > integral > >> of > >> > > the density function is a error function and I know that the > argument > >> > "from > >> > > scipy.special import erf, erfinv" is to load the module. > >> > > > > >> > > > But how do I write the code including my orignial function so > that I > >> > can > >> > > modify it (I have also another function I want to integrate). how > do i > >> > > start? I want to save the whole code to a python-script I can then > >> load > >> > e.g. > >> > > into ArcGIS where I want to use the value of x for further > >> calculations. > >> > > > > >> > > > >> > > Are you always integrating densities? ?If so, you don't want to > use > >> > > integrals probably, but you could use scipy.stats > >> > > > >> > > erfinv(.99)*np.sqrt(2) > >> > > 2.5758293035489004 > >> > > > >> > > from scipy import stats > >> > > > >> > > stats.norm.ppf(.995) > >> > > 2.5758293035489004 > >> > > > >> > > Skipper > >> > > _______________________________________________ > >> > > SciPy-User mailing list > >> > > SciPy-User at scipy.org > >> > > http://mail.scipy.org/mailman/listinfo/scipy-user > >> > > >> > -- > >> > NEU: FreePhone - kostenlos mobil telefonieren und surfen! > >> > Jetzt informieren: http://www.gmx.net/de/go/freephone > >> > _______________________________________________ > >> > SciPy-User mailing list > >> > SciPy-User at scipy.org > >> > http://mail.scipy.org/mailman/listinfo/scipy-user > >> > > > > > -- > > NEU: FreePhone - kostenlos mobil telefonieren und surfen! > > Jetzt informieren: http://www.gmx.net/de/go/freephone > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -- Sicherer, schneller und einfacher. Die aktuellen Internet-Browser - jetzt kostenlos herunterladen! http://portal.gmx.net/de/go/atbrowser From zachary.pincus at yale.edu Thu Jan 6 10:29:14 2011 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Thu, 6 Jan 2011 10:29:14 -0500 Subject: [SciPy-User] KDE: Scott's factor reference? 
Message-ID: <2E621CF9-C20A-4D13-9BDF-DDB9E76B3A15@yale.edu> Hi all, I've been wading through the old literature on gaussian KDE for a little while trying to find a reference for the "Scott's factor" rule- of-thumb for gaussian KDE bandwidth selection (n**(-1/(d+4)), where n is the number of data points and d their dimension; this factor is multiplied by the covariance matrix to yield the bandwidths). I can find a lot of Scott's later contributions of fancier methods, but nothing about this basic one... Anyone know off the top of their head? Thanks, Zach From josef.pktd at gmail.com Thu Jan 6 11:25:19 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 6 Jan 2011 11:25:19 -0500 Subject: [SciPy-User] KDE: Scott's factor reference? In-Reply-To: <2E621CF9-C20A-4D13-9BDF-DDB9E76B3A15@yale.edu> References: <2E621CF9-C20A-4D13-9BDF-DDB9E76B3A15@yale.edu> Message-ID: On Thu, Jan 6, 2011 at 10:29 AM, Zachary Pincus wrote: > Hi all, > > I've been wading through the old literature on gaussian KDE for a > little while trying to find a reference for the "Scott's factor" rule- > of-thumb for gaussian KDE bandwidth selection (n**(-1/(d+4)), where n > is the number of data points and d their dimension; this factor is > multiplied by the covariance matrix to yield the bandwidths). > > I can find a lot of Scott's later contributions of fancier methods, > but nothing about this basic one... Scotts 1992 is the reference in Haerdle http://books.google.com/books?id=qPCmAOS-CoMC&pg=PA73&lpg=PA73&dq=scott%27s+factor+rule-+of-thumb+hardle&source=bl&ots=kTNHJpyk6w&sig=5wwCOzThGsIzXOyVax2AbKQ11Rw&hl=en&ei=MOwlTdC3F4aBlAeRsZDNAQ&sa=X&oi=book_result&ct=result&resnum=1&sqi=2&ved=0CBYQ6AEwAA#v=onepage&q&f=false Haerdle's book is also online, but I need to look for the link. Josef > > Anyone know off the top of their head? > > Thanks, > Zach > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From josef.pktd at gmail.com Thu Jan 6 11:37:54 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 6 Jan 2011 11:37:54 -0500 Subject: [SciPy-User] KDE: Scott's factor reference? In-Reply-To: References: <2E621CF9-C20A-4D13-9BDF-DDB9E76B3A15@yale.edu> Message-ID: On Thu, Jan 6, 2011 at 11:25 AM, wrote: > On Thu, Jan 6, 2011 at 10:29 AM, Zachary Pincus wrote: >> Hi all, >> >> I've been wading through the old literature on gaussian KDE for a >> little while trying to find a reference for the "Scott's factor" rule- >> of-thumb for gaussian KDE bandwidth selection (n**(-1/(d+4)), where n >> is the number of data points and d their dimension; this factor is >> multiplied by the covariance matrix to yield the bandwidths). >> >> I can find a lot of Scott's later contributions of fancier methods, >> but nothing about this basic one... > > Scotts 1992 is the reference in Haerdle > > http://books.google.com/books?id=qPCmAOS-CoMC&pg=PA73&lpg=PA73&dq=scott%27s+factor+rule-+of-thumb+hardle&source=bl&ots=kTNHJpyk6w&sig=5wwCOzThGsIzXOyVax2AbKQ11Rw&hl=en&ei=MOwlTdC3F4aBlAeRsZDNAQ&sa=X&oi=book_result&ct=result&resnum=1&sqi=2&ved=0CBYQ6AEwAA#v=onepage&q&f=false > > Haerdle's book is also online, but I need to look for the link. > > Josef I think it's equation (3.70) in http://fedc.wiwi.hu-berlin.de/xplore/ebooks/html/spm/spmhtmlnode18.html with page reference to scott 92 p 152 more online Haerdle is here http://fedc.wiwi.hu-berlin.de/xplore/ebooks/html/ Josef > >> >> Anyone know off the top of their head? 
>> >> Thanks, >> Zach >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > From josef.pktd at gmail.com Thu Jan 6 11:50:20 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 6 Jan 2011 11:50:20 -0500 Subject: [SciPy-User] GIS Raster Calculation: Combination of NumPy Arrays and SciPy Density function In-Reply-To: <20110106145908.118630@gmx.net> References: <20101221110650.32180@gmx.net> <07515CE8-8F03-40C6-9A29-FA1AE7AE8AF1@gmail.com> <20101221124827.53380@gmx.net> <20110106112758.65840@gmx.net> <20110106120156.165600@gmx.net> <20110106145908.118630@gmx.net> Message-ID: On Thu, Jan 6, 2011 at 9:59 AM, Johannes Radinger wrote: > Hello... > > I am working mostly in ArcGIS but as it's Raster Calculator resp. Map Algebra is limited I want to use NumPy Arrays and perform calcultations in SciPy. > > I can get my Rasterfiles in ArcGIS quite easily transformed/exported into a NumPy Array with the function: > > newArray = arcpy.RasterToNumPyArray(inRaster) > > This Raster/Array contains distance values (ranging from around -20000 to + 20000 float). > > Some time before you helped with some distance-density functions which I want to apply now. New Values should be assigned to the cells/elements in the array by following function: > > DensityRaster = p * stats.norm.pdf(x, loc=m, scale=s) > > where p = 0.3, m=0 and s=200, the x-value is the distance value from the single cells in newArray. Can I just define x=newArray ? and use an array as input variable? Yes, just try it out. It's fully vectorized. loc=0 is the default, so in this case you wouldn't need to specify loc=m. If your array is very large, there is a shortcut to (maybe) avoid some intermediate arrays. Josef > > thank you > /j > > > -------- Original-Nachricht -------- >> Datum: Thu, 6 Jan 2011 07:10:58 -0500 >> Von: josef.pktd at gmail.com >> An: SciPy Users List >> Betreff: Re: [SciPy-User] solving integration, density function > >> On Thu, Jan 6, 2011 at 7:01 AM, Johannes Radinger >> wrote: >> > Thank you for the simplification of the formula, >> > but I still get a different result in the case >> > when x> > >> > here a code to try: >> > >> > ******************** >> > import math >> > from scipy import stats >> > >> > s1 = 3 >> > m = 0 >> > p = 1 >> > x = 2 >> > >> > func = stats.norm.pdf(x, loc=m, scale=(s1)) >> > func2 = (1/(s1*math.sqrt(2*math.pi)) * math.exp(-0.5*((x-m)/s1)**2)) >> > >> > print func >> > print func2 >> > ******************************** >> >> use floats, I think you just run into integer division >> >> (x-m)/s1 >> >> Josef >> >> > >> > /j >> > >> > -------- Original-Nachricht -------- >> >> Datum: Thu, 6 Jan 2011 05:48:25 -0600 >> >> Von: Warren Weckesser >> >> An: SciPy Users List >> >> Betreff: Re: [SciPy-User] solving integration, density function >> > >> >> On Thu, Jan 6, 2011 at 5:27 AM, Johannes Radinger >> >> wrote: >> >> >> >> > Hey >> >> > >> >> > Last time you helped me a lot with my normal >> >> > probabilty density function. 
My problem now is >> >> > quite simple, I think it's just a problem with >> >> > the syntax (brackets): >> >> > >> >> > There are two ways to calculate the pdf, with the >> >> > stats-function and with pure mathematically, but >> >> > the give different results and I can't find the >> >> > where I make the mistake: >> >> > >> >> > >> >> > func1 = stats.norm.pdf(x, loc=m, scale=(s1)) >> >> > func2 = >> >> > 1/((s1)*(math.sqrt(2*math.pi))))*(math.exp(((-0.5)*((x-m)/(s1)))**2) >> >> > >> >> > Where is the problem >> >> > >> >> >> >> >> >> func2 = 1/(s1*math.sqrt(2*math.pi)) * math.exp(-0.5*((x-m)/s1)**2) >> >> >> >> >> >> Warren >> >> >> >> >> >> >> >> > thank you... >> >> > >> >> > /j >> >> > >> >> > -------- Original-Nachricht -------- >> >> > > Datum: Tue, 21 Dec 2010 09:18:15 -0500 >> >> > > Von: Skipper Seabold >> >> > > An: SciPy Users List >> >> > > Betreff: Re: [SciPy-User] solving integration, density function >> >> > >> >> > > On Tue, Dec 21, 2010 at 7:48 AM, Johannes Radinger >> >> >> > > wrote: >> >> > > > >> >> > > > -------- Original-Nachricht -------- >> >> > > >> Datum: Tue, 21 Dec 2010 13:20:47 +0100 >> >> > > >> Von: Gregor Thalhammer >> >> > > >> An: SciPy Users List >> >> > > >> Betreff: Re: [SciPy-User] solving integration, density function >> >> > > > >> >> > > >> >> >> > > >> Am 21.12.2010 um 12:06 schrieb Johannes Radinger: >> >> > > >> >> >> > > >> > Hello, >> >> > > >> > >> >> > > >> > I am really new to python and Scipy. >> >> > > >> > I want to solve a integrated function with a python script >> >> > > >> > and I think Scipy should do that :) >> >> > > >> > >> >> > > >> > My task: >> >> > > >> > >> >> > > >> > I do have some variables (s, m, K,) which are now absolutely >> set, >> >> > but >> >> > > in >> >> > > >> future I'll get the values via another process of pyhton. >> >> > > >> > >> >> > > >> > s = 400 >> >> > > >> > m = 0 >> >> > > >> > K = 1 >> >> > > >> > >> >> > > >> > And have have following function: >> >> > > >> > (1/((s*K)*sqrt(2*pi)))*exp((-1/2*(((x-m)/s*K))^2) which is the >> >> > > density >> >> > > >> function of the normal distribution a symetrical curve with the >> >> mean >> >> > > (m) of >> >> > > >> 0. >> >> > > >> > >> >> > > >> > The total area under the curve is 1 (100%) which is for an >> >> > > integration >> >> > > >> from -inf to +inf. >> >> > > >> > I want to know x in the case of 99%: meaning that the integral >> >> (-x >> >> > to >> >> > > >> +x) of the function is 0.99. Due to the symetry of the curve you >> >> can >> >> > > also set >> >> > > >> the integral from 0 to +x equal to (0.99/2): >> >> > > >> > >> >> > > >> > 0.99 = >> >> integral((1/((s*K)*sqrt(2*pi)))*exp((-1/2*(((x-m)/s*K))^2)), >> >> > > -x, >> >> > > >> x) >> >> > > >> > resp. >> >> > > >> > (0.99/2) = >> >> > > integral((1/((s*K)*sqrt(2*pi)))*exp((-1/2*(((x-m)/s*K))^2)), >> >> > > >> 0, x) >> >> > > >> > >> >> > > >> > How can I solve that question in Scipy/python >> >> > > >> > so that I get x in the end. I don't know how to write >> >> > > >> > the code... >> >> > > >> >> >> > > >> >> >> > > >> ---> >> >> > > >> erf(x[, out]) >> >> > > >> >> >> > > >> ? ? y=erf(z) returns the error function of complex argument >> defined >> >> > > as >> >> > > >> ? ? 
as 2/sqrt(pi)*integral(exp(-t**2),t=0..z) >> >> > > >> --- >> >> > > >> >> >> > > >> from scipy.special import erf, erfinv >> >> > > >> erfinv(0.99)*sqrt(2) >> >> > > >> >> >> > > >> >> >> > > >> Gregor >> >> > > >> >> >> > > > >> >> > > > >> >> > > > Thank you Gregor, >> >> > > > I only understand a part of your answer... I know that the >> integral >> >> of >> >> > > the density function is a error function and I know that the >> argument >> >> > "from >> >> > > scipy.special import erf, erfinv" is to load the module. >> >> > > > >> >> > > > But how do I write the code including my orignial function so >> that I >> >> > can >> >> > > modify it (I have also another function I want to integrate). how >> do i >> >> > > start? I want to save the whole code to a python-script I can then >> >> load >> >> > e.g. >> >> > > into ArcGIS where I want to use the value of x for further >> >> calculations. >> >> > > > >> >> > > >> >> > > Are you always integrating densities? ?If so, you don't want to >> use >> >> > > integrals probably, but you could use scipy.stats >> >> > > >> >> > > erfinv(.99)*np.sqrt(2) >> >> > > 2.5758293035489004 >> >> > > >> >> > > from scipy import stats >> >> > > >> >> > > stats.norm.ppf(.995) >> >> > > 2.5758293035489004 >> >> > > >> >> > > Skipper >> >> > > _______________________________________________ >> >> > > SciPy-User mailing list >> >> > > SciPy-User at scipy.org >> >> > > http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > >> >> > -- >> >> > NEU: FreePhone - kostenlos mobil telefonieren und surfen! >> >> > Jetzt informieren: http://www.gmx.net/de/go/freephone >> >> > _______________________________________________ >> >> > SciPy-User mailing list >> >> > SciPy-User at scipy.org >> >> > http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > >> > >> > -- >> > NEU: FreePhone - kostenlos mobil telefonieren und surfen! >> > Jetzt informieren: http://www.gmx.net/de/go/freephone >> > _______________________________________________ >> > SciPy-User mailing list >> > SciPy-User at scipy.org >> > http://mail.scipy.org/mailman/listinfo/scipy-user >> > >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > -- > Sicherer, schneller und einfacher. Die aktuellen Internet-Browser - > jetzt kostenlos herunterladen! http://portal.gmx.net/de/go/atbrowser > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From zachary.pincus at yale.edu Thu Jan 6 11:58:27 2011 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Thu, 6 Jan 2011 11:58:27 -0500 Subject: [SciPy-User] KDE: Scott's factor reference? In-Reply-To: References: <2E621CF9-C20A-4D13-9BDF-DDB9E76B3A15@yale.edu> Message-ID: <1659B4D2-2B1D-4958-807B-DC7F8851800A@yale.edu> Aah, thanks a ton! On Jan 6, 2011, at 11:37 AM, josef.pktd at gmail.com wrote: > On Thu, Jan 6, 2011 at 11:25 AM, wrote: >> On Thu, Jan 6, 2011 at 10:29 AM, Zachary Pincus > > wrote: >>> Hi all, >>> >>> I've been wading through the old literature on gaussian KDE for a >>> little while trying to find a reference for the "Scott's factor" >>> rule- >>> of-thumb for gaussian KDE bandwidth selection (n**(-1/(d+4)), >>> where n >>> is the number of data points and d their dimension; this factor is >>> multiplied by the covariance matrix to yield the bandwidths). 
>>> >>> I can find a lot of Scott's later contributions of fancier methods, >>> but nothing about this basic one... >> >> Scotts 1992 is the reference in Haerdle >> >> http://books.google.com/books?id=qPCmAOS-CoMC&pg=PA73&lpg=PA73&dq=scott%27s+factor+rule-+of-thumb+hardle&source=bl&ots=kTNHJpyk6w&sig=5wwCOzThGsIzXOyVax2AbKQ11Rw&hl=en&ei=MOwlTdC3F4aBlAeRsZDNAQ&sa=X&oi=book_result&ct=result&resnum=1&sqi=2&ved=0CBYQ6AEwAA#v >> =onepage&q&f=false >> >> Haerdle's book is also online, but I need to look for the link. >> >> Josef > > I think it's equation (3.70) in > http://fedc.wiwi.hu-berlin.de/xplore/ebooks/html/spm/ > spmhtmlnode18.html > > with page reference to scott 92 p 152 > > more online Haerdle is here http://fedc.wiwi.hu-berlin.de/xplore/ebooks/html/ > > Josef > >> >>> >>> Anyone know off the top of their head? >>> >>> Thanks, >>> Zach >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From jordan.nickerson at gmail.com Thu Jan 6 14:46:49 2011 From: jordan.nickerson at gmail.com (jordan) Date: Thu, 6 Jan 2011 19:46:49 +0000 (UTC) Subject: [SciPy-User] using an optimizer/solver to solve f(x)=0 Message-ID: I'm trying to solve an objective function to get an answer as close to zero as possible. However, the function is not terribly well behaved and takes 4 parameters as inputs. So far I've been using fmin and returning the absolute value of the function to try to get close to zero. I was wondering if there is a better approach? I looked into solvers but it seems like they only solve for a univariate case, or they expect the function to return an array of the same dimension as that of the set of parameters. Thanks jordan From josef.pktd at gmail.com Thu Jan 6 15:06:29 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 6 Jan 2011 15:06:29 -0500 Subject: [SciPy-User] using an optimizer/solver to solve f(x)=0 In-Reply-To: References: Message-ID: On Thu, Jan 6, 2011 at 2:46 PM, jordan wrote: > > I'm trying to solve an objective function to get an answer as close to zero as > possible. However, the function is not terribly well behaved and takes 4 > parameters as inputs. So far I've been using fmin and returning the absolute > value of the function to try to get close to zero. I think the standard way is to use a quadratic loss function, since abs is not differentiable. Josef > > I was wondering if there is a better approach? I looked into solvers but it > seems like they only solve for a univariate case, or they expect the function to > return an array of the same dimension as that of the set of parameters. > > Thanks > jordan > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From sebastian.walter at gmail.com Thu Jan 6 15:26:30 2011 From: sebastian.walter at gmail.com (Sebastian Walter) Date: Thu, 6 Jan 2011 21:26:30 +0100 Subject: [SciPy-User] using an optimizer/solver to solve f(x)=0 In-Reply-To: References: Message-ID: Could you explain your problem in more detail? For instance the function f(x) = x_1^2 + x_2^2 - 1 has infinitely many solutions (the unit circle). I know that optimization algorithms can have problems with problems where the solution is not unique. 
Maybe you can add additional constraints to the problem to make the minimizer unique?

Sebastian

On Thu, Jan 6, 2011 at 8:46 PM, jordan wrote:
>
> I'm trying to solve an objective function to get an answer as close to zero as
> possible. However, the function is not terribly well behaved and takes 4
> parameters as inputs. So far I've been using fmin and returning the absolute
> value of the function to try to get close to zero.
>
> I was wondering if there is a better approach? I looked into solvers but it
> seems like they only solve for a univariate case, or they expect the function to
> return an array of the same dimension as that of the set of parameters.
>
> Thanks
> jordan
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
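A sketch of the quadratic-loss idea suggested earlier in the thread (f below is a made-up stand-in for the real objective — one scalar output, four parameters; squaring the residual keeps the objective smooth where abs() is not):

import numpy as np
from scipy import optimize

def f(params):
    # made-up objective: one scalar value from four parameters
    a, b, c, d = params
    return a**2 + 2.0*b - np.sin(c) + d - 1.0

# minimize the squared residual instead of abs(f)
popt = optimize.fmin(lambda p: f(p)**2, np.zeros(4))
print popt
print f(popt)    # near zero at whichever root the search reaches

As Sebastian notes, one equation in four unknowns generally has a whole manifold of roots, so fmin only returns the particular root its starting point leads to.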
From ryanlists at gmail.com Thu Jan 6 17:58:29 2011
From: ryanlists at gmail.com (Ryan Krauss)
Date: Thu, 6 Jan 2011 16:58:29 -0600
Subject: [SciPy-User] how to create a phase portrait for a nonlinear diff. eqn.
In-Reply-To:
References:
Message-ID:

Thanks for the examples. I had seen them, but wasn't thinking they were exactly what I needed at the time. I was stuck on finding the tangent arrow values. But I think I see it now.

On Wed, Jan 5, 2011 at 9:08 AM, John Hunter wrote:
>
> On Tue, Jan 4, 2011 at 8:07 PM, Ryan Krauss wrote:
>>
>> I am teaching a nonlinear controls course this coming semester. I
>> plan to have the students write code to generate phase portraits as a
>> project. It is fairly easy to randomly (or intelligently?) create
>> initial condition seeds and run integrate.odeint. What isn't obvious
>> to me is how to put arrows on phase portrait lines to indicate the
>> direction of the evolution over time. For example, the attached code
>> creates a phase portrait for a fairly simple system. The graph is
>> shown in the attached png. But this equilibrium point is either
>> stable or unstable depending on whether the curve spirals in or out
>> over time. How do I write code to automatically determine the
>> direction of increasing time and indicate it by arrow heads on the
>> graph?
>>
>> I know x, xdot, and time as vectors for each point on the graph. I
>> guess I could numerically determine (d xdot)/dx for each point, but is
>> that the best route to go? And that leads to issues as dx gets
>> small....
>
>
> here's one example
>
> import numpy as np
> import matplotlib.pyplot as plt
> import scipy.integrate as integrate
>
> def dr(r, f):
>     """
>     return the derivative of *r* (the rabbit population) evaluated as a
>     function of *r* and *f*.  The function should work whether *r* and *f*
>     are scalars, 1D arrays or 2D arrays.  The return value should have
>     the same dimensionality (shape) as the inputs *r* and *f*.
>     """
>     return alpha*r - beta*r*f
>
> def df(r, f):
>     """
>     return the derivative of *f* (the fox population) evaluated as a
>     function of *r* and *f*.  The function should work whether *r* and *f*
>     are scalars, 1D arrays or 2D arrays.  The return value should have
>     the same dimensionality (shape) as the inputs *r* and *f*.
>     """
>     return gamma*r*f - delta*f
>
> def derivs(state, t):
>     """
>     Return the derivatives of R and F, stored in the *state* vector::
>
>        state = [R, F]
>
>     The return data should be [dR, dF] which are the derivatives of R
>     and F at position state and time *t*
>     """
>     R, F = state          # rabbits and foxes
>     deltar = dr(R, F)     # in rabbits
>     deltaf = df(R, F)     # in foxes
>     return deltar, deltaf
>
> # the parameters for rabbit and fox growth and interactions
> alpha, delta = 1, .25
> beta, gamma = .2, .05
>
> # the initial population of rabbits and foxes
> r0 = 20
> f0 = 10
>
> # create a time array from 0..100 sampled at 0.1 second steps
> t = np.arange(0.0, 100, 0.1)
>
> y0 = [r0, f0]  # the initial [rabbits, foxes] state vector
>
> # integrate your ODE using scipy.integrate.  Read the help to see what
> # is available.  HINT: see scipy.integrate.odeint
> y = integrate.odeint(derivs, y0, t)
>
> # the return value from the integration is an Nx2 array.  Extract it
> # into two 1D arrays called r and f using numpy slice indexing
> r = y[:,0]  # extract the rabbits vector
> f = y[:,1]  # extract the foxes vector
>
> # time series plot: plot the population of rabbits and foxes as a
> # function of time
> plt.figure()
> plt.plot(t, r, label='rabbits')
> plt.plot(t, f, label='foxes')
> plt.xlabel('time (years)')
> plt.ylabel('population')
> plt.title('population trajectories')
> plt.grid()
> plt.legend()
> plt.savefig('lotka_volterra.png', dpi=150)
> plt.savefig('lotka_volterra.eps')
>
> # phase-plane plot: plot the population of foxes versus rabbits
> # make sure you include an xlabel, ylabel and title
> plt.figure()
> plt.plot(r, f, color='red')
> plt.xlabel('rabbits')
> plt.ylabel('foxes')
> plt.title('phase plane')
>
> # Create 2D arrays for R and F to represent the entire phase plane --
> # the point (R[i,j], F[i,j]) is a single (rabbit, fox) combination.
> # pass these arrays to the functions dr and df above to get 2D arrays
> # of dR and dF evaluated at every point in the phase plane.
> rmax = 1.1 * r.max()
> fmax = 1.1 * f.max()
> R, F = np.meshgrid(np.arange(-1, rmax), np.arange(-1, fmax))
> dR = dr(R, F)
> dF = df(R, F)
> plt.quiver(R, F, dR, dF)
>
> # Now find the null-clines, for dR and dF respectively.  These are the
> # points where dR=0 and dF=0 in the (R, F) phase plane.  You can use
> # matplotlib's contour routine to find the zero level.  See the
> # levels keyword to contour.  You will need a fine mesh of R and F,
> # reevaluate dr and df on the finer grid, and use contour to find the
> # level curves
> R, F = np.meshgrid(np.arange(-1, rmax, 0.1), np.arange(-1, fmax, 0.1))
> dR = dr(R, F)
> dF = df(R, F)
> plt.contour(R, F, dR, levels=[0], linewidths=3, colors='blue')
> plt.contour(R, F, dF, levels=[0], linewidths=3, colors='green')
> plt.ylabel('foxes')
> plt.title('trajectory, direction field and null clines')
>
> plt.savefig('lotka_volterra_pplane.png', dpi=150)
> plt.savefig('lotka_volterra_pplane.eps')
> plt.show()
>
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
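On the tangent-arrow point: the derivative functions already give the tangent at every trajectory point, so no finite differencing of the curve is needed. A sketch that decorates the phase-plane plot (assuming r, f, dr and df from the script above are in scope):

step = 50                                 # subsample so the arrows stay readable
rs, fs = r[::step], f[::step]
plt.quiver(rs, fs, dr(rs, fs), df(rs, fs), angles='xy', color='red')
plt.draw()

The arrows point in the direction of increasing time, which makes spiralling in versus out visible at a glance.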
From brian.murphy at unitn.it  Fri Jan 7 06:30:52 2011
From: brian.murphy at unitn.it (Brian Murphy)
Date: Fri, 7 Jan 2011 12:30:52 +0100
Subject: [SciPy-User] code for multidimensional scaling?
Message-ID: <4D26F96C.3070607@unitn.it>

Hi,

I'm new to the list, so I hope my question is appropriate. I'm looking
for code that implements multi-dimensional scaling (e.g. like Matlab's
mdscale command) in Python. My best guess was that I would find it in
the Scikit Learn package, but couldn't turn anything up.

Any suggestions?

thanks and regards,

Brian

--
Brian Murphy
Post-Doctoral Researcher
Language, Interaction and Computation Lab
Centre for Mind/Brain Sciences
University of Trento
http://clic.cimec.unitn.it/brian/

From matthieu.brucher at gmail.com  Fri Jan 7 06:37:13 2011
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Fri, 7 Jan 2011 12:37:13 +0100
Subject: [SciPy-User] code for multidimensional scaling?
In-Reply-To: <4D26F96C.3070607@unitn.it>
References: <4D26F96C.3070607@unitn.it>
Message-ID:

Hi Brian,

The code for MDS was available, but it was pulled out while refactored.
Our objectives are to first refactor the spectral algorithms before MDS,
because they are faster than MDS. Still, if you want to try the current
version (that will be refactored), you can go on http://scikit-learn.sf.net,
get to the repository on github, get my repository, and then check out
the manifold branch.

Cheers,

Matthieu

2011/1/7 Brian Murphy
> [...]

--
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher

From JRadinger at gmx.at  Fri Jan 7 06:43:11 2011
From: JRadinger at gmx.at (Johannes Radinger)
Date: Fri, 07 Jan 2011 12:43:11 +0100
Subject: [SciPy-User] Raster Distance calculation of an numpy array?
In-Reply-To: <4D26F96C.3070607@unitn.it>
References: <4D26F96C.3070607@unitn.it>
Message-ID: <20110107114311.176740@gmx.net>

Hej,

I am also looking for a cost distance algorithm I can use with python:

I have a numpy array (extracted from arcgis) which represents a rasterized
river (river=1, else=nodatavalue) and I have a point shape file which is
the startpoint. I want now to calculate how far each rastercell (rivercell)
is away (flow-distance) from the given point. It is possible to calculate
with the ArcGIS-CostDistance tool, but is there also a native
python/numpy/scipy way?

thanks
/j

From JRadinger at gmx.at  Fri Jan 7 06:50:15 2011
From: JRadinger at gmx.at (Johannes Radinger)
Date: Fri, 07 Jan 2011 12:50:15 +0100
Subject: [SciPy-User] GIS Raster Calculation: Combination of NumPy Arrays and SciPy Density function
In-Reply-To:
References: <20101221110650.32180@gmx.net> <07515CE8-8F03-40C6-9A29-FA1AE7AE8AF1@gmail.com> <20101221124827.53380@gmx.net> <20110106112758.65840@gmx.net> <20110106120156.165600@gmx.net> <20110106145908.118630@gmx.net>
Message-ID: <20110107115015.176710@gmx.net>

-------- Original-Nachricht --------
> Datum: Thu, 6 Jan 2011 11:50:20 -0500
> Von: josef.pktd at gmail.com
> An: SciPy Users List
> Betreff: Re: [SciPy-User] GIS Raster Calculation: Combination of NumPy Arrays and SciPy Density function

> On Thu, Jan 6, 2011 at 9:59 AM, Johannes Radinger wrote:
> > Hello...
> > I am working mostly in ArcGIS but as its Raster Calculator resp. Map
> > Algebra is limited I want to use NumPy Arrays and perform calculations
> > in SciPy.
> >
> > I can get my Rasterfiles in ArcGIS quite easily transformed/exported
> > into a NumPy Array with the function:
> >
> > newArray = arcpy.RasterToNumPyArray(inRaster)
> >
> > This Raster/Array contains distance values (ranging from around -20000
> > to +20000 float).
> >
> > Some time before you helped with some distance-density functions which I
> > want to apply now. New values should be assigned to the cells/elements in
> > the array by the following function:
> >
> > DensityRaster = p * stats.norm.pdf(x, loc=m, scale=s)
> >
> > where p = 0.3, m=0 and s=200, the x-value is the distance value from the
> > single cells in newArray. Can I just define x=newArray and use an array
> > as input variable?
>
> Yes, just try it out. It's fully vectorized.
> loc=0 is the default, so in this case you wouldn't need to specify loc=m.
>
> If your array is very large, there is a shortcut to (maybe) avoid some
> intermediate arrays.

Oh thank you, that works perfectly. What do you mean with a shortcut?

And just another short question about the stats.norm.cdf function: That is
calculating the cumulative probability from the smallest value (most
negative x) to the point x, but is there also a function which gives the
cumulative sum away from 0 to point x in both directions (negative and
positive)? Like an integration of the probability function from 0 to (-)x?

thank you
/j

> Josef
> [...]
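For reference, the vectorized call under discussion needs no loop at all;
a minimal sketch with a random stand-in for the arcpy-exported array
(arcpy itself is not required to try it):

import numpy as np
from scipy import stats

p, s = 0.3, 200.0
# stand-in for newArray = arcpy.RasterToNumPyArray(inRaster)
newArray = np.random.uniform(-20000, 20000, size=(100, 100))
# norm.pdf broadcasts elementwise over the whole array
DensityRaster = p * stats.norm.pdf(newArray, loc=0, scale=s)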
My problem now is > >> >> > quite simple, I think it's just a problem with > >> >> > the syntax (brackets): > >> >> > > >> >> > There are two ways to calculate the pdf, with the > >> >> > stats-function and with pure mathematically, but > >> >> > the give different results and I can't find the > >> >> > where I make the mistake: > >> >> > > >> >> > > >> >> > func1 = stats.norm.pdf(x, loc=m, scale=(s1)) > >> >> > func2 = > >> >> > > 1/((s1)*(math.sqrt(2*math.pi))))*(math.exp(((-0.5)*((x-m)/(s1)))**2) > >> >> > > >> >> > Where is the problem > >> >> > > >> >> > >> >> > >> >> func2 = 1/(s1*math.sqrt(2*math.pi)) * math.exp(-0.5*((x-m)/s1)**2) > >> >> > >> >> > >> >> Warren > >> >> > >> >> > >> >> > >> >> > thank you... > >> >> > > >> >> > /j > >> >> > > >> >> > -------- Original-Nachricht -------- > >> >> > > Datum: Tue, 21 Dec 2010 09:18:15 -0500 > >> >> > > Von: Skipper Seabold > >> >> > > An: SciPy Users List > >> >> > > Betreff: Re: [SciPy-User] solving integration, density function > >> >> > > >> >> > > On Tue, Dec 21, 2010 at 7:48 AM, Johannes Radinger > >> > >> >> > > wrote: > >> >> > > > > >> >> > > > -------- Original-Nachricht -------- > >> >> > > >> Datum: Tue, 21 Dec 2010 13:20:47 +0100 > >> >> > > >> Von: Gregor Thalhammer > >> >> > > >> An: SciPy Users List > >> >> > > >> Betreff: Re: [SciPy-User] solving integration, density > function > >> >> > > > > >> >> > > >> > >> >> > > >> Am 21.12.2010 um 12:06 schrieb Johannes Radinger: > >> >> > > >> > >> >> > > >> > Hello, > >> >> > > >> > > >> >> > > >> > I am really new to python and Scipy. > >> >> > > >> > I want to solve a integrated function with a python script > >> >> > > >> > and I think Scipy should do that :) > >> >> > > >> > > >> >> > > >> > My task: > >> >> > > >> > > >> >> > > >> > I do have some variables (s, m, K,) which are now > absolutely > >> set, > >> >> > but > >> >> > > in > >> >> > > >> future I'll get the values via another process of pyhton. > >> >> > > >> > > >> >> > > >> > s = 400 > >> >> > > >> > m = 0 > >> >> > > >> > K = 1 > >> >> > > >> > > >> >> > > >> > And have have following function: > >> >> > > >> > (1/((s*K)*sqrt(2*pi)))*exp((-1/2*(((x-m)/s*K))^2) which is > the > >> >> > > density > >> >> > > >> function of the normal distribution a symetrical curve with > the > >> >> mean > >> >> > > (m) of > >> >> > > >> 0. > >> >> > > >> > > >> >> > > >> > The total area under the curve is 1 (100%) which is for an > >> >> > > integration > >> >> > > >> from -inf to +inf. > >> >> > > >> > I want to know x in the case of 99%: meaning that the > integral > >> >> (-x > >> >> > to > >> >> > > >> +x) of the function is 0.99. Due to the symetry of the curve > you > >> >> can > >> >> > > also set > >> >> > > >> the integral from 0 to +x equal to (0.99/2): > >> >> > > >> > > >> >> > > >> > 0.99 = > >> >> integral((1/((s*K)*sqrt(2*pi)))*exp((-1/2*(((x-m)/s*K))^2)), > >> >> > > -x, > >> >> > > >> x) > >> >> > > >> > resp. > >> >> > > >> > (0.99/2) = > >> >> > > integral((1/((s*K)*sqrt(2*pi)))*exp((-1/2*(((x-m)/s*K))^2)), > >> >> > > >> 0, x) > >> >> > > >> > > >> >> > > >> > How can I solve that question in Scipy/python > >> >> > > >> > so that I get x in the end. I don't know how to write > >> >> > > >> > the code... > >> >> > > >> > >> >> > > >> > >> >> > > >> ---> > >> >> > > >> erf(x[, out]) > >> >> > > >> > >> >> > > >> ? ? y=erf(z) returns the error function of complex argument > >> defined > >> >> > > as > >> >> > > >> ? ? 
From josef.pktd at gmail.com  Fri Jan 7 07:08:56 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Fri, 7 Jan 2011 07:08:56 -0500
Subject: [SciPy-User] GIS Raster Calculation: Combination of NumPy Arrays and SciPy Density function
In-Reply-To: <20110107115015.176710@gmx.net>
References: <20101221110650.32180@gmx.net> <07515CE8-8F03-40C6-9A29-FA1AE7AE8AF1@gmail.com> <20101221124827.53380@gmx.net> <20110106112758.65840@gmx.net> <20110106120156.165600@gmx.net> <20110106145908.118630@gmx.net> <20110107115015.176710@gmx.net>
Message-ID:

On Fri, Jan 7, 2011 at 6:50 AM, Johannes Radinger wrote:
> [...]
> And just another short question about the stats.norm.cdf function: That is
> calculating the cumulative probability from the smallest value (most
> negative x) to the point x, but is there also a function which gives the
> cumulative sum away from 0 to point x in both directions (negative and
> positive)? Like an integration of the probability function from 0 to (-)x?

cdf is the standard definition of the cdf, integration from lower
support limit, attribute .a, for normal it's -np.inf, to point x.
there is also sf(x) = 1 - cdf(x)

in general
Prob(0 < x) = cdf(x) - cdf(0)

for symmetric centered  Prob(0 < x) = cdf(x) - 0.5
for the normal you don't gain anything compared to cdf(x) - 0.5

Josef

> thank you
> /j
> [...]
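In code, the two-sided "away from zero" probability described above is
just a difference of cdf values, with an absolute value to cover negative
x; a small sketch, with made-up sample distances:

import numpy as np
from scipy import stats

s = 200.0
x = np.array([-500.0, -50.0, 50.0, 500.0])  # made-up distances
# Prob(between 0 and x) = |cdf(x) - cdf(0)|; for a distribution
# centered at 0, cdf(0) is exactly 0.5
prob_from_zero = np.abs(stats.norm.cdf(x, scale=s) - 0.5)
print(prob_from_zero)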
From zachary.pincus at yale.edu  Fri Jan 7 07:17:03 2011
From: zachary.pincus at yale.edu (Zachary Pincus)
Date: Fri, 7 Jan 2011 07:17:03 -0500
Subject: [SciPy-User] Raster Distance calculation of an numpy array?
In-Reply-To: <20110107114311.176740@gmx.net>
References: <4D26F96C.3070607@unitn.it> <20110107114311.176740@gmx.net>
Message-ID:

Hi,

You could use code that Almar Klein (and I to a lesser degree) wrote for
the scikits.image package -- it's an implementation of Dijkstra's
algorithm ("Minimum Cost Path") for numpy arrays. You provide a "costs"
array and a starting point and it finds the total cost to travel to every
point in the array from that starting point. (Multiple start points and
specified end points are allowed too.)

So your "costs" array would be the rasterized river; let river pixels have
cost-1 to traverse, and let other pixels have some very high cost. (As it
stands, the algorithm chokes on infinite-cost pixels when no specified
endpoints are given, as it forever tries to find a cheap path to these
pixels -- I need to fix this...):

import scikits.image.graph as graph
import numpy
costs = numpy.empty((10,10))
costs.fill(1e10)  # non-river
costs[3,:] = 1    # a simple river
mcp = graph.MCP_Geometric(costs)
cumulative_costs, traceback = mcp.find_costs([[3,0]])

cumulative_costs now contains the distance from the starting point to
every other river point.

Zach

On Jan 7, 2011, at 6:43 AM, Johannes Radinger wrote:
> [...]
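If only the river cells are of interest in the result, the placeholder-cost
cells can be masked out afterwards; a two-line follow-up sketch that assumes
the costs and cumulative_costs arrays from the example above:

import numpy as np

# NaN outside the river, cumulative flow distance on it
river_distances = np.where(costs == 1, cumulative_costs, np.nan)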
From JRadinger at gmx.at  Fri Jan 7 07:47:40 2011
From: JRadinger at gmx.at (Johannes Radinger)
Date: Fri, 07 Jan 2011 13:47:40 +0100
Subject: [SciPy-User] Raster Distance calculation of an numpy array?
In-Reply-To:
References:
Message-ID: <20110107124740.199740@gmx.net>

-------- Original-Nachricht --------
> Datum: Fri, 7 Jan 2011 07:17:03 -0500
> Von: Zachary Pincus
> An: SciPy Users List
> Betreff: Re: [SciPy-User] Raster Distance calculation of an numpy array?

> [...]

Just some additional questions which are still unclear:

1) how can I define the starting point?

2) My numpy array is derived from GIS so I know that each cell (now array
element) is 10x10 metres. So the geographical distance between two cells
depends on whether they are connected straight or diagonal. How is this
handled with your algorithm? E.g. North-South travel from one cell to
another is 10 m (from center to center) but a travel from Northeast to
Southwest (diagonal) is a distance of around 14.14 metres.

3) As I have to calculate quite big datasets very often, is it possible to
exclude NoData fields (or elements with a value 1e10) from the calculation
to decrease processing time?

thank you for your help zach!
> [...]
From josef.pktd at gmail.com  Fri Jan 7 08:18:43 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Fri, 7 Jan 2011 08:18:43 -0500
Subject: [SciPy-User] GIS Raster Calculation: Combination of NumPy Arrays and SciPy Density function
In-Reply-To:
References: <20101221110650.32180@gmx.net> <07515CE8-8F03-40C6-9A29-FA1AE7AE8AF1@gmail.com> <20101221124827.53380@gmx.net> <20110106112758.65840@gmx.net> <20110106120156.165600@gmx.net> <20110106145908.118630@gmx.net> <20110107115015.176710@gmx.net>
Message-ID:

On Fri, Jan 7, 2011 at 7:08 AM,  wrote:
> On Fri, Jan 7, 2011 at 6:50 AM, Johannes Radinger wrote:
>> [...]
>>> If your array is very large, there is a shortcut to (maybe) avoid some
>>> intermediate arrays.
>>
>> Oh thank you, that works perfectly. What do you mean with a shortcut?

use the internal method _pdf. It avoids the generic overhead, but the
user is responsible that it makes sense

>>> scale = 2
>>> stats.norm.pdf(np.ones((3,3)), scale=scale)
array([[ 0.17603266,  0.17603266,  0.17603266],
       [ 0.17603266,  0.17603266,  0.17603266],
       [ 0.17603266,  0.17603266,  0.17603266]])
>>> stats.norm._pdf(np.ones((3,3))/scale)/scale
array([[ 0.17603266,  0.17603266,  0.17603266],
       [ 0.17603266,  0.17603266,  0.17603266],
       [ 0.17603266,  0.17603266,  0.17603266]])

It might save a bit of memory and time, but I never actually compared
the performance.

Josef
>> And just another short question about the stats.norm.cdf function: That
>> is calculating the cumulative probability from the smallest value (most
>> negative x) to the point x, but is there also a function which gives the
>> cumulative sum away from 0 to point x in both directions (negative and
>> positive)? Like an integration of the probability function from 0 to (-)x?
>
> cdf is the standard definition of the cdf, integration from lower
> support limit, attribute .a, for normal it's -np.inf, to point x.
>
> there is also sf(x) = 1 - cdf(x)
>
> in general
> Prob(0 < x) = cdf(x) - cdf(0)
>
> for symmetric centered  Prob(0 < x) = cdf(x) - 0.5
> for the normal you don't gain anything compared to cdf(x) - 0.5
>
> Josef
> [...]
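Josef's performance hunch about _pdf is easy to check with timeit; a quick
sketch (the array size is arbitrary and the actual numbers will vary by
machine):

import timeit

setup = ("import numpy as np; from scipy import stats; "
         "x = np.random.randn(1000, 1000); scale = 2.0")
# generic entry point versus the raw standard-normal kernel
t_pdf = timeit.timeit("stats.norm.pdf(x, scale=scale)", setup=setup, number=10)
t_raw = timeit.timeit("stats.norm._pdf(x / scale) / scale", setup=setup, number=10)
print(t_pdf)
print(t_raw)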
From zachary.pincus at yale.edu  Fri Jan 7 08:29:49 2011
From: zachary.pincus at yale.edu (Zachary Pincus)
Date: Fri, 7 Jan 2011 08:29:49 -0500
Subject: [SciPy-User] Raster Distance calculation of an numpy array?
In-Reply-To: <20110107124740.199740@gmx.net>
References: <4D26F96C.3070607@unitn.it> <20110107114311.176740@gmx.net> <20110107124740.199740@gmx.net>
Message-ID: <86D8974C-59D4-4B74-AA6E-E9E4F260C8F6@yale.edu>

OK, I've updated my branch of scikits.image with some simple fixes to make
this sort of thing easier; hopefully Stéfan will pull it into the main
branch soon, but for now here it is:
https://github.com/zachrahan/scikits.image/tree/mcp_infinite

Here's an updated example:

import scikits.image.graph as graph
import numpy
costs = numpy.empty((10,10))
costs.fill(-1)  # negative-cost pixels will not be considered
costs[3,:] = 1  # a straight "river"
starting_point = [3,0]
mcp = graph.MCP_Geometric(costs)
cumulative_costs = mcp.find_costs([starting_point])[0]

> Just some additional questions which are still unclear:
>
> 1) how can I define the starting point?

The MCP and MCP_Geometric classes are pretty well-documented... once you
get the scikit installed you can peruse the docstrings interactively (do
you use ipython? it has great features for docstring-reading, among other
useful stuff). For now, you can also peruse the code (with docstrings
embedded) here:
https://github.com/zachrahan/scikits.image/blob/mcp_infinite/scikits/image/graph/_mcp.pyx

To answer your question, you pass a list of starting points to the
find_costs method, which returns an array of the cumulative costs (inf for
array elements not considered) and a "traceback" array, which indicates
the predecessor of each array element in the minimum-cost path from the
start. A convenience method traceback() will trace through this and return
a list of indices from the start to any specified endpoint.

> 2) My numpy array is derived from GIS so I know that each cell (now array
> element) is 10x10 metres. So the geographical distance between two cells
> depends on whether they are connected straight or diagonal. How is this
> handled with your algorithm? E.g. North-South travel from one cell to
> another is 10 m (from center to center) but a travel from Northeast to
> Southwest (diagonal) is a distance of around 14.14 metres.

The MCP_Geometric class knows that diagonal moves should cost sqrt(2)
times as much. If you want your cost array in meters instead of pixels,
just set each pixel's cost to 10, for 10x10-meter pixels. (The base MCP
class counts all moves as the same cost.) Also, you can exclude diagonal
moves (or specify a list of allowable move directions), which is useful in
some cases, but not really this one.

Also, for fun, say you had measures of "downstream flow rates" that varied
across the river -- you could use the "seconds/10-meters" flow rate as the
cost input, and then the cumulative cost returned would be not river
distance, but time to reach a certain point. (Though this is of course not
a fluid simulation... the MCP classes have no capability to deal with
anisotropic travel costs [e.g. downstream is faster than cross-stream or
upstream], but it would give an OK approximation for downstream travel
times, and keep the paths centered in the fast-flowing middle channel,
say.)

> 3) As I have to calculate quite big datasets very often, is it possible to
> exclude NoData fields (or elements with a value 1e10) from the calculation
> to decrease processing time?

My new branch on github now excludes negative or infinite-cost elements
from consideration, which will speed things a lot.
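Pulling those answers together for the 10x10-metre river raster, a minimal
sketch, assuming the mcp_infinite branch above is installed as
scikits.image (the endpoint chosen for the traceback is an arbitrary river
cell):

import numpy as np
import scikits.image.graph as graph

costs = np.empty((10, 10))
costs.fill(-1)       # negative cells are skipped entirely
costs[3, :] = 10.0   # river cells cost 10 metres per pixel step
mcp = graph.MCP_Geometric(costs)
# several start points may be passed; here just one
cumulative_costs, traceback_array = mcp.find_costs([[3, 0]])
# flow distance in metres to the far end of the river; diagonal
# steps (none in this straight river) would count sqrt(2) * 10
print(cumulative_costs[3, 9])
path = mcp.traceback([3, 9])  # indices from the start to that cell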
There are somewhat faster algorithms to calculate this sort of weighted
distance transform (fast marching methods, e.g.), but Dijkstra's algorithm
is decent for this, and a lot more general. (The MCP classes are
implemented in cython and thus compiled down to C, too, and run pretty
quickly.)

Zach

> thank you for your help zach!
> [...]

From f.pollastri at inrim.it  Sun Jan 2 13:36:13 2011
From: f.pollastri at inrim.it (Fabrizio Pollastri)
Date: Sun, 02 Jan 2011 19:36:13 +0100
Subject: [SciPy-User] pandas: independent row sorting of data frame
Message-ID: <4D20C59D.5000001@inrim.it>

Hello,

I am trying to convert the following R code to python + pandas. The code
sorts each row of an xts multiseries (xs) independently, obtaining a
sorting index and a sorted copy of the original multiseries.

sort_index = t(apply(as.matrix(coredata(xs)),1,order,decreasing=TRUE))
sorted_xs = t(apply(as.matrix(coredata(xs)),1,sort,decreasing=TRUE))

Playing with the pandas data frame, I found the equivalent statement for
sort_index:

sort_index = df.tapply(argsort)

What is the equivalent to obtain a sorted copy?

TIA,
Fabrizio Pollastri
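For the row-sorted copy asked about above, plain NumPy on the frame's
values gives both pieces; a hedged sketch that sidesteps tapply entirely
(the example frame is made up):

import numpy as np
import pandas

df = pandas.DataFrame(np.random.randn(4, 3))
# per-row ordering, descending, like order(..., decreasing=TRUE)
sort_index = np.argsort(-df.values, axis=1)
# row-sorted copy, descending, like sort(..., decreasing=TRUE)
sorted_df = pandas.DataFrame(np.sort(df.values, axis=1)[:, ::-1],
                             index=df.index, columns=df.columns)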
From armin.hoebart at aon.at  Mon Jan 3 07:42:50 2011
From: armin.hoebart at aon.at (Tintifax)
Date: Mon, 3 Jan 2011 04:42:50 -0800 (PST)
Subject: [SciPy-user] installing scikits.timeseries
Message-ID: <30577564.post@talk.nabble.com>

I tried to install scikits.timeseries via "easy_install" and received the
following error message:

Found executable C:\MinGW32\bin\gcc.exe
Found executable C:\MinGW32\bin\g++.exe
zip_safe flag not set; analyzing archive contents...
scikits.timeseries.version: module references __file__
Adding scikits.timeseries 0.91.3 to easy-install.pth file
Installed c:\python26\lib\site-packages\scikits.timeseries-0.91.3-py2.6-win32.egg
Processing dependencies for scikits.timeseries
Finished processing dependencies for scikits.timeseries
C:\Python26\lib\site-packages\numpy\distutils\misc_util.py:251: RuntimeWarning:
Parent module 'numpy.distutils' not found while handling absolute import
  from numpy.distutils import log

Does anybody have an idea how I can install scikits.timeseries without
getting this error message? I really appreciate every little hint.

Thank You!!

--
View this message in context: http://old.nabble.com/installing-scikits.timeseries-tp30577564p30577564.html
Sent from the Scipy-User mailing list archive at Nabble.com.

From Jordan.Nickerson at phd.mccombs.utexas.edu  Thu Jan 6 15:31:36 2011
From: Jordan.Nickerson at phd.mccombs.utexas.edu (Jordan Nickerson {msbcb755})
Date: Thu, 6 Jan 2011 14:31:36 -0600
Subject: [SciPy-User] using an optimizer/solver to solve f(x)=0
In-Reply-To:
References:
Message-ID: <67DA7573-C5A3-472E-9E5E-2C50402C4923@phd.mccombs.utexas.edu>

It's possible that there are multiple solutions to the problem, but I only
need one.

Sent from my iPhone

On Jan 6, 2011, at 2:28 PM, "Sebastian Walter" wrote:

> Could you explain your problem in more detail?
> For instance the function f(x) = x_1^2 + x_2^2 - 1
> has infinitely many solutions (the unit circle).
> I know that optimization algorithms can have problems with problems
> where the solution is not unique.
> Maybe you can add additional constraints to the problem to make the
> minimizer unique?
>
> Sebastian
>
> On Thu, Jan 6, 2011 at 8:46 PM, jordan wrote:
>> [...]

From robert.kern at gmail.com  Fri Jan 7 19:06:00 2011
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 7 Jan 2011 18:06:00 -0600
Subject: [SciPy-User] genetic algorithm
In-Reply-To: <177c69c0-2755-4f84-a981-a860bd0bb6a0@v17g2000prc.googlegroups.com>
References: <177c69c0-2755-4f84-a981-a860bd0bb6a0@v17g2000prc.googlegroups.com>
Message-ID:

On Wed, Dec 29, 2010 at 12:29, aashish gagneja wrote:
> hi
> i need your kind attention and help for my problem.
> i have to design a FIR filter using a genetic algorithm in Matlab; kindly
> help me to design such a filter using the GA toolbox, or by Matlab
> programming, or by C++ programming. in my proposed filter, power
> consumption reduces as hamming distance (signal toggling between bits) is
> reduced; also mean square error is reduced, so the overall fitness
> function is
> F = w1*fM + w2*fH
> where fM and fH are fitness functions due to mean square error and error
> due to hamming distance, and w1, w2 are weights such that w1 + w2 = 1
> also F = 1/(1 + Etot)
> where Etot is total error
>
> KINDLY GUIDE ME HOW TO USE THE GADS TOOLBOX FOR THIS, i.e. OBJECTIVE
> FUNCTION, LINEAR CONSTRAINTS ETC

I'm sorry, but this is not an appropriate mailing list for MATLAB or C++
questions.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From brian.murphy at unitn.it  Sat Jan 8 11:46:41 2011
From: brian.murphy at unitn.it (Brian Murphy)
Date: Sat, 8 Jan 2011 17:46:41 +0100
Subject: [SciPy-User] code for multidimensional scaling?
In-Reply-To:
References: <4D26F96C.3070607@unitn.it>
Message-ID: <4D2894F1.6040700@unitn.it>

Hi Matthieu,

thanks very much. I downloaded your fork and installed it, but ran into
some problems. Installation seemed to work OK, but tests failed, and the
module wasn't available from inside ipython.

I am running Ubuntu 9.10 (Karmic Koala), and Python 2.6. I followed the
instructions in the README and at
http://scikit-learn.sourceforge.net/install.html. I checked prerequisites:

sudo apt-get install python-dev python-numpy python-setuptools
python-scipy libatlas-dev g++

libatlas-dev doesn't seem to be available for my distribution, but
libatlas3gf-base and libatlas-base-dev appear to be the appropriate
(CPU-independent) versions.

Then from the directory holding my git-cloned copy of your fork, I ran:
sudo python setup.py install
... which threw warnings, but no errors. Testing did fail though...
nosetests / sudo python setup.py test
... with the following errors ...

ERROR: Failure: ImportError (No module named compression.barycenters)
FAIL: test_train (scikits.learn.tests.test_gmm.TestGMMWithTiedCovars)
FAIL: test_train (scikits.learn.tests.test_gmm.TestGMMWithFullCovars)
FAIL: Test BayesianRidge on diabetes
FAIL: Check the performance of hashing numpy arrays:

If I try to use it from ipython I get:

In [1]: import scikits.learn
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
/home/brian/CIMeC/conceptNeuro/pyMVPAdev/scikitLearn/scikit-learn/ in ()
ImportError: No module named learn

Next, I also tried hacking a solution by installing your fork from source
over the NeuroDebian distributed version of scikits-learn (v4.2), but that
made no difference. I checked the mailing lists too, and can't find any
relevant posts.

Any idea what I am doing wrong?

any suggestions welcome. If this is a general problem to do with source
installation, and nothing to do with your fork, let me know and I'll send
a query to the mailing list.

best,

Brian

Matthieu Brucher wrote:
> [...]

--
Brian Murphy
Post-Doctoral Researcher
Language, Interaction and Computation Lab
Centre for Mind/Brain Sciences
University of Trento
http://clic.cimec.unitn.it/brian/
From pgarrone at optusnet.com.au  Sat Jan 8 20:18:08 2011
From: pgarrone at optusnet.com.au (Peter John Garrone)
Date: Sun, 9 Jan 2011 12:18:08 +1100
Subject: [SciPy-User] using an optimizer/solver to solve f(x)=0
In-Reply-To:
References:
Message-ID: <20110109011808.GA21808@bacchus>

Do not the solvers find x where Ax = B, while you appear to have a
minimization problem, so the "optimize" area would be the place to look,
and you are on the right track?

Perhaps you have a problem that is physically stable. If so, you could
integrate it over time using a differential equation solver such as
odeint, until it reaches equilibrium.

On Thu, Jan 06, 2011 at 07:46:49PM +0000, jordan wrote:
> [...]
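A minimal sketch of that integrate-to-equilibrium idea, written as a
gradient flow on f(x)**2 / 2 so it is stationary exactly where f(x) = 0;
the objective f, its gradient, and the time span are toy placeholders,
not the original poster's function:

import numpy as np
from scipy.integrate import odeint

def f(x):
    # toy stand-in for the real 4-parameter objective
    return np.sum((x - 1.0)**2) - 2.0

def grad_f(x):
    return 2.0 * (x - 1.0)

def rhs(x, t):
    # gradient flow on f(x)**2 / 2: descends toward the f(x) = 0 set
    return -f(x) * grad_f(x)

x0 = np.zeros(4)
t = np.linspace(0.0, 50.0, 2000)
x_eq = odeint(rhs, x0, t)[-1]
print(x_eq, f(x_eq))  # f(x_eq) should be close to zero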
From ben.root at ou.edu  Sat Jan 8 22:05:53 2011
From: ben.root at ou.edu (Benjamin Root)
Date: Sat, 8 Jan 2011 21:05:53 -0600
Subject: [SciPy-User] code for multidimensional scaling?
In-Reply-To: <4D2894F1.6040700@unitn.it>
References: <4D26F96C.3070607@unitn.it> <4D2894F1.6040700@unitn.it>
Message-ID:

On Sat, Jan 8, 2011 at 10:46 AM, Brian Murphy wrote:

> Hi Matthieu
>
> thanks very much. I downloaded your fork and installed it, but ran into
> some problems. Installation seemed to work OK, but tests failed, and the
> module wasn't available from inside ipython.
>
> I am running Ubuntu 9.10 (Karmic Koala), and Python 2.6. I followed the
> instructions in the README and at
> http://scikit-learn.sourceforge.net/install.html. I checked
> prerequisites:
>
> sudo apt-get install python-dev python-numpy python-setuptools
> python-scipy libatlas-dev g++
>
> libatlas-dev doesn't seem to be available for my distribution, but
> libatlas3gf-base and libatlas-base-dev appear to be the appropriate (CPU
> independent) versions.
>
> Then from the directory holding my git cloned copy of your fork, I ran:
> sudo python setup.py install
> ... which threw warnings, but no errors. Testing did fail though...
> nosetests / sudo python setup.py test
> ... with the following errors ...
> ERROR: Failure: ImportError (No module named compression.barycenters)
> FAIL: test_train (scikits.learn.tests.test_gmm.TestGMMWithTiedCovars)
> FAIL: test_train (scikits.learn.tests.test_gmm.TestGMMWithFullCovars)
> FAIL: Test BayesianRidge on diabetes
> FAIL: Check the performance of hashing numpy arrays:
>
> If I try to use it from ipython I get:
> In [1]: import scikits.learn
> ---------------------------------------------------------------------------
> ImportError                               Traceback (most recent call last)
> /home/brian/CIMeC/conceptNeuro/pyMVPAdev/scikitLearn/scikit-learn/ in ()
> ImportError: No module named learn
>
> Next, I also tried hacking a solution by installing your fork from
> source over the NeuroDebian distributed version of scikits-learn (v4.2),
> but that made no difference. I checked the mailing lists too, and can't
> find any relevant posts.
>
> Any idea what I am doing wrong?
>
> any suggestions welcome. If this is a general problem to do with source
> installation, and nothing to do with your fork, let me know and I'll
> send a query to the mailing list.
>
> best,
>
> Brian
>

Brian, libatlas-dev is available for Ubuntu. Make sure that the Universe repo is turned on. There is also libatlas-base-dev that contains the bulk of the package.

I suspect most of your issues are due to the lack of the libatlas packages.

Ben Root
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pgarrone at optusnet.com.au  Sun Jan 9 05:44:38 2011
From: pgarrone at optusnet.com.au (Peter John Garrone)
Date: Sun, 9 Jan 2011 21:44:38 +1100
Subject: [SciPy-User] Need a fast cuda/opencl sundials nvector implementation
Message-ID: <20110109104438.GA23674@bacchus>

Hi,
A couple weeks ago I mentioned that I was having problems with integrating ODE's once the problem size reached several thousand states. The ode solver was taking too long compared to functional and jacobian evaluations.

Since then, I have located the PySUNDIALS package that interfaces to the sundials ODE solving library. This package opens up to allow use of different vector types and has various iterative solvers available, and also extra callbacks to allow use of these solvers. The use of this package allowed larger problems to be worked on without running into memory problems due to dense and banded matrix sizes, and was also faster with the large size problems.

However this algorithm problem is still being encountered, where the cpu time is being consumed by the algorithm and not the callbacks. The design of the sundials interface allows its nvector type, which carries out the state-based internal calculations, to be re-implemented. One approach might be to re-implement the nvector type using CUDA or OpenCL. Using google, some papers hint at this being done, but my searches are otherwise fruitless.

I wonder if anybody knows some project, preferably open-source, where something like this has been attempted. Although it is designed to be reimplemented in C, one could well do it using python ctypes.

Thanks in advance for any reply.
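Before writing a full NVector_serial replacement, the theory can be tested cheaply with PyCUDA's gpuarray, which already runs elementwise arithmetic and reductions on the device. A sketch of the two workhorse NVector-style operations (this assumes PyCUDA is installed and is only an illustration, not an NVector implementation):

import numpy as np
import pycuda.autoinit
import pycuda.gpuarray as gpuarray

n = 100000
x = gpuarray.to_gpu(np.random.rand(n).astype(np.float32))
y = gpuarray.to_gpu(np.random.rand(n).astype(np.float32))

# linear sum, z = a*x + b*y: the most frequent NVector operation,
# executed as GPU kernels without copying data back to the host
a, b = 2.0, -0.5
z = a * x + b * y

# dot product / RMS norm, reduced on the device
print np.sqrt(gpuarray.dot(z, z).get() / n)

Timing loops like this against the equivalent numpy expressions would show whether the vector ops are worth moving to the GPU at all.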
From sebastian.walter at gmail.com  Sun Jan 9 06:21:17 2011
From: sebastian.walter at gmail.com (Sebastian Walter)
Date: Sun, 9 Jan 2011 12:21:17 +0100
Subject: [SciPy-User] Need a fast cuda/opencl sundials nvector implementation
In-Reply-To: <20110109104438.GA23674@bacchus>
References: <20110109104438.GA23674@bacchus>
Message-ID:

Hello Peter,
so you are saying that you want to replace the built-in functionality to solve large sparse linear systems by your own code which exploits the structure of your Jacobian?

Sebastian

On Sun, Jan 9, 2011 at 11:44 AM, Peter John Garrone wrote:
> Hi,
> A couple weeks ago I mentioned that I was having problems with integrating ODE's once the problem size reached several thousand states. The ode solver was taking too long compared to functional and jacobian evaluations.
>
> Since then, I have located the PySUNDIALS package that interfaces to the sundials ODE solving library. This package opens up to allow use of different vector types and has various iterative solvers available, and also extra callbacks to allow use of these solvers. The use of this package allowed larger problems to be worked on without running into memory problems due to dense and banded matrix sizes, and was also faster with the large size problems.
>
> However this algorithm problem is still being encountered, where the cpu time is being consumed by the algorithm and not the callbacks. The design of the sundials interface allows its nvector type, which carries out the state-based internal calculations, to be re-implemented. One approach might be to re-implement the nvector type using CUDA or OpenCL. Using google, some papers hint at this being done, but my searches are otherwise fruitless.
>
> I wonder if anybody knows some project, preferably open-source, where something like this has been attempted. Although it is designed to be reimplemented in C, one could well do it using python ctypes.
>
> Thanks in advance for any reply.
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From denis-bz-gg at t-online.de  Sun Jan 9 10:54:31 2011
From: denis-bz-gg at t-online.de (denis)
Date: Sun, 9 Jan 2011 07:54:31 -0800 (PST)
Subject: [SciPy-User] code for multidimensional scaling?
In-Reply-To: <4D26F96C.3070607@unitn.it>
References: <4D26F96C.3070607@unitn.it>
Message-ID: <75622cac-e43e-4a63-afba-97afa908d7fe@i41g2000vbn.googlegroups.com>

Brian, folks,
here's a simple mdscale in 10 lines, using np.linalg.eigh: fast enough for N up to 1000 or so. What are your N, dim, p ?
(Two threads, one on MDS and another on Min-cost Path, got merged here in google groups ??)

cheers
-- denis

# Multidim scaling using np.linalg.eigh
from __future__ import division
import numpy as np

__date__ = "2011-01-09 Jan denis"

def mdscale_eigh( D, p=2 ):
    """ in: D pairwise distances, |Xi - Xj|
        out: M pvecs N x p with |Mi - Mj| ~ D (mod flip, rotate)
        uses np.linalg.eigh
    """
    D2 = D**2
    av = D2.mean( axis=0 )
    B = -.5 * ((D2 - av).T - av + D2.mean())
    evals, evecs = np.linalg.eigh( B )  # few biggest evals / evecs ?
    pvals = evals[-p:] ** .5
    pvecs = evecs[:,-p:]
    M = pvecs * pvals
    return M - M.mean( axis=0 )

#...............................................................................
if __name__ == "__main__": import sys import scipy.spatial.distance as ssdist # $scipy/spatial/ distance.py N = 1000 dim = 3 p = 2 plot = 0 seed = 1 exec "\n".join( sys.argv[1:] ) # run this.py N=1000 dim=3 p=2: 19 sec np.set_printoptions( 1, threshold=100, edgeitems=5, suppress=True ) np.random.seed(seed) print "\nN=%d dim=%d p=%d" % (N, dim, p) X = np.random.uniform( -10, 10, size=(N,dim) ) X -= X.mean( axis=0 ) D = ssdist.squareform( ssdist.pdist( X, "euclidean" )) def pa( A, name="" ): """ print A summary: mean histo etc. """ histo = np.histogram( A, 5 ) print "%s av %.2g histo %s edges %s" % ( name, A.mean(), histo[0], histo[1]) # print A.round().astype(int) pa( D, "D" ) #............................................................................... M = mdscale_eigh( D, p ) M -= M.mean( axis=0 ) # ? print "M eigenvectors:\n", M.T.round().astype(int) mdist = ssdist.squareform( ssdist.pdist( M, "euclidean" )) pa( mdist, "mdist" ) On Jan 7, 12:30?pm, Brian Murphy wrote: > Hi, > > I'm new to the list, so I hope my question is appropriate. I'm looking > for code that implements multi-dimensional scaling (e.g. like Matlab's > mdscale command) in Python. From polish at dtgroup.com Sun Jan 9 16:34:56 2011 From: polish at dtgroup.com (Nathaniel Polish) Date: Sun, 09 Jan 2011 16:34:56 -0500 Subject: [SciPy-User] Python and Eclipse Message-ID: Anyone have any experience with Python on Eclipse? Is this a hopelessly antiquated approach that only a c/c++/java programmer could want? Nat From yury at shurup.com Sun Jan 9 17:14:28 2011 From: yury at shurup.com (Yury V. Zaytsev) Date: Sun, 09 Jan 2011 23:14:28 +0100 Subject: [SciPy-User] Python and Eclipse In-Reply-To: References: Message-ID: <1294611268.15720.14.camel@mypride> On Sun, 2011-01-09 at 16:34 -0500, Nathaniel Polish wrote: > Anyone have any experience with Python on Eclipse? > > > > Is this a hopelessly antiquated approach that only a c/c++/java programmer > could want? It is not exactly clear to me what is your actual question. Q: Is it OK to use Eclipse + PyDev to develop in Python + SciPy? A: Yes, many people do it. There are other IDE's out there as well. Q: Is it OK to use Ant with Python? A: Generally, Python has its own build / distributions tools. A: If using Ant makes you more productive, it's up to you. Q: ... A: ... HTH, -- Sincerely yours, Yury V. Zaytsev From polish at dtgroup.com Sun Jan 9 17:30:00 2011 From: polish at dtgroup.com (Nathaniel Polish) Date: Sun, 09 Jan 2011 17:30:00 -0500 Subject: [SciPy-User] Python and Eclipse In-Reply-To: <1294611268.15720.14.camel@mypride> References: <1294611268.15720.14.camel@mypride> Message-ID: <8C6F05827E2ECF5743D95CD2@[192.168.1.112]> I suppose that I was asking a style and taste question. There are lots of environments -- some are in use for historical reasons -- some are more modern. I for example have been known to write c code in emacs, compile with gcc and debug in gdb. I would never recommend it to anyone under the age of 40 since IDEs such as Eclipse are better in just about every way. I was looking for recommendations for development environments that are considered by the community to be "modern". --On Sunday, January 09, 2011 11:14 PM +0100 "Yury V. Zaytsev" wrote: > On Sun, 2011-01-09 at 16:34 -0500, Nathaniel Polish wrote: >> Anyone have any experience with Python on Eclipse? >> >> >> >> Is this a hopelessly antiquated approach that only a c/c++/java >> programmer could want? > > It is not exactly clear to me what is your actual question. 
> > Q: Is it OK to use Eclipse + PyDev to develop in Python + SciPy? > A: Yes, many people do it. There are other IDE's out there as well. > > Q: Is it OK to use Ant with Python? > A: Generally, Python has its own build / distributions tools. > A: If using Ant makes you more productive, it's up to you. > > Q: ... > A: ... > > HTH, > > -- > Sincerely yours, > Yury V. Zaytsev > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From gael.varoquaux at normalesup.org Sun Jan 9 17:32:29 2011 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sun, 9 Jan 2011 23:32:29 +0100 Subject: [SciPy-User] Python and Eclipse In-Reply-To: <8C6F05827E2ECF5743D95CD2@[192.168.1.112]> References: <1294611268.15720.14.camel@mypride> <8C6F05827E2ECF5743D95CD2@[192.168.1.112]> Message-ID: <20110109223229.GO22885@phare.normalesup.org> On Sun, Jan 09, 2011 at 05:30:00PM -0500, Nathaniel Polish wrote: > I was looking for recommendations for development environments that are > considered by the community to be "modern". vim for code editing, ipython for exploring, debugging and profiling That's what I use, and I am under 40 (actually still under 30 for a short while) :$ From yury at shurup.com Sun Jan 9 18:17:16 2011 From: yury at shurup.com (Yury V. Zaytsev) Date: Mon, 10 Jan 2011 00:17:16 +0100 Subject: [SciPy-User] Python and Eclipse In-Reply-To: <8C6F05827E2ECF5743D95CD2@[192.168.1.112]> References: <1294611268.15720.14.camel@mypride> <8C6F05827E2ECF5743D95CD2@[192.168.1.112]> Message-ID: <1294615036.15720.24.camel@mypride> On Sun, 2011-01-09 at 17:30 -0500, Nathaniel Polish wrote: > I would never recommend it to anyone under the age of 40 since IDEs > such as Eclipse are better in just about every way. This is so arguable... Ok, let's not get started on that, at least it is now clear what did you have in mind when you were asking your question. The combinations that I've seen people using for SciPy / Numpy: 1) Aptana (= Eclipse + PyDev) 2) PyCharm (= IDEA + Python plugin, commercial) 3) Wingware IDE (commercial) 4) vim / emacs + Python shell 5) Eric, Spyder, other lightweight IDE's All of them, can be, of course, complimented by ipython -pylab for quick experimentation. Specific choice is purely a matter of taste / what makes you personally most productive. > I was looking for recommendations for development environments that are > considered by the community to be "modern". Well, I personally use PyCharm. I guess I am a very modern guy. Not sure if it's a compliment though :-) It's paid-for, but if you have ever used any of the other JetBrains IDE's you can understand why one would want to pay for it. -- Sincerely yours, Yury V. Zaytsev From cournape at gmail.com Sun Jan 9 20:15:44 2011 From: cournape at gmail.com (David Cournapeau) Date: Mon, 10 Jan 2011 10:15:44 +0900 Subject: [SciPy-User] Python and Eclipse In-Reply-To: <8C6F05827E2ECF5743D95CD2@192.168.1.112> References: <1294611268.15720.14.camel@mypride> <8C6F05827E2ECF5743D95CD2@192.168.1.112> Message-ID: On Mon, Jan 10, 2011 at 7:30 AM, Nathaniel Polish wrote: > I suppose that I was asking a style and taste question. ?There are lots of > environments -- some are in use for historical reasons -- some are more > modern. ?I for example have been known to write c code in emacs, compile > with gcc and debug in gdb. ?I would never recommend it to anyone under the > age of 40 since IDEs such as Eclipse are better in just about every way. 
Quite a few people think otherwise, and prefer emacs+toolchain over IDE. Modern or not is irrelevant: what matters is what an IDE gives you that you cannot get with a tool like emacs. In Python's case, my experience is not much. Other people may prefer to get their VCS, etc. integrated in one tool.

Some languages like java have pretty good IDE, but what they offer is not available for python (refactoring, for example - I have not seen much of anything useful for advanced refactoring in python yet).

cheers,

David

From charlesr.harris at gmail.com  Sun Jan 9 21:13:24 2011
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sun, 9 Jan 2011 19:13:24 -0700
Subject: [SciPy-User] Python and Eclipse
In-Reply-To: <1294615036.15720.24.camel@mypride>
References: <1294611268.15720.14.camel@mypride> <8C6F05827E2ECF5743D95CD2@192.168.1.112> <1294615036.15720.24.camel@mypride>
Message-ID:

On Sun, Jan 9, 2011 at 4:17 PM, Yury V. Zaytsev wrote:

> On Sun, 2011-01-09 at 17:30 -0500, Nathaniel Polish wrote:
>
> > I would never recommend it to anyone under the age of 40 since IDEs
> > such as Eclipse are better in just about every way.
>
> This is so arguable... Ok, let's not get started on that, at least it is
> now clear what did you have in mind when you were asking your question.
>

And I was so hoping for an emacs/vim flame war ;)

> The combinations that I've seen people using for SciPy / Numpy:
>
> 1) Aptana (= Eclipse + PyDev)
> 2) PyCharm (= IDEA + Python plugin, commercial)
> 3) Wingware IDE (commercial)
> 4) vim / emacs + Python shell
> 5) Eric, Spyder, other lightweight IDE's
>

I've seen folks on windows running Eclipse using the Python(x,y) distribution. It looked pretty cool. If I have to use windows I'll probably give it a shot.

> All of them, can be, of course, complimented by ipython -pylab for quick
> experimentation. Specific choice is purely a matter of taste / what
> makes you personally most productive.
>
> > I was looking for recommendations for development environments that are
> > considered by the community to be "modern".
>
> Well, I personally use PyCharm. I guess I am a very modern guy. Not sure
> if it's a compliment though :-) It's paid-for, but if you have ever used
> any of the other JetBrains IDE's you can understand why one would want
> to pay for it.
>

Off to google the name. I don't end up using these IDE's but it's fun to see what's out there. And maybe I'm just an old dog.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From polish at dtgroup.com  Sun Jan 9 21:33:56 2011
From: polish at dtgroup.com (Nathaniel Polish)
Date: Sun, 09 Jan 2011 21:33:56 -0500
Subject: [SciPy-User] Python and Eclipse
In-Reply-To:
References: <1294611268.15720.14.camel@mypride> <8C6F05827E2ECF5743D95CD2@192.168.1.112> <1294615036.15720.24.camel@mypride>
Message-ID: <760A279CF63831CD4A521829@[192.168.1.112]>

Oh please. Emacs is for real men. We hack code in lisp in an emacs shell. I first used emacs in 1984. I used it to read email and net news. It was not just a text editor, it was a lifestyle. But that has NOTHING to do with Python. Though I bet you could build a great Python interpreter in emacs.

More to the point, I find that some languages/systems are much better learned and understood within a particular development environment. I'll probably stick with emacs plus the python shell for the moment. However, I'll probably try out the eclipse offering.

I came across enthought.com (they seem to host this list). It looks interesting.
Is it worthwhile as a distribution or just a way for commercial-types to get paid support?

If anyone wants to exchange emacs/vi flames with me that's fine, but it's probably off-list material.

Nat

--On Sunday, January 09, 2011 7:13 PM -0700 Charles R Harris wrote:

> On Sun, Jan 9, 2011 at 4:17 PM, Yury V. Zaytsev wrote:
>
> On Sun, 2011-01-09 at 17:30 -0500, Nathaniel Polish wrote:
>
>> I would never recommend it to anyone under the age of 40 since IDEs
>> such as Eclipse are better in just about every way.
>
> This is so arguable... Ok, let's not get started on that, at least it is
> now clear what did you have in mind when you were asking your question.
>
> And I was so hoping for an emacs/vim flame war ;)
>
> The combinations that I've seen people using for SciPy / Numpy:
>
> 1) Aptana (= Eclipse + PyDev)
> 2) PyCharm (= IDEA + Python plugin, commercial)
> 3) Wingware IDE (commercial)
> 4) vim / emacs + Python shell
> 5) Eric, Spyder, other lightweight IDE's
>
> I've seen folks on windows running Eclipse using the Python(x,y)
> distribution. It looked pretty cool. If I have to use windows I'll
> probably give it a shot.
>
> All of them, can be, of course, complimented by ipython -pylab for quick
> experimentation. Specific choice is purely a matter of taste / what
> makes you personally most productive.
>
>> I was looking for recommendations for development environments that are
>> considered by the community to be "modern".
>
> Well, I personally use PyCharm. I guess I am a very modern guy. Not sure
> if it's a compliment though :-) It's paid-for, but if you have ever used
> any of the other JetBrains IDE's you can understand why one would want
> to pay for it.
>
> Off to google the name. I don't end up using these IDE's but it's fun to
> see what's out there. And maybe I'm just an old dog.
>
> Chuck

From pgmdevlist at gmail.com  Fri Jan 7 17:17:46 2011
From: pgmdevlist at gmail.com (Pierre GM)
Date: Fri, 7 Jan 2011 23:17:46 +0100
Subject: [SciPy-User] [SciPy-user] installing scikits.timeseries
In-Reply-To: <30577564.post@talk.nabble.com>
References: <30577564.post@talk.nabble.com>
Message-ID:

On Jan 3, 2011, at 1:42 PM, Tintifax wrote:
>
> I tried to install scikits.timeseries via "easy_install" and received
> following Error Message:

Mmh, strange. Have you tried to download the package and install it the classical way (viz, through python setup.py install)? Let me know how it goes...
P.

From yury at shurup.com  Mon Jan 10 02:47:09 2011
From: yury at shurup.com (Yury V. Zaytsev)
Date: Mon, 10 Jan 2011 08:47:09 +0100
Subject: [SciPy-User] Python and Eclipse
In-Reply-To:
References: <1294611268.15720.14.camel@mypride> <8C6F05827E2ECF5743D95CD2@192.168.1.112>
Message-ID: <1294645629.6771.20.camel@mypride>

On Mon, 2011-01-10 at 10:15 +0900, David Cournapeau wrote:

> Some languages like java have pretty good IDE, but what they offer is
> not available for python (refactoring, for example - I have not seen
> much of anything useful for advanced refactoring in python yet).

That's one of the major reasons for me to stick to PyCharm at the moment. They have quite a bit of most common *working* refactorings available and also are very responsive towards bug reports and enhancement requests in this area.

Otherwise, it's what you said: integrated experience.

-- 
Sincerely yours,
Yury V. Zaytsev
From pgarrone at optusnet.com.au  Mon Jan 10 04:13:17 2011
From: pgarrone at optusnet.com.au (Peter John Garrone)
Date: Mon, 10 Jan 2011 20:13:17 +1100
Subject: [SciPy-User] Need a fast cuda/opencl sundials nvector implementation
In-Reply-To:
References: <20110109104438.GA23674@bacchus>
Message-ID: <20110110091317.GA24567@bacchus>

Hi,
Thanks for replying.

The sundials distribution, for cvode, provides a "psolve" preconditioning callback. Parameters to this callback include gamma and R, and the user is expected to set a vector called Z to the solution of (I - gamma*J)Z = R, where J is the Jacobian. I am doing this with standard scipy gmres, and it is very fast, and I have no problems there. I have not measured it, merely observed that it solves a ~100000 state system in a fraction of a second.

The problem seems to me to be that if I add up all the cpu time occupied with the four sundials-cvode callbacks, that is psetup, psolve, jtimes, and the function itself, it is only a small fraction, a few percent, of the total cpu time. So optimising these callbacks is pointless. This leaves the vector operations, which the user can replace. I assume that they are implemented fairly optimally, but not exploiting a GPU.

So what I am looking for is an implementation of the sundials NVector_serial construct using CUDA or OpenCL.

If no such implementation is available, I might have a go at implementing it, after verifying my theory that it is the vector ops taking all the cpu. However if something else is freely available, I would like to try it.

Peter

On Sun, Jan 09, 2011 at 12:21:17PM +0100, Sebastian Walter wrote:
> Hello Peter,
> so you are saying that you want to replace the built-in functionality
> to solve large sparse linear systems by your own code which exploits
> the structure of your Jacobian?
>
> Sebastian

From pawel.kw at gmail.com  Mon Jan 10 04:50:59 2011
From: pawel.kw at gmail.com (=?ISO-8859-2?Q?Pawe=B3_Kwa=B6niewski?=)
Date: Mon, 10 Jan 2011 10:50:59 +0100
Subject: [SciPy-User] using an optimizer/solver to solve f(x)=0
In-Reply-To:
References:
Message-ID:

If you do have a minimization problem, as it seems, you can try using optimize. You can find a pretty good description of the available algorithms here: http://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html (if you haven't found it already). If this doesn't work for some reason, you can try PyMinuit (http://code.google.com/p/pyminuit/) - it worked for me.

I can also recommend the website of Douglas Applegate: http://sites.google.com/site/applegatearchive/software/python-model-fitting
He has some nice wrappers around PyMinuit functions, designed for fitting, which can be very helpful in understanding how it works.

Hope this can help.

2011/1/6 jordan

> I'm trying to solve an objective function to get an answer as close to zero as
> possible. However, the function is not terribly well behaved and takes 4
> parameters as inputs. So far I've been using fmin and returning the absolute
> value of the function to try to get close to zero.
>
> I was wondering if there is a better approach? I looked into solvers but it
> seems like they only solve for a univariate case, or they expect the function to
> return an array of the same dimension as that of the set of parameters.
>
> Thanks
> jordan
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

-- 
Paweł
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
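A minimal sketch of the scipy.optimize route for the original four-parameter question (the objective f below is a made-up stand-in, not code from the thread):

import numpy as np
from scipy.optimize import fmin, fmin_powell

def f(p):
    # hypothetical, not particularly well-behaved function of 4 parameters
    a, b, c, d = p
    return np.sin(a) + (b - 1.0)**2 + abs(c*d) - 0.5

obj = lambda p: abs(f(p))   # drive f(p) toward zero by minimizing |f|

popt = fmin(obj, np.zeros(4))   # Nelder-Mead simplex, derivative-free
# fmin_powell(obj, np.zeros(4)) is another derivative-free option to try
print popt, f(popt)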
From brian.murphy at unitn.it  Mon Jan 10 05:20:54 2011
From: brian.murphy at unitn.it (Brian Murphy)
Date: Mon, 10 Jan 2011 11:20:54 +0100
Subject: [SciPy-User] installing from source on Ubuntu 9.10 (Karmic)
In-Reply-To:
References:
Message-ID: <4D2ADD86.3080501@unitn.it>

Dear Ben,

thanks for the tip. I already had the Universe repository selected, and it doesn't offer me libatlas-dev. It seems that variant only exists for more recent Ubuntus, e.g.:
http://packages.ubuntu.com/maverick/libatlas-dev

For Karmic I'm offered libatlas-base-dev, libatlas-headers, libatlas3gf-base, and then a series of other CPU specific variants (Pentium 3/4 and 3dNow). I've installed these three, and checked the associated libs (libblas, libc6, liblapack) are also there. Here's the output from apt-get to illustrate:

brian at brian2-desktop:~/CIMeC/conceptNeuro/pyMVPAdev$ sudo apt-get install python-dev python-numpy python-setuptools python-scipy g++ libblas-dev libc6-dev liblapack-dev libatlas-base-dev libatlas-dev
Reading package lists... Done
Building dependency tree
Reading state information... Done
python-dev is already the newest version.
python-numpy is already the newest version.
python-setuptools is already the newest version.
python-scipy is already the newest version.
g++ is already the newest version.
libblas-dev is already the newest version.
libblas-dev set to manually installed.
libc6-dev is already the newest version.
liblapack-dev is already the newest version.
liblapack-dev set to manually installed.
libatlas-base-dev is already the newest version.
E: Couldn't find package libatlas-dev
brian at brian2-desktop:~/CIMeC/conceptNeuro/pyMVPAdev$

Now that I've tried to install two versions, multiple times, both with python setup.py install, and with apt-get, I imagine there might be conflicts. Just to confirm, to "clean" the system again and start from scratch, I have been doing the following:
sudo apt-get purge python-scikits-learn python-scikits-learn-doc python-scikits-learn-lib
sudo easy_install -m scikits.learn
sudo rm -Rf /usr/local/lib/python2.6/dist-packages/scikits.learn-...

Is that correct?

best,

Brian

> Message: 2
> Date: Sat, 8 Jan 2011 21:05:53 -0600
> From: Benjamin Root
> Subject: Re: [SciPy-User] code for multidimensional scaling?
> To: SciPy Users List
> Message-ID:
> Content-Type: text/plain; charset="iso-8859-1"
>
> On Sat, Jan 8, 2011 at 10:46 AM, Brian Murphy wrote:
>
>> Hi Matthieu
>>
>> thanks very much. I downloaded your fork and installed it, but ran into
>> some problems. Installation seemed to work OK, but tests failed, and the
>> module wasn't available from inside ipython.
>>
>> I am running Ubuntu 9.10 (Karmic Koala), and Python 2.6. I followed the
>> instructions in the README and at
>> http://scikit-learn.sourceforge.net/install.html. I checked
>> prerequisites:
>>
>> sudo apt-get install python-dev python-numpy python-setuptools
>> python-scipy libatlas-dev g++
>>
>> libatlas-dev doesn't seem to be available for my distribution, but
>> libatlas3gf-base and libatlas-base-dev appear to be the appropriate (CPU
>> independent) versions.
>>
>> Then from the directory holding my git cloned copy of your fork, I ran:
>> sudo python setup.py install
>> ... which threw warnings, but no errors. Testing did fail though...
>> nosetests / sudo python setup.py test
>> ... with the following errors ...
>> ERROR: Failure: ImportError (No module named compression.barycenters)
>> FAIL: test_train (scikits.learn.tests.test_gmm.TestGMMWithTiedCovars)
>> FAIL: test_train (scikits.learn.tests.test_gmm.TestGMMWithFullCovars)
>> FAIL: Test BayesianRidge on diabetes
>> FAIL: Check the performance of hashing numpy arrays:
>>
>> If I try to use it from ipython I get:
>> In [1]: import scikits.learn
>> ---------------------------------------------------------------------------
>> ImportError                               Traceback (most recent call last)
>> /home/brian/CIMeC/conceptNeuro/pyMVPAdev/scikitLearn/scikit-learn/ in ()
>> ImportError: No module named learn
>>
>> Next, I also tried hacking a solution by installing your fork from
>> source over the NeuroDebian distributed version of scikits-learn (v4.2),
>> but that made no difference. I checked the mailing lists too, and can't
>> find any relevant posts.
>>
>> Any idea what I am doing wrong?
>>
>> any suggestions welcome. If this is a general problem to do with source
>> installation, and nothing to do with your fork, let me know and I'll
>> send a query to the mailing list.
>>
>> best,
>>
>> Brian
>>
> Brian, libatlas-dev is available for Ubuntu. Make sure that the Universe
> repo is turned on. There is also libatlas-base-dev that contains the bulk of
> the package.
>
> I suspect most of your issues are due to the lack of the libatlas packages.
>
> Ben Root

From yury at shurup.com  Mon Jan 10 05:22:19 2011
From: yury at shurup.com (Yury V. Zaytsev)
Date: Mon, 10 Jan 2011 11:22:19 +0100
Subject: [SciPy-User] installing from source on Ubuntu 9.10 (Karmic)
In-Reply-To: <4D2ADD86.3080501@unitn.it>
References: <4D2ADD86.3080501@unitn.it>
Message-ID: <1294654939.6771.44.camel@mypride>

On Mon, 2011-01-10 at 11:20 +0100, Brian Murphy wrote:

> Now that I've tried to install two versions, multiple times, both with
> python setup.py install, and with apt-get, I imagine there might be
> conflicts. Just to confirm, to "clean" the system again and start from
> scratch, I have been doing the following:
> sudo apt-get purge python-scikits-learn python-scikits-learn-doc
> python-scikits-learn-lib
> sudo easy_install -m scikits.learn
> sudo rm -Rf /usr/local/lib/python2.6/dist-packages/scikits.learn-...
>
> Is that correct?

FYI, an easy way to rebuild from source on Ubuntu is to let APT fetch the correct dependencies for you against which the original packages were built:

$ sudo apt-get build-dep python-numpy python-scipy python-scikits-learn

This way, you won't have to worry about installing the right libatlas development files etc.

-- 
Sincerely yours,
Yury V. Zaytsev

From cournape at gmail.com  Mon Jan 10 05:39:03 2011
From: cournape at gmail.com (David Cournapeau)
Date: Mon, 10 Jan 2011 19:39:03 +0900
Subject: [SciPy-User] Python and Eclipse
In-Reply-To: <1294645629.6771.20.camel@mypride>
References: <1294611268.15720.14.camel@mypride> <8C6F05827E2ECF5743D95CD2@192.168.1.112> <1294645629.6771.20.camel@mypride>
Message-ID:

On Mon, Jan 10, 2011 at 4:47 PM, Yury V. Zaytsev wrote:
> On Mon, 2011-01-10 at 10:15 +0900, David Cournapeau wrote:
>
>> Some languages like java have pretty good IDE, but what they offer is
>> not available for python (refactoring, for example - I have not seen
>> much of anything useful for advanced refactoring in python yet).
>
> That's one of the major reasons for me to stick to PyCharm at the
> moment.
> They have quite a bit of most common *working* refactorings
> available and also are very responsive towards bug reports and
> enhancement requests in this area.

Can you do things like method and object attributes rename reliably across a project? It has always seemed it would be near impossible to do this in a language like python, but if it worked, it would be a good reason to buy such a tool IMO.

cheers,

David

From sebastian.walter at gmail.com  Mon Jan 10 06:12:52 2011
From: sebastian.walter at gmail.com (Sebastian Walter)
Date: Mon, 10 Jan 2011 12:12:52 +0100
Subject: [SciPy-User] Need a fast cuda/opencl sundials nvector implementation
In-Reply-To: <20110110091317.GA24567@bacchus>
References: <20110109104438.GA23674@bacchus> <20110110091317.GA24567@bacchus>
Message-ID:

1) Why exactly are you asking this question on the scipy mailing list and not on the sundials mailing list?
2) I would have thought that the solution of the linear system (I - gamma*J)Z = R takes most of the computation time. But apparently I am missing something and I'm curious what it is.

Sebastian

On Mon, Jan 10, 2011 at 10:13 AM, Peter John Garrone wrote:
> Hi,
> Thanks for replying.
> The sundials distribution, for cvode, provides a "psolve" preconditioning callback. Parameters to this callback include gamma and R, and the user is expected to set a vector called Z to the solution of (I - gamma*J)Z = R, where J is the Jacobian. I am doing this with standard scipy gmres, and it is very fast, and I have no problems there. I have not measured it, merely observed that it solves a ~100000 state system in a fraction of a second.
>
> The problem seems to me to be that if I add up all the cpu time occupied with the four sundials-cvode callbacks, that is psetup, psolve, jtimes, and the function itself, it is only a small fraction, a few percent, of the total cpu time. So optimising these callbacks is pointless. This leaves the vector operations, which the user can replace. I assume that they are implemented fairly optimally, but not exploiting a GPU.
>
> So what I am looking for is an implementation of the sundials NVector_serial construct using CUDA or OpenCL.
>
> If no such implementation is available, I might have a go at implementing it, after verifying my theory that it is the vector ops taking all the cpu. However if something else is freely available, I would like to try it.
>
> Peter
>
> On Sun, Jan 09, 2011 at 12:21:17PM +0100, Sebastian Walter wrote:
>> Hello Peter,
>> so you are saying that you want to replace the built-in functionality
>> to solve large sparse linear systems by your own code which exploits
>> the structure of your Jacobian?
>>
>> Sebastian
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From yury at shurup.com  Mon Jan 10 07:28:38 2011
From: yury at shurup.com (Yury V. Zaytsev)
Date: Mon, 10 Jan 2011 13:28:38 +0100
Subject: [SciPy-User] Python and Eclipse
In-Reply-To:
References: <1294611268.15720.14.camel@mypride> <8C6F05827E2ECF5743D95CD2@192.168.1.112> <1294645629.6771.20.camel@mypride>
Message-ID: <1294662518.6771.57.camel@mypride>

On Mon, 2011-01-10 at 19:39 +0900, David Cournapeau wrote:

> Can you do things like method and object attributes rename reliably
> across a project? It has always seemed it would be near impossible to
> do this in a language like python, but if it worked, it would be a
> good reason to buy such a tool IMO.
I have to agree that it's very complicated in general, but for me it worked better than I would have done manually with search and replace. I mostly use other refactorings, i.e. introduce a variable, replace variable with expression, rename private class attributes etc.

You shouldn't trust me on whether it is going to work in your case or not, just try it out on your project whenever you've got some spare time and figure out whether it works for you.

-- 
Sincerely yours,
Yury V. Zaytsev

From JRadinger at gmx.at  Mon Jan 10 08:39:31 2011
From: JRadinger at gmx.at (Johannes Radinger)
Date: Mon, 10 Jan 2011 14:39:31 +0100
Subject: [SciPy-User] installing/compiling scipy module on windows, mingw
In-Reply-To: <1294662518.6771.57.camel@mypride>
References: <1294611268.15720.14.camel@mypride> <8C6F05827E2ECF5743D95CD2@192.168.1.112> <1294645629.6771.20.camel@mypride> <1294662518.6771.57.camel@mypride>
Message-ID: <20110110133931.118620@gmx.net>

hello...

I need some support from you: I want to install the scikits.image (http://scikits.appspot.com/image) module on my Windows 7 machine. As scikits.image doesn't come with a Windows installer I have to compile it myself (my first time). What I got so far is:

1) I got the scikits.image folder downloaded (including the setup.py) to my documents folder
2) I got minGW including GCC installed (in C:/minGW) and set the environmental variable for the PATH in windows
3) I installed Cython (newest version)

And how should I proceed now?? How do I check if everything is set correctly and how should I now compile?

pls help me! thank you a lot

/J

-- 
Safer, faster and simpler. The latest Internet browsers - download now for free! http://portal.gmx.net/de/go/atbrowser

From jeanluc.menut at free.fr  Mon Jan 10 08:53:52 2011
From: jeanluc.menut at free.fr (Jean-Luc Menut)
Date: Mon, 10 Jan 2011 14:53:52 +0100
Subject: [SciPy-User] installing/compiling scipy module on windows, mingw
In-Reply-To: <20110110133931.118620@gmx.net>
References: <1294611268.15720.14.camel@mypride> <8C6F05827E2ECF5743D95CD2@192.168.1.112> <1294645629.6771.20.camel@mypride> <1294662518.6771.57.camel@mypride> <20110110133931.118620@gmx.net>
Message-ID: <4D2B0F70.10509@free.fr>

Hi

> And how should I proceed now?? How do I check if everything is set correctly and how should I now compile?

I never compiled anything on windows but according to the documentation:

> The SciKit may be installed globally using
> python setup.py install
> or locally using
> python setup.py install --prefix=${HOME}
> If you prefer, you can use it without installing, by simply adding
> this path to your PYTHONPATH variable and compiling the extensions::
>   python setup.py build_ext -i

So you have to try one of the command lines above (you'll probably need administrator's rights for the first one), and look if it works or not.

From g.plantageneto at runbox.com  Mon Jan 10 08:59:28 2011
From: g.plantageneto at runbox.com (g.plantageneto at runbox.com)
Date: Mon, 10 Jan 2011 14:59:28 +0100 (CET)
Subject: [SciPy-User] Speed-up simple function
Message-ID:

Hi everybody,

I have some functions in python that perform simple computations (they compute the values of some long polynomials). Since I apply them to rather large arrays (10^5 elements) these functions slow down the script quite a bit. Is there a quick and simple way to speed up these functions?
Thanks,
Andrea

From JRadinger at gmx.at  Mon Jan 10 09:00:01 2011
From: JRadinger at gmx.at (Johannes Radinger)
Date: Mon, 10 Jan 2011 15:00:01 +0100
Subject: [SciPy-User] installing/compiling scipy module on windows, mingw
In-Reply-To: <4D2B0F70.10509@free.fr>
References: <1294611268.15720.14.camel@mypride> <8C6F05827E2ECF5743D95CD2@192.168.1.112> <1294645629.6771.20.camel@mypride> <1294662518.6771.57.camel@mypride> <20110110133931.118620@gmx.net> <4D2B0F70.10509@free.fr>
Message-ID: <20110110140001.116760@gmx.net>

That's where I got stuck:

I just typed 'python setup.py install' into my windows cmd, which I was running as admin. It returns that 'python' isn't recognized...

-------- Original Message --------
> Date: Mon, 10 Jan 2011 14:53:52 +0100
> From: Jean-Luc Menut
> To: SciPy Users List
> Subject: Re: [SciPy-User] installing/compiling scipy module on windows, mingw
>
> Hi
>
> > And how should I proceed now?? How do I check if everything is set
> > correctly and how should I now compile?
>
> I never compiled anything on windows but according to the documentation:
>
> > The SciKit may be installed globally using
> > python setup.py install
> > or locally using
> > python setup.py install --prefix=${HOME}
> > If you prefer, you can use it without installing, by simply adding
> > this path to your PYTHONPATH variable and compiling the extensions::
> >   python setup.py build_ext -i
>
> So you have to try one of the command lines above (you'll probably need
> administrator's rights for the first one), and look if it works or not.
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

-- 
Safer, faster and simpler. The latest Internet browsers - download now for free! http://portal.gmx.net/de/go/atbrowser

From nwerneck at gmail.com  Mon Jan 10 09:20:18 2011
From: nwerneck at gmail.com (Nicolau Werneck)
Date: Mon, 10 Jan 2011 12:20:18 -0200
Subject: [SciPy-User] Speed-up simple function
In-Reply-To:
References:
Message-ID: <20110110142018.GA23377@spirit>

I would suggest writing a small piece of code using either scipy weave or Cython. I have been using Cython a lot, and it really pays off. But if you only have a simple expression such as a polynomial to calculate, and if something simpler such as weave works for you, it might be the best choice.

++nic

On Mon, Jan 10, 2011 at 02:59:28PM +0100, g.plantageneto at runbox.com wrote:
>
> Hi everybody,
>
> I have some functions in python that perform simple computations (they compute the values of some long polynomials). Since I apply them to rather large arrays (10^5 elements) these functions slow down the script quite a bit. Is there a quick and simple way to speed up these functions?
>
> Thanks,
> Andrea
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

-- 
Nicolau Werneck          C3CF E29F 5350 5DAA 3705
http://www.lti.pcs.usp.br/~nwerneck   7B9E D6C4 37BB DA64 6F15
Linux user #460716
"I never claimed infallibility. You have to leave that to experts in that field like the Pope." -- Edsger Dijkstra
From johnl at cs.wisc.edu  Mon Jan 10 09:25:48 2011
From: johnl at cs.wisc.edu (J. David Lee)
Date: Mon, 10 Jan 2011 08:25:48 -0600
Subject: [SciPy-User] Speed-up simple function
In-Reply-To:
References:
Message-ID: <4D2B16EC.7000603@cs.wisc.edu>

Andrea,

Here is an example of scipy.weave that I generally use as a starting point:

from scipy.weave import inline
from scipy.weave.converters import blitz
from pylab import *

def foo(arr, c):
    ret = arr.copy()
    cvars = ['ret', 'c']
    # Using blitz converter gives the following for an array arr:
    #   Narr[0] is length of first dimension
    #   Narr[1] is length of second dimension
    #   arr(a, b) is the element arr[a][b]
    code = """
    int i = Nret[0];
    while(i--) {
        ret(i) += c;
    }
    """
    inline(code, cvars, type_converters = blitz)
    return ret

a = arange(5)
foo(a, 1)
print a

I generally create all my python objects outside the code so I don't screw up any reference counting. One caveat is that you need to run the code in a single process the first time around or you can end up with some strange errors.

David

On 01/10/2011 07:59 AM, g.plantageneto at runbox.com wrote:
> Hi everybody,
>
> I have some functions in python that perform simple computations (they compute the values of some long polynomials). Since I apply them to rather large arrays (10^5 elements) these functions slow down the script quite a bit. Is there a quick and simple way to speed up these functions?
>
> Thanks,
> Andrea
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From jeanluc.menut at free.fr  Mon Jan 10 09:27:56 2011
From: jeanluc.menut at free.fr (Jean-Luc Menut)
Date: Mon, 10 Jan 2011 15:27:56 +0100
Subject: [SciPy-User] installing/compiling scipy module on windows, mingw
In-Reply-To: <20110110140001.116760@gmx.net>
References: <1294611268.15720.14.camel@mypride> <8C6F05827E2ECF5743D95CD2@192.168.1.112> <1294645629.6771.20.camel@mypride> <1294662518.6771.57.camel@mypride> <20110110133931.118620@gmx.net> <4D2B0F70.10509@free.fr> <20110110140001.116760@gmx.net>
Message-ID: <4D2B176C.8010202@free.fr>

Le 10/01/2011 15:00, Johannes Radinger a écrit :
> That's where I got stuck:
>
> I just typed 'python setup.py install' into my windows cmd, which I was
> running as admin. It returns that 'python' isn't recognized...

that means that python is not in your system path: go to Computer -> system properties -> advanced system settings -> advanced -> Environment variables. Select Path and click on edit. You need to add the path to your python executable (C:\Python26 in my case). Don't forget to add ";" as a separator.

From charlesr.harris at gmail.com  Mon Jan 10 09:32:34 2011
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Mon, 10 Jan 2011 07:32:34 -0700
Subject: [SciPy-User] Speed-up simple function
In-Reply-To:
References:
Message-ID:

On Mon, Jan 10, 2011 at 6:59 AM, wrote:

> Hi everybody,
>
> I have some functions in python that perform simple computations (they
> compute the values of some long polynomials). Since I apply them to rather
> large arrays (10^5 elements) these functions slow down the script quite a
> bit. Is there a quick and simple way to speed up these functions?
>

Can you be more specific? If it's just one polynomial you can compute it all in one go. If it is a different polynomial for each array element you probably need to use Cython or some such. And what do you mean by "long" polynomial?

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From emanuele at relativita.com  Mon Jan 10 09:33:13 2011
From: emanuele at relativita.com (Emanuele Olivetti)
Date: Mon, 10 Jan 2011 15:33:13 +0100
Subject: [SciPy-User] Speed-up simple function
In-Reply-To:
References:
Message-ID: <4D2B18A9.1000409@relativita.com>

On 01/10/2011 02:59 PM, g.plantageneto at runbox.com wrote:
> Hi everybody,
>
> I have some functions in python that perform simple computations (they compute the values of some long polynomials). Since I apply them to rather large arrays (10^5 elements) these functions slow down the script quite a bit. Is there a quick and simple way to speed up these functions?
>

Hi,

Could you give us some more details? If the issue is the large number of polynomial evaluations I believe that it is already fast in numpy. Eg., 10^5 evaluations of a pretty long polynomial (100 terms):

In [18]: %timeit np.polyval(np.linspace(-1,1,100),np.linspace(0,5,100000))
1 loops, best of 3: 360 ms per loop

Best,

Emanuele

From g.plantageneto at runbox.com  Mon Jan 10 09:45:28 2011
From: g.plantageneto at runbox.com (g.plantageneto at runbox.com)
Date: Mon, 10 Jan 2011 15:45:28 +0100 (CET)
Subject: [SciPy-User] Speed-up simple function
In-Reply-To: <4D2B18A9.1000409@relativita.com>
Message-ID:

Thanks for the quick answers. To be more specific, the slow part is something like this (see comments in the code):

for tstep in xrange(0,end):   # I need this cycle otherwise I run out of memory
    VAR1 = file.variables['VAR1'][tstep,:,:,:]   # Read from a NetCDF file
    VAR2 = file.variables['VAR2'][tstep,:,:,:]   # Read another variable from the same NetCDF file
    # Use the data read above in two functions. I also do some reshaping to have the correct
    # input for the function. The polynomials computed in the functions are of order 10 at most.
    COMP1 = FUNC1(VAR1,VAR2,np.tile(5000,VAR1.shape),np.reshape(np.repeat(SOMETHING,nrep),VAR1.shape))
    COMP2 = FUNC2(VAR1,COMP1,np.tile(5000,VAR1.shape))
    # Compute an average on the output of previous computations
    RESULT[tstep] = np.average(COMP2,weights=W)

Thanks for any suggestion,
Andrea

----- Start Original Message -----
Sent: Mon, 10 Jan 2011 15:33:13 +0100
From: Emanuele Olivetti
To: g.plantageneto at runbox.com, SciPy Users List
Subject: Re: [SciPy-User] Speed-up simple function

> On 01/10/2011 02:59 PM, g.plantageneto at runbox.com wrote:
> > Hi everybody,
> >
> > I have some functions in python that perform simple computations (they compute the values of some long polynomials). Since I apply them to rather large arrays (10^5 elements) these functions slow down the script quite a bit. Is there a quick and simple way to speed up these functions?
> >
>
> Hi,
>
> Could you give us some more details? If the issue is the large number of polynomial
> evaluations I believe that it is already fast in numpy. Eg., 10^5 evaluations
> of a pretty long polynomial (100 terms):
>
> In [18]: %timeit np.polyval(np.linspace(-1,1,100),np.linspace(0,5,100000))
> 1 loops, best of 3: 360 ms per loop
>
> Best,
>
> Emanuele

----- End Original Message -----

From charlesr.harris at gmail.com  Mon Jan 10 10:21:00 2011
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Mon, 10 Jan 2011 08:21:00 -0700
Subject: [SciPy-User] Speed-up simple function
In-Reply-To:
References: <4D2B18A9.1000409@relativita.com>
Message-ID:

On Mon, Jan 10, 2011 at 7:45 AM, wrote:

> Thanks for the quick answers.
> To be more specific, the slow part is something like this (see comments in the code):
>
> for tstep in xrange(0,end):   # I need this cycle otherwise I run out of memory
>     VAR1 = file.variables['VAR1'][tstep,:,:,:]   # Read from a NetCDF file
>     VAR2 = file.variables['VAR2'][tstep,:,:,:]   # Read another variable from the same NetCDF file
>     # Use the data read above in two functions. I also do some reshaping to have the correct
>     # input for the function. The polynomials computed in the functions are of order 10 at most.
>     COMP1 = FUNC1(VAR1,VAR2,np.tile(5000,VAR1.shape),np.reshape(np.repeat(SOMETHING,nrep),VAR1.shape))
>     COMP2 = FUNC2(VAR1,COMP1,np.tile(5000,VAR1.shape))
>     # Compute an average on the output of previous computations
>     RESULT[tstep] = np.average(COMP2,weights=W)
>
As an example (they are all rather similar, just polynomials), one of the functions is: ########################################################### def _dens0(S,T): """Density of seawater at zero pressure""" # --- Define constants --- a0 = 999.842594 a1 = 6.793952e-2 a2 = -9.095290e-3 a3 = 1.001685e-4 a4 = -1.120083e-6 a5 = 6.536332e-9 b0 = 8.24493e-1 b1 = -4.0899e-3 b2 = 7.6438e-5 b3 = -8.2467e-7 b4 = 5.3875e-9 c0 = -5.72466e-3 c1 = 1.0227e-4 c2 = -1.6546e-6 d0 = 4.8314e-4 # --- Computations --- # Density of pure water SMOW = a0 + (a1 + (a2 + (a3 + (a4 + a5*T)*T)*T)*T)*T # More temperature polynomials RB = b0 + (b1 + (b2 + (b3 + b4*T)*T)*T)*T RC = c0 + (c1 + c2*T)*T return SMOW + RB*S + RC*(S**1.5) + d0*S*S #################################################### From JRadinger at gmx.at Mon Jan 10 11:11:00 2011 From: JRadinger at gmx.at (Johannes Radinger) Date: Mon, 10 Jan 2011 17:11:00 +0100 Subject: [SciPy-User] installing/compiling scipy module on windows, mingw In-Reply-To: <4D2B176C.8010202@free.fr> References: <1294611268.15720.14.camel@mypride> <8C6F05827E2ECF5743D95CD2@192.168.1.112> <1294645629.6771.20.camel@mypride> <1294662518.6771.57.camel@mypride> <20110110133931.118620@gmx.net> <4D2B0F70.10509@free.fr> <20110110140001.116760@gmx.net> <4D2B176C.8010202@free.fr> Message-ID: <20110110161100.66590@gmx.net> Oh yes...there was a spelling mistake in the PATH settings for pyhton, that is working know but I get now the error message: file "setup.py", line 23 in import setuptools Import Error: No module named setuptools does that mean I have first to install setuptools? or is something wrong with the module I want to compile? thanks /j -------- Original-Nachricht -------- > Datum: Mon, 10 Jan 2011 15:27:56 +0100 > Von: Jean-Luc Menut > An: SciPy Users List > Betreff: Re: [SciPy-User] installing/compiling scipy module on windows, mingw > Le 10/01/2011 15:00, Johannes Radinger a ?crit : > > That's were I got stucked: > > > > I just typed 'python setup.py install' into my windows cmd, which I was > running as admin. It returns that 'python' isn't recognized... > > that means that python is not in your system path : go to > Computer->system properties -> advanced system settings -> advanced -> > Environment variables. Select Path and click on edit. You need to add > the path to your python executable (C:\Python26 in my case). Don't > forget to add ";" as a separator. > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -- Sicherer, schneller und einfacher. Die aktuellen Internet-Browser - jetzt kostenlos herunterladen! http://portal.gmx.net/de/go/atbrowser From nwerneck at gmail.com Mon Jan 10 11:14:46 2011 From: nwerneck at gmail.com (Nicolau Werneck) Date: Mon, 10 Jan 2011 14:14:46 -0200 Subject: [SciPy-User] Speed-up simple function In-Reply-To: References: Message-ID: <20110110161445.GA28284@spirit> Hi. I tried this function in my Atom N450 computer, with two vectors with 100.000 numbers, and it took only 90ms. Do you really need it faster than this? Do you have to make these calculations to a lot of different 100k long vectors, or something like that?... If you really need to speed that up, you just have to implement this _dens0 function using weave, Cython or something like that. I can give you some more info about Cython, write me at my address if you want. 
++nic On Mon, Jan 10, 2011 at 04:47:44PM +0100, g.plantageneto at runbox.com wrote: > > > > > > This looks terribly over complicated. What are you trying to do and where is > > the polynomial? > > > > I see, sorry, I'll try to put it more clearly. My computation is something like this: for each time step, I compute the value of some polynomial on all array elements, then I average the results to obtain a single number: > > ############################################### > > for tstep in xrange(0,end): > VAR1 = Read from some NetCDF file (~10^5 elements) > VAR2 = Read from some NetCDF file (~10^5 elements) > COMP1 = FUNC1(VAR1,VAR2) > COMP2 = FUNC2(VAR1,COMP1) > RESULT[tstep] = np.average(COMP2,weights=W) > > ############################################### > > I checked and the bottleneck is really in the computations done by FUNC1 and FUNC2. > The polynomials are in the functions FUNC1 and FUNC2 (from python seawater library, a library that provides some standard functions for seawater properties). As an example (they are all rather similar, just polynomials), one of the functions is: > > ########################################################### > > def _dens0(S,T): > """Density of seawater at zero pressure""" > > # --- Define constants --- > a0 = 999.842594 > a1 = 6.793952e-2 > a2 = -9.095290e-3 > a3 = 1.001685e-4 > a4 = -1.120083e-6 > a5 = 6.536332e-9 > > b0 = 8.24493e-1 > b1 = -4.0899e-3 > b2 = 7.6438e-5 > b3 = -8.2467e-7 > b4 = 5.3875e-9 > > c0 = -5.72466e-3 > c1 = 1.0227e-4 > c2 = -1.6546e-6 > > d0 = 4.8314e-4 > > # --- Computations --- > # Density of pure water > SMOW = a0 + (a1 + (a2 + (a3 + (a4 + a5*T)*T)*T)*T)*T > > # More temperature polynomials > RB = b0 + (b1 + (b2 + (b3 + b4*T)*T)*T)*T > RC = c0 + (c1 + c2*T)*T > > return SMOW + RB*S + RC*(S**1.5) + d0*S*S > > #################################################### > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -- Nicolau Werneck C3CF E29F 5350 5DAA 3705 http://www.lti.pcs.usp.br/~nwerneck 7B9E D6C4 37BB DA64 6F15 Linux user #460716 "A language that doesn't affect the way you think about programming, is not worth knowing." -- Alan J. Perlis From jeanluc.menut at free.fr Mon Jan 10 11:19:30 2011 From: jeanluc.menut at free.fr (Jean-Luc Menut) Date: Mon, 10 Jan 2011 17:19:30 +0100 Subject: [SciPy-User] installing/compiling scipy module on windows, mingw In-Reply-To: <20110110161100.66590@gmx.net> References: <1294611268.15720.14.camel@mypride> <8C6F05827E2ECF5743D95CD2@192.168.1.112> <1294645629.6771.20.camel@mypride> <1294662518.6771.57.camel@mypride> <20110110133931.118620@gmx.net> <4D2B0F70.10509@free.fr> <20110110140001.116760@gmx.net> <4D2B176C.8010202@free.fr> <20110110161100.66590@gmx.net> Message-ID: <4D2B3192.9000600@free.fr> > does that mean I have first to install setuptools? 
From JRadinger at gmx.at  Mon Jan 10 11:11:00 2011
From: JRadinger at gmx.at (Johannes Radinger)
Date: Mon, 10 Jan 2011 17:11:00 +0100
Subject: [SciPy-User] installing/compiling scipy module on windows, mingw
In-Reply-To: <4D2B176C.8010202@free.fr>
References: <1294611268.15720.14.camel@mypride> <8C6F05827E2ECF5743D95CD2@192.168.1.112> <1294645629.6771.20.camel@mypride> <1294662518.6771.57.camel@mypride> <20110110133931.118620@gmx.net> <4D2B0F70.10509@free.fr> <20110110140001.116760@gmx.net> <4D2B176C.8010202@free.fr>
Message-ID: <20110110161100.66590@gmx.net>

Oh yes... there was a spelling mistake in the PATH settings for python, that is working now but I get the error message:

File "setup.py", line 23, in
import setuptools
ImportError: No module named setuptools

does that mean I first have to install setuptools? or is something wrong with the module I want to compile?

thanks

/j

-------- Original Message --------
> Date: Mon, 10 Jan 2011 15:27:56 +0100
> From: Jean-Luc Menut
> To: SciPy Users List
> Subject: Re: [SciPy-User] installing/compiling scipy module on windows, mingw
>
> Le 10/01/2011 15:00, Johannes Radinger a écrit :
> > That's where I got stuck:
> >
> > I just typed 'python setup.py install' into my windows cmd, which I was
> > running as admin. It returns that 'python' isn't recognized...
>
> that means that python is not in your system path: go to
> Computer -> system properties -> advanced system settings -> advanced ->
> Environment variables. Select Path and click on edit. You need to add
> the path to your python executable (C:\Python26 in my case). Don't
> forget to add ";" as a separator.
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

-- 
Safer, faster and simpler. The latest Internet browsers - download now for free! http://portal.gmx.net/de/go/atbrowser

From jeanluc.menut at free.fr  Mon Jan 10 11:19:30 2011
From: jeanluc.menut at free.fr (Jean-Luc Menut)
Date: Mon, 10 Jan 2011 17:19:30 +0100
Subject: [SciPy-User] installing/compiling scipy module on windows, mingw
In-Reply-To: <20110110161100.66590@gmx.net>
References: <1294611268.15720.14.camel@mypride> <8C6F05827E2ECF5743D95CD2@192.168.1.112> <1294645629.6771.20.camel@mypride> <1294662518.6771.57.camel@mypride> <20110110133931.118620@gmx.net> <4D2B0F70.10509@free.fr> <20110110140001.116760@gmx.net> <4D2B176C.8010202@free.fr> <20110110161100.66590@gmx.net>
Message-ID: <4D2B3192.9000600@free.fr>

> does that mean I first have to install setuptools?

Yes, but check first if it's not installed in the python's Scripts subdirectory and if this directory is in the path.

http://pypi.python.org/pypi/setuptools

From gael.varoquaux at normalesup.org  Mon Jan 10 11:20:04 2011
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Mon, 10 Jan 2011 17:20:04 +0100
Subject: [SciPy-User] installing/compiling scipy module on windows, mingw
In-Reply-To: <20110110161100.66590@gmx.net>
References: <8C6F05827E2ECF5743D95CD2@192.168.1.112> <1294645629.6771.20.camel@mypride> <1294662518.6771.57.camel@mypride> <20110110133931.118620@gmx.net> <4D2B0F70.10509@free.fr> <20110110140001.116760@gmx.net> <4D2B176C.8010202@free.fr> <20110110161100.66590@gmx.net>
Message-ID: <20110110162004.GC22378@phare.normalesup.org>

On Mon, Jan 10, 2011 at 05:11:00PM +0100, Johannes Radinger wrote:
> Oh yes... there was a spelling mistake in the PATH settings for python,
> that is working now but I get the error message:
>
> File "setup.py", line 23, in
> import setuptools
> ImportError: No module named setuptools
>
> does that mean I first have to install setuptools?

It does indeed.

HTH,

Gael

From faltet at pytables.org  Mon Jan 10 11:52:33 2011
From: faltet at pytables.org (Francesc Alted)
Date: Mon, 10 Jan 2011 17:52:33 +0100
Subject: [SciPy-User] Speed-up simple function
In-Reply-To:
References:
Message-ID: <201101101752.33150.faltet@pytables.org>

A Monday 10 January 2011 16:47:44 g.plantageneto at runbox.com escrigué:
> > This looks terribly over complicated. What are you trying to do and
> > where is the polynomial?
>
> I see, sorry, I'll try to put it more clearly. My computation is
> something like this: for each time step, I compute the value of some
> polynomial on all array elements, then I average the results to
> obtain a single number:
[clip]

As others have said, Cython is a good option. Perhaps even easier would be Numexpr, which lets you get nice speed-ups on complex expressions, and as a plus, it lets you use any number of cores that your machine has.

I've made a small demo (attached) on how Numexpr works based on your polynomial. Here are my results of running it on a 6-core machine:

Computing with NumPy with 10^5 elements:
time spent: 0.0169
Computing with Numexpr with 10^5 elements:
time spent for 1 threads: 0.0052  speed-up: 3.22
time spent for 2 threads: 0.0028  speed-up: 6.11
time spent for 3 threads: 0.0022  speed-up: 7.82
time spent for 4 threads: 0.0016  speed-up: 10.26
time spent for 5 threads: 0.0014  speed-up: 12.31
time spent for 6 threads: 0.0013  speed-up: 13.35

Cheers,

-- 
Francesc Alted
-------------- next part --------------
A non-text attachment was scrubbed...
Name: test.py
Type: text/x-python
Size: 1961 bytes
Desc: not available
URL:
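To experiment with the thread scaling shown above, Numexpr's pool size can be pinned explicitly with set_num_threads (available in the multi-threaded Numexpr releases; a short sketch, with T as a hypothetical temperature array):

import numpy as np
import numexpr as ne

T = np.linspace(0.0, 30.0, 100000)
ne.set_num_threads(2)   # evaluate() below then uses two threads
print ne.evaluate("(6.793952e-2 - 9.095290e-3*T)*T")[:3]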
No, it starts compiling and then it stops with the message that cython
isn't recognized... I copied the path of cython to PATH but the problem
still exists... how can I test if cython is working and if PATH is set
correctly? Or what do I have to set in the environment settings?

/j

-------- Original Message --------
> Date: Mon, 10 Jan 2011 17:20:04 +0100
> From: Gael Varoquaux
> To: SciPy Users List
> Subject: Re: [SciPy-User] installing/compiling scipy module on windows, mingw
>
> On Mon, Jan 10, 2011 at 05:11:00PM +0100, Johannes Radinger wrote:
> > Oh yes...there was a spelling mistake in the PATH settings for python,
> > that is working now but I get now the error message:
>
> > file "setup.py", line 23, in <module>
> >     import setuptools
> > ImportError: No module named setuptools
>
> > does that mean I have first to install setuptools?
>
> It does indeed.
>
> HTH,
>
> Gael
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From gael.varoquaux at normalesup.org Mon Jan 10 12:14:31 2011
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Mon, 10 Jan 2011 18:14:31 +0100
Subject: [SciPy-User] installing/compiling scipy module on windows, mingw
In-Reply-To: <20110110171234.280040@gmx.net>
References: <1294645629.6771.20.camel@mypride> <1294662518.6771.57.camel@mypride> <20110110133931.118620@gmx.net> <4D2B0F70.10509@free.fr> <20110110140001.116760@gmx.net> <4D2B176C.8010202@free.fr> <20110110161100.66590@gmx.net> <20110110162004.GC22378@phare.normalesup.org> <20110110171234.280040@gmx.net>
Message-ID: <20110110171431.GC6938@phare.normalesup.org>

On Mon, Jan 10, 2011 at 06:12:34PM +0100, Johannes Radinger wrote:
> No, it starts compiling and then it stops with the message that
> cython isn't recognized...

Darn, cython shouldn't be needed to build the scikit. Looks like we let
something slip through during the last release. Could you please copy the
exact error message here, and we'll do a bug fix release ASAP.

Gaël

From batchelordb at ornl.gov Mon Jan 10 12:19:02 2011
From: batchelordb at ornl.gov (Don Batchelor)
Date: Mon, 10 Jan 2011 12:19:02 -0500
Subject: [SciPy-User] updating netcdf file in scipy.io.netcdf
Message-ID:

How does one read in a netcdf file, modify some variables, and write
out the updated file using scipy.io.netcdf? In the old
Scientific.IO.NetCDF there is the file open mode 'r+' that allows one
both to read and write to a file. I don't see an equivalent
functionality in scipy.io.netcdf.

I have tried just opening the file with mode 'r', modifying some
netcdf variables, and doing a file close to see what would happen.
But when I try to change the netcdf variables I get "RuntimeError:
array is not writeable". See below. It never reaches the 'close'
statement. It must be possible to do this. Otherwise how could one
make use of a record variable?

Traceback (most recent call last):
  File "netcdf_python_example.py", line 61, in <module>
    Te[:] = radius[:]
  File "/Users/dbh/Python_scripts/netcdf.py", line 677, in __setitem__
    self.data[index] = data
RuntimeError: array is not writeable

--
Donald B. Batchelor
Plasma Theory Group
Phone: (865) 574-1288
Fax: (865) 576-7926
E-mail: batchelordb at ornl.gov
Oak Ridge National Laboratory
Fusion Energy Division
P. O.
Box 2008
Oak Ridge, TN 37831-6169

From nwerneck at gmail.com Mon Jan 10 12:22:29 2011
From: nwerneck at gmail.com (Nicolau Werneck)
Date: Mon, 10 Jan 2011 15:22:29 -0200
Subject: [SciPy-User] Speed-up simple function
In-Reply-To: <201101101752.33150.faltet@pytables.org>
References: <201101101752.33150.faltet@pytables.org>
Message-ID: <20110110172229.GA6544@spirit>

That is one excellent suggestion, using multiple cores is probably the
best thing to do. I have tried a basic Cython implementation (because
it's such a basic and interesting problem), but I couldn't yet reach a
speedup of even 2.0.

The big villain in the expression is the 'S**1.5'. Removing that from
my Cython code gives a 12x speedup. Avoiding that exponentiation can
probably be beneficial to other techniques too...

I have recently written some code that uses the rsqrt SSE instruction
for approximate square roots. This is very useful... Wouldn't it be
nice to have that in scipy?

++nic

On Mon, Jan 10, 2011 at 05:52:33PM +0100, Francesc Alted wrote:
> A Monday 10 January 2011 16:47:44 g.plantageneto at runbox.com escrigué:
> > > This looks terribly over complicated. What are you trying to do and
> > > where is the polynomial?
> >
> > I see, sorry, I'll try to put it more clearly. My computation is
> > something like this: for each time step, I compute the value of some
> > polynomial on all array elements, then I average the results to
> > obtain a single number:
> [clip]
>
> As others have said, Cython is a good option. Perhaps even easier would
> be Numexpr, which lets you get nice speed-ups on complex expressions,
> and, as a plus, lets you use any number of cores that your machine
> has.
>
> I've made a small demo (attached) on how Numexpr works based on your
> polynomial. Here are my results of running it on a 6-core machine:
>
> Computing with NumPy with 10^5 elements:
> time spent: 0.0169
> Computing with Numexpr with 10^5 elements:
> time spent for 1 threads: 0.0052  speed-up: 3.22
> time spent for 2 threads: 0.0028  speed-up: 6.11
> time spent for 3 threads: 0.0022  speed-up: 7.82
> time spent for 4 threads: 0.0016  speed-up: 10.26
> time spent for 5 threads: 0.0014  speed-up: 12.31
> time spent for 6 threads: 0.0013  speed-up: 13.35
>
> Cheers,
>
> --
> Francesc Alted

> import math
> import numpy as np
> from numpy.testing import assert_array_almost_equal
> import numexpr as ne
> from time import time
>
> N = 1e5  # number of elements in arrays
> NTIMES = 100
>
> def _dens0(S,T, kernel):
>     """Density of seawater at zero pressure"""
>
>     # --- Define constants ---
>     a0 = 999.842594
>     a1 = 6.793952e-2
>     a2 = -9.095290e-3
>     a3 = 1.001685e-4
>     a4 = -1.120083e-6
>     a5 = 6.536332e-9
>
>     b0 = 8.24493e-1
>     b1 = -4.0899e-3
>     b2 = 7.6438e-5
>     b3 = -8.2467e-7
>     b4 = 5.3875e-9
>
>     c0 = -5.72466e-3
>     c1 = 1.0227e-4
>     c2 = -1.6546e-6
>
>     d0 = 4.8314e-4
>
>     # --- Computations ---
>     # Density of pure water
>     if kernel == "numpy":
>         SMOW = a0 + (a1 + (a2 + (a3 + (a4 + a5*T)*T)*T)*T)*T
>         # More temperature polynomials
>         RB = b0 + (b1 + (b2 + (b3 + b4*T)*T)*T)*T
>         RC = c0 + (c1 + c2*T)*T
>         result = SMOW + RB*S + RC*(S**1.5) + d0*S*S
>     elif kernel == "numexpr":
>         SMOW = "a0 + (a1 + (a2 + (a3 + (a4 + a5*T)*T)*T)*T)*T"
>         # More temperature polynomials
>         RB = "b0 + (b1 + (b2 + (b3 + b4*T)*T)*T)*T"
>         RC = "c0 + (c1 + c2*T)*T"
>         poly = "(%s) + (%s)*S + (%s)*(S**1.5) + (d0*S*S)" % (SMOW, RB, RC)
>         result = ne.evaluate(poly)
>     else:
>         raise ValueError, "Unsupported kernel"
>
>     return result
>
> S = np.random.rand(N)
> R = np.random.rand(N)
>
> print "Computing with NumPy:"
> t0 = time()
> for i in range(NTIMES):
>     r1 = _dens0(S, R, "numpy")
> tnp = (time()-t0) / NTIMES
> print "time spent: %.4f" % tnp
>
> print "Computing with Numexpr with 10^%d elements:" % int(math.log10(N))
> for nthreads in range(1, ne.ncores+1):
>     ne.set_num_threads(nthreads)
>     t0 = time()
>     for i in range(NTIMES):
>         r2 = _dens0(S, R, "numexpr")
>     tne = (time()-t0) / NTIMES
>     print "time spent for %d threads: %.4f" % (nthreads, tne),
>     print "speed-up: %.2f" % (tnp / tne)
>
> assert_array_almost_equal(r1, r2, 15, "Arrays are not equal")
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

--
Nicolau Werneck  C3CF E29F 5350 5DAA 3705
http://www.lti.pcs.usp.br/~nwerneck  7B9E D6C4 37BB DA64 6F15
Linux user #460716
"C++ is history repeated as tragedy. Java is history repeated as farce."
-- Scott McKay

From JRadinger at gmx.at Mon Jan 10 12:23:43 2011
From: JRadinger at gmx.at (Johannes Radinger)
Date: Mon, 10 Jan 2011 18:23:43 +0100
Subject: [SciPy-User] installing/compiling scipy module on windows, mingw
In-Reply-To: <20110110171431.GC6938@phare.normalesup.org>
References: <1294645629.6771.20.camel@mypride> <1294662518.6771.57.camel@mypride> <20110110133931.118620@gmx.net> <4D2B0F70.10509@free.fr> <20110110140001.116760@gmx.net> <4D2B176C.8010202@free.fr> <20110110161100.66590@gmx.net> <20110110162004.GC22378@phare.normalesup.org> <20110110171234.280040@gmx.net> <20110110171431.GC6938@phare.normalesup.org>
Message-ID: <20110110172343.161140@gmx.net>

It is scikits.image I try to install! And in the depends-file it says
that cython is needed. Nevertheless, here is what I get back from the
cmd when I try to compile:

C:\Users\XY\Downloads\z-scikits\z-scikits.image>python setup.py install
Could not load nose. Unit tests not available.
cython -o C:\Users\XY\Downloads\z-scikits\z-scikits.image\scikits\image\opencv\opencv_backend.c.new C:\Users\XY\Downloads\z-scikits\z-scikits.image\scikits\image\opencv\opencv_backend.pyx
'cython' is not recognized as an internal or external command,
operable program or batch file.
Cython compilation of opencv_backend.pyx failed. Falling back on pre-generated file.
Traceback (most recent call last):
  File "setup.py", line 90, in <module>
    'scivi = scikits.image.scripts.scivi:main']
  File "C:\Program Files (x86)\Python26\ArcGIS10.0\lib\site-packages\numpy\distutils\core.py", line 152, in setup
    config = configuration()
  File "setup.py", line 40, in configuration
    config.add_subpackage(DISTNAME)
  File "C:\Program Files (x86)\Python26\ArcGIS10.0\lib\site-packages\numpy\distutils\misc_util.py", line 972, in add_subpackage
    caller_level = 2)
  File "C:\Program Files (x86)\Python26\ArcGIS10.0\lib\site-packages\numpy\distutils\misc_util.py", line 941, in get_subpackage
    caller_level = caller_level + 1)
  File "C:\Program Files (x86)\Python26\ArcGIS10.0\lib\site-packages\numpy\distutils\misc_util.py", line 878, in _get_configuration_from_setup_py
    config = setup_module.configuration(*args)
  File "scikits\image\setup.py", line 8, in configuration
    config.add_subpackage('opencv')
  File "C:\Program Files (x86)\Python26\ArcGIS10.0\lib\site-packages\numpy\distutils\misc_util.py", line 972, in add_subpackage
    caller_level = 2)
  File "C:\Program Files (x86)\Python26\ArcGIS10.0\lib\site-packages\numpy\distutils\misc_util.py", line 941, in get_subpackage
    caller_level = caller_level + 1)
  File "C:\Program Files (x86)\Python26\ArcGIS10.0\lib\site-packages\numpy\distutils\misc_util.py", line 878, in _get_configuration_from_setup_py
    config = setup_module.configuration(*args)
  File "scikits\image\opencv\setup.py", line 20, in configuration
    cython(cython_files, working_path=base_path)
  File "C:\Users\XY\Downloads\z-scikits\z-scikits.image\scikits\image\_build.py", line 41, in cython
    os.remove(c_file_new)
WindowsError: [Error 2] The system cannot find the file specified: 'C:\\Users\\XY\\Downloads\\z-scikits\\z-scikits.image\\scikits\\image\\opencv\\opencv_backend.c.new'

hopefully that brings some light into the dark

/j

-------- Original Message --------
> Date: Mon, 10 Jan 2011 18:14:31 +0100
> From: Gael Varoquaux
> To: SciPy Users List
> Subject: Re: [SciPy-User] installing/compiling scipy module on windows, mingw
>
> On Mon, Jan 10, 2011 at 06:12:34PM +0100, Johannes Radinger wrote:
> > No, it starts compiling and then it stops with the message that
> > cython isn't recognized...
>
> Darn, cython shouldn't be needed to build the scikit. Looks like we let
> something slip through during the last release. Could you please copy the
> exact error message here, and we'll do a bug fix release ASAP.
>
> Gaël
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From gael.varoquaux at normalesup.org Mon Jan 10 12:25:46 2011
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Mon, 10 Jan 2011 18:25:46 +0100
Subject: [SciPy-User] installing/compiling scipy module on windows, mingw
In-Reply-To: <20110110172343.161140@gmx.net>
References: <1294662518.6771.57.camel@mypride> <20110110133931.118620@gmx.net> <4D2B0F70.10509@free.fr> <20110110140001.116760@gmx.net> <4D2B176C.8010202@free.fr> <20110110161100.66590@gmx.net> <20110110162004.GC22378@phare.normalesup.org> <20110110171234.280040@gmx.net> <20110110171431.GC6938@phare.normalesup.org> <20110110172343.161140@gmx.net>
Message-ID: <20110110172546.GO29886@phare.normalesup.org>

On Mon, Jan 10, 2011 at 06:23:43PM +0100, Johannes Radinger wrote:
> It is scikits.image I try to install!

Sorry! I have been reading my mail too fast.
I feel better: no bugfix release to do for the scikit-learn.

G

From charlesr.harris at gmail.com Mon Jan 10 12:30:57 2011
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Mon, 10 Jan 2011 10:30:57 -0700
Subject: [SciPy-User] Speed-up simple function
In-Reply-To: <20110110172229.GA6544@spirit>
References: <201101101752.33150.faltet@pytables.org> <20110110172229.GA6544@spirit>
Message-ID:

On Mon, Jan 10, 2011 at 10:22 AM, Nicolau Werneck wrote:

> That is one excellent suggestion, using multiple cores is probably the
> best thing to do. I have tried a basic Cython implementation (because
> it's such a basic and interesting problem), but I couldn't yet reach a
> speedup of even 2.0.
>
> The big villain in the expression is the 'S**1.5'. Removing that from
> my Cython code gives a 12x speedup. Avoiding that exponentiation can
> probably be beneficial to other techniques too...
>

The square root itself should be fast so I suspect that a log is being
used. What happens if you use S*sqrt(S) instead?

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nwerneck at gmail.com Mon Jan 10 12:57:18 2011
From: nwerneck at gmail.com (Nicolau Werneck)
Date: Mon, 10 Jan 2011 15:57:18 -0200
Subject: [SciPy-User] Speed-up simple function
In-Reply-To:
References: <201101101752.33150.faltet@pytables.org> <20110110172229.GA6544@spirit>
Message-ID: <20110110175718.GA8177@spirit>

Great suggestion. I have tried just modifying the original numpy
expression, replacing S**1.5 with S*sqrt(S), and just by doing that I
already got a 2x speedup.

In the Cython version, using S*sqrt(S) gives a 7.3x speedup. Much
better than using exponentiation. Using the approximate rsqrt will
probably bring that closer to 10x.

++nic

On Mon, Jan 10, 2011 at 10:30:57AM -0700, Charles R Harris wrote:
> On Mon, Jan 10, 2011 at 10:22 AM, Nicolau Werneck
> wrote:
>
>   The square root itself should be fast so I suspect that a log is being
>   used. What happens if you use S*sqrt(S) instead?
>
> Chuck
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

--
Nicolau Werneck  C3CF E29F 5350 5DAA 3705
http://www.lti.pcs.usp.br/~nwerneck  7B9E D6C4 37BB DA64 6F15
Linux user #460716
"I wanted to change the world. But I have found that the only thing one
can be sure of changing is oneself." -- Aldous Huxley

From faltet at pytables.org Mon Jan 10 13:15:10 2011
From: faltet at pytables.org (Francesc Alted)
Date: Mon, 10 Jan 2011 19:15:10 +0100
Subject: [SciPy-User] Speed-up simple function
In-Reply-To:
References: <20110110172229.GA6544@spirit>
Message-ID: <201101101915.10464.faltet@pytables.org>

A Monday 10 January 2011 18:30:57 Charles R Harris escrigué:
> On Mon, Jan 10, 2011 at 10:22 AM, Nicolau Werneck wrote:
> > That is one excellent suggestion, using multiple cores is probably
> > the best thing to do. I have tried a basic Cython implementation
> > (because it's such a basic and interesting problem), but I
> > couldn't yet reach a speedup of even 2.0.
> > The big villain in the expression is the 'S**1.5'. Removing that
> > from my Cython code gives a 12x speedup. Avoiding that
> > exponentiation can probably be beneficial to other techniques
> > too...
>
> The square root itself should be fast so I suspect that a log is
> being used. What happens if you use S*sqrt(S) instead?

For the Numexpr case, this does not change anything, as this
optimization is already applied by the internal optimizer (to be
strict, this is applied only if the `optimization` flag in `evaluate()`
is set to "aggressive", but this is the default anyway).

--
Francesc Alted

From ben.root at ou.edu Mon Jan 10 13:20:11 2011
From: ben.root at ou.edu (Benjamin Root)
Date: Mon, 10 Jan 2011 12:20:11 -0600
Subject: [SciPy-User] updating netcdf file in scipy.io.netcdf
In-Reply-To:
References:
Message-ID:

On Mon, Jan 10, 2011 at 11:19 AM, Don Batchelor wrote:

> How does one read in a netcdf file, modify some variables, and write
> out the updated file using scipy.io.netcdf? In the old
> Scientific.IO.NetCDF there is the file open mode 'r+' that allows one
> both to read and write to a file. I don't see an equivalent
> functionality in scipy.io.netcdf.
>
> I have tried just opening the file with mode 'r', modifying some
> netcdf variables, and doing a file close to see what would happen.
> But when I try to change the netcdf variables I get "RuntimeError:
> array is not writeable". See below. It never reaches the 'close'
> statement. It must be possible to do this. Otherwise how could one
> make use of a record variable?
>
> Traceback (most recent call last):
>   File "netcdf_python_example.py", line 61, in <module>
>     Te[:] = radius[:]
>   File "/Users/dbh/Python_scripts/netcdf.py", line 677, in __setitem__
>     self.data[index] = data
> RuntimeError: array is not writeable
>
>
This is because the netcdf module uses memmap, so writing to netcdf
variables that come from a read-only file won't work.

Does using 'rw' not work for a mode? I haven't tried such a thing, so I
don't know what would happen. I usually open a file for reading and
create a different file for writing.

Ben Root
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From scipy.optimize at googlemail.com Mon Jan 10 17:25:18 2011
From: scipy.optimize at googlemail.com (scipy.optimize)
Date: Mon, 10 Jan 2011 23:25:18 +0100
Subject: [SciPy-User] Parameter for simulated annealing
Message-ID:

To whom it may concern,

I am trying to find good settings for simulated annealing, but at the
moment I fail. The fmin optimizer is working properly, but it does not
find the global minimum. No big surprise.

My problem:
I have 6 variables. 4 of them should be between -0.05 and 0.05 and 2 of
them between -45 and 45. I normalize them so they are all around -0.1 to
0.1.
The target function should be 1. I have implemented constraints by adding
10 to my target function if one of my variables is beyond its range. But
the simulated annealing does not respect the constraints. In most of my
iterations it tries values outside my given range.

What I have done:
I vary 2 parameters: dwell and T0
dwell from 50 to 1000
T0 from 0.2 to 1.2
And this with the 3 different models (anneal_boltzman, anneal_cauchy and
anneal_fast).

When I have T0=0.2 and dwell=100, the range simulated annealing tries is
about -0.08 to 0.08. This was my best result. When I raise T0 and dwell,
the range grows. But even after 600 iterations there is no trend to see.
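For illustration, this is roughly how I call the optimizer (a stripped-down
sketch, not my real script; expensive_model stands in for my real ~1 h
computation, and the keyword names are those of scipy.optimize.anneal):

###############################################

import numpy as np
from scipy.optimize import anneal

def expensive_model(x):
    # toy stand-in for my real ~1 h simulation
    return np.sum((x - 0.03)**2) + 1.0

def target(x):
    # penalized objective: add 10 whenever a variable leaves its range
    penalty = 10.0 if np.any(np.abs(x) > 0.1) else 0.0
    return expensive_model(x) + penalty

x0 = np.zeros(6)  # normalized start values
xmin, retval = anneal(target, x0, schedule='fast', T0=0.2, dwell=100)

###############################################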
I have computed nearly 20 variations of these parameters but I could not
find a big sensitivity.

The variation was basically blind because I have no exact idea what these
2 parameters stand for. I have read the manuals up and down.

Can anyone give me an approximate range in which I should search my
perfect parameters for dwell and T0? The best thing would be if I could
make a rough estimate of my minimum with the simulated annealing and
after this run the fmin optimizer, because the biggest problem is that 1
computation (1 iteration) takes about 1 h. So if I can separate the
search I can save much time. (I have a test mode in which 1 iteration
takes about 15 min.)

I hope someone can help me. Anyway, thanks for your help.

Kind regards,
Marius
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From brennan.williams at visualreservoir.com Mon Jan 10 20:42:07 2011
From: brennan.williams at visualreservoir.com (Brennan Williams)
Date: Tue, 11 Jan 2011 14:42:07 +1300
Subject: [SciPy-User] removing multiple occurrences of a specific value (or range of values) from an array
Message-ID: <4D2BB56F.3020605@visualreservoir.com>

I have a numpy array and I use .min(), .max(), .std(), average(..),
median(...) etc to get various stats values.

Depending on where the data originally came from, the array can contain
a null value which could be 1.0e+20 or similar (can vary from dataset to
dataset). Due to rounding errors this can sometimes appear as something
like 1.0000002004e+20 etc etc.

So I want to be able to correctly calculate the stats values by ignoring
the null values.

I also want to be able to replace the null values with another value
(for plotting/exporting).

What's the best way to do this without looping over the elements of the
array?

Thanks

Brennan

From josef.pktd at gmail.com Mon Jan 10 20:48:12 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Mon, 10 Jan 2011 20:48:12 -0500
Subject: [SciPy-User] removing multiple occurrences of a specific value (or range of values) from an array
In-Reply-To: <4D2BB56F.3020605@visualreservoir.com>
References: <4D2BB56F.3020605@visualreservoir.com>
Message-ID:

On Mon, Jan 10, 2011 at 8:42 PM, Brennan Williams wrote:
> I have a numpy array and I use .min(), .max(), .std(), average(..),
> median(...) etc to get various stats values.
>
> Depending on where the data originally came from, the array can contain
> a null value which could be 1.0e+20 or similar (can vary from dataset to
> dataset). Due to rounding errors this can sometimes appear as something
> like 1.0000002004e+20 etc etc.
>
> So I want to be able to correctly calculate the stats values by ignoring
> the null values.
>
> I also want to be able to replace the null values with another value
> (for plotting/exporting).
>
> What's the best way to do this without looping over the elements of the
> array?

If you don't have anything large, then you could just do

x[x>1e19] = np.nan

or filter them out, or convert to masked array.
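For example, a quick sketch (untested), with the data values made up:

import numpy as np

data = [1.0, 2.0, 1.0000002004e+20, 3.0]   # stand-in for your values
x = np.asarray(data, dtype=np.float64)
good = x < 1e19                # True where the value is not the null code
print x[good].mean(), x[good].std(), np.median(x[good])

# masked array version, keeps the original shape
xm = np.ma.masked_greater(x, 1e19)
print xm.mean(), xm.std()

x[~good] = 0.0                 # replace the nulls for plotting/exporting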
Josef

>
> Thanks
>
> Brennan
>
>
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

From brennan.williams at visualreservoir.com Mon Jan 10 20:57:54 2011
From: brennan.williams at visualreservoir.com (Brennan Williams)
Date: Tue, 11 Jan 2011 14:57:54 +1300
Subject: [SciPy-User] removing multiple occurrences of a specific value (or range of values) from an array
In-Reply-To:
References: <4D2BB56F.3020605@visualreservoir.com>
Message-ID: <4D2BB922.7010905@visualreservoir.com>

On 11/01/2011 2:48 p.m., josef.pktd at gmail.com wrote:
> On Mon, Jan 10, 2011 at 8:42 PM, Brennan Williams
> wrote:
>> I have a numpy array and I use .min(), .max(), .std(), average(..),
>> median(...) etc to get various stats values.
>>
>> Depending on where the data originally came from, the array can contain
>> a null value which could be 1.0e+20 or similar (can vary from dataset to
>> dataset). Due to rounding errors this can sometimes appear as something
>> like 1.0000002004e+20 etc etc.
>>
>> So I want to be able to correctly calculate the stats values by ignoring
>> the null values.
>>
>> I also want to be able to replace the null values with another value
>> (for plotting/exporting).
>>
>> What's the best way to do this without looping over the elements of the
>> array?
> If you don't have anything large, then you could just do
>
> x[x>1e19] = np.nan
>
> or filter them out, or convert to masked array.
>
the array is usually <10,000 values, often <1000

On a separate note I found that .std() didn't return a valid value when
I have a lot of 1.0e+20's in the array.
I realise that it is probably a
single precision issue and I probably won't need to worry about this in
future but I presume I should use .std(dtype=float64)?

I'm not sure I understand.
Are you using np.std on the array with missing values still in there
encoded with 1e20?
That wouldn't give the right values.

maybe numpy promotes automatically (contrary to the docs)

>>> np.arange(5, dtype='float32').std()
1.4142135623730951
>>> np.arange(5, dtype='float32').std().dtype
dtype('float64')
>>> np.arange(5, dtype='float32').dtype
dtype('float32')

Josef

>
>> Josef
>>
>
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

From brennan.williams at visualreservoir.com Mon Jan 10 21:33:36 2011
From: brennan.williams at visualreservoir.com (Brennan Williams)
Date: Tue, 11 Jan 2011 15:33:36 +1300
Subject: [SciPy-User] removing multiple occurrences of a specific value (or range of values) from an array
In-Reply-To:
References: <4D2BB56F.3020605@visualreservoir.com> <4D2BB922.7010905@visualreservoir.com>
Message-ID: <4D2BC180.1020803@visualreservoir.com>

On 11/01/2011 3:15 p.m., josef.pktd at gmail.com wrote:
> On Mon, Jan 10, 2011 at 8:57 PM, Brennan Williams
> wrote:
>> On 11/01/2011 2:48 p.m., josef.pktd at gmail.com wrote:
>>> On Mon, Jan 10, 2011 at 8:42 PM, Brennan Williams
>>> wrote:
>>>> I have a numpy array and I use .min(), .max(), .std(), average(..),
>>>> median(...) etc to get various stats values.
>>>>
>>>> Depending on where the data originally came from, the array can contain
>>>> a null value which could be 1.0e+20 or similar (can vary from dataset to
>>>> dataset). Due to rounding errors this can sometimes appear as something
>>>> like 1.0000002004e+20 etc etc.
>>>>
>>>> So I want to be able to correctly calculate the stats values by ignoring
>>>> the null values.
>>>>
>>>> I also want to be able to replace the null values with another value
>>>> (for plotting/exporting).
>>>>
>>>> What's the best way to do this without looping over the elements of the
>>>> array?
>>> If you don't have anything large, then you could just do
>>>
>>> x[x>1e19] = np.nan
>>>
>>> or filter them out, or convert to masked array.
>>>
>> the array is usually <10,000 values, often <1000
>>
>> On a separate note I found that .std() didn't return a valid value when
>> I have a lot of 1.0e+20's in the array. I realise that it is probably a
>> single precision issue and I probably won't need to worry about this in
>> future but I presume I should use .std(dtype=float64)?
> I'm not sure I understand.
> Are you using np.std on the array with missing values still in there
> encoded with 1e20?
> That wouldn't give the right values.
I was, but that was because I didn't realise the array had bad/missing
values in it. In theory the data coming in should just have been an
array of zeros but it is inconsistent, for some reason I haven't been
able to work out yet.
> maybe numpy promotes automatically (contrary to the docs)
>
>>>> np.arange(5, dtype='float32').std()
> 1.4142135623730951
>>>> np.arange(5, dtype='float32').std().dtype
> dtype('float64')
>>>> np.arange(5, dtype='float32').dtype
> dtype('float32')
>
Hmmm

a = np.arange(5, dtype='float32')
b = a*1.0e+20
b.std()
returns inf
whereas
b.std(dtype='float64')
returns 1.4142....e+20

Brennan

From charlesr.harris at gmail.com Mon Jan 10 21:39:47 2011
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Mon, 10 Jan 2011 19:39:47 -0700
Subject: [SciPy-User] Parameter for simulated annealing
In-Reply-To:
References:
Message-ID:

On Mon, Jan 10, 2011 at 3:25 PM, scipy.optimize <
scipy.optimize at googlemail.com> wrote:

> To whom it may concern,
>
> I am trying to find good settings for simulated annealing, but at the
> moment I fail. The fmin optimizer is working properly, but it does not
> find the global minimum. No big surprise.
>
> My problem:
> I have 6 variables. 4 of them should be between -0.05 and 0.05 and 2 of
> them between -45 and 45. I normalize them so they are all around -0.1 to
> 0.1.
> The target function should be 1. I have implemented constraints by adding
> 10 to my target function if one of my variables is beyond its range. But
> the simulated annealing does not respect the constraints. In most of my
> iterations it tries values outside my given range.
>
> What I have done:
> I vary 2 parameters: dwell and T0
> dwell from 50 to 1000
> T0 from 0.2 to 1.2
> And this with the 3 different models (anneal_boltzman, anneal_cauchy and
> anneal_fast).
>
> When I have T0=0.2 and dwell=100, the range simulated annealing tries is
> about -0.08 to 0.08. This was my best result. When I raise T0 and dwell,
> the range grows. But even after 600 iterations there is no trend to see.
> I have computed nearly 20 variations of these parameters but I could not
> find a big sensitivity.
>
> The variation was basically blind because I have no exact idea what these
> 2 parameters stand for. I have read the manuals up and down.
>
> Can anyone give me an approximate range in which I should search my
> perfect parameters for dwell and T0? The best thing would be if I could
> make a rough estimate of my minimum with the simulated annealing and
> after this run the fmin optimizer, because the biggest problem is that 1
> computation (1 iteration) takes about 1 h. So if I can separate the
> search I can save much time. (I have a test mode in which 1 iteration
> takes about 15 min.)
>
> I hope someone can help me. Anyway, thanks for your help.
>

I also hope someone has a good answer because my experience is that
simulated annealing is rather sensitive to the choice of parameters. When
it works, it's great, but genetic algorithms tend to be more robust.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From josef.pktd at gmail.com Mon Jan 10 23:07:03 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Mon, 10 Jan 2011 23:07:03 -0500
Subject: [SciPy-User] Parameter for simulated annealing
In-Reply-To:
References:
Message-ID:

On Mon, Jan 10, 2011 at 9:39 PM, Charles R Harris wrote:
>
>
> On Mon, Jan 10, 2011 at 3:25 PM, scipy.optimize
> wrote:
>>
>> To whom it may concern,
>>
>> I am trying to find good settings for simulated annealing, but at the
>> moment I fail. The fmin optimizer is working properly, but it does not
>> find the global minimum. No big surprise.
>>
>> My problem:
>> I have 6 variables. 4 of them should be between -0.05 and 0.05 and 2 of
>> them between -45 and 45. I normalize them so they are all around -0.1 to
>> 0.1.
>> The target function should be 1. I have implemented constraints by
>> adding 10 to my target function if one of my variables is beyond its
>> range. But the simulated annealing does not respect the constraints.
>> In most of my iterations it tries values outside my given range.
>>
>> What I have done:
>> I vary 2 parameters: dwell and T0
>> dwell from 50 to 1000
>> T0 from 0.2 to 1.2
>> And this with the 3 different models (anneal_boltzman, anneal_cauchy
>> and anneal_fast).
>>
>> When I have T0=0.2 and dwell=100, the range simulated annealing tries
>> is about -0.08 to 0.08. This was my best result. When I raise T0 and
>> dwell, the range grows. But even after 600 iterations there is no trend
>> to see.
>> I have computed nearly 20 variations of these parameters but I could
>> not find a big sensitivity.
>>
>> The variation was basically blind because I have no exact idea what
>> these 2 parameters stand for. I have read the manuals up and down.
>>
>> Can anyone give me an approximate range in which I should search my
>> perfect parameters for dwell and T0? The best thing would be if I could
>> make a rough estimate of my minimum with the simulated annealing and
>> after this run the fmin optimizer, because the biggest problem is that
>> 1 computation (1 iteration) takes about 1 h. So if I can separate the
>> search I can save much time. (I have a test mode in which 1 iteration
>> takes about 15 min.)
>>
>> I hope someone can help me. Anyway, thanks for your help.

I think "upper" and "lower" are set too high for your problem; new
points are evaluated with a change of up to +/- 100, which is too large
if your parameters are in the range -0.1, 0.1.

upper and lower are the only parameters I looked at, all the other ones
are more difficult to figure out. I haven't seen any documentation,
there is only the source.

I never tried to plot the distribution for new random draws, and I
don't know how concentrated they are.

One useful way of debugging is to save all parameter values that are
tried out from your objective function to a global variable or print
them out to see whether they are in a reasonable range for your example.

If you expect your parameters to be in the range -0.1, 0.1 (if I
understand correctly), then I would try setting lower=-0.1 or -0.05 and
upper the same but positive.

Just some thoughts when I was playing with it, I haven't seen any good
explanations on the mailing list (yet).

Josef

>>
>
> I also hope someone has a good answer because my experience is that
> simulated annealing is rather sensitive to the choice of parameters. When it
> works, it's great, but genetic algorithms tend to be more robust.
>
> Chuck
>
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
>

From Chris.Barker at noaa.gov Mon Jan 10 23:29:57 2011
From: Chris.Barker at noaa.gov (Chris Barker)
Date: Mon, 10 Jan 2011 20:29:57 -0800
Subject: [SciPy-User] updating netcdf file in scipy.io.netcdf
In-Reply-To:
References:
Message-ID: <4D2BDCC5.3000902@noaa.gov>

On 1/10/2011 10:20 AM, Benjamin Root wrote:
> How does one read in a netcdf file, modify some variables, and write
> out the updated file using scipy.io.netcdf?

I use Jeff Whitaker's NetCDF4 package -- it does support this -- that
may be an option for you.
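Something like this should work -- a minimal sketch from memory, so
double-check the names against the netCDF4 docs (the file name and the
variable name 'Te' here are made up):

from netCDF4 import Dataset

nc = Dataset('example.nc', 'r+')   # open an existing file for read/write
te = nc.variables['Te']            # assumed to be a variable in the file
te[:] = te[:] * 2.0                # modify in place, e.g. scale the values
nc.close()                         # changes are written back to the file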
-Chris

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R    (206) 526-6959  voice
7600 Sand Point Way NE  (206) 526-6329  fax
Seattle, WA 98115    (206) 526-6317  main reception

Chris.Barker at noaa.gov

From Chris.Barker at noaa.gov Mon Jan 10 23:31:51 2011
From: Chris.Barker at noaa.gov (Chris Barker)
Date: Mon, 10 Jan 2011 20:31:51 -0800
Subject: [SciPy-User] Speed-up simple function
In-Reply-To: <20110110161445.GA28284@spirit>
References: <20110110161445.GA28284@spirit>
Message-ID: <4D2BDD37.7040604@noaa.gov>

On 1/10/2011 8:14 AM, Nicolau Werneck wrote:
> If you really need to speed that up, you just have to implement this
> _dens0 function using weave, Cython or something like that. I can give
> you some more info about Cython, write me at my address if you want.

The way the original is written uses a fair number of temporaries; I'll
bet it could be sped up with pure numpy if that was addressed --
probably not as fast as Cython or numexpr, but maybe easier.

-Chris

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R    (206) 526-6959  voice
7600 Sand Point Way NE  (206) 526-6329  fax
Seattle, WA 98115    (206) 526-6317  main reception

Chris.Barker at noaa.gov

From a at gaydenko.com Tue Jan 11 05:39:07 2011
From: a at gaydenko.com (Andrew Gaydenko)
Date: Tue, 11 Jan 2011 13:39:07 +0300
Subject: [SciPy-User] bandlimiting sample sequence
Message-ID: <201101111339.07153.a@gaydenko.com>

Hi!

Say, I have a given sample rate Fs and a sequence of (soft-generated)
samples. The aim is to limit the band below Fs/2. I mean any limiting
params are also given (for example, attenuation at 0.9 * Fs/2 is -1 dB or
less, attenuation at Fs/2 must be -60 dB or more, gain oscillation in some
frequency range (say, from 0 Hz to 0.8 * Fs/2) must be in a +/-0.01 dB
range).

Has anybody examples (or references to publicly available examples) I can
take to understand how to band-limit a sequence the described way?

Andrew

From jeanluc.menut at free.fr Tue Jan 11 06:17:48 2011
From: jeanluc.menut at free.fr (Jean-Luc Menut)
Date: Tue, 11 Jan 2011 12:17:48 +0100
Subject: [SciPy-User] installing/compiling scipy module on windows, mingw
In-Reply-To: <20110110172343.161140@gmx.net>
References: <1294645629.6771.20.camel@mypride> <1294662518.6771.57.camel@mypride> <20110110133931.118620@gmx.net> <4D2B0F70.10509@free.fr> <20110110140001.116760@gmx.net> <4D2B176C.8010202@free.fr> <20110110161100.66590@gmx.net> <20110110162004.GC22378@phare.normalesup.org> <20110110171234.280040@gmx.net> <20110110171431.GC6938@phare.normalesup.org> <20110110172343.161140@gmx.net>
Message-ID: <4D2C3C5C.90008@free.fr>

> Could not load nose. Unit tests not available.

You should maybe install nose, it'll test your scikit at the end of the
installation.

> 'cython' is not recognized as an internal or external command,
> operable program or batch file.

It seems that you don't have cython installed or in the path. Type
"cython" in cmd to test.
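For example, from cmd (a sketch, assuming everything is on the PATH):

C:\> cython --version
C:\> python -c "import nose; print nose.__version__"

If the first command prints a version number, cython is installed and
reachable.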
From JRadinger at gmx.at Tue Jan 11 06:56:58 2011
From: JRadinger at gmx.at (Johannes Radinger)
Date: Tue, 11 Jan 2011 12:56:58 +0100
Subject: [SciPy-User] installing/compiling scipy module on windows, mingw
In-Reply-To: <4D2C3C5C.90008@free.fr>
References: <1294645629.6771.20.camel@mypride> <1294662518.6771.57.camel@mypride> <20110110133931.118620@gmx.net> <4D2B0F70.10509@free.fr> <20110110140001.116760@gmx.net> <4D2B176C.8010202@free.fr> <20110110161100.66590@gmx.net> <20110110162004.GC22378@phare.normalesup.org> <20110110171234.280040@gmx.net> <20110110171431.GC6938@phare.normalesup.org> <20110110172343.161140@gmx.net> <4D2C3C5C.90008@free.fr>
Message-ID: <20110111115658.143350@gmx.net>

-------- Original Message --------
> Date: Tue, 11 Jan 2011 12:17:48 +0100
> From: Jean-Luc Menut
> To: SciPy Users List
> Subject: Re: [SciPy-User] installing/compiling scipy module on windows, mingw
>
> > Could not load nose. Unit tests not available.
>
> You should maybe install nose, it'll test your scikit at the end of the
> installation.

I'm trying to install nose... will let you know what it'll test.

> > 'cython' is not recognized as an internal or external command,
> > operable program or batch file.
>
> It seems that you don't have cython installed or in the path. Type
> "cython" in cmd to test.

I installed cython... with the precompiled binaries for windows and
python 2.6... I can then find cython in my python directory and added
the containing folder of cython
(C:\Program Files (x86)\python26\ArcGIS10.0\Lib\site-packages;) to PATH?
Is that correct so far?

> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From jeanluc.menut at free.fr Tue Jan 11 07:02:41 2011
From: jeanluc.menut at free.fr (Jean-Luc Menut)
Date: Tue, 11 Jan 2011 13:02:41 +0100
Subject: [SciPy-User] installing/compiling scipy module on windows, mingw
In-Reply-To: <20110111115658.143350@gmx.net>
References: <1294645629.6771.20.camel@mypride> <1294662518.6771.57.camel@mypride> <20110110133931.118620@gmx.net> <4D2B0F70.10509@free.fr> <20110110140001.116760@gmx.net> <4D2B176C.8010202@free.fr> <20110110161100.66590@gmx.net> <20110110162004.GC22378@phare.normalesup.org> <20110110171234.280040@gmx.net> <20110110171431.GC6938@phare.normalesup.org> <20110110172343.161140@gmx.net> <4D2C3C5C.90008@free.fr> <20110111115658.143350@gmx.net>
Message-ID: <4D2C46E1.10908@free.fr>

> I installed cython... with the precompiled binaries for windows and python 2.6...
> I can then find cython in my python directory and added the containing folder of
> cython (C:\Program Files (x86)\python26\ArcGIS10.0\Lib\site-packages;) to PATH?
> Is that correct so far?

The "ArcGIS10.0" part of the path surprises me, but except for that it
seems correct. Did you check if cython.exe is really in the directory?
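You can also ask python where the package lives, e.g.:

C:\> python -c "import Cython; print Cython.__file__"

That only proves the python package is importable; the cython.exe
command-line script is installed separately (normally in python's
Scripts directory).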
From JRadinger at gmx.at Tue Jan 11 07:35:34 2011
From: JRadinger at gmx.at (Johannes Radinger)
Date: Tue, 11 Jan 2011 13:35:34 +0100
Subject: [SciPy-User] installing/compiling scipy module on windows, mingw
In-Reply-To: <4D2C46E1.10908@free.fr>
References: <1294645629.6771.20.camel@mypride> <1294662518.6771.57.camel@mypride> <20110110133931.118620@gmx.net> <4D2B0F70.10509@free.fr> <20110110140001.116760@gmx.net> <4D2B176C.8010202@free.fr> <20110110161100.66590@gmx.net> <20110110162004.GC22378@phare.normalesup.org> <20110110171234.280040@gmx.net> <20110110171431.GC6938@phare.normalesup.org> <20110110172343.161140@gmx.net> <4D2C3C5C.90008@free.fr> <20110111115658.143350@gmx.net> <4D2C46E1.10908@free.fr>
Message-ID: <20110111123534.132310@gmx.net>

-------- Original Message --------
> Date: Tue, 11 Jan 2011 13:02:41 +0100
> From: Jean-Luc Menut
> To: SciPy Users List
> Subject: Re: [SciPy-User] installing/compiling scipy module on windows, mingw
>
> > I installed cython... with the precompiled binaries for windows and
> > python 2.6...
> > I can then find cython in my python directory and added the containing
> > folder of cython (C:\Program Files (x86)\python26\ArcGIS10.0\Lib\site-packages;)
> > to PATH? Is that correct so far?
>
> The "ArcGIS10.0" part of the path surprises me, but except for that it
> seems correct. Did you check if cython.exe is really in the directory?

Oh, thank you for that hint... there are cython files in the directory
but no exe... it seems that the cython win-installer didn't properly
(completely) install cython. There are only the following cython files
in the named directory:

07.01.2011  20:55              Cython
14.12.2010  14:52        1.872  Cython-0.14-py2.6.egg-info
13.12.2010  23:42          209  cython.py
10.01.2011  14:06          345  cython.pyc
10.01.2011  14:06          345  cython.pyo

How should I proceed? Is the cython.exe somewhere hidden or just not
installed? There is only the Cython-0.14.win32-py2.6.exe file, which I
double-clicked for the installation.

thanks

/j

> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From faltet at pytables.org Tue Jan 11 07:39:50 2011
From: faltet at pytables.org (Francesc Alted)
Date: Tue, 11 Jan 2011 13:39:50 +0100
Subject: [SciPy-User] Speed-up simple function
In-Reply-To: <20110110175718.GA8177@spirit>
References: <20110110175718.GA8177@spirit>
Message-ID: <201101111339.50220.faltet@pytables.org>

A Monday 10 January 2011 18:57:18 Nicolau Werneck escrigué:
> Great suggestion. I have tried just modifying the original numpy
> expression, replacing S**1.5 with S*sqrt(S), and just by doing that
> I already got a 2x speedup.
>
> In the Cython version, using S*sqrt(S) gives a 7.3x speedup. Much
> better than using exponentiation. Using the approximate rsqrt will
> probably bring that closer to 10x.

Mmh, for double precision you cannot expect big speed-ups (at least,
until the new AVX instruction set is broadly available).
Here is an estimation of the speed-up you can get by using Numexpr+VML,
which uses SSEx (SSE4 in my case):

>>> x = np.linspace(0,1,1e6)
>>> timeit np.sqrt(x)
100 loops, best of 3: 6.69 ms per loop
>>> timeit ne.evaluate("sqrt(x)")
100 loops, best of 3: 4.37 ms per loop   # only 1.5x speed-up

With single precision things are different:

>>> x = np.linspace(0,1,1e6).astype('f4')
>>> timeit np.sqrt(x)
100 loops, best of 3: 4.61 ms per loop
>>> timeit ne.evaluate("sqrt(x)")
1000 loops, best of 3: 1.83 ms per loop  # 2.5x speed-up

In my opinion, as newer processors will carry more cores, multithreading
will become a simpler (and cheaper) option for accelerating this sort of
computation (as well as computations limited by memory bandwidth). SIMD
could help in getting more speed, of course, but as I see it, it is
multithreading that will be key for computational problems in the near
future (present?).

--
Francesc Alted

From jeanluc.menut at free.fr Tue Jan 11 09:18:06 2011
From: jeanluc.menut at free.fr (Jean-Luc Menut)
Date: Tue, 11 Jan 2011 15:18:06 +0100
Subject: [SciPy-User] installing/compiling scipy module on windows, mingw
In-Reply-To: <20110111123534.132310@gmx.net>
References: <1294645629.6771.20.camel@mypride> <1294662518.6771.57.camel@mypride> <20110110133931.118620@gmx.net> <4D2B0F70.10509@free.fr> <20110110140001.116760@gmx.net> <4D2B176C.8010202@free.fr> <20110110161100.66590@gmx.net> <20110110162004.GC22378@phare.normalesup.org> <20110110171234.280040@gmx.net> <20110110171431.GC6938@phare.normalesup.org> <20110110172343.161140@gmx.net> <4D2C3C5C.90008@free.fr> <20110111115658.143350@gmx.net> <4D2C46E1.10908@free.fr> <20110111123534.132310@gmx.net>
Message-ID: <4D2C669E.7090505@free.fr>

> How should I proceed? Is the cython.exe somewhere

Hmm, I've checked, there is no cython executable. Go to this page and try
to follow the beginning of the tutorial to check if cython is correctly
installed:
http://docs.cython.org/src/userguide/tutorial.html

From bsouthey at gmail.com Tue Jan 11 09:55:24 2011
From: bsouthey at gmail.com (Bruce Southey)
Date: Tue, 11 Jan 2011 08:55:24 -0600
Subject: [SciPy-User] removing multiple occurrences of a specific value (or range of values) from an array
In-Reply-To: <4D2BC180.1020803@visualreservoir.com>
References: <4D2BB56F.3020605@visualreservoir.com> <4D2BB922.7010905@visualreservoir.com> <4D2BC180.1020803@visualreservoir.com>
Message-ID: <4D2C6F5C.2080505@gmail.com>

On 01/10/2011 08:33 PM, Brennan Williams wrote:
> On 11/01/2011 3:15 p.m., josef.pktd at gmail.com wrote:
>> On Mon, Jan 10, 2011 at 8:57 PM, Brennan Williams
>> wrote:
>>> On 11/01/2011 2:48 p.m., josef.pktd at gmail.com wrote:
>>>> On Mon, Jan 10, 2011 at 8:42 PM, Brennan Williams
>>>> wrote:
>>>>> I have a numpy array and I use .min(), .max(), .std(), average(..),
>>>>> median(...) etc to get various stats values.
>>>>>
>>>>> Depending on where the data originally came from, the array can contain
>>>>> a null value which could be 1.0e+20 or similar (can vary from dataset to
>>>>> dataset). Due to rounding errors this can sometimes appear as something
>>>>> like 1.0000002004e+20 etc etc.
>>>>>
>>>>> So I want to be able to correctly calculate the stats values by ignoring
>>>>> the null values.
>>>>>
>>>>> I also want to be able to replace the null values with another value
>>>>> (for plotting/exporting).
>>>>>
>>>>> What's the best way to do this without looping over the elements of the
>>>>> array?
>>>> If you don't have anything large, then you could just do
>>>>
>>>> x[x>1e19] = np.nan
>>>>
>>>> or filter them out, or convert to masked array.
>>>>
>>> the array is usually <10,000 values, often <1000
>>>
>>> On a separate note I found that .std() didn't return a valid value when
>>> I have a lot of 1.0e+20's in the array. I realise that it is probably a
>>> single precision issue and I probably won't need to worry about this in
>>> future but I presume I should use .std(dtype=float64)?
>> I'm not sure I understand.
>> Are you using np.std on the array with missing values still in there
>> encoded with 1e20?
>> That wouldn't give the right values.
> I was, but that was because I didn't realise the array had bad/missing
> values in it. In theory the data coming in should just have been an
> array of zeros but it is inconsistent, for some reason I haven't been
> able to work out yet.
>> maybe numpy promotes automatically (contrary to the docs)
>>
>>>>> np.arange(5, dtype='float32').std()
>> 1.4142135623730951
>>>>> np.arange(5, dtype='float32').std().dtype
>> dtype('float64')
>>>>> np.arange(5, dtype='float32').dtype
>> dtype('float32')
>>
> Hmmm
>
> a = np.arange(5, dtype='float32')
> b = a*1.0e+20
> b.std()
> returns inf
> whereas
> b.std(dtype='float64')
> returns 1.4142....e+20
>
> Brennan
>
That's because you are not being careful about your floating point
precision.

Since standard deviation involves squaring, the required intermediate
precision is over 1.e+40, which exceeds 32-bit precision. (On my x86
64-bit Linux system, 1.e+308 is about the limit for 64-bit and 1.e+4932
is the limit for float128 - these limits do vary across processors and
operating systems.)

>>> print np.finfo(np.float32)
Machine parameters for float32
---------------------------------------------------------------------
precision=  6   resolution= 1.0000000e-06
machep=   -23   eps=        1.1920929e-07
negep =   -24   epsneg=     5.9604645e-08
minexp=  -126   tiny=       1.1754944e-38
maxexp=   128   max=        3.4028235e+38
nexp  =     8   min=        -max
---------------------------------------------------------------------

You can also see this just by doing:

>>> b*b
Warning: overflow encountered in multiply
array([  0.,  inf,  inf,  inf,  inf], dtype=float32)

Bruce

From nwerneck at gmail.com Tue Jan 11 16:29:51 2011
From: nwerneck at gmail.com (Nicolau Werneck)
Date: Tue, 11 Jan 2011 19:29:51 -0200
Subject: [SciPy-User] bandlimiting sample sequence
In-Reply-To: <201101111339.07153.a@gaydenko.com>
References: <201101111339.07153.a@gaydenko.com>
Message-ID: <20110111212951.GA3528@spirit>

You mean you need to design a low-pass filter?...

You need to select a filter type, and then use functions such as
scipy.signal.cheb1ord to find out the minimum filter order that
fulfils your requirements. Then you calculate the filter coefficients
with e.g. scipy.signal.cheby1, and filter the signal with lfilter().

++nic

On Tue, Jan 11, 2011 at 01:39:07PM +0300, Andrew Gaydenko wrote:
> Hi!
>
> Say, I have a given sample rate Fs and a sequence of (soft-generated)
> samples. The aim is to limit the band below Fs/2. I mean any limiting
> params are also given (for example, attenuation at 0.9 * Fs/2 is -1 dB
> or less, attenuation at Fs/2 must be -60 dB or more, gain oscillation
> in some frequency range (say, from 0 Hz to 0.8 * Fs/2) must be in a
> +/-0.01 dB range).
>
> Has anybody examples (or references to publicly available examples) I
> can take to understand how to band-limit a sequence the described way?
>
> Andrew
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

--
Nicolau Werneck  C3CF E29F 5350 5DAA 3705
http://www.lti.pcs.usp.br/~nwerneck  7B9E D6C4 37BB DA64 6F15
Linux user #460716
"One man's "magic" is another man's engineering. "Supernatural" is a null
word." -- Robert Heinlein

From a at gaydenko.com Tue Jan 11 17:11:21 2011
From: a at gaydenko.com (Andrew Gaydenko)
Date: Wed, 12 Jan 2011 01:11:21 +0300
Subject: [SciPy-User] bandlimiting sample sequence
In-Reply-To: <20110111212951.GA3528@spirit>
References: <201101111339.07153.a@gaydenko.com> <20110111212951.GA3528@spirit>
Message-ID: <201101120111.21917.a@gaydenko.com>

On Wednesday, January 12, 2011 00:29:51 Nicolau Werneck wrote:
> You mean you need to design a low-pass filter?...

Yes, I mean LPF.

> You need to select a filter type, and then use functions such as
> scipy.signal.cheb1ord to find out the minimum filter order that
> fulfils your requirements. Then you calculate the filter coefficients
> with e.g. scipy.signal.cheby1, and filter the signal with lfilter().
>
> ++nic

Thanks for the plan! - I will try.

> On Tue, Jan 11, 2011 at 01:39:07PM +0300, Andrew Gaydenko wrote:
> > Hi!
> >
> > Say, I have a given sample rate Fs and a sequence of (soft-generated)
> > samples. The aim is to limit the band below Fs/2. I mean any limiting
> > params are also given (for example, attenuation at 0.9 * Fs/2 is -1 dB
> > or less, attenuation at Fs/2 must be -60 dB or more, gain oscillation
> > in some frequency range (say, from 0 Hz to 0.8 * Fs/2) must be in a
> > +/-0.01 dB range).
> >
> > Has anybody examples (or references to publicly available examples) I
> > can take to understand how to band-limit a sequence the described way?
> >
> > Andrew
> > _______________________________________________
> > SciPy-User mailing list
> > SciPy-User at scipy.org
> > http://mail.scipy.org/mailman/listinfo/scipy-user

From brennan.williams at visualreservoir.com Tue Jan 11 17:46:57 2011
From: brennan.williams at visualreservoir.com (Brennan Williams)
Date: Wed, 12 Jan 2011 11:46:57 +1300
Subject: [SciPy-User] removing multiple occurrences of a specific value (or range of values) from an array
In-Reply-To: <4D2C6F5C.2080505@gmail.com>
References: <4D2BB56F.3020605@visualreservoir.com> <4D2BB922.7010905@visualreservoir.com> <4D2BC180.1020803@visualreservoir.com> <4D2C6F5C.2080505@gmail.com>
Message-ID: <4D2CDDE1.7040901@visualreservoir.com>

On 12/01/2011 3:55 a.m., Bruce Southey wrote:
> On 01/10/2011 08:33 PM, Brennan Williams wrote:
>> On 11/01/2011 3:15 p.m., josef.pktd at gmail.com wrote:
>>> On Mon, Jan 10, 2011 at 8:57 PM, Brennan Williams
>>> wrote:
>>>> On 11/01/2011 2:48 p.m., josef.pktd at gmail.com wrote:
>>>>> On Mon, Jan 10, 2011 at 8:42 PM, Brennan Williams
>>>>> wrote:
>>>>>> I have a numpy array and I use .min(), .max(), .std(), average(..),
>>>>>> median(...) etc to get various stats values.
>>>>>>
>>>>>> Depending on where the data originally came from, the array can contain
>>>>>> a null value which could be 1.0e+20 or similar (can vary from dataset to
>>>>>> dataset). Due to rounding errors this can sometimes appear as something
>>>>>> like 1.0000002004e+20 etc etc.
>>>>>>
>>>>>> So I want to be able to correctly calculate the stats values by ignoring
>>>>>> the null values.
>>>>>>
>>>>>> I also want to be able to replace the null values with another value
>>>>>> (for plotting/exporting).
>>>>>>
>>>>>> What's the best way to do this without looping over the elements of the
>>>>>> array?
>>>>> If you don't have anything large, then you could just do
>>>>>
>>>>> x[x>1e19] = np.nan
>>>>>
>>>>> or filter them out, or convert to masked array.
>>>>>
>>>> the array is usually <10,000 values, often <1000
>>>>
>>>> On a separate note I found that .std() didn't return a valid value when
>>>> I have a lot of 1.0e+20's in the array. I realise that it is probably a
>>>> single precision issue and I probably won't need to worry about this in
>>>> future but I presume I should use .std(dtype=float64)?
>>> I'm not sure I understand.
>>> Are you using np.std on the array with missing values still in there
>>> encoded with 1e20?
>>> That wouldn't give the right values.
>> I was, but that was because I didn't realise the array had bad/missing
>> values in it. In theory the data coming in should just have been an
>> array of zeros but it is inconsistent, for some reason I haven't been
>> able to work out yet.
>>> maybe numpy promotes automatically (contrary to the docs)
>>>
>>>>>> np.arange(5, dtype='float32').std()
>>> 1.4142135623730951
>>>>>> np.arange(5, dtype='float32').std().dtype
>>> dtype('float64')
>>>>>> np.arange(5, dtype='float32').dtype
>>> dtype('float32')
>>>
>> Hmmm
>>
>> a = np.arange(5, dtype='float32')
>> b = a*1.0e+20
>> b.std()
>> returns inf
>> whereas
>> b.std(dtype='float64')
>> returns 1.4142....e+20
>>
>> Brennan
>>
> That's because you are not being careful about your floating point
> precision.
>
> Since standard deviation involves squaring, the required intermediate
> precision is over 1.e+40, which exceeds 32-bit precision. (On my x86
> 64-bit Linux system, 1.e+308 is about the limit for 64-bit and 1.e+4932
> is the limit for float128 - these limits do vary across processors and
> operating systems.)
>
> >>> print np.finfo(np.float32)
> Machine parameters for float32
> ---------------------------------------------------------------------
> precision=  6   resolution= 1.0000000e-06
> machep=   -23   eps=        1.1920929e-07
> negep =   -24   epsneg=     5.9604645e-08
> minexp=  -126   tiny=       1.1754944e-38
> maxexp=   128   max=        3.4028235e+38
> nexp  =     8   min=        -max
> ---------------------------------------------------------------------
>
> You can also see this just by doing:
> >>> b*b
> Warning: overflow encountered in multiply
> array([  0.,  inf,  inf,  inf,  inf], dtype=float32)
>
> Bruce
>
My values shouldn't be up in the e+20 range anyway, it's effectively a
null or indeterminate value. However, that isn't to say that I shouldn't
code to handle it, which I will now do.

The slightly confusing thing to me is that .std() returns a float64 but
if you don't specify .std(dtype='float64') then you run into the
precision problem. So presumably, internally, std is using 32-bit and
only converts to 64-bit on output.

Either way it's fairly obvious what is going on and easy to sort out
once you (as in me) realise the mistake.

Thanks for the tip about finfo

Brennan

> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

From klemm at phys.ethz.ch Tue Jan 11 18:02:48 2011
From: klemm at phys.ethz.ch (Hanno Klemm)
Date: Wed, 12 Jan 2011 00:02:48 +0100
Subject: [SciPy-User] bottleneck group_median
Message-ID: <2A0AF3B6-3D5C-4224-BDFE-945BEEA6A1E7@phys.ethz.ch>

Hello,

I am looking at the bottleneck package and the benchmarks are really
impressive.
Would it be possible, or is it planned, to implement a group_median, analogous to the group_mean? I am at the moment working on a project where I have to search through large arrays and compute medians for certain values, which is quite slow in conventional numpy/scipy. Therefore a group_median would be absolutely fantastic for me. I would promise to contribute, but unfortunately my C skills are so limited that I would really not be of much help in finite time. Thanks for a cool package, Hanno From kwgoodman at gmail.com Tue Jan 11 18:28:15 2011 From: kwgoodman at gmail.com (Keith Goodman) Date: Tue, 11 Jan 2011 15:28:15 -0800 Subject: [SciPy-User] bottleneck group_median In-Reply-To: <2A0AF3B6-3D5C-4224-BDFE-945BEEA6A1E7@phys.ethz.ch> References: <2A0AF3B6-3D5C-4224-BDFE-945BEEA6A1E7@phys.ethz.ch> Message-ID: On Tue, Jan 11, 2011 at 3:02 PM, Hanno Klemm wrote: > I am looking at the bottleneck package and the benchmarks are really > impressive. Would it be possible, or is it planned, to implement a > group_median, analogous to the group_mean? The focus of the next release of Bottleneck (v0.3) is moving window functions. I recently added move_min, move_max, move_nanmin, move_nanmax. (I don't yet know how to efficiently find a moving median. Anyone have suggestions for algorithms?) Bottleneck has a fast median function that can used to build a group_median function. I plan to work on the group functions in v0.4. Before adding functions (like group_median) I'd like to review the function signature of the group function. Suggestions welcomed. Any changes you'd like to see in the inputs/outputs of the group functions? > I am at the moment working on a project where I have to search through > large arrays and compute medians for certain values, which is quite > slow in conventional numpy/scipy. Therefore a group_median would be > absolutely fantastic for me. I think scipy.ndimage has the ability to do group functions. I haven't looked into it yet. If someone knows how, I'd like to see an example so that I can use it for benchmarking. > I would promise to contribute, but unfortunately my C skills are so > limited that I would really not be of much help in finite time. It's very helpful to have users, especially when they report problems or typos or suggestions. From lanceboyle at qwest.net Wed Jan 12 00:54:32 2011 From: lanceboyle at qwest.net (Jerry) Date: Tue, 11 Jan 2011 22:54:32 -0700 Subject: [SciPy-User] bandlimiting sample sequence In-Reply-To: <201101111339.07153.a@gaydenko.com> References: <201101111339.07153.a@gaydenko.com> Message-ID: You should be careful here. As I understand it, your samples have been generated by a program ("soft-generated"). ___THEY ARE ALREADY BANDLIMITED.___ Any and all aliasing has already been done and you can't do anything about it. For example, say you want to generate samples of a square wave so you make a sequence like for example -1 -1 -1 1 1 1 -1 -1 -1 1 1 1 and so on. Those are samples of a non- bandlimited analog square wave and the aliasing of what would be out- of-band harmonics is already done and there is nothing that you can do about it. Make such a wave and play it as an audio file and you will hear what I mean. Low pass filtering will not remove the aliasing. What you need to do is to construct a bunch of truncated sinc functions of the proper weights--that will give you approximate samples of a bandlimited square wave--the approximation depends on how long your sinc function is and how you truncate it--a nice window is good. 
Look up windowed sinc functions. On Jan 11, 2011, at 3:39 AM, Andrew Gaydenko wrote: > Hi! > > Say, I have given sample rate Fs and a sequence of (soft-generated) > samples. > The aim is to limit band below Fs/2. I mean any limiting params are > also given > (for example, attenuation at 0.9 * Fs/2 is -1db or less, attenuation > at Fs/2 > must be -60db or more, gain oscillation in some frequencies range > (say, from > 0Hz to 0.8 * Fs/2) must be in +/-0.01db range. > > Has anybody examples (or references to publically available > examples) I can > take to understand how to band limit a sequence decribed way? > > > Andrew > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From florisje at hotmail.com Wed Jan 12 02:45:18 2011 From: florisje at hotmail.com (Floris) Date: Wed, 12 Jan 2011 07:45:18 +0000 (UTC) Subject: [SciPy-User] Reading TDM/TDMS Files with scipy References: Message-ID: Nils Wagner iam.uni-stuttgart.de> writes: > > Hi all, > > Is it possible to read TDM/TDMS files with scipy ? > > I found a tool for Matlab > http://zone.ni.com/devzone/cda/epd/p/id/5957 > > Nils > Hello Nils, I made a little tool for that: pyTDMS. http://sourceforge.net/projects/pytdms/ Hope that helps. Floris From seb.haase at gmail.com Wed Jan 12 03:38:45 2011 From: seb.haase at gmail.com (Sebastian Haase) Date: Wed, 12 Jan 2011 09:38:45 +0100 Subject: [SciPy-User] Reading TDM/TDMS Files with scipy In-Reply-To: References: Message-ID: On Wed, Jan 12, 2011 at 8:45 AM, Floris wrote: > Nils Wagner iam.uni-stuttgart.de> writes: > >> >> Hi all, >> >> Is it possible to read TDM/TDMS files with scipy ? >> >> I found a tool for Matlab >> http://zone.ni.com/devzone/cda/epd/p/id/5957 >> >> Nils >> > > > Hello Nils, > I made a little tool for that: pyTDMS. > http://sourceforge.net/projects/pytdms/ > Hope that helps. > Floris > Hi Floris, this is great news ! I hope I find time to try it out soon. - Sebastian From pgarrone at optusnet.com.au Wed Jan 12 06:53:24 2011 From: pgarrone at optusnet.com.au (Peter John Garrone) Date: Wed, 12 Jan 2011 22:53:24 +1100 Subject: [SciPy-User] Created buffers are not writable Message-ID: <20110112115324.GA31411@bacchus> Hi, I am attempting to map a numpy array to a block of memory in a callback. I need to set this data, and all my calculations use numpy arrays. Unfortunately the arrays are read-only, because the "buffer" class is returning a read-only buffer. I am nowhere specifying read-only. Its great that python is stopping me from accidentally modifying some shared data, but I really do wish to set it. Can anybody tell me how to create a read-write buffer? There are only about 5000 hits on the word buffer in the python source distribution. --------------------------------------------------------------------- import numpy as np import ctypes libc = ctypes.CDLL("libc.so.6") N = 10 ptr = libc.malloc(N*8) ptr_float = ctypes.cast(ptr, ctypes.POINTER(ctypes.c_double*N)) bfr = buffer(ptr_float.contents) #bfr[0] = 1 X = np.frombuffer(bfr) ctypes.memset(ptr, 0, N*8) print X X[0] = 1 --------------------------------------------------------------------- Output from above: --------------------------------------------------------------------- [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] 
Traceback (most recent call last): File "xx.py", line 11, in X[0] = 1 RuntimeError: array is not writeable --------------------------------------------------------------------- And if the bfr[0] = 1 line is uncommented: --------------------------------------------------------------------- Traceback (most recent call last): File "xx.py", line 8, in bfr[0] = 1 TypeError: buffer is read-only --------------------------------------------------------------------- Thanks in advance. From g.plantageneto at runbox.com Wed Jan 12 07:05:25 2011 From: g.plantageneto at runbox.com (g.plantageneto at runbox.com) Date: Wed, 12 Jan 2011 13:05:25 +0100 (CET) Subject: [SciPy-User] Speed up sin/sqrt functions with cython Message-ID: Hi, I am using cython to speed up some computations (btw, thanks a lot to the people who gave me suggestions on a previous thread). I don't understand how to apply the fast C-coded sin/sqrt functions to a numpy array. I can import from math.h like this: cdef extern from "math.h" double sin(double) but then I can't use this function on a numpy ndarray, obviously. Any ideas? It sounds silly to me to write a cycle on ndarray elements. Thanks From Jerome.Kieffer at esrf.fr Wed Jan 12 07:40:21 2011 From: Jerome.Kieffer at esrf.fr (Jerome Kieffer) Date: Wed, 12 Jan 2011 13:40:21 +0100 Subject: [SciPy-User] Speed up sin/sqrt functions with cython In-Reply-To: References: Message-ID: <20110112134021.c219a330.Jerome.Kieffer@esrf.fr> On Wed, 12 Jan 2011 13:05:25 +0100 (CET) wrote: > > Hi, > > I am using cython to speed up some computations (btw, thanks a lot to the people who gave me suggestions on a previous thread). > I don't understand how to apply the fast C-coded sin/sqrt functions to a numpy array. > I can import from math.h like this: > > cdef extern from "math.h" > double sin(double) > > but then I can't use this function on a numpy ndarray, obviously. > > Any ideas? It sounds silly to me to write a cycle on ndarray elements. it is not ... and you will gain a lot in speed. -- J?r?me Kieffer On-Line Data analysis / Software Group ISDD / ESRF tel +33 476 882 445 From wesmckinn at gmail.com Wed Jan 12 09:53:48 2011 From: wesmckinn at gmail.com (Wes McKinney) Date: Wed, 12 Jan 2011 09:53:48 -0500 Subject: [SciPy-User] Speed up sin/sqrt functions with cython In-Reply-To: <20110112134021.c219a330.Jerome.Kieffer@esrf.fr> References: <20110112134021.c219a330.Jerome.Kieffer@esrf.fr> Message-ID: On Wed, Jan 12, 2011 at 7:40 AM, Jerome Kieffer wrote: > On Wed, 12 Jan 2011 13:05:25 +0100 (CET) > wrote: > >> >> Hi, >> >> I am using cython to speed up some computations (btw, thanks a lot to the people who gave me suggestions on a previous thread). >> I don't understand how to apply the fast C-coded sin/sqrt functions to a numpy array. >> I can import from math.h like this: >> >> cdef extern from "math.h" >> ? ? double sin(double) >> >> but then I can't use this function on a numpy ndarray, obviously. >> >> Any ideas? It sounds silly to me to write a cycle on ndarray elements. > > it is not ... and you will gain a lot in speed. 
> > -- > J?r?me Kieffer > On-Line Data analysis / Software Group > ISDD / ESRF > tel +33 476 882 445 > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > http://docs.cython.org/src/userguide/numpy_tutorial.html From bsouthey at gmail.com Wed Jan 12 10:07:02 2011 From: bsouthey at gmail.com (Bruce Southey) Date: Wed, 12 Jan 2011 09:07:02 -0600 Subject: [SciPy-User] removing multiple occurrences of a specific value (or range of values) from an array In-Reply-To: <4D2CDDE1.7040901@visualreservoir.com> References: <4D2BB56F.3020605@visualreservoir.com> <4D2BB922.7010905@visualreservoir.com> <4D2BC180.1020803@visualreservoir.com> <4D2C6F5C.2080505@gmail.com> <4D2CDDE1.7040901@visualreservoir.com> Message-ID: <4D2DC396.3080403@gmail.com> On 01/11/2011 04:46 PM, Brennan Williams wrote: > On 12/01/2011 3:55 a.m., Bruce Southey wrote: >> On 01/10/2011 08:33 PM, Brennan Williams wrote: >>> On 11/01/2011 3:15 p.m., josef.pktd at gmail.com wrote: >>>> On Mon, Jan 10, 2011 at 8:57 PM, Brennan Williams >>>> wrote: >>>>> On 11/01/2011 2:48 p.m., josef.pktd at gmail.com wrote: >>>>>> On Mon, Jan 10, 2011 at 8:42 PM, Brennan Williams >>>>>> wrote: >>>>>>> I have a numpy array and I use .min(), .max(), .std(), average(..), >>>>>>> median(...) etc to get various stats values. >>>>>>> >>>>>>> Depending on where the data originally came from, the array can contain >>>>>>> a null value which could be 1.0e+20 or similar (can vary from dataset to >>>>>>> dataset). Due to rounding errors this can sometimes appear as something >>>>>>> like 1.0000002004e+20 etc etc. >>>>>>> >>>>>>> So I want to be able to correctly calculate the stats values by ignoring >>>>>>> the null values. >>>>>>> >>>>>>> I also want to be able to replace the null values with another value >>>>>>> (for plotting/exporting). >>>>>>> >>>>>>> What's the best way to do this without looping over the elements of the >>>>>>> array? >>>>>> If you don't have anything large, then you could just do >>>>>> >>>>>> x[x>1e19]=np.nan >>>>>> >>>>>> or filter them out, or convert to masked array. >>>>>> >>>>> the array is usually<10,000 values, often<1000 >>>>> >>>>> On a separate note I found that .std() didn't return a valid value when >>>>> I have a lot of 1.0e+20's in the array. I realise that it is probably a >>>>> single precision issue and I probably won't need to worry about this in >>>>> future but I presume I should use .std(dtype=float64) ? >>>> I'm not sure I understand >>>> Are you using np.std on the array with missing values still in there >>>> encoded with 1e20? >>>> That wouldn't give the right values. >>> I was, but that was because I didn't realise the array had bad/missing >>> values in it. In theory the data coming in should just have been an >>> array of zero's but it is inconsistent, for some reason I haven't been >>> able to work out yet. >>>> may numpy seems to promote automatically (contrary to the docs) >>>> >>>>>>> np.arange(5, dtype='float32').std() >>>> 1.4142135623730951 >>>>>>> np.arange(5, dtype='float32').std().dtype >>>> dtype('float64') >>>>>>> np.arange(5, dtype='float32').dtype >>>> dtype('float32') >>>> >>> Hmmm >>> >>> >>> a=np.range(5,dtype='float32') >>> b=a*1.0e+20 >>> b.std() >>> returns inf >>> whereas >>> b.std(dtype='float64') >>> returns 1.4142....e+20 >>> >>> Brennan >>> >> That's because you are not being careful about your floating point >> precision. 
>> >> Since standard deviation involves squaring, so the required immediate >> precision is over 1.e+40 which exceeds the 32-bit precision. (On my x86 >> 64-bit Linux system, 1.e+308 is about the limit for 64bit and 1.e+4932 >> is the limit for float128 - these limits do vary across processors and >> operating systems.) >> >> >>> print np.finfo(np.float32) >> Machine parameters for float32 >> --------------------------------------------------------------------- >> precision= 6 resolution= 1.0000000e-06 >> machep= -23 eps= 1.1920929e-07 >> negep = -24 epsneg= 5.9604645e-08 >> minexp= -126 tiny= 1.1754944e-38 >> maxexp= 128 max= 3.4028235e+38 >> nexp = 8 min= -max >> --------------------------------------------------------------------- >> >> You can also see this just by doing: >> >>> b*b >> Warning: overflow encountered in multiply >> array([ 0., inf, inf, inf, inf], dtype=float32) >> >> Bruce >> >> >> > My values shouldn't be up in the e+20 range anyway, it's effectively a > null or indeterminate value. However that isn't to say that I shouldn't > code up to handle it which I will now do. That is why I like masked arrays because it can handle arbitrary values for that situation - obviously you do need some idea first that these type of values exist. The other aspect is that numpy uses float64 by default so this type of issue is 'hidden' until more extreme cases occur. Then you can potentially move to a higher precision if os/cpu supports it. > The slightly confusing thing to me is that .std() returns a float64 but > if you don't specify .std(dtype='float64') then you run into the > precision problem. So presumably, internally, std is using 32-bit and > only converts to 64-bit on output. I think many of us thought that this was the case but Keith pointed out that was incorrect when the axis argument is not None: http://mail.scipy.org/pipermail/numpy-discussion/2010-December/054253.html Thus, currently the output dtype will be determined by the input dtype, the 'dtype' argument and axis argument. If the axis is None (default), then the output dtype will be float64 for all inputs (integers and floats) unless the input has a better precision than float64 (like float128). When the 'dtype' argument is given, the output dtype is the dtype with the better precision of float64 and precision 'dtype' argument (so if dtype=np.float128 then output will be float128). Late last year it was found that when the axis argument is specified to be something not None then numpy does not necessarily upconvert the input to float64. For example, the output dtype is the same as the input dtype or the 'dtype' argument eventhough in both cases the precision is less than float64. >>> np.array([[1, 2], [3, 4]], dtype=np.float32).mean(axis=0).dtype dtype('float32') >>> np.array([[1, 2], [3, 4]], dtype=np.int8).mean(axis=0, dtype=np.float32).dtype dtype('float32') > Either way it's fairly obvious what > is going on and easy to sort out once you (as in me) realise the mistake. > > Thanks for the tip about finfo > > Brennan FYI: iinfo is the integer version. Bruce From kwgoodman at gmail.com Wed Jan 12 10:26:39 2011 From: kwgoodman at gmail.com (Keith Goodman) Date: Wed, 12 Jan 2011 07:26:39 -0800 Subject: [SciPy-User] bottleneck group_median In-Reply-To: References: <2A0AF3B6-3D5C-4224-BDFE-945BEEA6A1E7@phys.ethz.ch> Message-ID: On Tue, Jan 11, 2011 at 3:28 PM, Keith Goodman wrote: > On Tue, Jan 11, 2011 at 3:02 PM, Hanno Klemm wrote: > >> I am looking at the bottleneck package and the benchmarks are really >> impressive. 
Would it be possible, or is it planned, to implement a >> group_median, analogous to the group_mean? > > The focus of the next release of Bottleneck (v0.3) is moving window > functions. I recently added move_min, move_max, move_nanmin, > move_nanmax. (I don't yet know how to efficiently find a moving > median. Anyone have suggestions for algorithms?) > > Bottleneck has a fast median function that can used to build a > group_median function. I plan to work on the group functions in v0.4. > Before adding functions (like group_median) I'd like to review the > function signature of the group function. Suggestions welcomed. > > Any changes you'd like to see in the inputs/outputs of the group functions? > >> I am at the moment working on a project where I have to search through >> large arrays and compute medians for certain values, which is quite >> slow in conventional numpy/scipy. Therefore a group_median would be >> absolutely fantastic for me. > > I think scipy.ndimage has the ability to do group functions. I haven't > looked into it yet. If someone knows how, I'd like to see an example > so that I can use it for benchmarking. > >> I would promise to contribute, but unfortunately my C skills are so >> limited that I would really not be of much help in finite time. > > It's very helpful to have users, especially when they report problems > or typos or suggestions. BTW, bottleneck does have a slow, brute-force, generic group function that I use for unit testing. You could combine that with a fast median function to get a group median: >> from bottleneck.slow.group import group_func >> a = np.array([1,2,3,4,5]) >> label = ['a', 'b', 'a', 'b', 'b'] >> group_func(bn.median, a, label) (array([ 2., 4.]), ['a', 'b']) From nwerneck at gmail.com Wed Jan 12 12:27:00 2011 From: nwerneck at gmail.com (Nicolau Werneck) Date: Wed, 12 Jan 2011 15:27:00 -0200 Subject: [SciPy-User] Speed up sin/sqrt functions with cython In-Reply-To: References: Message-ID: <20110112172700.GA6634@spirit> On Wed, Jan 12, 2011 at 01:05:25PM +0100, g.plantageneto at runbox.com wrote: > > Hi, > > I am using cython to speed up some computations (btw, thanks a lot to the people who gave me suggestions on a previous thread). > I don't understand how to apply the fast C-coded sin/sqrt functions to a numpy array. > I can import from math.h like this: > > cdef extern from "math.h" > double sin(double) > > but then I can't use this function on a numpy ndarray, obviously. > > Any ideas? It sounds silly to me to write a cycle on ndarray elements. Hi. When I started using Cython I also thought that was strange, but that is really how it works. It gets faster than using Numpy because numpy makes a new loop for each operation, and also stores intermediate results in new arrays. With Cython (and other tools) you make a single loop where you calculate complex expressions completely at each iteration without storing intermediate results in memory. When you implement a function in Cython you must create this loop over the array values, and also make sure you use "cdef" in the necessary variables. You must take care because it is very easy to forget to declare a variable properly, and although the program will work correctly, it will be slower. You must also use @cython.boundscheck(False) @cython.wraparound(False) to speed up the array accesses. You can even use C pointers to the arrays, but that doesn't add much from just switching off these checks. 
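To make this concrete, here is a minimal sketch of such a typed loop (illustrative only -- the function name and the 1-d float64 assumption are mine, not from the original question):

cimport cython
cimport numpy as np

cdef extern from "math.h":
    double sin(double)

@cython.boundscheck(False)
@cython.wraparound(False)
def sin_inplace(np.ndarray[np.float64_t, ndim=1] x):
    # one pass over the array, no temporaries: x[i] <- sin(x[i])
    cdef Py_ssize_t i
    for i in range(x.shape[0]):
        x[i] = sin(x[i])
    return x

If the buffer declaration or the cdef of the loop index is forgotten, the same loop still works but silently falls back to slow Python-level indexing, which is exactly the trap described above.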
Regarding the sin and sqrt, you should note that simply using sin inside Cython won't accelerate much over e.g. doing a sin(x) on an array from Python. The speedup is more related to the memory accesses and number of loops. But you can try to substitute the sin calculations using memoization, for example, or polynomials. And as I mentioned previously, you can use the rsqrt instruction in the case of sqrt calculations. I wrote about that in my blog a few months ago:
http://xor0110.wordpress.com/2010/09/16/using-the-sse-rsqrt-from-python-via-cython/

++nic

--
Nicolau Werneck   C3CF E29F 5350 5DAA 3705
http://www.lti.pcs.usp.br/~nwerneck   7B9E D6C4 37BB DA64 6F15
Linux user #460716
"We should continually be striving to transform every art into a science: in the process, we advance the art." -- Donald Knuth

From robert.kern at gmail.com Wed Jan 12 12:31:08 2011
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 12 Jan 2011 11:31:08 -0600
Subject: [SciPy-User] Created buffers are not writable
In-Reply-To: <20110112115324.GA31411@bacchus>
References: <20110112115324.GA31411@bacchus>
Message-ID:

On Wed, Jan 12, 2011 at 05:53, Peter John Garrone wrote:
>
> Hi,
>   I am attempting to map a numpy array to a block of memory in a callback.
> I need to set this data, and all my calculations use numpy arrays.
> Unfortunately the arrays are read-only, because the "buffer" class is returning
> a read-only buffer. I am nowhere specifying read-only.
> It's great that python is stopping me from accidentally modifying some shared data,
> but I really do wish to set it.
> Can anybody tell me how to create a read-write buffer?
> There are only about 5000 hits on the word buffer in the python source distribution.
>
> ---------------------------------------------------------------------
> import numpy as np
> import ctypes
> libc = ctypes.CDLL("libc.so.6")
> N = 10
> ptr = libc.malloc(N*8)
> ptr_float = ctypes.cast(ptr, ctypes.POINTER(ctypes.c_double*N))
> bfr = buffer(ptr_float.contents)
> #bfr[0] = 1
> X = np.frombuffer(bfr)
> ctypes.memset(ptr, 0, N*8)
> print X
> X[0] = 1
> ---------------------------------------------------------------------

Use numpy.ctypeslib.as_array(ptr_float.contents) instead.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From klemm at phys.ethz.ch Wed Jan 12 18:19:19 2011
From: klemm at phys.ethz.ch (Hanno Klemm)
Date: Thu, 13 Jan 2011 00:19:19 +0100
Subject: [SciPy-User] bottleneck group_median
In-Reply-To: References: <2A0AF3B6-3D5C-4224-BDFE-945BEEA6A1E7@phys.ethz.ch>
Message-ID: <8D5BAF35-6A50-4588-80B1-B767161E1EB4@phys.ethz.ch>

On 12.01.2011, at 16:26, Keith Goodman wrote:

> On Tue, Jan 11, 2011 at 3:28 PM, Keith Goodman wrote:
>> On Tue, Jan 11, 2011 at 3:02 PM, Hanno Klemm wrote:
>>
>>> I am looking at the bottleneck package and the benchmarks are really
>>> impressive. Would it be possible, or is it planned, to implement a
>>> group_median, analogous to the group_mean?
>>
>> The focus of the next release of Bottleneck (v0.3) is moving window
>> functions. I recently added move_min, move_max, move_nanmin,
>> move_nanmax. (I don't yet know how to efficiently find a moving
>> median. Anyone have suggestions for algorithms?)
>>
>> Bottleneck has a fast median function that can be used to build a
>> group_median function. I plan to work on the group functions in v0.4.
>> Before adding functions (like group_median) I'd like to review the
>> function signature of the group function. Suggestions welcomed.
>>
>> Any changes you'd like to see in the inputs/outputs of the group
>> functions?
>>
>>> I am at the moment working on a project where I have to search
>>> through large arrays and compute medians for certain values, which is quite
>>> slow in conventional numpy/scipy. Therefore a group_median would be
>>> absolutely fantastic for me.
>>
>> I think scipy.ndimage has the ability to do group functions. I haven't
>> looked into it yet. If someone knows how, I'd like to see an example
>> so that I can use it for benchmarking.
>>
>>> I would promise to contribute, but unfortunately my C skills are so
>>> limited that I would really not be of much help in finite time.
>>
>> It's very helpful to have users, especially when they report problems
>> or typos or suggestions.
>
> BTW, bottleneck does have a slow, brute-force, generic group function
> that I use for unit testing. You could combine that with a fast median
> function to get a group median:
>

Thanks for the suggestion. At the moment I am playing around with using the group_mapper function to get a dictionary of the values that I need and then shoving the subselected array to the median function.

I will test which approach is faster.

Hanno

>>> from bottleneck.slow.group import group_func
>>> a = np.array([1,2,3,4,5])
>>> label = ['a', 'b', 'a', 'b', 'b']
>>> group_func(bn.median, a, label)
> (array([ 2.,  4.]), ['a', 'b'])
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

From kwgoodman at gmail.com Wed Jan 12 18:28:06 2011
From: kwgoodman at gmail.com (Keith Goodman)
Date: Wed, 12 Jan 2011 15:28:06 -0800
Subject: [SciPy-User] bottleneck group_median
In-Reply-To: <8D5BAF35-6A50-4588-80B1-B767161E1EB4@phys.ethz.ch>
References: <2A0AF3B6-3D5C-4224-BDFE-945BEEA6A1E7@phys.ethz.ch> <8D5BAF35-6A50-4588-80B1-B767161E1EB4@phys.ethz.ch>
Message-ID:

On Wed, Jan 12, 2011 at 3:19 PM, Hanno Klemm wrote:
> On 12.01.2011, at 16:26, Keith Goodman wrote:
>> BTW, bottleneck does have a slow, brute-force, generic group function
>> that I use for unit testing. You could combine that with a fast median
>> function to get a group median:
>>
>
> Thanks for the suggestion. At the moment I am playing around with using
> the group_mapper function to get a dictionary of the values that I
> need and then shoving the subselected array to the median function.
>
> I will test which approach is faster.

I hope your approach is faster since that will be good news for bottleneck. Subselecting with arr.take() might help speed things up.

From mdekauwe at gmail.com Thu Jan 13 00:31:28 2011
From: mdekauwe at gmail.com (mdekauwe)
Date: Wed, 12 Jan 2011 21:31:28 -0800 (PST)
Subject: [SciPy-User] [SciPy-user] Matching up arrays?
In-Reply-To: <30254994.post@talk.nabble.com>
References: <30254994.post@talk.nabble.com>
Message-ID: <30659547.post@talk.nabble.com>

new solution, though not entirely sure it is any more efficient.

# map each date to its value, then keep the dates present in both series
m = dict(zip(m_date, mm))
o = dict(zip(o_date, oo))
matches = [d for d in o_date if d in m]
# paired values, aligned by matching date
x = [m[d] for d in matches]
y = [o[d] for d in matches]

mdekauwe wrote:
>
> Hi,
>
> So I have 2 arrays which hold some data and for each array I have an array
> with the associated date stamp.
The dates vary between arrays, so what I
> would like to be able to do is just subset the data so I end up with two
> arrays if there is a matching time stamp between arrays (with the idea
> being I would do some comparative stats on these arrays). How I solved it
> seems a bit ugly and I wondered if anyone had a better idea?
>
> e.g.
>
> m_date = np.array(['1998-01-01 00:00:00', '1999-01-01 00:00:00',
>                    '2000-01-01 00:00:00', '2005-01-01 00:00:00'])
> o_date = np.array(['1998-01-01 00:00:00', '1999-01-01 00:00:00',
>                    '2000-01-01 00:00:00'])
>
> mm = np.array([ 3.5732, 4.5761, 4.0994, 3.9031])
> oo = np.array([ 5.84, 5.66, 5.83])
>
> x, y = [], []
> o = np.vstack((o_date, oo))
> m = np.vstack((m_date, mm))
> for i in xrange(o.shape[1]):
>     for j in xrange(m.shape[1]):
>         if m[0,j] == o[0,i]:
>             x.append(m[1,j])
>             y.append(o[1,i])
>
> thanks
>
> Martin
>

--
View this message in context: http://old.nabble.com/Matching-up-arrays--tp30254994p30659547.html
Sent from the Scipy-User mailing list archive at Nabble.com.

From mail.till at gmx.de Thu Jan 13 08:37:58 2011
From: mail.till at gmx.de (Till Stensitzki)
Date: Thu, 13 Jan 2011 13:37:58 +0000 (UTC)
Subject: [SciPy-User] Separable NLLS - Variable Projection in Python?
Message-ID:

Hello,
has anyone done separable nonlinear least squares in Python? There is the VARPRO.f in the netlib library, but my Fortran knowledge is very shallow. If no-one did it, how hard is it to wrap the Fortran code?

greetings
Till

From stevenj at alum.mit.edu Sat Jan 8 21:05:16 2011
From: stevenj at alum.mit.edu (Steven G. Johnson)
Date: Sat, 8 Jan 2011 18:05:16 -0800 (PST)
Subject: [SciPy-User] Sometimes fmin_l_bfgs_b tests NaN parameters and then fails to converge
In-Reply-To: References: <1294063052.6782.32.camel@mypride> <1293731252.6936.47.camel@mypride> <1294073228.3694.0.camel@mypride> <1293842375.17929.3.camel@mypride> <1294164739.6880.75.camel@mypride>
Message-ID:

On Jan 4, 3:23 pm, josef.p... at gmail.com wrote:
> If you have local maxima, and your optimization problem is not well
> behaved, then you might need a global optimizer.
> I think also the algorithms in nlopt will get stuck at local optima,
> since all these optimizers only use local information.

NLopt also includes global optimization algorithms. (Of course, global optimization problems can be NP hard, so there is no magic bullet in general.)

From fpm at u.washington.edu Tue Jan 11 11:21:49 2011
From: fpm at u.washington.edu (cassiope)
Date: Tue, 11 Jan 2011 08:21:49 -0800 (PST)
Subject: [SciPy-User] bandlimiting sample sequence
In-Reply-To: <201101111339.07153.a@gaydenko.com>
References: <201101111339.07153.a@gaydenko.com>
Message-ID: <9cc85964-a106-44af-8630-8fedba0f17fe@w29g2000vba.googlegroups.com>

On Jan 11, 2:39 am, Andrew Gaydenko wrote:
> Hi!
>
> Say, I have given sample rate Fs and a sequence of (soft-generated) samples.
> The aim is to limit band below Fs/2. I mean any limiting params are also given
> (for example, attenuation at 0.9 * Fs/2 is -1db or less, attenuation at Fs/2
> must be -60db or more, gain oscillation in some frequencies range (say, from
> 0Hz to 0.8 * Fs/2) must be in +/-0.01db range.
>
> Has anybody examples (or references to publically available examples) I can
> take to understand how to band limit a sequence decribed way?
>
> Andrew

??? Not sure what you're doing. Once a real process is sampled, whatever aliasing that might occur has already happened.
You can't digitally remove aliased values unless they occur in a part of the digital spectrum that you aren't interested in anyway. Otherwise, it seems as though you're simply designing a filter with a particular set of specifications - which is a topic that has been exhaustively covered in many books. (Though you haven't said anything regarding FIR vs IIR, for example.) If I'm way off base, perhaps you could clarify your question[s].

From rthompsonj at gmail.com Tue Jan 11 16:21:11 2011
From: rthompsonj at gmail.com (Robert Thompson)
Date: Tue, 11 Jan 2011 13:21:11 -0800
Subject: [SciPy-User] scipy integration question
Message-ID: <4D2CC9C7.2010801@gmail.com>

I'm just learning python today so please forgive me. I come from a C background (although I wouldn't call myself good) so I thought migrating some programs over to python would be a good start. I'm trying to do an integration involving multiple variables and it's giving me completely different answers from my C program and I have no idea why.

here's a snippet from my c code that I'm trying to emulate:

for(j=0; j < kbins; j++)
{
    k[j] = pow(10.,log10(kmin)+j*dk);
    q = k[j]/(Om*h*h);
    W = 3.*(sin(k[j]*R)-k[j]*R*cos(k[j]*R))/pow(k[j]*R,3.);
    T = bunch of random stuff, it's a function of q alone though;
    P[j] = k[j]*T*T;
    ptemp = P[j];
    ktemp = k[j];
    sigma2[j] = qromb(k_integrate,0,kmax);
}

double k_integrate(double ktemp,double ptemp, double W)
{
    integrand_k = ktemp*ktemp*ktemp*ptemp*W*W;
    return integrand_k;
}

And here is my corresponding python code:

from scipy.integrate import quad
from pylab import *

Odm = 0.26
Ob = 0.04
Om = Odm+Ob
Ok = 0.0
Ol = 0.7
O0 = Ol+Om
h = 0.7

rho0 = 1.462e11 #Msun/Mpc^3
z = 0
kbins = 10000
kmin = 0.000001
kmax = 10000
dk = (log10(kmax)-log10(kmin))/kbins

Ho = h*100*1.02272e-12
c = 3.067e-7

def k_integrand(k,W,P):
    return k**3*W**2*P

M = 14.4963
R = ((3*10**(M))/(4*pi*rho0))**(1./3.)

powerspec, wavenumber, sigma2 = [],[],[]

for j in range(0,kbins):
    k = 10**(log10(kmin)+j*dk)
    wavenumber.append(k)

    q = k/(Om*h**2)
    W = 3.*(sin(k*R)-k*R*cos(k*R))/(k*R)**3
    T = log(1.+2.34*q)/(2.34*q) * (1.+3.89*q+(16.1*q)**2.+(5.46*q)**3.+(6.71*q)**4.)**(-1./4.)
    P = k*T**2
    powerspec.append(P)

    sigmaresult, err = quad(k_integrand,0,kmax,args=(W,P))
    sigma2.append(sigmaresult)

If I stick in k=1 the C program returns log10(sigma2)=16.076 but the python script returns log10(sigma2)=6.777. Any ideas on what is causing this huge discrepancy? I checked q,W,T,&P and they are in agreement so I know it's something to do with the integration. Thanks in advance for your time!

From benoit.code at gmx.fr Thu Jan 13 05:37:04 2011
From: benoit.code at gmx.fr (benoit.code at gmx.fr)
Date: Thu, 13 Jan 2011 11:37:04 +0100
Subject: [SciPy-User] Weave obsolete and/or tutorial obsolete ? + numpy.ma.MaskedArray
Message-ID: <20110113140001.104830@gmx.com>

Hi everyone,

I've installed Weave and Scipy (maybe not fully, because weave/examples/ wasn't on my computer, and I had to install Nose to run >>>weave.test()). I got no error during Weave's test and only one for Scipy (with test_implicit, also some S=Skipped and two K=Knownfails and Warnings), but the problem is not there. My problem is: I'm wondering if Weave's tutorial is obsolete or if it's Weave itself (my version is 0.4.9). This file:
http://projects.scipy.org/scipy/browser/trunk/scipy/weave/doc/tutorial.txt
is in the setup directory of Weave for Scipy 0.8.0 (also in Scipy 0.7.0, I checked) but the code example (see below) doesn't work (ImportError: cannot import name blitz_type_factories; the same with scalar_spec).
641  from weave.blitz_tools import blitz_type_factories
642  from weave import scalar_spec
643  from weave import inline
644  def _cast_copy_transpose(type,a_2d):
645      assert(len(shape(a_2d)) == 2)
646      new_array = zeros(shape(a_2d),type)
647      NumPy_type = scalar_spec.NumPy_to_blitz_type_mapping[type]
648      code = \
649      """
650      for(int i = 0;i < _Na_2d[0]; i++)
651          for(int j = 0; j < _Na_2d[1]; j++)
652              new_array(i,j) = (%s) a_2d(j,i);
653      """ % NumPy_type
654      inline(code,['new_array','a_2d'],
655             type_factories = blitz_type_factories,compiler='gcc')
656      return new_array

Manipulating environment variables won't correct the ImportErrors, because nowhere in Weave's source code do the strings 'scalar_spec' and 'blitz_type_factories' exist. I've found how to patch this error (and some others): with inline (l.654), type_factories has to be corrected to type_converters; shape (l.646) is in fact a_2d.shape; and to replace "blitz_type_factories", the blitz list, imported from scipy.weave.converters, seems to work.

>>> from scipy.weave.converters import blitz
>>> blitz
[(file:: name: no_name), (file:: name: no_name), ... 16 identical entries in all]

This list is strange but it works as well as None for simple variable types (because type_converters is an optional keyword). Yet, that doesn't correct the missing scalar_spec import, but that brings a question: is the tutorial obsolete and where can I find an up-to-date version, or is Weave obsolete (and is that code supposed to work)?

I'm trying to use scipy.weave.inline() (or scipy.weave.inline_tools.inline(), I don't know if there is a difference) to compute on a masked array (numpy.ma.MaskedArray). Can you tell me which "factory" knows how to convert my variable to C?
In the weave directory there are : - base_spec - blitz_spec - cpp_namespace_spec - c_spec - numpy_scalar_spec - standard_array_spec - swig2_spec - vtk_spec but python's help for those modules isn't explicit (for example, nothing about the NumPy_to_blitz_type_mapping variable) and, to come back to "Is Weave obsolete itself ?", this help links to the Module Docs, on the official Python website : << Help on module scipy.weave.numpy_scalar_spec in scipy.weave: NAME scipy.weave.numpy_scalar_spec FILE /usr/lib/python2.6/dist-packages/scipy/weave/numpy_scalar_spec.py MODULE DOCS http://docs.python.org/library/scipy.weave.numpy_scalar_spec ...>> But this link is broken. That seems strange: why would Python delete this doc? By the way, Scipy's and Weave's link are broken also : http://docs.python.org/library/scipy http://docs.python.org/library/scipy.weave and because no help about numpy_scalar_spec can be found on Scipy.org, I'm hoping this email will be helpful. Thanks a lot, Beno?t. PS : SVN link is broken also svn co http://svn.scipy.org/svn/scipy/trunk/Lib/weave weave -> URL doesn't exist -------------- next part -------------- An HTML attachment was scrubbed... URL: From e.antero.tammi at gmail.com Thu Jan 13 15:20:08 2011 From: e.antero.tammi at gmail.com (eat) Date: Thu, 13 Jan 2011 20:20:08 +0000 (UTC) Subject: [SciPy-User] Separable NLLS - Variable Projection in Python? References: Message-ID: Till Stensitzki gmx.de> writes: Hi, > > Hello, > has anyone done seperable nonlinear least squares in Python? > There is the VARPRO.f in the netlib libary, but my Fortran knowlage is very > shallow. If no-one did it, how hard is it to wrap the Fortran code? > > greetings > Till > Can't say much about the Fortran code, but I have done some variable projections stuff years ago with Matlab. My interest was to recovery some missing data values. I'm assuming that you have checked that scipy provides proper functionality for the outer (nonlinear) optimization? If so, then I might be able to contribute some code. Could you be specific and elaborate more of your particular requirements and needs? Regards, eat From mail.till at gmx.de Thu Jan 13 17:11:32 2011 From: mail.till at gmx.de (Till) Date: Thu, 13 Jan 2011 22:11:32 +0000 (UTC) Subject: [SciPy-User] Separable NLLS - Variable Projection in Python? References: Message-ID: > > Can't say much about the Fortran code, but I have done some variable > projections stuff years ago with Matlab. My interest was to recovery some > missing data values. > > I'm assuming that you have checked that scipy provides proper functionality > for the outer (nonlinear) optimization? > > If so, then I might be able to contribute some code. > > Could you be specific and elaborate more of your particular requirements and > needs? > > Regards, > eat > Yep, i did have a look at scipys functionality. My problem is to fit 2d (time and wavelength) spectroscopic data. That means 400 channels at 350 time-points each. The model used is a sum of (folded) exponentials - the non linear base functions - with different coefficients - the separable part - for each channel. Solving with just normal least squares is not really possible and takes its time (around 7*400 parameters). After a day of reading relevant literature, i was able to solve it in python, using the normal leastsq routine. Instead just minimizing the residuals, which has numerical problems (and was my first try), minimizing the variable projection functional worked great. 
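For the archive, the core of the trick fits in a few lines. Here is a bare-bones sketch with made-up single-channel data and a plain (unfolded) exponential basis -- illustrative only, not the actual fitting code:

import numpy as np
from scipy.optimize import leastsq

def basis(taus, t):
    # columns are the nonlinear base functions exp(-t/tau_j)
    return np.exp(-t[:, None] / taus[None, :])

def vp_residuals(taus, t, y):
    # variable projection: for fixed taus the optimal linear
    # coefficients come from a linear least-squares solve, so the
    # outer optimizer only searches the nonlinear parameters
    A = basis(taus, t)
    c = np.linalg.lstsq(A, y)[0]
    return y - np.dot(A, c)

t = np.linspace(0.01, 10.0, 350)
y = 2.0*np.exp(-t/0.5) + 0.5*np.exp(-t/3.0) + 0.01*np.random.randn(t.size)
taus, ok = leastsq(vp_residuals, np.array([1.0, 5.0]), args=(t, y))

For many channels sharing the same taus, y becomes a (time, channel) matrix; the same lstsq call then recovers all coefficient vectors at once (the residual just needs a .ravel() before handing it back to leastsq).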
If anyone is interested in the source or relevant literature, just ask.

greetings
Till

From gregor.thalhammer at gmail.com Thu Jan 13 12:33:19 2011
From: gregor.thalhammer at gmail.com (Gregor Thalhammer)
Date: Thu, 13 Jan 2011 18:33:19 +0100
Subject: [SciPy-User] Speed up sin/sqrt functions with cython
In-Reply-To: References: Message-ID: <54D148C4-8D13-415C-9078-E684BA8A2B9B@gmail.com>

On 12.1.2011, at 13:05, wrote:
>
> Hi,
>
> I am using cython to speed up some computations (btw, thanks a lot to the people who gave me suggestions on a previous thread).
> I don't understand how to apply the fast C-coded sin/sqrt functions to a numpy array.
> I can import from math.h like this:
>
> cdef extern from "math.h":
>     double sin(double)
>
> but then I can't use this function on a numpy ndarray, obviously.
>
> Any ideas? It sounds silly to me to write a cycle on ndarray elements.
>
> Thanks

For sin/sqrt computations on large arrays, numexpr + VML might give you better results than a cython implementation, and with less effort. numexpr with VML uses Intel's VML library, which gives much better performance than a C loop, at least for sin; for sqrt the performance is similar. In addition, numexpr-VML makes use of all your CPU cores.

Gregor

From ralf.gommers at googlemail.com Fri Jan 14 11:16:01 2011
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Sat, 15 Jan 2011 00:16:01 +0800
Subject: [SciPy-User] scipy integration question
In-Reply-To: <4D2CC9C7.2010801@gmail.com>
References: <4D2CC9C7.2010801@gmail.com>
Message-ID:

On Wed, Jan 12, 2011 at 5:21 AM, Robert Thompson wrote:
> I'm just learning python today so please forgive me. I come from a C
> background (although I wouldn't call myself good) so I thought migrating
> some programs over to python would be a good start. I'm trying to do an
> integration involving multiple variables and it's giving me completely
> different answers from my C program and I have no idea why.
>
> here's a snippet from my c code that I'm trying to emulate:
>
> for(j=0; j < kbins; j++)
> {
>     k[j] = pow(10.,log10(kmin)+j*dk);
>     q = k[j]/(Om*h*h);
>     W = 3.*(sin(k[j]*R)-k[j]*R*cos(k[j]*R))/pow(k[j]*R,3.);
>     T = bunch of random stuff, it's a function of q alone though;
>     P[j] = k[j]*T*T;
>     ptemp = P[j];
>     ktemp = k[j];
>     sigma2[j] = qromb(k_integrate,0,kmax);
> }
>
> double k_integrate(double ktemp,double ptemp, double W)
> {
>     integrand_k = ktemp*ktemp*ktemp*ptemp*W*W;
>     return integrand_k;
> }
>
> And here is my corresponding python code:
>
> from scipy.integrate import quad
> from pylab import *
>
> Odm = 0.26
> Ob = 0.04
> Om = Odm+Ob
> Ok = 0.0
> Ol = 0.7
> O0 = Ol+Om
> h = 0.7
>
> rho0 = 1.462e11 #Msun/Mpc^3
> z = 0
> kbins = 10000
> kmin = 0.000001
> kmax = 10000
> dk = (log10(kmax)-log10(kmin))/kbins
>
> Ho = h*100*1.02272e-12
> c = 3.067e-7
>
> def k_integrand(k,W,P):
>     return k**3*W**2*P
>
> M = 14.4963
> R = ((3*10**(M))/(4*pi*rho0))**(1./3.)
>
> powerspec, wavenumber, sigma2 = [],[],[]
>
> for j in range(0,kbins):
>     k = 10**(log10(kmin)+j*dk)
>     wavenumber.append(k)
>
>     q = k/(Om*h**2)
>     W = 3.*(sin(k*R)-k*R*cos(k*R))/(k*R)**3
>     T = log(1.+2.34*q)/(2.34*q) * (1.+3.89*q+(16.1*q)**2.+(5.46*q)**3.+(6.71*q)**4.)**(-1./4.)
>     P = k*T**2
>     powerspec.append(P)
>
>     sigmaresult, err = quad(k_integrand,0,kmax,args=(W,P))
>     sigma2.append(sigmaresult)
>
> If I stick in k=1 the C program returns log10(sigma2)=16.076 but the python
> script returns log10(sigma2)=6.777. Any ideas on what is causing this huge
> discrepancy?
I checked q,W,T,&P and they are in agreement so I know it's > something to do with the integration. Thanks in advance for your time! > > I'm not sure what the for loop and the lists in your Python code are for, but I suspect you want something like this: from scipy.integrate import quad from numpy import * Odm = 0.26 Ob = 0.04 Om = Odm + Ob h = 0.7 rho0 = 1.462e11 #Msun/Mpc^3 M = 14.4963 R = ((3*10**(M))/(4*pi*rho0))**(1./3.) def k_integrand(k): q = k/(Om*h**2) W = 3.*(sin(k*R)-k*R*cos(k*R))/(k*R)**3 T = log(1.+2.34*q)/(2.34*q) * (1.+3.89*q+(16.1*q)**2.+(5.46*q)**3.+(6.71*q)**4.)**(-1./4.) P = k*T**2 return k**3*W**2*P kmax = 1e4 sigmaresult, err = quad(k_integrand, 0, kmax) This gives a very small number because your k_integrand is <1e-11 over most of the range. If you replace it with a known function (like integrate f(x) = x from 0 to kmax) you'll see if you get the right answer or not. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Fri Jan 14 12:23:25 2011 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 14 Jan 2011 17:23:25 +0000 (UTC) Subject: [SciPy-User] scipy integration question References: <4D2CC9C7.2010801@gmail.com> Message-ID: Tue, 11 Jan 2011 13:21:11 -0800, Robert Thompson wrote: [clip] > def k_integrand(k,W,P): > return k**3*W**2*P [clip] > sigmaresult, err = quad(k_integrand,0,kmax,args=(W,P)) That integral can be done analytically. The result is kmax**4*W**2*P/4. Perhaps the Python code you wrote does not correspond to what you intend to do? It's difficult to help more without knowing in more detail what it is that you want to compute --- if you post the mathematical description of the problem, maybe someone can show how to translate that to Scipy. -- Pauli Virtanen From pgarrone at optusnet.com.au Sat Jan 15 16:40:09 2011 From: pgarrone at optusnet.com.au (Peter John Garrone) Date: Sun, 16 Jan 2011 08:40:09 +1100 Subject: [SciPy-User] Integrating large ODE problems with sundials. Message-ID: <20110115214009.GA7304@bacchus> Hi, Some time ago I wrote that I was having problems with ODE algorithm solving, where most of the time appeared to be lost in the algorithm. In fact the problem appears to be within the PySUNDIALS implementation that I downloaded from sourceforge, pysundials-2.3.0-rc2. When I implemented my own simple interface, in fact only about five percent of the cpu time is spent within the algorithm. My approximate times for a 100000 state problem were: dydt/right-hand side equation: 20 percent Jacobian and sparse matrix creation: 35 percent Preconditioner sparse solve: 40 percent Remainder, including algorithm: 5 percent With the pysundials implementation, the remainder portion would have been about 95 percent, with the other categories adding up to about 4 percent. So I do conclude there is something poor about the pysundials implementation, possibly the N_Vector implementation, for big problems, using standard python. Cheers, From lorenzo.isella at gmail.com Sun Jan 16 14:29:12 2011 From: lorenzo.isella at gmail.com (Lorenzo Isella) Date: Sun, 16 Jan 2011 20:29:12 +0100 Subject: [SciPy-User] SciPy and Diffusion Equation Message-ID: <4D334708.3010805@gmail.com> Dear All, I hope that this is not too off-topic. I wonder if there is any tool under the scipy umbrella to deal with diffusion problems. I have tackled these problems with proprietary software in the past (e.g. Comsol Multiphysics), but I wonder if there are other (simple) pythonic options. 
To fix the ideas, consider c(x,t) as a time-dependent concentration profile and the equation \partial_t c= D\partial_x^2 c-\alpha c +beta\delta(x_0) i.e. a standard diffusion equation with a source term beta\delta(x_0) (what I mean is that the initial density distribution is a delta centered about x_0, one can probably remove that term from the equation while giving the initial density distribution at t=0). However, it all boils down to a diffusion equation in an interval [0,L] with an initial density profile c(x,t_0). In the case of the equation above, I have an analytical solution, but in general I may want to add sources, non-linear terms etc...so it should be considered as a starting point. Any suggestion is appreciated. Cheers Lorenzo From jr at sun.ac.za Sun Jan 16 14:35:25 2011 From: jr at sun.ac.za (Johann Rohwer) Date: Sun, 16 Jan 2011 21:35:25 +0200 Subject: [SciPy-User] SciPy and Diffusion Equation In-Reply-To: <4D334708.3010805@gmail.com> References: <4D334708.3010805@gmail.com> Message-ID: <4D33487D.5040501@sun.ac.za> FiPy (http://www.ctcms.nist.gov/fipy/) is a general PDE solver written in Python, based on the finite volume approach. It should be able to handle diffusion problems and a number of examples are available from the website. Johann On 16/01/2011 21:29, Lorenzo Isella wrote: > Dear All, > I hope that this is not too off-topic. > I wonder if there is any tool under the scipy umbrella to deal with > diffusion problems. I have tackled these problems with proprietary > software in the past (e.g. Comsol Multiphysics), but I wonder if there > are other (simple) pythonic options. > To fix the ideas, consider c(x,t) as a time-dependent concentration > profile and the equation > > \partial_t c= D\partial_x^2 c-\alpha c +beta\delta(x_0) > > i.e. a standard diffusion equation with a source term beta\delta(x_0) > (what I mean is that the initial density distribution is a delta > centered about x_0, one can probably remove that term from the equation > while giving the initial density distribution at t=0). > However, it all boils down to a diffusion equation in an interval [0,L] > with an initial density profile c(x,t_0). In the case of the equation > above, I have an analytical solution, but in general I may want to add > sources, non-linear terms etc...so it should be considered as a starting > point. > Any suggestion is appreciated. > Cheers > > Lorenzo > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From lou_boog2000 at yahoo.com Sun Jan 16 14:59:21 2011 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Sun, 16 Jan 2011 11:59:21 -0800 (PST) Subject: [SciPy-User] SciPy and Diffusion Equation In-Reply-To: <4D334708.3010805@gmail.com> References: <4D334708.3010805@gmail.com> Message-ID: <279851.49656.qm@web34405.mail.mud.yahoo.com> This is not a python solution, but you should look at FreeFEM++ (do a google). Very good freeware for finite element numerical solutions. Quite impressive and, as in the name, Free! Not hard to use either. -- Lou Pecora, my views are my own. ----- Original Message ---- From: Lorenzo Isella To: scipy-user at scipy.org Sent: Sun, January 16, 2011 11:29:12 AM Subject: [SciPy-User] SciPy and Diffusion Equation Dear All, I hope that this is not too off-topic. I wonder if there is any tool under the scipy umbrella to deal with diffusion problems. I have tackled these problems with proprietary software in the past (e.g. 
Comsol Multiphysics), but I wonder if there are other (simple) pythonic options. To fix the ideas, consider c(x,t) as a time-dependent concentration profile and the equation \partial_t c= D\partial_x^2 c-\alpha c +beta\delta(x_0) i.e. a standard diffusion equation with a source term beta\delta(x_0) (what I mean is that the initial density distribution is a delta centered about x_0, one can probably remove that term from the equation while giving the initial density distribution at t=0). However, it all boils down to a diffusion equation in an interval [0,L] with an initial density profile c(x,t_0). In the case of the equation above, I have an analytical solution, but in general I may want to add sources, non-linear terms etc...so it should be considered as a starting point. Any suggestion is appreciated. Cheers Lorenzo _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user From thierry.montagu at cea.fr Mon Jan 17 04:26:29 2011 From: thierry.montagu at cea.fr (MONTAGU Thierry) Date: Mon, 17 Jan 2011 10:26:29 +0100 Subject: [SciPy-User] how to do easy things in octave with numpy ? Message-ID: <4D340B45.10600@cea.fr> Hi all scipy user i wonder how to do the following easy thing in octave within a python/numpy script. in octave:matlab, you instantiate a= zeros(10) for example and for example, you can do in one line a[2].m = 10 or do for i = 1 : length(a) a[i].m = 25; end How to do this with numpy ? I guess we need to create a python class ? any idea ? Regards -- Thierry From pawel.kw at gmail.com Mon Jan 17 05:33:51 2011 From: pawel.kw at gmail.com (=?ISO-8859-2?Q?Pawe=B3_Kwa=B6niewski?=) Date: Mon, 17 Jan 2011 11:33:51 +0100 Subject: [SciPy-User] how to do easy things in octave with numpy ? In-Reply-To: <4D340B45.10600@cea.fr> References: <4D340B45.10600@cea.fr> Message-ID: Hi, I'm not an octave user, but I tried to just type in what you wrote here, and here's what I can tell you: 2011/1/17 MONTAGU Thierry > Hi all scipy user > > i wonder how to do the following easy thing in octave within a > python/numpy script. > > in octave:matlab, you instantiate > > a= zeros(10) for example > > I see, that the above gives you a 10x10 matrix of zeros. You can have the same in python doing the following: from numpy import * a = zeros((10,10)) > and for example, you can do in one line > a[2].m = 10 > > or do > > for i = 1 : length(a) > a[i].m = 25; > end > > As for the above, I don't quite understand what you mean - if I try to type this as you have given it here, I'm getting a syntax error. Can you explain what you mean or provide working code? Maybe I need to import some packaged before? As I said, I'm no octave user... > How to do this with numpy ? I guess we need to create a python class ? > any idea ? > > Regards > > -- > Thierry > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Pawe? -------------- next part -------------- An HTML attachment was scrubbed... URL: From e.antero.tammi at gmail.com Mon Jan 17 06:00:52 2011 From: e.antero.tammi at gmail.com (eat) Date: Mon, 17 Jan 2011 11:00:52 +0000 (UTC) Subject: [SciPy-User] how to do easy things in octave with numpy ? References: <4D340B45.10600@cea.fr> Message-ID: MONTAGU Thierry cea.fr> writes: Hi, > > Hi all scipy user > > i wonder how to do the following easy thing in octave within a > python/numpy script. 
> > in octave:matlab, you instantiate > > a= zeros(10) for example > > and for example, you can do in one line > a[2].m = 10 This is not valid Octave syntax. Could you post short working code (in octave) and I'll translate that to python. Regards, eat > > or do > > for i = 1 : length(a) > a[i].m = 25; > end > > How to do this with numpy ? I guess we need to create a python class ? > any idea ? > > Regards > From denis-bz-gg at t-online.de Mon Jan 17 10:00:57 2011 From: denis-bz-gg at t-online.de (denis) Date: Mon, 17 Jan 2011 07:00:57 -0800 (PST) Subject: [SciPy-User] 2-pass k-means: first do a random sample of sqrt(N) ? Message-ID: <4436f144-b2e9-472d-a667-63571f6bbdc2@30g2000yql.googlegroups.com> Folks, yet another k-means initializer: - first do a random sample of say sqrt(N) of the points - run full k-means from those centres. How well this works depends of course on how well a sqrt(N) sample approximates the whole, as well as on N, dim, k, maxiter, delta, phase of the moon ... The sqrt(N) is heuristic. (Theoretically one could recurse for very large N.) One could also tighten delta etc. for the first pass. Anyway, on Gaussian clusters I get N 10000 dim 2 k 20 ninit 10 iter 5 delta 0.01 nwarp 0 init="sample" time: 5.4 sec vq distances: av 0.0821 quartiles 0.058 0.082 0.11 init="k-means++" time: 53.4 sec vq distances: av 0.0803 quartiles 0.057 0.081 0.1 N 10000 dim 10 k 20 ninit 10 iter 5 delta 0.01 nwarp 0 time: 8.3 sec vq distances: av 0.718 quartiles 0.63 0.72 0.8 time: 220.8 sec vq distances: av 0.715 quartiles 0.63 0.72 0.8 This is with the scikits.learn k-means; the one in scipy 0.8 uses absolute not relative error and may be buggy too, long thread last July. Comments please, links to *real* test data please ? cheers -- denis From tomo.bbe at gmail.com Mon Jan 17 10:05:53 2011 From: tomo.bbe at gmail.com (James) Date: Mon, 17 Jan 2011 15:05:53 +0000 Subject: [SciPy-User] SciPy and Diffusion Equation In-Reply-To: <4D334708.3010805@gmail.com> References: <4D334708.3010805@gmail.com> Message-ID: You can implement a fairly basic Method of Lines solution using odeint by applying some basic spatial differencing formulae. I have done this in the past for 1D and 2D advection-dispersion problems, but it soon becomes quite slow for practical problems. Regards, James On Sun, Jan 16, 2011 at 7:29 PM, Lorenzo Isella wrote: > Dear All, > I hope that this is not too off-topic. > I wonder if there is any tool under the scipy umbrella to deal with > diffusion problems. I have tackled these problems with proprietary > software in the past (e.g. Comsol Multiphysics), but I wonder if there > are other (simple) pythonic options. > To fix the ideas, consider c(x,t) as a time-dependent concentration > profile and the equation > > \partial_t c= D\partial_x^2 c-\alpha c +beta\delta(x_0) > > i.e. a standard diffusion equation with a source term beta\delta(x_0) > (what I mean is that the initial density distribution is a delta > centered about x_0, one can probably remove that term from the equation > while giving the initial density distribution at t=0). > However, it all boils down to a diffusion equation in an interval [0,L] > with an initial density profile c(x,t_0). In the case of the equation > above, I have an analytical solution, but in general I may want to add > sources, non-linear terms etc...so it should be considered as a starting > point. > Any suggestion is appreciated. 
Regards,
James

On Sun, Jan 16, 2011 at 7:29 PM, Lorenzo Isella wrote:
> Dear All,
> I hope that this is not too off-topic.
> I wonder if there is any tool under the scipy umbrella to deal with
> diffusion problems. [...]
> Any suggestion is appreciated.
> Cheers
>
> Lorenzo

From opossumnano at gmail.com Mon Jan 17 11:01:48 2011
From: opossumnano at gmail.com (Tiziano Zito)
Date: Mon, 17 Jan 2011 17:01:48 +0100
Subject: [SciPy-User] MDP release 3.0
Message-ID: <20110117160148.GD25627@tulpenbaum.cognition.tu-berlin.de>

We are glad to announce release 3.0 of the Modular toolkit for Data
Processing (MDP).

MDP is a Python library of widely used data processing algorithms that
can be combined according to a pipeline analogy to build more complex
data processing software. The base of available algorithms includes
signal processing methods (Principal Component Analysis, Independent
Component Analysis, Slow Feature Analysis), manifold learning methods
([Hessian] Locally Linear Embedding), several classifiers, probabilistic
methods (Factor Analysis, RBM), data pre-processing methods, and many
others.

What's new in version 3.0?
--------------------------
- Python 3 support
- New extensions: caching and gradient
- Automatically generated wrappers for scikits.learn algorithms
- Shogun and libsvm wrappers
- New algorithms: convolution, several classifiers and several
  user-contributed nodes
- Several new examples on the homepage
- Improved and expanded tutorial
- Several improvements and bug fixes
- New license: MDP goes BSD!

Resources
---------
Download: http://sourceforge.net/projects/mdp-toolkit/files
Homepage: http://mdp-toolkit.sourceforge.net
Mailing list: http://lists.sourceforge.net/mailman/listinfo/mdp-toolkit-users

Acknowledgments
---------------
We thank the contributors to this release: Sven Dähne, Alberto
Escalante, Valentin Haenel, Yaroslav Halchenko, Sebastian Höfer, Michael
Hull, Samuel John, José Quesada, Ariel Rokem, Benjamin Schrauwen, David
Verstraeten, Katharina Maria Zeiner.

The MDP developers,
Pietro Berkes
Zbigniew Jędrzejewski-Szmek
Rike-Benjamin Schuppner
Niko Wilbert
Tiziano Zito

From Gregor.Thalhammer at i-med.ac.at Fri Jan 14 08:06:37 2011
From: Gregor.Thalhammer at i-med.ac.at (Gregor Thalhammer)
Date: Fri, 14 Jan 2011 14:06:37 +0100
Subject: [SciPy-User] Separable NLLS - Variable Projection in Python?
In-Reply-To: References:
Message-ID: <62255370-3256-4072-A567-EC4901744A02@i-med.ac.at>

I have written a pure python implementation of Levenberg-Marquardt, also
for separable nonlinear problems, with good (robust) results for fitting
a few peaks in 2D images. My experience was that (for my problem)
solving the normal equations instead of doing a QR decomposition (as
scipy's leastsq does) gave a big speed improvement. (Yes, I know it's
numerically less stable.) I can share this code, but it's incomplete.
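For concreteness, one damped step via the normal equations looks
something like the following (a bare sketch; lm_step is an illustrative
name, not from any particular library):

import numpy as np

def lm_step(J, r, lam):
    # solve the Levenberg-Marquardt normal equations
    # (J^T J + lam * diag(J^T J)) dx = -J^T r
    A = np.dot(J.T, J)
    A[np.diag_indices_from(A)] *= (1.0 + lam)
    return np.linalg.solve(A, -np.dot(J.T, r))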
Gregor

On 13.1.2011 at 14:37, Till Stensitzki wrote:

> Hello,
> has anyone done separable nonlinear least squares in Python?
> There is VARPRO.f in the netlib library, but my Fortran knowledge is
> very shallow. If no-one did it, how hard is it to wrap the Fortran
> code?
>
> greetings
> Till

From kunal.t2 at gmail.com Sat Jan 15 11:39:59 2011
From: kunal.t2 at gmail.com (kunal ghosh)
Date: Sat, 15 Jan 2011 22:09:59 +0530
Subject: [SciPy-User] Multiplying very large matrices
Message-ID:

Hi all,
while implementing Locality Preserving Projections, at one point I have
to perform X L X.transpose(). These matrices are large (32256 x 32256),
so I get an "out of memory" error.

I assume, as the dataset gets larger, one would come across this
problem. How would one go about solving this? Is there a common trick
that is used to deal with such problems? Or does the workstation
calculating these problems need to have HUGE amounts of physical memory?

I am using python and numpy / scipy

--
regards
-------
Kunal Ghosh
Dept of Computer Sc. & Engineering.
Sir MVIT
Bangalore, India
permalink: member.acm.org/~kunal.t2
Blog: kunalghosh.wordpress.com
Website: www.kunalghosh.net46.net

From gael.varoquaux at normalesup.org Mon Jan 17 11:34:34 2011
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Mon, 17 Jan 2011 17:34:34 +0100
Subject: [SciPy-User] 2-pass k-means: first do a random sample of sqrt(N) ?
In-Reply-To: <4436f144-b2e9-472d-a667-63571f6bbdc2@30g2000yql.googlegroups.com>
References: <4436f144-b2e9-472d-a667-63571f6bbdc2@30g2000yql.googlegroups.com>
Message-ID: <20110117163434.GH7977@phare.normalesup.org>

On Mon, Jan 17, 2011 at 07:00:57AM -0800, denis wrote:
> yet another k-means initializer:
> - first do a random sample of say sqrt(N) of the points
> - run full k-means from those centres.
> [...]
> One could also tighten delta etc. for the first pass.

Sounds reasonable to me. This is a bit in the direction of implementing
k-means as an online algorithm, which we are discussing.

> Anyway, on Gaussian clusters I get
> [first set of timings snipped]

Yes, I have found that 'k-means++' wasn't terribly useful on big data.

> [second set of timings snipped]

I am not sure I understand these numbers.

> This is with the scikits.learn k-means;

Why don't you discuss this on the scikits-learn mailing list:
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general

Also, we would need to time on different usecases, to see how much we
gain (I couldn't understand your timings above, sorry). There are quite
a few very knowledgeable people and they might have an opinion on
whether it is best to wait for the online version of the kmeans to be
available (I'd say: give it 6 months) or if it is worth hacking a
specific init.
Thanks for your input,

Gaël

From gael.varoquaux at normalesup.org Mon Jan 17 11:38:29 2011
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Mon, 17 Jan 2011 17:38:29 +0100
Subject: [SciPy-User] Multiplying very large matrices
In-Reply-To: References:
Message-ID: <20110117163829.GI7977@phare.normalesup.org>

On Sat, Jan 15, 2011 at 10:09:59PM +0530, kunal ghosh wrote:
> while implementing Locality Preserving Projections,
> at one point i have to perform X L X.transpose()
> these matrices are large (32256 x 32256) so i get "out of memory" error.
> [...]
> Or the workstation calculating these problems needs to have HUGE
> amounts of physical memory ?

Maybe there is a random projection/random sampling algorithm to solve
your problem on partial data:

http://metaoptimize.com/qa/questions/1640/whats-a-good-introduction-to-random-projections

Gael

From e.antero.tammi at gmail.com Mon Jan 17 12:16:12 2011
From: e.antero.tammi at gmail.com (eat)
Date: Mon, 17 Jan 2011 17:16:12 +0000 (UTC)
Subject: [SciPy-User] Multiplying very large matrices
References:
Message-ID:

kunal ghosh <kunal.t2 <at> gmail.com> writes:

Hi,

> while implementing Locality Preserving Projections,
> at one point i have to perform X L X.transpose()
> these matrices are large (32256 x 32256) so i get "out of memory"
> error.
> [...]

Perhaps some linear algebra will help you to rearrange the calculations,
especially if your matrices are not full rank.

For example, projection to a subspace (M_hat = P M):

In [1]: M= randn(1e5, 1e1)

In [2]: U, s, V= svd(M, full_matrices= False)

In [3]: U.shape
Out[3]: (100000, 10)

In [4]: timeit dot(U, dot(U.T, M))
10 loops, best of 3: 45.2 ms per loop

In [5]: timeit # dot(dot(U, U.T), M))
# would consume all memory, and even with enough memory it would be
# very slow
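To see why the ordering matters, a back-of-the-envelope count of the
work and the temporaries for the shapes used above (arithmetic only, not
a benchmark):

# shapes: U is (1e5, 10), M is (1e5, 10)
#
# dot(U, dot(U.T, M)):
#   dot(U.T, M) -> (10, 10) temporary,  ~1e5 * 10 * 10 = 1e7 mult-adds
#   dot(U, ..)  -> (1e5, 10) result,    another ~1e7 mult-adds
#
# dot(dot(U, U.T), M):
#   dot(U, U.T) -> (1e5, 1e5) dense temporary, ~80 GB in float64,
#                  and ~1e5 * 1e5 * 10 = 1e11 mult-adds before the
#                  final product even starts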
My 2 cents,
eat

From mmueller at python-academy.de Mon Jan 17 15:06:18 2011
From: mmueller at python-academy.de (Mike Müller)
Date: Mon, 17 Jan 2011 21:06:18 +0100
Subject: [SciPy-User] Scientific tools tutorial at PyCon US, March 9, 2011
In-Reply-To: References:
Message-ID: <4D34A13A.6030104@python-academy.de>

Scientific Python Tools not only for Scientists and Engineers
=============================================================

This is the title of my three-hour tutorial at PyCon US:
http://us.pycon.org/2011/schedule/sessions/164/

It is a compressed version of my much longer course about:

* NumPy
* SciPy
* matplotlib/IPython
* extensions with C and Fortran

So if you are new to these tools and go to PyCon, you might consider
taking the tutorial. Also, if you know somebody who would likely be
interested in this tutorial, please spread the word. Thanks.

Mike

--
Mike Müller
mmueller at python-academy.de

From briedel at wisc.edu Mon Jan 17 16:55:11 2011
From: briedel at wisc.edu (Benedikt Riedel)
Date: Mon, 17 Jan 2011 15:55:11 -0600
Subject: [SciPy-User] Fitting a Gaussian
Message-ID:

Hi all,

I am a bit dumbfounded. I have been trying to figure out how to fit a
gaussian to a data set that I have. The only thing I have been able to
dig up so far is that I have to get scipy to compute the mean and
standard deviation for me and then basically plug those into a gaussian.
It just seems a little bit odd to me, considering that Octave has the
mygaussfit function included. Is there an easier method?

Cheers,

Ben

From josef.pktd at gmail.com Mon Jan 17 17:09:16 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Mon, 17 Jan 2011 17:09:16 -0500
Subject: [SciPy-User] Fitting a Gaussian
In-Reply-To: References:
Message-ID:

On Mon, Jan 17, 2011 at 4:55 PM, Benedikt Riedel wrote:
> I am a bit dumbfounded. I have been trying to figure out how to fit a
> gaussian to a data set that I have. [...]

I don't know what a mygaussfit does, but we have this:

>>> rvs = np.random.randn(200)
>>> from scipy import stats
>>> stats.norm.fit(rvs)
(-0.0092383641087203858, 1.0567583206929831)
>>> loc, scale = stats.norm.fit(rvs)
>>> rvs.mean()
-0.0092383641087203858
>>> rvs.std()
1.0567583206929831
>>> loc, scale
(-0.0092383641087203858, 1.0567583206929831)
>>> myfrozennorm = stats.norm(loc=loc, scale=scale)
>>> myfrozennorm.cdf(2)
0.97137010763982468

This assumes that all observations are identically distributed and
independent from each other. Otherwise, it gets a little bit more
complicated.

Josef

From jake.biesinger at gmail.com Mon Jan 17 17:25:31 2011
From: jake.biesinger at gmail.com (Jacob Biesinger)
Date: Mon, 17 Jan 2011 14:25:31 -0800 (PST)
Subject: [SciPy-User] Efficient 2-d arrays using standard python?
Message-ID: <14674260.1571.1295303131809.JavaMail.geo-discussion-forums@yqlb6>

Hi all,

This question is specifically about how to get some numpy functionality
in standard python. Sorry if that is a bit off-topic!

Using numpy, I can create large 2-dimensional arrays quite easily.

>>> import numpy
>>> mylist = numpy.zeros((100000000,2), dtype=numpy.int32)

Unfortunately, my target audience may not have numpy, so I'd prefer not
to use it.

Similarly, a list-of-tuples using standard python syntax:

>>> mylist = [(0,0) for i in xrange(100000000)]

but this method uses way too much memory (>4GB for 100 million items,
compared to 1.5GB for the numpy method).

Since I want to keep the two elements together during a sort, I *can't*
use array.array.

>>> mylist = [array.array('i',xrange(100000000)), array.array('i',xrange(100000000))]

If I knew the size in advance, I could use ctypes arrays.
>>> from ctypes import *
>>> class myStruct(Structure):
...     _fields_ = [('x',c_int),('y',c_int)]
>>> mylist_type = myStruct * 100000000
>>> mylist = mylist_type()

but I don't know that size (and it can vary between 1 million and 200
million), so preallocating doesn't seem to be an option.

Is there a python standard library way of creating *efficient*
2-dimensional lists/arrays, still allowing me to sort and append?

Thanks!

From briedel at wisc.edu Mon Jan 17 17:29:37 2011
From: briedel at wisc.edu (Benedikt Riedel)
Date: Mon, 17 Jan 2011 16:29:37 -0600
Subject: [SciPy-User] Fitting a Gaussian
In-Reply-To: References:
Message-ID:

Mygaussfit basically fits sigma, mu and A for the function
f(x) = A exp(-(x-mu)^2/(2 sigma^2)). What I am most interested in is the
A in this case, and the sigma, since I am trying to compare two
gaussians.

Thanks for the input.

Cheers,

Ben

From josef.pktd at gmail.com Mon Jan 17 17:45:04 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Mon, 17 Jan 2011 17:45:04 -0500
Subject: [SciPy-User] Fitting a Gaussian
In-Reply-To: References:
Message-ID:

On Mon, Jan 17, 2011 at 5:29 PM, Benedikt Riedel wrote:
> Mygaussfit basically fits sigma, mu and A for the function
> f(x) = A exp(-(x-mu)^2/(2 sigma^2)). What I am most interested in is
> the A in this case, and the sigma, since I am trying to compare two
> gaussians.

f(x) is not a proper density function (A cannot be freely chosen), so
maybe you just want to use scipy.optimize.curve_fit. What kind of data
do you have?

Josef

> Thanks for the input.
> Cheers,
>
> Ben
>
> On Mon, Jan 17, 2011 at 16:09, josef.pktd at gmail.com wrote:
>> I don't know what a mygaussfit does, but we have this:
>> [norm.fit example snipped -- see above]

From david_baddeley at yahoo.com.au Mon Jan 17 17:50:10 2011
From: david_baddeley at yahoo.com.au (David Baddeley)
Date: Mon, 17 Jan 2011 14:50:10 -0800 (PST)
Subject: [SciPy-User] Fitting a Gaussian
In-Reply-To: References:
Message-ID: <162203.42266.qm@web113412.mail.gq1.yahoo.com>

Sounds like you might be trying to fit a curve with a Gaussian, rather
than find the parameters of a set of Gaussian distributed observations,
as Josef assumed. In this case you'll want to define a model function,
e.g.:

import numpy as np

def gauss(p, x):
    A, mu, sigma = p
    return A*np.exp(-(x-mu)**2/(2*sigma**2))

and then have a look at scipy.optimize.curve_fit, or if you've got an
older version of scipy, scipy.optimize.leastsq (in which case you'll
need a misfit function, i.e. y - f(x), instead).
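A minimal usage sketch with curve_fit (note that curve_fit wants the
independent variable first and the parameters unpacked, so the model is
spelled slightly differently from the leastsq-style gauss(p, x) above;
the data here is made up):

import numpy as np
from scipy.optimize import curve_fit

def gauss(x, A, mu, sigma):
    return A * np.exp(-(x - mu)**2 / (2 * sigma**2))

# made-up noisy data for illustration
x = np.linspace(-5, 5, 200)
y = gauss(x, 2.0, 0.5, 1.2) + 0.05 * np.random.randn(x.size)

p0 = (y.max(), x[y.argmax()], 1.0)    # rough starting guess
popt, pcov = curve_fit(gauss, x, y, p0=p0)
A, mu, sigma = popt
perr = np.sqrt(np.diag(pcov))         # 1-sigma parameter uncertainties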
cheers,
David

________________________________
From: Benedikt Riedel
To: SciPy Users List
Sent: Tue, 18 January, 2011 11:29:37 AM
Subject: Re: [SciPy-User] Fitting a Gaussian

> Mygaussfit basically fits sigma, mu and A for the function
> f(x) = A exp(-(x-mu)^2/(2 sigma^2)). [...]

From briedel at wisc.edu Mon Jan 17 17:53:54 2011
From: briedel at wisc.edu (Benedikt Riedel)
Date: Mon, 17 Jan 2011 16:53:54 -0600
Subject: [SciPy-User] Fitting a Gaussian
In-Reply-To: <162203.42266.qm@web113412.mail.gq1.yahoo.com>
References: <162203.42266.qm@web113412.mail.gq1.yahoo.com>

Yeah, that is what I was trying to do. I was trying to get around using
the least squares method, but I guess I am stuck with that. Thanks for
the clarification.

Cheers,

Ben

On Mon, Jan 17, 2011 at 16:50, David Baddeley wrote:
> Sounds like you might be trying to fit a curve with a Gaussian, rather
> than find the parameters of a set of Gaussian distributed
> observations, as Josef assumed. [...]
From lutz.maibaum at gmail.com Mon Jan 17 17:55:46 2011
From: lutz.maibaum at gmail.com (Lutz Maibaum)
Date: Mon, 17 Jan 2011 14:55:46 -0800
Subject: [SciPy-User] Efficient 2-d arrays using standard python?
In-Reply-To: <14674260.1571.1295303131809.JavaMail.geo-discussion-forums@yqlb6>
References: <14674260.1571.1295303131809.JavaMail.geo-discussion-forums@yqlb6>

On Jan 17, 2011, at 2:25 PM, Jacob Biesinger wrote:
> Using numpy, I can create large 2-dimensional arrays quite easily.
> >>> import numpy
> >>> mylist = numpy.zeros((100000000,2), dtype=numpy.int32)
>
> Unfortunately, my target audience may not have numpy so I'd prefer not
> to use it.
> [...]

How do you measure memory consumption? On my system (Snow Leopard,
Python 2.6 64-bit) both expressions seem to use about the same amount of
memory, at least if the operating system's Activity Monitor is any
indication. Is the standard python integer type on your system 32 bit,
as you request explicitly in the numpy version?

You could also try

mylist = [(0,0)] * 100000000

which seems much faster.

Hope this helps,

Lutz

From robert.kern at gmail.com Mon Jan 17 17:57:53 2011
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 17 Jan 2011 16:57:53 -0600
Subject: [SciPy-User] Efficient 2-d arrays using standard python?
In-Reply-To: <14674260.1571.1295303131809.JavaMail.geo-discussion-forums@yqlb6>
References: <14674260.1571.1295303131809.JavaMail.geo-discussion-forums@yqlb6>

On Mon, Jan 17, 2011 at 16:25, Jacob Biesinger wrote:
> Is there a python standard library way of creating *efficient*
> 2-dimensional lists/arrays, still allowing me to sort and append?

I don't think so. Sorry. Depending on the context, it may be possible to
ask your audience to install numpy (or you can provide them with
Python+numpy yourself). It's usually not difficult.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From jake.biesinger at gmail.com Mon Jan 17 18:30:40 2011
From: jake.biesinger at gmail.com (Jacob Biesinger)
Date: Mon, 17 Jan 2011 15:30:40 -0800
Subject: [SciPy-User] Efficient 2-d arrays using standard python?
In-Reply-To: References: <14674260.1571.1295303131809.JavaMail.geo-discussion-forums@yqlb6>

> > Similarly, a list-of-tuples using standard python syntax.
> > >>> mylist = [(0,0) for i in xrange(100000000)]
> >
> > but this method uses way too much memory (>4GB for 100 million
> > items, compared to 1.5GB for the numpy method).
>
> How do you measure memory consumption? [...]

I'm on 64-bit linux, no special flavors of python here, and yes, just
the system monitor. Yes, it should be a 32-bit int in those tuples
(until it overflows, iirc?)

> You could also try
>
> mylist = [(0,0)] * 100000000
>
> which seems much faster.

Isn't this creating 100 million references to the same tuple object? Or
perhaps it's a list with 100 million tuples that point to the same two
int objects? O_o In this trivial case, they all have the same contents
(0,0), but that's not what I'm shooting for :)

Running it as

In [1]: mylist = []
In [2]: for i in xrange(100000000):
   ...:     mylist.append((i,i+1))

is actually creating different tuple objects and ints internally, and
takes a *lot* more space than the numpy implementation (which also takes
2x more space than your tuple example on my machine).

Thanks for the replies -- Guess I'll try:

try:
    import numpy
    # do numpy stuff
except ImportError:
    # do python-only stuff

From josef.pktd at gmail.com Mon Jan 17 18:49:55 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Mon, 17 Jan 2011 18:49:55 -0500
Subject: [SciPy-User] Fitting a Gaussian
In-Reply-To: References: <162203.42266.qm@web113412.mail.gq1.yahoo.com>

On Mon, Jan 17, 2011 at 5:53 PM, Benedikt Riedel wrote:
> Yeah, that is what I was trying to do. I was trying to get around
> using the least squares method, but I guess I am stuck with that.
> Thanks for the clarification.

This one

http://www.physics.umd.edu/courses/Phys375/Appelbaum_Fall2009/LABS/LAB0/mygaussfit.m

uses only polyfit, but I didn't look carefully enough to understand what
it is doing. It should be easy to translate into numpython.
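If I read the .m file right, it is the classic log-parabola trick: fit a
parabola to log(y) and read the Gaussian parameters off the polynomial
coefficients. A rough numpy translation (my sketch, not checked against
the original; it assumes strictly positive y, and noisy tails will bias
the result):

import numpy as np

def gaussfit_poly(x, y):
    # log of A*exp(-(x-mu)**2/(2*sigma**2)) is a parabola in x:
    # c2*x**2 + c1*x + c0 with c2 = -1/(2*sigma**2), c1 = mu/sigma**2
    c2, c1, c0 = np.polyfit(x, np.log(y), 2)
    sigma = np.sqrt(-1.0 / (2.0 * c2))
    mu = c1 * sigma**2
    A = np.exp(c0 + mu**2 / (2.0 * sigma**2))
    return A, mu, sigma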
Josef

> Cheers,
>
> Ben
>
> [earlier thread snipped]

From david at silveregg.co.jp Mon Jan 17 21:16:58 2011
From: david at silveregg.co.jp (David)
Date: Tue, 18 Jan 2011 11:16:58 +0900
Subject: [SciPy-User] Efficient 2-d arrays using standard python?
In-Reply-To: References: <14674260.1571.1295303131809.JavaMail.geo-discussion-forums@yqlb6>
Message-ID: <4D34F81A.2080608@silveregg.co.jp>

On 01/18/2011 07:55 AM, Lutz Maibaum wrote:
> How do you measure memory consumption? On my system (Snow Leopard,
> Python 2.6 64-bit) both expressions seem to use about the same amount
> of memory [...]

It may be true for this special case, but only because of the special
values you are using (0 here). In general, at least for CPython, a list
of N integers will use *much* more memory than a numpy array.

Basically, for a numpy array of N 32-bit integers, sizeof(array) =
N * 4 + overhead bytes, where the overhead is mostly independent of N,
and bounded anyway.

For a list of N integers, you have sizeof(list) >= overhead(list) +
N * sizeof(PyObject*) + N * sizeof(integer), where sizeof(integer) is at
least 12 bytes (4 bytes for the value, 4 bytes for the reference count,
and 4 bytes for the object pointer).

IOW, for a 32-bit machine (the best case), a list of N integers uses
something like 4 times the space of a numpy array. On a 64-bit machine,
the size taken by the numpy array will not change much, whereas the size
will almost double for the list case (everything sees its size doubled
except for the integer value itself).
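One way to check this from Python itself -- a sketch using
sys.getsizeof (available from Python 2.6; note that it counts each
container but not the objects it points to, so the tuples and their ints
have to be added by hand):

import sys

n = 1000000
pairs = [(i, i + 1) for i in xrange(n)]
container = sys.getsizeof(pairs)          # the list's own pointer table
payload = sum(sys.getsizeof(t) + sys.getsizeof(t[0]) + sys.getsizeof(t[1])
              for t in pairs)             # the tuples plus their two ints
print (container + payload) / 1e6, "MB for", n, "pairs"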
cheers,

David

From njs at pobox.com Mon Jan 17 21:35:17 2011
From: njs at pobox.com (Nathaniel Smith)
Date: Mon, 17 Jan 2011 18:35:17 -0800
Subject: [SciPy-User] Efficient 2-d arrays using standard python?
In-Reply-To: <4D34F81A.2080608@silveregg.co.jp>
References: <14674260.1571.1295303131809.JavaMail.geo-discussion-forums@yqlb6>
	<4D34F81A.2080608@silveregg.co.jp>

On Mon, Jan 17, 2011 at 6:16 PM, David wrote:
> On a 64-bit machine, the size taken by the numpy array will not change
> much, whereas the size will almost double for the list case
> (everything sees its size doubled except for the integer value
> itself).

ObNitPick: The integer value itself will be doubled too; python 'int' is
64-bit on 64-bit machines. (Well, at least on Linux; it might only be
32-bit on Windows with its LLP64.)

# Python 2.5.2 on x86-64 Linux:
>>> type(2 ** 50)
<type 'int'>
>>> type(2 ** 65)
<type 'long'>

-- Nathaniel

From jjstickel at vcn.com Mon Jan 17 21:59:17 2011
From: jjstickel at vcn.com (Jonathan Stickel)
Date: Mon, 17 Jan 2011 19:59:17 -0700
Subject: [SciPy-User] Fitting a Gaussian
In-Reply-To: References:
Message-ID: <4D350205.2050709@vcn.com>

On 01/17/2011 07:17 PM, scipy-user-request at scipy.org wrote:
> Date: Mon, 17 Jan 2011 16:53:54 -0600
> From: Benedikt Riedel
> Subject: Re: [SciPy-User] Fitting a Gaussian
> Yeah, that is what I was trying to do. I was trying to get around
> using the least squares method, but I guess I am stuck with that.
> [...]

I haven't followed all the details of this thread, but have you tried
taking the first few moments of your distribution curve, using something
like cumtrapz? From the moments you can calculate the total area, the
centroid, and the variance.
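In numpy terms that is just a few integrals -- a rough sketch (trapz
rather than cumtrapz, since only the totals are needed; x, y are the
sampled curve):

import numpy as np

def moments(x, y):
    area = np.trapz(y, x)                      # ~ A * sigma * sqrt(2*pi)
    mu = np.trapz(x * y, x) / area             # centroid
    var = np.trapz((x - mu)**2 * y, x) / area  # variance
    return area, mu, np.sqrt(var)              # sigma = sqrt(var)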
Jonathan

From gary.pajer at gmail.com Mon Jan 17 22:26:07 2011
From: gary.pajer at gmail.com (Gary Pajer)
Date: Mon, 17 Jan 2011 22:26:07 -0500
Subject: [SciPy-User] Python and Eclipse
In-Reply-To: <760A279CF63831CD4A521829@192.168.1.112>

On Sun, Jan 9, 2011 at 9:33 PM, Nathaniel Polish wrote:
> Oh please. Emacs is for real men. We hack code in lisp in an emacs
> shell. I first used emacs in 1984. I used it to read email and net
> news. It was not just a text editor, it was a lifestyle. But that has
> NOTHING to do with Python. Though I bet you could build a great Python
> interpreter in emacs.

I'm surprised that more people don't know that Thomas Jefferson
originally wrote the Declaration of Independence in emacs. He was
convinced to copy it out in longhand when he realized that his
contemporaries who used vi (they didn't have vim back then) wouldn't
take it seriously otherwise.

The rest is history.

From david at silveregg.co.jp Mon Jan 17 23:37:10 2011
From: david at silveregg.co.jp (David)
Date: Tue, 18 Jan 2011 13:37:10 +0900
Subject: [SciPy-User] Efficient 2-d arrays using standard python?
In-Reply-To: References: <14674260.1571.1295303131809.JavaMail.geo-discussion-forums@yqlb6>
	<4D34F81A.2080608@silveregg.co.jp>
Message-ID: <4D3518F6.2080509@silveregg.co.jp>

On 01/18/2011 11:35 AM, Nathaniel Smith wrote:
> ObNitPick: The integer value itself will be doubled too; python 'int'
> is 64-bit on 64-bit machines. (Well, at least on Linux; it might only
> be 32-bit on Windows with its LLP64.)
> [...]

As the value of an int object is defined as a C long (at least in the
version I am looking at, 2.5.5), it will indeed be 4 bytes on 64-bit
windows. In any case, most of the memory is taken by pointers, because
of the multiple indirections for any sequence in cpython, which is what
numpy avoids by design.

cheers,

David

From kunal.t2 at gmail.com Tue Jan 18 02:35:34 2011
From: kunal.t2 at gmail.com (kunal)
Date: Tue, 18 Jan 2011 13:05:34 +0530
Subject: [SciPy-User] Multiplying very large matrices
In-Reply-To: References:
Message-ID: <4D3542C6.4030608@gmail.com>

On 01/17/2011 10:46 PM, eat wrote:
> Perhaps some linear algebra will help you to rearrange the
> calculations, especially if your matrices are not full rank.
> [SVD example snipped -- see above]

Nice suggestions, eat! Will look into it.

Thanks,

--
regards
-------
Kunal Ghosh
Dept of Computer Sc. & Engineering.
Sir MVIT
Bangalore, India
permalink: member.acm.org/~kunal.t2
Blog: kunalghosh.wordpress.com
Website: www.kunalghosh.net46.net

From kunal.t2 at gmail.com Tue Jan 18 02:48:52 2011
From: kunal.t2 at gmail.com (kunal)
Date: Tue, 18 Jan 2011 13:18:52 +0530
Subject: [SciPy-User] Multiplying very large matrices
In-Reply-To: <20110117163829.GI7977@phare.normalesup.org>
References: <20110117163829.GI7977@phare.normalesup.org>
Message-ID: <4D3545E4.7080908@gmail.com>

On 01/17/2011 10:08 PM, Gael Varoquaux wrote:
> Maybe there is a random projection/random sampling algorithm to solve
> your problem on partial data:
>
> http://metaoptimize.com/qa/questions/1640/whats-a-good-introduction-to-random-projections

Hi Gael,
I was unaware of random projections as a means of dimensionality
reduction. I will look into it.

Thanks,

--
regards
-------
Kunal Ghosh

From thierry.montagu at cea.fr Tue Jan 18 03:08:58 2011
From: thierry.montagu at cea.fr (MONTAGU Thierry)
Date: Tue, 18 Jan 2011 09:08:58 +0100
Subject: [SciPy-User] (no subject)
In-Reply-To: References:
Message-ID: <4D354A9A.9000900@cea.fr>

hi,

thank you for answering, and sorry for posting wrong code.

here is a working matlab/octave code:

a = [] % or a = zeros(10)
m = 23
a(3).m = 22

my question remains the same.

Kind Regards

Thierry

--
Thierry Montagu
CEA Saclay
DRT/LIST/DCSI/LM2S
bât. 516 P
91191 Gif sur Yvette
01 69 08 88 19

From e.antero.tammi at gmail.com Tue Jan 18 04:25:06 2011
From: e.antero.tammi at gmail.com (eat)
Date: Tue, 18 Jan 2011 09:25:06 +0000 (UTC)
Subject: [SciPy-User] (no subject)
References: <4D354A9A.9000900@cea.fr>

MONTAGU Thierry <thierry.montagu <at> cea.fr> writes:

Hi,

> here is a working matlab/octave code :
>
> a = [] % or a = zeros(10)
> m = 23

What is the point of the above rows?

> a(3).m = 22

This seems to be a structure array in Octave, so you could use, for
example, a numpy recarray:

In [1]: a= recarray((10,), dtype= [('m', int)])

In [2]: for i in xrange(a.size):
   ...:     a[i].m= 25
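The same thing also works without the loop, and a second field makes it
look more like an Octave struct array -- a small sketch using a plain
structured array (the field names are just examples):

import numpy as np

a = np.zeros(10, dtype=[('m', int), ('name', 'S8')])  # structured array
a['m'] = 25          # set the whole field at once
a['m'][2] = 10       # or a single element, like a(3).m = 10 in Octave
print a[2], a['m'].mean()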
Recarray is only one alternative to use, so it would be much more
beneficial if you could provide a small working example in Octave in
order to figure out a more suitable python translation.

Regards,
eat

> my question remains the same.
>
> Kind Regards
>
> Thierry

From guyer at nist.gov Tue Jan 18 09:20:40 2011
From: guyer at nist.gov (Jonathan Guyer)
Date: Tue, 18 Jan 2011 09:20:40 -0500
Subject: [SciPy-User] SciPy and Diffusion Equation
In-Reply-To: <4D33487D.5040501@sun.ac.za>
References: <4D334708.3010805@gmail.com> <4D33487D.5040501@sun.ac.za>

FiPy can indeed handle problems like this in 1D, 2D & 3D, with
non-linear coefficients and arbitrary combinations of terms and
arbitrary combinations of equations on multiple solution variables.

See http://www.ctcms.nist.gov/fipy/examples/README.html for examples.

FiPy is not "under the scipy umbrella" but happily sits alongside it.
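For the equation in this thread, a FiPy script might look roughly like
the following (an untested sketch patterned on the mesh1D example; the
coefficients are made up, and the exact spelling of the mesh/variable
API should be checked against the FiPy docs):

from fipy import (Grid1D, CellVariable, TransientTerm,
                  DiffusionTerm, ImplicitSourceTerm)

D, alpha, L, nx = 1.0, 0.1, 1.0, 100   # made-up values
mesh = Grid1D(nx=nx, dx=L / nx)
x = mesh.cellCenters[0]

c = CellVariable(mesh=mesh, value=0.0)
c.setValue(1.0, where=(x > 0.49 * L) & (x < 0.51 * L))  # delta-ish start

# \partial_t c = D \partial_x^2 c - alpha c
eq = TransientTerm() == DiffusionTerm(coeff=D) - ImplicitSourceTerm(coeff=alpha)
for step in range(100):
    eq.solve(var=c, dt=1.0e-4)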
- Jon (a FiPy developer)

On Jan 16, 2011, at 2:35 PM, Johann Rohwer wrote:

> FiPy (http://www.ctcms.nist.gov/fipy/) is a general PDE solver written
> in Python, based on the finite volume approach. It should be able to
> handle diffusion problems, and a number of examples are available from
> the website.
>
> Johann
>
> On 16/01/2011 21:29, Lorenzo Isella wrote:
>> I wonder if there is any tool under the scipy umbrella to deal with
>> diffusion problems. [...]
>> Any suggestion is appreciated.
>> Cheers
>>
>> Lorenzo

From seb.haase at gmail.com Tue Jan 18 10:38:14 2011
From: seb.haase at gmail.com (Sebastian Haase)
Date: Tue, 18 Jan 2011 16:38:14 +0100
Subject: [SciPy-User] scipy.optimize lmder_ and dlopen / dlsym
Message-ID:

Hi,

I was trying to do some 2d Gaussian fitting on my image data using
scipy.optimize.leastsq. This worked, but it still takes about 1ms to fit
sigma and peakHeight on a 7x7 pixel image area.

So I'm thinking about speeding things up: writing the objective function
and the Jacobian in C (I like using SWIG) would be one thing. But I'm
also thinking that, for such small problems, the overhead of function
calls back and forth in Python (for each parameter-set evaluation) would
be problematic.

So my question is: can I access the underlying lmder_ function (which is
normally called from leastsq via minpack [ _minpackmodule.c ]) directly
from my SWIGged extension module?

I'm thinking of LoadLibrary (win32) and dlopen/dlsym (linux) ...

The advantage would be that I would _not_ have to find another C
implementation of Levenberg-Marquardt. I found
http://www.ics.forth.gr/~lourakis/levmar/index.html, but that didn't
even compile on debian and would be another dependency.
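Just to illustrate the dlopen/dlsym half of the idea, a rough ctypes
sketch -- this assumes the lmder_ symbol is actually exported by scipy's
compiled _minpack extension, which is not guaranteed on every
platform/build, and the full MINPACK argument list (including the
callback) would still have to be marshalled by hand:

import ctypes
from scipy.optimize import _minpack   # the extension that wraps MINPACK

lib = ctypes.CDLL(_minpack.__file__)  # dlopen the shared object itself
try:
    lmder = lib.lmder_                # dlsym; fails if the symbol is hidden
    print "found lmder_:", lmder
except AttributeError:
    print "lmder_ not visible (statically linked or hidden symbols)"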
Thanks,
Sebastian Haase

From abli at freemail.hu Tue Jan 18 14:58:28 2011
From: abli at freemail.hu (Ábel Dániel)
Date: Tue, 18 Jan 2011 20:58:28 +0100 (CET)
Subject: [SciPy-User] Using scipy.signal.fftconvolve() and scipy.signal.convolve()
Message-ID:

[apologies if this might get duplicated, it appears my first submission
didn't show up on the mailing list]

Hi!

I would like to ask for some help with the use of scipy.signal.convolve
and scipy.signal.fftconvolve (on a greyscale 2d image).

Based on the documentation of fftconvolve (which is simply 'See
convolve.'), I am assuming that they should give (mostly) the same
result. (I.e. the results won't be exactly identical since they use
different methods, but they shouldn't be too different.)

However, I am getting drastically different results: using convolve
results in basically random noise, while fftconvolve gives a very sharp
peak.

I uploaded a short program with the input and the results I am getting
to http://hal.elte.hu/~abeld/scipy_signal_issue/

Am I doing something wrong? Should there be such a difference in the
output of these functions? What is causing the difference?

(I am using Ubuntu Lucid; the version of the python-scipy package is
0.7.0-2ubuntu0.1)

Thanks in advance,
Daniel Abel
abli at freemail.hu

From david_baddeley at yahoo.com.au Tue Jan 18 15:16:30 2011
From: david_baddeley at yahoo.com.au (David Baddeley)
Date: Tue, 18 Jan 2011 12:16:30 -0800 (PST)
Subject: [SciPy-User] scipy.optimize lmder_ and dlopen / dlsym
In-Reply-To: References:
Message-ID: <904894.90812.qm@web113410.mail.gq1.yahoo.com>

Hi Sebastian,

I've done something similar - I can tell you that writing a c objective
function definitely speeds things up, even with the function call
overhead from python, although I too would like to be able to call the
fortran methods directly from c. I had a quick look at the leastsq code,
but it was sufficiently intractable that I put it towards the back of my
priority list.

If writing a c objective function, I'd suggest you avoid SWIG and go
with the numpy c api, as SWIG does introduce a bit of overhead. You can
also get a speed up by generating the Gaussian into a pre-allocated
numpy array, although you have to be a bit careful if you do this for
the residuals, as lmder holds a pointer/reference to the last set of
residuals, so if you overwrite these, things don't work. I also found
that you can't give leastsq a c goal function directly & that you have
to wrap it in a python function (it does some introspection on the
function object to get the function name, to print a nice message about
convergence, or lack thereof).

Even with these caveats, I did find a substantial improvement over just
using a Gaussian coded in python. I could probably give you my c coded
objective functions to give you something to work from, if desired.

cheers,
David

----- Original Message ----
From: Sebastian Haase
To: SciPy Users List
Sent: Wed, 19 January, 2011 4:38:14 AM
Subject: [SciPy-User] scipy.optimize lmder_ and dlopen / dlsym

> Hi,
> I was trying to do some 2d Gaussian fitting on my image data using
> scipy.optimize.leastsq. [...]

From david_baddeley at yahoo.com.au Tue Jan 18 15:26:13 2011
From: david_baddeley at yahoo.com.au (David Baddeley)
Date: Tue, 18 Jan 2011 12:26:13 -0800 (PST)
Subject: [SciPy-User] scipy.optimize lmder_ and dlopen / dlsym
In-Reply-To: <904894.90812.qm@web113410.mail.gq1.yahoo.com>
References: <904894.90812.qm@web113410.mail.gq1.yahoo.com>
Message-ID: <996762.94533.qm@web113402.mail.gq1.yahoo.com>

Should have also mentioned that I've given levmar a whirl, and although
you get a subtle performance increase (I think it was ~2 fold), it
wasn't as spectacular as I'd have hoped, and in general I didn't think
it worth the hassle (the GPL bit might have also put me off somewhat).
That said, if you do decide to go that route, I could probably give you
a copy of the python module I drummed up using lmder to fit a Gaussian.
I don't remember having too much trouble getting it to compile under
linux (ubuntu) (although I might have had to hack one or two parts - I
know my strategy regarding dependencies was going to be to put a local
copy in the source tree and statically link it).

cheers,
David

----- Original Message ----
From: David Baddeley
To: SciPy Users List
Sent: Wed, 19 January, 2011 9:16:30 AM
Subject: Re: [SciPy-User] scipy.optimize lmder_ and dlopen / dlsym

> Hi Sebastian,
> I've done something similar - I can tell you that writing a c
> objective function definitely speeds things up [...]

From david_baddeley at yahoo.com.au Tue Jan 18 16:19:24 2011
From: david_baddeley at yahoo.com.au (David Baddeley)
Date: Tue, 18 Jan 2011 13:19:24 -0800 (PST)
Subject: [SciPy-User] Using scipy.signal.fftconvolve() and scipy.signal.convolve()
In-Reply-To:
Message-ID: <519471.22161.qm@web113412.mail.gq1.yahoo.com>

What you're getting from fftconvolve looks about right - with ordinary
convolve I suspect your problem might be that you're using 8-bit ints
and it's overflowing, thus giving you the random noise pattern. FFTs
cast their inputs to double first.

I'd suggest casting your image to float - i.e.:

a = a.astype('f')

before doing the standard convolutions.
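A tiny illustration of the overflow on made-up 3x3 data (with uint8
inputs the sums wrap modulo 256; after casting they come out right):

import numpy as np
from scipy.signal import convolve

a = 100 * np.ones((3, 3), np.uint8)   # 8-bit image-like data
k = np.ones((3, 3), np.uint8)
print convolve(a, k, mode='same')               # sums up to 900 wrap around
print convolve(a.astype('f'), k, mode='same')   # cast first: correct values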
cheers,
David

----- Original Message ----
From: Ábel Dániel
To: scipy-user
Sent: Wed, 19 January, 2011 8:58:28 AM
Subject: [SciPy-User] Using scipy.signal.fftconvolve() and scipy.signal.convolve()

> Hi!
> I would like to ask for some help with the use of
> scipy.signal.convolve and scipy.signal.fftconvolve (on a greyscale 2d
> image). [...]

From Chris.Barker at noaa.gov Tue Jan 18 18:18:31 2011
From: Chris.Barker at noaa.gov (Christopher Barker)
Date: Tue, 18 Jan 2011 15:18:31 -0800
Subject: [SciPy-User] (no subject)
In-Reply-To: References: <4D354A9A.9000900@cea.fr>
Message-ID: <4D361FC7.6050800@noaa.gov>

On 1/18/11 1:25 AM, eat wrote:
>> a = [] % or a = zeros(10)
>> m = 23
> What is the point of the above rows?
>
>> a(3).m = 22
> This seems to be a structure array in Octave,

Does Octave give you arrays of structures with zeros(10)?

> so you could use, for example, a numpy recarray
> In [1]: a= recarray((10,), dtype= [('m', int)])
>
> In [2]: for i in xrange(a.size):
>    ...:     a[i].m= 25
>
> Recarray is only one alternative to use [...]

Or better yet, a description of what you need to do - very rarely does
'translating' directly from one language to another get you the best
result. For the most part, python/numpy is more powerful and flexible
than Matlab/Octave, so there can be whole new ways of doing things.

-Chris

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov

From alex.liberzon at gmail.com Tue Jan 18 18:32:49 2011
From: alex.liberzon at gmail.com (Alex Liberzon)
Date: Wed, 19 Jan 2011 01:32:49 +0200
Subject: [SciPy-User] (no subject)
Message-ID: <1249D747-2F37-487C-9D78-2706AE9AA781@gmail.com>

Hi,

While moving from Matlab to Numpy/Scipy/Matplotlib, I sometimes need to
work with Matlab figures. It would be nice if we could load Matlab
figures in Python, extract the data and some basic parameters of how
they looked in Matlab, and create a "clone" in matplotlib. At present
the Matlab figures are MAT files, and in the near future they will be
HDF5; both formats are loadable. However, I have not found any reference
to attempts to load and parse the FIG files.

As a beginner, I find it difficult to write a large piece of code.
However, if there are other interested users who could cooperate on
this, I'd gladly contribute some hours to the task. Meanwhile, a
proof-of-concept attempt is attached below. All your useful comments and
suggestions are very welcome.

Thank you,
Alex

#------------ loadfig.py ---------------------- #
"""
Loadfig loads simple Matlab figures as MAT files and plots the lines
using matplotlib
"""
import numpy as np
from scipy.io import loadmat
import matplotlib.pyplot as plt


def lowest_order(d):
    """ lowest_order(hgstruct) finds the handle of axes (i.e. children
    of figure handles, etc.)
    """
    while not 'graph2d.lineseries' in d['type']:
        d = d['children'][0][0]
    return d


def get_data(d):
    """ get_data(hgstruct) extracts XData, YData, Color and Marker of
    each line in the given axis

    Parameters
    ----------
    hgstruct : structure obtained from
        lowest_order(loadmat('matlab.fig'))
    """
    xd, yd, dispname, marker, colors = [], [], [], [], []
    for i in d:
        if i['type'] == 'graph2d.lineseries':
            xd.append(i['properties'][0][0]['XData'][0][0])
            yd.append(i['properties'][0][0]['YData'][0][0])
            dispname.append(i['properties'][0][0]['DisplayName'][0][0])
            marker.append(i['properties'][0][0]['Marker'][0][0])
            colors.append(i['properties'][0][0]['Color'][0][0])
    return (np.asarray(xd), np.asarray(yd), dispname,
            np.asarray(marker).astype('S1'), colors)


def plot_data(xd, yd, dispname=None, marker=None, colors=None):
    """ plot_data(xd, yd, dispname=None, marker=None, colors=None)
    plots the data sets extracted by
    get_data(lowest_order(loadmat('matlab.fig')))

    Parameters
    ----------
    xd, yd : array_like
        data arrays
    dispname : array of strings to be used in legend, optional
    marker : array of character markers, e.g. ['o','x'], optional
    colors : array of color sets in RGB, e.g. [[0,0,1],[1,0,0]], optional
    """
    for i, n in enumerate(xd):
        plt.plot(xd[i].T, yd[i].T, color=tuple(colors[i]),
                 marker=marker[i], linewidth=0)
    plt.legend(dispname)
    plt.show()


def main(filename):
    """ main(filename) loads the filename (with the extension .fig),
    which is a Matlab figure. At the moment only simple 2D lines are
    supported.

    Examples
    --------
    >>> main('matlab.fig')
    # is equivalent to:
    >>> d = loadmat(filename)
    >>> d = d['hgS_070000']
    >>> xd, yd, dispname, marker, colors = get_data(lowest_order(d))
    >>> plot_data(xd, yd, dispname, marker, colors)
    """
    d = loadmat(filename)
    # ver 7.2 or lower:
    d = d['hgS_070000']
    xd, yd, dispname, marker, colors = get_data(lowest_order(d))
    plot_data(xd, yd, dispname, marker, colors)


if __name__ == "__main__":
    import sys
    try:
        filename = sys.argv[1]
        main(filename)
    except (IndexError, IOError):
        print("Wrong file")
# ----------------------------- EOF loadfig.py --------------------------- #

Alex Liberzon
Turbulence Structure Laboratory [http://www.eng.tau.ac.il/efdl]
School of Mechanical Engineering
Tel Aviv University
Ramat Aviv 69978
Israel
Tel: +972-3-640-8928
Lab: +972-3-640-6860 (telefax)
E-mail: alexlib at eng.tau.ac.il

From chris.rodgers at berkeley.edu Tue Jan 18 21:21:37 2011
From: chris.rodgers at berkeley.edu (Chris Rodgers)
Date: Tue, 18 Jan 2011 18:21:37 -0800
Subject: [SciPy-User] Using scipy.signal.fftconvolve() and scipy.signal.convolve()
In-Reply-To: <519471.22161.qm@web113412.mail.gq1.yahoo.com>
References: <519471.22161.qm@web113412.mail.gq1.yahoo.com>

While the output from the convolution statements looks qualitatively
correct to me, I always have difficulty interpreting them because I
don't know the "lags" associated with each value.
I'd suggest casting your image to float - ie: a = a.astype('f') before doing the standard convolutions. cheers, David ----- Original Message ---- From: Ábel Dániel To: scipy-user Sent: Wed, 19 January, 2011 8:58:28 AM Subject: [SciPy-User] Using scipy.signal.fftconvolve() and scipy.signal.convolve() [apologies if this might get duplicated, it appears my first submission didn't show up on the mailing list] Hi! I would like to ask some help with the use of scipy.signal.convolve and scipy.signal.fftconvolve. (On a greyscale 2d image.) Based on the documentation of fftconvolve (which is simply 'See convolve.'), I am assuming that they should give (mostly) the same result. (I.e. the result won't be exactly identical since they are using different methods, but they shouldn't be too different.) However, I am getting drastically different results: using convolve results in basically random noise, while fftconvolve gives a very sharp peak. I uploaded a short program with the input and the results I am getting to http://hal.elte.hu/~abeld/scipy_signal_issue/ Am I doing something wrong? Should there be such a difference in the output of these functions? What is causing the difference? (I am using Ubuntu Lucid, version of python-scipy package is 0.7.0-2ubuntu0.1) Thanks in advance, Daniel Abel abli at freemail.hu _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user From Chris.Barker at noaa.gov Tue Jan 18 18:18:31 2011 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Tue, 18 Jan 2011 15:18:31 -0800 Subject: [SciPy-User] (no subject) In-Reply-To: References: <4D354A9A.9000900@cea.fr> Message-ID: <4D361FC7.6050800@noaa.gov> On 1/18/11 1:25 AM, eat wrote: >> a = [] % or a = zeros(10) >> m = 23 > What is the point of the above rows? > >> a(3).m = 22 > This seems to be a structure array in Octave, Does Octave give you arrays of structures with zeros(10)? >so you could use for example numpy > recarray > In [1]: a= recarray((10,), dtype= [('m', int)]) > > In [2]: for i in xrange(a.size): > .....: a[i].m= 25 > > Recarray is only one alternative to use, therefore it would be much more > beneficial if you could provide a small working example in Octave in order to > figure out a more suitable python translation. or better yet, a description of what you need to do - very rarely does 'translating' directly from one language to another get you the best result. For the most part, python/numpy is more powerful and flexible than Matlab/Octave, so there can be whole new ways of doing things. -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From alex.liberzon at gmail.com Tue Jan 18 18:32:49 2011 From: alex.liberzon at gmail.com (Alex Liberzon) Date: Wed, 19 Jan 2011 01:32:49 +0200 Subject: [SciPy-User] (no subject) Message-ID: <1249D747-2F37-487C-9D78-2706AE9AA781@gmail.com> Hi, While moving from Matlab to Numpy/Scipy/Matplotlib I sometimes need to work with Matlab figures. It would be nice if we could load Matlab figures in Python, extract the data and some basic parameters of how it looked in Matlab, and create its "clone" in matplotlib. At present the Matlab figures are MAT files and in the near future it will be HDF5; both formats are loadable. However, I have not found any reference for such attempts to load and parse the FIG files.
As a beginner I find it difficult to write a large piece of code. However, if there are other interested users that can cooperate on such, I'd gladly contribute some hours for this task. Meanwhile, a proof-of-concept attempt is attached below. All your useful comments and suggestions are very welcome. Thank you, Alex

#------------ loadfig.py ---------------------- #
""" Loadfig loads simple Matlab figures as MAT files and plots the lines
using matplotlib """
import numpy as np
from scipy.io import loadmat
import matplotlib.pyplot as plt

def lowest_order(d):
    """ lowest_order(hgstruct) finds the handle of axes (i.e. children of
    figure handles, etc.) """
    while not 'graph2d.lineseries' in d['type']:
        d = d['children'][0][0]
    return d

def get_data(d):
    """ get_data(hgstruct) extracts XData, YData, Color and Marker of each
    line in the given axis

    Parameters
    ----------
    hgstruct : structure obtained from lowest_order(loadmat('matlab.fig'))
    """
    xd, yd, dispname, marker, colors = [], [], [], [], []
    for i in d:
        if i['type'] == 'graph2d.lineseries':
            xd.append(i['properties'][0][0]['XData'][0][0])
            yd.append(i['properties'][0][0]['YData'][0][0])
            dispname.append(i['properties'][0][0]['DisplayName'][0][0])
            marker.append(i['properties'][0][0]['Marker'][0][0])
            colors.append(i['properties'][0][0]['Color'][0][0])
    return np.asarray(xd), np.asarray(yd), dispname, np.asarray(marker).astype('S1'), colors

def plot_data(xd, yd, dispname=None, marker=None, colors=None):
    """ plot_data(xd, yd, dispname=None, marker=None, colors=None) plots the
    data sets extracted by get_data(lowest_order(loadmat('matlab.fig')))

    Parameters
    ----------
    xd, yd : array_like
        data arrays
    dispname : array of strings to be used in legend, optional
    marker : array of character markers, e.g. ['o','x'], optional
    colors : array of color sets in RGB, e.g. [[0,0,1],[1,0,0]], optional
    """
    for i in range(len(xd)):
        plt.plot(xd[i].T, yd[i].T, color=tuple(colors[i]), marker=marker[i], linewidth=0)
    plt.legend(dispname)
    plt.show()

def main(filename):
    """ main(filename) loads the filename (with the extension .fig) which is
    a Matlab figure. At the moment only simple 2D lines are supported.

    Examples
    --------
    >>> main('matlab.fig')
    # is equivalent to:
    >>> d = loadmat(filename)
    >>> d = d['hgS_070000']
    >>> xd, yd, dispname, marker, colors = get_data(lowest_order(d))
    >>> plot_data(xd, yd, dispname, marker, colors)
    """
    d = loadmat(filename)
    # ver 7.2 or lower:
    d = d['hgS_070000']
    xd, yd, dispname, marker, colors = get_data(lowest_order(d))
    plot_data(xd, yd, dispname, marker, colors)

if __name__ == "__main__":
    import sys
    try:
        filename = sys.argv[1]
        main(filename)
    except Exception:
        print("Wrong file")
# ----------------------------- EOF loadfig.py --------------------------- #

Alex Liberzon Turbulence Structure Laboratory [http://www.eng.tau.ac.il/efdl] School of Mechanical Engineering Tel Aviv University Ramat Aviv 69978 Israel Tel: +972-3-640-8928 Lab: +972-3-640-6860 (telefax) E-mail: alexlib at eng.tau.ac.il From chris.rodgers at berkeley.edu Tue Jan 18 21:21:37 2011 From: chris.rodgers at berkeley.edu (Chris Rodgers) Date: Tue, 18 Jan 2011 18:21:37 -0800 Subject: [SciPy-User] Using scipy.signal.fftconvolve() and scipy.signal.convolve() In-Reply-To: <519471.22161.qm@web113412.mail.gq1.yahoo.com> References: <519471.22161.qm@web113412.mail.gq1.yahoo.com> Message-ID: While the output from the convolution statements looks qualitatively correct to me, I always have difficulty interpreting them because I don't know the "lags" associated with each value.
By lag I mean the amount by which one signal was delayed before calculating the dot product of the two signals. In Matlab for example, the conv function returns both the values and a separate array called "lags" of the same size, which is very helpful. The scipy documentation is not very clear on this point. Does anyone know a good resource documenting the lags in each of the 3 modes of operation ('same', 'valid', 'full')? I tried convolving dummy sequences ([0,1,0,0] etc) but I couldn't figure it out. Thanks!! Chris On Tue, Jan 18, 2011 at 1:19 PM, David Baddeley wrote: > What you're getting from fftconvolve looks about right - with ordinary convolve > I suspect your problem might be that you're using 8 bit ints and it's > overflowing & thus giving you the random noise pattern. FFTs cast their inputs > to double first. I'd suggest casting your image to float - ie: > a = a.astype('f') > before doing the standard convolutions. > > cheers, > David > > > ----- Original Message ---- > From: Ábel Dániel > To: scipy-user > Sent: Wed, 19 January, 2011 8:58:28 AM > Subject: [SciPy-User] Using scipy.signal.fftconvolve() and > scipy.signal.convolve() > > [apologies if this might get duplicated, it appears my first > submission didn't show up on the mailing list] > > Hi! > > I would like to ask some help with the use of scipy.signal.convolve > and scipy.signal.fftconvolve. (On a greyscale 2d image.) > > Based on the documentation of fftconvolve (which is simply 'See > convolve.'), I am assuming that they should give (mostly) the same > result. (I.e. the result won't be exactly identical since they are > using different methods, but they shouldn't be too different.) > > However, I am getting drastically different results: using convolve > results in basically random noise, while fftconvolve gives a very > sharp peak. > > I uploaded a short program with the input and the results I am getting > to http://hal.elte.hu/~abeld/scipy_signal_issue/ > > Am I doing something wrong? Should there be such a difference in the > output of these functions? What is causing the difference? > > (I am using Ubuntu Lucid, version of python-scipy package is > 0.7.0-2ubuntu0.1) > > Thanks in advance, > Daniel Abel > abli at freemail.hu > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From josef.pktd at gmail.com Tue Jan 18 21:57:53 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 18 Jan 2011 21:57:53 -0500 Subject: [SciPy-User] Using scipy.signal.fftconvolve() and scipy.signal.convolve() In-Reply-To: References: <519471.22161.qm@web113412.mail.gq1.yahoo.com> Message-ID: On Tue, Jan 18, 2011 at 9:21 PM, Chris Rodgers wrote: > While the output from the convolution statements looks qualitatively > correct to me, I always have difficulty interpreting them because I > don't know the "lags" associated with each value. By lag I mean the > amount by which one signal was delayed before calculating the dot > product of the two signals. In Matlab for example, the conv function > returns both the values and a separate array called "lags" of the same > size, which is very helpful. > > The scipy documentation is not very clear on this point. Does anyone > know a good resource documenting the lags in each of the 3 modes of > operation ('same', 'valid', 'full')? I tried convolving dummy > sequences ([0,1,0,0] etc) but I couldn't figure it out. valid and full have a clear definition. for valid, signal convolve cuts on both sides symmetrically, I think. I wrote some helper functions, so I don't have to think. And I found the comparison with ndimage useful, because it has an option for this. integer is start, decimal is end, if I did it correctly:

>>> from scipy.signal import convolve
>>> convolve(np.arange(1,10), [0.1, 0, 1], mode='valid')
array([ 1.3,  2.4,  3.5,  4.6,  5.7,  6.8,  7.9])
>>> convolve(np.arange(1,10), [0.1, 0, 1], mode='full')
array([ 0.1,  0.2,  1.3,  2.4,  3.5,  4.6,  5.7,  6.8,  7.9,  8. ,  9. ])
>>> convolve(np.arange(1,10), [0.1, 0, 1], mode='same')
array([ 0.2,  1.3,  2.4,  3.5,  4.6,  5.7,  6.8,  7.9,  8. ])

not so clear with even length window

>>> convolve(np.arange(1,10), [0.1, 0, 0, 1], mode='full')
array([ 0.1,  0.2,  0.3,  1.4,  2.5,  3.6,  4.7,  5.8,  6.9,  7. ,  8. ,
        9. ])
>>> convolve(np.arange(1,10), [0.1, 0, 0, 1], mode='same')
array([ 0.2,  0.3,  1.4,  2.5,  3.6,  4.7,  5.8,  6.9,  7. ])

>>> convolve(np.arange(1,10), [0.1, 0, 0,0, 1], mode='full')
array([ 0.1,  0.2,  0.3,  0.4,  1.5,  2.6,  3.7,  4.8,  5.9,  6. ,  7. ,
        8. ,  9. ])
>>> convolve(np.arange(1,10), [0.1, 0, 0,0, 1], mode='same')
array([ 0.3,  0.4,  1.5,  2.6,  3.7,  4.8,  5.9,  6. ,  7. ])

I think I usually avoided same. Josef > > Thanks!! > Chris > > > > On Tue, Jan 18, 2011 at 1:19 PM, David Baddeley > wrote: >> What you're getting from fftconvolve looks about right - with ordinary convolve >> I suspect your problem might be that you're using 8 bit ints and it's >> overflowing & thus giving you the random noise pattern. FFTs cast their inputs >> to double first. I'd suggest casting your image to float - ie: >> a = a.astype('f') >> before doing the standard convolutions. >> >> cheers, >> David >> >> >> ----- Original Message ---- >> From: Ábel Dániel >> To: scipy-user >> Sent: Wed, 19 January, 2011 8:58:28 AM >> Subject: [SciPy-User] Using scipy.signal.fftconvolve() and >> scipy.signal.convolve() >> >> [apologies if this might get duplicated, it appears my first >> submission didn't show up on the mailing list] >> >> Hi! >> >> I would like to ask some help with the use of scipy.signal.convolve >> and scipy.signal.fftconvolve. (On a greyscale 2d image.) >> >> Based on the documentation of fftconvolve (which is simply 'See >> convolve.'), I am assuming that they should give (mostly) the same >> result. (I.e. the result won't be exactly identical since they are >> using different methods, but they shouldn't be too different.) >> >> However, I am getting drastically different results: using convolve >> results in basically random noise, while fftconvolve gives a very >> sharp peak. >> >> I uploaded a short program with the input and the results I am getting >> to http://hal.elte.hu/~abeld/scipy_signal_issue/ >> >> Am I doing something wrong? Should there be such a difference in the >> output of these functions? What is causing the difference?
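To pin down the alignment, a small check along the lines of Chris's impulse experiment (a sketch for 1-D convolution with len(x) = n and len(h) = m; the slicing identities are the usual numpy-style conventions, shown here for an odd-length kernel):

import numpy as np
from scipy.signal import convolve

x = np.zeros(8)
x[2] = 1.0                      # impulse at index 2
h = np.array([1.0, 0.5, 0.25])  # m = 3
full = convolve(x, h, mode='full')    # length n + m - 1; element k sums x[j]*h[k-j]
valid = convolve(x, h, mode='valid')  # length n - m + 1; only complete overlaps
same = convolve(x, h, mode='same')    # length n; centred slice of full
print(np.allclose(valid, full[len(h) - 1 : len(x)]))                             # True
print(np.allclose(same, full[(len(h) - 1) // 2 : (len(h) - 1) // 2 + len(x)]))   # True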
>> >> (I am using Ubuntu Lucid, version of python-scipy package is >> 0.7.0-2ubuntu0.1) >> >> Thanks in advance, >> Daniel Abel >> abli at freemail.hu >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From josef.pktd at gmail.com Tue Jan 18 21:58:52 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 18 Jan 2011 21:58:52 -0500 Subject: [SciPy-User] Using scipy.signal.fftconvolve() and scipy.signal.convolve() In-Reply-To: References: <519471.22161.qm@web113412.mail.gq1.yahoo.com> Message-ID: On Tue, Jan 18, 2011 at 9:57 PM, wrote: > On Tue, Jan 18, 2011 at 9:21 PM, Chris Rodgers > wrote: >> While the output from the convolution statements looks qualitatively >> correct to me, I always have difficulty interpreting them because I >> don't know the "lags" associated with each value. By lag I mean the >> amount by which one signal was delayed before calculating the dot >> product of the two signals. In Matlab for example, the conv function >> returns both the values and a separate array called "lags" of the same >> size, which is very helpful. >> >> The scipy documentation is not very clear on this point. Does anyone >> know a good resource documenting the lags in each of the 3 modes of >> operation ('same', 'valid', 'full')? I tried convolving dummy >> sequences ([0,1,0,0] etc) but I couldn't figure it out. > > valid and full have a clear definition. > for valid, signal convolve cuts on both sides symmetrically, I think. typo for "same", signal convolve ... > I wrote some helper functions, so I don't have to think. And I found > the comparison > with ndimage useful, because it has an option for this. > > integer is start, decimal is end, if I did it correctly: > >>>> from scipy.signal import convolve >>>> convolve(np.arange(1,10), [0.1, 0, 1], mode='valid') > array([ 1.3,  2.4,  3.5,  4.6,  5.7,  6.8,  7.9]) >>>> convolve(np.arange(1,10), [0.1, 0, 1], mode='full') > array([ 0.1,  0.2,  1.3,  2.4,  3.5,  4.6,  5.7,  6.8,  7.9,  8. ,  9. ]) >>>> convolve(np.arange(1,10), [0.1, 0, 1], mode='same') > array([ 0.2,  1.3,  2.4,  3.5,  4.6,  5.7,  6.8,  7.9,  8. ]) > > not so clear with even length window > >>>> convolve(np.arange(1,10), [0.1, 0, 0, 1], mode='full') > array([ 0.1,  0.2,  0.3,  1.4,  2.5,  3.6,  4.7,  5.8,  6.9,  7. ,  8. , >        9. ]) >>>> convolve(np.arange(1,10), [0.1, 0, 0, 1], mode='same') > array([ 0.2,  0.3,  1.4,  2.5,  3.6,  4.7,  5.8,  6.9,  7. ]) > >>>> convolve(np.arange(1,10), [0.1, 0, 0,0, 1], mode='full') > array([ 0.1,  0.2,  0.3,  0.4,  1.5,  2.6,  3.7,  4.8,  5.9,  6. ,  7. , >        8. ,  9. ]) >>>> convolve(np.arange(1,10), [0.1, 0, 0,0, 1], mode='same') > array([ 0.3,  0.4,  1.5,  2.6,  3.7,  4.8,  5.9,  6. ,  7. ]) > > I think I usually avoided same > > Josef > >> >> Thanks!! >> Chris >> >> >> >> On Tue, Jan 18, 2011 at 1:19 PM, David Baddeley >> wrote: >>> What you're getting from fftconvolve looks about right - with ordinary convolve >>> I suspect your problem might be that you're using 8 bit ints and it's >>> overflowing & thus giving you the random noise pattern. FFTs cast their inputs >>> to double first. I'd suggest casting your image to float - ie: >>> a = a.astype('f') >>> before doing the standard convolutions. >>> >>> cheers, >>> David >>> >>> >>> ----- Original Message ---- >>> From: Ábel Dániel >>> To: scipy-user >>> Sent: Wed, 19 January, 2011 8:58:28 AM >>> Subject: [SciPy-User] Using scipy.signal.fftconvolve() and >>> scipy.signal.convolve() >>> >>> [apologies if this might get duplicated, it appears my first >>> submission didn't show up on the mailing list] >>> >>> Hi! >>> >>> I would like to ask some help with the use of scipy.signal.convolve >>> and scipy.signal.fftconvolve. (On a greyscale 2d image.) >>> >>> Based on the documentation of fftconvolve (which is simply 'See >>> convolve.'), I am assuming that they should give (mostly) the same >>> result. (I.e. the result won't be exactly identical since they are >>> using different methods, but they shouldn't be too different.) >>> >>> However, I am getting drastically different results: using convolve >>> results in basically random noise, while fftconvolve gives a very >>> sharp peak. >>> >>> I uploaded a short program with the input and the results I am getting >>> to http://hal.elte.hu/~abeld/scipy_signal_issue/ >>> >>> Am I doing something wrong? Should there be such a difference in the >>> output of these functions? What is causing the difference? >>> >>> (I am using Ubuntu Lucid, version of python-scipy package is >>> 0.7.0-2ubuntu0.1) >>> >>> Thanks in advance, >>> Daniel Abel >>> abli at freemail.hu >>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >>> >>> >>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > From abli at freemail.hu Wed Jan 19 03:19:10 2011 From: abli at freemail.hu (Abel Daniel) Date: Wed, 19 Jan 2011 09:19:10 +0100 Subject: [SciPy-User] Using scipy.signal.fftconvolve() and scipy.signal.convolve() In-Reply-To: <519471.22161.qm@web113412.mail.gq1.yahoo.com> References: <519471.22161.qm@web113412.mail.gq1.yahoo.com> Message-ID: <20110119081910.GL30170@hal.elte.hu> On Tue, Jan 18, 2011 at 01:19:24PM -0800, David Baddeley wrote: > What you're getting from fftconvolve looks about right - with ordinary convolve > I suspect your problem might be that you're using 8 bit ints and it's > overflowing & thus giving you the random noise pattern. FFTs cast their inputs > to double first. I'd suggest casting your image to float - ie: > a = a.astype('f') > before doing the standard convolutions. Thanks! Indeed that was the problem; casting as you suggested results in convolve giving the same results as fftconvolve. Daniel Abel abli at freemail.hu From denis-bz-gg at t-online.de Wed Jan 19 04:56:53 2011 From: denis-bz-gg at t-online.de (denis) Date: Wed, 19 Jan 2011 01:56:53 -0800 (PST) Subject: [SciPy-User] how to do easy things in octave with numpy ?
In-Reply-To: <4D340B45.10600@cea.fr> References: <4D340B45.10600@cea.fr> Message-ID: Thierry, take a look at http://www.scipy.org/NumPy_for_Matlab_Users http://mathesaurus.sourceforge.net/matlab-numpy.html also the nice array cheat sheet http://pages.physics.cornell.edu/~myers/teaching/ComputationalMethods/python/arrays.html cheers -- denis On Jan 17, 10:26 am, MONTAGU Thierry wrote: > Hi all scipy users > > I wonder how to do the following easy thing in octave within a > python/numpy script. From rcsqtc at iqac.csic.es Wed Jan 19 05:50:37 2011 From: rcsqtc at iqac.csic.es (Ramon Crehuet) Date: Wed, 19 Jan 2011 11:50:37 +0100 Subject: [SciPy-User] Operations with matrices Message-ID: <4D36C1FD.1030601@iqac.csic.es> Dear scipy users, I would like to know if I can avoid the following double loop somehow, because I need to perform these operations many times and with relatively large matrices. I have a two-dimensional array r whose elements I have to add (to calculate a histogram) into a vector c. And the index of c where an element r[i,j] goes is in an array d, so that:

c = zeros(n)
for (i,j) in d:
    k = d[i,j]
    c[k] += r[i,j]

Is there a way to avoid this loop? Thanks for your attention. Ramon From rcsqtc at iqac.csic.es Wed Jan 19 06:00:52 2011 From: rcsqtc at iqac.csic.es (Ramon Crehuet) Date: Wed, 19 Jan 2011 12:00:52 +0100 Subject: [SciPy-User] Operations with matrices In-Reply-To: <4D36C1FD.1030601@iqac.csic.es> References: <4D36C1FD.1030601@iqac.csic.es> Message-ID: <4D36C464.3030005@iqac.csic.es> Ooops! It is not like this! On 19/01/11 11:50, Ramon Crehuet wrote:
> c = zeros(n)
> for (i,j) in d:
>     k = d[i,j]
>     c[k] += r[i,j]
it should be like:

c = zeros(n)
for position, value in ndenumerate(d):
    c[value] += r[position]

This is simpler, but probably still too slow... Thanks again! Ramon From e.antero.tammi at gmail.com Wed Jan 19 06:20:26 2011 From: e.antero.tammi at gmail.com (eat) Date: Wed, 19 Jan 2011 11:20:26 +0000 (UTC) Subject: [SciPy-User] Operations with matrices References: <4D36C1FD.1030601@iqac.csic.es> <4D36C464.3030005@iqac.csic.es> Message-ID: Ramon Crehuet iqac.csic.es> writes: Hi > > Ooops! > It is not like this! > > On 19/01/11 11:50, Ramon Crehuet wrote: > > c = zeros(n) > > for (i,j) in d: > > k = d[i,j] > > c[k] += r[i,j] > it should be like: > c = zeros(n) > for position, value in ndenumerate(d): > c[value] += r[position] Have you checked bincount? Regards, eat > > This is simpler, but probably still too slow... > Thanks again! > Ramon > From e.antero.tammi at gmail.com Wed Jan 19 16:01:59 2011 From: e.antero.tammi at gmail.com (eat) Date: Wed, 19 Jan 2011 21:01:59 +0000 (UTC) Subject: [SciPy-User] Note: some mail list threads seem to be out of sync! Message-ID: Hi, Just noticed that the mail list threads at http://mail.scipy.org/pipermail/scipy-user and http://dir.gmane.org/gmane.comp.python.scientific.user aren't the same. For example only the head of the thread http://mail.scipy.org/pipermail/scipy-user/2011-January/028042.html is shown at http://dir.gmane.org/gmane.comp.python.scientific.user Also my personal e-mail archive seems to be in sync with pipermail. Regards, eat From robert.kern at gmail.com Wed Jan 19 16:12:16 2011 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 19 Jan 2011 15:12:16 -0600 Subject: [SciPy-User] Note: some mail list threads seem to be out of sync!
In-Reply-To: References: Message-ID: On Wed, Jan 19, 2011 at 15:01, eat wrote: > Hi, > > Just noticed that the mail list threads at > http://mail.scipy.org/pipermail/scipy-user and > http://dir.gmane.org/gmane.comp.python.scientific.user aren't the same. > > For example only the head of the thread > http://mail.scipy.org/pipermail/scipy-user/2011-January/028042.html is shown at > http://dir.gmane.org/gmane.comp.python.scientific.user > > Also my personal e-mail archive seems to be in sync with pipermail. The individual messages appear to be there. http://search.gmane.org/?query=code+for+multidimensional+scaling&group=gmane.comp.python.scientific.user They may not be showing up properly threaded in the GMane web interface, though. If this is a problem for you, please contact the GMane maintainers. We have no control or visibility into the problem. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From josef.pktd at gmail.com Wed Jan 19 16:17:06 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 19 Jan 2011 16:17:06 -0500 Subject: [SciPy-User] Note: some mail list threads seem to be out of sync! In-Reply-To: References: Message-ID: On Wed, Jan 19, 2011 at 4:12 PM, Robert Kern wrote: > On Wed, Jan 19, 2011 at 15:01, eat wrote: >> Hi, >> >> Just noticed that the mail list threads at >> http://mail.scipy.org/pipermail/scipy-user and >> http://dir.gmane.org/gmane.comp.python.scientific.user aren't the same.
>>> >>> For example only the head of the thread >>> http://mail.scipy.org/pipermail/scipy-user/2011-January/028042.html is shown at >>> http://dir.gmane.org/gmane.comp.python.scientific.user >>> >>> Also my personal e-mail archive seems to be in sync with pipermail. >> >> The individual messages appear to be there. >> >> http://search.gmane.org/?query=code+for+multidimensional+scaling&group=gmane.comp.python.scientific.user > > I see them threaded in the split window view > > http://thread.gmane.org/gmane.comp.python.scientific.user/27599 But not here (go to page 2): http://news.gmane.org/gmane.comp.python.scientific.user Weird. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From e.antero.tammi at gmail.com Wed Jan 19 16:29:52 2011 From: e.antero.tammi at gmail.com (E. Antero Tammi) Date: Wed, 19 Jan 2011 23:29:52 +0200 Subject: [SciPy-User] Note: some mail list threads seem to be out of sync! In-Reply-To: References: Message-ID: Hi On Wed, Jan 19, 2011 at 11:12 PM, Robert Kern wrote: > On Wed, Jan 19, 2011 at 15:01, eat wrote: > > Hi, > > > > Just noticed that the mail list threads at > > http://mail.scipy.org/pipermail/scipy-user and > > http://dir.gmane.org/gmane.comp.python.scientific.user aren't the same. > > > > For example only the head of the thread > > http://mail.scipy.org/pipermail/scipy-user/2011-January/028042.html is shown at > > http://dir.gmane.org/gmane.comp.python.scientific.user > > > > Also my personal e-mail archive seems to be in sync with pipermail. > > The individual messages appear to be there. > > > http://search.gmane.org/?query=code+for+multidimensional+scaling&group=gmane.comp.python.scientific.user > > They may not be showing up properly threaded in the GMane web > interface, though. If this is a problem for you, please contact the > GMane maintainers. We have no control or visibility into the problem. Not so much of a problem for me personally, but I just thought to let people be aware of this. Thanks, eat > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kwgoodman at gmail.com Wed Jan 19 18:48:42 2011 From: kwgoodman at gmail.com (Keith Goodman) Date: Wed, 19 Jan 2011 15:48:42 -0800 Subject: [SciPy-User] [ANN] Bottleneck 0.3 Message-ID: Bottleneck is a collection of fast NumPy array functions written in Cython. It contains functions like median, nanmedian, nanargmax, move_mean. The third release of Bottleneck is twice as fast for small input arrays and contains 10 new functions.
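A quick taste of what these functions provide (a sketch; the call forms are inferred from the release notes below, e.g. moving-window functions taking (arr, window) with axis defaulting to -1, so the exact 0.3 signatures may differ slightly):

import numpy as np
import bottleneck as bn

a = np.array([1.0, np.nan, 3.0, 4.0])
print(bn.nansum(a))           # 8.0
print(bn.nanmedian(a))        # 3.0, ignoring the nan
print(bn.move_nanmean(a, 2))  # moving mean over a window of 2 along the last axis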
Faster:
- All functions are faster (less overhead in selector functions)

New functions:
- nansum()
- move_sum()
- move_nansum()
- move_mean()
- move_std()
- move_nanstd()
- move_min()
- move_nanmin()
- move_max()
- move_nanmax()

Enhancements:
- You can now specify the dtype and axis to use in the benchmark timings
- Improved documentation and more unit tests

Breaks from 0.2.0:
- Moving window functions now default to axis=-1 instead of axis=0
- Low-level moving window selector functions no longer take window as input

Bug fix:
- int input array resulted in call to slow, non-cython version of move_nanmean

download http://pypi.python.org/pypi/Bottleneck docs http://berkeleyanalytics.com/bottleneck code http://github.com/kwgoodman/bottleneck mailing list http://groups.google.com/group/bottle-neck mailing list 2 http://mail.scipy.org/mailman/listinfo/scipy-user From rcsqtc at iqac.csic.es Thu Jan 20 03:51:59 2011 From: rcsqtc at iqac.csic.es (Ramon Crehuet) Date: Thu, 20 Jan 2011 09:51:59 +0100 Subject: [SciPy-User] Operations with matrices In-Reply-To: References: Message-ID: <4D37F7AF.6010208@iqac.csic.es> Hi Eat, I do not see how bincount could help. It only accepts a 1D array as input, which is not the case, and the hard thing is to avoid the loop in indices on d. Cheers, Ramon > Date: Wed, 19 Jan 2011 11:20:26 +0000 (UTC) > From: eat > Subject: Re: [SciPy-User] Operations with matrices > To: scipy-user at scipy.org > Message-ID: > Content-Type: text/plain; charset=us-ascii > > Ramon Crehuet iqac.csic.es> writes: > > Hi >> Ooops! >> It is not like this! >> >> On 19/01/11 11:50, Ramon Crehuet wrote:
>>> c = zeros(n)
>>> for (i,j) in d:
>>>     k = d[i,j]
>>>     c[k] += r[i,j]
>> it should be like:
>> c = zeros(n)
>> for position, value in ndenumerate(d):
>>     c[value] += r[position]
> Have you checked bincount? > > Regards, > eat >> This is simpler, but probably still too slow... >> Thanks again! >> Ramon >> > > > > > > ------------------------------ > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > > End of SciPy-User Digest, Vol 89, Issue 35 > ****************************************** > From sebastian at sipsolutions.net Thu Jan 20 04:46:37 2011 From: sebastian at sipsolutions.net (Sebastian Berg) Date: Thu, 20 Jan 2011 10:46:37 +0100 Subject: [SciPy-User] Operations with matrices In-Reply-To: <4D37F7AF.6010208@iqac.csic.es> References: <4D37F7AF.6010208@iqac.csic.es> Message-ID: <1295516797.2395.3.camel@sebastian> Hello, On Thu, 2011-01-20 at 09:51 +0100, Ramon Crehuet wrote: > Hi Eat, > I do not see how bincount could help. It only accepts a 1D array as input, which > is not the case, and the hard thing is to avoid the loop in indices on d.
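For reference, a sketch of the weighted use of bincount that sidesteps the 1D restriction by flattening; d is assumed here to hold integer bin indices in [0, n):

import numpy as np

n = 5
d = np.random.randint(0, n, size=(4, 6))  # bin index for each element of r
r = np.random.rand(4, 6)                  # values to accumulate
c = np.bincount(d.ravel(), weights=r.ravel())
# bincount returns length max(d)+1; pad with zeros if the highest bins are empty:
if len(c) < n:
    c = np.concatenate([c, np.zeros(n - len(c))])
# c[k] now equals the sum of r[i,j] over all (i,j) with d[i,j] == k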
> >> > >> On 19/01/11 11:50, Ramon Crehuet wrote: > >>> c = zeros(n) > >>> for (i,j) in d: > >>> k = d[i,j] > >>> c[k] += r[i,j] > >> it should be like: > >> c = zeros(n) > >> for position, value in ndenumerate(d): > >> c[value] += r[position] > > Have you checked bincount? > > > > Regards, > > eat > >> This is simpler, but probably still too slow... > >> Thanks again! > >> Ramon > >> > > > > > > > > > > > > ------------------------------ > > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > > End of SciPy-User Digest, Vol 89, Issue 35 > > ****************************************** > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From opossumnano at gmail.com Thu Jan 20 05:12:50 2011 From: opossumnano at gmail.com (Tiziano Zito) Date: Thu, 20 Jan 2011 11:12:50 +0100 Subject: [SciPy-User] [Ann] EuroScipy 2011 - Call for papers Message-ID: <20110120101250.GG31049@tulpenbaum.cognition.tu-berlin.de> ========================= Announcing EuroScipy 2011 ========================= --------------------------------------------- The 4th European meeting on Python in Science --------------------------------------------- **Paris, Ecole Normale Sup?rieure, August 25-28 2011** We are happy to announce the 4th EuroScipy meeting, in Paris, August 2011. The EuroSciPy meeting is a cross-disciplinary gathering focused on the use and development of the Python language in scientific research. This event strives to bring together both users and developers of scientific tools, as well as academic research and state of the art industry. Main topics =========== - Presentations of scientific tools and libraries using the Python language, including but not limited to: - vector and array manipulation - parallel computing - scientific visualization - scientific data flow and persistence - algorithms implemented or exposed in Python - web applications and portals for science and engineering. - Reports on the use of Python in scientific achievements or ongoing projects. - General-purpose Python tools that can be of special interest to the scientific community. Tutorials ========= There will be two tutorial tracks at the conference, an introductory one, to bring up to speed with the Python language as a scientific tool, and an advanced track, during which experts of the field will lecture on specific advanced topics such as advanced use of numpy, scientific visualization, software engineering... Keynote Speaker: Fernando Perez =============================== We are excited to welcome Fernando Perez (UC Berkeley, Helen Wills Neuroscience Institute, USA) as our keynote speaker. Fernando Perez is the original author of the enhanced interactive python shell IPython and a very active contributor to the Python for Science ecosystem. Important dates =============== Talk submission deadline: Sunday May 8 Program announced: Sunday May 29 Tutorials tracks: Thursday August 25 - Friday August 26 Conference track: Saturday August 27 - Sunday August 28 Call for papers =============== We are soliciting talks that discuss topics related to scientific computing using Python. These include applications, teaching, future development directions, and research. We welcome contributions from the industry as well as the academic world. 
Indeed, industrial research and development as well as academic research face the challenge of mastering IT tools for exploration, modeling and analysis. We look forward to hearing your recent breakthroughs using Python!

Submission guidelines
=====================

- We solicit talk proposals in the form of a one-page long abstract.
- Submissions whose main purpose is to promote a commercial product or service will be refused.
- All accepted proposals must be presented at the EuroSciPy conference by at least one author.

The one-page long abstracts are for conference planning and selection purposes only. We will later select papers for publication of post-proceedings in a peer-reviewed journal.

How to submit an abstract
=========================

To submit a talk to the EuroScipy conference follow the instructions here: http://www.euroscipy.org/card/euroscipy2011_call_for_papers

Organizers
==========

Chairs:
- Gaël Varoquaux (INSERM, Unicog team, and INRIA, Parietal team)
- Nicolas Chauvat (Logilab)

Local organization committee:
- Emmanuelle Gouillart (Saint-Gobain Recherche)
- Jean-Philippe Chauvat (Logilab)

Tutorial chair:
- Valentin Haenel (MKP, Technische Universität Berlin)

Program committee:
- Chair: Tiziano Zito (MKP, Technische Universität Berlin)
- Romain Brette (ENS Paris, DEC)
- Emmanuelle Gouillart (Saint-Gobain Recherche)
- Eric Lebigot (Laboratoire Kastler Brossel, Université Pierre et Marie Curie)
- Konrad Hinsen (Soleil Synchrotron, CNRS)
- Hans Petter Langtangen (Simula laboratories)
- Jarrod Millman (UC Berkeley, Helen Wills NeuroScience institute)
- Mike Müller (Python Academy)
- Didrik Pinte (Enthought Inc)
- Marc Poinot (ONERA)
- Christophe Pradal (CIRAD/INRIA, Virtual Plantes team)
- Andreas Schreiber (DLR)
- Stéfan van der Walt (University of Stellenbosch)

Website
=======

http://www.euroscipy.org/conference/euroscipy_2011

From seb.haase at gmail.com Thu Jan 20 08:36:20 2011 From: seb.haase at gmail.com (Sebastian Haase) Date: Thu, 20 Jan 2011 14:36:20 +0100 Subject: [SciPy-User] scipy.optimize lmder_ and dlopen / dlsym In-Reply-To: <904894.90812.qm@web113410.mail.gq1.yahoo.com> References: <904894.90812.qm@web113410.mail.gq1.yahoo.com> Message-ID: Hi David, thanks for your comments. I was now trying to write some numpy C API code myself. I'm making only slow progress. So if you wouldn't mind sending your C coded objective function I would appreciate it. Maybe one could eventually convince leastsq to also accept a C objective function directly (w/o the extra step of wrapping it into a Python function). I would start by duplicating leastsq into a second version which would not make such heavy use of introspection. Regarding my original question: I could rephrase this to how to do Levenberg-Marquardt least square fitting in a completely standalone C program. Maybe this could be done by linking to lmder.o the same way minpack_lmder() does it in "__minpack.h". Once this is figured out, I would obviously want to go back and SWIG the "big picture function".... Thanks for your help, Sebastian On Tue, Jan 18, 2011 at 9:16 PM, David Baddeley wrote: > Hi Sebastian, > > I've done something similar - I can tell you that writing a c objective function > definitely speeds things up, even with the function call overhead from python, > although I too would like to be able to call the fortran methods directly from > c. Had a quick look at the leastsq code, but it was sufficiently intractable > that I put it towards the back of my priority list.
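On the dlopen/dlsym idea mentioned above, ctypes exposes the same mechanism from Python, as in this minimal sketch; the shared-object name libminpack.so is purely illustrative, and declaring lmder_'s full Fortran argument list would still be required before calling it, which is why wrapping from C tends to be less painful:

import ctypes
try:
    lib = ctypes.CDLL("libminpack.so")  # hypothetical library exporting the MINPACK symbols
    lmder = lib.lmder_                  # Fortran symbols are usually lowercase + trailing underscore
except OSError:
    lmder = None                        # no such shared library on this machine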
If writing a c objective > function I'd suggest you avoid SWIG and go with the numpy c api as SWIG does > introduce a bit of overhead. You can also get a speed up by generating the > Gaussian into a pre-allocated numpy array, although you have to be a bit careful > if you do this for the residuals as lmderr holds a pointer/reference to the last > set of residuals, so if you overwrite these, things don't work. I also found that > you can't give leastsq a c goal function directly & that you have to wrap it in > a python function (it does some introspection on the function object to get the > function name to print a nice message about convergence, or lack thereof). Even > with these caveats, I did find a substantial improvement over just using a > Gaussian coded in python. I could probably give you my c coded objective > functions to give you something to work from, if desired. > > cheers, > David > > > ----- Original Message ---- > From: Sebastian Haase > To: SciPy Users List > Sent: Wed, 19 January, 2011 4:38:14 AM > Subject: [SciPy-User] scipy.optimize lmder_ and dlopen / dlsym > > Hi, > I was trying to do some 2d Gaussian fitting on my image data using > scipy.optimize.leastsq. > This worked, but it still takes about 1ms for fitting sigma and > peakHeight on a 7x7 pixel image area. > So I'm thinking about speeding things up: > Writing the objective function and the Jacobian in C (I like using > SWIG) would be one thing. > But I'm also thinking that for such small problems, the overhead of > function calls back and forth in Python (for each parameter set > evaluation) would be problematic. > > So my question is: > Can I access the underlying lmder_ function (which is normally called > from leastsq via minpack [ _minpackmodule.c ]) > directly from my SWIGged extension module ? > > I'm thinking of LoadLibrary (win32) and dlopen/dlsym (linux) ... > > The advantage would be that I would _not_ have to find another C > implementation of Levenberg-Marquardt. > I found http://www.ics.forth.gr/~lourakis/levmar/index.html, but that > didn't even compile on debian and would be another dependency. > > Thanks, > Sebastian Haase > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From Gregor.Thalhammer at gmail.com Thu Jan 20 10:45:44 2011 From: Gregor.Thalhammer at gmail.com (Gregor Thalhammer) Date: Thu, 20 Jan 2011 16:45:44 +0100 Subject: [SciPy-User] scipy.optimize lmder_ and dlopen / dlsym In-Reply-To: References: Message-ID: On 18.1.2011, at 16:38, Sebastian Haase wrote: > Hi, > I was trying to do some 2d Gaussian fitting on my image data using > scipy.optimize.leastsq. > This worked, but it still takes about 1ms for fitting sigma and > peakHeight on a 7x7 pixel image area. > So I'm thinking about speeding things up: > Writing the objective function and the Jacobian in C (I like using > SWIG) would be one thing. > But I'm also thinking that for such small problems, the overhead of > function calls back and forth in Python (for each parameter set > evaluation) would be problematic. > I also once wrote a program to fit a 2d Gaussian to an image, which in my case was about 500x500 in size. Contrary to your approach, I ended up with a pure python replacement for the minpack based leastsq to get better performance.
I did some profiling; for my data size most time is indeed spent for calculating the objective function, Jacobian and some linear algebra (dot, qr). minpack contains its own implementation for the qr decomposition, which is slower than optimized LAPACK routines. Furthermore, for my data size solving the normal equations instead of qr gave a large speed boost. For your image size the Python overhead might be a problem; cython might be helpful in this regard. In my case, a good choice of the starting values was the most important point to improve the speed. A good algorithm is more important than a fast implementation. If you are interested, I can share the code. Gregor > So my question is: > Can I access the underlying lmder_ function (which is normally called > from leastsq via minpack [ _minpackmodule.c ]) > directly from my SWIGged extension module ? > > I'm thinking of LoadLibrary (win32) and dlopen/dlsym (linux) ... > > The advantage would be that I would _not_ have to find another C > implementation of Levenberg-Marquardt. > I found http://www.ics.forth.gr/~lourakis/levmar/index.html, but that > didn't even compile on debian and would be another dependency. > > Thanks, > Sebastian Haase > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From joonpyro at gmail.com Thu Jan 20 10:47:55 2011 From: joonpyro at gmail.com (Joon Ro) Date: Thu, 20 Jan 2011 09:47:55 -0600 Subject: [SciPy-User] Ask Scipy - cannot log in Message-ID: Hi, I don't know if the ask scipy site is not used anymore - but it seems I cannot log in. I tried to use an open id but never have gotten the activation email. The login page seems to be buggy as well; if I click on the Yahoo! icon it just goes back to the openid. Thank you, Joon -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From seb.haase at gmail.com Thu Jan 20 11:05:26 2011 From: seb.haase at gmail.com (Sebastian Haase) Date: Thu, 20 Jan 2011 17:05:26 +0100 Subject: [SciPy-User] scipy.optimize lmder_ and dlopen / dlsym In-Reply-To: References: Message-ID: Hi Gregor, sure. If you don't mind I would like to take a look at your code. My images are time series of 512x512 pixels. But my spots are only a few pixels in diameter, so that I can - after some filtering and segmentation - crop the image into many small 7x7 piclets. Those are the ones where I have to do the 2d Gaussian fitting. Do you have any pointer on where I can learn about the part where you say "solving the normal equations instead of qr " ... ? Thanks, Sebastian On Thu, Jan 20, 2011 at 4:45 PM, Gregor Thalhammer wrote: > > On 18.1.2011, at 16:38, Sebastian Haase wrote: >> Hi, >> I was trying to do some 2d Gaussian fitting on my image data using >> scipy.optimize.leastsq. >> This worked, but it still takes about 1ms for fitting sigma and >> peakHeight on a 7x7 pixel image area. >> So I'm thinking about speeding things up: >> Writing the objective function and the Jacobian in C (I like using >> SWIG) would be one thing. >> But I'm also thinking that for such small problems, the overhead of >> function calls back and forth in Python (for each parameter set >> evaluation) would be problematic. >> > > I also once wrote a program to fit a 2d Gaussian to an image, which in my case was about 500x500 in size. Contrary to your approach, I ended up with a pure python replacement for the minpack based leastsq to get better performance.
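A sketch of what "solving the normal equations instead of qr" amounts to for one Gauss-Newton/Levenberg-Marquardt step, given a Jacobian J and residual vector r (the sizes below are illustrative for a 7x7 fit with 4 parameters):

import numpy as np

J = np.random.randn(49, 4)   # 49 residuals, 4 parameters
r = np.random.randn(49)
# QR-based step (what minpack effectively does; avoids squaring the condition number):
Q, R = np.linalg.qr(J)
dx_qr = np.linalg.solve(R, np.dot(Q.T, r))
# Normal equations (cheaper for small problems, but less numerically robust):
dx_ne = np.linalg.solve(np.dot(J.T, J), np.dot(J.T, r))
print(np.allclose(dx_qr, dx_ne))  # True for well-conditioned J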
I did some profiling; for my data size most time is indeed spent for calculating the objective function, Jacobian and some linear algebra (dot, qr). > minpack contains its own implementation for the qr decomposition, which is slower than optimized LAPACK routines. Furthermore, for my data size solving the normal equations instead of qr gave a large speed boost. For your image size the Python overhead might be a problem; cython might be helpful in this regard. In my case, a good choice of the starting values was the most important point to improve the speed. A good algorithm is more important than a fast implementation. If you are interested, I can share the code. > > Gregor From jsseabold at gmail.com Thu Jan 20 11:12:33 2011 From: jsseabold at gmail.com (Skipper Seabold) Date: Thu, 20 Jan 2011 11:12:33 -0500 Subject: [SciPy-User] Ask Scipy - cannot log in In-Reply-To: References: Message-ID: On Thu, Jan 20, 2011 at 10:47 AM, Joon Ro wrote: > Hi, > I don't know if the ask scipy site is not used anymore - but it seems I cannot > log in. I tried to use an open id but never have gotten the activation email. > The login page seems to be buggy as well; if I click on the Yahoo! icon it just > goes back to the openid. > Thank you, > Joon > The only workaround I've ever been able to use is to copy and paste https://www.google.com/accounts/o8/id into the login field and then click login. Skipper From joonpyro at gmail.com Thu Jan 20 11:56:32 2011 From: joonpyro at gmail.com (Joon Ro) Date: Thu, 20 Jan 2011 10:56:32 -0600 Subject: [SciPy-User] Ask Scipy - cannot log in Message-ID: Actually the problem was that I never got any activation email from ask.scipy after account generation. I just tried a different username, and it said to check the email to activate the account, but the email never arrived. (No spam, nothing) When I try to log in, "The user is not yet activated." message comes up. I cannot resend the activation email. Now I have ended up with 3 unactivated accounts. I don't know if I'm the only one having this problem, but the account/login problem needs to be fixed for the site to be usable. I was wondering if there are any administrators of this site? Thank you, Joon On Thu, Jan 20, 2011 at 10:47 AM, Joon Ro wrote: Hi, I don't know if the ask scipy site is not used anymore - but it seems I cannot log in. I tried to use an open id but never have gotten the activation email. The login page seems to be buggy as well; if I click on the Yahoo! icon it just goes back to the openid. Thank you, Joon The only workaround I've ever been able to use is to copy and paste https://www.google.com/accounts/o8/id into the login field and then click login. Skipper -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdekauwe at gmail.com Thu Jan 20 17:07:25 2011 From: mdekauwe at gmail.com (mdekauwe) Date: Thu, 20 Jan 2011 14:07:25 -0800 (PST) Subject: [SciPy-User] [SciPy-user] fmin error surface Message-ID: <30723809.post@talk.nabble.com> Hi, Apologies, I will use the incorrect technical terminology here... So I was playing around with the scipy simplex (fmin) and I didn't seem to be able to see any option to return the value evaluated by whichever cost function you choose. I see you can return all the minimised iterations, but I quite like plotting (x, y) minimised value against cost to visualise the error surface.
The only way I can see you can do it is by editing optimize.py and adding

if retall:
    allvecs = [sim[0]]
    cost = [fsim[0]]

if retall:
    allvecs.append(sim[0])
    cost.append(fsim[0])

if full_output:
    retlist = x, fval, iterations, fcalls[0], warnflag
    if retall:
        retlist += (allvecs, cost)

I think it would be a useful thing to have returned? Or perhaps not? thanks, Martin -- View this message in context: http://old.nabble.com/fmin-error-surface-tp30723809p30723809.html Sent from the Scipy-User mailing list archive at Nabble.com. From sebastian at sipsolutions.net Thu Jan 20 17:29:25 2011 From: sebastian at sipsolutions.net (Sebastian Berg) Date: Thu, 20 Jan 2011 23:29:25 +0100 Subject: [SciPy-User] [SciPy-user] fmin error surface In-Reply-To: <30723809.post@talk.nabble.com> References: <30723809.post@talk.nabble.com> Message-ID: <1295562565.2482.20.camel@sebastian> Hello, On Thu, 2011-01-20 at 14:07 -0800, mdekauwe wrote: > Hi, > > Apologies, I will use the incorrect technical terminology here... > > So I was playing around with the scipy simplex (fmin) and I didn't seem to > be able to see any option to return the value evaluated by whichever cost > function you choose. I see you can return all the minimised iterations, but > I quite like plotting (x, y) minimised value against cost to visualise the > error surface. The only way I can see you can do it is by editing > optimize.py and adding > I think you should probably just add that to your own function that you call, as that's rather simple. There is not much need in adding it to the fmin itself. If you like to put it on and off easily, or like the option of just adding it quickly to an existing program, maybe write yourself a decorator, i.e.:

from scipy.optimize import fmin

def store_cost(func):
    x_list = []
    cost_list = []
    def new_func(x, *args):
        x_list.append(x.copy())
        e = func(x, *args)
        cost_list.append(e)
        return e
    new_func.x = x_list
    new_func.cost = cost_list
    return new_func

@store_cost
def func(x):
    return (x - 10)**2

fmin(func, [0])

print func.x
print func.cost

Some neat python foo for you ;) Regards, Sebastian
> if retall:
>     allvecs = [sim[0]]
>     cost = [fsim[0]]
>
> if retall:
>     allvecs.append(sim[0])
>     cost.append(fsim[0])
>
> if full_output:
>     retlist = x, fval, iterations, fcalls[0], warnflag
>     if retall:
>         retlist += (allvecs, cost)
>
> I think it would be a useful thing to have returned? Or perhaps not?
>
> thanks,
>
> Martin
The only way I can see you can do it is by editing >> optimize.py and adding >> > I think you should probably just add that to your own function that you > call, as thats rather simple. There is not much need in adding it to the > fmin itself. If you like to put it on and off easily, or like the option > of just adding it quickly to an existing program, maybe write yourself a > decorator, ie: > > def store_cost(func): > x_list = [] > cost_list = [] > def new_func(x, *args): > x_list.append(x.copy()) > e = func(x, *args) > cost_list.append(e) > return e > new_func.x = x_list > new_func.cost = cost_list > return new_func > > @store_cost > def func(x): > return (x - 10)**2 > fmin(func, [0]) > > print func.x > print func.cost > > > Some neat python foo for you ;) > > Regards, > > Sebastian > >> if retall: >> allvecs = [sim[0]] >> cost = [fsim[0]] >> >> if retall: >> allvecs.append(sim[0]) >> cost.append(fsim[0]) >> >> if full_output: >> retlist = x, fval, iterations, fcalls[0], warnflag >> if retall: >> retlist += (allvecs, cost) >> >> I think it would be a useful thing to have returned? Or perhaps not? >> >> thanks, >> >> Martin > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- View this message in context: http://old.nabble.com/fmin-error-surface-tp30723809p30724030.html Sent from the Scipy-User mailing list archive at Nabble.com. From wardefar at iro.umontreal.ca Fri Jan 21 00:34:03 2011 From: wardefar at iro.umontreal.ca (David Warde-Farley) Date: Fri, 21 Jan 2011 00:34:03 -0500 Subject: [SciPy-User] Ask Scipy - cannot log in In-Reply-To: References: Message-ID: <72CE56D7-4F8D-4DAA-9B8E-A581282FBC88@iro.umontreal.ca> On 2011-01-20, at 11:56 AM, Joon Ro wrote: > Actually the problem was I never have gotten any activation email from ask.scipy after account generation. > I just tried different username, and it said check the email to activate the account, but the email never arrived. > (No spam, nothing) > > When I try to log in, "The user is not yet activated." message comes up. I cannot resend the activation email. > > Now I ended up like 3 not activated accounts. > I don't know if I'm the only one having this problem, but account/login problem needs to be fixed for the site to be usable. > > I was wondering if there is any administrators of this site? There are, I'm just not sure who. Maybe one of the Enthought folks can chime in, though I don't know how much time they have to devote to it. David From klemm at phys.ethz.ch Fri Jan 21 10:22:49 2011 From: klemm at phys.ethz.ch (Hanno Klemm) Date: Fri, 21 Jan 2011 16:22:49 +0100 Subject: [SciPy-User] Bottleneck 0.3 - some tests Fail In-Reply-To: Message-ID: Keith, thanks for the new release. When I build bottleneck 0.3.0 two of the tests fail. Are you aware of that,or is that a problem with my build? I attached a text file of the session. Regards, Hanno On Thu, Jan 20, 2011, Keith Goodman said: > Bottleneck is a collection of fast NumPy array functions written in > Cython. It contains functions like median, nanmedian, nanargmax, > move_mean. > > The third release of Bottleneck is twice as fast for small input > arrays and contains 10 new functions. 
> > Faster: > - All functions are faster (less overhead in selector functions) > > New functions: > - nansum() > - move_sum() > - move_nansum() > - move_mean() > - move_std() > - move_nanstd() > - move_min() > - move_nanmin() > - move_max() > - move_nanmax() > > Enhancements: > - You can now specify the dtype and axis to use in the benchmark timings > - Improved documentation and more unit tests > > Breaks from 0.2.0: > - Moving window functions now default to axis=-1 instead of axis=0 > - Low-level moving window selector functions no longer take window as input > > Bug fix: > - int input array resulted in call to slow, non-cython version of move_nanmean > > download > http://pypi.python.org/pypi/Bottleneck > docs > http://berkeleyanalytics.com/bottleneck > code > http://github.com/kwgoodman/bottleneck > mailing list > http://groups.google.com/group/bottle-neck > mailing list 2 > http://mail.scipy.org/mailman/listinfo/scipy-user > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- Hanno Klemm klemm at phys.ethz.ch -------------- next part -------------- Python 2.6.6 |EPD 6.3-1 (64-bit)| (r266:84292, Sep 18 2010, 08:39:12) [GCC 3.4.6] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import bottleneck as bn >>> bn.__version__ '0.3.0' >>> import numpy as np >>> bn.test() Running unit tests for bottleneck NumPy version 1.4.0 NumPy is installed in /scratch/epd-6.3/lib/python2.6/site-packages/numpy Python version 2.6.6 |EPD 6.3-1 (64-bit)| (r266:84292, Sep 18 2010, 08:39:12) [GCC 3.4.6] nose version 0.11.4 .........................................F.F ====================================================================== FAIL: Test move_max. ---------------------------------------------------------------------- Traceback (most recent call last): File "/scratch/epd-6.3/lib/python2.6/site-packages/nose/case.py", line 186, in runTest self.test(*self.arg) File "/scratch/epd-6.3/lib/python2.6/site-packages/bottleneck/tests/move_test.py", line 58, in unit_maker err_msg) File "/scratch/epd-6.3/lib/python2.6/site-packages/numpy/testing/utils.py", line 765, in assert_array_almost_equal header='Arrays are not almost equal') File "/scratch/epd-6.3/lib/python2.6/site-packages/numpy/testing/utils.py", line 587, in assert_array_compare raise AssertionError(msg) AssertionError: Arrays are not almost equal func move_max | window 2 | input a96 (int32) | shape (1, 2, 3, 4) | axis -2 Input array: [[[[ 0 1 2 3] [ 4 5 6 7] [ 8 9 10 11]] [[12 13 14 15] [16 17 18 19] [20 21 22 23]]]] (x and y nan location mismatch [[[[False False False False] [False False False False] [False False False False]] [[False False False False] [False False False False] [False False False False]]]], [[[[ True True True True] [False False False False] [False False False False]] [[ True True True True] [False False False False] [False False False False]]]] mismatch) x: array([[[[ 0., 1., 2., 3.], [ 4., 5., 6., 7.], [ 8., 9., 10., 11.]],... y: array([[[[ NaN, NaN, NaN, NaN], [ 4., 5., 6., 7.], [ 8., 9., 10., 11.]],... ====================================================================== FAIL: Test move_nanmax. 
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/scratch/epd-6.3/lib/python2.6/site-packages/nose/case.py", line 186, in runTest
    self.test(*self.arg)
  File "/scratch/epd-6.3/lib/python2.6/site-packages/bottleneck/tests/move_test.py", line 58, in unit_maker
    err_msg)
  File "/scratch/epd-6.3/lib/python2.6/site-packages/numpy/testing/utils.py", line 765, in assert_array_almost_equal
    header='Arrays are not almost equal')
  File "/scratch/epd-6.3/lib/python2.6/site-packages/numpy/testing/utils.py", line 587, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Arrays are not almost equal

func move_nanmax | window 2 | input a96 (int32) | shape (1, 2, 3, 4) | axis -2

Input array:
[[[[ 0  1  2  3]
   [ 4  5  6  7]
   [ 8  9 10 11]]

  [[12 13 14 15]
   [16 17 18 19]
   [20 21 22 23]]]]

(x and y nan location mismatch [[[[False False False False]
   [False False False False]
   [False False False False]]

  [[False False False False]
   [False False False False]
   [False False False False]]]], [[[[ True  True  True  True]
   [False False False False]
   [False False False False]]

  [[ True  True  True  True]
   [False False False False]
   [False False False False]]]] mismatch)
 x: array([[[[  0.,   1.,   2.,   3.],
         [  4.,   5.,   6.,   7.],
         [  8.,   9.,  10.,  11.]],...
 y: array([[[[ NaN,  NaN,  NaN,  NaN],
         [  4.,   5.,   6.,   7.],
         [  8.,   9.,  10.,  11.]],...

----------------------------------------------------------------------
Ran 44 tests in 31.774s

FAILED (failures=2)
>>>

From kwgoodman at gmail.com Fri Jan 21 10:56:43 2011
From: kwgoodman at gmail.com (Keith Goodman)
Date: Fri, 21 Jan 2011 07:56:43 -0800
Subject: [SciPy-User] Bottleneck 0.3 - some tests Fail
In-Reply-To:
References:
Message-ID:

On Fri, Jan 21, 2011 at 7:22 AM, Hanno Klemm wrote:
>
> Keith,
>
> thanks for the new release. When I build bottleneck 0.3.0 two of the tests
> fail. Are you aware of that, or is that a problem with my build?
>
> I attached a text file of the session.

Thank you for the report!

I don't see any failures. Anyone else?

Some background on the failing unit test:

Currently only 1d, 2d, and 3d input arrays with data type (dtype) int32,
int64, float32, and float64 are accelerated. All other ndim/dtype
combinations result in calls to slower, unaccelerated functions. The tests
that are failing on your system use a 4d input array, so they call the slow
versions of move_nanmax and move_max, which use
scipy.ndimage.maximum_filter1d.

So my first thought is to blame scipy ;) What version of scipy are you
using? I see you are using numpy 1.4.0, so you are probably using scipy
0.7. Would using scipy 0.8 solve it?

Here's the function (bottleneck/slow/move.py):

def move_max_filter(arr, window, axis=-1):
    "Moving window maximum implemented with a filter."
    if axis == None:
        raise ValueError, "An `axis` value of None is not supported."
    if window < 1:
        raise ValueError, "`window` must be at least 1."
    if window > arr.shape[axis]:
        raise ValueError, "`window` is too long."
    y = arr.astype(float)
    x0 = (window - 1) // 2
    maximum_filter1d(y, window, axis=axis, mode='constant', cval=np.nan,
                     origin=x0, output=y)
    return y
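A quick way to see what your scipy does with the NaN padding -- which is
exactly what the failing tests are sensitive to -- is something like this
sketch (window 2, as in the failures):

import numpy as np
from scipy.ndimage import maximum_filter1d

a = np.arange(10.0)
w = 2
x0 = (w - 1) // 2
print maximum_filter1d(a, w, axis=-1, mode='constant', cval=np.nan,
                       origin=x0)
# whether the NaN pad survives the running maximum in the first slot
# depends on the scipy version; a mismatch here would explain the failures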
> Regards,
> Hanno
>
> [...]

From jsalvati at u.washington.edu Fri Jan 21 16:07:49 2011
From: jsalvati at u.washington.edu (John Salvatier)
Date: Fri, 21 Jan 2011 13:07:49 -0800
Subject: [SciPy-User] Uninformative type error from scipy.optimize.fmin_ncg
Message-ID:

Hello, I am getting a very uninformative error from
scipy.optimize.fmin_ncg (posted below).

Traceback (most recent call last):
  File "C:\Program Files (x86)\pythonxy\eclipse\plugins\org.python.pydev.debug_1.5.6.2010033101\pysrc\pydevd.py", line 953, in
    debugger.run(setup['file'], None, None)
  File "C:\Program Files (x86)\pythonxy\eclipse\plugins\org.python.pydev.debug_1.5.6.2010033101\pysrc\pydevd.py", line 780, in run
    execfile(file, globals, locals) #execute the script
  File "C:\Users\jsalvatier\workspace\analysis\src\residuals\run.py", line 15, in
    sampler.sample(nChains = 5, ndraw = 500, maxGradient = 100)
  File "C:\Python26\lib\site-packages\multichain_mcmc\amala.py", line 150, in sample
    mode = scipy.optimize.fmin_ncg(logp, x0, grad_logp, disp = True)
  File "C:\Python26\lib\site-packages\scipy\optimize\optimize.py", line 857, in fmin_ncg
    update = alphak * pk
TypeError: unsupported operand type(s) for *: 'NoneType' and 'float'

As far as I can tell this originates from scipy.optimize.fmin_ncg calling
line_search_BFGS, which calls line_search_armijo, which calls
scalar_search_armijo, which returns None if it "Failed to find a suitable
step length". This should probably throw an error, so it's clearer what
went wrong. I can open a ticket if that seems appropriate.

However, I cannot tell why it cannot find a suitable step length. Does
anyone have insight?
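For reference, a quick way to rule out a bad analytic gradient is
scipy.optimize.check_grad -- a minimal sketch, with toy functions standing
in for my real logp/grad_logp:

import numpy as np
from scipy.optimize import check_grad

def f(x):   # toy objective, stand-in for logp
    return 0.5 * np.sum(x**2)

def g(x):   # toy analytic gradient, stand-in for grad_logp
    return x

x0 = np.array([-0.02, -1.32])
print check_grad(f, g, x0)  # should be ~0, i.e. finite-difference noise only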
Here is the output for the return values of the optimizing function and the gradient function during the optimization: x [-0.01995784 -1.31632641] value 24706.0435485 x [-0.01995784 -1.31632641] gradient [ -14.54505377 -295.74692019] x [-0.01995762 -1.316322 ] gradient [ -14.5450301 -295.74691824] x [-0.01995784 -1.31632641] gradient [ -14.54505377 -295.74692019] x [-0.01997875 -1.31607352] gradient [ -14.54744088 -295.74679776] x [-0.01995784 -1.31632641] gradient [ -14.54505377 -295.74692019] x [ 1.29734797 649.9702011 ] value 875465.803394 x [ 0.10164492 58.80504953] value 35218.6454954 x [ 1.68521821e-02 1.68828406e+01] value 23768.2914038 x [ 1.68521821e-02 1.68828406e+01] gradient [ -13.37003274 -294.99827255] x [ 1.68523813e-02 1.68828450e+01] gradient [ -13.37001056 -294.99827441] x [ 1.68521821e-02 1.68828406e+01] gradient [ -13.37003274 -294.99827255] x [ -68.71915443 -1499.71802395] value 4760337.66401 x [ 3.60983859 96.15899543] value 44345.9802857 Traceback (most recent call last): File "C:\Program Files (x86)\pythonxy\eclipse\plugins\org.python.pydev.debug_1.5.6.2010033101\pysrc\pydevd.py", line 953, in debugger.run(setup['file'], None, None) File "C:\Program Files (x86)\pythonxy\eclipse\plugins\org.python.pydev.debug_1.5.6.2010033101\pysrc\pydevd.py", line 780, in run execfile(file, globals, locals) #execute the script File "C:\Users\jsalvatier\workspace\analysis\src\residuals\run.py", line 15, in sampler.sample(nChains = 5, ndraw = 500, maxGradient = 100) File "C:\Python26\lib\site-packages\multichain_mcmc\amala.py", line 150, in sample mode = scipy.optimize.fmin_ncg(logp, x0, grad_logp, disp = True) File "C:\Python26\lib\site-packages\scipy\optimize\optimize.py", line 857, in fmin_ncg update = alphak * pk TypeError: unsupported operand type(s) for *: 'NoneType' and 'float' -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsalvati at u.washington.edu Fri Jan 21 17:42:53 2011 From: jsalvati at u.washington.edu (John Salvatier) Date: Fri, 21 Jan 2011 14:42:53 -0800 Subject: [SciPy-User] Uninformative type error from scipy.optimize.fmin_ncg In-Reply-To: References: Message-ID: My problem turned out to be a poorly specified gradient function. However, I do think more informative errors are a good idea. On Fri, Jan 21, 2011 at 1:07 PM, John Salvatier wrote: > Hello, I am getting a very uninformative error from scipy.optimize.fmin_ncg > (posted below). 
> [...]
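P.S. In the meantime, a cheap way to make this kind of failure loud is to
wrap the objective and gradient so that non-finite parameters raise
immediately -- a sketch:

import numpy as np

def checked(f):
    # raise as soon as the optimizer hands us non-finite parameters
    def wrapper(x, *args):
        if not np.all(np.isfinite(x)):
            raise FloatingPointError("non-finite parameters: %r" % (x,))
        return f(x, *args)
    return wrapper

# usage: fmin_ncg(checked(logp), x0, checked(grad_logp), disp=True)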
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From melissawm at gmail.com Sat Jan 22 10:53:48 2011
From: melissawm at gmail.com (Melissa Mendonça)
Date: Sat, 22 Jan 2011 13:53:48 -0200
Subject: [SciPy-User] Ask Scipy - cannot log in
In-Reply-To: <72CE56D7-4F8D-4DAA-9B8E-A581282FBC88@iro.umontreal.ca>
References: <72CE56D7-4F8D-4DAA-9B8E-A581282FBC88@iro.umontreal.ca>
Message-ID:

I have exactly the same problem. Tried logging in this week to report
some missing stuff in the scipy.sparse docs, and I just get "the user
is not yet activated" message. Would be nice if someone could fix
it...

Thanks

- Melissa

On Fri, Jan 21, 2011 at 3:34 AM, David Warde-Farley wrote:
> On 2011-01-20, at 11:56 AM, Joon Ro wrote:
>
>> Actually the problem was I never have gotten any activation email from ask.scipy after account generation.
>> I just tried different username, and it said check the email to activate the account, but the email never arrived.
>> (No spam, nothing)
>>
>> When I try to log in, "The user is not yet activated." message comes up. I cannot resend the activation email.
>>
>> Now I ended up like 3 not activated accounts.
>> I don't know if I'm the only one having this problem, but account/login problem needs to be fixed for the site to be usable.
>>
>> I was wondering if there is any administrators of this site?
>
> There are, I'm just not sure who.
>
> Maybe one of the Enthought folks can chime in, though I don't know how much time they have to devote to it.
>
> David

--
Melissa Weber Mendonça
--
"Knowledge is knowing a tomato is a fruit; wisdom is knowing you don't
put tomato in fruit salad."

From cygnusx1 at mac.com Sat Jan 22 12:16:31 2011
From: cygnusx1 at mac.com (W.T. Bridgman)
Date: Sat, 22 Jan 2011 12:16:31 -0500
Subject: [SciPy-User] numpy.random generator specifications
Message-ID: <5041A282-C76D-45A0-A054-3E3B88AEBD39@mac.com>

When I use numpy.random, in numpy v 1.5.1, what type of random number
generator am I using? There seem to be mixed claims online as to the
actual generator.

Also, what is the period of the generator?

Thanks,
Tom
--
W.T. Bridgman, Ph.D.
Physics & Astronomy
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From robert.kern at gmail.com Sat Jan 22 12:44:17 2011
From: robert.kern at gmail.com (Robert Kern)
Date: Sat, 22 Jan 2011 11:44:17 -0600
Subject: [SciPy-User] numpy.random generator specifications
In-Reply-To: <5041A282-C76D-45A0-A054-3E3B88AEBD39@mac.com>
References: <5041A282-C76D-45A0-A054-3E3B88AEBD39@mac.com>
Message-ID:

On Sat, Jan 22, 2011 at 11:16, W.T. Bridgman wrote:
> When I use numpy.random, in numpy v 1.5.1, what type of random number
> generator am I using? There seem to be mixed claims online as to the
> actual generator.

The underlying uniform PRNG is the Mersenne Twister, specifically MT19937.

http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.mtrand.RandomState.html#numpy.random.mtrand.RandomState

I'd be interested to know who is claiming otherwise.

> Also, what is the period of the generator?

2**19937 - 1

http://en.wikipedia.org/wiki/Mersenne_twister
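If you need a reproducible or independent stream, you can also instantiate
the generator explicitly, e.g.:

import numpy as np
prng = np.random.RandomState(1234567890)  # a private, seeded MT19937 instance
print prng.standard_normal(3)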
--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From cygnusx1 at mac.com Sat Jan 22 13:53:23 2011
From: cygnusx1 at mac.com (W.T. Bridgman)
Date: Sat, 22 Jan 2011 13:53:23 -0500
Subject: [SciPy-User] numpy.random generator specifications
Message-ID: <17A1BC14-CEA1-4F86-918C-E973821EE332@mac.com>

> On Sat, Jan 22, 2011 at 11:16, W.T. Bridgman wrote:
>> When I use numpy.random, in numpy v 1.5.1, what type of random number
>> generator am I using? There seem to be mixed claims online as to the
>> actual generator.
>
> The underlying uniform PRNG is the Mersenne Twister, specifically
> MT19937.
>
> http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.mtrand.RandomState.html#numpy.random.mtrand.RandomState

This is great. My searches for 'random generator' and similar on scipy
return loads of hits, and there wasn't enough in the summary to clearly
identify what I was seeking.

> I'd be interested to know who is claiming otherwise.

Most of the sources seemed to be rather old versions of
numpy/numarray/NumPy, and uncertain -- found by google searches for "numpy
random generator" and similar terms.

>> Also, what is the period of the generator?
>
> 2**19937 - 1
>
> http://en.wikipedia.org/wiki/Mersenne_twister

Thanks. I suspected this was the answer from my previous searches but
wasn't sure where I could get it verified.

Tom
--
W.T. Bridgman, Ph.D.
Physics & Astronomy
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ariver at enthought.com Sat Jan 22 15:19:30 2011
From: ariver at enthought.com (Aaron River)
Date: Sat, 22 Jan 2011 14:19:30 -0600
Subject: [SciPy-User] Ask Scipy - cannot log in
In-Reply-To:
References:
Message-ID:

Hi!

I'll take a look at this and cinch it up.

On Saturday, January 22, 2011, Melissa Mendonça wrote:
> [...]

From tmp50 at ukr.net Sun Jan 23 05:00:38 2011
From: tmp50 at ukr.net (Dmitrey)
Date: Sun, 23 Jan 2011 12:00:38 +0200
Subject: [SciPy-User] Is numpy/scipy linux apt or PYPI installation linked with ACML?
Message-ID:

Hi all,

I have an AMD processor and I would like to know the easiest way to
install numpy/scipy linked with ACML. Is it possible to get a Linux apt or
PyPI installation linked with ACML?

An answer for the same question about MKL would also be useful; however,
AFAIK it has a commercial license and thus can't be handled in those ways.

Thank you in advance, D.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From cournape at gmail.com Sun Jan 23 05:07:29 2011
From: cournape at gmail.com (David Cournapeau)
Date: Sun, 23 Jan 2011 19:07:29 +0900
Subject: [SciPy-User] [Numpy-discussion] Is numpy/scipy linux apt or PYPI installation linked with ACML?
In-Reply-To:
References:
Message-ID:

2011/1/23 Dmitrey :
> Hi all,
> I have an AMD processor and I would like to know the easiest way
> to install numpy/scipy linked with ACML.
> Is it possible to get a Linux apt or PyPI installation linked with ACML?
> An answer for the same question about MKL would also be useful; however,
> AFAIK it has a commercial license and thus can't be handled in those ways.

For the MKL, the easiest solution is to get EPD, or to build numpy/scipy
by yourself, although the latter is not that easy. For ACML, I don't know
how difficult it is, but I would be surprised if it worked out of the box.

cheers,

David

From tmp50 at ukr.net Sun Jan 23 05:27:35 2011
From: tmp50 at ukr.net (Dmitrey)
Date: Sun, 23 Jan 2011 12:27:35 +0200
Subject: [SciPy-User] [Numpy-discussion] Is numpy/scipy linux apt or PYPI installation linked with ACML?
In-Reply-To:
References:
Message-ID:

Are the free EPD distributions linked with MKL and ACML? Does anyone know
whether SAGE or PythonXY is already linked with ACML or MKL?

Thanks, D.

--- Original message ---
From: "David Cournapeau"
To: "Discussion of Numerical Python"
Date: 23 January 2011, 12:07:29
Subject: Re: [Numpy-discussion] Is numpy/scipy linux apt or PYPI installation linked with ACML?

[...]

_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion at scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From matthieu.brucher at gmail.com Sun Jan 23 11:49:05 2011
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Sun, 23 Jan 2011 17:49:05 +0100
Subject: [SciPy-User] [Numpy-discussion] Is numpy/scipy linux apt or PYPI installation linked with ACML?
In-Reply-To:
References:
Message-ID:

I think the main issue is that ACML didn't have an official CBLAS
interface, so you have to check if they provide one now. If they do, it
should be almost easy to link against it.

Matthieu

2011/1/23 David Cournapeau
> [...]
--
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From joonpyro at gmail.com Sun Jan 23 13:50:24 2011
From: joonpyro at gmail.com (Joon Ro)
Date: Sun, 23 Jan 2011 12:50:24 -0600
Subject: [SciPy-User] SciPy-User Digest, Vol 89, Issue 40
In-Reply-To:
References:
Message-ID:

On Sun, 23 Jan 2011 12:00:02 -0600, wrote:

Hi Aaron,

That would be great. I think the low activity on the ask scipy site is
partially due to this login/registration problem.

Thank you,
Joon

> [...]

From xavier.gnata at gmail.com Sun Jan 23 18:02:38 2011
From: xavier.gnata at gmail.com (Xavier Gnata)
Date: Mon, 24 Jan 2011 00:02:38 +0100
Subject: [SciPy-User] Extract a segment from a 2D array
Message-ID: <4D3CB38E.4030104@gmail.com>

Hi,

Let's take an example: I have a 100x100 2D array. I want to extract all
the values on a segment starting at (10,12) and ending at (42,42) and put
them in a 1D array.

Of course I could code up Bresenham's line algorithm, but I think I'd be
reinventing the wheel. What's the scipy way to deal with that? (If needed,
I can use matplotlib.)

@+,
Xavier

From david_baddeley at yahoo.com.au Sun Jan 23 19:31:36 2011
From: david_baddeley at yahoo.com.au (David Baddeley)
Date: Sun, 23 Jan 2011 16:31:36 -0800 (PST)
Subject: Re: [SciPy-User] Extract a segment from a 2D array
In-Reply-To: <4D3CB38E.4030104@gmail.com>
References: <4D3CB38E.4030104@gmail.com>
Message-ID: <14647.11769.qm@web113416.mail.gq1.yahoo.com>

I'd probably use scipy.ndimage.map_coordinates, e.g. (assuming your array
is called data):

# define the line parametrically
length = sqrt(32**2 + 30**2)  # length of the line
t = arange(0, length)  # evenly spaced points along the line, unit spacing
x = 10 + 32*t/length
y = 12 + 30*t/length

# coordinates must be shaped (ndim, npoints), i.e. one row per array axis
seg = scipy.ndimage.map_coordinates(data, array([x, y]))

this will interpolate between neighbouring pixels & give you an evenly
spaced section through your data, rather than collecting all pixels - but
'all pixels' requires some notion of connectedness, and won't be evenly
spaced.
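If you want the raw pixel values rather than interpolated ones,
nearest-neighbour sampling gets you something Bresenham-like (a sketch):

seg_px = scipy.ndimage.map_coordinates(data, array([x, y]), order=0)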
cheers,
David

----- Original Message ----
From: Xavier Gnata
To: scipy-user at scipy.org
Sent: Mon, 24 January, 2011 12:02:38 PM
Subject: [SciPy-User] Extract a segment from a 2D array

[...]

_______________________________________________
SciPy-User mailing list
SciPy-User at scipy.org
http://mail.scipy.org/mailman/listinfo/scipy-user

From maubriga at gmail.com Mon Jan 24 04:06:41 2011
From: maubriga at gmail.com (Mauro)
Date: Mon, 24 Jan 2011 09:06:41 +0000
Subject: [SciPy-User] Problem with win32com and scikits timeseries
Message-ID:

Hello,

I get the following error when importing Date from scikits.timeseries. The
error is there only when I interface Python with Excel using COM. It is
enough to import anything from scikits.timeseries to get the error. Any
help?
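(Outside of COM the import is fine; for example, a minimal check like this
-- exercising the import chain from the traceback below -- runs cleanly in
a plain interpreter:)

# the module whose compiled extension (cseries) fails to load under COM
import scikits.timeseries.const
print 'cseries extension loaded OK'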
I am using Python2.7 and scikits.timeseries-0.91.3 in _Invoke_ with 1004 1033 3 (, 20.0, 0.0027397260273972603, 34.0) Traceback (most recent call last): File "C:\Appl\Python27\lib\site- packages\win32com\server\dispatcher.py", line 47, in _Invoke_ return self.policy._Invoke_(dispid, lcid, wFlags, args) File "C:\Appl\Python27\lib\site-packages\win32com\server\policy.py", line 277, in _Invoke_ return self._invoke_(dispid, lcid, wFlags, args) File "C:\Appl\Python27\lib\site-packages\win32com\server\policy.py", line 282, in _invoke_ return S_OK, -1, self._invokeex_(dispid, lcid, wFlags, args, None, None) File "C:\Appl\Python27\lib\site-packages\win32com\server\policy.py", line 585, in _invokeex_ return func(*args) File "C:\Repositories\ComInterface.py", line 109, in addTree newmatrix, header = convertObjectToList(forwardVolatilityArray) File "C:\Repositories\ComInterface.py", line 40, in convertObjectToList newrow.append(fromTimeToDateTime(int(cell))) File "C:\Repositories\ComInterface.py", line 13, in fromTimeToDateTime from scikits.timeseries import Date File "C:\Appl\Python27\lib\site-packages\scikits.timeseries-0.91.3-py2.7-win32.egg\scikits\timeseries\__init__.py", line 13, in import const File "C:\Appl\Python27\lib\site-packages\scikits.timeseries-0.91.3-py2.7-win32.egg\scikits\timeseries\const.py", line 79, in from cseries import freq_constants ImportError: DLL load failed: A dynamic link library (DLL) initialization routine failed. pythoncom error: Python error invoking COM method. Traceback (most recent call last): File "C:\Appl\Python27\lib\site-packages\win32com\server\dispatcher.py", line 163, in _Invoke_ return DispatcherBase._Invoke_(self, dispid, lcid, wFlags, args) File "C:\Appl\Python27\lib\site-packages\win32com\server\dispatcher.py", line 49, in _Invoke_ return self._HandleException_() File "C:\Appl\Python27\lib\site-packages\win32com\server\dispatcher.py", line 47, in _Invoke_ return self.policy._Invoke_(dispid, lcid, wFlags, args) File "C:\Appl\Python27\lib\site-packages\win32com\server\policy.py", line 277, in _Invoke_ return self._invoke_(dispid, lcid, wFlags, args) File "C:\Appl\Python27\lib\site-packages\win32com\server\policy.py", line 282, in _invoke_ return S_OK, -1, self._invokeex_(dispid, lcid, wFlags, args, None, None) File "C:\Appl\Python27\lib\site-packages\win32com\server\policy.py", line 585, in _invokeex_ return func(*args) File "C:\Repositories\ComInterface.py", line 109, in addTree newmatrix, header = convertObjectToList(forwardVolatilityArray) File "C:\Repositories\ComInterface.py", line 40, in convertObjectToList newrow.append(fromTimeToDateTime(int(cell))) File "C:\Repositories\ComInterface.py", line 13, in fromTimeToDateTime from scikits.timeseries import Date File "C:\Appl\Python27\lib\site-packages\scikits.timeseries-0.91.3-py2.7-win32.egg\scikits\timeseries\__init__.py", line 13, in import const File "C:\Appl\Python27\lib\site-packages\scikits.timeseries-0.91.3-py2.7-win32.egg\scikits\timeseries\const.py", line 79, in from cseries import freq_constants ImportError: DLL load failed: A dynamic link library (DLL) initialization routine failed. in ._QueryInterface_ with unsupported IID IProvideClassInfo ({B196B283-BAB4-101A-B69C-00AA00341D07}) in ._QueryInterface_ with unsupported IID {CACC1E85-622B-11D2-AA78-00C04F9901D2} ({CACC1E85-622B-11D2-AA78-00C04F9901D2}) in _GetTypeInfo_ with index=0, lcid=1033 in _GetTypeInfo_ with index=0, lcid=0 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mattknox.ca at gmail.com Mon Jan 24 09:34:47 2011 From: mattknox.ca at gmail.com (Matt Knox) Date: Mon, 24 Jan 2011 14:34:47 +0000 (UTC) Subject: [SciPy-User] Problem with win32com and scikits timeseries References: Message-ID: Mauro gmail.com> writes: > > > Hello,I get the following error when importing Date from > scikits.timeseries. The error is there only when I interface python with > Excel using COM. It is enough to import anything from scikits.timeseries to > get the error.Any help? Do other C extensions work? What happens if you import just numpy? Also, can I ask what you are using excel COM for? I used to use it quite a bit, but found it to cause me quite a lot of headaches (COM anything causes a lot of headaches really). As a lighter weight alternative, you may want to consider xlrd, xlwt or just generating plain XML spreadsheets which works very nicely for my use cases. SpreadsheetML: http://msdn.microsoft.com/en-us/library/bb226687(v=office.11).aspx From ralf.gommers at googlemail.com Mon Jan 24 09:35:54 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Mon, 24 Jan 2011 22:35:54 +0800 Subject: [SciPy-User] future of maxentropy module (was: sparse rmatvec and maxentropy) Message-ID: (excuse the cross-post, but this may be of interest to scipy-user and the scikits.learn crowd) On Sat, Jan 22, 2011 at 10:44 PM, wrote: > On Sat, Jan 22, 2011 at 8:50 AM, Ralf Gommers > wrote: > > On Thu, Jan 20, 2011 at 10:13 PM, Skipper Seabold > > wrote: > >> > >> I picked up the montecarlo code when I was playing around with these. > >> > >> > http://bazaar.launchpad.net/~jsseabold/statsmodels/statsmodels-skipper-maxent/files/head:/scikits/statsmodels/sandbox/maxentropy/ > >> > >> I'm curious if the maxentropy stuff as it is in scipy wouldn't find > >> more use and maintenance in scikits.learn. The implementation is > >> somewhat use specific (natural language processing), though this is > >> not by any means set in stone. > >> > > Probably, but wouldn't it need a lot of work before it could be moved? It > > has a grand total of one test, mostly non-working examples, and is > obviously > > hardly used at all (see r6919 and r6920 for more examples of broken > code). > > > > Perhaps it's worth asking the scikits.learn guys, and otherwise consider > > deprecating it if they're not interested? > > I haven't seen or heard anyone using it besides Skipper. There are > also still some features that where designed for pysparse and never > fully updated to scipy.sparse. > http://projects.scipy.org/scipy/ticket/856 > > I also thought deprecating and removing maxentropy will be the best > idea, if nobody volunteers to give it a workout. > So I guess we just have to ask this out loud: is anyone using the scipy.maxentropy module or interested in doing so? If you are, would you be interested in putting some work into it, like making the examples work and adding some tests? The current status is that 3 out of 4 examples are broken, the module has only a single test, and from broken code that went unnoticed for a long time it is clear that there are very few users. If no one steps up, I propose to deprecate the module for the 0.10 release. If there are any users out there that missed this email and step up then, we can always un-deprecate again. To the scikits.learn developers: would this code fit better and see more use in scikits.learn than in scipy? Would you be interested to pick it up? Ralf -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pub at jesuislibre.net Mon Jan 24 10:01:43 2011 From: pub at jesuislibre.net (E) Date: Mon, 24 Jan 2011 16:01:43 +0100 Subject: [SciPy-User] Find the RMS from a PSD Message-ID: <1295881303.6808.12.camel@localhost> Hello scipy users. I'm new to signal processing and I've read that RMS could be found from a PSD. I'm interested in as I would further like to know energy in a signal through it's frequencies. My problem is I don't find how to calculate the RMS from the PSD output. It seems it's a matter of scale (frequencies bandwith is taken in account already). I wrote a test case with a simple sinus. I should be able to find the same RMS value from the PSD method and direct RMS over signal method. Could you please have a look and tell me how to find good RMS value from PSD output? Thanks #!/usr/bin/python # -*- coding: utf-8 -*- import matplotlib, platform if platform.system() == 'Linux' : matplotlib.use("gtk") import pylab import scipy ## PSD vs RMS #Parameters samplerate = 48000 nfft = 1024*2 graph = False #create 1 sec sinus signal t = scipy.arange(0, 1 , 1/float(samplerate)) signal = .25*scipy.sin(2*scipy.pi*(samplerate/10.)*t) print len(signal) #RMS of an array def RMS(data): rms = data**2 rms = scipy.sqrt(rms.sum()/len(data)) return rms #PSD of an array. I want this to return the RMS def RMSfromPSD(data) : y, x = pylab.psd(data, NFFT = nfft, Fs = samplerate) ##Calculate the RMS #The energy returned by PSD depends on FFT size freqbandwith = x[1] y = y*freqbandwith #The energy returned by PSD depends on Samplerate y = y/float(samplerate) #Summing the power in freq domain to get RMS rms = scipy.sqrt(y.sum()) return rms print "RMS method", RMS(signal) print "RMS using PSD method", RMSfromPSD(signal) #Graph if graph == True : pylab.subplot(211) pylab.plot(t,signal) pylab.subplot(212) pylab.psd(signal, nfft, samplerate) pylab.show() -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsseabold at gmail.com Mon Jan 24 10:03:02 2011 From: jsseabold at gmail.com (Skipper Seabold) Date: Mon, 24 Jan 2011 10:03:02 -0500 Subject: [SciPy-User] future of maxentropy module (was: sparse rmatvec and maxentropy) In-Reply-To: References: Message-ID: On Mon, Jan 24, 2011 at 9:35 AM, Ralf Gommers wrote: > (excuse the cross-post, but this may be of interest to scipy-user and the > scikits.learn crowd) > > > On Sat, Jan 22, 2011 at 10:44 PM, wrote: >> >> On Sat, Jan 22, 2011 at 8:50 AM, Ralf Gommers >> wrote: >> > On Thu, Jan 20, 2011 at 10:13 PM, Skipper Seabold >> > wrote: >> >> >> >> I picked up the montecarlo code when I was playing around with these. >> >> >> >> >> >> http://bazaar.launchpad.net/~jsseabold/statsmodels/statsmodels-skipper-maxent/files/head:/scikits/statsmodels/sandbox/maxentropy/ >> >> >> >> I'm curious if the maxentropy stuff as it is in scipy wouldn't find >> >> more use and maintenance in scikits.learn. ?The implementation is >> >> somewhat use specific (natural language processing), though this is >> >> not by any means set in stone. >> >> >> > Probably, but wouldn't it need a lot of work before it could be moved? >> > It >> > has a grand total of one test, mostly non-working examples, and is >> > obviously >> > hardly used at all (see r6919 and r6920 for more examples of broken >> > code). >> > >> > Perhaps it's worth asking the scikits.learn guys, and otherwise consider >> > deprecating it if they're not interested? >> >> I haven't seen or heard anyone using it besides Skipper. 
There are >> also still some features that where designed for pysparse and never >> fully updated to scipy.sparse. >> http://projects.scipy.org/scipy/ticket/856 >> >> I also thought deprecating and removing maxentropy will be the best >> idea, if nobody volunteers to give it a workout. > > So I guess we just have to ask this out loud: is anyone using the > scipy.maxentropy module or interested in doing so? If you are, would you be > interested in putting some work into it, like making the examples work and > adding some tests? > > The current status is that 3 out of 4 examples are broken, the module has > only a single test, and from broken code that went unnoticed for a long time > it is clear that there are very few users. > I just checked again, and I do have the examples working in statsmodels with scipy before rmatvec was removed, so it's not so dire. It just depends on the montecarlo code, so we would have to include this in an install if we want the examples to run. I can make a branch that includes this code if there's interest to keep it and have the examples work. > If no one steps up, I propose to deprecate the module for the 0.10 release. > If there are any users out there that missed this email and step up then, we > can always un-deprecate again. > I do use things from the code, ie., the scipy.maxentropy.logsumexp, so I wouldn't want to lose that at the very least. Skipper From maubriga at gmail.com Mon Jan 24 10:14:03 2011 From: maubriga at gmail.com (Mauro) Date: Mon, 24 Jan 2011 15:14:03 +0000 Subject: [SciPy-User] Problem with win32com and scikits timeseries In-Reply-To: References: Message-ID: Matt, Thanks for your answer. Numpy works now that I have updated my python to 2.7 and my numpy to 1.5.0. With Python 2.6 and a previous numpy I was not able to import numpy. I am trying to use Excel as a front end for some code. I used xlrd in the past, but it will not be enough for what I am trying to do. I am actually using VBA to call and execute python code. I already have a pretty neat demo, but without dates, unfortunately. On Mon, Jan 24, 2011 at 2:34 PM, Matt Knox wrote: > Mauro gmail.com> writes: > > > > > > > Hello,I get the following error when importing Date from > > scikits.timeseries. The error is there only when I interface python with > > Excel using COM. It is enough to import anything from scikits.timeseries > to > > get the error.Any help? > > Do other C extensions work? What happens if you import just numpy? > > Also, can I ask what you are using excel COM for? I used to use it quite a > bit, > but found it to cause me quite a lot of headaches (COM anything causes a > lot > of headaches really). As a lighter weight alternative, you may want to > consider > xlrd, xlwt or just generating plain XML spreadsheets which works very > nicely > for my use cases. > > SpreadsheetML: > http://msdn.microsoft.com/en-us/library/bb226687(v=office.11).aspx > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From klemm at phys.ethz.ch Mon Jan 24 10:42:09 2011 From: klemm at phys.ethz.ch (Hanno Klemm) Date: Mon, 24 Jan 2011 16:42:09 +0100 Subject: [SciPy-User] Bottleneck 0.3 - some tests Fail In-Reply-To: References: , Message-ID: I was indeed using scipy 0.8.0. However, upgrading to the most recent EPD distribution made the error go away. 
Apparently there are some minute changes between EPD-6.3-1 and EPD-6.3-2
that solved the problem.

On Fri, Jan 21, 2011, Keith Goodman said:
> [...]

--
Hanno Klemm
klemm at phys.ethz.ch

From charlesr.harris at gmail.com Mon Jan 24 11:05:40 2011
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Mon, 24 Jan 2011 09:05:40 -0700
Subject: Re: [SciPy-User] Find the RMS from a PSD
In-Reply-To: <1295881303.6808.12.camel@localhost>
References: <1295881303.6808.12.camel@localhost>
Message-ID:

On Mon, Jan 24, 2011 at 8:01 AM, E wrote:

> Hello scipy users.
>
> I'm new to signal processing and I've read that RMS could be found from a
> PSD. I'm interested in as I would further like to know energy in a signal
> through it's frequencies.
> My problem is I don't find how to calculate the RMS from the PSD output. It
> seems it's a matter of scale (frequencies bandwith is taken in account
> already).
>

The noise variance is the integral of the PSD, so take the square root.
The tricky part is knowing if you have a single-sided or double-sided PSD
and making sure you have the right units everywhere. The units of the PSD
should be something like Volts**2/Hz and you need to integrate over Hz. I
like to use the FFT of the autocorrelation, windowed to reduce the noise,
multiplied by the sample interval, to get the two-sided PSD.
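Schematically, for the one-sided PSD that pylab.psd returns, that's
something like this sketch -- exact agreement depends on the window
normalization it applies:

Pxx, freqs = pylab.psd(signal, NFFT=nfft, Fs=samplerate)
df = freqs[1] - freqs[0]
# integrate the PSD over frequency, then take the square root
rms = scipy.sqrt((Pxx * df).sum())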
From luca.codutti at uniud.it  Mon Jan 24 12:56:54 2011
From: luca.codutti at uniud.it (Luca Codutti)
Date: Mon, 24 Jan 2011 18:56:54 +0100
Subject: [SciPy-User] scipy.optimize: fitting multiple data arrays with the same function methodological and general problems
Message-ID: <20110124185654.20386t8ocq3os8di@webmail.uniud.it>

Dear Scipy users,

I'm wondering if you can help me with an optimization problem I've been
stuck on for quite some time. I need to fit several experimental data
arrays using a common function, in order to determine common "global"
parameters. Each dataset has further known constants, depending on the
dataset, that need to be fed to the fitting function.

So far I've figured out, browsing the scipy mailing list, that a good
strategy might be to build a one-dimensional array of least-squares
differences to be minimized using the optimize.leastsq or curve_fit
algorithms. The problem is that I'm working with sums and products of
exponential decays, which are possibly difficult to optimize (I often
observe NaN during the optimization), and I can't obtain valid or robust
output from the optimization.

I'm wondering if I understood correctly how to solve my global
optimization problem, or if I'm doing things untidily.

Being more specific, in a pseudo-code way:

function_to_be_fit = (K*e^(-a)*e^(-b))/(K*e^(-c) + e^(-b))
to be optimized: a, b, and c
x_data = list(numpy.arrayX1, numpy.arrayX2, ...)
y_data = list(numpy.arrayY1, numpy.arrayY2, ...)  # experimental data
known_constants = list(k1, k2, ...)
starting optimization values = Opt = [a = Float1, b = Float2, c = Float3]

def Fitter(Opt, constants, x_data, y_data):
    s = []
    func = lambda Opt, K, x: log((K*e^(-Opt[0])*e^(-Opt[1]))/(K*e^(-Opt[2]) + e^(-Opt[1])))
    for i in xrange(len(constants)):
        for j in xrange(len(x_data[i])):
            X_theor = func(Opt, constants[i], x_data[i][j])
            Difference = X_theor - log(y_data[i][j])
            s.append(Difference)
    return s

fit = scipy.optimize.leastsq(Fitter, Opt, args=(known_constants, x_data, y_data))

While this method seems to work fine when fitting, e.g., parabolas sharing
the same amplitude but having different heights, for my function I'm stuck
every time in local minima which depend substantially on the initial guess
parameters. So here I'm pretty lost, and I could not manage to find any
working protocol to obtain a global minimum. Do you have any suggestions?

Thank you in advance.

From luca.codutti at uniud.it  Mon Jan 24 13:06:28 2011
From: luca.codutti at uniud.it (Luca Codutti)
Date: Mon, 24 Jan 2011 19:06:28 +0100
Subject: Re: [SciPy-User] scipy.optimize: fitting multiple data arrays with the same function methodological and general problems
In-Reply-To: <20110124185654.20386t8ocq3os8di@webmail.uniud.it>
References: <20110124185654.20386t8ocq3os8di@webmail.uniud.it>
Message-ID: <20110124190628.1920659wy301s690@webmail.uniud.it>

I must apologize: the function is

func = (K*e^(-a*x)*e^(-b*x))/(K*e^(-c*x) + e^(-b*x))

and

def Fitter(Opt, constants, x_data, y_data):
    s = []
    func = lambda Opt, K, x: log((K*e^(-Opt[0]*x)*e^(-Opt[1]*x))/(K*e^(-Opt[2]*x) + e^(-Opt[1]*x)))
    for i in xrange(len(constants)):
        for j in xrange(len(x_data[i])):
            X_theor = func(Opt, constants[i], x_data[i][j])
            Difference = X_theor - log(y_data[i][j])
            s.append(Difference)
    return s

Thank you,
Luca

Quoting Luca Codutti <luca.codutti at uniud.it>:
> Dear Scipy users,
> [...]
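A runnable version of the scheme in the pseudo code might look like the
sketch below; the data containers keep the names from the post, and the
starting values are only illustrative:

import numpy as np
from scipy import optimize

def model_log(params, K, x):
    # log of (K*exp(-a*x)*exp(-b*x)) / (K*exp(-c*x) + exp(-b*x))
    a, b, c = params
    return (np.log(K) - (a + b)*x
            - np.log(K*np.exp(-c*x) + np.exp(-b*x)))

def residuals(params, constants, x_data, y_data):
    # one residual block per dataset, stacked into a single 1-d array
    return np.concatenate([model_log(params, K, x) - np.log(y)
                           for K, x, y in zip(constants, x_data, y_data)])

p0 = np.array([0.1, 0.2, 0.3])  # illustrative starting values
popt, ier = optimize.leastsq(residuals, p0,
                             args=(known_constants, x_data, y_data))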
From josef.pktd at gmail.com  Mon Jan 24 13:46:14 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Mon, 24 Jan 2011 13:46:14 -0500
Subject: Re: [SciPy-User] scipy.optimize: fitting multiple data arrays with the same function methodological and general problems
In-Reply-To: <20110124190628.1920659wy301s690@webmail.uniud.it>
References: <20110124185654.20386t8ocq3os8di@webmail.uniud.it> <20110124190628.1920659wy301s690@webmail.uniud.it>
Message-ID:

On Mon, Jan 24, 2011 at 1:06 PM, Luca Codutti <luca.codutti at uniud.it> wrote:
> I must apologize: the function is
>
> func = (K*e^(-a*x)*e^(-b*x))/(K*e^(-c*x) + e^(-b*x))

I don't know about the local optima, but first I would expand and simplify
the log of this function, roughly

logfunc = np.log(K) - a*x - np.log(K*np.exp((b - c)*x) + 1)

but rewriting it this way, it looks like c and b are not separately
identified, only the difference b - c.

> def Fitter(Opt, constants, x_data, y_data):
>     s = []
>     [...]
>     for i in xrange(len(constants)):
>         for j in xrange(len(x_data[i])):
>             [...]

it looks like the inner loop can be vectorized, since func works on arrays.

some thoughts,

Josef
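Following that observation, the model can be reparameterized with d = b - c.
A quick numerical check of the identity, with arbitrary illustrative values
that are not from the thread:

import numpy as np

def model_log(a, d, K, x):
    # d = b - c; same value as the log of the full expression
    return np.log(K) - a*x - np.log(K*np.exp(d*x) + 1.0)

a, b, c, K = 0.3, 0.7, 0.2, 2.0
x = np.linspace(0.0, 5.0, 11)
full = np.log((K*np.exp(-a*x)*np.exp(-b*x))
              / (K*np.exp(-c*x) + np.exp(-b*x)))
assert np.allclose(full, model_log(a, b - c, K, x))

A least-squares fit then only needs the two parameters (a, d), which also
removes one flat direction that can trap the optimizer.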
From mattknox.ca at gmail.com  Mon Jan 24 14:41:48 2011
From: mattknox.ca at gmail.com (Matt Knox)
Date: Mon, 24 Jan 2011 19:41:48 +0000 (UTC)
Subject: Re: [SciPy-User] Problem with win32com and scikits timeseries
References:
Message-ID:

Mauro <maubriga at gmail.com> writes:

> I am actually using VBA to call and execute python code. I already have a
> pretty neat demo, but without dates, unfortunately.

Oh, I see. Are you doing this by running a python COM server on the client?
I played around with that a long time ago; it caused me lots of headaches
though :) .

Have you tried calling CoInitialize or CoInitializeEx? (see links below).
Not entirely sure what these do, but apparently they are required in some
situations to use COM in a thread. And if you are running a COM server, it
may be spawning threads, although I don't know for sure.

http://docs.activestate.com/activepython/2.5/pywin32/pythoncom__CoInitialize_meth.html
http://docs.activestate.com/activepython/2.5/pywin32/pythoncom__CoInitializeEx_meth.html
http://timgolden.me.uk/python/wmi/cookbook.html#use-wmi-in-a-thread
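A minimal sketch of what those links describe — per-thread COM
initialization with pywin32; the Dispatch target is only a placeholder
example:

import threading
import pythoncom
import win32com.client

def worker():
    pythoncom.CoInitialize()   # COM must be initialized in each thread
    try:
        app = win32com.client.Dispatch("Excel.Application")  # placeholder
        # ... use the COM object here ...
    finally:
        pythoncom.CoUninitialize()

t = threading.Thread(target=worker)
t.start()
t.join()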
From xavier.gnata at gmail.com  Mon Jan 24 18:49:16 2011
From: xavier.gnata at gmail.com (Xavier Gnata)
Date: Tue, 25 Jan 2011 00:49:16 +0100
Subject: Re: [SciPy-User] Extract a segment from a 2D array
In-Reply-To: <14647.11769.qm@web113416.mail.gq1.yahoo.com>
References: <4D3CB38E.4030104@gmail.com> <14647.11769.qm@web113416.mail.gq1.yahoo.com>
Message-ID: <4D3E0FFC.7080001@gmail.com>

Ok, thanks. scipy.ndimage.map_coordinates does the job. It does even more
than expected, because it interpolates.

Cheers,
Xavier

> I'd probably use scipy.ndimage.map_coordinates
>
> eg: (assuming your array is called data)
>
> # define the line parametrically
> length = sqrt(32**2 + 30**2)  # length of the line
> t = arange(0, length)  # evenly spaced points along the line, unit spacing
> x = 10 + 32*t/length
> y = 12 + 30*t/length
>
> seg = scipy.ndimage.map_coordinates(data, array([x, y]))
>
> this will interpolate between neighbouring pixels & give you an evenly
> spaced section through your data, rather than collecting all pixels - but
> 'all pixels' requires some notion of connectedness, and won't be evenly
> spaced.
>
> cheers,
> David
>
> ----- Original Message ----
> From: Xavier Gnata
> To: scipy-user at scipy.org
> Sent: Mon, 24 January, 2011 12:02:38 PM
> Subject: [SciPy-User] Extract a segment from a 2D array
>
> Hi,
>
> Let's take an example:
> I have a 100x100 2D array.
> I want to extract all the values on a segment starting at (10,12) and
> ending at (42,42) and put them in a 1D array.
> Of course I could code some type of Bresenham's line algorithm, but I
> think I'd be reinventing the wheel.
> What's the scipy way to deal with that? (If needed, I can use matplotlib)
>
> @+,
> Xavier

From scotta_2002 at yahoo.com  Mon Jan 24 20:09:41 2011
From: scotta_2002 at yahoo.com (Scott Askey)
Date: Mon, 24 Jan 2011 17:09:41 -0800 (PST)
Subject: [SciPy-User] python3 pip/easy_install alternative for scipy .9?
Message-ID: <392088.8492.qm@web36506.mail.mud.yahoo.com>

Do any tools like pip or easy_install work with python3 numpy and scipy .9?

easy_install3 in debian sid did not work for me when I tried to install any
python3 packages. sudo python3 setup.py install worked for me on scipy .9
and numpy 1.5.1. I am working in a debian sid/squeeze amd64 environment.

Cheers,
Scott

From david at silveregg.co.jp  Tue Jan 25 02:07:10 2011
From: david at silveregg.co.jp (David)
Date: Tue, 25 Jan 2011 16:07:10 +0900
Subject: Re: [SciPy-User] python3 pip/easy_install alternative for scipy .9?
In-Reply-To: <392088.8492.qm@web36506.mail.mud.yahoo.com>
References: <392088.8492.qm@web36506.mail.mud.yahoo.com>
Message-ID: <4D3E769E.4070000@silveregg.co.jp>

On 01/25/2011 10:09 AM, Scott Askey wrote:
> Do any tools like pip or easy_install work with python3 numpy and scipy .9?

I believe setuptools (or is it distribute?) has been ported to python 3,
and that would bring easy_install. Pip has not been ported, to the best of
my knowledge.

> easy_install3 in debian sid did not work for me when I tried to install
> any python3 packages.
> sudo python3 setup.py install worked for me on scipy .9 and numpy 1.5.1.

None of those tools work very well, and they bring nothing compared to a
normal install for numpy or scipy. Just use python setup.py install, or
python setupegg.py install if you want to install numpy inside a virtual
environment.

cheers,
David

From maubriga at gmail.com  Tue Jan 25 04:31:05 2011
From: maubriga at gmail.com (Mauro)
Date: Tue, 25 Jan 2011 09:31:05 +0000
Subject: Re: [SciPy-User] Problem with win32com and scikits timeseries
In-Reply-To: References: Message-ID:

Matt, I tried to call pythoncom.CoInitialize before registering the
win32com server, but no good... I am a little stuck here!
On Mon, Jan 24, 2011 at 7:41 PM, Matt Knox <mattknox.ca at gmail.com> wrote:
> [...]

From pav at iki.fi  Tue Jan 25 04:51:18 2011
From: pav at iki.fi (Pauli Virtanen)
Date: Tue, 25 Jan 2011 09:51:18 +0000 (UTC)
Subject: Re: [SciPy-User] python3 pip/easy_install alternative for scipy .9?
References: <392088.8492.qm@web36506.mail.mud.yahoo.com>
Message-ID:

Mon, 24 Jan 2011 17:09:41 -0800, Scott Askey wrote:
> Do any tools like pip or easy_install work with python3 numpy and scipy
> .9?

Probably cannot be done at all, due to the way we handle automatic 2to3 on
build. The "standard" way to do it offered by setuptools/distutils does not
work for us, as it cannot be customized.

-- Pauli Virtanen
From ralf.gommers at googlemail.com  Tue Jan 25 05:15:22 2011
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Tue, 25 Jan 2011 18:15:22 +0800
Subject: [SciPy-User] future of maxentropy module (was: sparse rmatvec and maxentropy)
In-Reply-To: References: Message-ID:

On Mon, Jan 24, 2011 at 11:03 PM, Skipper Seabold <jsseabold at gmail.com> wrote:
> On Mon, Jan 24, 2011 at 9:35 AM, Ralf Gommers wrote:
> > (excuse the cross-post, but this may be of interest to scipy-user and
> > the scikits.learn crowd)
> >
> > On Sat, Jan 22, 2011 at 10:44 PM, wrote:
> >> On Sat, Jan 22, 2011 at 8:50 AM, Ralf Gommers wrote:
> >> > On Thu, Jan 20, 2011 at 10:13 PM, Skipper Seabold
> >> > <jsseabold at gmail.com> wrote:
> >> >>
> >> >> I picked up the montecarlo code when I was playing around with these.
> >> >>
> >> >> http://bazaar.launchpad.net/~jsseabold/statsmodels/statsmodels-skipper-maxent/files/head:/scikits/statsmodels/sandbox/maxentropy/
> >> >>
> >> >> I'm curious if the maxentropy stuff as it is in scipy wouldn't find
> >> >> more use and maintenance in scikits.learn. The implementation is
> >> >> somewhat use-specific (natural language processing), though this is
> >> >> not by any means set in stone.
> >> >
> >> > Probably, but wouldn't it need a lot of work before it could be
> >> > moved? It has a grand total of one test, mostly non-working examples,
> >> > and is obviously hardly used at all (see r6919 and r6920 for more
> >> > examples of broken code).
> >> >
> >> > Perhaps it's worth asking the scikits.learn guys, and otherwise
> >> > consider deprecating it if they're not interested?
> >>
> >> I haven't seen or heard anyone using it besides Skipper. There are
> >> also still some features that were designed for pysparse and never
> >> fully updated to scipy.sparse.
> >> http://projects.scipy.org/scipy/ticket/856
> >>
> >> I also thought deprecating and removing maxentropy would be the best
> >> idea, if nobody volunteers to give it a workout.
> >
> > So I guess we just have to ask this out loud: is anyone using the
> > scipy.maxentropy module or interested in doing so? If you are, would
> > you be interested in putting some work into it, like making the
> > examples work and adding some tests?
> >
> > The current status is that 3 out of 4 examples are broken, the module
> > has only a single test, and from broken code that went unnoticed for a
> > long time it is clear that there are very few users.
>
> I just checked again, and I do have the examples working in statsmodels
> with scipy before rmatvec was removed, so it's not so dire. It just
> depends on the montecarlo code, so we would have to include this in an
> install if we want the examples to run. I can make a branch that includes
> this code if there's interest to keep it and have the examples work.

The montecarlo code was removed for a reason, I assume, so that would be
even more work to include again.... On the scikits.learn list someone said
the maxentropy examples are nice, so perhaps they could be made to work
with (translated to) the logistic regression code in scikits.learn.

> > If no one steps up, I propose to deprecate the module for the 0.10
> > release. If there are any users out there that missed this email and
> > step up then, we can always un-deprecate again.
>
> I do use things from the code, i.e. the scipy.maxentropy.logsumexp, so
> I wouldn't want to lose that at the very least.

That's a 3-line long utility function; I'm sure a place could be found for
it. Anyway, I'm not proposing to throw the code out tomorrow - after 0.10
is out for a while we could go through it and move anything useful.

Cheers,
Ralf
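For reference, the utility in question computes log(sum(exp(a))) without
overflow; roughly (a sketch of the standard trick, not the scipy source):

import numpy as np

def logsumexp(a):
    # shift by the maximum so the exponentials cannot overflow
    a = np.asarray(a)
    a_max = a.max()
    return a_max + np.log(np.sum(np.exp(a - a_max)))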
From luca.codutti at uniud.it  Tue Jan 25 06:55:23 2011
From: luca.codutti at uniud.it (Luca Codutti)
Date: Tue, 25 Jan 2011 12:55:23 +0100
Subject: Re: [SciPy-User] scipy.optimize: fitting multiple data arrays with the same function methodological and general problems
In-Reply-To: <20110124185654.20386t8ocq3os8di@webmail.uniud.it>
References: <20110124185654.20386t8ocq3os8di@webmail.uniud.it> <20110124190628.1920659wy301s690@webmail.uniud.it>
Message-ID: <20110125125523.733117p26zeam623@webmail.uniud.it>

Dear Josef,

Thank you for your advice. I'm implementing both the log expansion and the
inner loop vectorization. I'm also checking the results of performing some
brute minimization over a plausible range of values. I still need to check
the consistency of the method, though.

Luca

From scotta_2002 at yahoo.com  Tue Jan 25 09:04:15 2011
From: scotta_2002 at yahoo.com (Scott Askey)
Date: Tue, 25 Jan 2011 06:04:15 -0800 (PST)
Subject: Re: [SciPy-User] python3 pip/easy_install alternative for scipy .9?
Message-ID: <574895.92496.qm@web36502.mail.mud.yahoo.com>

Resolved. I tried the easy_install3 package from distribute and it did not
work. It sounds like using such a setup tool to sandbox scipy and numpy is
likely not possible. I will give the .egg option a try.

Cheers,
Scott

From pub at jesuislibre.net  Tue Jan 25 09:06:49 2011
From: pub at jesuislibre.net (E)
Date: Tue, 25 Jan 2011 15:06:49 +0100
Subject: Re: [SciPy-User] Find the RMS from a PSD
In-Reply-To: <1295881303.6808.12.camel@localhost>
References: <1295881303.6808.12.camel@localhost>
Message-ID: <1295964409.14935.5.camel@localhost>

I think this was a bug in the PSD function in matplotlib 0.98.1 (Debian
lenny). This was making me crazy. I took newer source code of this function
from the project and all is going well.

So to get the RMS power from the PSD:

y, x = pylab.psd(data, NFFT=nfft, Fs=samplerate)
freqbandwidth = x[1]
y = y*freqbandwidth
rms = scipy.sqrt(y.sum())

It should be the same as:

rms = data**2
rms = scipy.sqrt(rms.sum()/len(data))

From runar.tenfjord at gmail.com  Tue Jan 25 10:59:49 2011
From: runar.tenfjord at gmail.com (Runar Tenfjord)
Date: Tue, 25 Jan 2011 16:59:49 +0100
Subject: [SciPy-User] ANN: python-sundials
Message-ID:

python-sundials is a Cython wrapper for the Sundials solver suite.

The wrapper is based on code posted on the cython-dev mailing list by
Mr. Jon Olav Vik.

Highlights

CVODE - solver for stiff and nonstiff ordinary differential equations
IDA - solver for the solution of differential-algebraic equation (DAE) systems
KINSOL - solver for nonlinear algebraic systems

The CVODE and IDA solvers support root finding, and the solver throws an
exception on finding a root. There is also an example of implementing the
right-side equation in Cython for speed.

home: http://code.google.com/p/python-sundials/

From simon.more at univ-provence.fr  Tue Jan 25 11:26:01 2011
From: simon.more at univ-provence.fr (Simon Moré)
Date: Tue, 25 Jan 2011 17:26:01 +0100
Subject: [SciPy-User] empty array and loadmat
Message-ID: <1295972761.1940.625.camel@simonworkstation>

Dear Scipy Users,

I'm trying to open EEGLab files using loadmat. EEGLab is a matlab toolbox
for processing continuous and event-related EEG, MEG and other
electrophysiological data. But some of the fields in the EEGLab structure
are empty, and loadmat fails.

My error:

In [6]: test = scipy.io.loadmat('EMG_all.set',appendmat=False)
[...]
/usr/lib/python2.6/dist-packages/scipy/io/matlab/miobase.py in chars_to_str(self, str_arr)
    336     def chars_to_str(self, str_arr):
    337         ''' Convert string array to string '''
--> 338         dt = np.dtype('U' + str(small_product(str_arr.shape)))
    339         return np.ndarray(shape=(),
    340                           dtype = dt,

TypeError: data type not understood

In the case of an empty string, at line 338, dt = 'U0', which is not valid;
indeed,

In [30]: dt = np.dtype('U0')
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
/home/.../<ipython console> in <module>()
TypeError: data type not understood

To solve this issue, we could either:
- force loadmat to fill the empty array
- add tests in loadmat

Another piece of information: loadmat does work with empty strings when the
matlab_compatible option is given. Please find attached a small example
EEGLab file, and the entire error in a text file. I found a ticket about
the same problem, but not in the same method:
http://projects.scipy.org/scipy/ticket/885

Digging into this problem, I realise that I don't understand the difference
between np.dtype('U#') and np.dtype(('U',#)), with # being an integer.
I thought the two were equivalent, but np.dtype(('U',0)) works:

In [11]: dt = numpy.dtype(('U',0))
In [12]: dt
Out[12]: dtype('<U0')

-------------- next part --------------
In [4]: test = scipy.io.loadmat('EMG_all.set',appendmat=False)
/usr/lib/python2.6/dist-packages/scipy/io/matlab/mio.py:84: FutureWarning: Using struct_as_record default value (False) This will change to True in future versions
  return MatFile5Reader(byte_stream, **kwargs)
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)

/home/simon/Work/Donnees/eeglab/<ipython console> in <module>()

/usr/lib/python2.6/dist-packages/scipy/io/matlab/mio.pyc in loadmat(file_name, mdict, appendmat, **kwargs)
    109     '''
    110     MR = mat_reader_factory(file_name, appendmat, **kwargs)
--> 111     matfile_dict = MR.get_variables()
    112     if mdict is not None:
    113         mdict.update(matfile_dict)

/usr/lib/python2.6/dist-packages/scipy/io/matlab/miobase.py in get_variables(self, variable_names)
    359             getter.to_next()
    360             continue
--> 361         res = getter.get_array()
    362         mdict[name] = res
    363         if getter.is_global:

/usr/lib/python2.6/dist-packages/scipy/io/matlab/miobase.py in get_array(self)
    400     def get_array(self):
    401         ''' Gets an array from matrix, and applies any necessary processing '''
--> 402         arr = self.get_raw_array()
    403         return self.array_reader.processor_func(arr, self)
    404

/usr/lib/python2.6/dist-packages/scipy/io/matlab/mio5.pyc in get_raw_array(self)
    484             item = pycopy(self.obj_template)
    485             for name in field_names:
--> 486                 item.__dict__[name] = self.read_element()
    487             result[i] = item
    488         return result.reshape(tupdims).T

/usr/lib/python2.6/dist-packages/scipy/io/matlab/mio5.pyc in read_element(self, *args, **kwargs)
    351
    352     def read_element(self, *args, **kwargs):
--> 353         return self.array_reader.read_element(*args, **kwargs)
    354
    355

/usr/lib/python2.6/dist-packages/scipy/io/matlab/mio5.pyc in read_element(self, copy)
    246         # Deal with miMATRIX type (cannot pass byte string)
    247         if mdtype == miMATRIX:
--> 248             return self.current_getter(byte_count).get_array()
    249         # All other types can be read from string
    250         raw_str = self.mat_stream.read(byte_count)

/usr/lib/python2.6/dist-packages/scipy/io/matlab/miobase.py in get_array(self)
    400     def get_array(self):
    401         ''' Gets an array from matrix, and applies any necessary processing '''
--> 402         arr = self.get_raw_array()
    403         return self.array_reader.processor_func(arr, self)
    404

/usr/lib/python2.6/dist-packages/scipy/io/matlab/mio5.pyc in get_raw_array(self)
    484             item = pycopy(self.obj_template)
    485             for name in field_names:
--> 486                 item.__dict__[name] = self.read_element()
    487             result[i] = item
    488         return result.reshape(tupdims).T

/usr/lib/python2.6/dist-packages/scipy/io/matlab/mio5.pyc in read_element(self, *args, **kwargs)
    351
    352     def read_element(self, *args, **kwargs):
--> 353         return self.array_reader.read_element(*args, **kwargs)
    354
    355

/usr/lib/python2.6/dist-packages/scipy/io/matlab/mio5.pyc in read_element(self, copy)
    246         # Deal with miMATRIX type (cannot pass byte string)
    247         if mdtype == miMATRIX:
--> 248             return self.current_getter(byte_count).get_array()
    249         # All other types can be read from string
    250         raw_str = self.mat_stream.read(byte_count)

/usr/lib/python2.6/dist-packages/scipy/io/matlab/miobase.py in get_array(self)
    401         ''' Gets an array from matrix, and applies any necessary processing '''
    402         arr = self.get_raw_array()
--> 403         return self.array_reader.processor_func(arr, self)
    404
    405     def get_raw_array(self):
/usr/lib/python2.6/dist-packages/scipy/io/matlab/miobase.py in func(arr, getter) 317 arr = np.empty(n_dims, dtype=dtstr) 318 for i in range(0, n_dims[-1]): --> 319 arr[...,i] = self.chars_to_str(str_arr[i]) 320 else: # return string 321 arr = self.chars_to_str(arr) /usr/lib/python2.6/dist-packages/scipy/io/matlab/miobase.py in chars_to_str(self, str_arr) 336 def chars_to_str(self, str_arr): 337 ''' Convert string array to string ''' --> 338 dt = np.dtype('U' + str(small_product(str_arr.shape))) 339 return np.ndarray(shape=(), 340 dtype = dt, TypeError: data type not understood From joonpyro at gmail.com Tue Jan 25 11:41:07 2011 From: joonpyro at gmail.com (Joon Ro) Date: Tue, 25 Jan 2011 10:41:07 -0600 Subject: [SciPy-User] scipy.optimize named argument inconsistency Message-ID: Hi, I just found that for some functions such as fmin_bfgs, the argument name for the objective function to be minimized is f, and for others such as fmin, it is func. I was wondering if this was intended, because I think it would be better to have consistent argument names across those functions. http://docs.scipy.org/doc/scipy/reference/optimize.html -Joon -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From yyc at solvcon.net Tue Jan 25 11:45:09 2011 From: yyc at solvcon.net (Yung-Yu Chen) Date: Tue, 25 Jan 2011 11:45:09 -0500 Subject: [SciPy-User] Request for comments: Python HPC package for solving PDEs (CFD, solid mechanics, electromagnetics, etc.) Message-ID: Dear all, I am developing a Python package (a framework), named SOLVCON, for solving partial differential equations (PDEs) by using large CPU/GPU clusters for high-fidelity solutions. The main structure of SOLVCON, along with two solvers, has been pretty much done. I have open-sourced it under GPL, and maintained its web site at http://solvcon.net . At this stage, to direct the future course of development, I need comments from Python users of scientific computing, especially those who use or want to use Python for large-scale calculations of hyperbolic PDEs. As a matter of fact, SOLVCON does not use scipy, but heavily depends on numpy. I think subscribers of this list could be interested in this subject, though. (I will be happy to know if there's a better place for me to post the request for comments.) I will also give a talk and a poster in the upcoming PyCon US 2011. The applications of SOLVCON currently focus on solving conservation laws, or first-order hyperbolic PDEs. SOLVCON is designed for segregation of very modular solver kernels for different numerical methods and/or physical models. I use the space-time Conservation Element and Solution Element (CESE) method (http://www.grc.nasa.gov/WWW/microbus/ ) as the default numerical method, for its outstanding capability for multi-physics. I have done solvers of the Euler equations for gas dynamics and the velocity-stress equations for stress waves in anisotropic solids. Of course, I am still developing for more physical processes. My expectation for SOLVCON is to make it a productive supercomputing tool for calculating PDEs. There's a long way to go, but SOLVCON can now handle hundreds of nodes without any problem (current record: 512 nodes, 2,048 cores for 23 million cells). SOLVCON uses compiled code written in other languages (such as C or CUDA) for speed in number-crunching core via ctypes. A proof-of-concept GPU solver has also been developed (but not released; still needs improvement). 
Important capabilities of SOLVCON include:

- Pluggable multi-physics with the built-in CESE method.
- Unstructured meshes composed of mixed elements (tets, hexes, wedges, pyramids).
- Mesh readers for Fluent Gambit Neutral and Cubit Genesis/ExodusII.
- Native writers to VTK legacy and XML files.
- Hybrid parallelism; simultaneous shared- and distributed-memory parallel computing.
- Parallel I/O.
- In situ visualization (experimental; very primitive).

SOLVCON is developed from the ground up. During the development of SOLVCON,
I have continuously surveyed related projects, including PETSc, Trilinos,
FEniCS, Hypre, and hpGEM. Although these projects are of great quality and
much more mature than SOLVCON, they do not fulfill my needs. I developed
SOLVCON because I need something high-performance, productive, and flexible
for large-scale, transient calculations of conservation laws.

Any comment will be appreciated a lot. If it's too off-topic to discuss
SOLVCON on the scipy-user list, you are very welcome to join the SOLVCON
mailing list: http://groups.google.com/group/solvcon or to email me. Other
information about SOLVCON can be obtained at our web site:
http://solvcon.net/. You can access my publications about SOLVCON at my
home page http://cfd.eng.ohio-state.edu/~yungyuc/.

Thanks.

with regards,
Yung-Yu Chen

--
Yung-Yu Chen
PhD candidate of Mechanical Engineering
The Ohio State University, Columbus, Ohio
+1 (614) 859 2436
http://cfd.eng.ohio-state.edu/~yungyuc/

From nwagner at iam.uni-stuttgart.de  Tue Jan 25 12:05:22 2011
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Tue, 25 Jan 2011 18:05:22 +0100
Subject: Re: [SciPy-User] ANN: python-sundials
In-Reply-To: References: Message-ID:

Just curious. What is the difference between
http://sourceforge.net/projects/pysundials/develop
and
http://code.google.com/p/python-sundials/

Nils

On Tue, 25 Jan 2011 16:59:49 +0100 Runar Tenfjord wrote:
> python-sundials is a Cython wrapper for the Sundials solver suite.
> [...]
From runar.tenfjord at gmail.com  Tue Jan 25 12:06:08 2011
From: runar.tenfjord at gmail.com (Runar Tenfjord)
Date: Tue, 25 Jan 2011 18:06:08 +0100
Subject: [SciPy-User] ANN: python-orcc
Message-ID:

Cython wrapper for the orc assembly compiler

homepage: http://code.google.com/p/python-orcc/
orc homepage: http://code.entropywave.com/projects/orc/

Orc is a library and set of tools for compiling and executing very simple
programs that operate on arrays of data. The 'language' is a generic
assembly language that represents many of the features available in SIMD
architectures, including saturated addition and subtraction, and many
arithmetic operations. Current targets: SSE, MMX, ARM, Altivec, and NEON.

The main target of the original Orc library is to accelerate video and
audio codecs. Several new opcodes are therefore added to support scientific
calculation. There are several examples of use included in the source
distribution. Preliminary tests show huge speedups (5-10x) compared with
the numexpr tool. Further speed increases are expected if multithreading is
added.

Runar Tenfjord

From runar.tenfjord at gmail.com  Tue Jan 25 12:10:50 2011
From: runar.tenfjord at gmail.com (Runar Tenfjord)
Date: Tue, 25 Jan 2011 18:10:50 +0100
Subject: Re: [SciPy-User] ANN: python-sundials
In-Reply-To: References: Message-ID:

Hello,

The 'pysundials' project is based on a pure python ctypes wrapper. The
'python-sundials' project is based on a Cython wrapper, and therefore
allows one to create a prototype in pure python and later quickly move the
code to Cython, for speed comparable to an implementation in C/C++.

Best regards
Runar Tenfjord

On Tue, Jan 25, 2011 at 6:05 PM, Nils Wagner wrote:
> Just curious. What is the difference between
> http://sourceforge.net/projects/pysundials/develop
> and
> http://code.google.com/p/python-sundials/
> [...]
From nwagner at iam.uni-stuttgart.de  Tue Jan 25 12:18:43 2011
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Tue, 25 Jan 2011 18:18:43 +0100
Subject: Re: [SciPy-User] ANN: python-sundials
In-Reply-To: References: Message-ID:

Hi Runar,

Thank you for your response.
Are there any prerequisites to install python-sundials?

I tried

svn checkout http://python-sundials.googlecode.com/svn/trunk/ python-sundials-read-only
cd python-sundials-read-only
python setup.py install --prefix=$HOME/local

Nils

On Tue, 25 Jan 2011 18:10:50 +0100 Runar Tenfjord wrote:
> Hello,
>
> The 'pysundials' project is based on a pure python ctypes wrapper.
> [...]
From faltet at pytables.org  Tue Jan 25 12:39:11 2011
From: faltet at pytables.org (Francesc Alted)
Date: Tue, 25 Jan 2011 18:39:11 +0100
Subject: [SciPy-User] ANN: Numexpr 1.4.2 released
Message-ID: <201101251839.11824.faltet@pytables.org>

==========================
 Announcing Numexpr 1.4.2
==========================

Numexpr is a fast numerical expression evaluator for NumPy. With it,
expressions that operate on arrays (like "3*a+4*b") are accelerated and use
less memory than doing the same calculation in Python.

What's new
==========

This is a maintenance release. The most annoying issues have been fixed
(including the reduction bugs introduced in the 1.4 series). Several
performance enhancements are included as well.

In case you want to know in more detail what has changed in this version,
see:

http://code.google.com/p/numexpr/wiki/ReleaseNotes

or have a look at RELEASE_NOTES.txt in the tarball.

Where can I find Numexpr?
=========================

The project is hosted at Google code:

http://code.google.com/p/numexpr/

You can get the packages from PyPI as well:

http://pypi.python.org/pypi/numexpr

Share your experience
=====================

Let us know of any bugs, suggestions, gripes, kudos, etc. you may have.

Enjoy!

-- Francesc Alted
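The kind of use Numexpr is built for, as a quick illustration (the array
sizes are arbitrary):

import numpy as np
import numexpr as ne

a = np.random.rand(1000000)
b = np.random.rand(1000000)

# evaluated in chunks by numexpr's virtual machine, avoiding large temporaries
result = ne.evaluate("3*a + 4*b")

assert np.allclose(result, 3*a + 4*b)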
From runar.tenfjord at gmail.com  Tue Jan 25 12:45:59 2011
From: runar.tenfjord at gmail.com (Runar Tenfjord)
Date: Tue, 25 Jan 2011 18:45:59 +0100
Subject: Re: [SciPy-User] ANN: python-sundials
In-Reply-To: References: Message-ID:

Hello,

A binary build is available for Windows and Python version 2.7.

If you need to build for your platform, you need a working C compiler.
Cython is needed for the building process (not for running).

You also need to build the sundials library itself and place the library
and headers in a location that Distutils can find. The Sundials library
should be quite easy to build, as it is self-contained.

https://computation.llnl.gov/casc/sundials/download/download.html
(Download the complete archive, 7.7MB)

Runar

On Tue, Jan 25, 2011 at 6:18 PM, Nils Wagner wrote:
> Hi Runar,
>
> Thank you for your response.
> Are there any prerequisites to install python-sundials?
> [...]

From nwagner at iam.uni-stuttgart.de  Tue Jan 25 12:57:10 2011
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Tue, 25 Jan 2011 18:57:10 +0100
Subject: Re: [SciPy-User] ANN: python-sundials
In-Reply-To: References: Message-ID:

Runar,

I would like to install python-sundials from source.

What is a suitable location for the sundials library and header files?

Can I use a site.cfg file similar to numpy/scipy?

Nils

On Tue, 25 Jan 2011 18:45:59 +0100 Runar Tenfjord wrote:
> Hello,
>
> A binary build is available for Windows and Python version 2.7.
> [...]

From runar.tenfjord at gmail.com  Tue Jan 25 13:04:55 2011
From: runar.tenfjord at gmail.com (Runar Tenfjord)
Date: Tue, 25 Jan 2011 19:04:55 +0100
Subject: Re: [SciPy-User] ANN: python-sundials
In-Reply-To: References: Message-ID:

Hello,

Put the static *.a library files in the folder 'python-sundials/sundials'
and all the include files under 'python-sundials/sundials/include'.
Alternatively, you can just change the search paths in the 'setup.py' file.

I believe 'site.cfg' is specific to numpy/scipy, as it has its own fork of
the python build system.

Runar

On Tue, Jan 25, 2011 at 6:57 PM, Nils Wagner wrote:
> Runar,
>
> I would like to install python-sundials from source.
> [...]
From nwagner at iam.uni-stuttgart.de  Tue Jan 25 13:23:38 2011
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Tue, 25 Jan 2011 19:23:38 +0100
Subject: Re: [SciPy-User] ANN: python-sundials
In-Reply-To: References: Message-ID:

I followed your advice. However, the installation failed with

/home/nwagner/svn/python-sundials-read-only/sundials/KINSOL.pxi:272:57: undeclared name not builtin: KIN_INITIAL_GUESS_OK

Error converting Pyrex file to C:
------------------------------------------------------------
...
        return self.x

    def __dealloc__(self):
        if self.thisptr != NULL:
            KINFree(self.thisptr)
           ^
------------------------------------------------------------
/home/nwagner/svn/python-sundials-read-only/sundials/KINSOL.pxi:279:19: undeclared name not builtin: KINFree

building 'sundials' extension
/usr/bin/gcc -fno-strict-aliasing -DNDEBUG -fmessage-length=0 -O2 -Wall -D_FORTIFY_SOURCE=2 -fstack-protector -funwind-tables -fasynchronous-unwind-tables -g -fPIC -Isundials -Isundials/include -I/usr/include/python2.6 -c sundials/sundials.c -o build/temp.linux-x86_64-2.6/sundials/sundials.o
sundials/sundials.c:1:2: error: #error Do not use this file, it is the result of a failed Cython compilation.
Traceback: error: command '/usr/bin/gcc' failed with exit status 1
Press enter to continue
Traceback (most recent call last):
  File "setup.py", line 77, in <module>
    input('Press enter to continue')
  File "<string>", line 0
       ^
SyntaxError: unexpected EOF while parsing

Any idea? Thanks in advance.

Nils

From matthew.brett at gmail.com  Tue Jan 25 13:29:24 2011
From: matthew.brett at gmail.com (Matthew Brett)
Date: Tue, 25 Jan 2011 10:29:24 -0800
Subject: Re: [SciPy-User] empty array and loadmat
In-Reply-To: <1295972761.1940.625.camel@simonworkstation>
References: <1295972761.1940.625.camel@simonworkstation>
Message-ID:

Hi,

> I'm trying to open EEGLab files using loadmat. EEGLab is a matlab
> toolbox for processing continuous and event-related EEG, MEG and other
> electrophysiological data. But some of the fields in the EEGLab
> structure are empty, and loadmat fails.
> [...]
> TypeError: data type not understood

Thanks for the report, and the file, that's very helpful.
I'm getting a different error in the latest scipy dev version, but there's
clearly a problem; I will try and look at it today.

Best,

Matthew

From matthew.brett at gmail.com  Tue Jan 25 14:32:28 2011
From: matthew.brett at gmail.com (Matthew Brett)
Date: Tue, 25 Jan 2011 11:32:28 -0800
Subject: Re: [SciPy-User] empty array and loadmat
In-Reply-To: References: <1295972761.1940.625.camel@simonworkstation>
Message-ID:

On Tue, Jan 25, 2011 at 10:29 AM, Matthew Brett <matthew.brett at gmail.com> wrote:
> [...]
> Thanks for the report, and the file, that's very helpful. I'm getting
> a different error in the latest scipy dev version, but there's clearly
> a problem, I will try and look at it today.

Boils down to:

octave-3.2.3:59> load Downloads/EMG_all.set
octave-3.2.3:60> var = EEG.urevent(2).type;
octave-3.2.3:61> whos var
Variables in the current scope:

  Attr Name        Size                     Bytes  Class
  ==== ====        ====                     =====  =====
       var         1x0                          0  char

Total is 0 elements using 0 bytes

octave-3.2.3:62> save one_by_zero_char.mat var -6

Matthew
-------------- next part --------------
A non-text attachment was scrubbed...
Name: one_by_zero_char.mat
Type: application/octet-stream
Size: 184 bytes
Desc: not available

From gael.varoquaux at normalesup.org  Tue Jan 25 16:08:18 2011
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Tue, 25 Jan 2011 22:08:18 +0100
Subject: Re: [SciPy-User] [SciPy-Dev] future of maxentropy module (was: sparse rmatvec and maxentropy)
In-Reply-To: References: Message-ID: <20110125210818.GF13877@phare.normalesup.org>

On Tue, Jan 25, 2011 at 06:15:22PM +0800, Ralf Gommers wrote:
> On the scikits.learn list someone said the maxentropy examples are nice,
> so perhaps they could be made to work with (translated to) the logistic
> regression code in scikits.learn.

OK, I'll see what we can do. I had a quick look at the examples, and they
seemed so synthetic that I couldn't get the point. But then again, I am not
a Natural Language Processing guy, so I'll see if I can get an NLP guy to
translate (and explain) the examples for the scikit.

Gaël

From gokhansever at gmail.com  Tue Jan 25 16:09:32 2011
From: gokhansever at gmail.com (Gökhan Sever)
Date: Tue, 25 Jan 2011 14:09:32 -0700
Subject: Re: [SciPy-User] ANN: python-sundials
In-Reply-To: References: Message-ID:

Hello,

I have heard of the Sundials library being used in a cloud modelling
application (particularly in a C++ model -- a modelling/simulation paper
titled "Adaptive method of lines for multi-component aerosol condensational
growth and CCN activation").

Could you please give some highlights of the advantages the Sundials
package has over Scipy's ODE solvers? Do these two solvers aim at different
domains?

Thanks.

On Tue, Jan 25, 2011 at 8:59 AM, Runar Tenfjord <runar.tenfjord at gmail.com> wrote:
> python-sundials is a Cython wrapper for the Sundials solver suite.
> [...]

--
Gökhan
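For context on the scipy side of that comparison, a sketch of
scipy.integrate.ode with the BDF method of the 'vode' integrator, the
closest built-in analogue of a stiff CVODE solve; the right-hand side here
is only illustrative, and note that the root finding the announcement
highlights for CVODE/IDA has no direct equivalent:

import numpy as np
from scipy.integrate import ode

def rhs(t, y):
    return [-0.5 * y[0]]   # illustrative decay problem

r = ode(rhs).set_integrator('vode', method='bdf')  # BDF for stiff problems
r.set_initial_value([1.0], 0.0)
while r.successful() and r.t < 10.0:
    r.integrate(r.t + 1.0)
    print r.t, r.y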
From matthew.brett at gmail.com  Tue Jan 25 16:30:15 2011
From: matthew.brett at gmail.com (Matthew Brett)
Date: Tue, 25 Jan 2011 13:30:15 -0800
Subject: Re: [SciPy-User] empty array and loadmat
In-Reply-To: References: <1295972761.1940.625.camel@simonworkstation>
Message-ID:

Hi,

On Tue, Jan 25, 2011 at 11:32 AM, Matthew Brett <matthew.brett at gmail.com> wrote:
> [...]
> octave-3.2.3:62> save one_by_zero_char.mat var -6

Fixed in svn revision r7087, I believe. I added the file above as a test.

Matthew

From alex.liberzon at gmail.com  Wed Jan 26 02:18:32 2011
From: alex.liberzon at gmail.com (Alex Liberzon)
Date: Wed, 26 Jan 2011 07:18:32 +0000
Subject: Re: [SciPy-User] SciPy-User Digest, Vol 89, Issue 46
Message-ID: <4d3fcacb.cafdd80a.7439.ffffe560@mx.google.com>

This could be great. Yes, they are responsive.
Turbulence Structure Laboratory Tel Aviv University alexlib at eng.tau.ac.il -----Original Message----- From: scipy-user-request at scipy.org Sent: 25/01/2011 20:00:06 Subject: SciPy-User Digest, Vol 89, Issue 46 Send SciPy-User mailing list submissions to scipy-user at scipy.org To subscribe or unsubscribe via the World Wide Web, visit http://mail.scipy.org/mailman/listinfo/scipy-user or, via email, send a message with subject or body 'help' to scipy-user-request at scipy.org You can reach the person managing the list at scipy-user-owner at scipy.org When replying, please edit your Subject line so it is more specific than "Re: Contents of SciPy-User digest..." Today's Topics: 1. Re: ANN: python-sundials (Nils Wagner) ---------------------------------------------------------------------- Message: 1 Date: Tue, 25 Jan 2011 18:57:10 +0100 From: "Nils Wagner" Subject: Re: [SciPy-User] ANN: python-sundials To: SciPy Users List Message-ID: Content-Type: text/plain;charset=utf-8; format="flowed" Runar, I would like to install python-sundials from source. What is a suitable location for the sundials library and header files ? Can I use a site.cfg file similar to numpy/scipy ? Nils On Tue, 25 Jan 2011 18:45:59 +0100 Runar Tenfjord wrote: > Hello, > > A binary build is available for Windows and Python >version 2.7 > > If you need to build for your platform you need a >working c compiler. > Cython is needed for the building process. (Not for >running) > > You also need to build the sundials library itself and >place the > library and headers > in an location the Distutils can find it. The Sundials >library should be > quite easy to build as it is self contained. > > https://computation.llnl.gov/casc/sundials/download/download.html > (Download the complete archive 7.7MB) > > Runar > > On Tue, Jan 25, 2011 at 6:18 PM, Nils Wagner > wrote: >> Hi Runar, >> >> Thank you for your response. >> Are there any prerequisites to install python-sundials ? >> Email truncated to 2,000 characters From tomo.bbe at gmail.com Wed Jan 26 07:27:37 2011 From: tomo.bbe at gmail.com (James) Date: Wed, 26 Jan 2011 12:27:37 +0000 Subject: [SciPy-User] ANN: python-sundials In-Reply-To: References: Message-ID: Thank you, this is very interesting indeed and I will try it out when I get a spare minute. I wonder also if this will facilitate using the parallel solvers in Sundials using MPI from Python. Using MPI in Python is not something I have experience of, let alone Cython. Any thoughts on this.. would mpi4py work for example? Regards, James On Tue, Jan 25, 2011 at 3:59 PM, Runar Tenfjord wrote: > python-sundials is a Cython wrapper for the Sundials solver suite. > > The wrapper is based on code posten on the cython-dev mailing list by > Mr. Jon Olav Vik. > > Highlights > > CVODE - Solver for stiff and nonstiff ordinary differential equation > IDA - Solver for the solution of differential-algebraic equation (DAE) > systems > KINSOL - solver for nonlinear algebraic systems > > The CVODE and IDA solvers support root finding and the solver throws > an exception on finding a root. > There is also an example of implementing the right side equation in > Cython for speed. > > home: http://code.google.com/p/python-sundials/ > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From simon.more at univ-provence.fr Wed Jan 26 08:21:04 2011
From: simon.more at univ-provence.fr (Simon =?ISO-8859-1?Q?Mor=E9?=)
Date: Wed, 26 Jan 2011 14:21:04 +0100
Subject: [SciPy-User] empty array and loadmat
In-Reply-To: References: <1295972761.1940.625.camel@simonworkstation> Message-ID: <1296048064.1940.735.camel@simonworkstation>

Hi, Matthew,

Thank you for your efficient help! The last svn version works great for me!

Regards,
Simon

On Tuesday 25 January 2011 at 13:30 -0800, Matthew Brett wrote:
> Hi,
>
> On Tue, Jan 25, 2011 at 11:32 AM, Matthew Brett wrote:
> > On Tue, Jan 25, 2011 at 10:29 AM, Matthew Brett wrote:
> >> Hi,
> >>
> >>> I'm trying to open EEGlab files using loadmat. EEGLab is a matlab
> >>> toolbox processing continuous and event-related EEG, MEG and other
> >>> electrophysiological data.
> >>> But some of my fields in the EEGlab structure are empty and loadmat
> >>> fails.
> >>>
> >>> my error :
> >>>
> >>> In [6]: test = scipy.io.loadmat('EMG_all.set',appendmat=False)
> >>>
> >>> [...]
> >>> /usr/lib/python2.6/dist-packages/scipy/io/matlab/miobase.py in
> >>> chars_to_str(self, str_arr)
> >>>     336     def chars_to_str(self, str_arr):
> >>>     337         ''' Convert string array to string '''
> >>> --> 338         dt = np.dtype('U' + str(small_product(str_arr.shape)))
> >>>     339         return np.ndarray(shape=(),
> >>>     340                           dtype = dt,
> >>>
> >>> TypeError: data type not understood
> >>
> >> Thanks for the report, and the file, that's very helpful. I'm getting
> >> a different error in the latest scipy dev version, but there's clearly
> >> a problem, I will try and look at it today.
> >
> > Boils down to:
> >
> > octave-3.2.3:59> load Downloads/EMG_all.set
> > octave-3.2.3:60> var = EEG.urevent(2).type;
> > octave-3.2.3:61> whos var
> > Variables in the current scope:
> >
> >   Attr Name        Size                     Bytes  Class
> >   ==== ====        ====                     =====  =====
> >        var         1x0                          0  char
> >
> > Total is 0 elements using 0 bytes
> >
> > octave-3.2.3:62> save one_by_zero_char.mat var -6
>
> Fixed in svn revision r7087 I believe. I added the file above as a test,
>
> Matthew
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

--
Simon MORÉ
Ingénieur d'étude au Laboratoire de Neurobiologie de la Cognition,
Pole 3C, CNRS, UMR 6155, Marseille
simon.more at univ-provence.fr
04 13 55 09 38

From abli at freemail.hu Wed Jan 26 09:20:05 2011
From: abli at freemail.hu (=?ISO-8859-2?Q?=C1bel_D=E1niel?=)
Date: Wed, 26 Jan 2011 15:20:05 +0100 (CET)
Subject: [SciPy-User] silently different results when running scipy.signal.fftconvolve on ubuntu karmic vs. lucid
Message-ID: 

Hi!

About a week ago I wrote about an issue with using scipy.signal.fftconvolve and scipy.signal.correlate2d. I hit another issue since: apparently scipy.signal.fftconvolve gives different results under ubuntu karmic and ubuntu lucid. I assume this is due to some small difference in the versions of the libraries used by the two; I already submitted a bugreport to ubuntu's bugtracker (https://bugs.launchpad.net/ubuntu/+source/python-scipy/+bug/705354)

Since that report didn't get any reaction, I thought I would ask here, as well: any idea what might be causing this difference?

In addition, given that running the same program on two very similar systems silently gives two very different results, I have been wondering: is there a regression-testing suite that scipy uses?
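For concreteness, the check I am running on both machines boils down to something like the following untested sketch (the array sizes, kernel, and filename here are made up -- my real input is image data):

import numpy as np
from scipy import signal

np.random.seed(0)                    # identical input on both machines
img = np.random.rand(64, 64)         # placeholder for my real image data
kernel = np.ones((8, 8)) / 64.0
out = signal.fftconvolve(img, kernel, mode='same')
np.savez('fftconvolve_result.npz', out=out)  # compare this file across machines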
I assume that a lot of scientists use scipy and programs based on scipy, and having results change for no apparent reason doesn't exactly instill confidence, and might even call into question the reproducibility of any scientific paper based on data analysis that used scipy.

Any thoughts?

Daniel Abel
abli at freemail.hu

From pav at iki.fi Wed Jan 26 10:02:31 2011
From: pav at iki.fi (Pauli Virtanen)
Date: Wed, 26 Jan 2011 15:02:31 +0000 (UTC)
Subject: [SciPy-User] silently different results when running scipy.signal.fftconvolve on ubuntu karmic vs. lucid
References: Message-ID: 

Wed, 26 Jan 2011 15:20:05 +0100, Ábel Dániel wrote:
> About a week ago I wrote about an issue with using
> scipy.signal.fftconvolve and scipy.signal.correlate2d. I hit another
> issue since: apparently scipy.signal.fftconvolve gives different results
> under ubuntu karmic and ubuntu lucid. I assume this is due to some small
> difference in the versions of the libraries used by the two; I already
> submitted a bugreport to ubuntu's bugtracker
> (https://bugs.launchpad.net/ubuntu/+source/python-scipy/+bug/705354)
>
> Since that report didn't get any reaction, I thought I would ask here,
> as well: any idea what might be causing this difference?
[clip]

Ubuntu Karmic and Lucid have the *same* versions of Numpy and Scipy, 1.3.0 and 0.7.0:

http://packages.ubuntu.com/search?keywords=python-scipy
http://packages.ubuntu.com/search?keywords=python-numpy

Seems hardly possible that it can be an issue with Numpy/Scipy. Probably the image package you used to save the images had bugs with dealing with arrays, as its version is different:

http://packages.ubuntu.com/search?keywords=python-imaging

Did you check that the actual numerical contents of the arrays disagree? (Try saving the results with numpy.savez to compare across machines.)

-- Pauli Virtanen

From pav at iki.fi Wed Jan 26 10:22:48 2011
From: pav at iki.fi (Pauli Virtanen)
Date: Wed, 26 Jan 2011 15:22:48 +0000 (UTC)
Subject: [SciPy-User] silently different results when running scipy.signal.fftconvolve on ubuntu karmic vs. lucid
References: Message-ID: 

Wed, 26 Jan 2011 15:02:31 +0000, Pauli Virtanen wrote:
[clip]
> Probably the image package you used to save the images had bugs with
> dealing with arrays, as its version is different:
[clip]

This turns out to be the case: using Python-imaging 1.1.6 on Maverick, the example program produces garbled images, whereas with 1.1.7 it works as expected. So, the problem is not in Numpy/Scipy and the numerical data is OK.

However, the image library apparently used to interpret the array data given to it as 8-bit integers in all cases (resulting in the strange 8-pixel and 16-pixel stripe structure in the figures).

-- Pauli Virtanen

From wkerzendorf at googlemail.com Wed Jan 26 21:33:16 2011
From: wkerzendorf at googlemail.com (Wolfgang Kerzendorf)
Date: Thu, 27 Jan 2011 13:33:16 +1100
Subject: [SciPy-User] compiling scipy with sunperf library
Message-ID: <4D40D96C.2070604@gmail.com>

Hello,

I'm stuck compiling scipy with sunperf on a solaris x86.
------
Traceback (most recent call last):
  File "setup.py", line 160, in <module>
    setup_package()
  File "setup.py", line 152, in setup_package
    configuration=configuration )
  File "/opt/csw/lib/python/site-packages/numpy/distutils/core.py", line 152, in setup
    config = configuration()
  File "setup.py", line 118, in configuration
    config.add_subpackage('scipy')
  File "/opt/csw/lib/python/site-packages/numpy/distutils/misc_util.py", line 957, in add_subpackage
    caller_level = 2)
  File "/opt/csw/lib/python/site-packages/numpy/distutils/misc_util.py", line 926, in get_subpackage
    caller_level = caller_level + 1)
  File "/opt/csw/lib/python/site-packages/numpy/distutils/misc_util.py", line 863, in _get_configuration_from_setup_py
    config = setup_module.configuration(*args)
  File "scipy/setup.py", line 8, in configuration
    config.add_subpackage('integrate')
  File "/opt/csw/lib/python/site-packages/numpy/distutils/misc_util.py", line 957, in add_subpackage
    caller_level = 2)
  File "/opt/csw/lib/python/site-packages/numpy/distutils/misc_util.py", line 926, in get_subpackage
    caller_level = caller_level + 1)
  File "/opt/csw/lib/python/site-packages/numpy/distutils/misc_util.py", line 863, in _get_configuration_from_setup_py
    config = setup_module.configuration(*args)
  File "scipy/integrate/setup.py", line 10, in configuration
    blas_opt = get_info('blas_opt',notfound_action=2)
  File "/opt/csw/lib/python/site-packages/numpy/distutils/system_info.py", line 303, in get_info
    return cl().get_info(notfound_action)
  File "/opt/csw/lib/python/site-packages/numpy/distutils/system_info.py", line 454, in get_info
    raise self.notfounderror,self.notfounderror.__doc__
numpy.distutils.system_info.BlasNotFoundError: Blas (http://www.netlib.org/blas/) libraries not found.
Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [blas]) or by setting the BLAS environment variable.
------

We tried several things to point it to the libsunperf.so, but it doesn't seem to take it. What are your suggestions?

Cheers
Wolfgang

From Michael.Potter at ga.gov.au Wed Jan 26 22:42:38 2011
From: Michael.Potter at ga.gov.au (Michael.Potter at ga.gov.au)
Date: Thu, 27 Jan 2011 14:42:38 +1100
Subject: [SciPy-User] build problem [SEC=UNCLASSIFIED]
Message-ID: <8155A9415EAC954985A2D6CF588D04D8032CFA@EXCCR01.agso.gov.au>

Hi

Trying to build SciPy 64-bit on Solaris 10, using Python 2.7.1 and SciPy 0.8.0. I am able to pass options to the compiler(s) OK, and produce 64-bit object files. However, the linker trips up because it is not being passed the -m64 option, and I can't work out how to pass or set the linker options:

/opt/SUNWspro/bin/f90 -Bdynamic -G -Bdynamic -G build/temp.solaris-2.10-sun4u-2.7/build/src.solaris-2.10-sun4u-2.7/scipy/integrate/_dopmodule.o build/temp.solaris-2.10-sun4u-2.7/build/src.solaris-2.10-sun4u-2.7/fortranobject.o -Lbuild/temp.solaris-2.10-sun4u-2.7 -ldop -lfsu -lsunmath -lmvec -o build/lib.solaris-2.10-sun4u-2.7/scipy/integrate/_dop.so
ld: fatal: file build/temp.solaris-2.10-sun4u-2.7/build/src.solaris-2.10-sun4u-2.7/scipy/integrate/_dopmodule.o: wrong ELF class: ELFCLASS64
ld: fatal: File processing errors. No output written to build/lib.solaris-2.10-sun4u-2.7/scipy/integrate/_dop.so
ld: fatal: file build/temp.solaris-2.10-sun4u-2.7/build/src.solaris-2.10-sun4u-2.7/scipy/integrate/_dopmodule.o: wrong ELF class: ELFCLASS64
ld: fatal: File processing errors.
No output written to build/lib.solaris-2.10-sun4u-2.7/scipy/integrate/_dop.so
error: Command "/opt/SUNWspro/bin/f90 -Bdynamic -G -Bdynamic -G build/temp.solaris-2.10-sun4u-2.7/build/src.solaris-2.10-sun4u-2.7/scipy/integrate/_dopmodule.o build/temp.solaris-2.10-sun4u-2.7/build/src.solaris-2.10-sun4u-2.7/fortranobject.o -Lbuild/temp.solaris-2.10-sun4u-2.7 -ldop -lfsu -lsunmath -lmvec -o build/lib.solaris-2.10-sun4u-2.7/scipy/integrate/_dop.so" failed with exit status 1

If I run the link command manually, and pass in the -m64 option, it works fine. I have tried setting -library-dirs and -libraries in order to get the option passed in, but they still don't appear in this link step.

Any help appreciated!

Cheers
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From valene.pellissier at nag.co.uk Tue Jan 18 06:53:41 2011
From: valene.pellissier at nag.co.uk (Valene Pellissier)
Date: Tue, 18 Jan 2011 11:53:41 +0000
Subject: [SciPy-User] Cross-compiling scipy on Cray-XE6
Message-ID: <4D357F45.7050204@nag.co.uk>

Hi,

I have some problems installing scipy dynamically on Cray XE6. I can install Python-2.6.5 and Numpy-1.4.1 without problems. I know Numpy doesn't need a Fortran compiler, but even at this step, distutils doesn't find any Fortran compiler while, with another Gnu environment, I managed to install Numpy and Scipy. I'm actually using PrgEnv-gnu/3.1.37G with gcc/4.5.1. A user on another Cray XE6 managed to install Scipy with PrgEnv-gnu/3.1.37E and gcc/4.5.1. This version of the gnu environment is obviously not available any more.

What do you suggest? A newer version of Python? An older version of the gnu compilers?

Thank you,
Valene

________________________________________________________________________
The Numerical Algorithms Group Ltd is a company registered in England and Wales with company number 1249803. The registered office is: Wilkinson House, Jordan Hill Road, Oxford OX2 8DR, United Kingdom. This e-mail has been scanned for all viruses by Star. The service is powered by MessageLabs.
________________________________________________________________________

From pholvey at gmail.com Tue Jan 18 11:31:16 2011
From: pholvey at gmail.com (Patrick Holvey)
Date: Tue, 18 Jan 2011 11:31:16 -0500
Subject: [SciPy-User] Scipy.Optimize on GPUs?
Message-ID: 

Good morning everyone.

I'm currently calling scipy.optimize.fmin_cg() to minimize the energy of a crystal system by changing the xyz coordinates of atoms in the system. I'm working on deriving the fprime function, but currently, the gradient is being estimated by fmin_cg. I have access to some Nvidia Teslas but not much experience running Python on GPUs. I was wondering if there was an already GPU-enabled optimization algorithm somewhere in scipy or some other package. Does anyone know of one?

Many thanks,

Patrick

--
Patrick Holvey
Graduate Student
Dept. of Materials Science and Engineering
Johns Hopkins University
pholvey1 at jhu.edu
Cell: (865)-659-9908
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Gregor.Thalhammer at i-med.ac.at Fri Jan 21 03:12:49 2011
From: Gregor.Thalhammer at i-med.ac.at (Gregor Thalhammer)
Date: Fri, 21 Jan 2011 09:12:49 +0100
Subject: [SciPy-User] scipy.optimize lmder_ and dlopen / dlsym
In-Reply-To: References: Message-ID: <55295624-00E4-484E-9F8C-1B042E80FC10@i-med.ac.at>

On 20.1.2011, at 17:05, Sebastian Haase wrote:
> Hi Gregor,
>
> sure. If you don't mind I would like to take a look at your code.
> My images are time series of 512x512 pixels.
> But my spots are only few
> pixels in diameter, so that I can - after some filtering and
> segmentation - crop the image into many small 7x7 piclets. Those are
> the ones where I have to do the 2d Gaussian fitting.
> Do you have any pointer on where I can learn about the part where you
> say "solving the normal equations instead of qr " ... ?
>

I like this manuscript very much: "METHODS FOR NON-LINEAR LEAST SQUARES PROBLEMS". In your case the Gaussian fit function depends linearly on the peak value, and nonlinearly on the position. In this case I found this "variable projection" algorithm very useful:

http://www2.imm.dtu.dk/~hbn/publ/TR0001.ps

In my experience the main advantage of this algorithm is the better robustness of the fit against poor starting values.

Gregor
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Xiao.Wang at liverpool.ac.uk Thu Jan 20 12:54:38 2011
From: Xiao.Wang at liverpool.ac.uk (Wang, Xiao)
Date: Thu, 20 Jan 2011 17:54:38 +0000
Subject: [SciPy-User] Scipy installation
Message-ID: 

Hello,

I am trying to install Scipy in my openSuSE 11.3 system, but encountered a problem. I looked at your documentation at http://scipy.org/Installing_SciPy/Linux#head-3dbbf9abe395e974d3ec911ae56863f4af70df9b. I followed the section on openSUSE to access http://download.opensuse.org/repositories/science/openSUSE_11.3/src/ and downloaded python-scipy-0.8.0-12.14.src.rpm. When I ran rpm, I got a warning.

The message is displayed below:

# rpm -U python-scipy-0.8.0-12.14.src.rpm
# warning: python-scipy-0.8.0-12.14.src.rpm: Header V3 DSA/SHA1 Signature, key ID 943d8bb8: NOKEY

Have I installed SciPy successfully? Any idea?

Thanks,
Xiao
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Charles.Lytle at portlandoregon.gov Thu Jan 20 13:41:28 2011
From: Charles.Lytle at portlandoregon.gov (Lytle, Charles)
Date: Thu, 20 Jan 2011 10:41:28 -0800
Subject: [SciPy-User] [SciPy-user] Python EM Project
Message-ID: <9D26DACE71E54345ADA98EC88D52C202015CB22275E1@MAIL2.rose.portland.local>

Hi,

Just found your thread posting on Rob Lytle's EM Python Project. Rob passed away unexpectedly in November 2008.

Chuck Lytle

Charles R. Lytle, Ph.D.
City of Portland
Water Pollution Control Laboratory
6543 North Burlington Avenue
Portland, Oregon 97203
Phone: (503)823-5568
Fax: (503)823-5656

Confidentiality Note: This electronic mail transmission contains information belonging to City of Portland, Oregon - Environmental Services. This information may be confidential and/or legally privileged and is intended only for the use of the addressee designated above. If you are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or the taking of any action in reliance on the contents of this electronic information is strictly prohibited. If you have received this electronic mail in error, please notify us immediately.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From maubriga at gmail.com Fri Jan 21 08:44:34 2011
From: maubriga at gmail.com (Mauro)
Date: Fri, 21 Jan 2011 13:44:34 +0000
Subject: [SciPy-User] Problem with win32com and scikits timeseries
Message-ID: 

Hello,

I get the following error when importing Date from scikits.timeseries. The error is there only when I interface python with Excel using COM. Any help?
I am using Python2.7 and scikits.timeseries-0.91.3 in _Invoke_ with 1004 1033 3 (, 20.0, 0.0027397260273972603, 34.0) Traceback (most recent call last): File "C:\Appl\Python27\lib\site-packages\win32com\server\dispatcher.py", line 47, in _Invoke_ return self.policy._Invoke_(dispid, lcid, wFlags, args) File "C:\Appl\Python27\lib\site-packages\win32com\server\policy.py", line 277, in _Invoke_ return self._invoke_(dispid, lcid, wFlags, args) File "C:\Appl\Python27\lib\site-packages\win32com\server\policy.py", line 282, in _invoke_ return S_OK, -1, self._invokeex_(dispid, lcid, wFlags, args, None, None) File "C:\Appl\Python27\lib\site-packages\win32com\server\policy.py", line 585, in _invokeex_ return func(*args) File "C:\Repositories\ComInterface.py", line 109, in addTree newmatrix, header = convertObjectToList(forwardVolatilityArray) File "C:\Repositories\ComInterface.py", line 40, in convertObjectToList newrow.append(fromTimeToDateTime(int(cell))) File "C:\Repositories\ComInterface.py", line 13, in fromTimeToDateTime from scikits.timeseries import Date File "C:\Appl\Python27\lib\site-packages\scikits.timeseries-0.91.3-py2.7-win32.egg\scikits\timeseries\__init__.py", line 13, in import const File "C:\Appl\Python27\lib\site-packages\scikits.timeseries-0.91.3-py2.7-win32.egg\scikits\timeseries\const.py", line 79, in from cseries import freq_constants ImportError: DLL load failed: A dynamic link library (DLL) initialization routine failed. pythoncom error: Python error invoking COM method. Traceback (most recent call last): File "C:\Appl\Python27\lib\site-packages\win32com\server\dispatcher.py", line 163, in _Invoke_ return DispatcherBase._Invoke_(self, dispid, lcid, wFlags, args) File "C:\Appl\Python27\lib\site-packages\win32com\server\dispatcher.py", line 49, in _Invoke_ return self._HandleException_() File "C:\Appl\Python27\lib\site-packages\win32com\server\dispatcher.py", line 47, in _Invoke_ return self.policy._Invoke_(dispid, lcid, wFlags, args) File "C:\Appl\Python27\lib\site-packages\win32com\server\policy.py", line 277, in _Invoke_ return self._invoke_(dispid, lcid, wFlags, args) File "C:\Appl\Python27\lib\site-packages\win32com\server\policy.py", line 282, in _invoke_ return S_OK, -1, self._invokeex_(dispid, lcid, wFlags, args, None, None) File "C:\Appl\Python27\lib\site-packages\win32com\server\policy.py", line 585, in _invokeex_ return func(*args) File "C:\Repositories\ComInterface.py", line 109, in addTree newmatrix, header = convertObjectToList(forwardVolatilityArray) File "C:\Repositories\ComInterface.py", line 40, in convertObjectToList newrow.append(fromTimeToDateTime(int(cell))) File "C:\Repositories\ComInterface.py", line 13, in fromTimeToDateTime from scikits.timeseries import Date File "C:\Appl\Python27\lib\site-packages\scikits.timeseries-0.91.3-py2.7-win32.egg\scikits\timeseries\__init__.py", line 13, in import const File "C:\Appl\Python27\lib\site-packages\scikits.timeseries-0.91.3-py2.7-win32.egg\scikits\timeseries\const.py", line 79, in from cseries import freq_constants ImportError: DLL load failed: A dynamic link library (DLL) initialization routine failed. in ._QueryInterface_ with unsupported IID IProvideClassInfo ({B196B283-BAB4-101A-B69C-00AA00341D07}) in ._QueryInterface_ with unsupported IID {CACC1E85-622B-11D2-AA78-00C04F9901D2} ({CACC1E85-622B-11D2-AA78-00C04F9901D2}) in _GetTypeInfo_ with index=0, lcid=1033 in _GetTypeInfo_ with index=0, lcid=0 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From Xiao.Wang at liverpool.ac.uk Fri Jan 21 09:41:11 2011
From: Xiao.Wang at liverpool.ac.uk (Wang, Xiao)
Date: Fri, 21 Jan 2011 14:41:11 +0000
Subject: [SciPy-User] matplotlib installation
Message-ID: 

Hello,

I am trying to install matplotlib in my openSuSE 11.3 system, but encountered a problem. I looked at your documentation at http://scipy.org/Installing_SciPy/Linux#head-3dbbf9abe395e974d3ec911ae56863f4af70df9b. I followed the section on openSUSE to access http://download.opensuse.org/repositories/science/openSUSE_11.3/src/ and downloaded python-matplotlib-1.0.0-19.2.i586.rpm. When I ran rpm, I got a warning.

The message is displayed below:

# rpm -U python-matplotlib-1.0.0-19.2.i586.rpm
# warning: python-matplotlib-1.0.0-19.2.i586.rpm: Header V3 DSA/SHA1 Signature, key ID 943d8bb8: NOKEY
# error: Failed dependencies:
Python-configobj is needed by python-matplotlib-1.0.0-19.2.i586
Python-dateutil is needed by python-matplotlib-1.0.0-19.2.i586
Python-tk is needed by python-matplotlib-1.0.0-19.2.i586

So I downloaded python-tz-2006p-1.2.i586.rpm from the above link and installed it. But where can I find the rpms for python-configobj and python-dateutil?

Thanks in advance,

Xiao
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From olivier.grisel at ensta.org Mon Jan 24 09:56:13 2011
From: olivier.grisel at ensta.org (Olivier Grisel)
Date: Mon, 24 Jan 2011 15:56:13 +0100
Subject: [SciPy-User] [Scikit-learn-general] future of maxentropy module (was: sparse rmatvec and maxentropy)
In-Reply-To: References: Message-ID: 

2011/1/24 Ralf Gommers :
>
> To the scikits.learn developers: would this code fit better and see more use
> in scikits.learn than in scipy? Would you be interested to pick it up?

There is already a maxent model in scikit learn which is a wrapper for LibLinear: scikits.learn.linear_model.LogisticRegression

AFAIK, LibLinear is pretty much state of the art so I don't think the scikits.learn project is interested in reusing this code.

Best,

--
Olivier
http://twitter.com/ogrisel - http://github.com/ogrisel

From egrefen at gmail.com Mon Jan 24 11:07:27 2011
From: egrefen at gmail.com (Edward Grefenstette)
Date: Mon, 24 Jan 2011 08:07:27 -0800 (PST)
Subject: [SciPy-User] Subclassing sparse matrices: problems with repr
Message-ID: <2456618.1782.1295885247328.JavaMail.geo-discussion-forums@yqmb9>

For a project I'm working on I'm building vectors with a large number of zero values, and thought it would be cool (and efficient) to use scipy's sparse tools. The basic deal is that these vector objects subclass lil_matrix (or some other sparse matrix class), set themselves up as an empty sparse matrix of dimensions based on the data they are being built from, and then fill themselves with values based on the data provided. Here is the constructor:

==============
from scipy.sparse import lil_matrix as sparsematrix

class nounVector(sparsematrix):
    def __init__(self, noun, corpusReader, basisMap, wordFilter = None,
                 relFilter = None, basisList = None):
        # Some definitions
        self.noun = noun
        self.processedChunks = corpusReader
        self.basisMap = basisMap.getBasisMap()
        self.dimensions = len(basisMap.getBasisMap())
        self.relFilter = relFilter
        self.basisList = basisList
        # End of defs

        # Initialise self as sparse matrix
        sparsematrix.__init__(self, (1,self.dimensions))

        # Fill values based on the data provided
        self.buildVector()
==============

The filling is done in self.buildVector().
No need to go into the details, but it basically does some processing of the data given to it by the corpus reader, calculates the count a particular item in the matrix needs to be incremented by, and then performs:

self[0,index] += count

So far, so good. In fact, everything works like I want it to (as far as I can tell). However, the problem arises when I try to print a list, dict etc. of an instance of this class. As far as I can tell, it's all down to the __repr__ function. For example:

>>> a = nounVector(/*datastuffs*/)
>>> print a
... (Correct output)
>>> print str(a)
... (All good)
>>> print repr(a) # or print [a], etc...
Traceback (most recent call last):
  File "/Users/Edward/Workspace/CoSemantic Vectors/src/testVectors.py", line 35, in <module>
    main(sys.argv)
  File "/Users/Edward/Workspace/CoSemantic Vectors/src/testVectors.py", line 28, in main
    print repr(nounv)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/sparse/base.py", line 158, in __repr__
    (self.shape + (self.dtype.type, nnz, _formats[format][1]))
KeyError: 'nou'

I have no clue what's gone wrong here or what this error means, and poking around in the source hasn't brought me much joy. Am I doing something wrong in my way of subclassing lil_matrix (etc)? Is something missing? Everything works on the functional side of things, but I'm afraid this sort of error is the tip of the iceberg and that other problems may crop up.

Any help, suggestions, criticism welcome. Thanks for reading, and thanks in advance for any info.

Best,
Edward

PS: I think I posted this earlier, but it didn't show up. Apologies if I double post.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From egrefen at gmail.com Mon Jan 24 23:08:37 2011
From: egrefen at gmail.com (Edward Grefenstette)
Date: Mon, 24 Jan 2011 20:08:37 -0800 (PST)
Subject: [SciPy-User] Subclassing sparse matrices: problems with repr
In-Reply-To: <2456618.1782.1295885247328.JavaMail.geo-discussion-forums@yqmb9>
Message-ID: <30568656.370.1295928517090.JavaMail.geo-discussion-forums@yqac31>

Solved! If anyone's curious, it boils down to what's happening in sparray.__init__ called by lil_matrix.__init__ (and other sparse matrices). The problem is fixed by having

self.format = sparsematrix.__name__[:3]

after the call to sparsematrix.__init__ in the above code. This will work for sparsematrix being a reference to any of the matrix types in scipy.sparse.

Tadaa...

Best,
Ed
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ejefree at yandex.ru Tue Jan 25 06:30:09 2011
From: ejefree at yandex.ru (ejefree)
Date: Tue, 25 Jan 2011 16:30:09 +0500
Subject: [SciPy-User] Extract a segment from a 2D array
In-Reply-To: <4D3CB38E.4030104@gmail.com>
References: <4D3CB38E.4030104@gmail.com>
Message-ID: 

>>> import numpy as np
>>> a = np.ones([100,100])
>>> b = a[10:43, 12:43].flatten()

2011/1/24 Xavier Gnata 
> Hi,
>
> Let's take an example:
> I have a 100x100 2D array.
> I want to extract all the values on a segment starting at (10,12) and
> ending at (42,42) and put them in a 1D array.
> Of course I can code the typical Bresenham's line algorithm but I think I
> am reinventing the wheel.
> What's the scipy way to deal with that? (If needed, I can use matplotlib)
>
>
> @+,
> Xavier
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From paul.kienzle at nist.gov Thu Jan 20 12:16:40 2011
From: paul.kienzle at nist.gov (Paul Kienzle)
Date: Thu, 20 Jan 2011 12:16:40 -0500
Subject: [SciPy-User] with seed(s)
Message-ID: <5E55E966-A626-4130-89EB-0A48826FE0A8@nist.gov>

I find that I sometimes want to save the state of the random number generator, push a new seed, perform some calculations, and restore the state of the generator. This happens when I want to run a simulation from a particular starting point, but don't want to mess with the random stream in the rest of my code.

One strategy is to pass around private random number generators to my objects, but this makes the interface cumbersome, particularly since all parts of my code, including third party plugins, need to agree to use that generator.

An alternative strategy is to set up a context manager, so that I can say:

    with seed(value):
        do some stuff

I've put together a little routine to do this (see below).

- Paul

----

import numpy

class seed(object):
    """
    Set the seed value for the random number generator.

    When used in a with statement, the random number generator state is
    restored after the with statement is complete.

    Parameters
    ----------
    *seed* : int or array_like, optional
        Seed for RandomState

    Example
    -------
    Seed can be used directly to set the seed::

        >>> import numpy
        >>> seed(24)  # doctest:+ELLIPSIS
        <...seed object at...>
        >>> print numpy.random.randint(0,1000000,3)
        [242082 899 211136]

    Seed can also be used in a with statement, which sets the random
    number generator state for the enclosed computations and restores
    it to the previous state on completion::

        >>> with seed(24):
        ...     print numpy.random.randint(0,1000000,3)
        [242082 899 211136]

    Using nested contexts, we can demonstrate that state is indeed
    restored after the block completes::

        >>> with seed(24):
        ...     print numpy.random.randint(0,1000000)
        ...     with seed(24):
        ...         print numpy.random.randint(0,1000000,3)
        ...     print numpy.random.randint(0,1000000)
        242082
        [242082 899 211136]
        899

    The restore step is protected against exceptions in the block::

        >>> with seed(24):
        ...     print numpy.random.randint(0,1000000)
        ...     try:
        ...         with seed(24):
        ...             print numpy.random.randint(0,1000000,3)
        ...             raise Exception()
        ...     except:
        ...         print "Exception raised"
        ...     print numpy.random.randint(0,1000000)
        242082
        [242082 899 211136]
        Exception raised
        899
    """
    def __init__(self, seed=None):
        self._state = numpy.random.get_state()
        numpy.random.seed(seed)
    def __enter__(self):
        return None
    def __exit__(self, *args):
        numpy.random.set_state(self._state)

From robert.kern at gmail.com Thu Jan 27 00:07:08 2011
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 26 Jan 2011 23:07:08 -0600
Subject: [SciPy-User] matplotlib installation
In-Reply-To: References: Message-ID: 

On Fri, Jan 21, 2011 at 08:41, Wang, Xiao wrote:
> Hello,
>
> I am trying to install matplotlib in my openSuSE 11.3 system, but
> encountered a problem.

Problems with openSUSE's packages should be reported to openSUSE. I don't think we can help you with them.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From ralf.gommers at googlemail.com Thu Jan 27 05:54:39 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Thu, 27 Jan 2011 18:54:39 +0800 Subject: [SciPy-User] scipy.optimize named argument inconsistency In-Reply-To: References: Message-ID: On Wed, Jan 26, 2011 at 12:41 AM, Joon Ro wrote: > Hi, > > I just found that for some functions such as fmin_bfgs, the argument name > for the objective function to be minimized is f, and for others such as > fmin, it is func. > I was wondering if this was intended, because I think it would be better to > have consistent argument names across those functions. > It's unlikely that that was intentional. A patch would be welcome. "func" looks better to me than "f" or "F". Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Thu Jan 27 06:38:16 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Thu, 27 Jan 2011 19:38:16 +0800 Subject: [SciPy-User] [SciPy-Dev] future of maxentropy module (was: sparse rmatvec and maxentropy) In-Reply-To: <20110125210818.GF13877@phare.normalesup.org> References: <20110125210818.GF13877@phare.normalesup.org> Message-ID: On Wed, Jan 26, 2011 at 5:08 AM, Gael Varoquaux < gael.varoquaux at normalesup.org> wrote: > On Tue, Jan 25, 2011 at 06:15:22PM +0800, Ralf Gommers wrote: > > On the scikits.learn list someone said the maxentropy examples are > nice, > > so perhaps they could be made to work with (translated to) the > logistic > > regression code in scikits.learn. > > OK, I'll see what we can do. > > I had a quick look at the examples, and they seemed so synthetic that I > couldn't get the point. But then again, I am not a Natural Language > Processing guy, so I'll see if I can get an NLP guy translate (and > explain) the examples to the scikit. > > That would be great, thanks. If it's not possible or useful, an answer from an expert explaining why would also be helpful. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From william.ratcliff at gmail.com Thu Jan 27 08:48:45 2011 From: william.ratcliff at gmail.com (william ratcliff) Date: Thu, 27 Jan 2011 08:48:45 -0500 Subject: [SciPy-User] Scipy.Optimize on GPUs? In-Reply-To: References: Message-ID: How large is your lattice? Is this for MD, or monte-carlo? On Tue, Jan 18, 2011 at 11:31 AM, Patrick Holvey wrote: > Good morning everyone. > > I'm currently calling scipy.optimize.fmin_cg() to minimize the energy of a > crystal system by changing the xyz coordinates of atoms in the system. I'm > working on deriving the fprime function, but currently, the gradient is > being estimated by fmin_cg. I have access to some Nvidia Tesla's but not > much experience running Python on GPUs. I was wondering if there was an > already GPU-enabled optimization algorithm somewhere in scipy or some other > package. Does anyone know of one? > > Many thanks, > > Patrick > > -- > Patrick Holvey > Graduate Student > Dept. of Materials Science and Engineering > Johns Hopkins University > pholvey1 at jhu.edu > Cell: (865)-659-9908 > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From allxufang at gmail.com Thu Jan 27 10:27:44 2011 From: allxufang at gmail.com (Fang Xu) Date: Thu, 27 Jan 2011 16:27:44 +0100 Subject: [SciPy-User] error of compiling scipy on x86_64 linux cluster Message-ID: Hi all, I managed to compile numpy, but there're some errors of compiling scipy. computer information: Linux node008 2.6.18.2-34-default #1 SMP Mon Nov 27 11:46:27 UTC 2006 x86_64 x86_64 x86_64 GNU/Linux OS: openSUSE 10.2 (X86-64) kernel: 2.6.18.2-34-default python-2.6 error message: Traceback (most recent call last): File "setup.py", line 160, in setup_package() File "setup.py", line 127, in setup_package from numpy.distutils.core import setup File "/projects/AlyssaMaster2008/ext/lib64/python2.6/site-packages/numpy/__init__.py", line 136, in import add_newdocs File "/projects/AlyssaMaster2008/ext/lib64/python2.6/site-packages/numpy/add_newdocs.py", line 9, in from numpy.lib import add_newdoc File "/projects/AlyssaMaster2008/ext/lib64/python2.6/site-packages/numpy/lib/__init__.py", line 4, in from type_check import * File "/projects/AlyssaMaster2008/ext/lib64/python2.6/site-packages/numpy/lib/type_check.py", line 8, in import numpy.core.numeric as _nx File "/projects/AlyssaMaster2008/ext/lib64/python2.6/site-packages/numpy/core/__init__.py", line 5, in import multiarray *ImportError: libpython2.6.so.1.0: cannot open shared object file: No such file or directory* I have been digging the web for a while, but didn't come with solution. Could you help me ? Thank you. Best Fang -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsouthey at gmail.com Thu Jan 27 11:02:38 2011 From: bsouthey at gmail.com (Bruce Southey) Date: Thu, 27 Jan 2011 10:02:38 -0600 Subject: [SciPy-User] error of compiling scipy on x86_64 linux cluster In-Reply-To: References: Message-ID: <4D41971E.7000309@gmail.com> On 01/27/2011 09:27 AM, Fang Xu wrote: > Hi all, > > I managed to compile numpy, but there're some errors of compiling > scipy. > > computer information: Linux node008 2.6.18.2-34-default #1 SMP Mon > Nov 27 11:46:27 UTC 2006 x86_64 x86_64 x86_64 GNU/Linux > > OS: openSUSE 10.2 (X86-64) > > kernel: 2.6.18.2-34-default > > python-2.6 > > error message: > > Traceback (most recent call last): > File "setup.py", line 160, in > setup_package() > File "setup.py", line 127, in setup_package > from numpy.distutils.core import setup > File > "/projects/AlyssaMaster2008/ext/lib64/python2.6/site-packages/numpy/__init__.py", > line 136, in > import add_newdocs > File > "/projects/AlyssaMaster2008/ext/lib64/python2.6/site-packages/numpy/add_newdocs.py", > line 9, in > from numpy.lib import add_newdoc > File > "/projects/AlyssaMaster2008/ext/lib64/python2.6/site-packages/numpy/lib/__init__.py", > line 4, in > from type_check import * > File > "/projects/AlyssaMaster2008/ext/lib64/python2.6/site-packages/numpy/lib/type_check.py", > line 8, in > import numpy.core.numeric as _nx > File > "/projects/AlyssaMaster2008/ext/lib64/python2.6/site-packages/numpy/core/__init__.py", > line 5, in > import multiarray > /*ImportError: libpython2.6.so.1.0: cannot open shared object file: No > such file or directory*/ > > I have been digging the web for a while, but didn't come with > solution. Could you help me ? Thank you. > > Best > Fang > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user Out of curiosity, do you have openoffice or libreoffice installed? 
I do not recall the exact problem or solution but it was related to incorrect usage of shared libraries.

If you do search for libpython such as some of the examples on my system:
$ locate libpython | grep so
/opt/libreoffice/basis3.3/program/libpython2.6.so
/opt/libreoffice/basis3.3/program/libpython2.6.so.1.0
/usr/lib64/libpython2.7.so
/usr/lib64/libpython2.7.so.1.0
/usr/lib64/libpython3.1.so
/usr/lib64/libpython3.1.so.1.0

The 2.7 and 3.1 versions are from the distro (F14) and the 2.6 version is from the one I built myself. The other is from libreoffice and it is probably the one that is being found instead of the correct python library for your system.

Obviously removing or probably updating libreoffice (3.3 final was released 2 days ago) would solve it.

Bruce
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From allxufang at gmail.com Thu Jan 27 14:40:38 2011
From: allxufang at gmail.com (Fang Xu)
Date: Thu, 27 Jan 2011 20:40:38 +0100
Subject: Re: [SciPy-User] SciPy-User Digest, Vol 89, Issue 52
In-Reply-To: References: Message-ID: 

Hi,

Thank you. I checked there is no openoffice or libreoffice, and no problem with libpython2.6.so.1.0.

Best

On Thu, Jan 27, 2011 at 7:00 PM, wrote:
> Send SciPy-User mailing list submissions to
> scipy-user at scipy.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> http://mail.scipy.org/mailman/listinfo/scipy-user
> or, via email, send a message with subject or body 'help' to
> scipy-user-request at scipy.org
>
> You can reach the person managing the list at
> scipy-user-owner at scipy.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of SciPy-User digest..."
>
>
> Today's Topics:
>
>   1. Re: error of compiling scipy on x86_64 linux cluster
>      (Bruce Southey)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Thu, 27 Jan 2011 10:02:38 -0600
> From: Bruce Southey
> Subject: Re: [SciPy-User] error of compiling scipy on x86_64 linux
>        cluster
> To: scipy-user at scipy.org
> Message-ID: <4D41971E.7000309 at gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> On 01/27/2011 09:27 AM, Fang Xu wrote:
> > Hi all,
> >
> > I managed to compile numpy, but there're some errors of compiling
> > scipy.
> >
> > computer information: Linux node008 2.6.18.2-34-default #1 SMP Mon
> > Nov 27 11:46:27 UTC 2006 x86_64 x86_64 x86_64 GNU/Linux
> >
> > OS: openSUSE 10.2 (X86-64)
> >
> > kernel: 2.6.18.2-34-default
> >
> > python-2.6
> >
> > error message:
> >
> > Traceback (most recent call last):
> >   File "setup.py", line 160, in <module>
> >     setup_package()
> >   File "setup.py", line 127, in setup_package
> >     from numpy.distutils.core import setup
> >   File "/projects/AlyssaMaster2008/ext/lib64/python2.6/site-packages/numpy/__init__.py", line 136, in <module>
> >     import add_newdocs
> >   File "/projects/AlyssaMaster2008/ext/lib64/python2.6/site-packages/numpy/add_newdocs.py", line 9, in <module>
> >     from numpy.lib import add_newdoc
> >   File "/projects/AlyssaMaster2008/ext/lib64/python2.6/site-packages/numpy/lib/__init__.py", line 4, in <module>
> >     from type_check import *
> >   File "/projects/AlyssaMaster2008/ext/lib64/python2.6/site-packages/numpy/lib/type_check.py", line 8, in <module>
> >     import numpy.core.numeric as _nx
> >   File "/projects/AlyssaMaster2008/ext/lib64/python2.6/site-packages/numpy/core/__init__.py", line 5, in <module>
> >     import multiarray
> > ImportError: libpython2.6.so.1.0: cannot open shared object file: No
> > such file or directory
> >
> > I have been digging the web for a while, but didn't come with
> > solution. Could you help me ? Thank you.
> >
> > Best
> > Fang
> >
> >
> > _______________________________________________
> > SciPy-User mailing list
> > SciPy-User at scipy.org
> > http://mail.scipy.org/mailman/listinfo/scipy-user
> Out of curiosity, do you have openoffice or libreoffice installed?
> I do not recall the exact problem or solution but it was related to
> incorrect usage of shared libraries.
>
> If you do search for libpython such as some of the examples on my system:
> $ locate libpython | grep so
> /opt/libreoffice/basis3.3/program/libpython2.6.so
> /opt/libreoffice/basis3.3/program/libpython2.6.so.1.0
> /usr/lib64/libpython2.7.so
> /usr/lib64/libpython2.7.so.1.0
> /usr/lib64/libpython3.1.so
> /usr/lib64/libpython3.1.so.1.0
>
> The 2.7 and 3.1 versions are from the distro (F14) and the 2.6 version is from
> the one I built myself. The other is from libreoffice and it is probably
> the one that is being found instead of the correct python library for
> your system.
>
> Obviously removing or probably updating libreoffice (3.3 final was
> released 2 days ago) would solve it.
>
> Bruce
>
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL:
> http://mail.scipy.org/pipermail/scipy-user/attachments/20110127/1fdb82de/attachment-0001.html
>
> ------------------------------
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
>
> End of SciPy-User Digest, Vol 89, Issue 52
> ******************************************
>

--
XU Fang
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From joonpyro at gmail.com Thu Jan 27 15:43:52 2011
From: joonpyro at gmail.com (Joon Ro)
Date: Thu, 27 Jan 2011 14:43:52 -0600
Subject: [SciPy-User] matplotlib installation
In-Reply-To: References: Message-ID: 

Hi,

Why don't you just go to yast -> software management and install them? openSUSE's powerful package manager would do those things for you. It will take care of all the dependency problems.
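If you prefer the command line, the zypper equivalent should look roughly like this -- untested, and the repository URL is from memory, so double-check it:

zypper addrepo http://download.opensuse.org/repositories/science/openSUSE_11.3/ science
zypper refresh
zypper install python-matplotlib python-matplotlib-tk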
Just add the science repository (you can do this in the software management as well) and search for matplotlib, and install python-matplotlib and python-matplotlib-tk or python-matplotlib-wx. In addition, I believe the science and education repos have pretty much all the scientific python packages you might need.

-Joon

On Wed, 26 Jan 2011 22:52:20 -0600, wrote:
> ------------------------------
>
> Message: 3
> Date: Fri, 21 Jan 2011 14:41:11 +0000
> From: "Wang, Xiao"
> Subject: [SciPy-User] matplotlib installation
> To: "'scipy-user at scipy.org'"
> Message-ID:
>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Hello,
>
> I am trying to install matplotlib in my openSuSE 11.3 system, but
> encountered a problem. I looked at your documentation at
> http://scipy.org/Installing_SciPy/Linux#head-3dbbf9abe395e974d3ec911ae56863f4af70df9b.
> I followed the section on openSUSE to access
> http://download.opensuse.org/repositories/science/openSUSE_11.3/src/ and
> downloaded python-matplotlib-1.0.0-19.2.i586.rpm. When I ran rpm, I got
> a warning.
>
> The message is displayed below:
>
> # rpm -U python-matplotlib-1.0.0-19.2.i586.rpm
> # warning: python-matplotlib-1.0.0-19.2.i586.rpm: Header V3 DSA/SHA1
> Signature, key ID 943d8bb8: NOKEY
> # error: Failed dependencies:
> Python-configobj is needed by python-matplotlib-1.0.0-19.2.i586
> Python-dateutil is needed by python-matplotlib-1.0.0-19.2.i586
> Python-tk is needed by python-matplotlib-1.0.0-19.2.i586
>
> So I downloaded python-tz-2006p-1.2.i586.rpm from the above link and
> installed it. But where can I find the rpms for python-configobj and
> python-dateutil?
>
>
> Thanks in advance,
>
> Xiao
>
>
>
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL:
> http://mail.scipy.org/pipermail/scipy-user/attachments/20110121/3bcf7523/attachment.html

From tmp50 at ukr.net Fri Jan 28 08:54:44 2011
From: tmp50 at ukr.net (Dmitrey)
Date: Fri, 28 Jan 2011 15:54:44 +0200
Subject: [SciPy-User] Extremely slow scipy.sparse lil_diags - any workaround?
Message-ID: 

The following script takes 26 sec on my computer and consumes several hundred MB of RAM.

from numpy import ones
from time import time
import scipy.sparse as SP
N = 10**6
a = ones(N)
t = time()
SP.lil_diags([a], [0], (N,N))
print('time elapsed: ' + str(time()-t))

Can I create a scipy.sparse matrix from a single diagonal in some other way?

Thank you in advance, D.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jrennie at gmail.com Fri Jan 28 09:05:05 2011
From: jrennie at gmail.com (Jason Rennie)
Date: Fri, 28 Jan 2011 09:05:05 -0500
Subject: [SciPy-User] Extremely slow scipy.sparse lil_diags - any workaround?
In-Reply-To: References: Message-ID: 

Try constructing the matrix using dok_matrix or coo_matrix, then converting to csr or csc. I've found it helpful to look at the source code as the documentation is not always complete and accurate.

Jason

2011/1/28 Dmitrey 
> The following script takes 26 sec on my computer and consumes several
> hundred MB of RAM.
>
> from numpy import ones
> from time import time
> import scipy.sparse as SP
> N = 10**6
> a = ones(N)
> t = time()
> SP.lil_diags([a], [0], (N,N))
> print('time elapsed: ' + str(time()-t))
>
> Can I create a scipy.sparse matrix from a single diagonal in some other way?
> Thank you in advance, D.
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

--
Jason Rennie
Research Scientist, ITA Software
617-714-2645
http://www.itasoftware.com/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From josef.pktd at gmail.com Fri Jan 28 09:13:21 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Fri, 28 Jan 2011 09:13:21 -0500
Subject: Re: [SciPy-User] Extremely slow scipy.sparse lil_diags - any workaround?
In-Reply-To: References: Message-ID: 

On Fri, Jan 28, 2011 at 9:05 AM, Jason Rennie wrote:
> Try constructing the matrix using dok_matrix or coo_matrix, then converting
> to csr or csc. I've found it helpful to look at the source code as the
> documentation is not always complete and accurate.

The source of spdiags shows that there is also a dia_matrix class/format. csr looks fast; dia returns immediately:

>>> n
1000000
>>> a = sparse.spdiags(np.arange(10**6), 0, n, n, format='csr')
>>> b = sparse.dia_matrix((np.arange(n), 0), shape=(n,n))

Josef

> Jason
>
> 2011/1/28 Dmitrey 
>>
>> The following script takes 26 sec on my computer and consumes several
>> hundred MB of RAM.
>>
>> from numpy import ones
>> from time import time
>> import scipy.sparse as SP
>> N = 10**6
>> a = ones(N)
>> t = time()
>> SP.lil_diags([a], [0], (N,N))
>> print('time elapsed: ' + str(time()-t))
>>
>> Can I create a scipy.sparse matrix from a single diagonal in some other way?
>> Thank you in advance, D.
>>
>> _______________________________________________
>> SciPy-User mailing list
>> SciPy-User at scipy.org
>> http://mail.scipy.org/mailman/listinfo/scipy-user
>>
>
>
>
> --
> Jason Rennie
> Research Scientist, ITA Software
> 617-714-2645
> http://www.itasoftware.com/
>
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
>

From valene.pellissier at nag.co.uk Thu Jan 27 10:30:44 2011
From: valene.pellissier at nag.co.uk (Valene Pellissier)
Date: Thu, 27 Jan 2011 15:30:44 +0000
Subject: Re: [SciPy-User] error of compiling scipy on x86_64 linux cluster
In-Reply-To: References: Message-ID: <4D418FA4.6020907@nag.co.uk>

Are you sure you installed scipy? No error message at all after installing it?

On 27/01/2011 15:27, Fang Xu wrote:
> Hi all,
>
> I managed to compile numpy, but there're some errors of compiling
> scipy.
> > computer information: Linux node008 2.6.18.2-34-default #1 SMP Mon > Nov 27 11:46:27 UTC 2006 x86_64 x86_64 x86_64 GNU/Linux > > OS: openSUSE 10.2 (X86-64) > > kernel: 2.6.18.2-34-default > > python-2.6 > > error message: > > Traceback (most recent call last): > File "setup.py", line 160, in > setup_package() > File "setup.py", line 127, in setup_package > from numpy.distutils.core import setup > File > "/projects/AlyssaMaster2008/ext/lib64/python2.6/site-packages/numpy/__init__.py", > line 136, in > import add_newdocs > File > "/projects/AlyssaMaster2008/ext/lib64/python2.6/site-packages/numpy/add_newdocs.py", > line 9, in > from numpy.lib import add_newdoc > File > "/projects/AlyssaMaster2008/ext/lib64/python2.6/site-packages/numpy/lib/__init__.py", > line 4, in > from type_check import * > File > "/projects/AlyssaMaster2008/ext/lib64/python2.6/site-packages/numpy/lib/type_check.py", > line 8, in > import numpy.core.numeric as _nx > File > "/projects/AlyssaMaster2008/ext/lib64/python2.6/site-packages/numpy/core/__init__.py", > line 5, in > import multiarray > /*ImportError: libpython2.6.so.1.0: cannot open shared object file: No > such file or directory*/ > > I have been digging the web for a while, but didn't come with > solution. Could you help me ? Thank you. > > Best > Fang > > > ________________________________________________________________________ > This e-mail has been scanned for all viruses by Star. > ________________________________________________________________________ > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user ________________________________________________________________________ The Numerical Algorithms Group Ltd is a company registered in England and Wales with company number 1249803. The registered office is: Wilkinson House, Jordan Hill Road, Oxford OX2 8DR, United Kingdom. This e-mail has been scanned for all viruses by Star. The service is powered by MessageLabs. ________________________________________________________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: From penningtonthing at gmail.com Fri Jan 28 07:42:32 2011 From: penningtonthing at gmail.com (David Michael Pennington) Date: Fri, 28 Jan 2011 06:42:32 -0600 Subject: [SciPy-User] weave throws no match for operator in seemingly good code Message-ID: I'm trying to accelerate scoreatpercentile() from scipy.stats.stats with weave, but when I put it all together, it won't compile properly... perhaps this is a C-coding error on my part, but it looks right to me. Any assistance tracking down the reason for the errors would be appreciated. Thanks in advance... 
=== weave code ===
def cpctl(a, per):
    values = np.sort(a, axis=0)
    shp = values.shape[0]
    nanval = np.nan
    retval = np.float64(0)
    expr = """
        #include <math.h>
        double idx = per/100.0*(shp-1);
        long int idx_l = (long int)idx;
        if (fmod(idx,1.0) == 0.0)
        {
            retval = values[idx_l];
        }
        else if (shp == 0)
        {
            retval = nanval;
        }
        else
        {
            retval = (values[idx_l]+(values[(idx_l+1)]))*fmod(idx,1.0);
        }
           """
    wv.inline(expr, ['retval','a','per','values','nanval','shp'], \
        type_converters=converters.blitz, \
        compiler = 'gcc')

    return retval
psyco.unbind(cpctl)
===

=== execution errors ===
[mpenning at Bucksnort data]$ python reverse.py
/home/mpenning/.python25_compiled/sc_948b3eed0cf550344f0bcefdd75e62940.cpp: In function 'PyObject* compiled_func(PyObject*, PyObject*)':
/home/mpenning/.python25_compiled/sc_948b3eed0cf550344f0bcefdd75e62940.cpp:740: error: no match for 'operator=' in 'retval = values.blitz::Array N>::operator[] [with T_indexContainer = long int, P_numtype = double, int N_rank = 1](((const long int&)((const long int*)(& idx_l))))'
/usr/lib/python2.5/site-packages/scipy/weave/scxx/object.h:179: note: candidates are: py::object& py::object::operator=(const py::object&)
/home/mpenning/.python25_compiled/sc_948b3eed0cf550344f0bcefdd75e62940.cpp:748: error: no match for 'operator+' in 'values.blitz::Array N>::operator[] [with T_indexContainer = long int, P_numtype = double, int N_rank = 1](((const long int&)((const long int*)(& idx_l)))) + values.blitz::Array::operator[] [with T_indexContainer = long int, P_numtype = double, int N_rank = 1](((const long int&)((const long int*)(&(idx_l + 1l)))))'
/home/mpenning/.python25_compiled/sc_948b3eed0cf550344f0bcefdd75e62940.cpp: In function 'PyObject* compiled_func(PyObject*, PyObject*)':
/home/mpenning/.python25_compiled/sc_948b3eed0cf550344f0bcefdd75e62940.cpp:740: error: no match for 'operator=' in 'retval = values.blitz::Array N>::operator[] [with T_indexContainer = long int, P_numtype = double, int N_rank = 1](((const long int&)((const long int*)(& idx_l))))'
/usr/lib/python2.5/site-packages/scipy/weave/scxx/object.h:179: note: candidates are: py::object& py::object::operator=(const py::object&)
/home/mpenning/.python25_compiled/sc_948b3eed0cf550344f0bcefdd75e62940.cpp:748: error: no match for 'operator+' in 'values.blitz::Array N>::operator[] [with T_indexContainer = long int, P_numtype = double, int N_rank = 1](((const long int&)((const long int*)(& idx_l)))) + values.blitz::Array::operator[] [with T_indexContainer = long int, P_numtype = double, int N_rank = 1](((const long int&)((const long int*)(&(idx_l + 1l)))))'
Traceback (most recent call last):
  File "reverse.py", line 1177, in <module>
    long_stats = stat_cache(sig,current)
  File "reverse.py", line 64, in __init__
    self.create_stat('t_l_std_diff_vel600_7200_enter',fn,sig,current,300)
  File "reverse.py", line 113, in create_stat
    self.stats[fnkey] = func(sig,current)
  File "reverse.py", line 63, in <lambda>
    fn = lambda sig,current:
pct(sig['fut_std_diff_600_median_600_vel600'][sig['fut_std_diff_600_median_600_vel600']>0],1,15,current,7200)
  File "reverse.py", line 934, in pct
    return cpctl(array.truncate(before=current-dt.timedelta(seconds=backdelta),after=current).valid().values,value)
  File "reverse.py", line 899, in cpctl
    compiler = 'gcc')
  File "/usr/lib/python2.5/site-packages/scipy/weave/inline_tools.py",
line 355, in inline
    **kw)
  File "/usr/lib/python2.5/site-packages/scipy/weave/inline_tools.py",
line 482, in compile_function
    verbose=verbose, **kw)
  File "/usr/lib/python2.5/site-packages/scipy/weave/ext_tools.py",
line 367, in compile
    verbose = verbose, **kw)
  File "/usr/lib/python2.5/site-packages/scipy/weave/build_tools.py",
line 273, in build_extension
    setup(name = module_name, ext_modules = [ext],verbose=verb)
  File "/usr/lib/python2.5/site-packages/numpy/distutils/core.py",
line 186, in setup
    return old_setup(**new_attr)
  File "/usr/lib/python2.5/distutils/core.py", line 168, in setup
    raise SystemExit, "error: " + str(msg)
scipy.weave.build_tools.CompileError: error: Command "g++ -pthread
-fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -fPIC
-I/usr/lib/python2.5/site-packages/scipy/weave
-I/usr/lib/python2.5/site-packages/scipy/weave/scxx
-I/usr/lib/python2.5/site-packages/scipy/weave/blitz
-I/usr/lib/python2.5/site-packages/numpy/core/include
-I/usr/include/python2.5 -c
/home/mpenning/.python25_compiled/sc_948b3eed0cf550344f0bcefdd75e62940.cpp
-o /tmp/mpenning/python25_intermediate/compiler_0308e7b8c023f1021702bfe033c392a4/home/mpenning/.python25_compiled/sc_948b3eed0cf550344f0bcefdd75e62940.o"
failed with exit status 1
[mpenning at Bucksnort data]$
===

From nicolo.perino at usi.ch  Fri Jan 28 07:51:43 2011
From: nicolo.perino at usi.ch (Nicolò Perino)
Date: Fri, 28 Jan 2011 13:51:43 +0100
Subject: [SciPy-User] Redundancy in SciPy
Message-ID: <97A51B7D-A5BC-472F-ABFA-052F3CC7413A@usi.ch>

Hi,

do you know how redundant SciPy (or numpy) is? For example, are there
many ways to initialize the same object, or to compute some operations
in different ways (using different method calls)?

Thanks.

Nicolò Perino
______
PhD Student @ USI - Faculty of Informatics
http://www.people.usi.ch/perinon/

From josef.pktd at gmail.com  Fri Jan 28 12:40:52 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Fri, 28 Jan 2011 12:40:52 -0500
Subject: [SciPy-User] weave throws no match for operator in seemingly good code
In-Reply-To:
References:
Message-ID:

On Fri, Jan 28, 2011 at 7:42 AM, David Michael Pennington wrote:
> I'm trying to accelerate scoreatpercentile() from scipy.stats.stats
> with weave, but when I put it all together, it won't compile
> properly; perhaps this is a C-coding error on my part, but it looks
> right to me. Any assistance tracking down the reason for the errors
> would be appreciated.
>
> Thanks in advance...

I don't know about weave.
Are you sure there is much to speed up this way? The expensive part
is the sort.
It throws an error when the percentiles are an array or a list, but it
looks like it could be easily vectorized, which would make it
considerably faster than repeated calls:

stats.scoreatpercentile(np.arange(10), np.array([25, 50, 75]))
stats.scoreatpercentile(np.arange(10), [25, 50, 75])

or even

stats.scoreatpercentile(np.random.randn(20,4), [25, 50, 75], axis=0)

making it closer to stats.mstats.scoreatpercentiles

Josef

> === weave code ===
> def cpctl(a, per):
>     values = np.sort(a, axis=0)
[clip]
From kwgoodman at gmail.com  Fri Jan 28 13:09:32 2011
From: kwgoodman at gmail.com (Keith Goodman)
Date: Fri, 28 Jan 2011 10:09:32 -0800
Subject: [SciPy-User] weave throws no match for operator in seemingly good code
In-Reply-To:
References:
Message-ID:

On Fri, Jan 28, 2011 at 9:40 AM, josef.pktd at gmail.com wrote:
> On Fri, Jan 28, 2011 at 7:42 AM, David Michael Pennington wrote:
>> I'm trying to accelerate scoreatpercentile() from scipy.stats.stats
>> with weave [clip]
>
> I don't know about weave.
> Are you sure there is much to speed up this way? The expensive part
> is the sort.

Yeah, the sort is the expensive part. A partial sort would speed
things up if you only want the score at one percentile. I use a
partial sort in Bottleneck (http://pypi.python.org/pypi/Bottleneck)
for median and nanmedian, which are probably not far off, code-wise,
from scoreatpercentile.
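
A minimal pure-NumPy sketch of the two suggestions in this thread --
vectorizing over several percentiles, and using a partial sort for a
single one. The function name is illustrative rather than scipy API, and
np.partition only exists in NumPy releases newer than those discussed
here. (As for the weave errors themselves, they appear to come from
assigning a plain double into the py::object `retval`; weave's own idiom
is to assign to the special variable `return_val`, and blitz arrays are
indexed with parentheses, e.g. values(idx_l).)

import numpy as np

def scoreatpercentile_vec(a, per):
    # `per` may be a scalar or a 1d sequence of percentiles in [0, 100];
    # a sketch only, not the scipy.stats implementation
    values = np.sort(a, axis=0)
    n = values.shape[0]
    if n == 0:
        return np.nan
    idx = np.asarray(per, dtype=float) / 100.0 * (n - 1)
    lo = np.floor(idx).astype(int)
    hi = np.ceil(idx).astype(int)
    frac = idx - lo
    # linear interpolation between the two bracketing order statistics
    return values[lo] * (1.0 - frac) + values[hi] * frac

# For a single percentile, np.partition(a, [lo, hi]) places just the two
# needed order statistics without sorting everything, which is the
# partial-sort speedup described above.

For example, scoreatpercentile_vec(np.arange(10), [25, 50, 75]) gives
array([2.25, 4.5, 6.75]), matching three separate scoreatpercentile
calls.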
From kwgoodman at gmail.com  Fri Jan 28 13:39:21 2011
From: kwgoodman at gmail.com (Keith Goodman)
Date: Fri, 28 Jan 2011 10:39:21 -0800
Subject: [SciPy-User] weave throws no match for operator in seemingly good code
In-Reply-To:
References:
Message-ID:

On Fri, Jan 28, 2011 at 10:09 AM, Keith Goodman wrote:
> On Fri, Jan 28, 2011 at 9:40 AM, josef.pktd at gmail.com wrote:
>> On Fri, Jan 28, 2011 at 7:42 AM, David Michael Pennington wrote:
>>> I'm trying to accelerate scoreatpercentile() from scipy.stats.stats
>>> with weave [clip]
>>
>> I don't know about weave.
>> Are you sure there is much to speed up this way? The expensive part
>> is the sort.
>
> Yeah, the sort is the expensive part. A partial sort would speed
> things up if you only want the score at one percentile. I use a
> partial sort in Bottleneck (http://pypi.python.org/pypi/Bottleneck)
> for median and nanmedian, which are probably not far off, code-wise,
> from scoreatpercentile.

To give an idea of the possible speed up:

>> from scipy.stats import scoreatpercentile
>> import bottleneck as bn
>>
>> a = np.random.rand(10000)
>> scoreatpercentile(a, 50)
   0.4991995135677661
>> bn.median(a)
   0.4991995135677661
>>
>> timeit scoreatpercentile(a, 50)
1000 loops, best of 3: 646 us per loop
>> timeit bn.median(a)
10000 loops, best of 3: 26.6 us per loop

From ckkart at hoc.net  Fri Jan 28 14:08:37 2011
From: ckkart at hoc.net (Christian K.)
Date: Fri, 28 Jan 2011 20:08:37 +0100
Subject: [SciPy-User] Scipy.Optimize on GPUs?
In-Reply-To:
References:
Message-ID:

On 18.01.11 17:31, Patrick Holvey wrote:
> Good morning everyone.
>
> I'm currently calling scipy.optimize.fmin_cg() to minimize the energy
> of a crystal system by changing the xyz coordinates of atoms in the
> system. [clip]

I worked on similar problems, and there it turned out that fmin_tnc was
by far the best and fastest solver, so maybe it is worth a try. I did
have analytic derivatives, though.

Regards,

Christian

From almar.klein at gmail.com  Fri Jan 28 16:35:34 2011
From: almar.klein at gmail.com (Almar Klein)
Date: Fri, 28 Jan 2011 22:35:34 +0100
Subject: [SciPy-User] Scipy.Optimize on GPUs?
In-Reply-To:
References:
Message-ID:

On 18 January 2011 17:31, Patrick Holvey wrote:

> Good morning everyone.
>
> I'm currently calling scipy.optimize.fmin_cg() to minimize the energy
> of a crystal system by changing the xyz coordinates of atoms in the
> system. I'm working on deriving the fprime function, but currently,
> the gradient is being estimated by fmin_cg. I have access to some
> Nvidia Teslas but not much experience running Python on GPUs. I was
> wondering if there was an already GPU-enabled optimization algorithm
> somewhere in scipy or some other package. Does anyone know of one?

For parallelization on the GPU you could take a look at pyOpenCL. I'm
not sure whether optimization is available out of the box, though...

Almar

From josef.pktd at gmail.com  Sat Jan 29 10:32:57 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sat, 29 Jan 2011 10:32:57 -0500
Subject: [SciPy-User] quantizing a distribution from a cdf in nd
Message-ID:

I'm starting to get slowly into multivariate distributions. One
functionality I need is to quantize the pdf on a regular grid when I
have the cdf given.
1d is easy, 2d I figured out myself.

Question: Is there code for the more-than-two-dimensional case? Or does
anyone know a formula or a reference? It's a kind of n-d version of
np.diff.

(In case anyone is interested, it is going toward goodness-of-fit tests
and maybe estimation of multivariate distributions. For the empirical
part, I expect that np.histogramdd will be handy.)

below is the recipe for 1d and 2d

1d is easy with np.diff

>>> from scipy import stats
>>> stats.beta.cdf([0, 0.25, 0.5, 0.75, 1], 10, 10)
array([ 0.        ,  0.00890328,  0.5       ,  0.99109672,  1.        ])
>>> np.diff(stats.beta.cdf([0, 0.25, 0.5, 0.75, 1], 10, 10))
array([ 0.00890328,  0.49109672,  0.49109672,  0.00890328])

I wrote a first version of a function for 2d, functions in attachment

>>> unif_2d = lambda x,y: x*y
>>> prob_bv_rectangle([0,0], [1,0.5], unif_2d)
0.5
>>> prob_bv_rectangle([0,0], [0.5,0.5], unif_2d)
0.25

>>> prob_quantize_cdf2(np.linspace(0,1,6), np.linspace(0,1,5), unif_2d)
array([[ 0.05,  0.05,  0.05,  0.05],
       [ 0.05,  0.05,  0.05,  0.05],
       [ 0.05,  0.05,  0.05,  0.05],
       [ 0.05,  0.05,  0.05,  0.05],
       [ 0.05,  0.05,  0.05,  0.05]])
>>> prob_quantize_cdf(np.linspace(0,1,6), np.linspace(0,1,5), unif_2d)
array([[ 0.05,  0.05,  0.05,  0.05],
       [ 0.05,  0.05,  0.05,  0.05],
       [ 0.05,  0.05,  0.05,  0.05],
       [ 0.05,  0.05,  0.05,  0.05],
       [ 0.05,  0.05,  0.05,  0.05]])
>>> prob_quantize_cdf([0, 0.25, 0.75, 1], np.linspace(0,1,5), unif_2d)
array([[ 0.0625,  0.0625,  0.0625,  0.0625],
       [ 0.125 ,  0.125 ,  0.125 ,  0.125 ],
       [ 0.0625,  0.0625,  0.0625,  0.0625]])

check:

>>> prob_quantize_cdf([0, 0.25, 0.75, 1], np.linspace(0,1,5), unif_2d).sum()
1.0
>>> 0.5/4
0.125
>>> 0.25/4
0.0625

Josef

-------------- next part --------------
'''Quantizing a continuous distribution in 2d

Author: josef-pktd
'''

import numpy as np

def prob_bv_rectangle(lower, upper, cdf):
    '''helper function for probability of a rectangle in a bivariate
    distribution

    how does this generalize to more than two variates? (a sketch
    follows below)
    '''
    probuu = cdf(*upper)
    probul = cdf(upper[0], lower[1])
    problu = cdf(lower[0], upper[1])
    probll = cdf(*lower)
    return probuu - probul - problu + probll
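
# A sketch of the n-d generalization asked about in the docstring above.
# The name prob_nd_rectangle and the code are illustrative additions, not
# part of the original attachment, and only d=2 is checked against
# prob_bv_rectangle. By inclusion-exclusion, the probability of a
# hyperrectangle is the sum over its 2**d corners of (-1)**k * cdf(corner),
# where k is the number of coordinates taken from `lower`.
def prob_nd_rectangle(lower, upper, cdf):
    import itertools
    lower = np.asarray(lower)
    upper = np.asarray(upper)
    d = len(lower)
    prob = 0.0
    for mask in itertools.product((0, 1), repeat=d):
        # pick the lower coordinate where mask is 1, the upper one where 0
        corner = np.where(np.array(mask, dtype=bool), lower, upper)
        prob += (-1) ** sum(mask) * cdf(*corner)
    return prob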

def prob_quantize_cdf(binsx, binsy, cdf):
    '''quantize a continuous bivariate distribution given by a cdf

    Parameters
    ----------
    binsx : array_like, 1d
        bin edges for the first variable
    binsy : array_like, 1d
        bin edges for the second variable
    cdf : callable
        bivariate cdf, called as cdf(x, y)

    '''
    binsx = np.asarray(binsx)
    binsy = np.asarray(binsy)
    nx = len(binsx) - 1
    ny = len(binsy) - 1
    probs = np.nan * np.ones((nx, ny))   #np.empty(nx,ny)

    cdf_values = cdf(binsx[:,None], binsy)
    cdf_func = lambda x, y: cdf_values[x,y]
    for xind in range(1, nx+1):
        for yind in range(1, ny+1):
            upper = (xind, yind)
            lower = (xind-1, yind-1)
            #print upper,lower,
            probs[xind-1,yind-1] = prob_bv_rectangle(lower, upper, cdf_func)

    assert not np.isnan(probs).any()
    return probs


if __name__ == '__main__':
    from numpy.testing import assert_almost_equal

    unif_2d = lambda x,y: x*y
    assert_almost_equal(prob_bv_rectangle([0,0], [1,0.5], unif_2d), 0.5, 14)
    assert_almost_equal(prob_bv_rectangle([0,0], [0.5,0.5], unif_2d), 0.25, 14)

    arr1b = np.array([[ 0.05,  0.05,  0.05,  0.05],
                      [ 0.05,  0.05,  0.05,  0.05],
                      [ 0.05,  0.05,  0.05,  0.05],
                      [ 0.05,  0.05,  0.05,  0.05],
                      [ 0.05,  0.05,  0.05,  0.05]])

    arr1a = prob_quantize_cdf(np.linspace(0,1,6), np.linspace(0,1,5), unif_2d)
    assert_almost_equal(arr1a, arr1b, 14)

    arr2b = np.array([[ 0.25],
                      [ 0.25],
                      [ 0.25],
                      [ 0.25]])

    arr2a = prob_quantize_cdf(np.linspace(0,1,5), np.linspace(0,1,2), unif_2d)
    assert_almost_equal(arr2a, arr2b, 14)

    arr3b = np.array([[ 0.25,  0.25,  0.25,  0.25]])

    arr3a = prob_quantize_cdf(np.linspace(0,1,2), np.linspace(0,1,5), unif_2d)
    assert_almost_equal(arr3a, arr3b, 14)

From thoeger at fys.ku.dk  Sun Jan 30 06:33:18 2011
From: thoeger at fys.ku.dk (Thøger Emil Juul Thorsen)
Date: Sun, 30 Jan 2011 12:33:18 +0100
Subject: [SciPy-User] Voigt function?
Message-ID: <1296387198.2381.10.camel@zetkin>

Hello Scipy-gurus;

I was wondering if there is a SciPy Voigt profile function out there,
similar to the one that exists for IDL, which takes in the values of A
and U?

It would be immensely helpful to be able to quickly and flexibly plot
the function for different values, to explain e.g. the curve of growth
etc., without having to go to a university computer running IDL.

Cheers;

Emil

From pav at iki.fi  Sun Jan 30 07:56:15 2011
From: pav at iki.fi (Pauli Virtanen)
Date: Sun, 30 Jan 2011 12:56:15 +0000 (UTC)
Subject: [SciPy-User] Voigt function?
References: <1296387198.2381.10.camel@zetkin>
Message-ID:

On Sun, 30 Jan 2011 12:33:18 +0100, Thøger Emil Juul Thorsen wrote:
> I was wondering if there is a SciPy Voigt profile function out there,
> similar to the one that exists for IDL, which takes in the values of A
> and U?

Wikipedia tells us that the Voigt profile and functions are related to
the complex error function w(z) = exp(-z^2)(1 - erf(-i z)). Scipy has an
implementation of erf() that supports complex values. However, be sure
to use Scipy >= 0.8.0, as the erf() implementation in earlier versions
had bugs that made results from erf() inaccurate in parts of the
complex plane.
--
Pauli Virtanen

From gael.varoquaux at normalesup.org  Sun Jan 30 09:29:36 2011
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Sun, 30 Jan 2011 15:29:36 +0100
Subject: [SciPy-User] [ANN] FEMTEC: Track on open source scientific software
Message-ID: <20110130142936.GI12659@phare.normalesup.org>

Hi list,

This is just a note that an extra track at FEMTEC, a conference on
computational methods in engineering and science, is open for
open-source scientific software. The organisers have a taste for
Python, so if you want to submit a paper on numerical methods with
Python, this is an excellent venue. Abstract submission is open until
the end of February. To submit, you need to create an account and edit
your profile.

Gael

________________________________________________________________________________

The 3rd International Conference on Finite Element Methods in Engineering
and Science (FEMTEC 2011, http://hpfem.org/events/femtec-2011/) will have
a track on open-source projects and Python in scientific computing.

FEMTEC 2011 is co-organized by the University of Nevada (Reno), Desert
Research Institute (Reno), Idaho National Laboratory (Idaho Falls, Idaho),
and U.S.
Army Engineer Research and Development Center (Vicksburg, Mississippi).
The objective of the meeting is to strengthen the interaction between
researchers who develop new computational methods, and scientists and
engineers from various fields who employ numerical methods in their
research. Specific focus areas of FEMTEC 2011 include, but are not
limited to, the following:

  * Computational methods in hydrology, atmospheric modeling, and
    other earth sciences.
  * Computational methods in nuclear, mechanical, civil, electrical,
    and other engineering fields.
  * Mesh generation and scientific visualization.
  * Open-source projects and Python in scientific computing.

Part of the conference will be a software afternoon featuring open-source
projects of participants.

Proceedings

Proceedings of FEMTEC 2011 will appear as a special issue of the Journal
of Computational and Applied Mathematics (2008 SCI impact factor 1.292),
and additional high-impact international journals as needed.

From anass.belcaid at gmail.com  Sun Jan 30 10:09:04 2011
From: anass.belcaid at gmail.com (Anass)
Date: Sun, 30 Jan 2011 15:09:04 +0000
Subject: [SciPy-User] [ANN] FEMTEC: Track on open source scientific software
In-Reply-To: <20110130142936.GI12659@phare.normalesup.org> (Gael
 Varoquaux's message of "Sun, 30 Jan 2011 15:29:36 +0100")
References: <20110130142936.GI12659@phare.normalesup.org>
Message-ID: <87vd1677jj.fsf@gmail.com>

Gael Varoquaux writes:

> Hi list,
>
> This is just a note that an extra track at FEMTEC, a conference on
> computational methods in engineering and science, is open for
> open-source scientific software. [clip]

Hi,

Professor, thank you a lot for this information, but I was wondering
whether it is possible to attend the event online, for the students who
cannot travel.

From gael.varoquaux at normalesup.org  Sun Jan 30 11:17:56 2011
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Sun, 30 Jan 2011 17:17:56 +0100
Subject: [SciPy-User] [ANN] FEMTEC: Track on open source scientific software
In-Reply-To: <87vd1677jj.fsf@gmail.com>
References: <20110130142936.GI12659@phare.normalesup.org>
 <87vd1677jj.fsf@gmail.com>
Message-ID: <20110130161756.GC14858@phare.normalesup.org>

On Sun, Jan 30, 2011 at 03:09:04PM +0000, Anass wrote:
> I was wondering whether it is possible to attend the event online,
> for the students who cannot travel.

You should ask the local organizers, femtec2011(a_t)unr(d_o_t)edu but I
suspect not, as the organization committee is small.

Cheers,

Gael

From ckkart at hoc.net  Sun Jan 30 11:29:14 2011
From: ckkart at hoc.net (Christian K.)
Date: Sun, 30 Jan 2011 17:29:14 +0100
Subject: [SciPy-User] Voigt function?
In-Reply-To: <1296387198.2381.10.camel@zetkin>
References: <1296387198.2381.10.camel@zetkin>
Message-ID:

On 30.01.11 12:33, Thøger Emil Juul Thorsen wrote:
> Hello Scipy-gurus;
>
> I was wondering if there is a SciPy Voigt profile function out there,
> similar to the one that exists for IDL, which takes in the values of A
> and U? [clip]

I have been using scipy.special.wofz for quite a while:

import numpy as N
from scipy import special

def voigt(x, amp, pos, fwhm, shape):
    """\
    voigt profile

    V(x,sig,gam) = Re(w(z))/(sig*sqrt(2*pi))
    z = (x+i*gam)/(sig*sqrt(2))
    """
    tmp = 1/special.wofz(N.zeros((len(x))) \
          +1j*N.sqrt(N.log(2.0))*shape).real
    tmp = tmp*amp* \
          special.wofz(2*N.sqrt(N.log(2.0))*(x-pos)/fwhm+1j* \
          N.sqrt(N.log(2.0))*shape).real
    return tmp

Regards,

Christian

From pav at iki.fi  Sun Jan 30 12:10:06 2011
From: pav at iki.fi (Pauli Virtanen)
Date: Sun, 30 Jan 2011 17:10:06 +0000 (UTC)
Subject: [SciPy-User] Voigt function?
References: <1296387198.2381.10.camel@zetkin>
Message-ID:

On Sun, 30 Jan 2011 17:29:14 +0100, Christian K.
wrote:
[clip]
> I have been using scipy.special.wofz for quite a while:
[clip]

Neat, I didn't recall that Scipy had an implementation of that.
Incidentally, its implementation is completely separate from that of
erf(), and it appears to be solid.

--
Pauli Virtanen

From ralf.gommers at googlemail.com  Sun Jan 30 20:50:33 2011
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Mon, 31 Jan 2011 09:50:33 +0800
Subject: [SciPy-User] ANN: SciPy 0.9.0 release candidate 2
Message-ID:

Hi,

I am pleased to announce the availability of the second release
candidate of SciPy 0.9.0. This will be the first SciPy release to
include support for Python 3 (all modules except scipy.weave), as well
as for Python 2.7.

Due to the Sourceforge outage I am not able to put binaries on the
normal download site right now; that will probably only happen in a
week or so. If you want to try the RC now, please build from svn and
report any issues.

Changes since release candidate 1:
- fixes for build problems with MSVC + MKL (#1210, #1376)
- fix pilutil test to work with the numpy master branch
- fix constants.codata to be backwards-compatible

Enjoy,
Ralf

From jgarc063 at fiu.edu  Mon Jan 31 17:08:04 2011
From: jgarc063 at fiu.edu (Jorge Garcia)
Date: Mon, 31 Jan 2011 17:08:04 -0500
Subject: [SciPy-User] Installing Scipy from source
Message-ID:

I know it's probably a stupid question, but I can't get the scipy
module to install properly on my Ubuntu 10.04 machine. I already have
Numpy working, so I don't think that's the problem. I'm not going to be
doing anything too crazy, so I don't really need ATLAS or BLAS. Here's
the output that I get when I run the setup.py file. I'm using Python
3.1.2, and the modules work on my Windows machine, so I don't think
it's a Python 3 problem.
RefactoringTool: /home/cadsoft/src/scipy-0.9.0rc1/build/py3k/scipy/sparse/linalg/tests/test_interface.py
RefactoringTool: /home/cadsoft/src/scipy-0.9.0rc1/build/py3k/scipy/sparse/benchmarks/bench_sparse.py
[clip -- several hundred similar "RefactoringTool: <file>" lines, one per
source file converted by 2to3]
RefactoringTool: Warnings/messages while refactoring:
RefactoringTool: ### In file /home/cadsoft/src/scipy-0.9.0rc1/build/py3k/scipy/weave/c_spec.py ###
RefactoringTool: Line 380: absolute and local imports together
RefactoringTool: ### In file /home/cadsoft/src/scipy-0.9.0rc1/build/py3k/scipy/weave/blitz_tools.py ###
RefactoringTool: Line 36: could not convert: raise "inputs failed to pass size check."
RefactoringTool: Python 3 does not support string exceptions
RefactoringTool: ### In file /home/cadsoft/src/scipy-0.9.0rc1/build/py3k/scipy/weave/ast_tools.py ###
RefactoringTool: Line 183: cannot convert map(None, ...) with multiple arguments because map() now truncates to the shortest sequence
RefactoringTool: ### In file /home/cadsoft/src/scipy-0.9.0rc1/build/py3k/scipy/weave/bytecodecompiler.py ###
RefactoringTool: Line 150: cannot convert map(None, ...) with multiple arguments because map() now truncates to the shortest sequence
RefactoringTool: Line 242: could not convert: raise "Executing code failed."
RefactoringTool: Python 3 does not support string exceptions
RefactoringTool: Line 1157: cannot convert map(None, ...) with multiple arguments because map() now truncates to the shortest sequence
RefactoringTool: ### In file /home/cadsoft/src/scipy-0.9.0rc1/build/py3k/scipy/interpolate/fitpack.py ###
RefactoringTool: Line 1173: cannot convert map(None, ...) with multiple arguments because map() now truncates to the shortest sequence
RefactoringTool: Skipping implicit fixer: buffer
RefactoringTool: Skipping implicit fixer: idioms
RefactoringTool: Skipping implicit fixer: set_literal
RefactoringTool: Skipping implicit fixer: ws_comma
[clip -- lists of refactored setup.py files and modified files]
blas_opt_info:
blas_mkl_info:
  libraries mkl,vml,guide not found in /usr/local/lib
  libraries mkl,vml,guide not found in /usr/lib
  NOT AVAILABLE

atlas_blas_threads_info:
Setting PTATLAS=ATLAS
  libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib
  libraries ptf77blas,ptcblas,atlas not found in /usr/lib/atlas
  libraries ptf77blas,ptcblas,atlas not found in /usr/lib/sse2
  libraries ptf77blas,ptcblas,atlas not found in /usr/lib
  NOT AVAILABLE

atlas_blas_info:
  libraries f77blas,cblas,atlas not found in /usr/local/lib
  libraries f77blas,cblas,atlas not found in /usr/lib/atlas
  libraries f77blas,cblas,atlas not found in /usr/lib/sse2
  libraries f77blas,cblas,atlas not found in /usr/lib
  NOT AVAILABLE

Warning: No configuration returned, assuming unavailable.
/usr/local/lib/python3.1/dist-packages/numpy/distutils/system_info.py:1399:
UserWarning:
    Atlas (http://math-atlas.sourceforge.net/) libraries not found.
    Directories to search for the libraries can be specified in the
    numpy/distutils/site.cfg file (section [atlas]) or by setting
    the ATLAS environment variable.
  warnings.warn(AtlasNotFoundError.__doc__)
blas_info:
  libraries blas not found in /usr/local/lib
  libraries blas not found in /usr/lib
  NOT AVAILABLE

/usr/local/lib/python3.1/dist-packages/numpy/distutils/system_info.py:1408:
UserWarning:
    Blas (http://www.netlib.org/blas/) libraries not found.
    Directories to search for the libraries can be specified in the
    numpy/distutils/site.cfg file (section [blas]) or by setting
    the BLAS environment variable.
  warnings.warn(BlasNotFoundError.__doc__)
blas_src_info:
  NOT AVAILABLE

/usr/local/lib/python3.1/dist-packages/numpy/distutils/system_info.py:1411:
UserWarning:
    Blas (http://www.netlib.org/blas/) sources not found.
    Directories to search for the sources can be specified in the
    numpy/distutils/site.cfg file (section [blas_src]) or by setting
    the BLAS_SRC environment variable.
  warnings.warn(BlasSrcNotFoundError.__doc__)
Traceback (most recent call last):
  File "setup.py", line 180, in <module>
    setup_package()
  File "setup.py", line 172, in setup_package
    configuration=configuration )
  File "/usr/local/lib/python3.1/dist-packages/numpy/distutils/core.py",
line 152, in setup
    config = configuration()
  File "setup.py", line 121, in configuration
    config.add_subpackage('scipy')
  File "/usr/local/lib/python3.1/dist-packages/numpy/distutils/misc_util.py",
line 972, in add_subpackage
    caller_level = 2)
  File "/usr/local/lib/python3.1/dist-packages/numpy/distutils/misc_util.py",
line 941, in get_subpackage
    caller_level = caller_level + 1)
  File "/usr/local/lib/python3.1/dist-packages/numpy/distutils/misc_util.py",
line 878, in _get_configuration_from_setup_py
    config = setup_module.configuration(*args)
  File "scipy/setup.py", line 8, in configuration
    config.add_subpackage('integrate')
  File "/usr/local/lib/python3.1/dist-packages/numpy/distutils/misc_util.py",
line 972, in add_subpackage
    caller_level = 2)
  File "/usr/local/lib/python3.1/dist-packages/numpy/distutils/misc_util.py",
line 941, in get_subpackage
    caller_level = caller_level + 1)
  File "/usr/local/lib/python3.1/dist-packages/numpy/distutils/misc_util.py",
line 878, in _get_configuration_from_setup_py
    config = setup_module.configuration(*args)
  File "scipy/integrate/setup.py", line 10, in configuration
    blas_opt = get_info('blas_opt',notfound_action=2)
  File "/usr/local/lib/python3.1/dist-packages/numpy/distutils/system_info.py",
line 310, in get_info
    return cl().get_info(notfound_action)
  File "/usr/local/lib/python3.1/dist-packages/numpy/distutils/system_info.py",
line 461, in get_info
    raise self.notfounderror(self.notfounderror.__doc__)
numpy.distutils.system_info.BlasNotFoundError:
    Blas (http://www.netlib.org/blas/) libraries not found.
    Directories to search for the libraries can be specified in the
    numpy/distutils/site.cfg file (section [blas]) or by setting
    the BLAS environment variable.

From pav at iki.fi  Mon Jan 31 17:24:31 2011
From: pav at iki.fi (Pauli Virtanen)
Date: Mon, 31 Jan 2011 22:24:31 +0000 (UTC)
Subject: [SciPy-User] Installing Scipy from source
References:
Message-ID:

On Mon, 31 Jan 2011 17:08:04 -0500, Jorge Garcia wrote:
> I know it's probably a stupid question, but I can't get the scipy
> module to install properly on my Ubuntu 10.04 machine. I already have
> Numpy working, so I don't think that's the problem. I'm not going to
> be doing anything too crazy, so I don't really need ATLAS or BLAS.

Scipy requires having BLAS and LAPACK installed. Install the
corresponding *-dev packages from Ubuntu's repositories.

--
Pauli Virtanen

From jgarc063 at fiu.edu  Mon Jan 31 17:39:02 2011
From: jgarc063 at fiu.edu (Jorge Garcia)
Date: Mon, 31 Jan 2011 17:39:02 -0500
Subject: [SciPy-User] Installing Scipy from source
In-Reply-To:
References:
Message-ID:

I tried using apt-get to get the blas package, but it couldn't find it.
How would I perform one of the operations indicated below?

    Blas (http://www.netlib.org/blas/) sources not found.
    Directories to search for the sources can be specified in the
    numpy/distutils/site.cfg file (section [blas_src]) or by setting
    the BLAS_SRC environment variable.
  warnings.warn(BlasSrcNotFoundError.__doc__)

Thanks, Pauli

On Mon, Jan 31, 2011 at 5:24 PM, Pauli Virtanen wrote:
> On Mon, 31 Jan 2011 17:08:04 -0500, Jorge Garcia wrote:
> > I know it's probably a stupid question, but I can't get the scipy
> > module to install properly on my Ubuntu 10.04 machine. I already
> > have Numpy working, so I don't think that's the problem. I'm not
> > going to be doing anything too crazy, so I don't really need ATLAS
> > or BLAS.
>
> Scipy requires having BLAS and LAPACK installed. Install the
> corresponding *-dev packages from Ubuntu's repositories.
>
> --
> Pauli Virtanen
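
For reference, a minimal sketch of the step Pauli's reply points to:
`apt-get install blas` fails because the reference BLAS/LAPACK
development packages have different names. The package names below are
assumed to be the stock Ubuntu ones; they are not quoted anywhere in
this thread:

    # assumed Ubuntu package names; gfortran is needed to build
    # scipy's Fortran extensions
    sudo apt-get install gfortran libblas-dev liblapack-dev

With those installed, re-running the setup.py build should let
numpy.distutils find BLAS/LAPACK without setting the BLAS or BLAS_SRC
environment variables or editing site.cfg.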